How to handle exceptions in threading with a queue in Python?
Question: These are never printed: "Exception in threadfuncqueue handled by threadfuncqueue",
"Exception in threadfuncqueue handled by main thread" and "thread test with
queue passed". The program never quits!
from threading import Thread
from Queue import Queue
import time

class ImRaiseError():
    def __init__(self):
        time.sleep(1)
        raise Exception(self.__class__.__name__)

# place for paste worked code example from below

print "begin thread test with queue"

def threadfuncqueue(q):
    print "\n"+str(q.get())
    while not q.empty():
        try:
            testthread = ImRaiseError()
        finally:
            print "Exception in threadfuncqueue handled by threadfuncqueue"

q = Queue()
items = [1,2]
for i in range(len(items)):
    t = Thread(target=threadfuncqueue, args=(q,))
    if(1 == i):
        t.daemon = False
    else:
        t.daemon = True
    t.start()

for item in items:
    q.put("threadfuncqueue"+str(item))

try:
    q.join() # block until all tasks are done
finally:
    print "Exception in threadfuncqueue handled by main thread"

print "thread test with queue passed"
quit()
How do I handle this exception?
Example of working code, but without a queue:
print "=========== procedure style test"
def threadfunc(q):
print "\n"+str(q)
while True:
try:
testthread = ImRaiseError()
finally:
print str(q)+" handled by process"
try:
threadfunc('testproc')
except Exception as e:
print "error!",e
print "procedure style test ==========="
print "=========== simple thread tests"
testthread = Thread(target=threadfunc,args=('testthread',))
testthread.start()
try:
testthread.join()
finally:
print "Exeptoin in testthread handled by main thread"
testthread1 = Thread(target=threadfunc,args=('testthread1',))
testthread1.start()
try:
testthread1.join()
finally:
print "Exeptoin in testthread1 handled by main thread"
print "simple thread tests ==========="
Answer: # Short Answer
You're putting things in a queue and retrieving them, but if you're going to
join a queue, you need to mark tasks as done as you pull them out of the queue
and process them. [According to the
docs](https://docs.python.org/2/library/queue.html#Queue.Queue.task_done),
every time you enqueue an item, a counter is incremented, and you need to call
`q.task_done()` to decrement that counter. `q.join()` will block until that
counter reaches zero. Add this immediately after your `q.get()` call to
prevent main from being blocked:
q.task_done()
Also, I find it odd that you're checking `q` for emptiness _after_ you've
retrieved something from it. I'm not sure exactly what you're trying to
achieve with that so I don't have any recommendations for you, but I would
suggest reconsidering your design in that area.
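For concreteness, here is a minimal sketch (not part of the original answer) of a worker that both handles the exception and marks each task done so that `q.join()` can return; the loop shape is an assumption on my part, and the Python 2 print syntax follows the question's code:

def threadfuncqueue(q):
    while not q.empty():
        item = q.get()
        try:
            testthread = ImRaiseError()  # raises on purpose
        except Exception:
            # actually catch the exception instead of only running cleanup
            print "Exception in threadfuncqueue handled by threadfuncqueue"
        finally:
            q.task_done()  # decrement the queue's unfinished-task counter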
# Other Thoughts
Once you get this code working you should take it over to [Code
Review](http://codereview.stackexchange.com/) because it is a bit of a mess.
Here are a few thoughts for you:
## Exception Handling
You're not actually "handling" the exception in `threadfuncqueue(q)`. All the
`finally` statement does is allow you to execute cleanup code in the event of
an exception. It does not actually catch and handle the exception. The
exception will still travel up the call stack. Consider this example, test.py:
try:
    raise Exception
finally:
    print("Yup!")
print("Nope!")
Output:
> Yup!
> Traceback (most recent call last):
> File "test.py", line 2, in
> raise Exception
> Exception
Notice that "Yup!" got printed while "Nope!" didn't. The code in the `finally`
block was executed, but that didn't stop the exception from propagating up the
stack and halting the interpreter. You need the `except` statement for that:
try:
    raise Exception
except Exception: # only catch the exceptions you expect
    print("Yup!")
print("Nope!")
Output:
> Yup!
> Nope!
This time both are printed, because we caught and handled the exception.
## Exception Raising
Your current method of raising the exception in your thread is needlessly
complicated. Instead of creating the whole `ImRaiseError` class, just raise
the exception you want with a string:
raise Exception('Whatever error message I want')
If you find yourself manually manipulating [special double-underscore
names](http://stackoverflow.com/questions/12537904/python-private-name-
mangling-and-instance-vs-class-attributes) (like `self.__class__.__name__`),
you're usually doing something wrong.
## Extra Parentheses
Using parentheses around conditional expressions is generally frowned upon in
Python:
if(1 == i): # unnecessary extra characters
Try to break the C/C++/Java habit and get rid of them:
if 1 == i:
## Other
I've already gone beyond the scope of this question, so I'm going to cut this
off now, but there are a few other things you could clean up and make more
idiomatic. Head over to Code Review when you're done here and see what else
can be improved.
|
fileinput, inplace filtering, encoding
Question: I am attempting to use the `fileinput` module's [inplace filtering
feature](https://docs.python.org/dev/library/fileinput.html#fileinput.FileInput)
to rewrite an input file in place.
I needed to set the encoding (both for reading and writing) to `latin-1`, and
attempted to pass `openhook=fileinput.hook_encoded('latin-1')` to
`fileinput.input`, but was thwarted by the error
ValueError: FileInput cannot use an opening hook in inplace mode
Upon closer inspection I see that the `fileinput` documentation clearly states
this: _You cannot use inplace and openhook together_
How can I get around this?
Answer: As far as I know, there is no way around this with the `fileinput` module. You
can accomplish the same task with a combination of the `codecs` module,
`os.rename()`, and `os.remove()`:
import os
import codecs

input_name = 'some_file.txt'
tmp_name = 'tmp.txt'

with codecs.open(input_name, 'r', encoding='latin-1') as fi, \
     codecs.open(tmp_name, 'w', encoding='latin-1') as fo:
    for line in fi:
        new_line = do_processing(line) # do your line processing here
        fo.write(new_line)

os.remove(input_name) # remove original
os.rename(tmp_name, input_name) # rename temp to original name
You also have the option of specifying a different encoding for the output file
if you want to change it, or leaving it as `latin-1` when opening the output
file if you don't want it to change.
I know this isn't the in-place modification you were looking for, but it will
accomplish the task you were trying to do and is very flexible.
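If you want something safer than a hard-coded `tmp.txt`, here is a sketch of the same pattern using a temporary file in the same directory (the helper name `rewrite_inplace` is made up, and `do_processing` is the same placeholder as above):

import os
import codecs
import tempfile

def rewrite_inplace(input_name, do_processing, encoding='latin-1'):
    dir_name = os.path.dirname(os.path.abspath(input_name))
    fd, tmp_name = tempfile.mkstemp(dir=dir_name)
    os.close(fd)  # reopen it with codecs so the encoding is handled for us
    with codecs.open(input_name, 'r', encoding=encoding) as fi, \
         codecs.open(tmp_name, 'w', encoding=encoding) as fo:
        for line in fi:
            fo.write(do_processing(line))
    os.remove(input_name)  # remove original
    os.rename(tmp_name, input_name)  # move temp into place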
|
Can't call static method inside class
Question: This is what I am trying to do: call a static method inside a class to populate
a class variable.
import sys
import os
from HelpingData import *

class Inventory(object):
    shipping_cost = 400.0
    total_stock = calculate_total_stock.__func__()

    def __init__(self, attributes={}):
        self.inventory = {}
        if attributes is None:
            self.inventory = {}
        else:
            for key in attributes:
                self.inventory[key] = attributes[key]

    def getValue(self, attribute):
        return self.inventory[attribute]

    def setValue(self, attribute, value):
        self.inventory[attribute] = value

    @staticmethod
    def calculate_total_stock():
        total_stock = dict((item, 0) for item in product_names)
        for nation in product_stock:
            for item in nation:
                total_stock[item] += nation[item]
        return total_stock
And this is the error I am getting:
> total_stock = calculate_total_stock.__func__()
> NameError: name 'calculate_total_stock' is not defined
Can someone suggest what I am missing here? I am new to Python (one day old).
Answer: The code at the top level of the `Inventory` definition (i.e. class attributes
and method definitions) runs _before_ the name `Inventory` exists, so you
can't call its own methods within the definition. As you have a
`@staticmethod`, which doesn't require any class or instance argument, why not
move it outside?
def calculate_total_stock(product_names, product_stock):
    total_stock = dict((item, 0) for item in product_names)
    for nation in product_stock:
        for item in nation:
            total_stock[item] += nation[item]
    return total_stock

class Inventory(object):
    SHIPPING_COST = 400.0
    TOTAL_STOCK = calculate_total_stock(product_names, product_stock)

    def __init__(self, attributes=None):
        self.inventory = {}
        if attributes is not None:
            for key in attributes:
                self.inventory[key] = attributes[key]

    def get_value(self, attribute):
        return self.inventory[attribute]

    def set_value(self, attribute, value):
        self.inventory[attribute] = value
Note that I have done some tidying up, particularly in terms of
[style](http://legacy.python.org/dev/peps/pep-0008/) and making the
arguments to `calculate_total_stock` explicit.
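If you would rather keep `calculate_total_stock` as a `@staticmethod` on the class, one alternative sketch is to populate the attribute after the class body has finished executing, at which point the name `Inventory` exists (this assumes `product_names` and `product_stock` come from `HelpingData`, as in the question):

class Inventory(object):
    SHIPPING_COST = 400.0

    @staticmethod
    def calculate_total_stock():
        total_stock = dict((item, 0) for item in product_names)
        for nation in product_stock:
            for item in nation:
                total_stock[item] += nation[item]
        return total_stock

# runs after the class object exists, so the lookup succeeds
Inventory.TOTAL_STOCK = Inventory.calculate_total_stock()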
|
Returning Cython array
Question: How does one properly initialize and return a Cython array? For instance:
cdef public double* cyTest(double[] input):
    cdef double output[3]
    for i in xrange(3):
        output[i] = input[i]**2
        print 'loop: ' + str(output[i])
    return output

cdef double* test = [1,2,3]
cdef double* results = cyTest(test)
for i in xrange(3):
    print 'return: ' + str(results[i])
This returns:
loop: 1.0->1.0
loop: 2.0->4.0
loop: 3.0->9.0
return: 1.88706086937e-299
return: 9.7051011575e+236
return: 1.88706086795e-299
So obviously, `results` still points only to garbage instead of the values it
should point to. Admittedly, I am slightly confused with mixing the pointer
and array syntax and which one is preferable/possible in a Cython context.
In the end, I want to call `cyTest` from a pure C++ function:
#include <iostream>
#include <Python.h>
#include "cyTest.h"

void main() {
    Py_Initialize();
    initcyTest();
    double input[3] = {1,2,3};
    double* output = cyTest(input);
    for(int i = 0; i < 3; i++)
        std::cout << "cout: " << output[i] << std::endl;
    Py_Finalize();
}
This returns similar results:
loop: 1.0->1.0
loop: 2.0->4.0
loop: 3.0->9.0
cout: 1
cout: 6.30058e+077
cout: 6.39301e-308
Anyone care to explain what error I'm making? I'd like to keep it as simple as
possible. It's just returning an array from Cython to C++ after all. I'll deal
with dynamic memory allocation later, if not necessary.
Answer: You are returning a reference to the local array `output`, which will not work.
Try changing your script to:

from cpython.mem cimport PyMem_Malloc

cdef public double* cyTest(double[] input):
    cdef double* output = <double *>PyMem_Malloc(sizeof(double) * 3)
    for i in xrange(3):
        output[i] = input[i]**2
        print 'loop: ' + str(output[i])
    return output
And in your C++ code, release the buffer once you are done using `double* output`.
Strictly speaking, memory obtained from `PyMem_Malloc` should be released with
`PyMem_Free` rather than `free()`, so the safest route is to expose a small
Cython function that calls `PyMem_Free` and invoke that from C++.
If you want to use `cdef double* results = cyTest(test)` in your pyx script,
then don't forget to call `PyMem_Free(results)`.
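An alternative sketch that sidesteps the ownership question entirely is to let the caller allocate the output buffer and have Cython fill it in (the function name `cySquare` is illustrative, not from the original code):

cdef public void cySquare(double* input, double* output, int n):
    # the caller owns both buffers, so nothing here needs to be freed
    for i in xrange(n):
        output[i] = input[i]**2

From C++ you would then declare `double out[3];` and call `cySquare(input, out, 3);`, with no heap allocation crossing the language boundary.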
|
edit a file line by line interactively from user input in python
Question: I want to know how to edit a file on the fly row by row in python.
For example I have a text file where I usually have:
key value
key value
key value
key value
key value
...
they are not necessarily the same pair for each line. It's just the way I
explained it.
I would like to show key and value line by line (on my terminal) and then do
one of these two things:
- just press enter (or some other hot-key) to go ahead and show the next line.
- enter a new value and then hit enter. This will replace the value that was being shown in the file, and then go ahead and show the next key/value pair.
This continues until the end of the file, or possibly until I type 'quit' or
some other keyword; it doesn't matter.
- Being able to go back to the previous row would be a plus (in case I accidentally go to the next row), but it's not too important for now.
I often find myself editing huge files in a very tedious and repetitive way,
and text editors are really frustrating, with the cursor going everywhere
when pressing the arrow keys. Also, having to use backspace to delete is
annoying.
I know how to read a file and how to write a file in python, but not in such
an interactive way; I only know how to write the whole file at once. Plus, I
wouldn't know if it is safe to open the same file for both reading and writing.
I also know how to manipulate each line, split the text into a list of values,
etc. All I really need is to understand how to modify the file at the exact
current line and handle this type of interaction well.
what is the best way to do this?
Answer: The way I would go about this is to load all the lines of the text file in a
list, and then iterate through that list, changing the values of the list as
you go along. Then at the very end (when you get to the last line or whenever
you want), you will write that whole list out to the file with the same name,
so that way it will overwrite the old file.
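As a rough illustration of that approach (not the asker's code - the filename, the prompt format, and the 'quit' keyword are all assumptions), a minimal sketch in the Python 2 style the question implies:

filename = 'data.txt'

with open(filename) as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    key, value = line.split(None, 1)
    entered = raw_input('%s %s > ' % (key, value.strip()))
    if entered == 'quit':
        break
    if entered:  # empty input (just enter) keeps the current value
        lines[i] = '%s %s\n' % (key, entered)

with open(filename, 'w') as f:
    f.writelines(lines)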
|
How to pass non-hard-coded parameter to Python decorator?
Question: My goal is to create a trivial unit test decorator, which executes a function
and, if it succeeds, do nothing, if it doesn't, print "FAILURE" and all its
parameters. I do know about the builtin `unittest` package. I'm doing this to
learn decorators. I'm not taking this any farther than "if actual equals
expected, do nothing, else print params".
I found [this function](http://stackoverflow.com/a/25206079/2736496) which
prints out all of a function's parameters:
def dumpArgs(func):
    '''Decorator to print function call details - parameters names and effective values'''
    def wrapper(*func_args, **func_kwargs):
        arg_names = func.__code__.co_varnames[:func.__code__.co_argcount]
        args = func_args[:len(arg_names)]
        defaults = func.__defaults__ or ()
        args = args + defaults[len(defaults) - (func.__code__.co_argcount - len(args)):]
        params = list(zip(arg_names, args))
        args = func_args[len(arg_names):]
        if args: params.append(('args', args))
        if func_kwargs: params.append(('kwargs', func_kwargs))
        print(func.__name__ + ' (' + ', '.join('%s = %r' % p for p in params) + ' )')
        return func(*func_args, **func_kwargs)
    return wrapper

@dumpArgs
def test(a, b = 4, c = 'blah-blah', *args, **kwargs):
    pass

test(1)
test(1, 3)
test(1, d = 5)
test(1, 2, 3, 4, 5, d = 6, g = 12.9)
Output:
test (a = 1, b = 4, c = 'blah-blah' )
test (a = 1, b = 3, c = 'blah-blah' )
test (a = 1, b = 4, c = 'blah-blah', kwargs = {'d': 5} )
test (a = 1, b = 2, c = 3, args = (4, 5), kwargs = {'g': 12.9, 'd': 6} )
I changed it to this, which prints out the parameters only if the function
does not equal `4` (implemented without a decorator param):
def get_all_func_param_name_values(func, *func_args, **func_kwargs):
    arg_names = func.__code__.co_varnames[:func.__code__.co_argcount]
    args = func_args[:len(arg_names)]
    defaults = func.__defaults__ or ()
    args = args + defaults[len(defaults) - (func.__code__.co_argcount - len(args)):]
    params = list(zip(arg_names, args))
    args = func_args[len(arg_names):]
    if args: params.append(('args', args))
    if func_kwargs: params.append(('kwargs', func_kwargs))
    return '(' + ', '.join('%s = %r' % p for p in params) + ')'

def dumpArgs(func):
    '''Decorator to print function call details - parameters names and effective values'''
    def wrapper(*func_args, **func_kwargs):
        a = func(*func_args, **func_kwargs)
        if(a != 4):
            return a
        print("FAILURE: " + func.__name__ + get_all_func_param_name_values(func, *func_args, **func_kwargs))
        return a
    return wrapper

@dumpArgs
def getA(a, b = 4, c = 'blah-blah', *args, **kwargs):
    return a

getA(1)
getA(1, 3)
getA(4, d = 5)
getA(1, 2, 3, 4, 5, d = 6, g = 12.9)
Output:
FAILURE: getA(a = 4, b = 4, c = 'blah-blah', kwargs = {'d': 5})
Out[21]: 1
(I don't understand why the `1` is printed in the second line.)
I then changed it to pass in the expected value, `4`, as decorator parameter.
As described in [this answer](http://stackoverflow.com/a/10176276/2736496), it
requires that the original decorator be a nested function:
def get_all_func_param_name_values(func, *func_args, **func_kwargs):
    arg_names = func.__code__.co_varnames[:func.__code__.co_argcount]
    args = func_args[:len(arg_names)]
    defaults = func.__defaults__ or ()
    args = args + defaults[len(defaults) - (func.__code__.co_argcount - len(args)):]
    params = list(zip(arg_names, args))
    args = func_args[len(arg_names):]
    if args: params.append(('args', args))
    if func_kwargs: params.append(('kwargs', func_kwargs))
    return '(' + ', '.join('%s = %r' % p for p in params) + ')'

def dumpArgs(expected_value):
    def dumpArgs2(func):
        '''Decorator to print function call details - parameters names and effective values'''
        def wrapper(*func_args, **func_kwargs):
            a = func(*func_args, **func_kwargs)
            if(a == expected_value):
                return a
            print("FAILURE: " + func.__name__ + get_all_func_param_name_values(func, *func_args, **func_kwargs))
            return a
        return wrapper
    return dumpArgs2

@dumpArgs(4)
def getA(a, b = 4, c = 'blah-blah', *args, **kwargs):
    return a

getA(1)
getA(1, 3)
getA(4, d = 5)
getA(1, 2, 3, 4, 5, d = 6, g = 12.9)
Output:
FAILURE: getA(a = 1, b = 4, c = 'blah-blah')
FAILURE: getA(a = 1, b = 3, c = 'blah-blah')
FAILURE: getA(a = 1, b = 2, c = 3, args = (4, 5), kwargs = {'g': 12.9, 'd': 6})
Out[31]: 1
(Again, that `1`...)
I'm not clear on how to change this hard-coded `4` to an `expected_value`
parameter, that is passed through at every function call. All the examples
I've seen (like [this one](http://stackoverflow.com/questions/10176226/how-to-
pass-extra-arguments-to-python-decorator)) have hard-coded parameters.
I am currently experimenting with
assert_expected_func_params(4, getA, 1)
assert_expected_func_params(4, getA, 1, 3)
assert_expected_func_params(4, getA, 4, d = 5)
assert_expected_func_params(4, getA, 1, 2, 3, 4, 5, d = 6, g = 12.9)
But it's far from working.
How do I implement a decorator parameter that I can pass in to _every function
call_?
Answer: Since a decorator wraps the function, you can intercept the input and output
of the function when it is called. In this way, you could look for an
`_expected` keyword, strip it out, call the function, then test the return
value of the function against the passed in expected value.
from functools import wraps

_empty = object() # sentinel value used to control testing

def dump_ne(func):
    @wraps(func)
    def decorated(*args, **kwargs):
        # remove the expected value from the actual call kwargs
        expected = kwargs.pop('_expected', _empty)
        # call the function with rest of args and kwargs
        result = func(*args, **kwargs)
        # only test when _expected was passed in the kwargs
        # only print when the result didn't equal expected
        if expected is not _empty and expected != result:
            print('FAIL: func={}, args={}, kwargs={}'.format(func.__name__, args, kwargs))
        return result
    return decorated

@dump_ne
def cool(thing):
    return thing.upper()

print(cool('cat')) # prints 'CAT', test isn't run

for thing in ('cat', 'ice', 'cucumber'):
    print(cool(thing, _expected='CUCUMBER'))
    # dumps info for first 2 calls (cat, ice)
|
fix error: jit decorator takes exactly one argument, 4 given
Question: I have the following class definition:
class GentleBoostC(object):
    def __init__(self):
        # do init stuff

    # add jit in order to speed up the code
    @jit
    @void(float_[:,:], int_[:], int_)
    def train(self, X, y, H):
        # train do stuff
Then, in another file, I do this:
import GentleBoostC as gbc
# initialize the 2D array X_train, the 1D array y_train, and the integer boosting_rounds
gentlebooster = gbc.GentleBoostC()
gentlebooster.train(X_train,y_train,boosting_rounds)
But then I get this error:
Traceback (most recent call last):
File "C:\Users\app\Documents\Python Scripts\gbc_classifier_train.py", line 53, in <module>
gentlebooster.train(X_train,y_train,boosting_rounds)
TypeError: _jit_decorator() takes exactly 1 argument (4 given)
I find decorators so confusing, and it wasn't until this error that I realized
that the `jit` implementation uses decorators too! Or at least I'm guessing it
does.
Answer: There are three problems here:
1) The latest Numba (version 0.14) does not support jitting classes or class
methods (jitting classes was lost in the 0.12 refactor, but will probably be
added back soon).
2) There is no `void` decorator (although it's possible this existed in a
previous version - I don't remember).
3) The function signature isn't specified correctly in the jit decorator. It
should be something like `@jit(void(float_[:,:], int_[:], int_))` for a
function that takes a 2d float array, a 1d int array, and an int, and returns
nothing. You could also specify it as a string: `@jit('void(f4[:,:], i4[:], i4)')`.
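The usual workaround under those constraints is to jit a plain module-level function and call it from the method. A sketch (the `float_`/`int_` type names follow the question, and exact numba APIs vary across these old versions, so treat this as illustrative):

from numba import jit, void, float_, int_

@jit(void(float_[:,:], int_[:], int_))
def _train_impl(X, y, H):
    # the numerical training loop lives here, outside the class
    pass

class GentleBoostC(object):
    def train(self, X, y, H):
        # thin wrapper; the jitted work happens in _train_impl
        _train_impl(X, y, H)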
|
python: converting datetime format
Question: I have the following format:
`"Wed Jun 25 15:38:29 PDT 2014"` and I would like to convert it to
`"2014-06-25 15:38:29"`, i.e. `"%Y-%m-%d %H:%M:%S"`
Code:
import time
import datetime
rawtime = "Wed Jun 25 15:38:29 PDT 2014"
dt = time.strptime(rawtime, "%Y-%m-%d %H:%M:%S")
print dt
I get the error:
ValueError: time data 'Wed Jun 25 15:38:29 PDT 2014\n' does not match format '%Y-%m-%d %H:%M:%S'
Answer: Simplest solution using [dateutil](http://labix.org/python-dateutil) package.
It really is excellent at this.
from dateutil import parser
print parser.parse("Wed Jun 25 15:38:29 PDT 2014")
Output:
2014-06-25 15:38:29
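If you need to stay in the standard library, a sketch that works around `strptime`'s unreliable handling of timezone abbreviations is to drop the `PDT` token before parsing (note this simply discards the timezone, which matches the desired output format):

import datetime

rawtime = "Wed Jun 25 15:38:29 PDT 2014"
parts = rawtime.split()
cleaned = " ".join(parts[:4] + parts[5:])  # "Wed Jun 25 15:38:29 2014"
dt = datetime.datetime.strptime(cleaned, "%a %b %d %H:%M:%S %Y")
print dt.strftime("%Y-%m-%d %H:%M:%S")  # 2014-06-25 15:38:29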
|
Python equivalent of R's head and tail function
Question: I want to preview a Pandas dataframe. I would use head(mymatrix) in R, but I
do not know how to do this in Pandas Python.
When I type
df.head(10) I get...
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 9
Data columns (total 14 columns):
#Book_Date 10 non-null values
Item_Qty 10 non-null values
Item_id 10 non-null values
Location_id 10 non-null values
MFG_Discount 10 non-null values
Sale_Revenue 10 non-null values
Sales_Flg 10 non-null values
Sell_Unit_Cost 5 non-null values
Store_Discount 10 non-null values
Transaction_Id 10 non-null values
Unit_Cost_Amt 10 non-null values
Unit_Received_Cost 5 non-null values
Unnamed: 0 10 non-null values
Weight 10 non-null values
Answer: Suppose you want to output the first and last 10 rows of the iris data set.
In R:
data(iris)
head(iris, 10)
tail(iris, 10)
In Python (scikit-learn required to load the iris data set):
import pandas as pd
from sklearn import datasets
iris = pd.DataFrame(datasets.load_iris().data)
iris.head(10)
iris.tail(10)
Now, as [previously answered](http://stackoverflow.com/questions/13085709/df-
head-sometimes-doesnt-work-in-pandas-python), if your data frame is too large
for the display you use in the terminal, a summary is output. To visualize
your data in a terminal, you could either expand the terminal or reduce the
number of columns to display, as follows.
iris.ix[:,1:2].head(10)
|
How to create a python decorator programatically
Question: I am writing an app that creates/provides various Python decorators. I'd like
this app to be localized, including the names of the decorators. The
decorators would ultimately be used by other developers who are using my app
as a framework (think of my app as a test framework).
Ideally, I'd like to have the ability to create the decorators dynamically
based on the localization settings.
For instance, if the language selected is English, I'd like my decorator to be
named "apple", and if it's French, I'd like it to be named "pomme".
In the Python code, it would either look like:
@apple
def my_func():
pass
or
@pomme
def my_func():
pass
I ultimately want to have the flexibility to add many more languages without
having to modify or duplicate a bunch of code. Ideally, I would only have one
decorator function that would handle "apple" or "pomme" depending on the
settings.
What's the best way to achieve this?
Answer: First, don't do this. This will bring many problems upon you and make life
much harder for you and your users. Anyway, python is very dynamic so you can
do that.
Setup your package like that:
yourpackage/
__init__.py
decorators.py
In `decorators.py`:
# List all decorators you want to publish. Use english names here.
__all__ = ['apple', 'orange', ...]

# Here come implementations named in english
def apple(...):
    ...

...
In `__init__.py`:
# Whatever other submodules export, or just []
__all__ = [...]

from . import decorators

# Get locale somehow
LOCALE = ...

# This translation function could be as complex as you wish;
# I use a lookup in a hard-coded dict
TRANSLATIONS = {
    'fr': {'apple': u'pomme', ...},
    ...
}

def _translate_name(name):
    # If something is not translated we use the default english name;
    # could easily be changed to raise an error
    return TRANSLATIONS.get(LOCALE, {}).get(name, name)

# Generate dynamic attributes on the current module and add them to __all__
import sys
this_module = sys.modules[__name__]
for name in decorators.__all__:
    translated = _translate_name(name)
    setattr(this_module, translated, getattr(decorators, name))
    __all__.append(translated)
Managing `__all__` in `__init__.py` is optional. This is to allow `from
yourmodule import *`.
|
Django tutorial: unexpected indent error
Question: Here is my models.py code:
from django.db import models

# Create your models here.
class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')
        def __str__(self):
            return self.question

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice_text = models.CharField(max_length=200)
    votes = models.IntegerField(default=0)
        def __str__(self):
            return self.choice_text
And when I run the following command:

python manage.py runserver

it gives me the following error:
> mjrulesamrat@mjrulesamrat-Lenovo-G570:~/django_local/first_web$ python manage.py runserver
> Validating models...
>
> Unhandled exception in thread started by Traceback (most recent call last):
> File "/usr/local/lib/python2.7/dist-packages/django/utils/autoreload.py", line 93, in wrapper
> fn(*args, **kwargs)
> File "/usr/local/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 98, in inner_run
> self.validate(display_num_errors=True)
> File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 310, in validate
> num_errors = get_validation_errors(s, app)
> File "/usr/local/lib/python2.7/dist-packages/django/core/management/validation.py", line 34, in get_validation_errors
> for (app_name, error) in get_app_errors().items():
> File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 196, in get_app_errors
> self._populate()
> File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 75, in _populate
> self.load_app(app_name, True)
> File "/usr/local/lib/python2.7/dist-packages/django/db/models/loading.py", line 99, in load_app
> models = import_module('%s.models' % app_name)
> File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 40, in import_module
> __import__(name)
> File "/home/mjrulesamrat/django_local/first_web/polls/models.py", line 7
> def __str__(self):
> ^
> IndentationError: unexpected indent
I'm using Django 1.6 and Python 2.7.
Please point out any mistake I have made in this code, because when I run the
following in the Python shell it gives me a Poll object, not the question:
>>> Poll.objects.all()
[<Poll: Poll object>]
Answer: Watch/Fix your indentation on the model method level:

from django.db import models

# Create your models here.
class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

    # HERE
    def __str__(self):
        return self.question

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice_text = models.CharField(max_length=200)
    votes = models.IntegerField(default=0)

    # AND HERE
    def __str__(self):
        return self.choice_text
|
Postgres: Is there a way of executing code following a INSERT statement?
Question: This may seem strange, but I was curious to know if it was possible for a code
block to be executed following an INSERT statement in a postgres database?
Specifically, I'm interested in executing Python code after an INSERT
statement has occurred in a pg database.
Answer: The simple way to tackle this is to use postgresql
[notifications](http://www.postgresql.org/docs/9.3/static/sql-notify.html).
You can add an after insert/update trigger which does the notification:

CREATE OR REPLACE FUNCTION on_insert() RETURNS trigger AS
$$
BEGIN
    execute E'NOTIFY ENTITY_CHANGE, \'' || NEW.id || E'\'';
    RETURN NEW;
END
$$
LANGUAGE 'plpgsql' VOLATILE;

create trigger trig_on_insert
after insert on ENTITY
for each row
execute procedure on_insert();
`ENTITY_CHANGE` is the identifier of the channel; you can pick any name you like.
Your application should
[listen](http://initd.org/psycopg/docs/advanced.html#asynchronous-
notifications) to it in a separate thread (or process) and do what is needed:

import select
from django.db import connection

curs = connection.cursor()
curs.execute("LISTEN ENTITY_CHANGE;")
while not_finish:  # not_finish: your own loop condition
    if select.select([connection], [], [], 5) == ([], [], []):
        print "Timeout"
    else:
        connection.poll()
        while connection.notifies:
            notify = connection.notifies.pop()
            entity_id = notify.payload
            do_post_save(entity_id)
The only caveat is that notifications are not transactional and can be lost if
a catastrophic failure happens. That is, if your application gets a
notification but then crashes (or is killed) before it finishes processing it,
that notification is lost forever.
If you need a guarantee that post-save processing always happens, you need to
maintain a table of tasks. The after insert/update trigger should add a task
to this table, and some Python process should poll the table and do the
required processing. The downside is the polling - it will issue unnecessary
queries while the system is not saving entities.
You can combine both approaches to get the best of both worlds: use notify to
kick off processing, but have the processor take its tasks from the task table
filled by the trigger. On application startup this processing should be run
once to pick up any unfinished work.
|
python multiprocessing example itertools multple lists
Question: I have a very simple application with a nested for-loop and it can take
minutes to hours to run depending on the amount of data.
I got started with the multiprocessing lib in python and tried implementing it
in its most basic form, and even though my code runs, there are no performance
gains, leading me to believe I am implementing it incorrectly and/or the
design of my code is extremely flawed.
My code is pretty straightforward:
import csv
import multiprocessing

somedata1 = open('data1.csv', 'r')
SD_data = csv.reader(data1, delimiter=',')
data1 = []
**import lots of CSV data***

def crunchnumbers():
    for i, vald1 in enumerate(data1):
        for i, vald2 in enumerate(data2):
            for i, vald3 in enumerate(data3):
                for i, vald4 in enumerate(data3):
                    for i, vald5 in enumerate(data3):
                        sol = #add values
    print d_solution

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.apply(crunchnumbers)
How can I do this with python's multiprocessing (somehow splitting it into
chunks)? Or is this a better job for jug? Based on suggestions on SO, I spent
a few days trying to use Jug, but the number of iterations in my nested for-
loops easily gets into the tens of millions (and more) of very fast
transactions, so the author recommends against it.
Answer: I suggest using `itertools.product` with a multiprocessing map:

import csv
import multiprocessing
from itertools import product

def crunchnumber(values):
    if some criteria:
        sol = values[0][2]+values[1][2]+values[2][2]....
        return sol

def process(datas):
    "takes data1, ..., datan as a list"
    pool = multiprocessing.Pool(processes=4)
    result = pool.map_async(crunchnumber, product(*datas))
    # .get() blocks until all the results are in
    print [a for a in result.get() if a is not None]
|
make_server() check if bind to port succeeded
Question: _In Python 2 and 3k_, using `wsgiref.simple_server.make_server(host, port, app)`
does not raise an exception when the port is already in use. Instead, a call
to `.serve_forever()` or `.handle_request()` simply blocks until the other
server releases the port and the next connection comes in.
import wsgiref.simple_server as simple_server

def application(environ, start_response):
    start_response('200 OK', [('Content-type', 'text/html')])
    return ["<html><body><p>Hello!</p></body></html>".encode('utf-8')]

def main():
    server = simple_server.make_server('', 8901, application)
    server.serve_forever()

if __name__ == "__main__":
    main()
I would expect an Exception to be raised, since `socket.socket.bind()` also
raises an exception in this case. Is there a way to determine if the returned
`HTTPServer` did successfully bind to the specified port?
Answer: I found the reason for this. The `HTTPServer` class source code in Python
2.7.8 is the following:

class HTTPServer(SocketServer.TCPServer):

    allow_reuse_address = 1    # Seems to make sense in testing environment

    def server_bind(self):
        """Override server_bind to store the server name."""
        SocketServer.TCPServer.server_bind(self)
        host, port = self.socket.getsockname()[:2]
        self.server_name = socket.getfqdn(host)
        self.server_port = port
And `allow_reuse_address` is used in `SocketServer.TCPServer.server_bind()`
like this:

class TCPServer(BaseServer):
    # ...
    def server_bind(self):
        """Called by constructor to bind the socket.

        May be overridden.
        """
        if self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
Setting `allow_reuse_address` to `False` will cause
`self.socket.bind(self.server_address)` to raise an exception. I wonder if
this line in the `HTTPServer` class is intentional, since the comment says
it's "makes sense in testing environments".
|
Alternatives to using functools.partial with string methods
Question: A profiling of my code shows that methods `split` and `strip` of `str` objects
are amongst the the most called functions.
It happens that I use constructs such as:
with open(filename, "r") as my_file:
    for line in my_file:
        fields = line.strip("\n").split("\t")
And some of the files to which this is applied have a lot of lines.
So I tried using the "avoid dots" advice in
<https://wiki.python.org/moin/PythonSpeed/PerformanceTips> as follows:
from functools import partial

split = str.split
tabsplit = partial(split, "\t")
strip = str.strip
endlinestrip = partial(strip, "\n")

def get_fields(tab_sep_line):
    return tabsplit(endlinestrip(tab_sep_line))

with open(filename, "r") as my_file:
    for line in my_file:
        fields = get_fields(line)
However, this gave me a `ValueError: empty separator` for the `return` line of
my `get_fields` function.
After investigating, what I understand is that the separator for the `split`
method is the second positional argument, the first being the string object
itself, which made `functools.partial` understand `"\t"` as the string to be
split, and I was using the result of `"\n".strip(tab_sep_line)` as separator.
Hence the error.
What woud you suggest to do instead?
* * *
Edit: I tried to compare three ways to implement the `get_fields` function.
Approach 1: Using plain `.strip` and `.split`

def get_fields(tab_sep_line):
    return tab_sep_line.strip("\n").split("\t")

Approach 2: Using `lambda`

split = str.split
strip = str.strip
tabsplit = lambda s: split(s, "\t")
endlinestrip = lambda s: strip(s, "\n")

def get_fields(tab_sep_line):
    return tabsplit(endlinestrip(tab_sep_line))

Approach 3: Using the answer provided by Jason S

split = str.split
strip = str.strip

def get_fields(tab_sep_line):
    return split(strip(tab_sep_line, "\n"), "\t")
Profiling indicates the following cumulative times for `get_fields`:

Approach 1: 13.027
Approach 2: 16.487
Approach 3: 9.714
So avoiding dots makes a difference but using `lambda` seems counter-
productive.
Answer: The advice to "avoid dots" for performance is (1) only something you should do
if you actually have a performance problem, i.e. not if it's just called a lot
of times but if it actually _takes too much time_ , and (2) not going to be
solved by using `partial`.
The reason dots can take more time than locals is that python has to perform a
lookup each time. But if you use `partial`, then there's an extra function
call each time _and_ it also copies and updates a dictionary each time _and_
adds two lists. You're not gaining, you're losing.
However, if you really want you can do:
strip = str.strip
split = str.split
...
fields = split(strip(line), '\t')
|
Waiting for a table to load completely using selenium with python
Question: I want to scrape some data from a page which is in a table. So I am only
bothered about the data in the table. Earlier I was using Mechanize, but I
found sometimes some of the data are missing, especially in the bottom of the
table. Googling, I found out that it may be due to mechanize not handling
Jquery/Ajax.
So I switched to Selenium today. How do I wait for one and only one table to
load completely and then extract all links from that table using selenium and
python? If I wait for complete page to load, it is taking some time. I want to
ensure that only data in the table is loaded. My current code:
driver = webdriver.Firefox()
for page in range(1, 2):
    driver.get("http://somesite.com/page/"+str(page))
    table = driver.find_element_by_css_selector('div.datatable')
    links = table.find_elements_by_tag_name('a')
    for link in links:
        print link.text
Answer: Use [`WebDriverWait`](http://selenium-
python.readthedocs.org/en/latest/waits.html#explicit-waits) to wait until the
table is located:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

...

wait = WebDriverWait(driver, 10)
table = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'div.datatable')))
This would be an _explicit wait_.
* * *
Alternatively, you can make the driver [_wait implicitly_](http://selenium-
python.readthedocs.org/en/latest/waits.html#implicit-waits):
> An implicit wait is to tell WebDriver to poll the DOM for a certain amount
> of time when trying to find an element or elements if they are not
> immediately available. The default setting is 0. Once set, the implicit wait
> is set for the life of the WebDriver object instance.
from selenium import webdriver

driver = webdriver.Firefox()
driver.implicitly_wait(10) # wait up to 10 seconds while trying to locate elements
for page in range(1, 2):
    driver.get("http://somesite.com/page/"+str(page))
    table = driver.find_element_by_css_selector('div.datatable')
    links = table.find_elements_by_tag_name('a')
    for link in links:
        print link.text
|
PyPDF2 won't import
Question: Hi, I'm just getting started with Python and trying to get some requisite
libraries installed. I'm using Python 3.4.1 on OS X. I have installed PyPDF2 (with
supposed success), yet I cannot seem to use the tools:
sh-3.2# port select --list python
Available versions for python:
none
python25-apple
python26
python26-apple
python27-apple
python34 (active)
sh-3.2# pip install PyPDF2
Requirement already satisfied (use --upgrade to upgrade): PyPDF2 in /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages
Cleaning up...
sh-3.2#
...
import PyPDF2
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import PyPDF2
ImportError: No module named 'PyPDF2'
>>>
Am I missing a step? Or is PyPDF2 not supported in py3.4.1?
Answer: **PyPDF2** is compatible with Python 3.4, so that's not the problem.
In which Python version do you have pip installed? Even though you're on
`python34`, if pip is installed to a different version it will download
libraries to that version.
In any case, you can always install by downloading from
[PyPI](https://pypi.python.org/pypi?:action=display&name=PyPDF2&version=1.23),
then running `setup.py install`.
Still, the only possible explanation I have is that the current Python version
you're on does _not_ have PyPDF2 installed. See if you can import PyPDF2 from
any of the other versions.
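One quick diagnostic is to check, from inside the Python you actually run, which interpreter it is and where it looks for packages:

import sys
print(sys.executable)  # which interpreter is running this?
print(sys.path)        # does it include the site-packages pip installed to?

Then installing with an explicit interpreter, e.g. `python3.4 -m pip install PyPDF2`, guarantees pip and the interpreter can't diverge (the exact `python3.4` binary name is an assumption about your MacPorts setup).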
|
IntelliJ IDEA - how to map remote PYTHONPATH to local environment?
Question: I'm using a python remote interpreter in IntelliJ (13.1), and using "composes"
modules which are installed on the server.
By importing the module as follows, I can use it without any problem, but I
get the warning "No module named composes":

import composes

And I can't get auto-completion for the module in the editor.
Do I need to map the remote PYTHONPATH to the local environment?
If so, please tell me how to do that.
Answer: I found some documentation for this:
<http://www.jetbrains.com/pycharm/quickstart/configuring_interpreter.html>
I think the best way is a remote SSH interpreter. Check this out.
Edit: But don't forget: if you choose a remote interpreter, you can't use your
local modules.
Edit2:
1) Add a deployment server from **Tools->Deployment->Configuration**

2) Add a remote interpreter from **File->Settings->Project Interpreter->Add
remote**, and select the **Deployment Configuration** for the FTP connection
so your local files can be sent to the server.

3) And now you can upload your files to the server from PyCharm. For this,
**Right click the project folder->Upload to xxx**. If all the configuration is
okay, your files will now upload to the server and you can use
`auto-completion` for your local files.

If it doesn't work, please try **File->Invalidate Caches**, let it delete all
caches, and download everything over again.
|
Merge multiple csv file based on a template header in python
Question: I have multiple csv files that all have more or less the same headers. Some
might have all the headers and some might not. I want to use a common csv file
that has only the headers and merge them all.
sample header:
a, b, c, d, e, f,
file 1:
a, b, d,
1, 2, 3,
file 2:
a, b, c, e,
4, 5, 6, 7,
Merged result:
a, b, c, d, e, f,
1, 2, , 3,
4, 5, 6, , 7, ,
So far I have been pointed to csv.DictReader and csv.DictWriter, but I am
having trouble merging based on a common header while also keeping the header
order. Is there any way I could still use them without sorting?
I tried the pandas merge function, but it needs a column to sort on, which my
data do not contain.
Any help is appreciated. Thank you
Answer: So I decided to help you by creating a class to do this. It returns a generator
which you can iterate over to build your final file.

import csv

class DataFile(object):
    empty = ''  # use this if col does not have value

    def __init__(self, filename):
        f = open(filename, 'r')
        self.reader = csv.reader(f)
        # set first line as header
        self.header = [x.strip() for x in self.reader.next()]

    def get_header(self):
        return self.header

    def with_header(self, headers):
        """ Returns a generator for specified headers"""
        header_dict = dict([(a, i,) for i, a in enumerate(self.header)])
        for line in self.reader:
            li = []
            for h in headers:
                if h in header_dict:
                    li.append(line[header_dict[h]])
                else:
                    li.append(self.empty)
            yield li
You can use it to join files: `file1.csv` and `file2.csv` thus:
>>> one = DataFile('file1.csv')
>>> two = DataFile('file2.csv')
>>> one.get_header()
['a', 'b', 'd', '']
>>> comb = set(one.get_header() + two.get_header())
>>> final = list(one.with_header(comb)) + list(two.with_header(comb))
>>> final
[['1', '', '', ' 2', '', ' 3'], ['4', '', ' 6', ' 5', ' 7', '']]
You can then use `comb` and `final` to build your new csv file (with the csv
writer, as sketched below). You could also build a function that takes in
multiple files and returns a generator with all columns from all files. You
can change the character used when a column has no value by modifying the
`empty` attribute. I think it's easy to follow.
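For completeness, here is a minimal sketch of that writing step (the `merged.csv` name is made up; the column order is fixed up front so the header and rows stay aligned):

import csv

one = DataFile('file1.csv')
two = DataFile('file2.csv')

# de-duplicate headers while preserving first-seen order
headers = []
for h in one.get_header() + two.get_header():
    if h not in headers:
        headers.append(h)

with open('merged.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerow(headers)
    writer.writerows(one.with_header(headers))
    writer.writerows(two.with_header(headers))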
|
Can't log in to website with Python requests session module
Question: I am just starting out with web scraping. For my first project, I'm trying to
log into artofproblemsolving.com using requests.Session() and access another
user's account. Here is my code:
import requests

LOGIN_URL = 'https://www.artofproblemsolving.com/Forum/ucp.php?mode=login'
DATA_URL = 'https://www.artofproblemsolving.com/Forum/memberlist.php?mode=viewprofile&u=90586'

payload = {
    'username': '{{my_username}}',
    'password': '{{my_password}}'
}

with requests.Session() as s:
    s.post(LOGIN_URL, data=payload)
    r = s.get(DATA_URL)
    print r.text
But when I run this in terminal, the output HTML is from the login page, not
the user's profile that I'm trying to grab. I double-checked that LOGIN_URL is
the POST action in the login form, that 'username' and 'password' are the
names of the form items, and that my login information is correct.
All help is appreciated, thank you!
Answer: I'm not sure this is the direct reason for the problem, but there are
other parameters that the form sends to the login action.
Besides `username` and `password`, also pass the following parameters:

'username': username,
'password': password,
'login': 'Login',
'sid': sid,  # You need to parse the login page to get sid
'redirect': 'index.php',
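As a sketch of how you might fetch that `sid` first (this assumes the login form exposes it as a hidden `<input name="sid">`, which I have not verified; BeautifulSoup is an extra dependency):

import requests
from bs4 import BeautifulSoup

LOGIN_URL = 'http://www.famjia.com/portal/login'

with requests.Session() as s:
    page = s.get(LOGIN_URL)
    soup = BeautifulSoup(page.text)
    sid_field = soup.find('input', {'name': 'sid'})
    sid = sid_field['value'] if sid_field else ''
    # then include sid in login_data for s.post(LOGIN_URL, data=login_data)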
|
python multiprocessor how to stop all threads when on one of them find the result?
Question: I am trying to run my code in parallel using the Python "from multiprocessing
import Process, Value" model. However, I am creating a shared variable and
using it as a flag, so if one process finds the result it will set the flag
value to 1 and exit. Other processes should exit when the flag value is
changed, but for some reason the flag value does not change for these
processes. This is the code:
from multiprocessing import Process, Value

gflag = Value('i',0)  # this is the global flag shared between processes
gflag = 0
while True:
    if not piece:
        break
    list1 = piece.splitlines()
    p = Process(target=doWork, args=(gflag,inValue,list1,var1,))
    p.start()
    if(gflag == 1):
        exit()
    piece = f.read(10000)

def doWork(gflag,inputvalue,diclist,var1):
    for p in diclist:
        calResult = doSomeCal(p,var1)
        if(calResult == inputvalue):
            gflag = 1
            exit()
    if(gflag == 1):
        print "exit now"
        exit()
The question is: how do I stop all the processes when one of them finds the result?
Answer: The main problem with your attempted solution is that, because you assign to
`gflag` within `doWork`, it's a local variable, not a global. To fix that, you
need to add `global gflag` at the start of the function.
But even if you fixed that, it wouldn't solve your problem. When you write
`gflag=1`, that doesn't update the shared value stored in `gflag`, it just
rebinds the name `gflag` to the local int `1`. You want to use `gflag.value`.
(Note that once you fix this, the previous problem goes away… but it's still
often better to use the `global` declaration for human readers, even when the
compiler doesn't need it.)
And finally, `doWork` doesn't actually check the flag until after it's
finished doing all the work, so the flag doesn't help you to exit the child
processes earlier. You probably want to put that `if` statement inside the
`for` loop, so it checks the flag once per element, instead of only once at
the end.
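Putting those three fixes together, a sketch of a corrected worker (`doSomeCal` is the asker's placeholder; the per-element check is what lets workers exit early):

def doWork(gflag, inputvalue, diclist, var1):
    for p in diclist:
        if gflag.value == 1:  # another process already found the result
            return
        if doSomeCal(p, var1) == inputvalue:
            gflag.value = 1  # signal every other worker to stop
            return

The main process should likewise test `gflag.value`, and must not rebind the name (the `gflag = 0` line in the question throws the shared `Value` away).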
|
Curses using changing data
Question: I have the following code. The number range increments by 1 each time, i.e. 1-9,
then 2-10, etc. I want to display this within a shell window using python via
curses.
The goal is to have a list of text that is constantly changing.
from itertools import cycle
import curses, contextlib, time

@contextlib.contextmanager
def curses_screen():
    """Contextmanager's version of curses.wrapper()."""
    try:
        stdscr = curses.initscr()
        curses.noecho()
        curses.cbreak()
        stdscr.keypad(1)
        try: curses.start_color()
        except: pass
        yield stdscr
    finally:
        stdscr.keypad(0)
        curses.echo()
        curses.nocbreak()
        curses.endwin()

with curses_screen() as stdscr:
    pad = curses.newpad(100, 100)
    pad.addstr(0, 0, curses.longname())
    coord = 5, 5, 10, 75
    # simulate changing text #
    for z in range(1, 10):
        s, e = z+1, z+10
        for x in range(s, e):
            pad.addstr(x, 0, str(x))
        pad.refresh(0, 0, *coord)
With this code nothing is printed to the screen. Why? Note: the reason I've
used a newpad is so that I can eventually scroll through this list.
Thanks,
Answer: It's not that nothing is printed to the screen, it's that you do all your
printing in a few microseconds, at which point you restore the pre-`curses`
screen and exit. Throw in a `pad.getch()` or just a `time.sleep(1)` and you'll
see the output.
If you want this to go slowly enough that you can watch it, instead of as fast
as your computer can count up numbers, you'll want a delay (and a `refresh`)
inside the loop.
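Concretely, a sketch of the question's loop slowed down enough to watch (this reuses the `curses_screen()` contextmanager from the question; the half-second delay is arbitrary):

import time

with curses_screen() as stdscr:
    pad = curses.newpad(100, 100)
    coord = 5, 5, 10, 75
    for z in range(1, 10):
        for x in range(z + 1, z + 10):
            pad.addstr(x, 0, str(x))
        pad.refresh(0, 0, *coord)
        time.sleep(0.5)  # slow enough to see each update
    pad.getch()  # hold the screen until a key is pressed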
|
sending and recieving through serial
Question: I have a simple program running on the Uno that measures distance
using a ping sensor, and now I am trying to control some servos from Python
based on the distance, but the conditional is not working even in code as
simple as this:
import serial

data = serial.Serial('COM7',9600)
while(1):
    if(data.inWaiting()>0):
        dist = data.readline()
        if(dist>100):
            print("dist is greater than 100")
        else:
            print("this shit does not work")
It is always the if branch that runs. I am a noob, please help me!
Answer: The data read from the serial port is of type `str`.
You are trying to read the data and use `if ... > 100`; this will never work.
You need to read the data into a buffer and then check it. You also need some
sync frame so you know you have received all the data (it's not a must, but it
makes things much easier).
For example, let's say the data read is `100!` and `!` is your sync frame:

import serial

my_serial = serial.Serial('COM7',9600)
my_data = ""
while(1):
    if(my_serial.inWaiting()>0):
        my_data = my_serial.readline()
        if '!' in my_data:
            break

my_data = my_data[:my_data.find("!")]
if int(my_data) > 100:
    print("dist is greater than 100")
else:
    print("this shit does not work")
|
Python findall, regex
Question: I have this text:
u'times_viewed': 12268,
u'url': u'/photo/79169307/30-seconds-light',
u'user': {u'affection': 63962,
How can I just get out this string: `"/photo/79169307/30-seconds-light"`?
I am trying with regex and `findall`:
list = re.findall('u'url': u'/photo/"([^"]*)"', text)
but it won't go.
Answer: I assume that by "it won't go," you mean that you get a syntax error, which
you should. Here:

list = re.findall('u'url': u'/photo/"([^"]*)"', text)

the quote characters inside the pattern collide with the quotes delimiting the
string literal, so the string you're trying to pass to `re.findall` ends early
and you get a syntax error (and the `"` characters in the pattern don't match
the `'` quoting actually used in your text). Try:

list_ = re.findall("u'url': u'/photo/([^']*)'", text)

Additionally, this isn't going to grab the text after `photo`, so you'll need
to add more parens:

list_ = re.findall("u'url': u'(/photo/([^']*))'", text)

and now `list_[0][0]` should hold your string (`re.findall` returns a list of
group tuples, not a match object).
On top of that, it looks like you're dealing with JSON-like data. A better
approach might be:

import json

data = json.loads(text)
list_ = data['url']
|
Tuple for multiprocessing.Array in python
Question: I'm struggling with multiprocessing in python. I want to put a list of tuples
in a multiprocessing.Array, but I can't find the typecode for a tuple.
This is the code; I want to know how to write "type_of_tuple" for arr in the
main function.
from multiprocessing import Pool, Array

def thread_func(time, array):
    time.sleep(time)
    if len(array) > 0:
        print(array.pop(0))

def main(cpu_number):
    list = [("a","b"), ("c","d"), ("e","f")]
    arr = Array( type_of_tuple """ how to write this?""", list)
    for i in range(cpu_number):
        r = pool.apply_async(thread_func, args=(1000, arr))
        thread_list.append(r)
    for thread in thread_list:
        thread.wait()

if __name__ == "__main__":
    main(3)
Answer: The reason you can't find it is because it doesn't exist. The whole point of
`Array` is that it handles arrays of simple, homogenous types that can be
stored as "unboxed" binary data.
A tuple is a compound type, which can hold any number of values of any kind.
So you can't put it in an `Array`.
In fact, you can't put strings in arrays either, because strings have a
variable number of characters; each one is a different size. (And, if this is
Python 3, it's even worse than that, because characters can be 1, 2, or 4
bytes…)
On top of that, an array has a fixed length; you can't `pop` values off it
anyway.
So, you will need to find a different way to share this data.
You could use `multiprocessing.sharedctypes` if you understand C well enough
to map your tuple of strings to a `struct` of `char*`.
Or you could write a function to encode the tuples into fixed-size values
(which you then slice into an Array of characters) on one side and decode them
on the other.
But I suspect you'll find life a lot simpler if you do what the docs recommend
and find a way to write your code in terms of message passing instead of
shared memory.
Since the only shared mutation you need here is to have each job `pop` a
value off the end so that other jobs won't see the same value, the obvious
answer is to use a `Queue`, because that's exactly what it does.
Or, even simpler, just use one of the higher-level methods like `map` instead
of `apply`, to take care of managing the queue and making sure each job gets
exactly one value, so you don't even have to think about it. For example:
import time
import multiprocessing
from functools import partial

def thread_func(delay, value):
    time.sleep(delay)
    print(value)

def main(cpu_number):
    pool = multiprocessing.Pool(cpu_number)
    values = [("a","b"), ("c","d"), ("e","f")]
    results = pool.imap_unordered(partial(thread_func, 1000), values[:cpu_number])
    for result in results:
        pass

if __name__ == "__main__":
    main(3)
(As a side note, I'm not sure why you're restricting the number of tasks to
the number of CPUs. Normally, you create a `Pool(cpu_number)` and just queue
up all of the tasks on that. If you only want to run exactly 3 tasks, you
don't even really need a pool for that, just run each one on a `Process`.)
|
Google Maps with Python 3.4.1
Question: I am trying to write a script to assign the latitude and longitude of a
location based on its address, similar to what is fantastically explained here:
<http://py-googlemaps.sourceforge.net>. The only problem is that that code is
written for Python 2.3-2.6. Does anyone know how I would update it to work
with Python 3.4.1?
When I run
from googlemaps import GoogleMaps
I get the error
No module named 'googlemaps'
Thanks for your help
Answer: I've tried using the Python 2to3 tool, and it worked just fine:
`2to3 googlemaps.py -w`
About the error `No module named 'googlemaps'`: you have to place `googlemaps.py`
in a directory on your PYTHONPATH. You may use the local directory, or you may
manually copy `googlemaps.py` to the site-packages folder.
Hope it helps
|
python get html page after login
Question: I want to log in to famjia.com and I have tried all the methods I know; none
of them works for me. I tried using requests and urllib but they don't work.
Help? This is my code. Thanks in advance.

import requests

URL = 'http://www.famjia.com/portal/intranet/famjiaPaper/'

session = requests.session()
login_data = dict({'initialURI': '/portal/intranet/',
                   'loginname': 'loginname',
                   'loginpassword': 'loginpassword',
                   'username': 'username',
                   'password': 'password',
                   })

r = session.post(URL, data=login_data)
req = session.get('http://www.famjia.com/portal/intranet/famjiaPaper/')
print req.content
Answer: The `POST` request should be made to `http://www.famjia.com/portal/login` url:
import requests

URL = 'http://www.famjia.com/portal/intranet/famjiaPaper/'
LOGIN_URL = 'http://www.famjia.com/portal/login'

session = requests.session()
login_data = {'initialURI': '/portal',
              'loginname': '',
              'loginpassword': '',
              'username': 'YOUR USERNAME HERE',
              'password': 'YOUR PASSWORD HERE'}

session.post(LOGIN_URL, data=login_data)
req = session.get(URL)
print req.content
|
Single process code performs faster than Multiprocessing - MCVE
Question: My attempt to speed up one of my applications using multiprocessing resulted
in lower performance. I am sure it is a design flaw, but that is the point of
the discussion: how to better approach this problem in order to take advantage
of multiprocessing.
My current results on a 1.4ghz atom:
1. SP Version = 19 seconds
2. MP Version = 24 seconds
Both versions of code can be copied and pasted for you to review. The dataset
is at the bottom and can be pasted also. (I decided against using xrange to
illustrate the problem)
First the SP version:
*PASTE DATA HERE*

def calc():
    for i, valD1 in enumerate(D1):
        for i, valD2 in enumerate(D2):
            for i, valD3 in enumerate(D3):
                for i, valD4 in enumerate(D4):
                    for i, valD5 in enumerate(D5):
                        for i, valD6 in enumerate(D6):
                            for i, valD7 in enumerate(D7):
                                sol1 = float(valD1[1]+valD2[1]+valD3[1]+valD4[1]+valD5[1]+valD6[1]+valD7[1])
                                sol2 = float(valD1[2]+valD2[2]+valD3[2]+valD4[2]+valD5[2]+valD6[2]+valD7[2])
    return None

print(calc())
Now the MP version:
import multiprocessing
import itertools

*PASTE DATA HERE*

def calculate(vals):
    sol1 = float(valD1[0]+valD2[0]+valD3[0]+valD4[0]+valD5[0]+valD6[0]+valD7[0])
    sol2 = float(valD1[1]+valD2[1]+valD3[1]+valD4[1]+valD5[1]+valD6[1]+valD7[1])
    return none

def process():
    pool = multiprocessing.Pool(processes=4)
    prod = itertools.product(([x[1],x[2]] for x in D1), ([x[1],x[2]] for x in D2), ([x[1],x[2]] for x in D3), ([x[1],x[2]] for x in D4), ([x[1],x[2]] for x in D5), ([x[1],x[2]] for x in D6), ([x[1],x[2]] for x in D7))
    result = pool.imap(calculate, prod, chunksize=2500)
    pool.close()
    pool.join()
    return result

if __name__ == "__main__":
    print(process())
And the data for both:
D1 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
D2 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
D3 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
D4 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
D5 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
D6 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
D7 = [['A',7,4],['B',3,7],['C',6,1],['D',12,6],['E',4,8],['F',8,7],['G',11,3],['AX',11,7],['AX',11,2],['AX',11,4],['AX',11,4]]
And now the theory:
Since there is very little actual work per item (just summing 7 ints), the task is
dominated by interprocess communication: pickling each combination, sending it to a
worker, and collecting the result costs more than the computation itself, so
multiprocessing is not effective here. This seems like a situation where I really need
the ability to multithread. So at this point I am looking for suggestions before I
try this on a different language because of the GIL.
********Debugging
File "calc.py", line 309, in <module>
smart_calc()
File "calc.py", line 290, in smart_calc
results = pool.map(func, chunk_list)
File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 250, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 554, in get
raise self._value
TypeError: sequence index must be integer, not 'slice'
In this case, totallen = 108 and CHUNKS is set to 2. When CHUNKS is reduced to
1, it works.
Answer: Ok, I think I've figured out how to actually get a speed boost from multiprocessing.
Since your actual source lists aren't very long, it's reasonable to pass them
in their entirety to the worker processes. So, if each worker process has
copies of the same source lists, then ideally we'd want all of them to iterate
over different pieces of the lists in parallel, and just sum up their own unique
slice. Because we know the size of the input lists, we can accurately
determine how long `itertools.product(D1, D2, ...)` will be, which means we
can also accurately determine how big each chunk should be to evenly
distribute the work. So, we can provide each worker with a specific range of
the `itertools.product` iterator that they should iterate over and sum:
import math
import itertools
import multiprocessing
import functools
def smart_calc(valD1, valD2, valD3, valD4, valD5, valD6, valD7, slices):
# Build an iterator over the entire data set
prod = itertools.product(([x[1],x[2]] for x in valD1),
([x[1],x[2]] for x in valD2),
([x[1],x[2]] for x in valD3),
([x[1],x[2]] for x in valD4),
([x[1],x[2]] for x in valD5),
([x[1],x[2]] for x in valD6),
([x[1],x[2]] for x in valD7))
# But only iterate over our unique slice
for subD1, subD2, subD3, subD4, subD5, subD6, subD7 in itertools.islice(prod, slices[0], slices[1]):
sol1=float(subD1[0]+subD2[0]+subD3[0]+subD4[0]+subD5[0]+subD6[0]+subD7[0])
sol2=float(subD1[1]+subD2[1]+subD3[1]+subD4[1]+subD5[1]+subD6[1]+subD7[1])
return None
def smart_process():
CHUNKS = multiprocessing.cpu_count() # Number of pieces to break the list into.
total_len = len(D1) ** 7 # The total length of itertools.product()
# Figure out how big each chunk should be. Got this from
# multiprocessing.map()
chunksize, extra = divmod(total_len, CHUNKS)
if extra:
chunksize += 1
# Build a list that has the low index and high index for each
# slice of the list. Each process will iterate over a unique
# slice
low = 0
high = chunksize
chunk_list = []
for _ in range(CHUNKS):
chunk_list.append((low, high))
low += chunksize
high += chunksize
pool = multiprocessing.Pool(processes=CHUNKS)
# Use partial so we can pass all the lists to each worker
# while using map (which only allows one arg to be passed)
func = functools.partial(smart_calc, D1, D2, D3, D4, D5, D6, D7)
result = pool.map(func, chunk_list)
pool.close()
pool.join()
return result
Results:
sequential: 13.9547419548
mp: 4.0270690918
Success! Now, you do have to actually combine the results after you have them,
which will add additional overhead to your real program. It might end up
making this approach slower than sequential again, but it really depends on
what you actually want to do with the data.
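For completeness, here is a minimal sketch of that combining step. It is hypothetical: it assumes `smart_calc` were changed to return a partial `(sol1, sol2)` pair per chunk instead of `None`:
partials = pool.map(func, chunk_list)  # e.g. [(s1_a, s2_a), (s1_b, s2_b), ...]
total_sol1 = sum(p[0] for p in partials)
total_sol2 = sum(p[1] for p in partials)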
|
How to get the body text of email with imaplib?
Question: I am on Python 3.4.
import imaplib
import email
user="XXXX"
password="YYYY"
con=imaplib.IMAP4_SSL('imap.gmail.com')
con.login(user,password)
con.list()
con.select("INBOX")
result,data=con.fetch(b'1', '(RFC822)')
raw=email.message_from_bytes(data[0][1])
>>> raw["From"]
'xxxx'
>>> raw["To"]
'[email protected]'
>>> raw["Subject"]
'Re:get the min date from a list'
When I run `print(raw)` I can see many lines of the body of the email,
but I can't get it with `raw[TEXT]`, `raw['TEXT']`, or `raw['BODY']`.
How can I get the body text of the email?
Answer: You're asking it for a header named `TEXT` or `BODY`, and obviously there is
no such thing. I think you're mixing up IMAP4 part names (the things you pass
in `con.fetch`) and RFC2822 header names (the things you use in an
`email.message.Message`).
As the [`email.message`](https://docs.python.org/3/library/email.message.html)
documentation explains, a `Message` consists of headers and a payload. The
payload is either a string (for non-multipart messages) or a list of
sub-`Message`s (for multipart). Either way, what you want here is
`raw.get_payload()`.
If you want to handle both, you can either first check `raw.is_multipart()`,
or you can check the type returned from `get_payload()`. Of course you have to
decide what you want to do in the case of a multipart message; what counts
as "the body" when there are three parts? Do you want the first? The first
`text/plain`? The first `text/*`? The first `text/plain` if there is one, the
first `text/*` if not, and the first of anything if even that doesn't exist?
Or all of them concatenated together?
Let's assume you just want the first one. To do that:
def get_text(msg):
if msg.is_multipart():
return get_text(msg.get_payload(0))
else:
return msg.get_payload(None, True)
If you want something different, hopefully you can figure out how to do it
yourself. (See the `get_content_type` and/or `get_content_maintype` methods on
`Message`.)
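For example, a usage sketch with the `raw` message from the question. Note that `get_payload(None, True)` decodes the Content-Transfer-Encoding and returns `bytes` on Python 3, so you may still need to decode to `str`; UTF-8 here is an assumption, in practice check the part's charset:
body = get_text(raw)
if isinstance(body, bytes):
    # assumes UTF-8; use the message's declared charset in real code
    body = body.decode('utf-8', errors='replace')
print(body)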
|
Does Behave (BDD) work with Python 3.4?
Question: I am using [Behave](http://pythonhosted.org/behave/install.html) (BDD for
Python) and have been trying to enable JUnit output without success. After
troubleshooting, I realized that I am getting the following error message
**only** when using **Python 3.4** :
/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 "/Users/myusername/Documents/Programming/Selenium Programming/GMail Project/GMailTests.py"
Traceback (most recent call last):
File "/Users/myusername/Documents/Programming/Selenium Programming/GMail Project/GMailTests.py", line 62, in <module>
config = Configuration()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/behave/configuration.py", line 481, in __init__
load_configuration(self.defaults, verbose=verbose)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/behave/configuration.py", line 394, in load_configuration
defaults.update(read_configuration(filename))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/behave/configuration.py", line 348, in read_configuration
result[dest] = cfg.get('behave', dest, use_raw_value)
TypeError: get() takes 3 positional arguments but 4 were given
When I update my project to use Python 2.7 instead, everything works fine.
Here is an important note: this is only causing trouble when I enable the
JUnit output in the `behave.ini` config file. If I take the two lines below
out of the config, everything goes fine. Unfortunately, I need to enable JUnit
output for my project:
[behave]
junit=true
junit_directory=./JunitReports
If you know of any way I could make this work with Python 3.4, I'd love to
know about it. Thanks in advance.
Answer: Looks like I answered my own question in my last comment. I just wanted to
close the thread and provide an official answer as of 8/13/2014: `behave` is
**not** fully supported on Python 3.4, and even though most of it
works fine when installed using `pip3 install behave`, the `JUnit` output
option does not function.
There is a known issue for it that has been documented
[here](https://github.com/behave/behave/issues/82).
|
Difference between / in C++ and Python
Question: **Using Python 2.7**
I was trying to solve the Reverse Polish Notation problem on LeetCodeOJ.
[RPN on LeetCodeOJ](https://oj.leetcode.com/problems/evaluate-reverse-polish-
notation/)
I wrote my straightforward solution in Python as follows:
class Solution:
# @param tokens, a list of string
# @return an integer
def evalRPN(self, tokens):
stack = []
for token in tokens:
if token in ["+" , "-" ,"*", "/"]:
op1 = stack.pop()
op2 = stack.pop()
if token == "+":
stack.append(op2+op1)
elif token == "-":
stack.append(op2-op1)
elif token == "*":
stack.append(op2*op1)
elif token == "/":
stack.append(op2/op1)
else:
stack.append(int(token))
if len(stack) == 1:
return stack.pop()
else:
return 0
This gets rejected on a test case:
Input: ["10","6","9","3","+","-11","*","/","*","17","+","5","+"]
Output: 12
Expected: 22
But if I modify the application of `'/'` operation to `stack.append(int(op2 /
(op1*1.0)))`, it succeeds.
The `/` operation is performed once on this input calculating `6/-132` which
results in `-1` using either of two ways.
Strangely, despite the fact that both evaluations result in `-1`, the program
as a whole differs in its output. As shown above, using the first way gives
`12` as the RPNEval while using the second would give `22`. What causes this?
I visited this [link](http://yucoding.blogspot.com/2013/12/leetcode-question-
evaluate-reverse.html#comment-form), but it only says that there is some
difference in the `/` operator in Python and C++. What is the difference?
Answer: If you are on Python 2, `/` does integer division (meaning, it drops the
remainder and just gives you the _rounded-down_ result) unless at least one of
the operands is of type `float` rather than `int`. You fix this by multiplying
with `1.0`, but you could also call `float(...)` on one of the operands. This
is similar to C++, however, in C++ the result is rounded _towards zero_ rather
than _down_ , meaning that you will receive different results with one
negative operand:
### C++:
1 / 2 // gives 0
(-1) / 2 // also gives 0
### Python 2:
1 / 2 # gives 0
(-1) / 2 # gives -1 (-0.5 rounded down)
### Python 3:
On Python 3, `/` always does proper floating point division, meaning that you
always get a `float` back; you can use `//` to restore the old behaviour:
1 / 2 # gives 0.5
(-1) / 2 # gives -0.5
1 // 2 # gives 0
(-1) // 2 # gives -1
### Edited to add:
Since you are on Python 2.7 (see the edited question), it indeed seems to be
the integer division thing you are stuck at. To get the new Python 3-style
behaviour in Python 2, you can also run
from __future__ import division
at the **beginning** of your program (it must be at the very start, or the
interpreter will complain).
### Yet another edit regarding `int(something)`
Beware that while integer division rounds _down_ , conversion to integer
rounds _towards zero_ , like integer division in C++.
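That distinction is exactly what bites on the question's input. A quick illustration on Python 2:
6 / -132            # gives -1: integer division rounds down
int(6 / (-132*1.0)) # gives 0: float division, then int() truncates towards zero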
|
Python requests speed up using keep-alive
Question: In the HTTP protocol you can send many requests in one socket using keep-alive
and then receive the response from server at once, so that will significantly
speed up whole process. Is there any way to do this in python requests lib? Or
are there any other ways to speed this up that well using requests lib?
Answer: Yes, there is. Use [`requests.Session`](http://docs.python-
requests.org/en/latest/user/advanced/#session-objects) and [it will do keep-
alive by default](http://docs.python-
requests.org/en/latest/user/advanced/#keep-alive).
I guess I should include a quick example:
import logging
import requests
logging.basicConfig(level=logging.DEBUG)
s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
s.get('http://httpbin.org/cookies/set/anothercookie/123456789')
r = s.get("http://httpbin.org/cookies")
print(r.text)
You will note that these log messages occur:
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies/set/sessioncookie/123456789 HTTP/1.1" 302 223
DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 55
DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies/set/anothercookie/123456789 HTTP/1.1" 302 223
DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 90
DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 90
If you wait a little while, and repeat the last `get` call
INFO:requests.packages.urllib3.connectionpool:Resetting dropped connection: httpbin.org
DEBUG:requests.packages.urllib3.connectionpool:"GET /cookies HTTP/1.1" 200 90
Note that it resets the dropped connection, i.e. reestablishing the connection
to the server to make the new request.
|
ImportError: No module named libxml2
Question: I am using Ubuntu 12.04.2 LTS. I have used libxml2 in my Python script, and
when I try to run it, it gives this error:
Traceback (most recent call last):
File "deploy.py", line 3, in <module>
import libxml2
ImportError: No module named libxml2
I tried almost all the Stack Overflow answers for this same question, but nothing
solves the issue (I installed several different packages).
Answer: You have to install the package in Ubuntu before being able to use it:
sudo apt-get install python-libxml2
|
How can I read the accelerometer in my windows tablet with python?
Question: I have an accelerometer in my tablet that I can read from within JavaScript.
How can I access this data in Python? Is there some ctypes trickery I can use
to call a Windows 8 Sensor API function?
Answer: Horrible hack - start up a webserver in `server.py`:
import bottle
from threading import Thread
on_data = lambda alpha, beta, gamma: None
@bottle.route('/')
def handler():
return bottle.static_file('index.html', '.')
@bottle.post('/data')
def handler():
on_data(**bottle.request.json)
def data_handler(f):
global on_data
on_data = f
return f
def go():
t = Thread(target=lambda: bottle.run(quiet=True))
t.start()
With this `index.html`:
<script>
window.addEventListener('deviceorientation', function(eventData) {
var d = {};
['alpha', 'beta', 'gamma'].forEach(function(prop) {
d[prop] = eventData[prop];
})
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://localhost:8080/data");
xhr.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
xhr.send(JSON.stringify(d));
}, false);
</script>
And use it as:
import server
@server.data_handler
def on_acc_data(alpha, beta, gamma):
print alpha, beta, gamma
server.go()
After having opened `localhost:8080/` in the browser.
|
multiprocessing does not work
Question: I am working on Ubuntu 12 with 8 CPUs, as reported by the System Monitor.
The testing code is:
import multiprocessing as mp
def square(x):
return x**2
if __name__ == '__main__':
pool=mp.Pool(processes=4)
pool.map(square,range(100000000))
pool.close()
# for i in range(100000000):
# square(i)
The problem is:
1) All workload seems to be scheduled to just one core, which gets close to
100% utilization, despite the fact that several processes are started.
Occasionally all workload migrates to another core but the workload is never
distributed among them.
2) without multiprocessing is faster
for i in range(100000000):
square(i)
I have read the similar questions on stackoverflow like: [Python
multiprocessing utilizes only one
core](http://stackoverflow.com/questions/6905264/python-multiprocessing-
utilizes-only-one-core)
still got no applied result.
Answer: The function you are using is way too short (i.e. it doesn't take enough time to
compute), so you spend all your time in the synchronization between processes,
which has to be done serially (so the work might as well run on a single processor).
Try this:
import multiprocessing as mp
def square(x):
for i in range(10000):
j = i**2
return x**2
if __name__ == '__main__':
# pool=mp.Pool(processes=4)
# pool.map(square,range(1000))
# pool.close()
for i in range(1000):
square(i)
You will see that suddenly the multiprocessing works well: it takes ~2.5
seconds to accomplish, while it will take 10s without it.
**Note:** If using python 2, you might want to replace all the `range` by
`xrange`
**Edit:** I replaced `time.sleep` by a CPU-intensive but useless calculation
**Addendum:** In general, for multi-CPU applications, you should try to make
each CPU do as much work as possible per task it receives. In a case like yours,
this means splitting the range into almost-equal-sized lists, one per CPU, and
sending one list to each worker, as sketched below.
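A minimal sketch of that idea, applied to the example above (the chunking arithmetic is mine, not from the original post):
import multiprocessing as mp

def square_chunk(chunk):
    # Each task processes a whole sub-range, so the work done per
    # inter-process round-trip is substantial.
    return [x**2 for x in chunk]

if __name__ == '__main__':
    n = 1000000
    cpus = mp.cpu_count()
    step = -(-n // cpus)  # ceiling division: n/cpus rounded up
    chunks = [range(i, min(i + step, n)) for i in range(0, n, step)]
    pool = mp.Pool(processes=cpus)
    results = pool.map(square_chunk, chunks)  # one list of squares per chunk
    pool.close()
    pool.join()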
|
cxfreeze command not found in windows
Question: I installed [cx_Freeze](http://cx-freeze.sourceforge.net/) via the _msi
installer_ on my Windows 7 PC. It told me the installation was successful, and
running `pip install cx_Freeze` doesn't do anything (it is already installed).
Anyway when I try to run the command `cxfreeze --version` in the windows
command line it tells me, that the program can not be found. I'm not even
sure, if this command has to be run in the command line, or in some python
shell.
Despite successful installation, there is no executable `cxfreeze.xyz` file in
my file system. But in the python installation folder there is a file
`Scripts\cxfreeze`. This file has no extension and can't be executed in the
command line. It's not a binary file, but contains the following text instead:
#!C:\Python\32-bit\3.4\python.exe
from cx_Freeze import main
main()
How can I make cxfreeze run, like stated in their documentation?
Answer: After some more research I found, that it's a known bug of cx_Freeze:
<https://bitbucket.org/anthony_tuininga/cx_freeze/issue/90/cxfreeze-in-
windows-is-not-executable>
* * *
_In the link there is also a workaround, which I quote here:_
I create a `cxfreeze.cmd` in `venv\Scripts\` with the following contents:
:: cxfreeze.cmd
:: make sure cxfreeze from the official installation is in the same folder
:: python is in my path
python "%~dp0\cxfreeze" %*
And cmd.exe recognizes `cxfreeze.cmd`, so that I can run `cxfreeze --version`
now. Maybe the developers could consider adding my file into the official
installation process.
|
How to see a Google+ user's circles with google-api-python-client
Question: I'm trying to access a user's circles in this way:
from apiclient.discovery import build
service = build('plus','v1',developerKey=my_developer_key) # <-- NOT the user's token
people_request = service.people().list(userId=my_gplus_id, collection='connected')
all_people = people_request.execute()
The user approved the following scope:
'https://www.googleapis.com/auth/plus.login',
'https://www.googleapis.com/auth/userinfo.email',
'https://www.googleapis.com/auth/userinfo.profile'
But I'm getting this error:
<HttpError 403 when requesting https://www.googleapis.com/plus/v1/people/107512995392892664693/people/connected?alt=json&key=... returned "Forbidden">
Any ideas? Thanks~!
**EDIT**: I tried the same with Google's JS API. It doesn't use the "key" URL
param but instead uses a "Bearer" header with the user's access token; perhaps
I'm using the wrong token?
Answer: The only supported `userId` is `me` for the currently authenticated user.
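In terms of the code from the question, that means the sketch below. Note that, as you suspected in your edit, the request must also be authorized with the user's OAuth access token, not only a developer key:
people_request = service.people().list(userId='me', collection='connected')
all_people = people_request.execute()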
|
swig: extending a class template to provide __str__
Question: Say you have a template class `Foo`, and you want to wrap it with Swig
transparently so that you can print the class:
>>> from example import *
>>> f = Foo2()
>>> print(f)
In Foo class!
I have followed [this post](http://stackoverflow.com/questions/2548779/how-to-
stringfy-a-swig-matrix-object-in-python) and [this
one](http://www.swig.org/Doc1.3/SWIGPlus.html#SWIGPlus_class_extension). So my
header file is:
#include <iostream>
template <int d> class Foo {
public:
friend std::ostream &operator<<(std::ostream &os, const Foo &m) {
os << "Inside Foo class!" << std::endl;
return os;
}
};
And my interface file:
%{
#include <sstream>
#include <iostream>
#include "foo.hpp"
%}
%include "std_iostream.i"
// Try grabbing it unmodified
%include "foo.hpp"
/* Instantiate a few different versions of the template */
%template(Foo2) Foo<2>;
%template(Foo3) Foo<3>;
%extend Foo<2> {
const char *__str__() {
std::ostringstream oss(std::ostringstream::out);
oss << *self;
return oss.str().c_str();
}
};
So this works just fine, I can print the object as before, but I would like to
generalize it for any value of the template parameter, as it doesn't make
sense to copy that code for every template parameter. I tried the following in
the interface file but it didn't work:
template <int d> class Foo {
public:
%extend {
const char *__str__() {
std::ostringstream oss(std::ostringstream::out);
oss << *self;
return oss.str().c_str();
}
}
};
Answer: You should be able to `%extend` the primary template, from outside its
definition, by omitting the template parameter list:
%extend Foo {
const char *__str__() {
std::ostringstream oss(std::ostringstream::out);
oss << *self;
return oss.str().c_str();
}
};
%template(Foo2) Foo<2>;
%template(Foo3) Foo<3>;
* * *
Or you could use a SWIG macro to wrap and extend each specialization in one
go:
%define WRAP_FOO(N)
%template( Foo ## N ) Foo<N>;
%extend Foo<N> {
const char *__str__() {
std::ostringstream oss(std::ostringstream::out);
oss << *self;
return oss.str().c_str();
}
};
%enddef
/* Instantiate a few different versions of the template */
WRAP_FOO(2)
WRAP_FOO(3)
**Note that in either case, you are causing undefined behaviour by returning
the result of`.c_str()` of a `std::string` which is destroyed before the
function returns.**
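One way to sidestep that undefined behaviour (my suggestion, not part of the original answer) is to return a `std::string` by value and let SWIG's `std_string.i` convert it to a Python string:
%include "std_string.i"

%extend Foo {
  std::string __str__() {
    std::ostringstream oss(std::ostringstream::out);
    oss << *self;
    return oss.str();  // returned by value, so nothing dangles
  }
};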
|
python search in a string with find
Question: I'm trying to find a string in the headers of the response after logging in
to a WordPress script, so I tried the `find` method:
....
....
req = urllib2.Request(url, urllib.urlencode(dict(data)), dict(headers))
response = urllib2.urlopen(req)
res = dict(response.headers)
res1 = 'wp-admin'
print res.find(res1);
and i get this error :
Traceback (most recent call last):
File "C:\Python27\wp2\wp12.py", line 29, in <module>
print res.find(res1);
AttributeError: 'dict' object has no attribute 'find'
So, any idea how to search in `dict(response.headers)`, or how to transform it
into text so I can use the `find` function correctly? Thanks a lot for any help :)
Answer: You cannot use `find` on a dict, but you can search its keys and values. The
following code finds 'wp-admin' in the values and returns the matching key/value
pairs as a dict. Add a check on `k` as well if you also want to search the keys.
dict([(k,v) for k,v in res.iteritems() if v.find('wp-admin') >=0])
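Alternatively, if you only need a yes/no answer, you can flatten the headers into one text blob and search that (a small sketch):
headers_text = '\n'.join('%s: %s' % (k, v) for k, v in res.iteritems())
print 'wp-admin' in headers_text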
|
Python string to date, date to string
Question: I have a list of blog posts with two columns. The date they were created and
the unique ID of the person creating them.
I want to return the date of the most recent blog post for each unique ID.
Simple, but all of the date values are stored as strings, and the strings don't
have a leading 0 if the month is less than 10.
I've been struggling with strftime and strptime but can't get them to return
the right result.
import csv
Posters = {}
with open('datetouched.csv','rU') as f:
reader = csv.reader(f)
for i in reader:
UID = i[0]
Date = i[1]
if UID in Posters:
Posters[UID].append(Date)
else:
Posters[UID] = [Date]
for i in Posters:
print i, max(Posters[i]), Posters[i]
This returns the following output
0014000000s5NoEAAU 7/1/10 ['1/6/14', '7/1/10', '1/18/14', '1/24/14', '7/1/10', '2/5/14']
0014000000s5XtPAAU 2/3/14 ['1/4/14', '1/10/14', '1/16/14', '1/22/14', '1/28/14', '2/3/14']
0014000000vHZp7AAG 2/1/14 ['1/2/14', '1/8/14', '1/14/14', '1/20/14', '1/26/14', '2/1/14']
0014000000wnPK6AAM 2/2/14 ['1/3/14', '1/9/14', '1/15/14', '1/21/14', '1/27/14', '2/2/14']
0014000000d5YWeAAM 2/4/14 ['1/5/14', '1/11/14', '1/17/14', '1/23/14', '1/29/14', '2/4/14']
0014000000s5VGWAA2 7/1/10 ['7/1/10', '1/7/14', '1/13/14', '1/19/14', '7/1/10', '1/31/14']
It's returning 7/1/10 because, compared as strings, '7' is larger than '1' or '2'.
I need the max value of the list returned as the exact same string value.
Answer: Parse the dates with `datetime.datetime.strptime()`, either when loading the
CSV or as a `key` function to `max()`.
While loading:
from datetime import datetime
Date = datetime.strptime(i[1], '%m/%d/%y')
or when using `max()`:
print i, max(Posters[i], key=lambda d: datetime.strptime(d, '%m/%d/%y')), Posters[i]
Demo of the latter:
>>> from datetime import datetime
>>> dates = ['1/6/14', '7/1/10', '1/18/14', '1/24/14', '7/1/10', '2/5/14']
>>> max(dates, key=lambda d: datetime.strptime(d, '%m/%d/%y'))
'2/5/14'
Your code can be optimised a little:
import csv
from datetime import datetime

posters = {}

with open('datetouched.csv', 'rb') as f:
    reader = csv.reader(f)
    for row in reader:
        uid, date = row[:2]
        posters.setdefault(uid, []).append(datetime.strptime(date, '%m/%d/%y'))

for uid, dates in posters.iteritems():
    print uid, max(dates), dates
The [`dict.setdefault()`
method](https://docs.python.org/2/library/stdtypes.html#dict.setdefault) sets
a default value (an empty list here) whenever the key is not present yet.
|
Django aggregate Count only True values
Question: I'm using aggregate to get the count of a column of booleans. I want the
number of True values.
DJANGO CODE:
count = Model.objects.filter(id=pk).aggregate(bool_col=Count('my_bool_col')
This returns the count of all rows.
SQL QUERY SHOULD BE:
SELECT count(CASE WHEN my_bool_col THEN 1 ELSE null END) FROM <table name>
Here is my actual code:
stats = Team.objects.filter(id=team.id).aggregate(goals=Sum('statistics__goals'),
assists=Sum('statistics__assists'),
min_penalty=Sum('statistics__minutes_of_penalty'),
balance=Sum('statistics__balance'),
gwg=Count('statistics__gwg'),
gk_goals_avg=Sum('statistics__gk_goals_avg'),
gk_shutout=Count('statistics__gk_shutout'),
points=Sum('statistics__points'))
Thanks to **Peter DeGlopper** suggestion to use [django-aggregate-
if](https://pypi.python.org/pypi/django-aggregate-if/)
Here is the solution:
from django.db.models import Sum
from django.db.models import Q
from aggregate_if import Count
stats = Team.objects.filter(id=team.id).aggregate(goals=Sum('statistics__goals'),
assists=Sum('statistics__assists'),
balance=Sum('statistics__balance'),
min_penalty=Sum('statistics__minutes_of_penalty'),
gwg=Count('statistics__gwg', only=Q(statistics__gwg=True)),
gk_goals_avg=Sum('statistics__gk_goals_avg'),
gk_shutout=Count('statistics__gk_shutout', only=Q(statistics__gk_shutout=True)),
points=Sum('statistics__points'))
Answer: It seems what you want to do is some kind of **"Conditional aggregation"**.
Right now `Aggregation` functions do not support lookups like `filter` or
`exclude`: fieldname__lt, fieldname__gt, ...
So you can try this:
[django-aggregate-if](https://pypi.python.org/pypi/django-aggregate-if)
Description taken from the official page.
> Conditional aggregates for Django queries, just like the famous SumIf and
> CountIf in Excel.
You can also first
[annotate](https://docs.djangoproject.com/en/1.6/ref/models/querysets/#annotate)
the desired value for each team, i.e. count, for each team, the amount of
`True` values in the field you are interested in, and then do all the
[aggregation](https://docs.djangoproject.com/en/1.6/ref/models/querysets/#aggregate)
you want to do, as sketched below.
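A rough sketch of that annotate-then-aggregate idea, reusing aggregate-if's `Count` from the snippet above (field and model names as in the question; treat this as untested):
from django.db.models import Q
from aggregate_if import Count

teams = Team.objects.annotate(
    gwg_count=Count('statistics__gwg', only=Q(statistics__gwg=True)))
for t in teams:
    print t.id, t.gwg_count  # per-team count of True values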
|
Python 3 basic auth with pinnaclesports API
Question: I am trying to grab betting lines with Python from pinnaclesports using their
API <http://www.pinnaclesports.com/api-xml/manual>
which requires basic authentication (<http://www.pinnaclesports.com/api-
xml/manual#authentication>):
> Authentication
>
> API use HTTP Basic access authentication . Always use HTTPS to access the
> API. You need to send HTTP Request header like this:
>
> Authorization: Basic <Base64-encoded "username:password">
>
> For example:
> Authorization: Basic U03MyOT23YbzMDc6d3c3O1DQ1
>
import urllib.request, urllib.parse, urllib.error
import socket
import base64
url = 'https://api.pinnaclesports.com/v1//feed?sportid=12&leagueid=6164'
username = "abc"
password = "xyz"
base64 = "Basic: " + base64.b64encode('{}:{}'.format(username,password).encode('utf-8')).decode('ascii')
print (base64)
details = urllib.parse.urlencode({ 'Authorization' : base64 })
details = details.encode('UTF-8')
url = urllib.request.Request(url, details)
url.add_header("User-Agent","Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.29 Safari/525.13")
responseData = urllib.request.urlopen(url).read().decode('utf8', 'ignore')
print (responseData)
Unfortunately I get an HTTP 500 error, which from my point of view means either my
authentication isn't working properly or their API is not working.
Thanks in advance
Answer: As it happens, I don't use the same Python version you do, so this has not
been tested with your code, but there is an extraneous colon after "Basic" in
your base64 string. In my own code, adding this colon after "Basic" indeed
yields an HTTP 500 error.
**Edit: Code example using Python 2.7 and urllib2:**
import urllib2
import base64
def get_leagues():
url = 'https://api.pinnaclesports.com/v1/leagues?sportid=33'
username = "myusername"
password = "mypassword"
b64str = "Basic " + base64.b64encode('{}:{}'.format(username,password).encode('utf-8')).decode('ascii')
headers = {'Content-length' : '0',
'Content-type' : 'application/xml',
'Authorization' : b64str}
req = urllib2.Request(url, headers=headers)
responseData = urllib2.urlopen(req).read()
ofn = 'api_leagues.txt'
with open(ofn, 'w') as ofile:
ofile.write(responseData)
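Since the question uses Python 3, here is the same approach translated to `urllib.request` (an untested sketch; note the space, not a colon, after "Basic"):
import urllib.request
import base64

def get_leagues():
    url = 'https://api.pinnaclesports.com/v1/leagues?sportid=33'
    username = "myusername"
    password = "mypassword"
    b64str = "Basic " + base64.b64encode(
        '{}:{}'.format(username, password).encode('utf-8')).decode('ascii')
    headers = {'Content-type': 'application/xml',
               'Authorization': b64str}
    req = urllib.request.Request(url, headers=headers)
    return urllib.request.urlopen(req).read().decode('utf-8')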
|
Openshift doesn't perform syncdb on push
Question: I have the following error when performing my pushes and app-restart:
remote: Executing 'python /var/lib/openshift/6783687678687678/app-root/runtime/repo//wsgi/openshift/manage.py syncdb --noinput'
remote: python: can't open file '/var/lib/openshift/6783687678687678/app-root/runtime/repo/wsgi/openshift/manage.py': [Errno 2] No such file or directory
However this is not the path of my app. I cannot find out where the settings
are so that I can change them to the actual path. I've tried: `setup.py,
settings, application` and none of these seem to relate to the above path. The
path should be:
/var/lib/openshift/6783687678687678/app-root/runtime/repo/wsgi/mycoolapp/manage.py'
If I change the path in the action_hooks i.e. deploy, I get the following:
Executing 'python /var/lib/openshift/6783687678687678/app-root/runtime/repo//wsgi/app/manage.py collectstatic --noinput'
remote: Traceback (most recent call last):
remote: File "/var/lib/app/6783687678687678/app-root/runtime/repo/wsgi/app/manage.py", line 2, in <module>
remote: from django.core.management import execute_manager
remote: ImportError: cannot import name execute_manager
Answer: From the [cartridge
docs](http://openshift.github.io/documentation/oo_cartridge_guide.html):
> For backward compatibility, the wsgi/application path is selected as default
> WSGI entry-point > with higher priority. You can customize the path using
> the OPENSHIFT_PYTHON_WSGI_APPLICATION > environment variable.
But it looks like your application is in the right path, so I would check your
action_hooks in the repo under `~/.openshift/action_hooks/` to see if there
are some custom actions defined.
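If it turns out the WSGI entry-point selection is the culprit, you can point OpenShift at your app explicitly through the environment variable mentioned above, e.g. with the rhc client tools (the path and app name here are placeholders):
rhc env set OPENSHIFT_PYTHON_WSGI_APPLICATION=wsgi/mycoolapp/application -a yourapp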
|
Python: Must non-built-in exceptions be imported in order to catch them?
Question: I'm trying to catch some exceptions thrown by the `requests` library, with the
following try-except block:
try:
get = requests.get((requester.batchesUrl)+str(id)+'/', auth=requester.auth)
except (ConnectionRefusedError, ConnectionError, MaxRetryError) as e:
print("CAUGHT ECONNECTION ERROR")
raise type(e)(str(e) + "Additional Info: Method couldn't connect to website, check that your server is running"
).with_traceback(sys.exc_info()[2])
But instead of catching the exceptions and adding `"Additional Info:"` to the
args, I get `NameError: global name 'MaxRetryError' is not defined.` Now I
know MaxRetryError is an exception at `urllib3.exceptions.MaxRetryError`. Must
I import these non-built-in exceptions in order to catch them? This seems
verbose to me considering the number of possible exceptions one is likely to
need to watch out for.
**Running:** Python 3.3, Windows 7.
Answer: You must **always always always** have a name in an accessible scope before
you can refer to it without throwing an exception. If that means importing,
then so be it.
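For the snippet in the question, something like the following should work. The module paths are my best guess for requests 2.x, which vendors urllib3; `ConnectionRefusedError` is a Python 3.3+ builtin, so it needs no import:
import requests
from requests.exceptions import ConnectionError
from requests.packages.urllib3.exceptions import MaxRetryError

try:
    get = requests.get(requester.batchesUrl + str(id) + '/', auth=requester.auth)
except (ConnectionRefusedError, ConnectionError, MaxRetryError) as e:
    print("CAUGHT ECONNECTION ERROR")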
|
python requests session failed to read the response after reading a big (more than 50mb) response content
Question: When using python requests to access some REST API, I am using requests'
Session object. I faced an issue: when the first request reads large content
(more than 50mb), the subsequent HTTP request on the same session object hangs.
But if I don't use the Session object then everything works fine...
I have explained the code below...
import requests # version 2.3.0 # python version 2.7
headers = {"Authorization":"Bearer sometoken"}
sess = requests.Session()
sess.verify = False
host = "https://somehost/endpoint/"
res = sess.get(url = host+'obj1/28/content', headers = headers)
print res # this result received successfully with 200 response status code
url = host + 'obj2/1/content'
res = sess.get(url = url, headers=headers) # the process hangs here, running forever; I need to kill the process to exit.
print "content ", res.content # this line never gets executed...
After killing the process, here is the stack trace:
File "/opt/lib/python2.7/site-packages/requests/sessions.py", line 556, in send
r = adapter.send(request, **kwargs)
File "/opt/lib/python2.7/site-packages/requests/adapters.py", line 391, in send
r.content
File "/opt/lib/python2.7/site-packages/requests/models.py", line 690, in content
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
File "/opt/lib/python2.7/site-packages/requests/models.py", line 628, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/opt/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 240, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/opt/lib/python2.7/site-packages/requests/packages/urllib3/response.py", line 187, in read
data = self._fp.read(amt)
File "/opt/lib/python2.7/httplib.py", line 567, in read
s = self.fp.read(amt)
File "/opt/lib/python2.7/httplib.py", line 1313, in read
return s + self._file.read(amt - len(s))
File "/opt/lib/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
File "/opt/lib/python2.7/ssl.py", line 242, in recv
return self.read(buflen)
File "/opt/lib/python2.7/ssl.py", line 161, in read
return self._sslobj.read(len)
But the same HTTP requests without a Session object work fine.
print requests.get( host+'obj1/28/content', headers = headers, verify = False)
print requests.get( host+'obj2/1/content', headers = headers, verify = False)
Answer: From the `requests` docs:
> Excellent news — thanks to urllib3, keep-alive is 100% automatic within a
> session! Any requests that you make within a session will automatically
> reuse the appropriate connection!
>
> Note that **connections are only released back to the pool for reuse once
> all body data has been read** ; be sure to either set stream to False or
> read the content property of the Response object.
Sounds like the large request is holding up that connection, or, as abarnert
suggests, there's an issue with the server. Try setting `stream=False`, or
access the content of that first `res` object so that `requests` knows that it
can free up that connection.
EDIT: This looks like the issue. When you call `requests.get`, you set
`verify=False` explicitly, so certificate verification is skipped for those calls.
However, your lockup is in `adapter.send(request, **kwargs)`. So it looks like
an `HTTPAdapter` object is at fault. `adapter.send` has the following
signature:
send(request, stream=False, timeout=None, verify=True, cert=None, proxies=None)
with `verify=True` as the default.
This sounds like a bug in `requests`, but my guess is that the `verify`
parameter isn't getting passed down from the `Session`. The signature for
`sess.request` is:
request(method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None)
where `verify=None` rather than `False`, so maybe that means that it's getting
overriden somewhere.
Try explicitly setting `verify=False` in `sess.get`.
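That is, a minimal change to the code from the question would be:
res = sess.get(url=host + 'obj1/28/content', headers=headers, verify=False)
print res.content  # read the body fully so the connection is returned to the pool
res = sess.get(url=host + 'obj2/1/content', headers=headers, verify=False)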
|
Getting TemplateDoesNotExist Error in Django
Question:
TemplateDoesNotExist at /
index.html
Request Method: GET
Request URL:
Django Version: 1.6.5
Exception Type: TemplateDoesNotExist
Exception Value:
index.html
Exception Location: C:\Python27\lib\site-packages\django\template\loader.py in find_template, line 131
Python Executable: C:\Python27\python.exe
Python Version: 2.7.6
Template-loader postmortem
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
C:\Users\Peter Na\documents\github\tutorial2\static\templates\index.html (File does not exist)
Using loader django.template.loaders.app_directories.Loader:
C:\Python27\lib\site-packages\django\contrib\admin\templates\index.html (File does not exist)
C:\Python27\lib\site-packages\django\contrib\auth\templates\index.html (File does not exist)
In the root folder I do have a static folder > templates > index.html. It seems
like Django cannot pick up the HTML file?
Answer: You need to add your template directory to `TEMPLATE_DIRS` in the settings file:
import os
PROJECT_PATH = os.path.realpath(os.path.dirname(__file__))
...
MEDIA_ROOT = PROJECT_PATH + '/media/'
TEMPLATE_DIRS = (
    PROJECT_PATH + '/templates/',  # trailing comma makes this a tuple
)
|
scikit-learn's GridSearchCV stops working when n_jobs>1
Question: I have previously asked a question
[here](http://stackoverflow.com/questions/25249212/scikit-grid-search-for-knn-regression-valueerror-array-contains-nan-or-infinity)
and came up with the following lines of code:
parameters = [{'weights': ['uniform'], 'n_neighbors': [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]}]
clf = GridSearchCV(neighbors.KNeighborsRegressor(), parameters, n_jobs=4)
clf.fit(features, rewards)
But when I've run this there has appeared another problem that was not related
to the previously asked question. Python ends up with following OS error
message:
Process: Python [1327]
Path: /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Identifier: Python
Version: 2.7.2.5 (2.7.2.5.r64662-trunk)
Code Type: X86-64 (Native)
Parent Process: Python [1316]
Responsible: Sublime Text 2 [308]
User ID: 501
Date/Time: 2014-08-12 10:27:24.640 +0200
OS Version: Mac OS X 10.9.4 (13E28)
Report Version: 11
Anonymous UUID: D10CD8B7-221F-B121-98D4-4574A1F2189F
Sleep/Wake UUID: 0B9C4AE0-26E6-4DE8-B751-665791968115
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000110
VM Regions Near 0x110:
-->
__TEXT 0000000100000000-0000000100001000 [ 4K] r-x/rwx SM=COW /Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
Application Specific Information:
*** multi-threaded process forked ***
crashed on child side of fork pre-exec
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libdispatch.dylib 0x00007fff91534c90 dispatch_group_async_f + 141
1 libBLAS.dylib 0x00007fff9413f791 APL_sgemm + 1061
2 libBLAS.dylib 0x00007fff9413cb3f cblas_sgemm + 1267
3 _dotblas.so 0x0000000102b0236e dotblas_matrixproduct + 5934
4 org.activestate.ActivePython27 0x00000001000c552d PyEval_EvalFrameEx + 23949
5 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
6 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
7 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
8 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
9 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
10 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586
11 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
12 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
13 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
14 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
15 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127
16 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127
17 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
18 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
19 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
20 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586
21 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
22 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
23 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
24 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365
25 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
26 org.activestate.ActivePython27 0x0000000100077dfa slot_tp_call + 74
27 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
28 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586
29 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
30 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
31 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
32 org.activestate.ActivePython27 0x00000001000c098a PyEval_EvalFrameEx + 4586
33 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127
34 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127
35 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
36 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
37 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
38 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365
39 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
40 org.activestate.ActivePython27 0x0000000100077a28 slot_tp_init + 88
41 org.activestate.ActivePython27 0x0000000100074e25 type_call + 245
42 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
43 org.activestate.ActivePython27 0x00000001000c267d PyEval_EvalFrameEx + 11997
44 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127
45 org.activestate.ActivePython27 0x00000001000c7137 PyEval_EvalFrameEx + 31127
46 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
47 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
48 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
49 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365
50 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
51 org.activestate.ActivePython27 0x0000000100077a28 slot_tp_init + 88
52 org.activestate.ActivePython27 0x0000000100074e25 type_call + 245
53 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
54 org.activestate.ActivePython27 0x00000001000c267d PyEval_EvalFrameEx + 11997
55 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
56 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
57 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
58 org.activestate.ActivePython27 0x000000010003d390 function_call + 176
59 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
60 org.activestate.ActivePython27 0x000000010001d36d instancemethod_call + 365
61 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
62 org.activestate.ActivePython27 0x0000000100077dfa slot_tp_call + 74
63 org.activestate.ActivePython27 0x000000010000be12 PyObject_Call + 98
64 org.activestate.ActivePython27 0x00000001000c267d PyEval_EvalFrameEx + 11997
65 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
66 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
67 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
68 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
69 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
70 org.activestate.ActivePython27 0x00000001000c5d10 PyEval_EvalFrameEx + 25968
71 org.activestate.ActivePython27 0x00000001000c7ad6 PyEval_EvalCodeEx + 2118
72 org.activestate.ActivePython27 0x00000001000c7bf6 PyEval_EvalCode + 54
73 org.activestate.ActivePython27 0x00000001000ed31e PyRun_FileExFlags + 174
74 org.activestate.ActivePython27 0x00000001000ed5d9 PyRun_SimpleFileExFlags + 489
75 org.activestate.ActivePython27 0x00000001001041dc Py_Main + 2940
76 org.activestate.ActivePython27.app 0x0000000100000ed4 0x100000000 + 3796
Thread 0 crashed with X86 Thread State (64-bit):
rax: 0x0000000000000100 rbx: 0x00007fff7cd43640 rcx: 0x0000000000000000 rdx: 0x0000000105e00000
rdi: 0x0000000000000008 rsi: 0x0000000105e01000 rbp: 0x00007fff5fbfa370 rsp: 0x00007fff5fbfa350
r8: 0x0000000000000001 r9: 0x0000000105e00000 r10: 0x0000000105e01000 r11: 0x0000000000000000
r12: 0x000000010ba10530 r13: 0x000000010b000000 r14: 0x00000001066d1970 r15: 0x00007fff915311af
rip: 0x00007fff91534c90 rfl: 0x0000000000010206 cr2: 0x0000000000000110
Logical CPU: 2
Error Code: 0x00000006
Trap Number: 14
.........
VM Region Summary:
ReadOnly portion of Libraries: Total=183.7M resident=97.0M(53%) swapped_out_or_unallocated=86.7M(47%)
Writable regions: Total=1.3G written=142.8M(11%) resident=503.6M(39%) swapped_out=0K(0%) unallocated=791.7M(61%)
When I replaced the second line in my code with:
clf = GridSearchCV(neighbors.KNeighborsRegressor(), parameters, n_jobs=1)
Then everything works fine except I don't use multiple threads.
My operating system is OSX 10.9.4
My python version is 2.7.8 |Anaconda 2.0.1 (x86_64)| (default, Jul 2 2014,
15:36:00) [GCC 4.2.1 (Apple Inc. build 5577)]
My scikit-lern version is 0.14.1
My numpy version is 1.8.1
And my scipy version is 0.14.0
My question is: does anybody have an idea how to make GridSearchCV run with more
than one job?
**EDIT:**
I have realized that this error actually happens only for some of my input
data sets. Unfortunately the problematic datasets (their X matrices) are too big,
so it is not possible to copy them in here. The input feature data is basically tf-idf
vectors and y vectors are floats > 0, particularly:
[60.0, 7.0, 12.0, 21.0, 5.5, 3.0, 0.0, 2.5, 11.0, 3.0, 16.0, 2.0, 0.0, 4.5, 2.5, 6.0, 9.5, 2.5, 15.0, 7.0, 8.0, 13.0, 14.0, 8.0, 3.5, 6.0, 22.5, 7.0, 4.0, 3.5, 4.5, 6.0, 5.5, 7.0, 2.0, 0.0, 0.0, 0.0, 14.5, 8.0, 7.5, 2.5, 11.5, 1.0, 3.0, 14.5, 10.0, 14.5, 8.0, 8.0, 7.0, 2.5, 3.5, 3.0, 13.5, 7.0, 6.5, 2.5, 9.0, 8.0, 11.0, 17.5, 12.5, 4.5, 5.5, 8.0, 2.0, 7.0, 4.0, 1.5, 3.0, 21.5, 4.5, 4.0, 7.0, 9.0, 13.5, 8.0, 10.5, 4.5, 1.5, 11.5, 7.5, 11.5, 4.5, 5.0, 7.0, 9.5, 4.0, 4.0, 6.0, 3.5, 4.5, 7.5, 3.5, 3.5, 3.5, 6.0, 5.0, 5.5, 25.0, 6.5, 5.0, 2.0, 2.0, 10.5, 0.0, 6.5, 19.0, 9.0, 1.0, 1.5, 1.0, 0.0, 1.0, 4.5, 2.5, 17.5, 39.5, 7.5, 5.5, 8.0, 1.0, 6.0, 12.0, 10.0, 5.5, 19.0, 4.5, 1.5, 25.5, 4.0, 10.0, 18.5, 9.5, 10.5, 2.5, 6.0, 1.0, 10.0, 8.5, 12.5, 13.5, 5.0, 6.5, 11.0, 4.5, 8.0, 7.5, 11.5, 14.5, 9.0, 3.0, 1.5, 3.5, 5.5, 2.5, 12.5, 6.5, 5.5, 5.0, 0.0, 8.0, 3.0, 14.5, 5.0, 14.0, 7.0, 13.5, 12.5, 4.0, 1.5, 6.5, 10.5, 9.0, 16.5, 4.0, 4.0, 15.0, 11.5, 2.5, 8.5, 3.0, 5.0, 4.0, 8.5, 6.0, 5.0, 5.0, 5.0, 5.5, 8.0, 11.0, 4.0, 0.0, 5.5, 0.0, 4.5, 1.5, 0.0, 6.5, 11.0, 2.5, 8.0, 15.5, 5.5, 4.5, 5.0, 4.0, 5.5, 10.5, 7.5, 6.5, 8.5, 2.5, 1.5, 1.5, 18.0, 15.0, 14.0, 9.5, 5.5, 7.5, 14.5, 2.5, 5.0, 60.0, 6.5, 14.5, 6.5, 4.0, 1.5, 2.0, 4.0, 27.0, 3.0, 5.0, 4.0, 2.5, 1.0, 1.5, 1.5, 9.0, 4.0, 8.5, 4.0, 4.0, 0.0, 1.5, 7.5, 1.5, 7.5, 1.0, 28.5, 15.5, 7.5, 1.0, 2.5, 2.5, 2.5, 16.0, 5.5, 8.5, 4.0, 2.5, 5.0, 2.5, 6.0, 11.0, 10.0, 4.5, 6.5, 8.0, 6.0, 4.5, 15.5, 4.0, 5.0]
The version with 1 job works for all of my input data sets, even for this one.
Answer: `libdispatch.dylib` from Grand Central Dispatch is used internally by OSX's
builtin BLAS implementation, Accelerate, when you make `numpy.dot` calls.
The GCD runtime does not work when a program calls the POSIX `fork`
syscall without an `exec` syscall afterwards, which makes any
Python program that uses the `multiprocessing` module prone to crash.
sklearn's `GridSearchCV` uses the Python `multiprocessing` module for
parallelization.
Under Python 3.4 and later you can force Python multiprocessing to use the
[forkserver start
method](https://docs.python.org/dev/library/multiprocessing.html#contexts-and-
start-methods) instead of the default `fork` mode to workaround this problem,
for instance at the beginning of the main file of your program:
if __name__ == "__main__":
import multiprocessing as mp; mp.set_start_method('forkserver')
Alternatively, you can rebuild numpy from source and make it link against
ATLAS or OpenBLAS instead of OSX Accelerate. The numpy developers are working
on binary distributions that include either ATLAS or OpenBLAS by default.
|
Django Tastypie prepend_urls error
Question: Django-tastypie error. I am trying to use prepend_urls so that I can list friends
for a user, but I get the error **"NameError at /api/v1/friends/user/1/: global
name 'url' is not defined"**. Here is the code for the Friends Resource.
class FriendsResource(ModelResource):
from_user=fields.ForeignKey(UserResource,'from_user')
to_user=fields.ForeignKey(UserResource,'to_user')
class Meta:
queryset=Friends.objects.all()
serializer=Serializer(formats=['json'])
resource_name='friends'
filtering={
'from_user':ALL_WITH_RELATIONS,
'to_user':ALL_WITH_RELATIONS
}
And here is the code for prepend_urls and the method passed to wrap_view:
def get_users(self,request):
self.method_check(request,['get'])
friends = []
for friend in Friends.objects.filter(Q(from_user=request.user)|Q(to_user=request.user)):
friends.append(friend)
def prepend_urls(self):
return [
url(r"^(?P<resource_name>%s)/(?P<pk>\w[\w/-]*)/user%s$" %(self._meta.resource_name,trailing_slash()),
self.wrap_view('get_users'),name= 'api_get_friends_for_user')
]
Here is the Traceback:
Environment:
Request Method: GET
Request URL: http://localhost:8000/api/v1/friends/user/1/
Django Version: 1.6.2
Python Version: 2.7.3
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'tastypie',
'userprof',
'relations',
'event',
'liking',
'feed')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/root/python/django-zack/local/lib/python2.7/site-packages/Django-1.6.2-py2.7.egg/django/core/handlers/base.py" in get_response
101. resolver_match = resolver.resolve(request.path_info)
File "/root/python/django-zack/local/lib/python2.7/site-packages/Django-1.6.2-py2.7.egg/django/core/urlresolvers.py" in resolve
318. for pattern in self.url_patterns:
File "/root/python/django-zack/local/lib/python2.7/site-packages/Django-1.6.2-py2.7.egg/django/core/urlresolvers.py" in url_patterns
346. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/root/python/django-zack/local/lib/python2.7/site-packages/Django-1.6.2-py2.7.egg/django/core/urlresolvers.py" in urlconf_module
341. self._urlconf_module = import_module(self.urlconf_name)
File "/root/python/django-zack/local/lib/python2.7/site-packages/Django-1.6.2-py2.7.egg/django/utils/importlib.py" in import_module
40. __import__(name)
File "/root/python/django-zack/wyat/wyat/urls.py" in <module>
25. url(r'^api/',include(v1_api.urls)),
File "/root/python/django-zack/local/lib/python2.7/site-packages/tastypie/api.py" in urls
107. pattern_list.append((r"^(?P<api_name>%s)/" % self.api_name, include(self._registry[name].urls)))
File "/root/python/django-zack/local/lib/python2.7/site-packages/tastypie/resources.py" in urls
324. urls = self.prepend_urls()
File "/root/python/django-zack/wyat/event/api.py" in prepend_urls
68. url(r"^(?P<resource_name>%s)/(?P<pk>\w[\w/-]*)/user%s$" %(self._meta.resource_name,trailing_slash()),
Exception Type: NameError at /api/v1/friends/user/1/
Exception Value: global name 'url' is not defined
Please tell me where I am wrong because I used the example from the cookbook
on nested resources and I can't see where I'm wrong.
Answer: @Zacmwa, I have a working prepend_urls; you can take a look at the following example.
def prepend_urls(self):
    return [
        url(r"^(?P<resource_name>%s)/generate%s$" %
            (self._meta.resource_name, trailing_slash()),
            self.wrap_view('genusr'), name="api_get_genusr"),
    ]
def genusr(self, request, **kwargs):
data = self.deserialize(request, request.body, format=request.META.get('Content-Type', 'application/json'))
print(data.get('workflows',None))
child_resource = UserResource()
return child_resource.get_list(request)
And the above methods are wrapped inside my resource model.
The error is most likely caused by missing imports: the NameError is for `url`, so
make sure you import it (in Django 1.6, `from django.conf.urls import url`) along
with `from tastypie.utils import trailing_slash`, then try again. Let me know what
happens when you do this.
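Concretely, at the top of your api.py you would need something like (paths as of Django 1.6 and recent tastypie):
from django.conf.urls import url
from tastypie.utils import trailing_slash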
|
directory structure for a project that mixes C++ and Python
Question: Say you want want to create a programming project that mixes _C++_ and
_Python_. The **Foo** _C++_ project structure uses _CMake_ , and a _Python_
module is created by using _Swig_. The tree structure would look something
like this:
├── CMakeLists.txt
├── FooConfig.cmake.in
├── FooConfigVersion.cmake.in
├── Makefile
├── README
├── foo
│ ├── CMakeLists.txt
│ ├── config.hpp.in
│ ├── foo.cpp
│ └── foo.hpp
└── swig
└── foo.i
Now you would like to make use of the **Foo** project within a _Python_
project, say **Bar** :
├── AUTHORS.rst
├── CONTRIBUTING.rst
├── HISTORY.rst
├── LICENSE
├── MANIFEST.in
├── Makefile
├── README.rst
├── docs
│ ├── Makefile
│ ├── authors.rst
│ ├── conf.py
│ ├── contributing.rst
│ ├── history.rst
│ ├── index.rst
│ ├── installation.rst
│ ├── make.bat
│ ├── readme.rst
│ └── usage.rst
├── bar
│ ├── __init__.py
│ └── bar.py
├── requirements.txt
├── setup.cfg
├── setup.py
├── tests
│ ├── __init__.py
│ └── test_bar.py
└── tox.ini
This structure was crated by using [cookiecutter's pypackage
template](https://pypi.python.org/pypi/cookiecutter/0.7.2). A BoilerplatePP
template is also available to generate a _CMake_ _C++_ project using
cookiecutter (no _Swig_ part). So now that I have the structure of both
projects, and considering that the development will take place mainly in
_Python_ and the the project will be run in different systems, I need to
address the following questions:
1. What's the best way to mix them? Should I collapse both root directories? Should I have the **Foo** _C++_ project as a directory of the **Bar** project or the other way around? I may be inclined to put the entire _C++_ structure shown above in a folder at the root level of the _Python_ project, but I would like to know _a priori_ any pitfalls as the _CMake_ system is quite powerful and it may be convenient to do it the other way around.
2. In case I decide to put the **Foo** project as a directory within **Bar** , is the _Python_ setuptools package as powerful as the _CMake_ build system? I ask this because when I take a look at the **Bar** project, at the top level it seems there's only a bunch of scripts, but I don't know if this is the equivalent to _CMake_ as I'm new to _Python_.
3. The **Bar** project outlined above has a single _bar_ directory, but I assume that whenever this project expands, instead of having many other directories at the root level, other directories containing _Python_ code will be placed within _bar_. Is this correct (in the _Pythonic_ sense)?
4. I assume that a single egg will be produced from the entire project, so that it can be installed and run in many different python systems. Is the integration of the module created by the **Foo** project easy? I assume that this module will be created in a different directory than _bar_.
5. In order for the _Python_ code within the _bar_ directory to work, the module created by _Swig_ has to be available, so I guess the most straightforward way to do this is to modify the environment variable `PYTHONPATH` using the _CMake_ system. Is this fine or is there a better way?
Answer: **If the C++ application has no use outside the Python package that will
contain it:**
You can pretty safely place the C++ code within the python package that owns
it. Have the "foo" directory within the "bar" directory within your example.
This will make packaging the final Python module a bit easier.
**If the C++ application is reusable:**
I would definitely try to think of things in terms of "packages", where
independent parts are self-contained. All independent parts live on the same
level. If one part depends on another, you import from its corresponding
"package" from the same level. This is how dependencies typically work.
I would NOT include one within the other, because one does not strictly belong
to the other. What if you started a third project that needed "foo", but did
not need "bar"?
I would place both "foo" and "bar" packages into the same "project" directory
(and I would probably give each package its own code repository so each
package can be easily maintained and installed).
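A rough sketch of that layout, using the names from the question:
project/
├── foo/   <- C++ package (CMake + Swig), its own repository
└── bar/   <- Python package, its own repository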
|
GeekTool only iterates through my python loop once
Question: I built a very simple script with PRAW that prints the top 10 link titles on
reddit.com/r/worldnews. I want this to work with GeekTool, but only the
following shows up:
"TOP 10 NEWS ON REDDIT
1 NEWS TITLE
2 "
I don't know why that happens since when running the script directly from the
command line I have no issues whatsoever.
Here's the python script:
import praw
def main():
subreddit = r.get_subreddit('worldnews')
x = 1
print "TOP 10 NEWS ON REDDIT"
print ''
for submission in subreddit.get_hot(limit=10):
print x, submission.title
x = x+1
print ' '
if __name__ == "__main__":
user_agent = "Top10 0.1 by /u/alexisfg"
r = praw.Reddit(user_agent=user_agent)
main()
Answer: If you put a try...except around the main function to print any exceptions,
you get the following error message:
ascii codec can't encode character u'\u2019' in position 12: ordinal not in range(128)
So this is an encoding issue - some character in the second title is not in
the ASCII range, which python/Geektool is using as the default encoding. You
can get around this by encoding the title string explicitly with
`.encode('utf-8')`.
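For example, the print line in the script above would become:
print x, submission.title.encode('utf-8')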
|
Python script don't receive exit signal sent by supervisor
Question: I'm running a Python script that creates a Tornado server; the server is run
by supervisor. I want to gracefully terminate all WebSocket client connections
when a **supervisorctl reload** is issued (normally after a deploy).
My problem is that I can't get a function called when my server is killed by
supervisor, although it works when using kill with the signal, or when the
script is run in a console and killed with Control+C. I have tried other
signals and configurations without luck.
import signal, sys
def clean_resources(signum, frame):
print "SIG: %d, clean me" % signum
sys.exit(0)
if __name__ == '__main__':
# Nicely handle closing the server
for sig in (signal.SIGINT, signal.SIGTERM):
signal.signal(sig, clean_resources)
This is my tornado_supervisor.conf
[program:tornado_server]
command = python /opt/tornado/server.py -p 8890
user = www-data
stdout_logfile = /var/log/tornado/tornado_server_sup.log
redirect_stderr = true
autorestart=true
environment=HOME='/var/www'
environment=PYTHONPATH="$PYTHONPATH:/opt/tornado/"
stopsignal = TERM
stopwaitsecs = 10
stopasgroup = true
Answer: I had a similar/same problem. Only the parent Tornado process got the signal,
while the child processes were not killed.
I arranged for the parent process to kill the children manually using
os.killpg(); the children also wait a short delay to (possibly) finish current
requests:
import os
import signal
import time

import tornado.httpserver
import tornado.ioloop
import tornado.web
from tornado.options import options, parse_command_line

LOOP_STOP_DELAY = 2  # seconds to let in-flight requests finish; the exact value is an assumption

# will be initialized in main()
server = None
loop = None
def stop_loop():
global loop
loop.stop()
def signal_handler_child_callback():
global loop
global server
server.stop()
# allow to finish processing current requests
loop.add_timeout(time.time() + LOOP_STOP_DELAY, stop_loop)
def signal_handler(signum, frame):
global loop
global server
if loop:
#this is child process, will restrict incoming connections and stop ioloop after delay
loop.add_callback(signal_handler_child_callback)
else:
#this is master process, should restrict new incomming connections
#and send signal to child processes
server.stop()
signal.signal(signal.SIGTERM, signal.SIG_DFL)
os.killpg(0, signal.SIGTERM)
def main():
parse_command_line()
signal.signal(signal.SIGTERM, signal_handler)
# ...
tornado_app = tornado.web.Application(
[
#...
])
global server
server = tornado.httpserver.HTTPServer(tornado_app)
server.bind(options.port)
server.start(0)
global loop
loop = tornado.ioloop.IOLoop.instance()
loop.start()
if __name__ == '__main__':
main()
|
Logical url patterns - django | python
Question: I'm building a social network and I want to show special content when a user
is logged in and he accesses to his public profile url (so i'll show
customization tools). I've written code to return the user name and match it
with the regex, but I don't know how to only have the pattern if the user is
logged in.
from django.conf.urls import patterns, include, url
import re
from auth import engine
profile_name = engine.get_profile_name()
urlpatterns = patterns('',
...
url(r'^'+re.escape(profile_name)+r'/?', 'myprofile.views.show_profile') # authentication required
)
The engine will return `None` if the user is not logged in. But this may cause
an error in url().
So how can I achieve it?
Answer: You have to decorate either your view function or view class with
`login_required`. There is no regex way to find out if the user is logged in
or not since it's handled by the sessions and requests and not your url.
You can read up on it [here](https://docs.djangoproject.com/en/1.5/topics/auth/default/#the-login-required-decorator); otherwise, here's an example function-based view:
@login_required(login_url="/login/") #will redirect to login if not logged in.
def show_profile(request, profile_name):
return render_to_response(...)
Or, here is another approach if you want to omit the decorator:
def show_profile(request, profile_name):
if request.user.is_authenticated():
return render_something_cool
else:
return render_something_else
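Both snippets assume the usual Django imports, e.g.:
from django.contrib.auth.decorators import login_required
from django.shortcuts import render_to_response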
|
How to parse XML with xml.sax and why it's not working
Question: I have a piece of code which in my opinion should work:
#!/usr/bin/env python3
import xml.sax
import xml.sax.handler
class MyClass:
def load_from_file(self, filename):
class MyXmlHandler(xml.sax.handler.ContentHandler):
def start_element(self, name, attrs):
print('It\'s working!!!')
xml.sax.parse(filename, MyXmlHandler())
app = MyClass()
app.load_from_file('/home/bps/Desktop/test.xml')
I'm sure the XML file is not empty (it contains many tags), but the script ends
silently: no printed strings, no error, no exception, nothing :P
Why? Am I missing something?
Answer: [The method name should be
`startElement`](https://docs.python.org/3.4/library/xml.sax.handler.html#xml.sax.handler.ContentHandler.startElement)
(rather than `start_element`), or `startElementNS` if your XML uses
namespaces.
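A minimal fix of the handler from the question:
class MyXmlHandler(xml.sax.handler.ContentHandler):
    def startElement(self, name, attrs):  # camelCase, as the SAX API expects
        print('It\'s working!!!')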
|
Urllib2 Error in pip under Windows
Question: I have some trouble running pip from ActiveState Python 2.7.2 under Windows.
We use a proxy, which might be part of the issue. The proxy is a non-
authenticating proxy. The proxy settings work fine when taken from the system,
set manually in e.g. Firefox, or used in some simple Python code:
This works as expected:
urllib.urlopen('http://www.google.com', proxies={'http': 'http://proxy:port'})
It gives a response with header information from google.com:
<addinfourl at 61539976L whose fp = <socket._fileobject object at 0x00000000042924F8>>
With the proxy being set in the `http_proxy` environment variable I run
pip install loremipsum
I get
Downloading/unpacking loremipsum
Could not fetch URL http://pypi.python.org/simple/loremipsum: <urlopen error [Errno 11004] getaddrinfo
failed>
Will skip URL http://pypi.python.org/simple/loremipsum when looking for download links for loremipsum
Could not fetch URL http://pypi.python.org/simple/: <urlopen error [Errno 11004] getaddrinfo failed>
Will skip URL http://pypi.python.org/simple/ when looking for download links for loremipsum
Cannot fetch index base URL http://pypi.python.org/simple/
Could not fetch URL http://pypi.python.org/simple/loremipsum/: <urlopen error [Errno 11004] getaddrinfo failed>
Will skip URL http://pypi.python.org/simple/loremipsum/ when looking for download links for loremipsum
Could not find any downloads that satisfy the requirement loremipsum
No distributions at all found for loremipsum
Exception information:
Traceback (most recent call last):
File "C:\ActiveState\ActivePython\lib\site-packages\pip\basecommand.py", line 126, in main
self.run(options, args)
File "C:\ActiveState\ActivePython\lib\site-packages\pip\commands\install.py", line 222, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "C:\ActiveState\ActivePython\lib\site-packages\pip\req.py", line 954, in prepare_files
url = finder.find_requirement(req_to_install, upgrade=self.upgrade)
File "C:\ActiveState\ActivePython\lib\site-packages\pip\index.py", line 152, in find_requirement
raise DistributionNotFound('No distributions at all found for %s' % req)
DistributionNotFound: No distributions at all found for loremipsum
The error 11004 seems to indicate a name-resolution problem, which behind a
proxy points to a problem with proxy access (or the proxy being ignored).
I can test a similar setup (http_proxy variable, same proxy, different Python)
under Linux. Running the above command works nicely. Also accessing the URL in
a browser on the Windows machine works (shows a set of egg and zip files).
I walked the `pip` code to find where it fails, and ran the code in an
interactive session. I found that in `C:\ActiveState\ActivePython\lib\site-
packages\pip\downloads.py` basically all downloading preparation and action
happens. In `setup()` (line 125) the `ProxyHandler` is prepared and an
`opener` constructed, which is stored in `urllib2` to be used by further
calls. When running interactively I found that adding an entry for an `https`
proxy was needed. I also added the printing of debug information. This gave
me, in an interactive run in IPython:
In [1]: import urllib2
In [2]: proxy='proxy:port'
In [3]: proxy_support = urllib2.ProxyHandler({"http": proxy, "ftp": proxy, "https": proxy})
In [4]: opener = urllib2.build_opener(proxy_support, urllib2.CacheFTPHandler, urllib2.HTTPHandler(debugle
vel=1), urllib2.HTTPSHandler(debuglevel=1))
In [5]: urllib2.install_opener(opener)
The proxy for `ftp`, `http` and `https` is indeed the same. I also checked
with some printouts that all code handling command-line parameters, etc. did
not mess with the proxy. The proxy is stored the same way in `downloads.py` as
simplified above (read from the http_proxy variable).
After figuring out the URL to fetch the package from, pip goes to `__call__()`
at line 74.
First a request is constructed using:
In [6]: url = urllib2.Request('http://pypi.python.org/simple/loremipsum', headers={'Accept-encoding': 'identity'})
then the request is used with `urllib2.urlopen(url)`:
In [7]: response = urllib2.urlopen(url)
send: 'GET http://pypi.python.org/simple/loremipsum HTTP/1.1\r\nHost: pypi.python.org\r\nUser-Agent: Pyth
on-urllib/2.7\r\nConnection: close\r\nAccept-Encoding: identity\r\n\r\n'
reply: 'HTTP/1.1 301 Moved Permanently\r\n'
header: Server: Varnish
header: Retry-After: 0
header: Location: https://pypi.python.org/simple/loremipsum
header: Content-Length: 0
header: Accept-Ranges: bytes
header: Date: Tue, 12 Aug 2014 16:41:39 GMT
header: Via: 1.1 varnish
header: X-Served-By: cache-fra1234-FRA
header: X-Cache: MISS
header: X-Cache-Hits: 0
header: X-Timer: S1407861699.491394,VS0,VE0
header: Connection: close
header: Age: 0
send: 'CONNECT pypi.python.org:443 HTTP/1.0\r\n'
send: '\r\n'
send: 'GET /simple/loremipsum HTTP/1.1\r\nHost: pypi.python.org\r\nUser-Agent: Python-urllib/2.7\r\nConne
ction: close\r\nAccept-Encoding: identity\r\n\r\n'
reply: 'HTTP/1.1 301 Moved Permanently\r\n'
header: Date: Tue, 12 Aug 2014 16:41:40 GMT
header: Server: nginx/1.6.0
header: Location: /simple/loremipsum/
header: Cache-Control: max-age=600, public
header: Strict-Transport-Security: max-age=31536000; includeSubDomains
header: Via: 1.1 varnish
header: Content-Length: 0
header: Accept-Ranges: bytes
header: Via: 1.1 varnish
header: Age: 44282
header: X-Served-By: cache-iad2135-IAD, cache-fra1231-FRA
header: X-Cache: MISS, HIT
header: X-Cache-Hits: 0, 1
header: X-Timer: S1407861700.831757,VS0,VE0
header: Connection: close
send: 'CONNECT pypi.python.org:443 HTTP/1.0\r\n'
send: '\r\n'
send: 'GET /simple/loremipsum/ HTTP/1.1\r\nHost: pypi.python.org\r\nUser-Agent: Python-urllib/2.7\r\nConn
ection: close\r\nAccept-Encoding: identity\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Date: Tue, 12 Aug 2014 16:41:41 GMT
header: Server: nginx/1.6.0
header: Content-Type: text/html; charset=utf-8
header: X-PYPI-LAST-SERIAL: 794358
header: Cache-Control: max-age=600, public
header: Strict-Transport-Security: max-age=31536000; includeSubDomains
header: Via: 1.1 varnish
header: Content-Length: 913
header: Accept-Ranges: bytes
header: Via: 1.1 varnish
header: Age: 67708
header: X-Served-By: cache-iad2121-IAD, cache-fra1231-FRA
header: X-Cache: HIT, HIT
header: X-Cache-Hits: 1, 1
header: X-Timer: S1407861701.174694,VS0,VE0
header: Vary: Accept-Encoding
header: Connection: close
This seems to be an ok answer. I have the very same code in pip and yet it
fails.
What am I missing? Why is my interactive session working and pip isn't?
Answer: You didn't [specify the
proxy](http://pip.readthedocs.org/en/latest/reference/pip.html?highlight=proxy#cmdoption
--proxy) to pip?
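For reference, the explicit form would be (with proxy:port as in the question):
pip install --proxy http://proxy:port loremipsum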
EDIT: summary of the comments: the proxy was specified, via the env var
`HTTP_PROXY`, but didn't work at first, and works now.
I looked a bit into urllib, and there is code to handle the Windows registry
settings by default. There is `getproxies()`, which returns
`getproxies_environment() or getproxies_registry()`, so you _should_ have been
OK already without any modifications to the ENV _or_ the command line.
|
Seaborn FactorPlot throws TypeError
Question: sns.FactorPlot is throwing me a TypeError when it tries to set_title. This
happens on an example dataframe, but more worryingly, also happens on the
example from the documentation.
So
import seaborn as sns
exercise = sns.load_dataset('exercise')
sns.factorplot("kind", "pulse", "diet", exercise, kind="point")
Returns a long traceback. Here's the end of it:
*/lib/python2.7/site-packages/seaborn/linearmodels.pyc in plot(self, ax)
275 if hasattr(self.hue, "name"):
276 leg.set_title(self.hue.name,
--> 277 prop={"size": mpl.rcParams["axes.labelsize"]})
278 ax.xaxis.grid(False)
279 ax.set_xticks(self.positions)
TypeError: set_title() got an unexpected keyword argument 'prop'
When I run this in an iPython Notebook with pylab inline enabled, the plot
displays fine underneath the traceback. But I don't understand why I'm getting
this error, especially with an example from the docs.
Answer: This error happens with seaborn 0.3.1 and matplotlib < 1.2, but can be fixed
by updating either matplotlib or seaborn (currently that means installing the
development version of seaborn from github).
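For example, one way to pick up the fix at the time (the URL is seaborn's actual GitHub repository; that the fix lives in its development branch is stated above):
pip install --upgrade matplotlib
pip install git+https://github.com/mwaskom/seaborn.git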
|
conditional breakpoint using pdb
Question: It sounds like I'm missing something extremely simple. I'm trying to set a
breakpoint in my Python code using:
if(some condition):
pdb.set_trace()
My error in the code comes after a large number of iterations, so it is
difficult to debug using print etc. I am able to print stuff when the
condition hits, but I would like to set a breakpoint.
--EDIT--
Actual code:
import pdb
if (node_num == 16):
print node_num
pdb.set_trace()
Answer: I am not sure why your code isn't working, but what you can do is create a new
file on your local machine with a minimal example, to see if you can do what
you want to do:
import pdb
for node_num in range(50):
if node_num == 16:
print(node_num)
pdb.set_trace()
Now running it:
16
> /tmp/tmp.py(3)<module>()
-> for node_num in range(50):
(Pdb) p node_num
16
As you can see, this worked as intended with this trivial example; it's up to
you to figure out how to adapt it to your code, and/or what else you did to
your code/environment that prevented the prompt from showing up.
Alternatively, if you have a function that is dying in an exception and you
want to know the exact line that caused it, you should use `post_mortem`
instead. Wrap the problematic section of code with this
try:
problem_function()
except Exception: # or the specific exception type thrown
pdb.post_mortem()
raise
What post_mortem does is drop a breakpoint right at the point where the
exception happened (specifically, in that stack frame), which allows all
values to be inspected. However, I also put a raise at the end to let the
exception propagate as normal; this is because execution doesn't resume from
where it died but only pauses at that exception-handling block due to the
`post_mortem` call, so you might as well give up (re-raise) after inspecting
what went wrong.
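As an aside, pdb can also attach the condition to a breakpoint itself from its prompt; the line number here is hypothetical:
(Pdb) break 10, node_num == 16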
|
Creating multiple *.cfg files Python
Question: I'm working with *.cfg files. The file can be read in a text editor like gedit
and has this format:
% some comments
VAR_1= 1
%
% More comments
ANOTHER_VAR= -8
%
% comments again
VAR_THE_COMEBACK= 10
I want to create multiple config files, just changing VAR_1=
1....2...3.........10. I manage to read the *.cfg file without any extra
imports in Python, but I can't find a way to change just this parameter, save
the file, and create another one with a different value for VAR_1.
My code until now is really simple:
import os
os.chdir('/home/leonardo/Desktop')
f = open('file.cfg','r') #if I replace r by w I erase the file ....
a = f.read()
print a.find('1')
a.replace('1','2') #I tried this but. ... :(
f.close()
Any tips ?
Thank you for the help !
Answer: Untested code, but you will get the idea:
with open('file.cfg', 'r') as f:
contents_by_line = f.readlines()
for var_index, line in enumerate(contents_by_line):
if line.startswith("VAR_"):
break
else:
raise RuntimeError("VAR_ not found in file")
for var_i, new_cfg_file in ((2,"file2.cfg"),
(3, "file3.cfg")): #add files as you want
with open(new_cfg_file, "w") as fout:
for i, line in enumerate(contents_by_line):
if i == var_index:
fout.write("VAR_1=%d\n" % var_i)
else:
fout.write(line)
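To generate all ten files the question asks for (VAR_1 = 1..10), the hard-coded pair list can be replaced with a range, reusing the variables above (a sketch):
for n in range(1, 11):
    with open("file%d.cfg" % n, "w") as fout:
        for i, line in enumerate(contents_by_line):
            fout.write("VAR_1= %d\n" % n if i == var_index else line)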
|
Python live dependency installation via pip (PyPI)
Question: I want to pull the live version of a package as a dependency of another
package I install with pip.
Now, I have already found out [how to install a live version of a package via
pip](http://stackoverflow.com/questions/23185238/easy-install-live-python-
libraries-scripts); and that is **not** the question I am asking here.
I'd like to know whether I can pull in a live dependency version (e.g. from
the PyPI index); so far I have only been able to set up tarballs via PyPI.
Answer: In your `setup.py`, do:
from setuptools import setup
setup(
...
install_requires=[
'a_required_pypi_package',
'another_package_in_pypi>=minimum_version'
]
...
)
and `pip`, `setup.py install` or `setup.py develop` will take care of it.
However, the requirement will be considered satisfied if any version of
`a_required_pypi_package` is installed. This is especially true if you use
`pip freeze` to write a `requirements.txt` and use it to install packages.
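For completeness, a live (VCS) requirement in a `requirements.txt` uses pip's editable syntax; the repository URL here is hypothetical:
-e git+https://github.com/user/a_required_pypi_package.git#egg=a_required_pypi_package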
|
How do imports work in IPython
Question: I'm a little bewildered by exactly how import statements work in IPython. I've
turned up nothing through web searches.
Implicit relative imports work with Python 2, but I don't know if that's still
the case with IPython for Python 3.
Relative imports using the dot syntax don't seem to work at all:
In [6]: ls
dsp/ __init__.py __init__.pyc utils/
In [7]: from .utils import capture
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-e7d50007bdd1> in <module>()
----> 1 from .utils import capture
ValueError: Attempted relative import in non-package
Importing modules that use the dot syntax seems impossible:
In [8]: cd utils
/home/user/workspace/mypkg/mypkg/utils
In [9]: ls
capture/ capture.py capture.pyc cext/ __init__.py __init__.pyc
In [10]: from capture import Capture
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-8c31c76d052d> in <module>()
----> 1 from capture import Capture
/home/user/workspace/mypkg/mypkg/utils/capture.py in <module>()
17 import tarfile
18 import re
---> 19 from .. import utils
20 from . import flprint
21 from select import poll
ValueError: Attempted relative import in non-package
Is there some concise documentation on this somewhere?
Answer: The problem is I was importing the module from a lower position in the package
hierarchy than is used in the module's import statement. So if I cd into the
utils directory and run
from capture import Capture
then capture becomes the top level of the hierarchy. So the import statement
in the capture module
from .. import utils
goes beyond the top level. Python doesn't know what ".." refers to, because
modules aren't self-aware of what package they belong to. If I change back up
to the mypkg directory, I get the same problem
In [13]: cd ..
/home/user/workspace/myproj/mypkg
In [14]: from utils import capture
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-14-c87f26b2171d> in <module>()
----> 1 from utils import capture
/home/user/workspace/myproj/mypkg/utils/capture.py in <module>()
18 import re
19 import zmq
---> 20 from .. import utils
21 from . import flprint
22 from select import poll
ValueError: Attempted relative import beyond toplevel package
In this case, utils is the top level, so
from . import flprint
will work, but
from .. import utils
won't work.
I have to move one more directory up:
In [19]: cd ..
/home/user/workspace/myproj
In [20]: from mypkg.utils import capture
In [21]: cap = capture.Capture
IPython can import packages and modules located in the current working
directory, or from directories in the import path. I can add the package to
the import path to be able to import from any working directory.
In [23]: import sys
In [24]: sys.path.append('/home/user/workspace/myproj')
In [25]: cd
/home/user
In [26]: from mypkg.utils import capture
Using sys.path.append is probably the most robust method for controlling the
import path in Python generally, and much less error-prone than using relative
import statements. You can use it inside a module to essentially make the
module self-aware.
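Related: a module that uses relative imports can also be executed directly with `-m` from above the package, e.g. (paths as in the post):
cd /home/user/workspace/myproj
python -m mypkg.utils.capture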
|
List in a Loop and Subprocesses, Standard Output
Question: I want to call a subprocess in a for loop and collect selected lines of its
output into a list, to print them on one line separated by commas. My code is
like this:
import serial
import time
import subprocess #test
subprocess.call(['echo','I am Learning to use Subprocesses']) #when Using subprocess.call([*1,*2]) *1 is the first argument and it is the bash code, *2 is the opti$
ser = serial.Serial("/dev/ttyUSB0", 4800, timeout = 1)
file = open("/home/pi/GPSWIFIMODIFIED.csv", "a")
file.write('\n')
for i in range(0,50):
date = time.time()
val = ser.readline();
if val.find("GPGGA")==-1: continue
#### subprocess.call([iwlist, scan and the stuff
### iwlout = sys.stdout(((( output on the screen)))))) the output which iwlist wlan0 scan will put on the screen
### line = iwlout.readline()
### for j in range (1,40):
### if line.find("Cell +j")==-1:
#### mylistofwifi = [line.find("Address"),line.find("Quality"),line.find("IEEE"),line.find("Pairwise")]
#### ########################### ######Here there must be iwlist scan, and the gathering must be done
print >> file ,date,',',val[:42],',','WIFI DATA WILL BE HERE'# print mylistofwifi will be here
file.close()
The lines that work at the moment store the data line by line; what I want is
to append the lines I choose from the iwlist scan output to the end of the
lines that hold the time and the GPGGA data.
The problem is that I am new to Python and not yet able to use lists, parsing,
or subprocesses efficiently.
Any suggestions will be appreciated, thanks in advance.
**The output from `sudo iwlist wlan0 scan`:**
Cell 01 - Address: 00:26:99:4D:0F:34
Channel:11
Frequency:2.462 GHz (Channel 11)
Quality=18/100 Signal level=18/100
Encryption key:on
ESSID:"STAFF"
Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 6 Mb/s; 9 Mb/s
11 Mb/s; 12 Mb/s; 18 Mb/s
Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
Mode:Master
Extra:tsf=0000000db0b79259
Extra: Last beacon: 0ms ago
IE: Unknown: 00055354414646
IE: Unknown: 010882040B0C12161824
IE: Unknown: 03010B
IE: Unknown: 0706494520010D14
IE: Unknown: 0B0504000E8D5B
IE: Unknown: 2A0100
IE: IEEE 802.11i/WPA2 Version 1
Group Cipher : CCMP
Pairwise Ciphers (1) : CCMP
Authentication Suites (2) : 802.1x Proprietary
IE: Unknown: 32043048606C
IE: Unknown: 851E06008F000F00FF03590045452D472D303100000000000000000004000027
IE: Unknown: 9606004096000E00
IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
IE: Unknown: DD06004096010104
IE: Unknown: DD050040960305
IE: Unknown: DD050040960B09
IE: Unknown: DD050040961401
Cell 02 - Address: 00:26:99:4D:0D:35
Channel:11
Frequency:2.462 GHz (Channel 11)
Quality=21/100 Signal level=21/100
Encryption key:on
ESSID:"eduroam"
Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 6 Mb/s; 9 Mb/s
11 Mb/s; 12 Mb/s; 18 Mb/s
Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
Mode:Master
Extra:tsf=000001021e097bd4
Extra: Last beacon: 0ms ago
IE: Unknown: 0007656475726F616D
IE: Unknown: 010882848B0C12961824
IE: Unknown: 03010B
IE: Unknown: 0706494520010D14
IE: Unknown: 0B0503000C8D5B
IE: Unknown: 2A0100
IE: IEEE 802.11i/WPA2 Version 1
Group Cipher : TKIP
Pairwise Ciphers (2) : TKIP CCMP
Authentication Suites (2) : 802.1x Proprietary
IE: Unknown: 32043048606C
IE: Unknown: 851E02008F000F00FF03590045452D472D303200000000000000000003000027
IE: Unknown: 9606004096000B00
IE: WPA Version 1
Group Cipher : TKIP
Pairwise Ciphers (1) : TKIP
Authentication Suites (2) : 802.1x Proprietary
IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
IE: Unknown: DD06004096010104
IE: Unknown: DD050040960305
IE: Unknown: DD050040960B09
IE: Unknown: DD050040961401
Cell 03 - Address: 00:26:99:4D:0D:30
Channel:11
Frequency:2.462 GHz (Channel 11)
Quality=21/100 Signal level=21/100
Encryption key:on
ESSID:"CONF"
Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 6 Mb/s; 9 Mb/s
11 Mb/s; 12 Mb/s; 18 Mb/s
Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
Mode:Master
Extra:tsf=000001021e0e36dd
Extra: Last beacon: 0ms ago
IE: Unknown: 0004434F4E46
IE: Unknown: 010882848B0C12961824
IE: Unknown: 03010B
IE: Unknown: 0706494520010D14
IE: Unknown: 0B0503000C8D5B
IE: Unknown: 2A0100
IE: Unknown: 32043048606C
IE: Unknown: 851E02008F000F00FF03590045452D472D303200000000000000000003000027
IE: Unknown: 9606004096000B00
IE: WPA Version 1
Group Cipher : TKIP
Pairwise Ciphers (1) : TKIP
Authentication Suites (1) : PSK
IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00
IE: Unknown: DD06004096010104
IE: Unknown: DD050040960305
IE: Unknown: DD050040960B09
IE: Unknown: DD050040961400
**The output of running the file without the `subprocess.call([iwlist and the stuff` part (not including the commented lines):**
1407868084.56 , $GPGGA,104023.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868085.21 , $GPGGA,104024.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868086.21 , $GPGGA,104025.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868087.62 , $GPGGA,104026.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868088.25 , $GPGGA,104027.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868089.21 , $GPGGA,104028.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868090.21 , $GPGGA,104029.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868091.2 , $GPGGA,104030.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
1407868092.61 , $GPGGA,104031.000,5323.0922,N,00636.1480,W , WIFI DATA WILL BE HERE
Answer: You can use
[`subprocess.check_output`](https://docs.python.org/2/library/subprocess.html#subprocess.check_output)
to get the output of a process:
from subprocess import check_output
iwout = check_output(['iwlist', 'wlan0', 'scan']).splitlines()
And then when you print:
print >> file, date, ',', val[:42], ',', ', '.join(iwout)
You can replace `iwout` with `map(str.strip, iwout)` to remove the unnecessary
whitespace.
As you can see, I removed the `sudo`, because it could block execution by
asking for the password. You shouldn't include a `sudo` call in your script
unless the script is interactive; otherwise, run the whole script with root
privileges and perform all the invocations without `sudo`.
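To keep only the fields the question's commented-out code mentions, a simple filter over `iwout` could look like (a sketch):
keep = ('Address', 'Quality', 'IEEE', 'Pairwise')
wifi_fields = [l.strip() for l in iwout if any(k in l for k in keep)]
print >> file, date, ',', val[:42], ',', ', '.join(wifi_fields)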
|
How to make QTreeWidget dragging semi-transparent and keep itemWidgets
Question: I have a treeWidget with item widgets set on the columns, but after dragging
the widgets are gone, and the drop indicator is opaque.
1. How can I make the widgets persist after dropping?
2. How can I make the drop indicator transparent? (I'm on CentOS 6.5; the compositing manager is not running.)
executable example
#!/usr/bin/env python2
import os
import sys
import re
from PyQt4 import QtGui, QtCore
from PyQt4.QtCore import Qt, QString
class MyTreeWidget(QtGui.QTreeWidget):
def __init__(self, parent=None):
super(MyTreeWidget, self).__init__(parent)
class CommandWidget(QtGui.QDialog):
def __init__(self, parent=None, level=0,script='echo /path/to/script'):
super(CommandWidget, self).__init__()
self.layout = QtGui.QHBoxLayout(self)
browseBtn = QtGui.QPushButton(parent)
browseBtn.setMinimumSize(QtCore.QSize(0, 25))
# level, path = val
# levelNum = re.search('(?<=level).+', level).group()
browseBtn.setText('%s : %s' % (level, script))
self._level = int(level)
self._script = script
browseBtn.setStyleSheet("text-align: left")
self.layout.addWidget(browseBtn)
# self.updateGeometry()
self.browseBtn = browseBtn
# self.layout.addWidget(browseBtn)
self.browseBtn.clicked.connect(self.browseCommandScript)
self.browseBtn.setIconSize(QtCore.QSize(64, 64))
def browseCommandScript(self):
script = QtGui.QFileDialog.getOpenFileName(
self, 'Select Script file', '/home/xxx/python', ".py Files (*.py);;Executable Files (*)")
if script:
self._script = script
button_label = re.search('[^\\/]*$',script).group()
self.browseBtn.setText(('%s : %s' % (self._level, button_label)))
@property
def level(self):
return self._level
@level.setter
def level(self, value):
self._level = value
@property
def script(self):
return self._script
@script.setter
def script(self, value):
self._script = value
class MyLineEdit(QtGui.QWidget):
def __init__(self,value=None,parent=None):
super(MyLineEdit,self).__init__(parent)
self.layout = QtGui.QHBoxLayout(self)
self.layout.setSpacing(0)
self.layout.setMargin(3)
self.lineEdit = QtGui.QLineEdit(value)
spacer1 = QtGui.QSpacerItem(20, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)
spacer2 = QtGui.QSpacerItem(20, 20, QtGui.QSizePolicy.Expanding, QtGui.QSizePolicy.Expanding)
self.lineEdit.setContentsMargins(2,2,2,2)
self.lineEdit.setAlignment(Qt.AlignHCenter)
self.layout.addItem(spacer1)
self.layout.addWidget(self.lineEdit)
self.layout.addItem(spacer2)
self.lineEdit.setMaximumSize(QtCore.QSize(70, 25))
self.lineEdit.textEdited.connect(self._update_item_widget_data)
def text(self):
return self.lineEdit.text()
def setText(self,text):
return self.lineEdit.setText(text)
def _update_item_widget_data(self,text):
# print 'update',text
self.treeWidgetItem.setData(1,Qt.UserRole,text)
class TheUI(QtGui.QDialog):
def __init__(self, args=None, parent=None):
super(TheUI, self).__init__(parent)
self.layout = QtGui.QVBoxLayout(self)
treeWidget = MyTreeWidget()
button = QtGui.QPushButton('Add')
self.layout.addWidget(treeWidget)
self.cssEditTE = QtGui.QPlainTextEdit()
self.layout.addWidget(button)
self.layout.addWidget(self.cssEditTE)
self.cssEditTE.textChanged.connect(self._update_css)
treeWidget.setHeaderHidden(True)
treeWidget.setRootIsDecorated(False)
layout = QtGui.QHBoxLayout(self)
rootDecorationCB = QtGui.QCheckBox('RootIsDecorated')
layout.addWidget(rootDecorationCB)
self.layout.addLayout(layout)
rootDecorationCB.stateChanged.connect(self._update_root_decorated)
indentationSlider = QtGui.QSlider()
indentationSlider.setOrientation(Qt.Horizontal)
indentationSlider.setRange(0,100)
indentationSlider.setValue(20)
indentationSlider.valueChanged.connect(self._alter_indentation)
layout.addWidget(indentationSlider)
self.layout.setStretchFactor(treeWidget,1)
self.treeWidget = treeWidget
self.button = button
self.button.clicked.connect(lambda *x: self.addCmd())
HEADERS = ( "script", "chunksize", "mem" )
self.treeWidget.setHeaderLabels(HEADERS)
self.treeWidget.setColumnCount( len(HEADERS) )
self.treeWidget.setColumnWidth(0,200)
self.treeWidget.header().show()
for i in range(len(HEADERS)):
self.treeWidget.headerItem().setTextAlignment(i,Qt.AlignHCenter)
self.treeWidget.setDragDropMode(QtGui.QAbstractItemView.InternalMove)
self.treeWidget.setIndentation(60)
self.resize(500,700)
for i in xrange(2):
self.addCmd()
item = self.addCmd()
self.addCmd(parent = item)
self.addCmd(parent = item)
item = self.addCmd()
item=self.addCmd(parent = item)
self.addCmd(parent = item)
self.addCmd()
self.treeWidget.setColumnWidth(0,200)
def addCmd(self, level=0, script='echo /path/to/script',parent=None):
'add a level to tree widget'
root = self.treeWidget.invisibleRootItem()
if parent is None:
parent = root
item = QtGui.QTreeWidgetItem(parent)
# item = QtGui.QTreeWidgetItem(self.treeWidget.invisibleRootItem())
item.setFlags(item.flags() | QtCore.Qt.ItemIsDropEnabled)
existingLevels = self.treeWidget.topLevelItemCount()
# level, path = val
# level = level % existingLevels
cmdWidget = CommandWidget(self.treeWidget, existingLevels, script)
self.treeWidget.setItemWidget(item, 0, cmdWidget)
line_edit_1 = MyLineEdit('1')
line_edit_2 = MyLineEdit('200')
self.treeWidget.setItemWidget(item, 1, line_edit_1)
self.treeWidget.setItemWidget(item, 2, line_edit_2)
item.setExpanded(True)
return item
def _update_css(self):
self.treeWidget.setStyleSheet(self.cssEditTE.toPlainText())
def _update_root_decorated(self,state):
if state == Qt.Checked:
self.treeWidget.setRootIsDecorated(True)
else:
self.treeWidget.setRootIsDecorated(False)
self.treeWidget.updateGeometries()
def _alter_indentation(self,value):
print value
self.treeWidget.setIndentation(value)
self.treeWidget.updateGeometries()
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
gui = TheUI()
gui.show()
app.exec_()
Answer: > 1) How can I make the widget persist after dropping?
Please use this to understand how to use `QItemDelegate` (running on Windows
7, Python 2.7, PyQt4):
import sys
import os
from PyQt4 import QtCore, QtGui
from functools import partial
class QCustomDelegate (QtGui.QItemDelegate):
def createEditor (self, parentQWidget, optionQStyleOptionViewItem, indexQModelIndex):
column = indexQModelIndex.column()
if column == 0:
editorQWidget = QtGui.QPushButton(parentQWidget)
self.connect(editorQWidget, QtCore.SIGNAL('released()'), partial(self.requestNewPath, indexQModelIndex))
return editorQWidget
elif column in [1, 2]:
editorQWidget = QtGui.QSpinBox(parentQWidget)
editorQWidget.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter)
editorQWidget.setMinimum(0)
editorQWidget.setMaximum(2 ** 16)
return editorQWidget
else:
return QtGui.QItemDelegate.createEditor(self, parentQWidget, optionQStyleOptionViewItem, indexQModelIndex)
def setEditorData (self, editorQWidget, indexQModelIndex):
column = indexQModelIndex.column()
if column == 0:
textQString = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toString()
editorQWidget.setText(textQString)
elif column in [1, 2]:
value, _ = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toInt()
editorQWidget.setValue(value)
else:
QtGui.QItemDelegate.setEditorData(self, editorQWidget, indexQModelIndex)
def setModelData (self, editorQWidget, modelQAbstractItemModel, indexQModelIndex):
column = indexQModelIndex.column()
if column == 0:
textQString = editorQWidget.text()
modelQAbstractItemModel.setData(indexQModelIndex, textQString, QtCore.Qt.EditRole)
elif column in [1, 2]:
value = editorQWidget.value()
modelQAbstractItemModel.setData(indexQModelIndex, value, QtCore.Qt.EditRole)
else:
QtGui.QItemDelegate.setModelData(self, editorQWidget, modelQAbstractItemModel, indexQModelIndex)
def updateEditorGeometry(self, editorQWidget, optionQStyleOptionViewItem, indexQModelIndex):
column = indexQModelIndex.column()
if column in [0, 1, 2]:
editorQWidget.setGeometry(optionQStyleOptionViewItem.rect)
else:
QtGui.QItemDelegate.updateEditorGeometry(self, editorQWidget, optionQStyleOptionViewItem, indexQModelIndex)
def requestNewPath (self, indexQModelIndex):
self.emit(QtCore.SIGNAL('requestNewPath'), indexQModelIndex)
def paint (self, painterQPainter, optionQStyleOptionViewItem, indexQModelIndex):
column = indexQModelIndex.column()
if column == 0:
textQString = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toString()
foundIndexQModelIndex = indexQModelIndex
while foundIndexQModelIndex.parent() != QtCore.QModelIndex():
foundIndexQModelIndex = foundIndexQModelIndex.parent()
buttonQStyleOptionButton = QtGui.QStyleOptionButton()
buttonQStyleOptionButton.rect = QtCore.QRect(optionQStyleOptionViewItem.rect)
buttonQStyleOptionButton.text = str(foundIndexQModelIndex.row() + 1) + ' : ' + os.path.basename(str(textQString))
buttonQStyleOptionButton.state = QtGui.QStyle.State_Active
QtGui.QApplication.style().drawControl(QtGui.QStyle.CE_PushButton, buttonQStyleOptionButton, painterQPainter)
elif column in [1, 2]:
value, _ = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toInt()
textQStyleOptionViewItem = optionQStyleOptionViewItem
textQStyleOptionViewItem.displayAlignment = QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter
currentQRect = QtCore.QRect(optionQStyleOptionViewItem.rect)
currentQRect.setWidth(currentQRect.width() - 22)
self.drawDisplay(painterQPainter, textQStyleOptionViewItem, currentQRect, QtCore.QString(str(value)));
spinBoxQStyleOptionSpinBox = QtGui.QStyleOptionSpinBox()
spinBoxQStyleOptionSpinBox.rect = QtCore.QRect(optionQStyleOptionViewItem.rect)
QtGui.QApplication.style().drawComplexControl(QtGui.QStyle.CC_SpinBox, spinBoxQStyleOptionSpinBox, painterQPainter)
else:
QtGui.QItemDelegate.paint(self, painterQPainter, optionQStyleOptionViewItem, indexQModelIndex)
class QCustomTreeWidget (QtGui.QTreeWidget):
def __init__(self, parent = None):
super(QCustomTreeWidget, self).__init__(parent)
self.setDragEnabled(True)
self.setDragDropMode(QtGui.QAbstractItemView.InternalMove)
self.setColumnCount(3)
self.setHeaderLabels(('script', 'chunksize', 'mem'))
for i in range(self.columnCount()):
self.headerItem().setTextAlignment(i, QtCore.Qt.AlignHCenter)
self.header().setStretchLastSection(False)
self.header().setResizeMode(0, QtGui.QHeaderView.Stretch)
self.setIndentation(60)
self.setColumnWidth(0, 200)
myQCustomDelegate = QCustomDelegate()
self.setItemDelegate(myQCustomDelegate)
self.connect(myQCustomDelegate, QtCore.SIGNAL('requestNewPath'), self.getNewPath)
def addMenu (self, script = 'echo path_to_script', chunksize = 1, mem = 200, parentQTreeWidgetItem = None):
if parentQTreeWidgetItem == None:
parentQTreeWidgetItem = self.invisibleRootItem()
currentQTreeWidgetItem = QtGui.QTreeWidgetItem(parentQTreeWidgetItem)
currentQTreeWidgetItem.setData(0, QtCore.Qt.EditRole, script)
currentQTreeWidgetItem.setData(1, QtCore.Qt.EditRole, chunksize)
currentQTreeWidgetItem.setData(2, QtCore.Qt.EditRole, mem)
currentQTreeWidgetItem.setFlags(currentQTreeWidgetItem.flags() | QtCore.Qt.ItemIsEditable)
for i in range(self.columnCount()):
currentQSize = currentQTreeWidgetItem.sizeHint(i)
currentQTreeWidgetItem.setSizeHint(i, QtCore.QSize(currentQSize.width(), currentQSize.height() + 30))
currentQTreeWidgetItem.setExpanded(True)
return currentQTreeWidgetItem
def getNewPath (self, indexQModelIndex):
currentQTreeWidgetItem = self.itemFromIndex(indexQModelIndex)
pathQString = QtGui.QFileDialog.getOpenFileName (
self, 'Select Script file', '', '.py Files (*.py);;Executable Files (*)')
if not pathQString.isEmpty():
currentQTreeWidgetItem.setData(indexQModelIndex.column(), QtCore.Qt.EditRole, pathQString)
class QCustomQDialog (QtGui.QDialog):
def __init__ (self, parent = None):
super(QCustomQDialog, self).__init__(parent)
self.myQCustomTreeWidget = QCustomTreeWidget(self)
self.addQPushButton = QtGui.QPushButton('Add', self)
self.connect(self.addQPushButton, QtCore.SIGNAL('released()'), self.myQCustomTreeWidget.addMenu)
self.cssQPlainTextEdit = QtGui.QPlainTextEdit(self)
self.connect(self.cssQPlainTextEdit, QtCore.SIGNAL('textChanged()'), self.updateCss)
self.rootDecorationCBQCheckBox = QtGui.QCheckBox('Root is decorated')
self.connect(self.rootDecorationCBQCheckBox, QtCore.SIGNAL('stateChanged(int)'), self.updateRootDecorated)
self.updateRootDecorated(self.rootDecorationCBQCheckBox.checkState())
self.indentationQSlider = QtGui.QSlider(self)
self.indentationQSlider.setOrientation(QtCore.Qt.Horizontal)
self.indentationQSlider.setRange(0, 100)
self.indentationQSlider.setValue(20)
self.connect(self.indentationQSlider, QtCore.SIGNAL('valueChanged(int)'), self.alterIndentation)
self.alterIndentation(self.indentationQSlider.value())
self.layoutQVBoxLayout = QtGui.QVBoxLayout()
self.layoutQVBoxLayout.addWidget(self.myQCustomTreeWidget)
self.layoutQVBoxLayout.addWidget(self.addQPushButton)
self.layoutQVBoxLayout.addWidget(self.cssQPlainTextEdit)
self.downMenuQHBoxLayout = QtGui.QHBoxLayout()
self.downMenuQHBoxLayout.addWidget(self.rootDecorationCBQCheckBox)
self.downMenuQHBoxLayout.addWidget(self.indentationQSlider)
self.layoutQVBoxLayout.addLayout(self.downMenuQHBoxLayout)
self.layoutQVBoxLayout.setStretchFactor(self.myQCustomTreeWidget, 1)
self.setLayout(self.layoutQVBoxLayout)
self.resize(480, 640)
_ = self.myQCustomTreeWidget.addMenu()
_ = self.myQCustomTreeWidget.addMenu()
currentQTreeWidgetItem = self.myQCustomTreeWidget.addMenu()
self.myQCustomTreeWidget.addMenu(parentQTreeWidgetItem = currentQTreeWidgetItem)
self.myQCustomTreeWidget.addMenu(parentQTreeWidgetItem = currentQTreeWidgetItem)
currentQTreeWidgetItem = self.myQCustomTreeWidget.addMenu()
currentQTreeWidgetItem = self.myQCustomTreeWidget.addMenu(parentQTreeWidgetItem = currentQTreeWidgetItem)
currentQTreeWidgetItem = self.myQCustomTreeWidget.addMenu(parentQTreeWidgetItem = currentQTreeWidgetItem)
_ = self.myQCustomTreeWidget.addMenu()
def updateCss (self):
self.myQCustomTreeWidget.setStyleSheet(self.cssQPlainTextEdit.toPlainText())
def alterIndentation (self, value):
self.myQCustomTreeWidget.setIndentation(value)
self.myQCustomTreeWidget.updateGeometries()
def updateRootDecorated (self, state):
if state == QtCore.Qt.Checked:
self.myQCustomTreeWidget.setRootIsDecorated(True)
else:
self.myQCustomTreeWidget.setRootIsDecorated(False)
self.myQCustomTreeWidget.updateGeometries()
app = QtGui.QApplication([])
myQCustomQDialog = QCustomQDialog()
myQCustomQDialog.show()
sys.exit(app.exec_())
You are not the only one who has experienced this problem. I strongly
recommend using `QItemDelegate`.
The document says:
> If you want to display custom dynamic content or implement a custom editor
> widget, use QTreeView and subclass `QItemDelegate` instead.
And you (or the user) want the 'editors' always visible. `QItemDelegate` can
do that: implement `QItemDelegate.paint(self, QPainter painter,
QStyleOptionViewItem option, QModelIndex index)` (reference linked at the
end), along with the other methods such as `createEditor`, `setEditorData`, etc.
Example of implementing this class:
import sys
from PyQt4 import QtCore, QtGui
from functools import partial
class QCustomDelegate (QtGui.QItemDelegate):
def createEditor (self, parentQWidget, optionQStyleOptionViewItem, indexQModelIndex):
column = indexQModelIndex.column()
if column == 1:
editorQWidget = QtGui.QSpinBox(parentQWidget)
editorQWidget.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter)
editorQWidget.setMinimum(0)
editorQWidget.setMaximum(100)
return editorQWidget
elif column == 2:
editorQWidget = QtGui.QLineEdit(parentQWidget)
editorQWidget.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter)
return editorQWidget
elif column == 3:
editorQWidget = QtGui.QPushButton(parentQWidget)
return editorQWidget
else:
return QtGui.QItemDelegate.createEditor(self, parentQWidget, optionQStyleOptionViewItem, indexQModelIndex)
def setEditorData (self, editorQWidget, indexQModelIndex):
column = indexQModelIndex.column()
if column == 1:
value, _ = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toInt()
editorQWidget.setValue(value)
elif column == 2:
textQString = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toString()
editorQWidget.setText(textQString)
elif column == 3:
textQString = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toString()
self.connect(editorQWidget, QtCore.SIGNAL('released()'), partial(self.requestNewPath, indexQModelIndex))
editorQWidget.setText(textQString)
else:
QtGui.QItemDelegate.setEditorData(self, editorQWidget, indexQModelIndex)
def setModelData (self, editorQWidget, modelQAbstractItemModel, indexQModelIndex):
column = indexQModelIndex.column()
if column == 1:
value = editorQWidget.value()
modelQAbstractItemModel.setData(indexQModelIndex, value, QtCore.Qt.EditRole)
elif column == 2:
textQString = editorQWidget.text()
modelQAbstractItemModel.setData(indexQModelIndex, textQString, QtCore.Qt.EditRole)
elif column == 3:
textQString = editorQWidget.text()
modelQAbstractItemModel.setData(indexQModelIndex, textQString, QtCore.Qt.EditRole)
else:
QtGui.QItemDelegate.setModelData(self, editorQWidget, modelQAbstractItemModel, indexQModelIndex)
def updateEditorGeometry(self, editorQWidget, optionQStyleOptionViewItem, indexQModelIndex):
column = indexQModelIndex.column()
if column == 1:
editorQWidget.setGeometry(optionQStyleOptionViewItem.rect)
elif column == 2:
editorQWidget.setGeometry(optionQStyleOptionViewItem.rect)
elif column == 3:
editorQWidget.setGeometry(optionQStyleOptionViewItem.rect)
else:
QtGui.QItemDelegate.updateEditorGeometry(self, editorQWidget, optionQStyleOptionViewItem, indexQModelIndex)
def requestNewPath (self, indexQModelIndex):
self.emit(QtCore.SIGNAL('requestNewPath'), indexQModelIndex)
def paint (self, painterQPainter, optionQStyleOptionViewItem, indexQModelIndex):
column = indexQModelIndex.column()
if column == 1:
value, _ = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toInt()
textQStyleOptionViewItem = optionQStyleOptionViewItem
textQStyleOptionViewItem.displayAlignment = QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter
currentQRect = QtCore.QRect(optionQStyleOptionViewItem.rect)
currentQRect.setWidth(currentQRect.width() - 22)
self.drawDisplay(painterQPainter, textQStyleOptionViewItem, currentQRect, QtCore.QString(str(value)));
spinBoxQStyleOptionSpinBox = QtGui.QStyleOptionSpinBox()
spinBoxQStyleOptionSpinBox.rect = QtCore.QRect(optionQStyleOptionViewItem.rect)
QtGui.QApplication.style().drawComplexControl(QtGui.QStyle.CC_SpinBox, spinBoxQStyleOptionSpinBox, painterQPainter)
elif column == 2:
textQStyleOptionViewItem = optionQStyleOptionViewItem
textQStyleOptionViewItem.displayAlignment = QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter
QtGui.QItemDelegate.paint(self, painterQPainter, textQStyleOptionViewItem, indexQModelIndex)
elif column == 3:
textQString = indexQModelIndex.model().data(indexQModelIndex, QtCore.Qt.EditRole).toString()
buttonQStyleOptionButton = QtGui.QStyleOptionButton()
buttonQStyleOptionButton.rect = QtCore.QRect(optionQStyleOptionViewItem.rect)
buttonQStyleOptionButton.text = textQString
buttonQStyleOptionButton.state = QtGui.QStyle.State_Active
QtGui.QApplication.style().drawControl(QtGui.QStyle.CE_PushButton, buttonQStyleOptionButton, painterQPainter)
else:
QtGui.QItemDelegate.paint(self, painterQPainter, optionQStyleOptionViewItem, indexQModelIndex)
class QCustomTreeWidget (QtGui.QTreeWidget):
def __init__(self, parent = None):
super(QCustomTreeWidget, self).__init__(parent)
self.setDragEnabled(True)
self.setDragDropMode(QtGui.QAbstractItemView.InternalMove)
self.setColumnCount(4)
myQCustomDelegate = QCustomDelegate()
self.setItemDelegate(myQCustomDelegate)
self.connect(myQCustomDelegate, QtCore.SIGNAL('requestNewPath'), self.getNewPath)
def addMenu (self, title, value, text, path, parentQTreeWidgetItem = None):
if parentQTreeWidgetItem == None:
parentQTreeWidgetItem = self.invisibleRootItem()
currentQTreeWidgetItem = QtGui.QTreeWidgetItem(parentQTreeWidgetItem)
currentQTreeWidgetItem.setData(0, QtCore.Qt.EditRole, title)
currentQTreeWidgetItem.setData(1, QtCore.Qt.EditRole, value)
currentQTreeWidgetItem.setData(2, QtCore.Qt.EditRole, text)
currentQTreeWidgetItem.setData(3, QtCore.Qt.EditRole, path)
currentQTreeWidgetItem.setFlags(currentQTreeWidgetItem.flags() | QtCore.Qt.ItemIsEditable)
for i in range(self.columnCount()):
currentQSize = currentQTreeWidgetItem.sizeHint(i)
currentQTreeWidgetItem.setSizeHint(i, QtCore.QSize(currentQSize.width(), currentQSize.height() + 40))
def getNewPath (self, indexQModelIndex):
currentQTreeWidgetItem = self.itemFromIndex(indexQModelIndex)
pathQStringList = QtGui.QFileDialog.getOpenFileNames()
if pathQStringList.count() > 0:
textQString = pathQStringList.first()
currentQTreeWidgetItem.setData(indexQModelIndex.column(), QtCore.Qt.EditRole, textQString)
class QCustomQWidget (QtGui.QWidget):
def __init__ (self, parent = None):
super(QCustomQWidget, self).__init__(parent)
self.myQCustomTreeWidget = QCustomTreeWidget(self)
self.allQHBoxLayout = QtGui.QHBoxLayout()
self.allQHBoxLayout.addWidget(self.myQCustomTreeWidget)
self.setLayout(self.allQHBoxLayout)
self.myQCustomTreeWidget.addMenu('1', 10, 'A', 'home/Meyoko/Desktop')
self.myQCustomTreeWidget.addMenu('4', 14, 'B', 'home/Kitsune/Desktop')
self.myQCustomTreeWidget.addMenu('7', 17, 'C', 'home/Elbert/Desktop')
app = QtGui.QApplication([])
myQCustomQWidget = QCustomQWidget()
myQCustomQWidget.show()
sys.exit(app.exec_())
[Spin Box Delegate Example (C++)](http://doc.qt.digia.com/4.6/itemviews-
spinboxdelegate.html)
[`QItemDelegate`](http://pyqt.sourceforge.net/Docs/PyQt4/qitemdelegate.html)
|
Getting Broken Pipe failure when sending Multipart/form-data
Question: I am trying to set up a server for handling multipart/form-data in Python. I
am trying to hit my Python server with a curl command.
I am getting a Broken Pipe failure error.
Can someone please help?
PYTHON SERVER CODE :
from BaseHTTPServer import BaseHTTPRequestHandler
import cgi
import cgitb
cgitb.enable(display=0,logdir="")
class PostHandler(BaseHTTPRequestHandler):
def do_POST(self):
print self.headers
expect = self.headers['Expect']
self.protocol_version='HTTP/1.1'
print "Expect %s " % (expect)
if expect.startswith('100') :
print "Entered Expect section %s " % (self.protocol_version)
self.send_response(100)
print self.protocol_version
#self.send_header("Content-Length","0")
self.end_headers()
else:
con_length = int(self.headers['Content-Length'])
print con_length
content_type = self.headers['Content-Type']
print content_type
if content_type.startswith('multipart/form-data') :
self.send_response(100)
self.end_headers()
self.wfile.write("Data:Krishnan");
#self.rfile.read(con_length)
else :
print self.rfile.read(con_length)
#Send the Response
self.send_response(200)
self.end_headers()
self.wfile.write("Data:Krishnan")
return
if __name__ == '__main__':
from BaseHTTPServer import HTTPServer
server = HTTPServer(('localhost',8000),PostHandler)
print "Started Serving on HTTP Port 8000"
server.serve_forever()
CURL COMMAND TO HIT SERVER :
`curl -iv http://localhost:8000 -F myfile=@"/home/local/krishnan/messages.gz"`
CURL RESPONSE :
* About to connect() to localhost port 8000 (#0)
* Trying 127.0.0.1... connected
> POST / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:8000
> Accept: */*
> Content-Length: 4105
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------1ca123daf202
>
< HTTP/1.1 100 Continue
HTTP/1.1 100 Continue
< Server: BaseHTTP/0.3 Python/2.7.3
Server: BaseHTTP/0.3 Python/2.7.3
< Date: Wed, 13 Aug 2014 15:21:23 GMT
Date: Wed, 13 Aug 2014 15:21:23 GMT
* Send failure: Broken pipe
* Closing connection #0
curl: (55) Send failure: Broken pipe
Please help me !
Answer: When you detect the `Expect: 100-continue`, you are just responding with a
`100-continue` and then closing the connection. After sending the
`100-continue`, you need to read the request body and, if it's fine, respond
with a `200 OK`.
def do_POST(self):
....
self.send_response(100)
self.end_headers()
con_length = int(self.headers['Content-Length'])
data = self.rfile.read(con_length)
self.send_response(200)
self.end_headers()
Also have a look at [How to handle "100 continue" HTTP
message?](https://stackoverflow.com/questions/2964687/how-to-
handle-100-continue-http-message)
|
Importing module implicitly
Question: In a directory, I have two files: `A.py`, and `B.py`. Here is their content:
# A.py
import numpy
x = numpy.array([1, 2, 3])
print x
# B.py
import A
y = numpy.array([4, 5, 6])
print y
From Command Prompt (Windows 8), I run the following command:
python A.py
which gives the output:
[1, 2, 3]
But when I run the following command:
python B.py
I get the output:
NameError: name 'numpy' is not defined
Why is this? Shouldn't numpy be imported implicitly into B via A?
Answer: When you do
import A
That brings in all of the exportable functions and variables from the file
`A.py`, but under the namespace prefix `A`.
Assuming you don't want to do the import of numpy again in B (the normal
option), your code then needs to be one of
import A
y = A.numpy.array([4, 5, 6])
Or:
from A import *
The former gets `numpy` via A, with the prefix of A (since that's where it was
first imported), the latter explicitly imports all the things from A without a
new prefix. The downside of the latter is it can bring in the rest of the
kitchen sink too, so isn't generally a good plan for complex modules.
Normally though, if B needs numpy, it would import that directly. If A is
making some changes to a module that it then exports, you would normally
expect to import and reference it explicitly, to flag up to everyone looking
at the code later that you're not dealing with the regular version of the
library.
|
python No plot or NameError UPDATE plot is visible but not as it should be
Question: I am attempting to create a "rolling spline" using polynomials via polyfit and
polyval.
However I either get an error that "offset" is not defined... or, the spline
doesn't plot.
My code is below; please offer suggestions or insights. I am a polyfit newbie.
import numpy as np
from matplotlib import pyplot as plt

x = np.array([ 3893.50048173, 3893.53295003, 3893.5654186 , 3893.59788744,
               3893.63035655, 3893.66282593, 3893.69529559, 3893.72776551,
               3893.76023571, 3893.79270617, 3893.82517691, 3893.85764791,
               3893.89011919, 3893.92259074, 3893.95506256, 3893.98753465,
               3894.02000701, 3894.05247964, 3894.08495254])
y = np.array([ 0.3629712 , 0.35187397, 0.31805825, 0.3142261 , 0.35417492,
               0.34981215, 0.24416184, 0.17012087, 0.03218199, 0.04373861,
               0.08108644, 0.22834105, 0.34330638, 0.33380814, 0.37836754,
               0.38993407, 0.39196328, 0.42456769, 0.44078106])
e = np.array([ 0.0241567 , 0.02450775, 0.02385632, 0.02436235, 0.02653321,
               0.03023715, 0.03012712, 0.02640219, 0.02095554, 0.020819  ,
               0.02126918, 0.02244543, 0.02372675, 0.02342232, 0.02419184,
               0.02426635, 0.02431787, 0.02472135, 0.02502038])

xk = np.array([])
yk = np.array([])

w0 = np.where((y<=(e*3))&(y>=(-e*3)))
w1 = np.where((y<=(1+e*3))&(y>=(1-e*3)))

mask = np.ones(x.size)
mask[w0] = 0
mask[w1] = 0

for i in range(0,x.size):
    if mask[i] == 0:
        if ((abs(y[i]) < abs(e[i]*3))and(abs(y[i])<(abs(y[i-1])-abs(e[i])))):
            imin = i-2
            imax = i+3
            if imin < 0:
                imin = 0
            if imax >= x.size:
                imax = x.size
            offset = np.mean(x)
            for order in range(20):
                coeff = np.polyfit(x-offset,y,order)
                model = np.polyval(coeff,x-offset)
                chisq = ((model-y)/e)**2
                chisqred = np.sum(chisq)/(x.size-order-1)
                if chisqred < 1.5:
                    break
            xt = x[i]
            yt = np.polyval(coeff,xt-offset)
        else:
            imin = i-1
            imax = i+2
            if imin < 0:
                imin = 0
            if imax >= x.size:
                imax = x.size
            offset = np.mean(x)
            for order in range(20):
                coeff = np.polyfit(x-offset,y,order)
                model = np.polyval(coeff,x-offset)
                chisq = ((model-y)/e)**2
                chisqred = np.sum(chisq)/(x.size-order-1)
                if chisqred < 1.5:
                    break
            xt = x[i]
            yt = np.polyval(coeff,xt-offset)
        xk = np.append(xk,xt)
        yk = np.append(yk,yt)
        #print order,chisqred

################################
plt.plot(x,y,'ro')
plt.plot(xk+offset,yk,'b-') # This is the non-plotting plot
plt.show()
################################
################################
* * *
## Update
* * *
So I edited the code, removing all of the if conditions that do not apply to
this small sample of data.
I also added the changes that I made which allow the code to plot the desired
points... **however**, now that the plot is visible, I have a new problem.
The plot isn't a polynomial of the order the code is telling me it should be.
Before the plot command, I added a print to display the order of the
polynomial and the chisqred, just to be certain that it was working.
Answer: First, thank you for providing a self-contained sample (not many newbies do
that)! If you want to improve your question, you should remove all debugging
code from the sample, as now it clutters the code. The code is quite long and
not very self-explanatory. (At least to me - the problem may be between my
ears, as well.)
* * *
Let us unroll the problem from the end. The proximal reason why you get an
empty plot is that `xk` and `yk` are empty arrays.
Why is that? That is because you have 19 points, and thus your for loop is
essentially:
for i in range(12, 19-1-12):
    ...
There is nothing to iterate from 12..6! So your loop actually runs through
exactly zero times and nothing is ever appended to `xk` and `yk`.
The same reasoning explains the problem with `offset`. If the loop is never
run through, there is no `offset` defined in your plot command (`xk+offset`),
hence the `NameError`.
* * *
This was the simple part. However, I do not quite understand your code.
Especially the loops where you loop `order` from 0..19 look strange, as only
the result from the last cycle will be used. Maybe there is something to fix?
(If you still have problems with the code after this analysis, please fix the
things you can, simplify the code as much as possible, and edit your question.
Then we can have another look into this!)
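As an illustration of that last point, and purely as a guess about your intent: the `order` loops fit the whole dataset every time, even though you compute a window `imin:imax`. A local fit might look like this sketch (same variable names as in your loop):

    # hypothetical local-window fit; imin, imax, offset as computed in the loop
    xw, yw, ew = x[imin:imax], y[imin:imax], e[imin:imax]
    for order in range(20):
        # note: order must stay below xw.size - 1 for the reduced chi-square to make sense
        coeff = np.polyfit(xw - offset, yw, order)
        model = np.polyval(coeff, xw - offset)
        chisqred = np.sum(((model - yw) / ew)**2) / (xw.size - order - 1)
        if chisqred < 1.5:
            break
    xt = x[i]
    yt = np.polyval(coeff, xt - offset)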
|
Matplotlib doesn't show proper font on ubuntu 14.04
Question: I installed matplotlib with all dependencies on ubuntu 14.04 from source
Processing dependencies for matplotlib==1.3.1
Searching for nose==1.3.3
Best match: nose 1.3.3
Processing nose-1.3.3-py2.7.egg
Removing nose 1.3.1 from easy-install.pth file
nose 1.3.3 is already the active version in easy-install.pth
Installing nosetests script to /usr/local/bin
Installing nosetests-2.7 script to /usr/local/bin
Using /usr/local/lib/python2.7/dist-packages/nose-1.3.3-py2.7.egg
Searching for pyparsing==2.0.1
Best match: pyparsing 2.0.1
Adding pyparsing 2.0.1 to easy-install.pth file
Using /usr/lib/python2.7/dist-packages
Searching for tornado==3.1.1
Best match: tornado 3.1.1
tornado 3.1.1 is already the active version in easy-install.pth
Using /usr/lib/python2.7/dist-packages
Searching for python-dateutil==1.5
Best match: python-dateutil 1.5
python-dateutil 1.5 is already the active version in easy-install.pth
Using /usr/lib/python2.7/dist-packages
Searching for numpy==1.8.1
Best match: numpy 1.8.1
numpy 1.8.1 is already the active version in easy-install.pth
When I try to plot something, matplotlib doesn't show the x ticks correctly,
as you can see here: <http://bayanbox.ir/id/4106587232464013527?view>
Source:
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import numpy as np
df=pd.DataFrame({'Val': np.random.random(50)})
df.index=pd.date_range('2000-01-02', periods=50)
plt.plot_date(df.index.to_pydatetime(), df.Val, fmt='-')
ax=plt.gca()
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%y%b\n%d'))
plt.show()
Answer: Matplotlib uses
[`strftime`](https://docs.python.org/2/library/datetime.html#strftime-and-
strptime-behavior) for working with date formatting in ticks. `strftime` will
use your computer's locale to choose the correct version of certain date
formats, for instance "January" vs "Januar" for English vs German.
Your issue arises because your default language is Persian. Matplotlib is
trying to plot using Persian but is failing at encoding it properly (resulting
in the squares).
Your choices to fix this are to either change the default language of your
computer, or to set the `locale` of your Python code using the
[`locale`](https://docs.python.org/2/library/locale.html) builtin module.
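For example, to force an English date locale just for `strftime` (the exact locale string varies by system; `'en_US.UTF-8'` is a common value on Linux):

    import locale

    # LC_TIME is the category that controls strftime's month/day names
    locale.setlocale(locale.LC_TIME, 'en_US.UTF-8')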
|
igraph's Gomory–Hu tree not working?
Question: When I try the following with `python-igraph`:
from igraph import *
g= Graph()
g.add_vertices(3)
g.vs["name"] = ["0", "1", "3"]
g.add_edge("0", "1", weight=0.0)
g.add_edge("1", "3", weight=10.0)
g.add_edge("0", "3", weight=10.0)
t = g.gomory_hu_tree(capacity="weight")
print t
I get the output:
IGRAPH UNW- 3 2 --
+ attr: name (v), flow (e), weight (e)
+ edges (vertex names):
0--1, 1--3
This makes no sense, as vertex "3" is connected to the other vertices through
edges with high weight. Therefore the minimum cut tree `t` should be a star
with center "3". This is obviously not the case...
Answer: The algorithm is working fine. The minimum cost to disconnect any two nodes is
10.0. All trees which are subgraphs of this graph are valid Gomory-Hu trees.
In fact, this is the case for any K3 which has two edges of identical weight
and a third of less weight.
Consider the brute-force approach. Since the minimum cost to disconnect any
two nodes is 10.0, the complete minimum cut graph is the three nodes connected
with edges of weight 10.0. By symmetry, this graph has three equally valid
Gomory-Hu trees consisting of any two of the edges of the complete minimum cut
graph.
So 0--1--3, 1--3--0, and 3--0--1 are all acceptable Gomory-Hu trees of the
graph above.
In fact, for any graph of n nodes which has a complete minimum cut graph with
all edges equal, the Gomory-Hu tree is any tree which connects every node.
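You can verify the pairwise minimum cut values yourself; with the graph from the question (vertex indices 0, 1, 2), something like this should print 10.0 for every pair:

    for u, v in [(0, 1), (0, 2), (1, 2)]:
        print u, v, g.maxflow(u, v, capacity="weight").value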
|
Using If statements in Python: If datetime.day == WEDNESDAY then call wed_module()
Question: Okay I want to call a different module for each day of the week in Python. My
code right now looks like this:
def today_Shift():
    import time
    import datetime
    import calendar

    print "Day of week:", datetime.date.today().strftime("%A")
    #This gives me the day of the week.
    #Now I need to know what to compare to what, to determine if for example
    #the day of the week is == wednesday.
    #If the day of the week is == Wednesday Then
    #    call wed_info
    #elif:
    #    call tues_info
    #etc.
Answer: You can have specific functions for each day of the week that do whatever.
def wedFunc():
    print "This is for wednesday"

def friFunc():
    print "This is for friday"
Then you can make a dictionary that maps the day to the function
dayFunctionDict = {"Wednesday" : wedFunc, "Friday" : friFunc}
Then you call the appropriate function:
>>> dayFunctionDict['Wednesday']()
This is for wednesday
Using `datetime`
>>> dayFunctionDict[datetime.date.today().strftime("%A")]()
This is for wednesday
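If the lookup might fail (for instance a day with no function registered), `dict.get` with a fallback avoids a `KeyError`; the `defaultFunc` name here is just an illustration, and `datetime` is assumed to be imported as above:

    def defaultFunc():
        print "No function registered for this day"

    dayFunctionDict.get(datetime.date.today().strftime("%A"), defaultFunc)()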
|
DLL load failed: %1 is not a valid Win32 application for NumPy
Question: I downloaded NumPy through Anaconda and copied and pasted the NumPy file from
there to the site-package file in the Python 27 folder.
I was trying to import NumPy from a 2.7.5 shell, and it gave me an error:
> DLL load failed: %1 is not a valid Win32 application.
I tried to research ways to get around it like verifying that I downloaded the
right version (64-bit, Python 2.7) and even tried downloading it again, but it
still gave me the same error.
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    import numpy
  File "C:\Python27\lib\site-packages\numpy\__init__.py", line 168, in <module>
    from . import add_newdocs
  File "C:\Python27\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "C:\Python27\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
    from .type_check import *
  File "C:\Python27\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
    import numpy.core.numeric as _nx
  File "C:\Python27\lib\site-packages\numpy\core\__init__.py", line 6, in <module>
    from . import multiarray
ImportError: DLL load failed: %1 is not a valid Win32 application.
How can I fix this?
Answer: > I downloaded NumPy through Anaconda and **copied and pasted the NumPy file
> from there to the site-package file in the Python 27 folder**.
Don't do this! [Follow the instructions on the Continuum
website](http://docs.continuum.io/anaconda/install.html#windows-install) and
install using the `.exe` file.
Anaconda is a fully self-contained environment that includes its own
installers for Python - you don't need to do anything to your system-wide
`site-packages` directory.
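Once installed that way, a quick sanity check from the interpreter shows which copy of NumPy you are actually importing:

    import numpy
    print numpy.__version__
    print numpy.__file__   # should point inside the Anaconda install, not C:\Python27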
|
How can Flask/Python import config file that is one level up higher
Question: I'm writing a Flask web app and I ran into a small problem that really bothers
me. This is my microblog.py file:
from flask import Flask
from flask import render_template, flash, redirect
from forms import LoginForm
app = Flask(__name__)
app.config.from_object('config')
@app.route('/')
@app.route('/index')
...
And this is my project structure:
microblog\
    flask\
        <virtual environment files>
    app\
        static\
        templates\
        __init__.py
        microblog.py
        forms.py
    tmp\
    run.py
    config.py
My question is: why does the config.py file have to be out of the app folder,
one level up? I can't really tell myself that it's "magic", so I have to
find the answer, but I've failed miserably so far.
My findings suggest that it has something to do with the import system
(Python imports are really weird), but the thing is, my microblog.app.root_path
is ... (/Users/myname/microblog/app), so there is no way it could have seen the
config file. What's going on?
If you know the answer, please rescue my poor soul from this madness. Thank
you in advance!
Answer: Instead of using a Python file for your config, why not use a format more
suitable for these kind of work. One such candidate is JSON, and you could do
something like this (as a variation on what the [Step 2 of the official
tutorial suggests](http://flask.pocoo.org/docs/tutorial/setup/)):
import json

with open('../config.json') as f:
    config = json.load(f)

app.config.update(config)
And instead of the `config.py` file you would have a `config.json` file (at
the same location) that might look like
{
    "DEBUG": true,
    "SECRET_KEY": "development key"
}
Of course, you don't have to use `json`; there are other configuration file
formats out there ready for you to use, but it's up to you to look up how
to use them.
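As a side note, if you would rather keep the Python config file where it is, Flask can also load it by file path instead of by module name, which sidesteps the import system entirely; a small sketch:

    import os

    # build an explicit path to the config file one level above this module
    config_path = os.path.join(os.path.dirname(__file__), '..', 'config.py')
    app.config.from_pyfile(config_path)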
|
Does igraph's gomory_hu_tree calculate the minimum cut tree?
Question: I'm trying to implement [this graph clustering algorithm (sec.
3.2)](http://projecteuclid.org/euclid.im/1109191029) with python-igraph. As I
do not want to calculate the minimum cut tree myself, I'm trying to use the
`gomory_hu_tree()` method. To play around with this method (and to provide a
MWE), I wrote the following:
from igraph import *
g= Graph()
g.add_vertices(4)
g.vs["name"] = ["0", "1", "2", "artificial"]
g.add_edge("0", "1", weight=10.0)
g.add_edge("0", "2", weight=20.0)
g.add_edge("2", "1", weight=30.0)
g.add_edge("artificial", "0", weight=100.0)
g.add_edge("artificial", "1", weight=100.0)
g.add_edge("artificial", "2", weight=100.0)
t = g.gomory_hu_tree(capacity="weight")
print t.es["flow"]
print
print t
I get the following output:
[130.0, 140.0, 150.0]
IGRAPH UNW- 4 3 --
+ attr: name (v), flow (e), weight (e)
+ edges (vertex names):
0--1, 1--2, 2--artificial
But that is not a minimum cut tree! If the tree were like this, then the
removal of the edge between `1` and `2` would yield a partition of the graph
into the two subsets `{0, 1}` and `{2, t}` at a cost of 250. However, the
right answer is a cut into `{1}` and `{2, 0, t}` at a cost of just 140.
(By "cost" I mean the value of the respective cut.)
So the one (and only) right answer for the min cut tree would have been

    0--artificial, 1--artificial, 2--artificial
What did I get wrong? Is it possibly wrong to use the `gomory_hu_tree()`
method in this context?
_Note:_ I originally [asked this question in a completely wrong
way](http://stackoverflow.com/questions/25297470/igraphs-gomory-hu-tree-not-
working).
Answer: Here are a couple of definitions:
1) A tree is called a **flow equivalent tree** if and only if for each pair of
nodes (u, v) the maximal flow between these two nodes in the tree is the same
as in the original graph (and this implies that the cost of the minimum cut is
the same).

2) A tree satisfies the **cut property** if and only if for each pair of nodes
(u, v) the minimum cut in the tree is the same as in the original graph (not
just the cost, but the two subsets as well).

So the question is: what is a Gomory-Hu tree? There are two common
definitions:

1) A flow equivalent tree.

2) A flow equivalent tree that satisfies the cut property.
Even though it is not documented what definition is used in this library, it
seems that they used the first one. So it is only guaranteed that the cost of
the cut is the same, not the cut itself. If you need to find the cut itself,
you can use the `maxflow` method for a fixed pair of nodes.
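For example, with the graph from the question, a sketch of recovering the actual minimum cut between vertex "1" and the artificial vertex:

    cut = g.mincut(g.vs.find(name="1").index,
                   g.vs.find(name="artificial").index,
                   capacity="weight")
    print cut.value      # cost of the cut (should be 140.0)
    print cut.partition  # the two vertex sets of the cut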
|
Log in with Python and Requests
Question: I've been trying to access a website with no API. I want to retreive my
current "queue" from the website. But it won't let me access this part of the
website if i'm not logged in. Here is my code :
login_data = {
    'action': 'https://www.crunchyroll.com/?a=formhandler',
    'name': 'my_username',
    'password': 'my_password'
}

import requests

with requests.Session() as s:
    s.post('https://www.crunchyroll.com/login', data=login_data)
    ck = s.cookies
    r = s.get('https://www.crunchyroll.com/home/queue')
    print r.text
Right now, I get a page :
<html lang="en">
  <head>
    <title>Redirecting...</title>
    <meta http-equiv="refresh" content="0;url=http://www.crunchyroll.com/home/queue" />
  </head>
  <body>
    <script type="text/javascript">
      document.location.href="http:\/\/www.crunchyroll.com\/home\/queue";
    </script>
  </body>
</html>
I think it should work, but I'm only getting the redirect page... How am I
supposed to get past that?
Thanks!
Answer: The redirect is happening because you are not logging into the site properly -
you have the wrong form URL for the POST request, and you're not POSTing all
the form data the site is expecting.
You can figure out what is required to login by looking at the source code for
`https://www.crunchyroll.com/login`. The parts that matter are the `<form>`
tag and `<input>` tags:
<form id="RpcApiUser_Login" method="post" action="https://www.crunchyroll.com/?a=formhandler">
  <input type="hidden" name="formname" value="RpcApiUser_Login" />
  <input type="text" name="name" value="my_user_name_goes_here" /></td>
  <input type="password" name="password" value="my_password_goes_here" /></td>
</form>
What this means is that when you click Submit, there is a POST request to the
URL `https://www.crunchyroll.com/?a=formhandler`, with key/value pairs of data
like `formname=RpcApiUser_Login`. To replicate this in Python you need to POST
all the same pairs of data to that URL.
To learn more about CGI programming like this, [look
here](http://oreilly.com/openbook/cgi/ch04_01.html).
Try this Python code, it works:
import requests

login_data = {
    'name': 'my_username',
    'password': 'my_password',
    'formname': 'RpcApiUser_Login'
}

with requests.Session() as s:
    s.post('https://www.crunchyroll.com/?a=formhandler', data=login_data)
    r = s.get('http://www.crunchyroll.com/home/queue')
    print r.text
|
python regex, optionally match a word
Question: I have the following regex:
PackageQuantity:\b|Servings?PerContainer:\b|Servings?PerPackage:\b(\d+)
that is supposed to match the following text:

    ServingsPerContainer:about11

Whitespace is escaped for readability.
The idea is that the words `Package Quantity`, `Servings per container` or
`servings per package` can be followed by any word (exactly one word), such as
`approx.` or `about`.
Seems simple enough, but I couldn't find a solution, since the regex above
matches an empty string instead of the number.
pythonregex.com output:
>>> regex = re.compile("PackageQuantity:\b|Servings?PerContainer:\b|Servings?PerPackage:\b(\d+)",re.IGNORECASE)
>>> r = regex.search(string)
>>> r
<_sre.SRE_Match object at 0x672858ed0eef4da0>
>>> regex.match(string)
<_sre.SRE_Match object at 0x672858ed0ee8c6a8>
# List the groups found
>>> r.groups()
(None,)
# List the named dictionary objects found
>>> r.groupdict()
{}
# Run findall
>>> regex.findall(string)
[u'']
# Run timeit test
>>> setup = ur"import re; regex =re.compile("PackageQuantity:\b|Servings?PerContainer:\b|S ...
>>> t = timeit.Timer('regex.search(string)',setup)
>>> t.timeit(10000)
0.0259890556335
Answer: You are missing the optional word after the `:`
Either
[(PackageQuantity:|(Servings)?PerContainer:|(Servings)?PerPackage:)[a-zA-Z.]*(\d+)](http://regex101.com/r/vT0uY6/1)
or
[(PackageQuantity:|(Servings)?PerContainer:|(Servings)?PerPackage:)(about|approx.)?(\d+)](http://regex101.com/r/hT0jP3/1)
should do the trick, if your list of words is not too long.
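A quick sketch of the second variant in use (group 5 is the `(\d+)` capture):

    import re

    pat = re.compile(
        r'(PackageQuantity:|(Servings)?PerContainer:|(Servings)?PerPackage:)'
        r'(about|approx.)?(\d+)',
        re.IGNORECASE)

    m = pat.search('ServingsPerContainer:about11')
    if m:
        print m.group(5)  # -> 11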
|
when executing f2py fib1.f -m fib2 -h fib1.pyf I get the following error File " ^ SyntaxError: invalid syntax
Question: I am using `Mac 10.9` and running `Python 2.7.8`. Currently I am trying to use
`f2py`. I followed the example in the guide and typed
$ f2py -c fib1.f -m fib1
and I receive the following error
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin/f2py", line 3, in <module>
    import f2py2e
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/f2py2e/__init__.py", line 10, in <module>
    import f2py2e
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/f2py2e/f2py2e.py", line 26, in <module>
    import crackfortran
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/f2py2e/crackfortran.py", line 1586
    as=b['args']
I have also tried the following command

    $ f2py -c --help-fcompiler

and I receive the same error as above. I hope someone can help me. Regards
Answer: `as` is a reserved keyword in Python 2.6+.
Therefore trying to assign to it like this
as=b['args']
is a syntax error.
It's used in [exception
handling](https://docs.python.org/2/tutorial/errors.html#handling-exceptions)
and the [`with`
statement](https://docs.python.org/2/reference/compound_stmts.html#with)
(context managers).
In Python 2.5 you already get a deprecation warning if you're using it:
>>> as='foo'
<stdin>:1: Warning: 'as' will become a reserved keyword in Python 2.6
So that's some _really_ old code you're trying to run. You've basically got
two options:
* Use Python 2.5 or 2.4 to run it
* Or fix the code and replace the variable `as` with something else.
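The failure is easy to reproduce in any Python 2.6+ shell (output abbreviated):

    >>> as = 'foo'
    SyntaxError: invalid syntax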
|
Python3-ldap KeyError: 'attributes'
Question: Using Python3.4 with the `python3-ldap` module loaded. Using the code:
from ldap3 import Server, Connection, SEARCH_SCOPE_WHOLE_SUBTREE, AUTO_BIND_NO_TLS  #For title queries into LDAP

def GetTitle(u):
    print(u)
    t = []
    server = Server(DomanCtrlr)
    c = Connection(server,
                   auto_bind=AUTO_BIND_NO_TLS,
                   read_only=True,
                   check_names=True,
                   user = user,
                   password= password)
    c.search(search_base = 'dc=corp,dc=weyer,dc=pri',
             search_filter = '(&(samAccountName=' + u + '))',
             search_scope = SEARCH_SCOPE_WHOLE_SUBTREE,
             attributes = ['title'],
             paged_size = 5)
    for entry in c.response:
        print(entry['attributes']['title'])
        t = entry['attributes']['title']
    print(u, " : ", t)

users = ['user1', 'notAuser', 'user2']
for u in users:
    GetTitle(u)
I expect to get an error for notAuser, but instead I get this output:
user1
['CONTROL ROOM OPERATOR']
user1 : ['CONTROL ROOM OPERATOR']
Traceback (most recent call last):
  File "C:\Users\olsonma\Documents\ThreatMatrix_PY\LDAPTest.py", line 28, in <module>
    GetTitle(u)
  File "C:\Users\olsonma\Documents\ThreatMatrix_PY\LDAPTest.py", line 17, in GetTitle
    print(entry['attributes']['title'])
KeyError: 'attributes'
Since the line obviously executed I am not clear how the error is happening.
I have found a lot of articles about a similar error with the old
`python-ldap`, but the fix for python-ldap does not seem to be an option
for `python3-ldap`. Does anyone know 1.) why this is happening? 2.) how to make
it stop?
Answer: Prints have been removed from the code since I found the issue: for a user that does not exist, `c.response` apparently has no entry with an 'attributes' key, so the code below checks the length of the response before reading it.
from ldap3 import Server, Connection, SEARCH_SCOPE_WHOLE_SUBTREE, AUTO_BIND_NO_TLS  #For title queries into LDAP

def GetTitle(u):
    t = ''
    server = Server(DomainCtrlr)
    c = Connection(server,
                   auto_bind=AUTO_BIND_NO_TLS,
                   read_only=True,
                   check_names=True,
                   user = user,
                   password= password)
    c.search(search_base = 'dc=corp,dc=weyer,dc=pri',
             search_filter = '(&(samAccountName=' + u + '))',
             search_scope = SEARCH_SCOPE_WHOLE_SUBTREE,
             attributes = ['title'],
             paged_size = 5)
    if len(c.response) > 1:
        for entry in c.response:
            t = entry['attributes']['title']
    return t

users = ['lafrenh', 'userid', 'garlockb']
for u in users:
    title = GetTitle(u)
    print(title)
|
Python round with `n // 1`
Question: I was wondering if there is any reason not to use the `//` operator to round a
number to an integer. I didn't see much on this topic or really know what to
look for to find out more.
>>> from random import random
>>> random() * 20 // 1
1.0
>>> random() * 20 // 1
0.0
>>> random() * 20 // 1
16.0
>>> random() * 20 // 1
11.0
>>> random() * 20 // 1
0.0
Besides that, would I need to add 1 to the result (to get a range of 1-20
instead of 0-19)? Or does this actually result in the range 0-20?
Answer: The main reason not to use it would be that there are perfectly good in-built
functions `int` and `round` that already do this, which are likely to be
efficient and will not confuse someone reading your code.
Another reason as pointed out in the comments is that `//1` is equivalent to
`math.floor` rather than `int` or `round`. `-2.5 // 1` returns a float `-3.0`
whereas `int(-2.5)` returns an integer `-2`.
The fact that experienced programmers can be confused about what `//1` does,
illustrates nicely why it is better to use one of the existing functions
designed for this purpose -- they behave in ways that are clearly defined,
documented and consistent.
|
How to create new console sessions in Python and work with them
Question: I'm trying to figure out how to work with consoles in Python. Let's say, I
have a Python2 script. And this script should create 3 consoles (bash or any
other) and provide different commands to them.
Example:
* Console #1 will be responsible for telnetting
* Console #2 for pinging
* Console #3 will be responsible for new ssh access
What I've found so far is the **subprocess** module.
I can use this
import subprocess
term1 = subprocess.Popen(['open', '-a', 'Terminal'])
But I still cannot find how to send a command to **term1**
Thank you.
Answer: If you want the ability to communicate with your subprocess, it's best
to use `subprocess.Popen`.
<https://docs.python.org/2/library/subprocess.html#subprocess.Popen>
This way you can create a stdin and stdout to communicate with the process. As
shown in the link above, you simply add them to the `Popen` arguments:

    subprocess.Popen(args, stdin=PIPE, stdout=PIPE)

There is also `Popen.communicate(input='your input')`.
That will wait for the command to finish.
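A minimal sketch of that pattern, sending one command to a shell and reading its output (Python 2 syntax, matching the question):

    from subprocess import Popen, PIPE

    p = Popen(['/bin/sh'], stdin=PIPE, stdout=PIPE)
    out, err = p.communicate(input='echo hello from the child\n')
    print out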
|
Serial data over UDP Sockets in Python
Question: I may be going about this the wrong way but that's why I'm asking the
question.
I have a source of serial data that is connected to a SOC then streams the
serial data up to a socket on my server over UDP. The baud rate of the raw
data is 57600, I'm trying to use Python to receive and parse the data. I
tested that I'm receiving the data successfully on the port via the script
below (found here: <https://wiki.python.org/moin/UdpCommunication>)
import socket

UDP_IP = "MY IP"
UDP_PORT = My PORT

sock = socket.socket(socket.AF_INET, # Internet
                     socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))

while True:
    data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
    print "received message:", data
Since I'm not reading the data with the serial lib in Python or setting the
baud rate to read at, it comes through all garbled, as would be expected. My end
goal is to be able to receive and parse the data for server-side processing and
also have another client connect to the raw data stream piped back out from
the server (proxy) which is why I'm not processing the data directly from the
serial port on the device.
So my question is, how can I have Python treat the socket as a serial port
that I can set a baud rate on and #import serial and .read from? I can't seem
to find any examples online which makes me think I'm overlooking something
simple or am trying to do something stupid.
# sadface
Answer: You can't treat a socket as a serial line. A socket can only send and receive
data (data stream for TCP, packets for UDP). If you would need a facility to
control the serial line on the SOC you would need to build an appropriate
control protocol over the socket, i.e. either use another socket for control
like FTP does or use in-band control and distinguish between controlling and
data like HTTP does. And of course both sides of the connection have to
understand this protocol.
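If the goal is only to parse the sensor stream server-side, no baud rate is needed at all: the SOC already did the serial-to-bytes conversion, so you just accumulate the datagrams and parse them. A minimal sketch, assuming the device terminates each reading with a newline:

    buf = ''
    while True:
        data, addr = sock.recvfrom(1024)
        buf += data
        while '\n' in buf:
            line, buf = buf.split('\n', 1)
            print "reading:", line  # parse the individual reading here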
|
Cleanest data structure to use when interpreting data from neatly-structured user commands (in C++)
Question: I would like to write a simple in-house program that parses user commands
written in a language of our team's own invention (but based closely on
another program we are already familiar with). The command parser that I am
working on now will simply be the UI through which the user can run the other
algorithms I have already written. (Those other algorithms, by the way, are
used to generate the input files for a molecular dynamic simulation package
called [LAMMPS](http://lammps.sandia.gov/).) The only thing I really have left
to do is just write this UI, but as it turns out, writing your own scripting
language is almost an intractable challenge for a non software engineer to
tackle on his own.
According to the answers I received, what I am trying to make would be considered
a [Domain Specific Language](http://en.wikipedia.org/wiki/Domain-
specific_language), and it is not advisable to try to make one's own DSL due
to the enormous amount of work required to make it useful and bug-free.
The best option then would actually be to use an existing scripting language
like Lua or Python, and embed it in the program.
To do this, I will most likely use [Lua](http://www.lua.org/) because it seems
most fitting for our needs. So at this point, the rest of this question is no
longer relevant since the answer would be: "Don't do it yourself." But I'm
still going to keep part of it here for other users to be able read and learn
from the wonderful answers below.
Thanks again to everyone who replied!
**_Old Question:_**
> I would like to write a program that parses a user text input and then runs
> a function corresponding to that input. To do this I would need to parse the
> string for relevant keywords. I believe there will be less than 15 keywords
> when I'm done, so ideally I'd like this code to be simple and short.
>
> The problem is that I am currently using if-statements to parse the strings.
> This is an extremely inconvenient way to parse commands because even for a
> short 3 word commands the code explodes into nested-ifs 3 layers deep. So
> longer 8+ word sentences will become nested-ifs more than 8 layers deep.
>
> This kind of programing approach quickly becomes unmanageable, especially
> when I need to make any significant changes to a command.
>
>
>
>
> My question is whether or not there exists a data structure in C++ that can
> help me better manage my giant nested-ifs, or if anyone could suggest a
> better way to parse a string for lots of different data types (i.e.
> substings, ints, and floats) and output an error message when the expected
> type is not found?
>
>
>
>
> Here is an example of a short user session to show the kinds of commands I
> would like to interpret:
>
>
> load "Basis.Silicon" as material 1
> add material 1 to layer 1
> rotate layer 1 about x-axis by 45 degrees
> translate layer 1 in x-axis by 10 nm
> generate crystal
>
>
> These commands are based on an already-existing program that our team uses,
> but unfortunately the source code for this program has never been publicly
> released so I am left guessing as to how it was actually implemented.
>
> One final note, unlike natural language processors, I know exactly what the
> format of each line will be. So my issue isn't so much _how_ to interpret
> the text, but rather how to code the logic in a concise and manageable way.
>
> Thanks everyone!
Answer: Your question is not clear. And your goals are more difficult than what you
believe.
Either you consider that you want to somehow process _human language
sentences_ (e.g. in English). Then you want to study [natural language
processing](http://en.wikipedia.org/wiki/Natural_language_processing), and you
can find some libraries related to that field.
Or you consider that you want to interpret some formal programming or
scripting language. Then you want to study
[interpreters](http://en.wikipedia.org/wiki/Interpreter_%28computing%29) and
[compilers](http://en.wikipedia.org/wiki/Compiler). BTW, in that case, you
might just embed an existing interpreter (like [Lua](http://lua.org/),
[Guile](http://www.gnu.org/software/guile), [Python](http://python.org/),
etc....) in your program.
You could also think in terms of [expert
systems](http://en.wikipedia.org/wiki/Expert_systems) with a [knowledge
base](http://en.wikipedia.org/wiki/Knowledge_base) made of
[rules](http://en.wikipedia.org/wiki/Rule-based_system) (this approach could
be viewed as sitting in the middle between NLP and a scripting language).
You'll then need some [inference engine](http://en.wikipedia.org/wiki/Inference_engine)
(perhaps [CLIPS](http://clipsrules.sourceforge.net/)). See also [J.Pitrat's
blog.](http://bootstrappingartificialintelligence.fr/WordPress3/)
Notice that even coding a simple interpreter is more difficult than you
believe. You absolutely need to represent [abstract syntax
trees](http://en.wikipedia.org/wiki/Abstract_syntax_tree), which you construct
from textual input with a [parsing](http://en.wikipedia.org/wiki/Parsing)
phase.
BTW, all of NLP, expert systems, and interpreter design and implementation are
difficult fields. You could get a PhD in any of the three (but you have to
choose which).
If you go the embedded interpreter way: study the interpreters I mentioned
(Guile, Lua, Python, [Neko](http://nekovm.org/), etc...) and choose which one
you want, to embed.
If for whatever reason, you want to make an interpreter from scratch: Learn
several programming languages first (including scripting languages like Ruby,
Python, Ocaml, Scheme, Lua, Neko, ...). Read books on [Programming Language
Pragmatics (by M.Scott)](https://www.cs.rochester.edu/~scott/pragmatics/) and
[Lisp In Small Pieces (by Queinnec)](http://pagesperso-
systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html). Read also text books on
compilation and parsing, and on [Garbage
Collection](http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29)
and formal (e.g. denotational)
[semantics](http://en.wikipedia.org/wiki/Semantics_%28computer_science%29).
All this may need a dozen _years_ of work.
Notice that, from experience, embedding software in an interpreter is a very
structuring design decision. If you did not think of that at the beginning, you
will probably need to redesign and refactor your existing application a lot. For
instance, when embedding software in an interpreter, you cannot afford to have
bad input crash the program. So error handling and memory management
(interfacing to the GC of the interpreter) is challenging and gives new
constraints. Hence you'll need to re-think your application.
If all this is new (and even if you don't choose e.g. Guile as the embedding
interpreter): learn and practice a bit of Scheme -e.g. with Guile or
PltScheme- (e.g. reading [SICP](http://mitpress.mit.edu/sicp/)), read a little
bit about [λ-calculus](http://en.wikipedia.org/wiki/Lambda_calculus) and
[closures](http://en.wikipedia.org/wiki/Closure_%28computer_programming%29),
then read Queinnec's Lisp In Small Pieces book. Remember the [halting
problem](http://en.wikipedia.org/wiki/Halting_problem) (which is partly why
interpreters are difficult to code).
BTW the syntax you are proposing (e.g. `rotate mat 1 by x 90`) is not very
readable and looks [COBOL](http://en.wikipedia.org/wiki/COBOL)-like. If
possible, have a language which looks familiar to existing ones. Make it easy
to _read_ !
Start by reading all the wikipages I am referencing here.
FWIW, I am the main author of [MELT](http://gcc-melt.org/), a [domain specific
language](http://en.wikipedia.org/wiki/Domain_specific_language) (inspired a
lot by Scheme) to extend the [GCC](http://gcc.gnu.org/) compiler. Some of the
papers / documentations I wrote might inspire you (and contain valuable
references).
### Addenda (after question was reformulated)
You seem to be inventing some formal syntax like
add material 1 to layer 1
rotate layer 1 about x-axis by 90 degrees
translate layer 1 in x-axis by 10 inches
I can't guess what kind of language it is. Are you implementing a [3D
printer](http://en.wikipedia.org/wiki/3D_printing)? If yes, you should stick
to some existing standard formal language in that domain.
I believe that such a COBOL-like syntax is really wrong. The point is that it
is too verbose, and that you are wishing to implement some [domain specific
language](http://en.wikipedia.org/wiki/Domain_specific_language). I find your
example very bad-looking.
Is that syntax your invention, or is there some document specifying your domain
specific language (and many thousands of _already existing_ lines coded in it)?
If you are just inventing it, please reconsider the syntax and the semantics.
First, you need to specify on paper the full syntax and semantics of your DSL.
Is your DSL [Turing complete](http://en.wikipedia.org/wiki/Turing_complete)?
(I guess that yes, because Turing completeness is reached very quickly - e.g.
with variables and loops....). If yes, you are inventing a [scripting
language](http://en.wikipedia.org/wiki/Scripting_language). Please don't
invent scripting language without knowing several programming & scripting
languages (then read [_Programming Language
Pragmatics_](https://www.cs.rochester.edu/~scott/pragmatics/)...). The point
is that, if your scripting language will become successful, advanced users
will soon or later write important programs in it (e.g. many thousand lines).
Then, these advanced users will be programmers. In that case, it is very
important (for social & economic reasons) to have a DSL well founded and
looking familiar (if possible, an extension of some _existing_ scripting
language).
If your DSL already exists, stick to its specification on paper. If that
specification is not good enough, improve it with formalization (e.g. by
writing some BNF syntax, and some formal (e.g. denotational) semantics for
it). Publish and discuss that formalization with existing users.
Several industries got some ad-hoc DSLs which became widely used but was ill
designed (e.g., in the French nuclear industry, the _Gibiane_ DSL designed in
the 1970s by nuclear physicists, not computer scientists; the US Boeing
corporation is also rumored to have made similar mistakes). Then, maintaining
and improving the many hundreds of thousands of lines of DSL scripts becomes a
nightmare (and may mean losing millions of dollars or euros). So you had better
stick to some _existing_ scripting language. The advantages are that there
exist some culture on it (e.g. you can find dozens of books on Python or Lua,
and many trained engineers familiar with them), that the interpreter is widely
used and tested, that the community working on them is improving the
interpreters, so they have very few uncorrected bugs.
You should not attempt to design and implement your own DSL if you are not a
trained computer scientist. Stick to some existing scripting language (of
course their syntax is not like you want it to be), and leverage on existing
implementations and experiment.
As a counter-example,
[J.Ousterhout](http://en.wikipedia.org/wiki/John_Ousterhout) has invented the
widely used [Tcl](http://en.wikipedia.org/wiki/Tcl) scripting language, with
the claim that scripts are always small (e.g. hundreds of line only) and won't
grow to big code base; unfortunately, some of them did, and Tcl is known as a
bad language to code many dozens of thousands of lines (even if Tcl is an easy
and convenient language for _tiny scripts_). The moral of the story is that if
a (turing complete) scripting language is becoming successful, some "crazy"
advanced user _will_ code hundred of thousands of script code. So you need
that scripting language to be well designed from the start. Hence, you should
adopt and adapt a good _existing_ scripting language (and avoid inventing an
unfamiliar syntax without having a good knowledge of several existing
scripting languages)
### later additions
PS: my criticism of Tcl is not entirely subjective: the point is that Tcl was
_designed for small scripts_ in mind (read J.Ousterhout's first papers about
Tcl), but my point is that when you offer a Turing-complete scripting
language, some "crazy" user will eventually write huge scripts for it. Hence,
you need to anticipate such "crazy" usage by offering a scripting language
which "scales up" to big scripts, so is built according to software
engineering practices for large software code base.
NB. Lua is probably a good choice as a language to embed. It is small, has a
nice implementation, is well documented, and has good performance. But be
careful about memory management issues (and this advice holds for any
scripting language).
|
Sentry logging in Django/Celery stopped working
Question: I have no idea what's wrong. So far logging worked fine (and I was relying on
that), but it seems to have stopped. I wrote a little test function (which does
not work either):
**core.tasks.py**
import logging
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)
logger.setLevel(logging.DEBUG)

@app.task
def log_error():
    logger.error('ERROR')
**settings.py**
INSTALLED_APPS += (
    'raven.contrib.django.raven_compat',
)

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'root': {
        'level': 'INFO', #If set to DEBUG, prints ALL DJANGO debug logs.
        'handlers': ['console', 'sentry'],
    },
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        #log everything to the console
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        #logs directly to sentry
        'sentry': {
            'level': 'ERROR',
            'class': 'raven.contrib.django.raven_compat.handlers.SentryHandler',
            #'class': 'raven.contrib.django.handlers.SentryHandler', #I have tried both
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['console', 'sentry'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'celery.task': {
            'handlers': ['console', 'sentry'],
            'level': 'DEBUG',
            'propagate': False,
        },
        #this is the logger for celery itself
        'celery': {
            'handlers': ['console', 'sentry'],
            'level': 'ERROR',
            'propagate': False,
        },
    },
}

from logging.config import dictConfig
dictConfig(LOGGING)
Executing the following in the Django shell logs to the console but it doesn't
reach Sentry:
from core import tasks
tasks.log_error.delay()
It works when executing the task synchronously:
tasks.log_error()
Help!?
_Django==1.6.2_ , _raven==5.0.0_ , _sentry==6.3.2_ , _Python 2.7.3_
Answer: I had to add
CELERYD_HIJACK_ROOT_LOGGER=False
to my Django settings.py.
I don't really understand why I explicitly have to tell Celery **not** to
hijack the root logger. (By default the worker reconfigures logging on startup
and replaces the root logger's handlers with its own, which discards the Sentry
handler from Django's `LOGGING`; this setting tells it to leave the existing
configuration alone.)
|
Strange Queue.PriorityQueue behaviour with multiprocessing in Python 2.7.6
Question: As you know from the title, I'm trying to use PriorityQueue with
multiprocessing. More precisely, I wanted to make shared PriorityQueue, wrote
some code and it doesn't run as I expected.
Look at the code:
import time
from multiprocessing import Process, Lock
from Queue import PriorityQueue

def worker(queue):
    lock = Lock()
    with lock:
        for i in range(100):
            queue.put(i)
        print "worker", queue.qsize()

pr_queue = PriorityQueue()
worker_process = Process(target = worker, args = (pr_queue,))
worker_process.start()
time.sleep(5) # nope, race condition, you shall not pass (probably)
print "main", pr_queue.qsize()
Got the following output:
worker 100
main 0
What's happening and how to do what I want the right way? Thank you.
Answer: The problem isn't that it's not picklable in this case - if you're using a
Unix-like platform, the queue can be passed to the child without pickling. (On
Windows, I think you would get a pickling error here, though). The root
problem is that you're not using a process-safe queue. The only queues that
can be used between processes are [the `Queue`
objects](https://docs.python.org/2/library/multiprocessing.html#pipes-and-
queues) that live inside the `multiprocessing` module. Unfortunately, there is
no `PriorityQueue` implementation available. However, you can easily create
one by registering a `PriorityQueue` with a
[`multiprocessing.Manager`](https://docs.python.org/2/library/multiprocessing.html#managers)
class, like this:
import time
from multiprocessing import Process
from multiprocessing.managers import SyncManager
from Queue import PriorityQueue

class MyManager(SyncManager):
    pass

MyManager.register("PriorityQueue", PriorityQueue)  # Register a shared PriorityQueue

def Manager():
    m = MyManager()
    m.start()
    return m

def worker(queue):
    print(queue)
    for i in range(100):
        queue.put(i)
    print "worker", queue.qsize()

m = Manager()
pr_queue = m.PriorityQueue()  # This is process-safe
worker_process = Process(target = worker, args = (pr_queue,))
worker_process.start()
time.sleep(5) # nope, race condition, you shall not pass (probably)
print "main", pr_queue.qsize()
Output:
worker 100
main 100
Note that this probably won't perform quite as well as a standard
`multiprocessing.Queue` subclass would. The `Manager`-based `PriorityQueue`
is implemented by creating a `Manager` server process which actually contains
a regular `PriorityQueue`, and then providing your main and worker processes
with [`Proxy`](https://docs.python.org/2/library/multiprocessing.html#proxy-
objects) objects that use IPC to read/write to the queue in the server
process. Regular `multiprocessing.Queue`s just write/read data to/from a
`Pipe`. If that's a concern, you could try implementing your own
`multiprocessing.PriorityQueue` by subclassing or delegating from
`multiprocessing.Queue`. It may not be worth the effort, though.
|
How to use ipython without installing in every virtualenv?
Question: **Background**
I use Anaconda's IPython on my mac and it's a great tool for data exploration
and debugging. However, when I wish to use IPython for my programs that
require virtualenv (e.g. a Django web app), I don't want to have to reinstall
IPython every time.
**Question**
Is there a way to use my local IPython while also using the rest of my
virtualenv packages? (i.e. just make IPython the exception to virtualenv
packages so that the local IPython setup is available no matter what) If so,
how would you do this on a mac? My guess is that it would be some nifty
`.bash_profile` changes, but my limited knowledge with it hasn't been
fruitful. Thanks.
**Example Usage**
Right now if I'm debugging a program, I'd use the following:
import pdb
pdb.set_trace() # insert this to pause program and explore at command line
This would bring it to the command line (that I wish was IPython)
Answer: If you have a module in your local Python and not in the virtualenv, it will
still be available in the virtualenv. Unless you shadow it with another
virtualenv version. Did you try to launch your local IPython from a running
virtualenv that didn't have an IPython? It should work.
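For instance, if you create the environment with `virtualenv --system-site-packages myenv`, the global IPython remains importable while project packages still install into the env. You can then verify which copy you got:

    import IPython
    print IPython.__file__  # should point at the global/Anaconda install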
|
Some issues with Python regex findall
Question: I've got this source string:
string ="""
html,,
head,, profile http://gmpg.org/xfn/11 ,,
lang en-US ,,
title,, Some markright page.
,,title
,,head
"""
...which has to be parsed as HTML:
<html>
  <head profile="http://gmpg.org/xfn/11" lang="en-US">
    <title>Some markright page</title>
  </head>
I want to parse it with one `re.findall` pass like :
tagList = re.findall(
    r'\s*([A-Z]?[a-z]+[0-9]?,,){1}' # Opening tag - has to be one
    r'(.* ,,)*'                     # Attributes - could be more than one
    r'(.*)?'                        # Content - could be one
    r'(\s+,,[a-z]+[0-9]?)?'         # Ending tag - could be one
    , string )#, flags=re.S ) # can't make any use of DOTALL flag

for t in tagList :
    n=0
    for s in t :
        n+=1
        print "String group No:"+str(n)+" -> ", s.strip()
    print "_"*10
...but got only:
String group No:1 -> html,,
String group No:2 ->
String group No:3 ->
String group No:4 ->
__________
String group No:1 -> head,,
String group No:2 -> profile http://gmpg.org/xfn/11 ,,
String group No:3 ->
String group No:4 ->
__________
String group No:1 -> title,,
String group No:2 ->
String group No:3 -> Some markright page.
String group No:4 -> ,,title
Please keep in mind that I **have** to make my own parser; the above-mentioned
problem is just one application of this markup superset, so help if you can
and want to. Thanks.
Answer: This is the way I will do it:
#!/usr/bin/python

import re

pat = re.compile(r'''
    (?P<open> \b [^\W_]+ ) ,, |
    ,, (?P<close> [^\W_]+ ) \b |
    (?P<attrName> \S+ ) [ ] (?P<attrValue> [^,\n]+ ) [ ] ,, |
    (?P<textContent> [^,\s] (?: [^,] | , (?!,) )*? ) \s* (?=[^\W_]*,,)''',
    re.X)

txt = '''html,,
head,, profile http://gmpg.org/xfn/11 ,,
lang en-US ,,
title,, Some markright page.
,,title
,,head'''

result = ''
opened = False
for m in pat.finditer(txt):
    if m.group('attrName'):
        result += ' ' + m.group('attrName') + '="' + m.group('attrValue') + '"'
    else:
        if opened:
            opened = False
            result += '>'
        if m.group('open'):
            result += '<' + m.group('open')
            opened = True
        elif m.group('close'):
            result += '</' + m.group('close') + '>'
        else:
            result += m.group('textContent')

print result
Note: I assume that the text content is always enclosed between tags.
|
Python error: unsupported operand type(s) for -: 'float' and 'NoneType'
Question: My code is supposed to read and subtract two data lists from each other. Why
am I receiving this error, and how can I resolve it?
Here is the full error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "spectra.py", line 32, in SpectraTest
    subt = map(sub, flux, flux1)
TypeError: unsupported operand type(s) for -: 'float' and 'NoneType'
Here is the code:
import csv

def SpectraTest():
    wave_num = []
    flux = []
    wave_num1 = []
    flux1 = []
    with open ("H20_Glass.CSV", "rb") as csvfile:
        datareader = csv.reader(csvfile, delimiter = ",")
        for row in datareader:
            tempdata = row
            wn = tempdata[0]
            f1 = tempdata[1]
            wn = eval(wn)
            f1 = eval(f1)
            wave_num.append(wn)
            flux.append(f1)
    with open ("blankGlass.CSV", "rb") as csvfile:
        datareader = csv.reader(csvfile, delimiter = ",")
        for row in datareader:
            tempdata1 = row
            wn1 = tempdata1[0]
            f2 = tempdata1[1]
            wn1 = eval(wn1)
            f2 = eval(f2)
            wave_num1.append(wn1)
            flux1.append(f2)
    map(float, flux1)
    map(float, flux)
    from operator import sub
    subt = map(sub, flux, flux1)
    wave_num1.reverse()
    wave_num.reverse()
    print("Number of wave numbers " + str(len(wave_num1)))
    print("Number of flux numbers = " + str(len(flux1)))
    print("Number of wave numbers " + str(len(wave_num)))
    print("Number of flux numbers = " + str(len(flux)))
    print subt
    csvfile.close()
Answer: From the Python docs:
> map(function, iterable, ...)
>
> ...If one iterable is shorter than another it is assumed to be extended with
> None items...
I'd guess that your lists are not the same length so that it's trying to
subtract a None from a float.
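You can reproduce the error with lists of unequal length:

    >>> from operator import sub
    >>> map(sub, [1.0, 2.0], [0.5])  # sub(2.0, None) raises
    Traceback (most recent call last):
      ...
    TypeError: unsupported operand type(s) for -: 'float' and 'NoneType'

If truncating to the shorter list is acceptable, `itertools.imap` stops at the shortest iterable instead of padding with `None`.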
|
Emulating a cURL command with Python
Question: I've got a cURL command that does what I need, and I'm trying to translate it
into python. Here's the cURL:
curl http://example.com:1234/faye -d 'message={"channel":"/test","data":"hello world"}'
This talks to a Faye server and publishes a message to the channel `/test`.
This works. I'm trying to do that same publishing from within Python. I've
looked at [this](https://stackoverflow.com/questions/3246021/python-
equivalent-of-curl-http-post) and
[this](https://stackoverflow.com/questions/1990976/convert-a-curl-post-
request-to-python-only-using-standard-libary), and neither of them helped me;
I get a 400 error with both of those methods. Here's some of the stuff I've
tried from within the Python shell:
import urllib2, json, requests
addr = 'http://example.com:1234/faye'
data = {'message': {'channel': '/test', 'data': 'hello from python'}}
data_as_json = json.dumps(data)
requests.post(addr, data=data)
requests.post(addr, params=data)
requests.post(addr, data=data_as_json)
requests.post(addr, params=data_as_json)
req = urllib2.Request(addr, data)
urllib2.urlopen(req)
req = urllib2.Request(addr, data_as_json)
urllib2.urlopen(req)
# All these things give 400 errors
Unfortunately I can't wireshark the connection since it's over an SSH tunnel
(so everything's encrypted and on the wrong ports). Using the `--trace` option
from cURL I can see that it's not url-encoding the data, so I know I don't
need to do that. I also really don't want to `Popen` cURL itself.
Answer: `message` in this case is the name of a POST variable, and shouldn't be
included in the JSON.
Thus, what you actually want to do is this:
data = urllib.urlencode({'message': json.dumps({'channel': '/test', 'data': 'hello from python'})})
conn = urllib2.urlopen('http://example.com:1234/faye', data=data)
print conn.read()
|
Extract data from multi array from json
Question: I am new to Python. I need to extract data from a JSON file.
import urllib
import re
import json
text = urllib.urlopen("http://www.acer.com/wjws/ws/gdp/files/en/IN/-/latest/driver/63/-").read()
result = json.loads(text) # result is now a dict
print result['Files']['OS']['Id']
I need to extract the "Id" field in "OS" in "Files" from the JSON link above.
I am getting this error: TypeError: list indices must be integers, not str
Link contains data as
> { "Files": { "Result": "OK", "Language": [ { "Id": "bg", "Title":
> "Bulgarian" },
>
>
> {
> "Id": "no",
> "Title": "Norwegian"
> },
>
> ],
> "SearchedLanguage": "en",
> "OS": [
> {
> "Id": "001",
> "Title": "Windows® 2000 Professional"
> },
> {
> "Id": "098",
> "Title": "Windows® 98"
> },
> {
> "Id": "0ME",
> "Title": "Windows® ME"
> },
> {
> "Id": "X02",
> "Title": "Windows® XP 32-bit"
> },
> {
> "Id": "X05",
> "Title": "Windows® XP 64-bit"
> }
> ],
> "File": [
> {
> "Link": "http:\/\/global-
> download.acer.com\/GDFiles\/Driver\/VGA\/VGA_VIA_1.0_w2k.zip?acerid=633676006896131590",
> "Category": "VGA",
>
> },
>
> ]
> } }
>
Answer: Change the last line of code to
print result['Files']['OS'][0]['Id']
It will get the first id in OS.
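The `TypeError` happens because `result['Files']['OS']` is a list of dicts, so it must be indexed with an integer. To see every entry, loop over it:

    for os_entry in result['Files']['OS']:
        print os_entry['Id'], os_entry['Title']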
|
How do I manipulate datetime (tick labels and limits) on a plot axis in Python?
Question: I have a plot created within a for loop with a list of datetimes as the x
values. The x ticks are labeled as dates, but I would like to display the hour
(i.e. 6, 12, 18, 24 repeating). I would also like to set xlim to wider than
the dataset so all data points are within the axes (not on the edges). I would
post the figure, but this is my first question on stackoverflow, so that is
not allowed.
Answer: It's a touch confusing the first time you do it, but it's easiest to use
explicit formatters and locators for this. To keep the points from touching
the boundaries, use `ax.margins(pad)` or equivalently `plt.margins(pad)`.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
dates = pd.date_range('01/01/2014', '01/05/2014', freq='1H')
y = np.random.random(dates.size)
locator = mdates.HourLocator(range(0, 24, 6))
formatter = mdates.DateFormatter('%H')
fig, ax = plt.subplots()
ax.plot(dates, y, 'ko')
ax.margins(0.05) # Keep points from touching margin of plot
ax.xaxis.set(major_formatter=formatter, major_locator=locator)
plt.show()

Or you might prefer the hours to be displayed more like this:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
dates = pd.date_range('01/01/2014', '01/05/2014', freq='1H')
y = np.random.random(dates.size)
locator = mdates.HourLocator(range(0, 24, 6))
formatter = mdates.DateFormatter('%H:%M')
fig, ax = plt.subplots()
ax.plot(dates, y, 'ko')
ax.margins(0.05) # Keep points from touching margin of plot
ax.xaxis.set(major_formatter=formatter, major_locator=locator)
fig.autofmt_xdate() # Quick way of rotating tick labels
plt.show()

|
Atom.core not found when in virtualenv
Question: I'm trying to use the google for content api for shopping via the gdata client
library and the atom library seems to be giving me an error.
This only happens when I try to run my code in a virtualenv.
Traceback (most recent call last):
  File "/home/tabakd/documents/programming/kek/server/merchant.py", line 2, in <module>
    import atom.data
  File "/home/tabakd/documents/programming/kek/venv/lib/python2.7/site-packages/atom/data.py", line 24, in <module>
    import atom.core
ImportError: No module named core
Thanks in advance for the help :D.
## SOLUTION:
As per someone's advice on IRC, I was able to fix this error by adding `from
__future__ import absolute_import` to the beginning of `client.py` and
`data.py` in the atom directory.
Answer: As per someone's advice on IRC, I was able to fix this error by adding `from
__future__ import absolute_import` to the beginning of client.py and data.py in
the atom directory. (With absolute imports enabled, `import atom.core` inside the
package resolves against `sys.path` instead of being treated as an implicit
relative import.)
|
python: making array index generation more efficient/elegant
Question: I am trying to get some array indices with python. At the moment, the code
looks very cumbersome and I was wondering if I am doing it inefficiently or in
unpythonic style. So, I have an n-dimensional array and I am trying to
generate some indexes as follows. Here is an isolated code sample that
demonstrates what I am trying to do. I am demonstrating this in a simple 2D
code segment but the array can be arbitrary dimension.
import numpy as np

a = np.random.rand(5, 5)
shape = a.shape

for i in range(len(shape)):
    shape_temp = np.zeros(len(shape), dtype=np.int)
    shape_temp[i] = 1
    p = np.meshgrid(*[np.arange(shape_temp[l], shape[l]) for l in range(len(shape))])
    # Do something with p
I was wondering if there was a more elegant and hopefully efficient way to
generate these indices? In particular the use of this shape_temp variable
looks ugly especially how it is created every time in the loop.
Answer:
shape_temp = np.zeros_like(shape)
You could avoid `shape_temp` with an expression like:
[np.arange(1 if l==i else 0, e) for l,e in enumerate(shape)]
Whether it is prettier or more efficient is debatable.
Another snippet:
temp = np.eye(len(shape))
[np.arange(j,k) for j,k in zip(temp[i,:],shape)]
An alternative to `meshgrid` is `mgrid` or `ogrid` (though I had to change the
`indexing` argument to get a match). The expressions are more compact because
they take `slices` rather than `ranges`. Internally those functions use `arange`.
meshgrid(*[np.arange(j,k) for j,k in zip(temp[i,:],shape)],indexing='ij')
np.mgrid[[slice(j,k) for j,k in zip(temp[i,:],shape)]]
meshgrid(*[np.arange(j,k) for j,k in zip(temp[i,:],shape)],sparse=True,indexing='ij')
np.ogrid[[slice(j,k) for j,k in zip(temp[i,:],shape)]]
|
Speeding up video to image conversion
Question: I use
call(['avconv', '-i', 'video.mp4', '-vsync', '1','-r', '1','-an','-y','%5d.jpg'])
in Python. It works, but it goes through the video file in real time. How can
I speed this up, so that getting the 60 pictures (one per second of video)
takes less than a minute?
Answer: The following Python code extracts 60 seconds worth of frames as fast as
possible, and outputs them as JPEG files in the current directory.
## source
from subprocess import call

call([
    'avconv', '-i', 'video.mp4',
    '-vsync', '1',
    '-r', '1',
    '-an', '-y',
    '-t', '60', # 60 seconds = 60 pictures
    '%5d.jpg',
])
## output
avconv version 0.8.9-6:0.8.9-0ubuntu0.13.10.1, Copyright (c) 2000-2013 the Libav developers
built on Nov 9 2013 19:09:46 with gcc 4.8.1
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x14bbe00] multiple edit list entries, a/v desync might occur, patch welcome
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2mp41
encoder : Lavf53.21.1
Duration: 01:59:16.23, start: 0.000000, bitrate: 1153 kb/s
Stream #0.0(und): Video: mpeg4 (Advanced Simple Profile), yuv420p, 480x368 [PAR 1:1 DAR 30:23], 1016 kb/s, 23.98 fps, 23.98 tbr, 24k tbn, 23.98 tbc
Stream #0.1(und): Audio: aac, 48000 Hz, mono, s16, 63 kb/s
Stream #0.2(und): Audio: aac, 48000 Hz, mono, s16, 64 kb/s
Incompatible pixel format 'yuv420p' for codec 'mjpeg', auto-selecting format 'yuvj420p'
[buffer @ 0x1670a20] w:480 h:368 pixfmt:yuv420p
[avsink @ 0x147f6a0] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
[scale @ 0x14bf520] w:480 h:368 fmt:yuv420p -> w:480 h:368 fmt:yuvj420p flags:0x4
Output #0, image2, to '%5d.jpg':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2mp41
encoder : Lavf53.21.1
Stream #0.0(und): Video: mjpeg, yuvj420p, 480x368 [PAR 1:1 DAR 30:23], q=2-31, 200 kb/s, 90k tbn, 1 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mpeg4 -> mjpeg)
Press ctrl-c to stop encoding
frame= 62 fps= 37 q=11.2 Lsize= -0kB time=62.00 bitrate= -0.0kbits/s dup=0 drop=1375
video:1784kB audio:0kB global headers:0kB muxing overhead -100.001204%
|
What are the file places after you package a python program?
Question: I want to package my program, which uses files to store user data locally, but
I don't know what directory I should put in all the `json.load` and
`json.dump` calls. Right now I have the path hard-coded, as in
`json.dump(somelist, open('/home/username/filename', 'w'))`, but when someone
downloads the program it won't work, since their home directory is different.
I am trying PyInstaller, but maybe PyInstaller will handle this for me. I was
just wondering, and I couldn't find anything on Google, but if there is
something, please link it to me. Thanks in advance!!
Answer: Use the following to get the user's home directory (with the `os` and
`json` imports the snippet needs in order to run):
    import json
    import os
    from os.path import expanduser

    home = expanduser("~")
    with open(os.path.join(home, 'file'), 'w') as sr:
        json.dump(somelist, sr)
|
Python - Convert X, Y Rotation coordinates from Radians to Degrees
Question: I have been stuck working on this for hours and I'm not very good with this
kind of math, so please bear with me.
I have 2 values that are in radians, `c[1]` and `c[3]`. I need to turn the
radians into degrees, and I haven't the faintest idea what to do to these
numbers to get degrees out of them. I have been searching the internet far and
wide and I can't find anything that I can actually understand. I have tried
devising my own way to do it, but I'm sure I'm not even close. I have tried
the following:
z = (((c[1] * 180) + 180) + ((c[3] * 180) + 180))
z = (((c[1] * math.pi) / 180) + ((c[3] * math.pi) / 180) / 2)
z = (c[1] * (90/math.pi) - (c[3] * (90/math.pi)))
z = math.atan2(c[3], c[1])
z = (math.degrees(c[1]) + math.degrees(c[3])) * 2
z = c[1]
z = (math.asin(c[3]) / math.acos(c[1]))
How do I get a value in degrees from 2 radians?
Answer: Degree-to-radian conversions are done with the equation (n deg) * (pi / 180).
    z = (c[1] * (math.pi / 180.0)) + (c[3] * (math.pi / 180.0))
If it's something you need to do regularly, make a function:
    def DegtoRad(deg):
        return deg * (math.pi / 180)
or as a lambda:
    DegtoRad = lambda x: x * (math.pi / 180)
Remember, though, that if you haven't imported `math`, none of this will work.
It may be better to define pi as a literal variable yourself, to whatever
precision you need.
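Note that the question actually asks for the opposite direction, radians to
degrees: that is the same ratio flipped, and the standard library already
provides it. A minimal sketch (the values in `c` are illustrative stand-ins,
not the question's data):
    import math

    c = [0.0, math.pi / 4, 0.0, math.pi / 2]  # sample radians
    z = math.degrees(c[1])        # 45.0
    z = c[3] * (180.0 / math.pi)  # 90.0 -- the same conversion done by hand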
|
Is there a usage of `_tuple` in Python?
Question: I read the official documentation for `collections.namedtuple` today and found
`_tuple` mentioned in the `__new__` method. I could not find where `_tuple` is
defined.
Here is the code; you can try it in Python - it does not raise any error.
    >>> from collections import namedtuple
    >>> Point = namedtuple('Point', ['x', 'y'], verbose=True)
    class Point(tuple):
        'Point(x, y)'
        __slots__ = ()
        _fields = ('x', 'y')
        def __new__(_cls, x, y):
            'Create a new instance of Point(x, y)'
            return _tuple.__new__(_cls, (x, y))  # Here. Why _tuple?
**Update**
One more question:
As per @vaultah's answer - good job and thank you :D
But is there some advantage to `from builtins import property as _property,
tuple as _tuple`?
Is that just to let `tuple` act as a _protected name_? Am I right?
Answer: From the generic [source
code](http://hg.python.org/cpython/file/e831a98b3f43/Lib/collections/__init__.py#l266)
(you can see the source code generated for this specific namedtuple by
printing `Point._source`):
from builtins import property as _property, tuple as _tuple
So `_tuple` here is just an alias for built-in `tuple`:
In [1]: from builtins import tuple as _tuple
In [2]: tuple is _tuple
Out[2]: True
Namedtuple appeared in Python 2.6. The initial source for its `__new__` method
was
    def __new__(cls, %(argtxt)s):
        return tuple.__new__(cls, (%(argtxt)s))
The thing is, the source code is in a _string_. They later format it using `%
locals()`. If `tuple` were listed in `argtxt`, then `tuple.__new__` would have
called the `__new__` method on whatever the `tuple` field contained. `_tuple`
works as expected because `namedtuple` doesn't allow field names starting with
`_`.
The bug was fixed in the Python 2.6.3 release (see the
[changelog](http://ftp.python.org/download/releases/2.6.3/NEWS.txt) -
_collections.namedtuple() was not working with the following field names: cls,
self, tuple, itemgetter, and property._).
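That naming restriction is easy to confirm for yourself (a quick illustrative
check, not part of the original answer):
    from collections import namedtuple

    # Since the 2.6.3 fix, even names of builtins work as field names:
    P = namedtuple('P', ['tuple', 'property'])
    p = P(tuple=1, property=2)

    # But a leading underscore is rejected, so '_tuple' can never collide:
    try:
        namedtuple('Q', ['_tuple'])
    except ValueError:
        pass  # e.g. "Field names cannot start with an underscore: '_tuple'"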
|
AttributeError: 'str' object has no attribute 'tostring'
Question: Trying to convert image to string....
import requests
image = requests.get(image_url).content
image.tostring()
I get the error:
> AttributeError: 'str' object has no attribute 'tostring'
How do I turn this into something that Python understands as an image which I
can then call tostring() on?
Answer: The `.content` attribute of a response is _already a string_. Python string
objects do not have a `tostring()` method.
Pillow / PIL is not coming into play here; the `requests` library does not
return a Python Imaging Library object when loading an image URL. If you
expected to have an `Image` object, you'll need to create one from the loaded
data:
from PIL import Image
from io import BytesIO
import requests
image_data = BytesIO(requests.get(image_url).content)
image_obj = Image.open(image_data)
`image_obj` then is a PIL `Image` instance, and now you can convert that to
raw image data with `Image.tostring()`:
>>> from PIL import Image
>>> from io import BytesIO
>>> import requests
>>> image_url = 'https://www.gravatar.com/avatar/24780fb6df85a943c7aea0402c843737?s=128'
>>> image_data = BytesIO(requests.get(image_url).content)
>>> image_obj = Image.open(image_data)
>>> raw_image_data = image_obj.tostring()
>>> len(raw_image_data)
49152
>>> image_obj.size
(128, 128)
>>> 128 * 128 * 3
49152
|
Keep console input line below output
Question: [EDIT:] I'm currently trying to make a small TCP chat application. Sending and
receiving messages already works fine... but the problem is:
when I start typing a message while I receive one, the incoming message
appears after the text I'm writing.
Screenshot: <http://s7.directupload.net/images/140816/6svxo5ui.png>
[The user sent "hello", then I started writing "i am writing..." then the user
wrote "i sent a..." before I sent my message, so it was placed after my
input. I want the incoming message always to appear before my input!]
This is my current code: Client.py
    con = connect.User()
    server = raw_input("Type in the server adress \n[leave blank to use xyr.no-ip.info]\n>:")
    nick = ""
    while nick == "":
        nick = raw_input("Type in your nickname\n>:")
    con.connect(server, nick)

    def sender():
        print("Sender started")
        while 1:
            msg = raw_input()
            if msg == "q":
                break
            con.send(msg)
        con.disconnect()

    def receiver(server):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        if server == "":
            server = "xyr.no-ip.info"
        sock.connect((server, 8000))
        sock.send("%pyreceiver\n")
        print("Receiver started")
        while 1:
            msg_in = sock.recv(1024)
            if not str(msg_in).startswith("[py]" + nick):
                if str(msg_in).startswith("/ua"):
                    print(str(msg_in)[3:])
                elif str(msg_in).startswith("/u "):
                    print(str(msg_in)[2:])
                else:
                    print(str(msg_in[:-1]))

    #
    if nick == "":
        nick = "guest"
        print("Name changed to ""guest""")
    time.sleep(.5)
    thread.start_new_thread(receiver, (server, ))
    time.sleep(.5)
    thread.start_new_thread(sender())
Connect.py
    import socket
    import time

    class User():
        nickel = ""

        def connect(self, server="xyr.no-ip.info", nick="guest"):
            nickel = nick
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            if server == "":
                server = "xyr.no-ip.info"
                print("server changed to xyr.no-ip.info")
                time.sleep(.5)
            print("Connecting...")
            self.sock.connect((server, 8000))
            print("Connected")
            time.sleep(.4)
            self.sock.send("[py]" + nick + "\n")
            self.sock.send(nick + " connected with a python client\n")
            print("registered as " + nick)
            time.sleep(.3)

        def send(self, msg):
            self.sock.send(msg + "\n")

        def disconnect(self):
            self.sock.close()
            print("disconnected")
Answer: Your code writes everything to stdout. Whenever something arrives at either of
your sender/receiver threads, it prints to stdout. The issue with that is, due
to the fundamental nature of output streams, you **cannot** accomplish the
following:
* place incoming messages _above_ the stuff currently being typed/echoed.
Things happen strictly in the order of occurrence. The moment something comes
in, wherever the cursor is, the print statement dumps that data over there.
You cannot modify that behaviour without using fancier / more powerful
constructs.
In order to do what you want, I would use
[ncurses](https://docs.python.org/2/howto/curses.html). You seem to be using
Python on Windows, so you're going to have to do some digging on how to get
equivalent functionality. Check out this thread: [Curses alternative for
windows](http://stackoverflow.com/questions/14779486/curses-alternative-for-
windows)
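To make that concrete, here is a minimal sketch using the standard `curses`
module (assuming a Unix-like terminal; this is not the chat client itself,
just the split between a scrolling output region and a pinned input line):
    import curses

    def main(stdscr):
        curses.echo()  # echo typed characters on the input line
        height, width = stdscr.getmaxyx()
        # Everything above the last row acts as a scrolling output window.
        output = stdscr.subwin(height - 1, width, 0, 0)
        output.scrollok(True)
        for i in range(3):  # stand-in for messages arriving from the socket
            output.addstr("incoming message %d\n" % i)
            output.refresh()
        # The prompt always lives on the bottom row, below all output.
        stdscr.addstr(height - 1, 0, "> ")
        return stdscr.getstr(height - 1, 2)

    print(curses.wrapper(main))
In a real client, the receiver thread would write to `output` while the main
thread reads from the bottom line, so incoming messages can never land after
your half-typed input.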
|
HTML button on client to run python script on server then send results to webpage on client
Question: I have seen some previous questions that were similar, but I couldn't find
anything quite like this. I have a webpage (on a server) and I would like the
user to click a button which will execute a Python script. I want this Python
script to run on the server and then send the results back to the webpage and
display them.
When the user clicks the button, the data that will be sent to the server
would be an XML file.
I just don't know where to start with all of this. What can I use to
accomplish this?
Thanks for your time.
EDIT: I actually have the webpage all done and set up, and it produces the XML.
I just need to run the Python script when a user clicks a button on the
webpage. Not sure if that helps, but I'm posting it. Thanks
I WOULD LIKE A HIGH-LEVEL EXPLANATION FOR THIS PLEASE AND THANK YOU, since I
don't know about what has been suggested to me already.
Answer: There are a lot of web libs for Python. You may try Bottle (it works without
installing; it's a single file, so just put the `bottle.py` file in your work
folder). A simple example:
    from bottle import route, run, static_file, post, request

    @route('/js/<filename>')
    def js(filename):
        return static_file(filename, root='js')

    @route('/')
    def index():
        return static_file('tst.html', root='./')

    @post('/xml')
    def xml():
        for x in request.forms:
            print(x)
        return {'return': 'accepted'}

    run(host='0.0.0.0', port=8000)
And html:
<!DOCTYPE html>
<html lang="ro">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>TTL</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<script type="text/javascript" src="js/jquery.js"></script>
</head>
<body>
<button onclick="test()">Test</button>
<script type="text/javascript">
function test() {
$.ajax({
url: 'xml',
type: 'POST',
data: '<my><xml>string</xml></my>',
dataType: 'json',
success: function (ret) {
alert(ret['return']);
}
});
}
</script>
</body>
</html>
Sorry for the jQuery; too lazy to write a plain JS XHR.
[Bottle](http://bottlepy.org/docs/dev/tutorial.html) is well documented, as
are CherryPy, Pyramid, Django, and Tornado.
|
flask sub function not yielding results
Question: I have a bunch of code (1300 lines) that is working correctly, and I am trying
to incorporate Flask into the picture. In order to do this, I am trying to use
flask.Response to call a function within my method, which calls another method
in my class.
Here is test code that re-creates my problem.
    #!/usr/bin/env python
    import flask

    class TestClass(object):
        app = flask.Flask(__name__)

        def __init__(self):
            pass

        def worker(self):
            yield 'print test\n'

        @app.route('/')
        def test_method_get_stuff():
            return flask.render_template('index.html')

        @app.route('/', methods=['POST'])
        def test_method_post_stuff():
            def test_method_sub_function():
                tc.worker()
            return flask.Response(test_method_sub_function(), mimetype='text/plain')

    tc = TestClass()
    tc.app.run(debug=True)
`index.html` just has a text box with a submit button.
The issue I have is that once you click the submit button, the request goes
through successfully, but the page is blank with no errors in the Python
command line or in the browser. What I expect is to see the plain text "print
test" with a newline.
Any assistance would be appreciated. I am trying to avoid completely re-
writing all my code, with the understanding that I will have to replace
'print' with 'yield' commands in my code.
Answer: Your nested `test_method_sub_function()` function doesn't return anything; it
simply creates the generator (by calling a generator function), then exits.
It should at the very least _return_ the `tc.worker()` call:
    def test_method_sub_function():
        return tc.worker()
at which point the route works. You may as well skip this nested function,
however, and use `tc.worker()` _directly_:
    @app.route('/', methods=['POST'])
    def test_method_post_stuff():
        return flask.Response(tc.worker(), mimetype='text/plain')
One note: although your use of the `Flask` object as a class attribute happens
to work, you should not put it in a class. Leave the `app` object and routes
outside of the class:
    import flask

    class TestClass(object):
        def worker(self):
            yield 'print test\n'

    tc = TestClass()
    app = flask.Flask(__name__)

    @app.route('/')
    def test_method_get_stuff():
        return flask.render_template('index.html')

    @app.route('/', methods=['POST'])
    def test_method_post_stuff():
        return flask.Response(tc.worker(), mimetype='text/plain')

    app.run(debug=True)
|
Finding cosine similarity between 2 numbered datasets using Python
Question: I have numbered datasets of length 22, where each number lies between 0 and 1
and represents the percentage of that attribute.
[0.03, 0.15, 0.58, 0.1, 0, 0, 0.05, 0, 0, 0.07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0]
[0.9, 0, 0.06, 0.02, 0, 0, 0, 0, 0.02, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0]
[0.01, 0.07, 0.59, 0.2, 0, 0, 0, 0, 0, 0.05, 0, 0, 0, 0, 0, 0, 0.07, 0, 0, 0, 0, 0]
[0.55, 0.12, 0.26, 0.01, 0, 0, 0, 0.01, 0.02, 0, 0, 0.01, 0, 0, 0.01, 0, 0.01, 0, 0, 0, 0, 0]
[0, 0.46, 0.43, 0.05, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0, 0, 0.02, 0.02, 0, 0, 0, 0]
How can I calculate the cosine similarity between 2 such datasets using
Python?
Answer: According to the definition of [Cosine
similarity](https://en.wikipedia.org/wiki/Cosine_similarity#Definition) you
just need to compute the normalized dot product of the two vectors `a` and
`b`:
import numpy as np
a = [0.03, 0.15, 0.58, 0.1, 0, 0, 0.05, 0, 0, 0.07, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0]
b = [0.9, 0, 0.06, 0.02, 0, 0, 0, 0, 0.02, 0, 0, 0.01, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0]
print np.dot(a, b) / np.linalg.norm(a) / np.linalg.norm(b)
Output:
0.115081383219
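If SciPy happens to be available, the same number can be cross-checked with
`scipy.spatial.distance.cosine` (reusing `a` and `b` from above), which
computes the cosine _distance_, defined as one minus the cosine similarity:
    from scipy.spatial.distance import cosine

    print 1 - cosine(a, b)  # 0.115081383219, matching the manual computation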
|
add items to a dictionary and save it to a txt file
Question: A few weeks ago I started learning Python.
Now I am creating a program to build a dictionary, add an item1 as a string,
add an item2 which consists of numbers, and save it after that. But it is not
working the way I want it to: the saving does not seem to work properly, and
it appears to be overwriting the existing items.
Before posting this, I thoroughly searched here on Stack Overflow for a
solution, but I can't get it to work.
Here is my code:
    import pickle
    import os

    if not os.path.exists( 'C:\path' ):
        os.makedirs( 'C:\path' )

    dict = {}
    dict = pickle.load( open( "dict.txt", "rb" ) )

    def add_dict(item):
        for item in dict:
            if not item in dict:
                dict.update({item1 : item2})
                print ("thx")
                print (dict)
                print ("added")
            if item in dict:
                print("item already exists.")
                return
        return

    item1 = input("insert Item1: ")
    item1 = item1.lower()
    item2 = input("insert item2: ")
    pickle.dump(dict, open("C:\path\dict.txt", "wb"))
I hope this is not too specific a case.
Edit: I have edited my code to make it less specific, and made two mistakes in
the process, which are fixed now.
    import pickle
    import os

    if not os.path.exists( 'C:\path' ):
        os.makedirs( 'C:\path' )

    dict = {}
    item1 = input('insert item1 ')
    item1 = buch.lower()
    item2 = input('insert item2: ')
    dict.update({item1: item2})
    print("thx")
    print(dict)
    print("added")
    pickle.dump(dict, open("C:\path\dict.txt", "wb"))
This was the code without the function.
Answer: The edited code still has errors.
I rewrote the whole thing to be more pythonic.
    import os
    import pickle

    def read_data(filename):
        try:
            return pickle.load(open(filename, 'rb'))
        except FileNotFoundError:
            return {}

    def write_data(filename, data):
        try:
            os.makedirs(os.path.dirname(filename))
        except FileExistsError:
            pass
        pickle.dump(data, open(filename, 'wb'))

    def main():
        filename = os.path.join(os.path.dirname(__file__), 'dict.txt')
        data = read_data(filename)
        key = input('insert item1 ').lower()
        value = input('insert item2: ')
        data[key] = value
        print(data)
        write_data(filename, data)

    if __name__ == '__main__':
        main()
I used meaningful names and didn't call a variable `dict` because
[dict](https://docs.python.org/3/library/functions.html#func-dict) is a
builtin you don't want to overwrite.
First we read the file. If it doesn't exist we catch the error and use an
empty dictionary for the data.
Second we get the new data and update the dictionary.
Last we write the dictionary back to disk and create the target folder if it
doesn't exist.
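A quick way to convince yourself that nothing gets overwritten any more is to
run the script twice (an illustrative session; `dict_app.py` is a hypothetical
file name for the code above):
    $ python3 dict_app.py
    insert item1 apple
    insert item2: 5
    {'apple': '5'}
    $ python3 dict_app.py
    insert item1 pear
    insert item2: 7
    {'apple': '5', 'pear': '7'}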
|
Google app engine, cloud sql, and django: no rdbms backend module
Question: I've been following a number of tutorials on setting up _google app engine_
(GAE) with their cloud SQL and django. The conclusion I've reached is most of
them get you to install a local copy of python and all the libs. Some even
fail to mention they assume you'll be using a local SQL server for
testing/development.
Firstly, python modules you install locally don't magically get uploaded to
GAE. Either you use existing GAE libraries or you throw all your code in the
project to get uploaded.
Second, GAE installs all the available libraries locally so you can develop
with them. So you shouldn't go getting your own (potential version/addon
conflict issues).
I've set up a very simple project. I haven't bothered to install a local SQL
server yet. I'm on Windows (aaaargh). I'm at the stage where I want to run
`python manage.py syncdb`. In `settings.py`, `DATABASES` must contain the
connection info/URL stuff. As I understand it, this can be for a local db
(i.e. your own for dev, or the cloud after deploying) with a `mysql` engine,
or set up to connect to the cloud from your local copy with `rdbms` (whatever
that is). So I've set `'ENGINE':
'google.appengine.ext.django.backends.rdbms'` and now get this error:
Error was: No module named google.appengine.ext.django.backends.rdbms.base
I've uninstalled my local copy of django and set my `PYTHONPATH=C:\Program
Files (x86)\Google\google_appengine\lib\django-1.5`. Without this set, I get
an error so I have to assume it's using GAE's django. I've found a `C:\Program
Files
(x86)\Google\google_appengine\google\appengine\ext\django\backends\rdbms` path
too, which sounds relevant. So I'm not sure what to do next. I assume this
`rdbms` thing is necessary to communicate with the SQL db remotely. I would
like to be able to test locally/not in production.
What could be broken in my configuration to cause this?
As a side note, when ever I attempt to start a server with the GAE launcher, I
just get `ImportError: Could not import settings 'myblog.settings' (Is it on
sys.path?): No module named appengine_toolkit`. `python manage.py runserver`
works fine until a request triggers an attempt to connect to a SQL server.
Answer: The main thing I was missing here was making sure Python could see the GAE
libs. I solved this on Linux:
export PYTHONPATH=/usr/local/google_appengine/:/usr/local/google_appengine/lib/:/usr/local/google_appengine/lib/django-1.5/
that's
* `google_appengine/`
* `google_appengine/lib/`
* `google_appengine/lib/django-1.5/`
`PYTHONPATH` just didn't work on Windows. Regarding `Could not import settings
'myblog.settings'`, this only happens on Windows; I couldn't figure out why
and really can't be bothered wasting my time on it.
Two things that really helped me with GAE were:
1. Having some idea of the intended (however implicitly) GAE [project structure](https://www.google.com.au/search?q=google+app+engine+project+structure).
2. Getting to grips with `virtualenv` (I found [this](http://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/) was a decent intro), and promptly realizing how messy google app engine's launcher is regarding sandboxing and external libraries you want to use.
* * *
Eventually, my setup was as follows. I used `virtualenv` to install all my
packages to the local `env` directory. To test locally, this worked fine (with
the `PYTHONPATH` above). To deploy only the right packages (e.g. push `django-
wiki` but not `MySQLdb`), I created a `libs` directory and symlinked
everything I wanted from `env/lib/python2.7/site-packages/`.
As a better alternative to `PYTHONPATH`, something like this works (in
`settings.py`):
    import os
    import sys

    IS_PRODUCTION = os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine')
    if IS_PRODUCTION:
        sys.path.insert(0, 'libs')
    else:
        sys.path.insert(0, 'env/lib/python2.7/site-packages/')
I also added a `skip_files:` section to `app.yaml` with a commented-out
`#- ^env/.*` entry, and I uncomment it when deploying. This probably wouldn't
be necessary if I put my `env` outside the project directory like some others
have mentioned.
|
LFU cache implementation in python
Question: I have implemented LFU cache in python with the help of Priority Queue
Implementation given at
<https://docs.python.org/2/library/heapq.html#priority-queue-implementation-
notes>
I have given code in the end of the post.
But I feel that the code has some serious problems:
1. To give a scenario, suppose only one page is continuously getting visited
(say 50 times). This code will always mark the already-added node as "removed"
and add it to the heap again, so it will end up with 50 different nodes for
the same page, increasing the heap size enormously.
2. This question is almost the same as Q1 of the telephonic interview in
<http://www.geeksforgeeks.org/flipkart-interview-set-2-sde-2/>, and the person
mentioned that a doubly linked list can give better efficiency compared to a
heap. Can anyone explain to me how?
    from llist import dllist
    import sys
    from heapq import heappush, heappop

    class LFUCache:
        heap = []
        cache_map = {}
        REMOVED = "<removed-task>"

        def __init__(self, cache_size):
            self.cache_size = cache_size

        def get_page_content(self, page_no):
            if self.cache_map.has_key(page_no):
                self.update_frequency_of_page_in_cache(page_no)
            else:
                self.add_page_in_cache(page_no)
            return self.cache_map[page_no][2]

        def add_page_in_cache(self, page_no):
            if (len(self.cache_map) == self.cache_size):
                self.delete_page_from_cache()
            heap_node = [1, page_no, "content of page " + str(page_no)]
            heappush(self.heap, heap_node)
            self.cache_map[page_no] = heap_node

        def delete_page_from_cache(self):
            while self.heap:
                count, page_no, page_content = heappop(self.heap)
                if page_content is not self.REMOVED:
                    del self.cache_map[page_no]
                    return

        def update_frequency_of_page_in_cache(self, page_no):
            heap_node = self.cache_map[page_no]
            heap_node[2] = self.REMOVED
            count = heap_node[0]
            heap_node = [count+1, page_no, "content of page " + str(page_no)]
            heappush(self.heap, heap_node)
            self.cache_map[page_no] = heap_node

    def main():
        cache_size = int(raw_input("Enter cache size "))
        cache = LFUCache(cache_size)
        while 1:
            page_no = int(raw_input("Enter page no needed "))
            print cache.get_page_content(page_no)
            print cache.heap, cache.cache_map, "\n"

    if __name__ == "__main__":
        main()
Answer: Efficiency is a tricky thing. In real-world applications, it's often a good
idea to use the simplest and easiest algorithm, and only start to optimize
when that's measurably slow. And then you optimize by doing profiling to
figure out where the code is slow.
If you are using CPython, it gets especially tricky, as even an inefficient
algorithm implemented in C can beat an efficient algorithm implemented in
Python due to the large constant factors; e.g. a double-linked list
implemented in Python tends to be a lot slower than simply using the normal
Python list, even for cases where in theory it should be faster.
**Simple algorithm:**
For an LFU, the simplest algorithm is to use a dictionary that maps keys to
(item, frequency) objects, and update the frequency on each access. This makes
access very fast (O(1)), but pruning the cache is slower as you need to sort
by frequency to cut off the least-used elements. For certain usage
characteristics, this is actually faster than other "smarter" solutions,
though.
You can optimize for this pattern by not simply pruning your LFU cache to the
maximum length, but to prune it to, say, 50% of the maximum length when it
grows too large. That means your prune operation is called infrequently, so it
can be inefficient compared to the read operation.
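As a concrete illustration, here is a minimal sketch of that simple algorithm
(the class name, the `compute` callback, and the prune-to-half policy are
illustrative assumptions, not taken from the question's code):
    class SimpleLFU(object):
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = {}  # key -> [value, access frequency]

        def get(self, key, compute):
            entry = self.data.get(key)
            if entry is None:
                if len(self.data) >= self.capacity:
                    self._prune()
                entry = self.data[key] = [compute(key), 0]
            entry[1] += 1  # the access path stays O(1)
            return entry[0]

        def _prune(self):
            # Sort by frequency and keep the most-used half; pruning is
            # called infrequently, so the O(n log n) sort is acceptable.
            ranked = sorted(self.data.items(),
                            key=lambda kv: kv[1][1], reverse=True)
            self.data = dict(ranked[:self.capacity // 2])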
**Using a heap:**
In (1), you used a heap because that's an efficient way of storing a priority
queue. But you are not implementing a priority queue. The resulting algorithm
is optimized for pruning, but not access: You can easily find the n smallest
elements, but it's not quite as obvious how to update the priority of an
existing element. In theory, you'd have to rebalance the heap after every
access, which is highly inefficient.
To avoid that, you added a trick by keeping elements around even after they
are deleted. But this trades space for time.
If you don't want to spend the extra space, you could update the frequencies
in-place, and simply rebalance the heap before pruning the cache. You regain
fast access times at the expense of slower pruning time, like the simple
algorithm above. (I doubt there is any speed difference between the two, but I
have not measured this.)
**Using a double-linked list:**
The double-linked list mentioned in (2) takes advantage of the nature of the
possible changes here: An element is either added as the lowest priority (0
accesses), or an existing element's priority is incremented exactly by 1. You
can use these attributes to your advantage if you design your data structures
like this:
You have a double-linked list of elements which is ordered by the frequency of
the elements. In addition, you have a dictionary that maps items to elements
within that list.
Accessing an element then means:
* Either it's not in the dictionary, that is, it's a new item, in which case you can simply append it to the end of the double-linked list (O(1))
* or it's in the dictionary, in which case you increment the frequency in the element and move it leftwards through the double-linked list until the list is ordered again (O(n) worst-case, but usually closer to O(1)).
To prune the cache, you simply cut off n elements from the end of the list
(O(n)).
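A minimal sketch of that double-linked-list design, reusing the page-content
model from the question's code (the names and the `_move_left` helper are
illustrative; the bubble-left loop is the "move leftwards" step described
above):
    class _Node(object):
        __slots__ = ('key', 'value', 'freq', 'prev', 'next')

        def __init__(self, key, value):
            self.key, self.value, self.freq = key, value, 1
            self.prev = self.next = None

    class DLLCache(object):
        # The list stays sorted by descending frequency:
        # head = most frequently used, tail = least frequently used.

        def __init__(self, capacity):
            self.capacity = capacity
            self.map = {}  # page_no -> _Node
            self.head = self.tail = None

        def _append(self, node):
            node.prev = self.tail
            if self.tail is not None:
                self.tail.next = node
            else:
                self.head = node
            self.tail = node

        def _move_left(self, node):
            # Bubble leftwards while the left neighbour is used less often.
            while node.prev is not None and node.prev.freq < node.freq:
                left = node.prev
                a, b = left.prev, node.next
                # a <-> left <-> node <-> b  becomes  a <-> node <-> left <-> b
                if a is not None:
                    a.next = node
                else:
                    self.head = node
                node.prev, node.next = a, left
                left.prev, left.next = node, b
                if b is not None:
                    b.prev = left
                else:
                    self.tail = left

        def get_page_content(self, page_no):
            node = self.map.get(page_no)
            if node is not None:
                node.freq += 1
                self._move_left(node)  # usually O(1), O(n) worst case
            else:
                if len(self.map) >= self.capacity:
                    victim = self.tail  # prune the least frequently used
                    self.tail = victim.prev
                    if self.tail is not None:
                        self.tail.next = None
                    else:
                        self.head = None
                    del self.map[victim.key]
                node = _Node(page_no, "content of page " + str(page_no))
                self.map[page_no] = node
                self._append(node)  # new pages enter at the lowest priority
            return node.value
Eviction is just cutting off the tail, and the dictionary keeps lookups O(1),
which matches the cost profile described above.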
|