How to compress a Python dict to save into MySQL
Question: I have a large Python dict, maybe 10MB, and failed to save it into MySQL as
repr(dict). I tried zlib.compress, but the resulting string is not safe to store in
MySQL directly. How about compressing the dict, converting it to binary, and saving
it as a BLOB? Any code sample?
#update I tried struct with zlib; the size was reduced by about 50% for a string of
200k characters.
test:
import zlib, struct, ast

dic = {}
for i in xrange(20):
    dic[str(i)] = i

s = zlib.compress(repr(dic), 5)
bs = struct.pack('%ds' % len(s), s)
s2 = struct.unpack('%ds' % len(s), bs)[0]
s2 = zlib.decompress(s2)
dic2 = ast.literal_eval(s2)
assert dic2 == dic
Answer: You should be able to save your python dictionary as a BLOB field, but you
must use parametrized queries, [like
this](http://stackoverflow.com/a/1294496/1734130).
If your blob is larger than 1MB, you may have to increase
[`max_allowed_packet`](http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html)
on both server and client for BLOBs to work.
However, a better method would be to insert your dictionary into another table,
one row per key. You may need to create a new table just for this purpose, or you
may be able to reuse an existing one.
If that new table is properly indexed, you can take advantage of server being
able to quickly retrieve your data by key.
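A rough sketch of the compress-and-store round trip (using json instead of repr/literal_eval for safer serialization; the table and column names in the commented query are made up):

```python
import json
import zlib

def dict_to_blob(d, level=6):
    # Serialize to JSON, then compress; the result is binary data.
    return zlib.compress(json.dumps(d).encode('utf-8'), level)

def blob_to_dict(blob):
    return json.loads(zlib.decompress(blob).decode('utf-8'))

data = dict((str(i), i) for i in range(1000))
blob = dict_to_blob(data)
assert blob_to_dict(blob) == data

# The binary value can then be stored via a parametrized query, e.g.:
# cursor.execute("INSERT INTO cache (id, payload) VALUES (%s, %s)", (1, blob))
```

Note that `struct.pack('%ds' % len(s), s)` in the question is a no-op: it returns the same bytes, so the compressed string can go into the BLOB column directly.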
|
Removing lines from a file using python
Question: > **Possible Duplicate:**
> [Python 3 regular expression to find multiline comment](http://stackoverflow.com/questions/7081882/python-3-regular-expression-to-find-multiline-comment)
I need some input on how this can be done; I'd really appreciate your help. I
looked at other posts, but none of them matches my requirement.
[How to remove line from the file in python](http://stackoverflow.com/questions/7276454/how-to-remove-line-from-the-file-in-python)
http://stackoverflow.com/questions/2064184/remove-lines-from-textfile-with-python
I need to match a multi-line comment in a file based on a input string
provided.
Example:-
Let's say the file "test.txt" has the following comment; if inputstring="This
is a test, script written", this comment needs to be deleted from the file:
import os
import sys
import re
import fnmatch

def find_and_remove(haystack, needle):
    pattern = re.compile(r'/\*.*?' + needle + '.*?\*/', re.DOTALL)
    return re.sub(pattern, "", haystack)

for path, dirs, files in os.walk(sys.argv[1]):
    for fname in files:
        for pat in ['*.cpp', '*.c', '*.h', '*.txt']:
            if fnmatch.fnmatch(fname, pat):
                fullname = os.path.join(path, fname)
                with open(fullname, "r") as f:
                    find_and_remove(f, r"This is a test, script written")
Error:-
Traceback (most recent call last):
  File "comment.py", line 16, in <module>
    find_and_remove(f, r"This is a test, script written")
  File "comment.py", line 8, in find_and_remove
    return re.sub(pattern, "", haystack)
  File "/usr/lib/python2.6/re.py", line 151, in sub
    return _compile(pattern, 0).sub(repl, string, count)
TypeError: expected string or buffer
Answer: The first thing that came to mind when I saw the question was "state machine",
and whenever I think "state machine" in python, the first thing that comes to
mind is "generator" a.k.a. yield:
def skip_comments(f):
    """
    Emit all the lines that are not part of a multi-line comment.
    """
    is_comment = False
    for line in f:
        if line.strip().startswith('/*'):
            is_comment = True
        if line.strip().endswith('*/'):
            is_comment = False
        elif is_comment:
            pass
        else:
            yield line

def print_file(file_name):
    with file(file_name, 'r') as f:
        skipper = skip_comments(f)
        for line in skipper:
            print line,
EDIT: user1927396 upped the ante by specifying that it's just a specific block
to exclude, that contains specific text. Since it's inside the comment block,
we won't know up front if we need to reject the block or not.
My first thought was buffer. Ack. Poo. My second thought was a haunting
refrain I've been carrying in my head for 15 years and never used until now:
"stack of state machines" ...
def squelch_comment(f, first_line, exclude_if):
    """
    Comment is a multi-line comment that we may want to suppress
    """
    comment = [first_line]
    if not first_line.strip().endswith('*/'):
        for line in f:
            if exclude_if in line:
                comment = None
            if comment and len(comment):
                comment.append(line)
            if line.strip().endswith('*/'):
                break
    if comment:
        for comment_line in comment:
            yield '...' + comment_line
def skip_comments(f):
    """
    Emit all the lines that are not part of a multi-line comment.
    """
    for line in f:
        if line.strip().startswith('/*'):
            # hand off to the nested, comment-handling, state machine
            for comment_line in squelch_comment(f, line, 'This is a test'):
                yield comment_line
        else:
            yield line

def print_file(file_name):
    with file(file_name, 'r') as f:
        for line in skip_comments(f):
            print line,
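As a footnote on the TypeError from the question: re.sub expects a string, but the question's code passes the open file object f. A minimal sketch of that fix (reading the whole file first; re.escape added to guard any special characters in the needle) is:

```python
import re

def find_and_remove(haystack, needle):
    # haystack must be a string, not a file object
    pattern = re.compile(r'/\*.*?' + re.escape(needle) + r'.*?\*/', re.DOTALL)
    return pattern.sub("", haystack)

text = "keep\n/* This is a test, script written by me */\nalso keep\n"
cleaned = find_and_remove(text, "This is a test, script written")
assert "script written" not in cleaned
```

In the question's loop this would mean calling `find_and_remove(f.read(), ...)` and writing the returned string back to the file.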
|
How to grant access to google appengine page in Python?
Question: I have a simple app that serves a page, I want this page to be accessible only
to a couple of predetermined users already signed into Google. In fact, I want
access only through importHTML function in Google Spreadsheets.
How to implement this with minimum fuss? Here's the app:
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.out.write('/Statement.htm')

app = webapp2.WSGIApplication([('/Statement.htm', MainPage)],
                              debug=True)
Here's the app.yaml
application: *********
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  static_files: Statement.htm
  upload: Statement.htm
I've seen this in the tutorial:
from google.appengine.api import users

class MainPage(webapp2.RequestHandler):
    user = users.get_current_user()
But who is this "get_current_user"? I don't want any logins, I just want it to
be some kind of Google ID that I can check with if/else and serve out error
page if it doesn't match one or two allowed names.
Answer: You should give them admin permission from your Google App Engine dashboard
(under Administration -> Permissions) and add `login: admin` to your app.yaml file:
**EDIT:**
This is a way to do it using jinja2 template system.
app.yaml file:
application: statement
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /statement
  script: statement.app
  login: admin

libraries:
- name: jinja2
  version: latest
statement.py file:
import webapp2
import jinja2
import os

jinja_environment = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))

class Statement(webapp2.RequestHandler):
    def get(self):
        template = jinja_environment.get_template('Statement.htm')
        self.response.out.write(template.render())

app = webapp2.WSGIApplication([('/statement', Statement)],
                              debug=True)
Statement.htm file:
`You must have been granted admin permission.`
When you go to `http://127.0.0.1:8080/statement` you must log in as admin to
get there. All three files are in the same directory. If your main Python file
has another name, use that name with the `.app` suffix in the app.yaml file:
`script: your-python-file-name.app`
|
How to know which application ran a Python script
Question: Is there a way to know which application I'm running a Python script from?
I can run Python from multiple sources, like Textmate, Sublime Text 2 or
Terminal (I'm on Mac OS X). How can I know exactly which tool launched the
current Python app?
I've tried looking into the `os` and `inspect` modules, but couldn't find the
solution.
Answer: You can use [psutil](http://code.google.com/p/psutil/) to do this sort of
thing in a fairly cross-platform way:
import os  # needed for os.getpid()
import psutil

my_process = psutil.Process(os.getpid())
parent = my_process.parent
print "Parent process name: " + parent.name
From the `psutil` documentation:
> It currently supports Linux, Windows, OSX and FreeBSD, both 32-bit and
> 64-bit, with Python versions from 2.4 to 3.3 by using a single code base.
As well as the name you can get the executable path (which might be more
useful if you're selecting from a list of options) and also the full
command-line with which it was invoked. You can also get the username who ran
the process. See the [psutil classes documentation](http://code.google.com/p/psutil/wiki/Documentation#Classes)
for details of the parameters and the platforms on which they're available.
As an aside, if at all possible I would structure your code so that you
_don't_ have to modify your behaviour according to the calling application -
it would be much preferable to have the calling application pass in a
parameter which modifies the behaviour of the shared code. However, I
appreciate that sometimes other concerns take precedence over cleanliness of
code, so `psutil` should enable you to do what you requested.
|
How to re-display a QDialog after hiding it?
Question: I am working with python and pyqt. I have a dialog that I want to temporarily
hide. After calling
dlg.hide()
I try calling
dlg.show()
but nothing happens. It is never re-displayed.
I am new to pyqt so any help is greatly appreciated.
Thanks in advance.
Answer: You are looking for the `exec_` method, which makes the dialog modal. See
how this works:
#!/usr/bin/env python
#-*- coding:utf-8 -*-

from PyQt4 import QtCore, QtGui

class myDialog(QtGui.QDialog):
    def __init__(self, parent=None):
        super(myDialog, self).__init__(parent)
        self.dialog = None
        self.buttonShow = QtGui.QPushButton(self)
        self.buttonShow.setText("Show Dialog")
        self.buttonShow.clicked.connect(self.on_buttonShow_clicked)
        self.buttonHide = QtGui.QPushButton(self)
        self.buttonHide.setText("Close")
        self.buttonHide.clicked.connect(self.on_buttonHide_clicked)
        self.layout = QtGui.QVBoxLayout(self)
        self.layout.addWidget(self.buttonShow)
        self.layout.addWidget(self.buttonHide)

    @QtCore.pyqtSlot()
    def on_buttonHide_clicked(self):
        self.accept()

    @QtCore.pyqtSlot()
    def on_buttonShow_clicked(self):
        self.dialog = myDialog(self)
        self.dialog.exec_()

class myWindow(QtGui.QWidget):
    def __init__(self, parent=None):
        super(myWindow, self).__init__(parent)
        self.buttonShow = QtGui.QPushButton(self)
        self.buttonShow.setText("Show Dialog")
        self.buttonShow.clicked.connect(self.on_buttonShow_clicked)
        self.layout = QtGui.QVBoxLayout(self)
        self.layout.addWidget(self.buttonShow)
        self.dialog = myDialog(self)

    @QtCore.pyqtSlot()
    def on_buttonHide_clicked(self):
        self.dialog.accept()

    @QtCore.pyqtSlot()
    def on_buttonShow_clicked(self):
        self.dialog.exec_()

if __name__ == "__main__":
    import sys

    app = QtGui.QApplication(sys.argv)
    app.setApplicationName('myWindow')
    main = myWindow()
    main.show()
    sys.exit(app.exec_())
|
Interleave different length lists, eliminating duplicates and preserving order in Python
Question: I have two lists, lets say:
keys1 = ['A', 'B', 'C', 'D', 'E', 'H', 'I']
keys2 = ['A', 'B', 'E', 'F', 'G', 'H', 'J', 'K']
How do I create a merged list without duplicates that preserve the order of
both lists, inserting the missing elements where they belong? Like so:
merged = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K']
Note that the elements can be compared against equality but not ordered (they
are complex strings). **Update:** The elements can't be ordered by comparing
them, but they have a order based on their occurrence in the original lists.
**Update:** In case of contradiction (different order in both input lists),
any output containing all elements is valid. Of course with bonus points if
the solution shows 'common sense' in preserving most of the order.
**Update:** Again (as some comments still argue about it), the lists normally
don't contradict each other in terms of the order of the common elements. In
case they do, the algorithm needs to handle that error gracefully.
I started with a version that iterates over the lists with .next() to advance
just the list containing the unmatched elements, but .next() just doesn't know
when to stop.
merged = []
L = iter(keys1)
H = iter(keys2)
l = L.next()
h = H.next()
for i in range(max(len(keys1), len(keys2))):
    if l == h:
        if l not in merged:
            merged.append(l)
        l = L.next()
        h = H.next()
    elif l not in keys2:
        if l not in merged:
            merged.append(l)
        l = L.next()
    elif h not in keys1:
        if h not in merged:
            merged.append(h)
        h = H.next()
    else:  # just in case the input is badly ordered
        if l not in merged:
            merged.append(l)
        l = L.next()
        if h not in merged:
            merged.append(h)
        h = H.next()
print merged
This obviously doesn't work, as .next() will cause an exception for the
shortest list. Now I could update my code to catch that exception every time I
call .next(). But the code already is quite un-pythonic and this would clearly
burst the bubble.
Does anyone have a better idea of how to iterate over those lists to combine
the elements?
Bonus points if I can do it for three lists in one go.
Answer: What you need is basically what any merge utility does: It tries to merge two
sequences, while keeping the relative order of each sequence. You can use
Python's [`difflib`](http://docs.python.org/2/library/difflib.html) module to
diff the two sequences, and merge them:
from difflib import SequenceMatcher

def merge_sequences(seq1, seq2):
    sm = SequenceMatcher(a=seq1, b=seq2)
    res = []
    for (op, start1, end1, start2, end2) in sm.get_opcodes():
        if op == 'equal' or op == 'delete':
            # This range appears in both sequences, or only in the first one.
            res += seq1[start1:end1]
        elif op == 'insert':
            # This range appears in only the second sequence.
            res += seq2[start2:end2]
        elif op == 'replace':
            # There are different ranges in each sequence - add both.
            res += seq1[start1:end1]
            res += seq2[start2:end2]
    return res
Example:
>>> keys1 = ['A', 'B', 'C', 'D', 'E', 'H', 'I']
>>> keys2 = ['A', 'B', 'E', 'F', 'G', 'H', 'J', 'K']
>>> merge_sequences(keys1, keys2)
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K']
Note that the answer you expect is not necessarily the only possible one. For
example, if we change the order of sequences here, we get another answer which
is just as valid:
>>> merge_sequences(keys2, keys1)
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'J', 'K', 'I']
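For the three-list bonus, one approach is to fold the same pairwise merge over any number of lists (this reuses the merge_sequences idea from the answer; keys3 is an invented example, and contradictory orderings are resolved arbitrarily, as above):

```python
from difflib import SequenceMatcher

def merge_sequences(seq1, seq2):
    # Pairwise merge that keeps the relative order of each sequence.
    sm = SequenceMatcher(a=seq1, b=seq2)
    res = []
    for op, start1, end1, start2, end2 in sm.get_opcodes():
        if op == 'equal' or op == 'delete':
            res += seq1[start1:end1]
        elif op == 'insert':
            res += seq2[start2:end2]
        elif op == 'replace':
            res += seq1[start1:end1] + seq2[start2:end2]
    return res

def merge_many(*seqs):
    # Fold the pairwise merge across all sequences, starting from empty.
    result = []
    for seq in seqs:
        result = merge_sequences(result, seq)
    return result

keys1 = ['A', 'B', 'C', 'D', 'E', 'H', 'I']
keys2 = ['A', 'B', 'E', 'F', 'G', 'H', 'J', 'K']
keys3 = ['B', 'C', 'F', 'K']
merged = merge_many(keys1, keys2, keys3)
```

If the input lists contradict each other's ordering, a 'replace' opcode can in principle emit an element twice, so a final dedup pass may be wanted for pathological inputs.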
|
Complex Sorting - Multi Key
Question: I am using Python 2.6.2. I have a dictionary, `graph` which has tuple (source,
destination) as key with a certain weight as its value.
Would like to sort `graph` based on source in descending order of weight.
There could be more than one same source in the tuple of graph with different
destination.
graph= {(2, 18): 0, (5, 13): 2, (0, 10): 2, (0, 36): 1, (3, 14): 2, (5, 23): 2, (0, 24): 1, (4, 32): 7, (2, 29): 0, (3, 27): 2, (0, 33): 2, (5, 42): 2, (5, 11): 2, (5, 39): 3, (3, 9): 8, (0, 41): 4, (5, 16): 5, (4, 17): 7, (4, 44): 7, (0, 31): 2, (5, 35): 5, (4, 30): 7}
Created an intermediary dictionary, `source_dict` which has source as key and
accumulated weight based on source as its value, {source:weight}
source_dict={0: 12, 2: 0, 3: 12, 4: 28, 5: 21}
After doing the sort function as below,
source_desc_sort=sorted(source_dict.items(), key=lambda x: x[1], reverse=True)
sortkeys = dict((x[0], index) for index,x in enumerate(source_desc_sort))
graph_sort = sorted(graph.iteritems(), key=lambda x: sortkeys[x[0][0]])
I get a sorted graph, `graph_sort` as below,
graph_sort= [((4, 17), 7), ((4, 44), 7), ((4, 30), 7), ((4, 32), 7), ((5, 23), 2), ((5, 35), 5), ((5, 13), 2), ((5, 42), 2), ((5, 11), 2), ((5, 39), 3), ((5, 16), 5), ((0, 10), 2), ((0, 36), 1), ((0, 24), 1), ((0, 33), 2), ((0, 41), 4), ((0, 31), 2), ((3, 14), 2), ((3, 27), 2), ((3, 9), 8), ((2, 29), 0), ((2, 18), 0)]
Note that in `graph_sort` the order of keys for the same source is not
important, e.g. for tuples with 5 as source, ((5, 23), 2) can come before
((5, 35), 5) or after, even though the one has a lower value than the other.
Now this is my challenges which I am trying to solve since 2 days ago,
Have redefined `source_dict` to `source_dict_angle` with angle as added
information , {source:{angle:weight}}
source_dict_angle={0: {0: 2, 90: 4, 180: 6}, 2: {0: 0, 270: 0}, 3: {180: 4, 270: 8}, 4: {0: 7, 180: 21}, 5: {0: 6, 90: 10, 180: 2, 270: 3}}
I would like to do the same sorting as above, but based on the angle of the
source. For example, the tuples with 4 as source and with destination(s) at
angle 180 have to come first, as that angle has the highest value, i.e. 21,
followed by the tuples with 5 as source and with destination(s) at angle 90,
and so on.
Have intermediary dictionary, `relation_graph` which has position information
of destination relative to source, {source:{angle:destination:value}}
relation_graph={0: {0: {32: [1], 36: [1], 23: [1], 24: [1], 16: [1]}, 90: {3: [1], 41: [1], 44: [1]}, 180: {33: [1], 10: [1], 31: [1]}}, 1: {}, 2: {0: {18: [1]}, 270: {29: [1]}}, 3: {180: {27: [1], 14: [1], 31: [1]}, 270: {0: [1], 33: [1], 36: [1], 9: [1], 1: [1], 24: [1], 41: [1], 10: [1]}}, 4: {0: {32: [1], 18: [1], 23: [1]}, 180: {0: [1], 33: [1], 44: [1], 14: [1], 15: [1], 17: [1], 21: [1], 41: [1], 27: [1], 30: [1], 31: [1]}}, 5: {0: {42: [1], 11: [1], 23: [1]}, 90: {7: [1], 8: [1], 16: [1], 35: [1]}, 180: {0: [1], 13: [1], 14: [1], 44: [1]}, 270: {1: [1], 2: [1], 39: [1], 29: [1]}}}
Expected result
graph_sort_angle= [((4, 17), 7), ((4, 44), 7), ((4, 30), 7), ((5, 35), 5), ((5, 16), 5), ((3, 9), 8), ...
I have been unable to find a solution for this yet; I am trying to reuse the
solution I used for `graph_sort`, but it is not working well. I have a feeling
I must do it a different way.
Is there any way to use the same approach as I did for `graph_sort`?
I would appreciate it if you could give me some pointers.
I will continue to work on this till then.
**Additional Explanation 09 Jan 2013 9.30PM : Lennart Regebro**
I would like to sort the keys of `graph` (tuple) based on the descending
values from `source_dict_angle`.
`graph` is composed of (source, destination) but `source_dict_angle` only have
source and angle information, {source:{angle:weight}}. It does not have
destination information. We would not be able to sort the tuples from `graph`
as we did in the first example.
We are given (not calculated) `relation_graph`, where we have the source,
angle and destination information, {source:{angle:destination:value}} . We
will use this dictionary to see which source pairs with which destination
using which angle (0 deg, 90 deg, 180 deg or 270 deg).
So we will
1. First refer to `source_dict_angle` to know which is the highest value. In this given example, Source 4 with angle 180 degree has the highest value i.e 21
2. We compare all destination of Source 4 with angle 180 from `relation_graph`, i.e [0, 33, 44, 14, 15, 17, 21, 41, 27, 30, 31] if it exist in `graph`. If yes, we rank the (Source, Destination) tuple in first position, i.e (4, 17). This can also be done in another way, since we have to sort Source 4, we check if any of the destination of Source 4 in `graph`, exist in the angle 180 of Source 4 in `relation_graph`. If yes, we rank (Source, Destination) tuple in first position. As the same source could be paired with more than one destination using the same angle, it is possible for us to have more than one (Source, Destination) tuples. e.g (4, 17), (4, 44), and (4, 30). This means, Source 4 uses angle 180 to connect to Destination 17, Destination 44 and Destination 30, hence the 3 pair of tuples. The order between these 3 pair of tuples are not a problem.
3. Once this is done we go to the next highest value in `source_dict_angle` doing the above steps till all of the sources have been sorted in descending order.
Answer: Skip the intermediate dictionary, that's not necessary.
For sorting on the source, you just do:
graph_sort = sorted(graph.iteritems(), key=lambda x: x[0][0])
For sorting on the angle you do:
def angle(x):
    key, value = x
    source, destination = key
    return <insert angle calculation here>

graph_sort = sorted(graph.iteritems(), key=angle)
Update:
You need to stop using loads of different dictionaries to keep different data
that all belongs together. Create a class for the item that keeps all the
information.
From what I can gather from your question, you have a dictionary of graph items
which keeps source, destination and a weight. You then have another dictionary
which keeps the weight, again. You then have a third dictionary that keeps the
angle.
Instead just do this:
class Graph(object):
    def __init__(self, source, destination, weight, angle):
        self.source = source
        self.destination = destination
        self.weight = weight
        self.angle = angle
Your sorting problem is now trivial.
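As a rough illustration of why the sorting becomes trivial with such a class (the sample edges below are invented):

```python
class Graph(object):
    def __init__(self, source, destination, weight, angle):
        self.source = source
        self.destination = destination
        self.weight = weight
        self.angle = angle

edges = [
    Graph(4, 17, 7, 180),
    Graph(5, 13, 2, 180),
    Graph(3, 9, 8, 270),
]

# Sort by descending weight; extend the key with more attributes
# (e.g. angle) to break ties however the application needs.
by_weight = sorted(edges, key=lambda g: g.weight, reverse=True)
assert [(g.source, g.destination) for g in by_weight] == [(3, 9), (4, 17), (5, 13)]
```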
|
cxfreeze missing distutils module inside virtualenv
Question: When running a cxfreeze binary from a python3.2 project I am getting the
following runtime error:
/project/dist/project/distutils/__init__.py:13: UserWarning: The virtualenv distutils package at %s appears to be in the same location as the system distutils?
Traceback (most recent call last):
  File "/home/chrish/.virtualenvs/project/lib/python3.2/distutils/__init__.py", line 19, in <module>
    import dist
ImportError: No module named dist
Correspondingly there are several `distutils` entries in the missing modules
section of the cxfreeze output:
? dist imported from distutils
? distutils.ccompiler imported from numpy.distutils.ccompiler
? distutils.cmd imported from setuptools.dist
? distutils.command.build_ext imported from distutils
? distutils.core imported from numpy.distutils.core
...
I've tried forcing distutils to be included as a module, by both importing it
in my main python file and by adding it to a cxfreeze `setup.py` as:
options = {"build_exe": {"packages" : ["distutils"]} },
Neither approach worked. It seems likely that I've somehow broken the
virtualenv (distutils seems fundamental, and there is that warning about the
location of distutils), but repeating with a clean virtualenv replicated the
problem.
It may be worth noting that I installed cx-freeze by running
`$VIRTUAL_ENV/build/cx-freeze/setup.py install` as it doesn't install cleanly
in pip.
Answer: Summarising my comments:
The copy of `distutils` in the virtualenv is doing some bizarre things which
confuse cx_Freeze. The simple workaround is to freeze outside a virtualenv, so
that it uses the system copy of distutils.
On Ubuntu, Python 2 and 3 co-exist happily: just use `python3` to do anything
with Python 3. E.g. to install cx_Freeze under Python 3: `python3 setup.py
install`.
|
Bound Python method accessed with getattr throwing: "takes no arguments (1 given)"
Question: Something rather odd is happening in an interaction between bound methods,
inheritance, and getattr that I am failing to understand.
I have a directory setup like:
/a
    __init__.py
    start_module.py
    /b
        __init__.py
        imported_module.py
imported_module.py contains a number of class objects one of which is of this
form:
class Foo(some_parent_class):
    def bar(self):
        return [1,2,3]
A function in start_module.py uses inspect to get a list of strings
representing the classes in imported_module.py. "Foo" is the first of those
strings. The goal is to run bar in start_module.py using that string and
getattr.*
To do this I use code in start_module of the form:
for class_name in class_name_list:
    instance = getattr(b.imported_module, class_name)()
    function = getattr(instance, "bar")
    for doodad in [x for x in function()]:
        print doodad
Which does successfully start to iterate over the list comprehension, but on
the first string, "bar", I get a bizarre error. Despite bar being a bound
method, and so as far as I understand expecting an instance of Foo as an
argument, I am told:
TypeError: bar() takes no arguments (1 given)
This makes it seem like my call to function() is passing the Foo instance, but
the code is not expecting to receive it.
I really have no idea what is going on here and couldn't parse out an
explanation through looking on Google and Stack Overflow. Is the double
getattr causing some weird interaction? Is my understanding of class objects
in Python too hazy? I'd love to hear your thoughts.
*To avoid the anti-pattern, the real end objective is to have start_module.py automatically have access to all methods of name bar across a variety of classes similar to Foo in imported_module.py. I am doing this in the hopes of avoiding making my successors maintain a list for what could be a very large number of Foo-resembling classes.
Answered below: I think the biggest takeaways here are that inspect is very
useful, and that if there is a common cause for the bug you are experiencing,
make absolutely sure you've ruled that out before moving on to search for
other possibilities. In this case I overlooked the fact that the module I was
looking at that had correct code might not be the one being imported due to
recent edits to the file structure.
Answer: Since the sample code you posted is wrong, I'm guessing that you have another
module with the Foo class somewhere - maybe `bar` is defined like this
class Foo(object):
    def bar(): # <-- missing self parameter
        return [1,2,3]
This does give that error message
>>> Foo().bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: bar() takes no arguments (1 given)
|
How to run different Django projects under different Domains on one web server?
Question: I am trying to running two different projects in my web server. I want them to
be pointed by different domain. So I configured my httpd.conf like this:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerAdmin [email protected]
    ServerName www.web1.com
    WSGIScriptAlias / /var/www/web1/web1/wsgi.py
    <Directory /var/www/web1/web1>
        <Files wsgi.py>
            Order deny,allow
            Allow from all
        </Files>
    </Directory>
    ErrorLog logs/dummy-host.web1.com-error_log
</VirtualHost>

<VirtualHost *:80>
    ServerAdmin [email protected]
    ServerName www.web2.com
    ServerAlias web2.com *.web2.com
    WSGIScriptAlias / /var/www/web2/web2/wsgi.py
    <Directory /var/www/web2/web2>
        <Files wsgi.py>
            Order deny,allow
            Allow from all
        </Files>
    </Directory>
    ErrorLog logs/dummy-host.web2.com-error_log
</VirtualHost>
**With this configuration, I can run httpd and visit web1.com successfully.
However, when I try to visit web2.com, an "Internal Server Error" appears. So
I went to check the log; it seems that the "DJANGO_SETTINGS_MODULE" set by the
myblog (web2) project is overridden by that of web1. Does anybody know how to
solve this problem? Thank you!**
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97] mod_wsgi (pid=27246): Exception occurred processing WSGI script '/var/www/myblog/myblog/wsgi.py'.
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97] Traceback (most recent call last):
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]   File "/usr/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 219, in __call__
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]     self.load_middleware()
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]   File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 39, in load_middleware
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]     for middleware_path in settings.MIDDLEWARE_CLASSES:
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]   File "/usr/lib/python2.6/site-packages/django/utils/functional.py", line 184, in inner
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]     self._setup()
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]   File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 42, in _setup
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]     self._wrapped = Settings(settings_module)
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]   File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 95, in __init__
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97]     raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
[Thu Jan 10 00:22:38 2013] [error] [client 220.181.108.97] ImportError: Could not import settings 'myblog.settings' (Is it on sys.path?): No module named myblog.settings
Answer: One suggestion would be to create two virtual hosts in your Apache
configuration:
www.web1.com on port 80
and
www.web2.com on port 81
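The traceback (web2 importing web1's settings) is the classic symptom of both sites sharing one mod_wsgi interpreter. Besides separate ports, a commonly used alternative, sketched here and not part of the answer above, is to set DJANGO_SETTINGS_MODULE unconditionally in each project's wsgi.py instead of relying on os.environ.setdefault, or to isolate the sites with mod_wsgi's WSGIDaemonProcess/WSGIProcessGroup directives. The module names below mirror the question's layout:

```python
import os

# Simulate web1's process having already configured Django first:
os.environ["DJANGO_SETTINGS_MODULE"] = "web1.settings"

# web2's wsgi.py using setdefault() silently keeps web1's value:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "web2.settings")
assert os.environ["DJANGO_SETTINGS_MODULE"] == "web1.settings"

# Assigning unconditionally in web2's wsgi.py fixes it:
os.environ["DJANGO_SETTINGS_MODULE"] = "web2.settings"
assert os.environ["DJANGO_SETTINGS_MODULE"] == "web2.settings"
```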
|
Create Class Implicitly In Python
Question: I have a function whose job is to generate a Python class implicitly,
according to the name passed into the function. After that I want to create
fields and methods implicitly for the generated class too. I don't know how to
start. Can someone help...
Answer: Do you really need a class? For "types" created at runtime, maybe namedtuple
would be a solution.
from collections import namedtuple

MyType = namedtuple("MyType", "field1 method1")
x = MyType(field1="3", method1=lambda x: x+1)
print x.field1, x.method1(3)
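If a real class is required, Python's built-in `type` can create one at runtime from a name, base classes, and a dict of attributes and methods (the names below are invented for illustration):

```python
def make_class(name, **attrs):
    # type(name, bases, namespace) builds a new class object at runtime.
    return type(name, (object,), attrs)

Generated = make_class(
    'Generated',
    field1=3,
    method1=lambda self, x: x + 1,  # becomes a regular bound method
)

obj = Generated()
assert Generated.__name__ == 'Generated'
assert obj.field1 == 3
assert obj.method1(3) == 4
```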
|
How to use Python to find all isbn in a text file?
Question: I have a text file `text_isbn` with loads of ISBNs in it. I want to
write a script to parse it and write each ISBN to a new text file, one per
line.
Thus far I could write the regular expression for finding the ISBNs, but could
not get any further:
import re
list = open("text_isbn", "r")
regex = re.compile('(?:[0-9]{3}-)?[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,6}-[0-9]')
I tried to use the following but got an error (I guess the list is not in
proper format...)
parsed = regex.findall(list)
How to do the parsing and write it to a new file (output.txt)?
Here is a sample of the text in `text_isbn`
Praxisguide Wissensmanagement - 978-3-540-46225-5
Programmiersprachen - 978-3-8274-2851-6
Effizient im Studium - 978-3-8348-8108-3
Answer: How about
import re

isbn = re.compile("(?:[0-9]{3}-)?[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,6}-[0-9]")

matches = []
with open("text_isbn") as isbn_lines:
    for line in isbn_lines:
        matches.extend(isbn.findall(line))
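To finish the job the question asks for (writing each match to output.txt, one per line), a sketch along the same lines, using a small in-memory sample in place of the real file:

```python
import re

isbn = re.compile(r"(?:[0-9]{3}-)?[0-9]{1,5}-[0-9]{1,7}-[0-9]{1,6}-[0-9]")

sample = [
    "Praxisguide Wissensmanagement - 978-3-540-46225-5\n",
    "Programmiersprachen - 978-3-8274-2851-6\n",
]

matches = []
for line in sample:
    matches.extend(isbn.findall(line))

# Writing the results out, one ISBN per line:
# with open("output.txt", "w") as out:
#     out.write("\n".join(matches) + "\n")
assert matches == ["978-3-540-46225-5", "978-3-8274-2851-6"]
```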
|
Segment an image using python and PIL to calculate centroid and rotations of multiple rectangular objects
Question: I am using python and PIL to find the centroid and rotation of various
rectangles (and squares) in a 640x480 image, similar to this one 
So far my code works for a single rectangle in an image.
import Image, math

def find_centroid(im):
    width, height = im.size
    XX, YY, count = 0, 0, 0
    for x in xrange(0, width, 1):
        for y in xrange(0, height, 1):
            if im.getpixel((x, y)) == 0:
                XX += x
                YY += y
                count += 1
    return XX/count, YY/count

#Top Left Vertex
def find_vertex1(im):
    width, height = im.size
    for y in xrange(0, height, 1):
        for x in xrange(0, width, 1):
            if im.getpixel((x, y)) == 0:
                X1 = x
                Y1 = y
                return X1, Y1

#Bottom Left Vertex
def find_vertex2(im):
    width, height = im.size
    for x in xrange(0, width, 1):
        for y in xrange(height-1, 0, -1):
            if im.getpixel((x, y)) == 0:
                X2 = x
                Y2 = y
                return X2, Y2

#Top Right Vertex
def find_vertex3(im):
    width, height = im.size
    for x in xrange(width-1, 0, -1):
        for y in xrange(0, height, 1):
            if im.getpixel((x, y)) == 0:
                X3 = x
                Y3 = y
                return X3, Y3

#Bottom Right Vertex
def find_vertex4(im):
    width, height = im.size
    for y in xrange(height-1, 0, -1):
        for x in xrange(width-1, 0, -1):
            if im.getpixel((x, y)) == 0:
                X4 = x
                Y4 = y
                return X4, Y4

def find_angle(V1, V2, direction):
    side1 = math.sqrt(((V1[0]-V2[0])**2))
    side2 = math.sqrt(((V1[1]-V2[1])**2))
    if direction == 0:
        return math.degrees(math.atan(side2/side1)), 'Clockwise'
    return 90-math.degrees(math.atan(side2/side1)), 'Counter Clockwise'

#Find direction of Rotation; 0 = CW, 1 = CCW
def find_direction(vertices, C):
    high = 480
    for i in range(0, 4):
        if vertices[i][1] < high:
            high = vertices[i][1]
            index = i
    if vertices[index][0] < C[0]:
        return 0
    return 1

def main():
    im = Image.open('hopperrotated2.png')
    im = im.convert('1') # convert image to black and white
    print 'Centroid ', find_centroid(im)
    print 'Top Left ', find_vertex1(im)
    print 'Bottom Left ', find_vertex2(im)
    print 'Top Right', find_vertex3(im)
    print 'Bottom Right ', find_vertex4(im)
    C = find_centroid(im)
    V1 = find_vertex1(im)
    V2 = find_vertex3(im)
    V3 = find_vertex2(im)
    V4 = find_vertex4(im)
    vertices = [V1, V2, V3, V4]
    direction = find_direction(vertices, C)
    print 'angle: ', find_angle(V1, V2, direction)

if __name__ == '__main__':
    main()
Where I am having problems is when there is more than one object in the image.
I know PIL has a find_edges method that gives an image of just the edges, but
I have no idea how to use this new edge image to segment the image into the
separate objects.
from PIL import Image, ImageFilter
im = Image.open('hopperrotated2.png')
im1 = im.filter(ImageFilter.FIND_EDGES)
im1 = im1.convert('1')
print im1
im1.save("EDGES.jpg")
If I can use the edges to segment the image into individual rectangles, then I
can just run my first bit of code on each rectangle to get the centroid and
rotation.
But what would be better is to be able to use the edges to calculate rotation
and centroid of each rectangle without needing to split the image up.
Everyone's help is greatly appreciated!
Answer: You need to identify each object before finding the corners. You only need the
border of the objects, so you could also reduce your initial input to that.
Then it is only a matter of following each distinct border to find your
corners, the centroid is directly found after you know each distinct border.
Using the code below, here is what you get (centroid is the red point, the
white text is the rotation in degrees):

Note that your input is not binary, so I used a really simple threshold for
that. Also, the following code is the simplest way to achieve this, there are
faster methods in any decent library.
import sys
import math
from PIL import Image, ImageOps, ImageDraw

orig = ImageOps.grayscale(Image.open(sys.argv[1]))
orig_bin = orig.point(lambda x: 0 if x < 128 else 255)
im = orig_bin.load()
border = Image.new('1', orig.size, 'white')
width, height = orig.size
bim = border.load()

# Keep only border points
for x in xrange(width):
    for y in xrange(height):
        if im[x, y] == 255:
            continue
        if im[x+1, y] or im[x-1, y] or im[x, y+1] or im[x, y-1]:
            bim[x, y] = 0
        else:
            bim[x, y] = 255

# Find each border (the trivial dummy way).
def follow_border(im, x, y, used):
    work = [(x, y)]
    border = []
    while work:
        x, y = work.pop()
        used.add((x, y))
        border.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1),
                       (1, 1), (-1, -1), (1, -1), (-1, 1)):
            px, py = x + dx, y + dy
            if im[px, py] == 255 or (px, py) in used:
                continue
            work.append((px, py))
    return border

used = set()
border = []
for x in xrange(width):
    for y in xrange(height):
        if bim[x, y] == 255 or (x, y) in used:
            continue
        b = follow_border(bim, x, y, used)
        border.append(b)

# Find the corners and centroid of each rectangle.
rectangle = []
for b in border:
    xmin, xmax, ymin, ymax = width, 0, height, 0
    mean_x, mean_y = 0, 0
    b = sorted(b)
    top_left, bottom_right = b[0], b[-1]
    for x, y in b:
        mean_x += x
        mean_y += y
    centroid = (mean_x / float(len(b)), mean_y / float(len(b)))

    b = sorted(b, key=lambda x: x[1])
    curr = 0
    while b[curr][1] == b[curr + 1][1]:
        curr += 1
    top_right = b[curr]
    curr = len(b) - 1
    while b[curr][1] == b[curr - 1][1]:
        curr -= 1
    bottom_left = b[curr]

    rectangle.append([
        [top_left, top_right, bottom_right, bottom_left], centroid])

result = orig.convert('RGB')
draw = ImageDraw.Draw(result)
for corner, centroid in rectangle:
    draw.line(corner + [corner[0]], fill='red', width=2)
    cx, cy = centroid
    draw.ellipse((cx - 2, cy - 2, cx + 2, cy + 2), fill='red')
    rotation = math.atan2(corner[0][1] - corner[1][1],
                          corner[1][0] - corner[0][0])
    rdeg = math.degrees(rotation)
    draw.text((cx + 10, cy), text='%.2f' % rdeg)
result.save(sys.argv[2])
Python Duplicate Removal
Question: I have a question about removing duplicates in Python. I've read a bunch of
posts but have not yet been able to solve it. I have the following csv file:
**EDIT**
**Input:**
ID, Source, 1.A, 1.B, 1.C, 1.D
1, ESPN, 5,7,,,M
1, NY Times,,10,12,W
1, ESPN, 10,,Q,,M
**Output should be:**
ID, Source, 1.A, 1.B, 1.C, 1.D, duplicate_flag
1, ESPN, 5,7,,,M, duplicate
1, NY Times,,10,12,W, duplicate
1, ESPN, 10,,Q,,M, duplicate
1, NY Times, 5 (or 10 doesn't matter which one),7, 10, 12, W, not_duplicate
In words, if the ID is the same, take values from the row with source "NY
Times", if the row with "NY Times" has a blank value and the duplicate row
from the "ESPN" source has a value for that cell, take the value from the row
with the "ESPN" source. For outputting, flag the original two lines as
duplicates and create a third line.
To clarify a bit further, since I need to run this script on many different
csv files with different column headers, I can't do something like:
def main():
    with open(input_csv, "rb") as infile:
        input_fields = ("ID", "Source", "1.A", "1.B", "1.C", "1.D")
        reader = csv.DictReader(infile, fieldnames=input_fields)
        with open(output_csv, "wb") as outfile:
            output_fields = ("ID", "Source", "1.A", "1.B", "1.C", "1.D", "d_flag")
            writer = csv.DictWriter(outfile, fieldnames=output_fields)
            writer.writerow(dict((h, h) for h in output_fields))
            next(reader)
            first_row = next(reader)
            for next_row in reader:
                #stuff
Because I want the program to run on the first two columns independently of
whatever other columns are in the table. In other words, "ID" and "Source"
will be in every input file, but the rest of the columns will change depending
on the file.
Would greatly appreciate any help you can provide! FYI, "Source" can only be:
NY Times, ESPN, or Wall Street Journal and the order of priority for
duplicates is: take NY Times if available, otherwise take ESPN, otherwise take
Wall Street Journal. This holds for every input file.
Answer: The below code reads all of the records into a big dictionary whose keys are
their identifiers and whose values are dictionaries mapping source names to
entire data rows. Then it iterates through the dictionary and gives you the
output you asked for.
import csv

header = None
idfld = None
sourcefld = None
record_table = {}

with open('input.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        row = [x.strip() for x in row]
        if header is None:
            header = row
            for i, fld in enumerate(header):
                if fld == 'ID':
                    idfld = i
                elif fld == 'Source':
                    sourcefld = i
            continue
        key = row[idfld]
        sourcename = row[sourcefld]
        if key not in record_table:
            record_table[key] = {sourcename: row, "all_rows": [row]}
        else:
            if sourcename in record_table[key]:
                cur_row = record_table[key][sourcename]
                for i, fld in enumerate(row):
                    if cur_row[i] == '':
                        record_table[key][sourcename][i] = fld
            else:
                record_table[key][sourcename] = row
            record_table[key]["all_rows"].append(row)

print ', '.join(header) + ', duplicate_flag'
for recordid in record_table:
    rowdict = record_table[recordid]
    final_row = [''] * len(header)
    # count input rows for this ID; len(rowdict) would also count
    # the bookkeeping "all_rows" key itself
    rowcount = len(rowdict["all_rows"])
    for sourcetype in ['NY Times', 'ESPN', 'Wall Street Journal']:
        if sourcetype in rowdict:
            row = rowdict[sourcetype]
            for i, fld in enumerate(row):
                if final_row[i] != '':
                    continue
                if fld != '':
                    final_row[i] = fld
    if rowcount > 1:
        for row in rowdict["all_rows"]:
            print ', '.join(row) + ', duplicate'
    print ', '.join(final_row) + ', not_duplicate'
python drag and drop explorer files to tkinter entry widget
Question: I'm fairly new to Python. I'm trying to input a file name (complete with full
path) to a TKinter entry widget. Since the path to the file name can be very
long I would like to be able to drag and drop the file directly from Windows
Explorer. In Perl I have seen the following:
use Tk::DropSite;
.
.
my $mw = new MainWindow;
$top = $mw->Toplevel;
$label_entry = $top->Entry(-width => '45',. -background => 'ivory2')->pack();
$label_entry->DropSite(-dropcommand => \&drop,-droptypes => 'Win32',);
Is there something similar I can do using TKinter in Python?
Answer: Tk does not have any command to handle that, and Python doesn't include any
extra Tk extension to perform drag & drop inter-applications, therefore you
need an extension to perform the operation. Tkdnd (the Tk extension at
<http://sourceforge.net/projects/tkdnd>, not the Tkdnd.py module) works for
me. To use it from Python, a wrapper is required. Quickly searching for one,
it seems <http://mail.python.org/pipermail/tkinter-
discuss/2005-July/000476.html> contains such code. I did another one because I
didn't like that other one. The problem with my wrapper is that it is highly
untested, in fact I only used the function `bindtarget` and only for 10
seconds or so.
With the wrapper below, you can create some widget and announce that it
supports receiving dragged files. Here is one example:
# The next two lines are not necessary if you installed TkDnd
# in a proper place.
import os
os.environ['TKDND_LIBRARY'] = DIRECTORYTOTHETKDNDBINARY
import Tkinter
from untested_tkdnd_wrapper import TkDND
root = Tkinter.Tk()
dnd = TkDND(root)
entry = Tkinter.Entry()
entry.pack()
def handle(event):
    event.widget.insert(0, event.data)
dnd.bindtarget(entry, handle, 'text/uri-list')
root.mainloop()
And here is the code for `untested_tkdnd_wrapper.py`:
import os
import Tkinter
def _load_tkdnd(master):
    tkdndlib = os.environ.get('TKDND_LIBRARY')
    if tkdndlib:
        master.tk.eval('global auto_path; lappend auto_path {%s}' % tkdndlib)
    master.tk.eval('package require tkdnd')
    master._tkdnd_loaded = True

class TkDND(object):
    def __init__(self, master):
        if not getattr(master, '_tkdnd_loaded', False):
            _load_tkdnd(master)
        self.master = master
        self.tk = master.tk

    # Available pre-defined values for the 'dndtype' parameter:
    #   text/plain
    #   text/plain;charset=UTF-8
    #   text/uri-list
    def bindtarget(self, window, callback, dndtype, event='<Drop>', priority=50):
        cmd = self._prepare_tkdnd_func(callback)
        return self.tk.call('dnd', 'bindtarget', window, dndtype, event,
                            cmd, priority)

    def bindtarget_query(self, window, dndtype=None, event='<Drop>'):
        return self.tk.call('dnd', 'bindtarget', window, dndtype, event)

    def cleartarget(self, window):
        self.tk.call('dnd', 'cleartarget', window)

    def bindsource(self, window, callback, dndtype, priority=50):
        cmd = self._prepare_tkdnd_func(callback)
        self.tk.call('dnd', 'bindsource', window, dndtype, cmd, priority)

    def bindsource_query(self, window, dndtype=None):
        return self.tk.call('dnd', 'bindsource', window, dndtype)

    def clearsource(self, window):
        self.tk.call('dnd', 'clearsource', window)

    def drag(self, window, actions=None, descriptions=None,
             cursorwin=None, callback=None):
        cmd = None
        if cursorwin is not None:
            if callback is not None:
                cmd = self._prepare_tkdnd_func(callback)
        self.tk.call('dnd', 'drag', window, actions, descriptions,
                     cursorwin, cmd)

    _subst_format = ('%A', '%a', '%b', '%D', '%d', '%m', '%T',
                     '%W', '%X', '%Y', '%x', '%y')
    _subst_format_str = " ".join(_subst_format)

    def _prepare_tkdnd_func(self, callback):
        funcid = self.master.register(callback, self._dndsubstitute)
        cmd = ('%s %s' % (funcid, self._subst_format_str))
        return cmd

    def _dndsubstitute(self, *args):
        if len(args) != len(self._subst_format):
            return args

        def try_int(x):
            x = str(x)
            try:
                return int(x)
            except ValueError:
                return x

        A, a, b, D, d, m, T, W, X, Y, x, y = args
        event = Tkinter.Event()
        event.action = A        # Current action of the drag and drop operation.
        event.action_list = a   # Action list supported by the drag source.
        event.mouse_button = b  # Mouse button pressed during the drag and drop.
        event.data = D          # The data that has been dropped.
        event.descr = d         # The list of descriptions.
        event.modifier = m      # The list of modifier keyboard keys pressed.
        event.dndtype = T
        event.widget = self.master.nametowidget(W)
        event.x_root = X        # Mouse pointer x coord, relative to the root win.
        event.y_root = Y
        event.x = x             # Mouse pointer x coord, relative to the widget.
        event.y = y

        event.action_list = str(event.action_list).split()
        for name in ('mouse_button', 'x', 'y', 'x_root', 'y_root'):
            setattr(event, name, try_int(getattr(event, name)))

        return (event, )
Together with Tkdnd, you will find a `tkdnd.tcl` program which is a higher
level over the own C extension it provides. I didn't wrap this higher level
code, but it could be more interesting to replicate it in Python than to use
this lower level wrapper.
Python Random Random
Question: How do I make each random number be lower than the random number before it?
if Airplane==1:
while icounter<4:
ifuelliter=random.randrange(1,152621)
#litter/kilometer
LpK=152620/13500
km=LpK*ifuelliter
ipca=random.randrange(0,50)
ipcb=random.randrange(0,50)
ipcc=random.randrange(0,812)
#3D space distance calculation
idstance= math.sqrt((icba-ipca)**2 + (icbb-ipcb)**2 + (icbc-ipcc)**2)
totaldist=km-idstance
if totaldist>0:
print "You have enoph fuel to get to New York AirPort"
print ifuelliter,LpK,km,ipca,ipcb,ipcc,idstance
icounter=3
if totaldist<=0:
print "You dont have enoph fuel to get to New York AirPort please go to the nearest one or you will die"
print ifuelliter,LpK,km,ipca,ipcb,ipcc,idstance
icounter=icounter+1
What I mean is that ipca, ipcb, and ipcc need to keep getting smaller on each
iteration, not just be another random number.
Answer: Just set the second parameter of randrange with the previous value:
import random
a = random.randrange(0,50)
b = random.randrange(0,a)
while b > a:
    b = random.randrange(0,a)
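If the goal is a whole sequence where every new value is strictly below the previous one, the same idea extends to a loop. A sketch (the start value 50 and the variable name are made up for illustration):

```python
import random

random.seed(0)                 # fixed seed only to make reruns repeatable
values = [50]                  # hypothetical starting value
while values[-1] > 1:
    # randrange(0, n) always returns a value strictly below n,
    # so each element is smaller than the one before it
    values.append(random.randrange(0, values[-1]))

print(values)
```

Each call shrinks the upper bound, so the series can only go down, which seems to be what ipca, ipcb, and ipcc need.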
By the way, be careful with the indentation at the beginning of your code:

if Airplane == 1:
while ....

Should be

if Airplane == 1:
    while ....
Python Week Numbers where all weeks have 7 days, regardless of year rollover
Question: I have an application where I need to measure the week-number of the year, and
I want all weeks to have 7 days, regardless of whether the days are in
separate years.
For example, I want all the days from Dec 30, 2012 to Jan 5, 2013 to be in the
same week.
But this is not straight forward to do in python, because as the `datetime`
documentation states [here](http://docs.python.org/2/library/datetime.html):
%U Week number of the year (Sunday as the first day of the week)
as a decimal number [00,53]. All days in a new year preceding the
first Sunday are considered to be in week 0.
I do not want 'All days in a new year preceding the first Sunday' to be
considered to be in week 0. Week 0 will have less than 7 days, as will the
last week of 2012.
Therefore Python returns:
import datetime
datetime.date(2012, 12, 31).strftime('%Y-%U')
>>> 2012-53
import datetime
datetime.date(2013, 01, 01).strftime('%Y-%U')
>>> 2013-00
Even though those two days are a Monday and Tuesday, respectively, they should
be in the same week when a week is considered to start on Sunday and end on
Saturday.
Instead, I want functionality that mirrors what MySQL does with `yearweek` in
mode 2 (doc [here](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-
functions.html#function_yearweek)).
For example,
mysql> select yearweek('2013-01-01', 2) as week;
+--------+
| week |
+--------+
| 201253 |
+--------+
1 row in set (0.64 sec)
Note that even though the date is in 2013, the week is considered to be
201253, guaranteeing that the last week of 2012 will 7 days.
Is this already implemented in Python?
Calendar included below for reference:
   December 2012
Mo Tu We Th Fr Sa Su
                1  2
 3  4  5  6  7  8  9
10 11 12 13 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
31

   January 2013
Mo Tu We Th Fr Sa Su
    1  2  3  4  5  6
 7  8  9 10 11 12 13
14 15 16 17 18 19 20
21 22 23 24 25 26 27
28 29 30 31
Answer: The **isoweek** module provides everything you need.
From the documentation:
from isoweek import Week
w = Week(2011, 20)
print "Week %s starts on %s" % (w, w.monday())
print "Current week number is", Week.thisweek().week
print "Next week is", Week.thisweek() + 1
<http://pypi.python.org/pypi/isoweek/1.1.0>
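If installing a package is not an option, the standard library's `date.isocalendar()` gives equivalent ISO week numbers. Note that ISO weeks run Monday through Sunday, so the boundary differs from MySQL's Sunday-based mode 2, but every ISO week always contains exactly 7 days, which handles the year rollover:

```python
from datetime import date

# Both dates land in the same 7-day ISO week despite the year change.
w_dec = tuple(date(2012, 12, 31).isocalendar())[:2]  # (ISO year, ISO week)
w_jan = tuple(date(2013, 1, 1).isocalendar())[:2]
print(w_dec)  # (2013, 1)
print(w_jan)  # (2013, 1)
```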
Python bytes are missing after recv from UDP socket
Question:
import socket
import sys
import datetime
import os

try:
    username = "root"
    password = "Apacheah64"
    db_name = "DB_GPS"
    table_name = "Tbl_GPS"
    host = ""
    port = 6903
    buf = 4096
except IndexError:
    sys.exit(1)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind((host, port))

while 1:
    data = s.recv(buf)
    if not data:
        print("Client has exited!")
        break
    else:
        print("\nReceived message '", data, "'")

# Close socket
s.close()
The data I receive should be 43 bytes, but what I actually received from the
client is

Received message ' b'\x0f\x00\x00\x00NR09G05164\x00' '

which is only 15 bytes. Why?
Below is Original Bytes 43 bytes
`00 00 00 01 00 06 ec 44 76 a6 21 c2 00 00 08 00 45 00 00 2b 08 43 00 00 34 11
81 2b cb 52 50 db 67 0d 7a 19 24 2d 1a f7 00 17 83 26 0f 00 00 00 4e 52 30 39
47 30 35 31 36 34 00`
Answer: Maybe the missing bytes are simply not displayed by the `print`? Check the
`len(data)` value.

You can't receive an incomplete packet over UDP; it delivers the full datagram
or nothing at all.

Also note that the 43 bytes in your hex dump appear to count the packet
headers as well: 0x002b = 43 is the IP total length, which includes the
20-byte IP header and the 8-byte UDP header. That leaves exactly 15 bytes of
application payload, which is what `recv()` returned.
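The full-datagram behaviour is easy to verify over loopback. A minimal sketch, reusing the 15 payload bytes from your dump:

```python
import socket

# A receiver bound to an OS-chosen port on localhost.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))
addr = recv_sock.getsockname()

# Send one datagram carrying the 15-byte application payload.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b'\x0f\x00\x00\x00NR09G05164\x00'
send_sock.sendto(payload, addr)

# recvfrom() returns the entire datagram in one call, never a fragment.
data, _ = recv_sock.recvfrom(4096)
print(len(data))  # 15
recv_sock.close()
send_sock.close()
```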
urllib2 python exception in Ubuntu when loading a https json file
Question: I'm trying to load a json file but it's throwing an exception:
<urlopen error [Errno 104] Connection reset by peer>
This is my code (I executed it on the shell for testing/debugging purposes):
>>> import urllib2
>>> uri = 'https://api.mercadolibre.com/sites/MLA/search?q=camisas%20columbia'
>>> req = urllib2.Request(uri)
>>> resp = urllib2.urlopen(req)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/opt/bitnami/python/lib/python2.7/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/opt/bitnami/python/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/opt/bitnami/python/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/opt/bitnami/python/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/opt/bitnami/python/lib/python2.7/urllib2.py", line 1215, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/opt/bitnami/python/lib/python2.7/urllib2.py", line 1177, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 104] Connection reset by peer>
I'm using Ubuntu 12.04 (64-bit) Bitnami's Django Stack 1.4.3-0 virtualized on
VMWare.
But, I was curious and tried the same exact code in my host machine (Windows 7
64-bits) where I also have **THE SAME EXACT VERSION** of python installed and
guess what... it worked flawlessly.
Here's the windows output:
C:\Users\Kevin>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> uri = "https://api.mercadolibre.com/sites/MLA/search?q=camisas%20columbia"
>>> req = urllib2.Request(uri)
>>> resp = urllib2.urlopen(req)
>>> resp.read()
'{"site_id":"MLA","query":"camisas columbia","paging": {"total":43,"offset":0,"limit":50},"results": [{"id":"MLA445360462","site_id":"MLA","title":"Ca
misa Columbia Silver Rider Hombre Tecnolog\xc3\xadas De Omni-dry" [...]
How can I fix this issue in Ubuntu? I have tried changing the user agent and
stuff in the request but the result was always the same on Ubuntu.
Also tried manually copying the json file and uploaded it to dropbox and ran
the same code as above but with the dropbox url and it worked flawlessly on
both systems.
Hope you guys can help me, this is driving me crazy and my whole project
depends on that freaking api :(
Thanks in advance and sorry for my poor english.
Answer: I found the root of the issue:
<https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/965371>
os.environ['http_proxy'] not working
Question: > **Possible Duplicate:**
> [Is it possible to change the Environment of a parent process in
> python?](http://stackoverflow.com/questions/263005/is-it-possible-to-change-
> the-environment-of-a-parent-process-in-python)
I am using python 2.4.3. I tried to set my http_proxy variable. Please see the
below example and please let me know what is wrong. the variable is set
according to python, however when i get out of the interactive mode. The
http_proxy variable is still not set. I have tried it in a script and also
tried it with other variables but i get the same result. No variable is
actually set up in the OS.
Python 2.4.3 (#1, May 1 2012, 13:52:57)
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.environ['http_proxy']="abcd"
>>> os.system("echo $http_proxy")
abcd
0
>>> print os.environ['http_proxy']
abcd
>>>
user@host~$ echo $http_proxy
user@host~$
Answer: When you run this code, the environment variable you set has a scope limited
to the current Python process. Once you exit (leave Python's interactive
mode), the variable disappears.

As your `os.system("echo $http_proxy")` call indicates, if you want to use
such environment variables you need to run the external program from within
that same process. The variables are inherited by child processes and can be
used by them.
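The inheritance into child processes can be shown directly. A sketch using a throwaway variable name:

```python
import os
import subprocess
import sys

# Set a variable in this Python process only.
os.environ['DEMO_PROXY'] = 'abcd'

# A child process started from here inherits it automatically.
out = subprocess.check_output(
    [sys.executable, '-c', "import os; print(os.environ['DEMO_PROXY'])"])
print(out.decode().strip())  # abcd
```

The parent shell that launched Python never sees the variable, which is why `echo $http_proxy` prints nothing after you quit the interpreter.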
urllib2.URLError: <urlopen error unknown url type: c>
Question: I am using the below code to scrape over XFN content from web page
<http://ajaxian.com> but I am gatting the below error:
Traceback (most recent call last): File "C:\Users\Somnath\workspace\us.chakra.social.web.microformat\src\microformats_xfn_scrape.py", line 40, in <module>
page = urllib2.urlopen(URL)
File "C:\Python27\lib\urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 394, in open
response = self._open(req, data)
File "C:\Python27\lib\urllib2.py", line 417, in _open
'unknown_open', req)
File "C:\Python27\lib\urllib2.py", line 372, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 1232, in unknown_open
raise URLError('unknown url type: %s' % type)
urllib2.URLError: <urlopen error unknown url type: c>
My code is as follows:
'''
Created on Jan 11, 2013
@author: Somnath
'''
# Scraping XFN content from a web page
# -*-coding: utf-8 -*-
import sys
import urllib2
import HTMLParser
from BeautifulSoup import BeautifulSoup
# Try http://ajaxian.com
URL = sys.argv[0]
XFN_TAGS = set([
'colleague',
'sweetheart',
'parent',
'co-resident',
'co-worker',
'muse',
'neighbor',
'sibling',
'kin',
'child',
'date',
'spouse',
'me',
'acquaintance',
'met',
'crush',
'contact',
'friend',
])
#try:
page = urllib2.urlopen(URL)
#except urllib2.URLError:
# print 'Failed to fetch ' + item
#try:
soup = BeautifulSoup(page)
#except HTMLParser.HTMLParseError:
# print 'Failed to parse ' + item
anchorTags = soup.findAll('a')

for a in anchorTags:
    if a.has_key('rel'):
        if len(set(a['rel'].split()) & XFN_TAGS) > 0:
            tags = a['rel'].split()
            print a.contents[0], a['href'], tags
I am running PyDev under Eclipse and is using Run As --> Python Run and set
the Runtime Configuration with argument "http://ajaxian.com/". Can anybody
suggest where I am getting wrong?
One more thing: I have commented the two try blocks in my code because it was
giving an error undefined variable : item. If I want to re-include the try-
except blocks, should I give a blank definition of variable, item outside the
try blocks? How can I get rid of that problem?
Answer: As you noticed, `sys.argv[0]` holds the path of your script. That is because
when you call your script like

python microformats_xfn_scrape.py <some_argument>

index 0 of sys.argv is the name of the script itself, not the argument. What
you need to do is call your script with the `<url>` parameter, like:

python microformats_xfn_scrape.py http://www.ajaxian.com/

and in your script change `sys.argv[0]` to `sys.argv[1]`, as the argument's
index is 1.
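A quick way to see the indexing for yourself. A sketch that writes a throwaway script to a temporary file and runs it with one argument:

```python
import os
import subprocess
import sys
import tempfile

# A tiny script that reports its own argv entries.
code = "import sys\nprint(sys.argv[0])\nprint(sys.argv[1])\n"
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(code)
    path = f.name
try:
    out = subprocess.check_output(
        [sys.executable, path, 'http://www.ajaxian.com/'])
finally:
    os.remove(path)

first, second = out.decode().splitlines()
print(first.endswith('.py'))  # True: argv[0] is the script path itself
print(second)                 # http://www.ajaxian.com/
```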
Best practice for starting/stopping daemons as part of a unittest (using pytest)
Question: Functional testing of code often requires external resources like e.g. a
database.
There are basically two approaches:
* assuming that a resource (e.g. database) is always running and is always available
* start/stop the related resource as part of the test
In the "old" world of Python unittest(2) world setUp() and tearDown() methods
could be used for controlling the services.
With py.test the world became more complicated and the concept of setUp() and
tearDown() methods has been replaced with the funcarg magic for implementing
fixtures. Honestly this approach is broken - at least as a replacement for
setUp/tearDown methods.
What is the recommended way for controlling services and resources in a
project where py.test is used?
Should we continue writing our tests (at least where needed) with
setUp/tearDown methods or is there a better pattern?
Answer: pytest supports xUnit style setup/teardown methods, see
<http://pytest.org/latest/xunit_setup.html> so if you prefer this style you
can just use it.
Using <http://pytest.org/latest/fixture.html> one can also instantiate a
"session" scoped fixture which can instantiate external processes for the
whole test run and return a connection object. Code would roughly look like
this:
# content of conftest.py
import pytest

@pytest.fixture(scope="session")
def connection():
    ... instantiate process ...
    return connection_to_process_or_url
and test files using it like this:
# content of test_conn.py
def test_conn_ok(connection):
    ... work with connection ...
If you want to keep up the service between test runs, you will need to write
some logic (storing PID, checking it's alive, if no PID or not alive, start a
new process) yourself for the time being (a future release might include such
support code).
cannot add or apply the user defined package of TCL in Python
Question: I have a TCL script that requires a user defined package within, when I run
the TCL through python by the below script:
import subprocess

p = subprocess.Popen(
    "tclsh tcltest.tcl",
    shell=True,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print stdout
print stderr
print stderr
returns the following error:
can't find package __teapot__
while executing
"package require __teapot__"
The Tcl script works fine when run directly in tclsh! I believe something is
wrong in my setup, so that Python doesn't pick up the package.
Answer: I wonder if the environment variables are not passed in explicitly. How about:
import subprocess
import os

p = subprocess.Popen(
    "tclsh tcltest.tcl",
    env=os.environ,
    shell=True,
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print stdout
print stderr
### Update
Since that does not make any different, stick the following line into the
first line of _tcltest.tcl_ file and compare the outputs:
puts ">>$auto_path<<"
I suspect the `auto_path` variable is different between the two cases. This
variable is one way Tcl uses to locate packages.
Python Unit Testing a loop function
Question: This is a followup question from
[1](http://stackoverflow.com/questions/14282783/call-a-python-unittest-from-
another-script-and-export-all-the-error-messages#comment19835851_14282783)
I need to test the function with two given cases in a loop. However, based on
the printed results, it seems like only the first iteration is checked? Is
this related to `runner.run(unittest.makeSuite(MyTestCase))`?
import unittest
from StringIO import StringIO
import rice_output

class MyTestCase(unittest.TestCase):
    def setUp(self):
        #####Pre-defined inputs########
        self.dsed_in=[1,2]
        self.a_in=[2,3]
        self.pb_in=[3,4]
        #####Pre-defined outputs########
        self.msed_out=[6,24]
        #####TestCase run variables########
        self.tot_iter=len(self.a_in)

    def testMsed(self):
        for i in range(self.tot_iter):
            print i
            fun = rice_output.msed(self.dsed_in[i],self.a_in[i],self.pb_in[i])
            value = self.msed_out[i]
            testFailureMessage = "Test of function name: %s iteration: %i expected: %i != calculated: %i" % ("msed",i,value,fun)
            return self.assertEqual(round(fun,3),round(self.msed_out[i],3),testFailureMessage)

from pprint import pprint

stream = StringIO()
runner = unittest.TextTestRunner(stream=stream)
result = runner.run(unittest.makeSuite(MyTestCase))
print 'Tests run ', result.testsRun
print 'Errors ', result.errors
Here is the output:
0
Tests run 1
Errors []
[]
Test output
testMsed (__main__.MyTestCase) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Any suggestions? Thanks!
Answer: Remove the return statement
def testMsed(self):
    for i in range(self.tot_iter):
        print i
        fun = rice_output.msed(self.dsed_in[i],self.a_in[i],self.pb_in[i])
        value = self.msed_out[i]
        testFailureMessage = "Test of function name: %s iteration: %i expected: %i != calculated: %i" % ("msed",i,value,fun)
        self.assertEqual(round(fun,3),round(self.msed_out[i],3),testFailureMessage)
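To see why the `return` mattered, here is the same shape stripped down to a plain function (a hypothetical stand-in, no unittest machinery needed):

```python
checked = []

def check_all_with_return(cases):
    # Mirrors the buggy test: return fires on the very first iteration,
    # so the remaining cases are never examined.
    for i in cases:
        checked.append(i)
        return True

check_all_with_return([0, 1])
print(checked)  # [0]
```

Even with the `return` removed, `assertEqual` still aborts the test method on the first failing case, so if you want every case reported individually, give each one its own test method or use a parameterized-testing helper.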
I am trying to make a file delete program
Question: I get WinErrors at runtime like 'The system cannot find the file specified',
but I know that those files exist... The idea is to use recursion to descend
into every folder and then delete the files it contains, which greatly
decreases the time spent deleting them. My friend made this in Java, and it
managed to delete 3 GB in 11 seconds. I wanted to use the same idea with
Python, and this is the result.
import os, sys, glob, fileinput, string
from os import *

def fileInput():
    #asks for input of a file path
    Folder = input("Please input a file path: ")
    filePathLength = len(Folder)
    #checks to make sure input was provided
    if filePathLength == 0:
        print("Please provide a folder...")
        fileInput()
    else:
        #checks to make sure that it is a proper path, ie- that is has ":\\"
        if Folder.find(":\\") == -1:
            print("Make sure the path is valid")
            fileInput()
        else:
            #if the path is a directory it calls the delete folder function
            print("Inputted path: " + Folder)
            if os.path.isdir(Folder):
                deleteFolder(Folder)
            else:
                print("Path does not exist...")
                fileInput()

def deleteFolder(pathDir):
    print(str(pathDir))
    try:
        for folder in os.listdir(pathDir):
            if folder.find(".") == -1:
                deleteFolder(pathDir + "\\" + folder)
    except NotADirectoryError as notADirectory:
        print(str(notADirectory))
    try:
        for folder in os.listdir(pathDir):
            if folder.find(".") != -1:
                os.remove(folder)
                print("deleted file " + str(folder))
    except IOError as errorCheck:
        print(str(errorCheck))

fileInput()
Any ideas will be much appreciated. I am using Python 3.3 on Windows 7 64-bit
Answer: `os.listdir()` returns relative paths. Use full path
`os.remove(os.path.join(pathDir, folder))`.
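A minimal demonstration of the problem and the fix, using a temporary directory:

```python
import os
import tempfile

# os.listdir() returns bare names relative to the listed directory.
d = tempfile.mkdtemp()
open(os.path.join(d, 'victim.txt'), 'w').close()

name = os.listdir(d)[0]
print(name)  # victim.txt  (no directory part attached)

# os.remove(name) would look in the current working directory and fail;
# joining the directory back on targets the right file.
os.remove(os.path.join(d, name))
remaining = os.listdir(d)
print(remaining)  # []
os.rmdir(d)
```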
Using gevent with pyramid
Question: I'm building a website using pyramid, and I want to fetch some data from other
websites. Because there may be 50+ calls of `urlopen`, I wanted to use gevent
to speed things up.
Here's what I've got so far using gevent:
import urllib2

from gevent import monkey; monkey.patch_all()
from gevent import pool

gpool = gevent.pool.Pool()

def load_page(url):
    response = urllib2.urlopen(url)
    html = response.read()
    response.close()
    return html

def load_pages(urls):
    return gpool.map(load_page, urls)
Running `pserve development.ini --reload` gives:
`NotImplementedError: gevent is only usable from a single thread`.
I've read that I need to monkey patch before anything else, but I'm not sure
where the right place is for that. Also, is this a pserve-specific issue? Will
I need to re-solve this problem when I move to
[mod_wsgi](http://docs.pylonsproject.org/projects/pyramid/en/latest/tutorials/modwsgi/index.html)?
Or is there a way to handle this use-case (just urlopen) without gevent? I've
seen suggestions for [requests](http://docs.python-requests.org/en/latest/)
but I couldn't find an example of fetching multiple pages in the docs.
### Update 1:
I also tried eventlet from [this SO
question](http://stackoverflow.com/a/2361129/312364) (almost directly copied
from this eventlet
[example](http://eventlet.net/doc/design_patterns.html#client-pattern)):
import eventlet
from eventlet.green import urllib2

def fetch(url):
    return urllib2.urlopen(url).read()

def fetch_multiple(urls):
    pool = eventlet.GreenPool()
    return pool.imap(fetch, urls)
However when I call `fetch_multiple`, I'm getting `TypeError: request() got an
unexpected keyword argument 'return_response'`
### Update 2:
The `TypeError` from the previous update was likely from earlier attempts to
monkeypatch with gevent and not properly restarting pserve. Once I restarted
everything, it works properly. Lesson learned.
Answer: There are multiple ways to do what you want:
* Create a dedicated `gevent` thread, and explicitly dispatch all of your URL-opening jobs to that thread, which will then do the gevented `urlopen` requests.
* Use threads instead of greenlets. Running 50 threads isn't going to tax any modern OS.
* Use a thread pool and a queue. There's usually not much advantage to doing 50 downloads at the same time instead of, say, 8 at a time (as your browser probably does).
* Use a different async framework instead of `gevent`, one that doesn't work by magically greenletifying your code.
* Use a library that has its own non-magic async support, like `pycurl`.
* Instead of mixing and matching incompatible frameworks, build the server around `gevent` too, or find some other framework that works for both your web-serving and your web-client needs.
You could simulate the last one without changing frameworks by loading
`gevent` first, and have it monkeypatch your threads, forcing your existing
threaded server framework to become a `gevent` server. But this may not work,
or mostly work but occasionally fail, or work but be much slower… Really,
using a framework designed to be `gevent`-friendly (or at least greenlet-
friendly) is a much better idea, if that's the way you want to go.
You mentioned that others had recommended `requests`. The reason you can't
find the documentation is that the built-in async code in `requests` was
removed. See, [an older version](http://docs.python-
requests.org/en/v0.10.6/user/advanced/#asynchronous-requests) for how it was
used. It's now available as a separate library,
[`grequests`](https://github.com/kennethreitz/grequests). However, it works by
implicitly wrapping `requests` with `gevent`, so it will have exactly the same
issues as doing so yourself.
(There are other reasons to use `requests` instead of `urllib2`, and if you
want to use `gevent` with it, it's easier to use `grequests` than to do it yourself.)
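Of those options, the thread pool and queue is probably the quickest to try without changing frameworks. A minimal sketch using `multiprocessing.dummy` (a thread-backed `Pool` from the stdlib); the `fetch` body here is a stub standing in for the real `urllib2.urlopen(url).read()` call so the example is self-contained:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, despite the name

def fetch(url):
    # stand-in for urllib2.urlopen(url).read() so the sketch is self-contained
    return "<html for %s>" % url

def fetch_multiple(urls, workers=8):
    # ~8 concurrent downloads is usually plenty; tune to taste
    pool = Pool(workers)
    try:
        return pool.map(fetch, urls)  # blocks; results come back in order
    finally:
        pool.close()
        pool.join()

pages = fetch_multiple(["http://example.com/page%d" % i for i in range(50)])
```

`pool.map` blocks until all pages are in and returns the results in the same order as the input URLs, which is usually what a request handler wants.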
|
How to parse XML in Python and LXML?
Question: Here's my project: I'm graphing weather data from WeatherBug using RRDTool. I
need a simple, efficient way to download the weather data from WeatherBug. I
was using a terribly inefficient bash-script-scraper but moved on to
BeautifulSoup. The performance is just too slow (it's running on a Raspberry
Pi) so I need to use LXML.
What I have so far:
from lxml import etree
doc=etree.parse('weather.xml')
print doc.xpath("//aws:weather/aws:ob/aws:temp")
But I get an error message. Weather.xml is this:
<?xml version="1.0" encoding="UTF-8"?>
<aws:weather xmlns:aws="http://www.aws.com/aws">
<aws:api version="2.0"/>
<aws:WebURL>http://weather.weatherbug.com/PA/Tunkhannock-weather.html?ZCode=Z5546&amp;Units=0&amp;stat=TNKCN</aws:WebURL>
<aws:InputLocationURL>http://weather.weatherbug.com/PA/Tunkhannock-weather.html?ZCode=Z5546&amp;Units=0</aws:InputLocationURL>
<aws:ob>
<aws:ob-date>
<aws:year number="2013"/>
<aws:month number="1" text="January" abbrv="Jan"/>
<aws:day number="11" text="Friday" abbrv="Fri"/>
<aws:hour number="10" hour-24="22"/>
<aws:minute number="26"/>
<aws:second number="00"/>
<aws:am-pm abbrv="PM"/>
<aws:time-zone offset="-5" text="Eastern Standard Time (USA)" abbrv="EST"/>
</aws:ob-date>
<aws:requested-station-id/>
<aws:station-id>TNKCN</aws:station-id>
<aws:station>Tunkhannock HS</aws:station>
<aws:city-state zipcode="18657">Tunkhannock, PA</aws:city-state>
<aws:country>USA</aws:country>
<aws:latitude>41.5663871765137</aws:latitude>
<aws:longitude>-75.9794464111328</aws:longitude>
<aws:site-url>http://www.tasd.net/highschool/index.cfm</aws:site-url>
<aws:aux-temp units="&deg;F">-100</aws:aux-temp>
<aws:aux-temp-rate units="&deg;F">0</aws:aux-temp-rate>
<aws:current-condition icon="http://deskwx.weatherbug.com/images/Forecast/icons/cond013.gif">Cloudy</aws:current-condition>
<aws:dew-point units="&deg;F">40</aws:dew-point>
<aws:elevation units="ft">886</aws:elevation>
<aws:feels-like units="&deg;F">41</aws:feels-like>
<aws:gust-time>
<aws:year number="2013"/>
<aws:month number="1" text="January" abbrv="Jan"/>
<aws:day number="11" text="Friday" abbrv="Fri"/>
<aws:hour number="12" hour-24="12"/>
<aws:minute number="18"/>
<aws:second number="00"/>
<aws:am-pm abbrv="PM"/>
<aws:time-zone offset="-5" text="Eastern Standard Time (USA)" abbrv="EST"/>
</aws:gust-time>
<aws:gust-direction>NNW</aws:gust-direction>
<aws:gust-direction-degrees>323</aws:gust-direction-degrees>
<aws:gust-speed units="mph">17</aws:gust-speed>
<aws:humidity units="%">98</aws:humidity>
<aws:humidity-high units="%">100</aws:humidity-high>
<aws:humidity-low units="%">61</aws:humidity-low>
<aws:humidity-rate>3</aws:humidity-rate>
<aws:indoor-temp units="&deg;F">77</aws:indoor-temp>
<aws:indoor-temp-rate units="&deg;F">-1.1</aws:indoor-temp-rate>
<aws:light>0</aws:light>
<aws:light-rate>0</aws:light-rate>
<aws:moon-phase moon-phase-img="http://api.wxbug.net/images/moonphase/mphase01.gif">0</aws:moon-phase>
<aws:pressure units="&quot;">30.09</aws:pressure>
<aws:pressure-high units="&quot;">30.5</aws:pressure-high>
<aws:pressure-low units="&quot;">30.08</aws:pressure-low>
<aws:pressure-rate units="&quot;/h">-0.01</aws:pressure-rate>
<aws:rain-month units="&quot;">0.11</aws:rain-month>
<aws:rain-rate units="&quot;/h">0</aws:rain-rate>
<aws:rain-rate-max units="&quot;/h">0.12</aws:rain-rate-max>
<aws:rain-today units="&quot;">0.09</aws:rain-today>
<aws:rain-year units="&quot;">0.11</aws:rain-year>
<aws:temp units="&deg;F">41</aws:temp>
<aws:temp-high units="&deg;F">42</aws:temp-high>
<aws:temp-low units="&deg;F">29</aws:temp-low>
<aws:temp-rate units="&deg;F/h">-0.9</aws:temp-rate>
<aws:sunrise>
<aws:year number="2013"/>
<aws:month number="1" text="January" abbrv="Jan"/>
<aws:day number="11" text="Friday" abbrv="Fri"/>
<aws:hour number="7" hour-24="07"/>
<aws:minute number="29"/>
<aws:second number="53"/>
<aws:am-pm abbrv="AM"/>
<aws:time-zone offset="-5" text="Eastern Standard Time (USA)" abbrv="EST"/>
</aws:sunrise>
<aws:sunset>
<aws:year number="2013"/>
<aws:month number="1" text="January" abbrv="Jan"/>
<aws:day number="11" text="Friday" abbrv="Fri"/>
<aws:hour number="4" hour-24="16"/>
<aws:minute number="54"/>
<aws:second number="19"/>
<aws:am-pm abbrv="PM"/>
<aws:time-zone offset="-5" text="Eastern Standard Time (USA)" abbrv="EST"/>
</aws:sunset>
<aws:wet-bulb units="&deg;F">40.802</aws:wet-bulb>
<aws:wind-speed units="mph">3</aws:wind-speed>
<aws:wind-speed-avg units="mph">1</aws:wind-speed-avg>
<aws:wind-direction>S</aws:wind-direction>
<aws:wind-direction-degrees>163</aws:wind-direction-degrees>
<aws:wind-direction-avg>SE</aws:wind-direction-avg>
</aws:ob>
</aws:weather>
I used <http://www.xpathtester.com/test> to test my xpath and it worked there.
But I get the error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "lxml.etree.pyx", line 2043, in lxml.etree._ElementTree.xpath (src/lxml/lxml.etree.c:47570)
File "xpath.pxi", line 376, in lxml.etree.XPathDocumentEvaluator.__call__ (src/lxml/lxml.etree.c:118247)
File "xpath.pxi", line 239, in lxml.etree._XPathEvaluatorBase._handle_result (src/lxml/lxml.etree.c:116911)
File "xpath.pxi", line 224, in lxml.etree._XPathEvaluatorBase._raise_eval_error (src/lxml/lxml.etree.c:116728)
lxml.etree.XPathEvalError: Undefined namespace prefix
This is all _very_ new to me -- Python, XML, and LXML. All I want is the
observed time and the temperature.
Do my problems have anything to do with that aws: prefix in front of
everything? What does that even mean?
Any help you can offer is greatly appreciated!
Answer: The problem has all "to do with that aws: prefix in front of everything"; it
is a namespace prefix which you have to define. This is easily achievable, as
in:
print doc.xpath('//aws:weather/aws:ob/aws:temp',
namespaces={'aws': 'http://www.aws.com/aws'})[0].text
The need for this mapping between the namespace prefix to a value is
documented at <http://lxml.de/xpathxslt.html>.
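As an aside, the same prefix-to-URI mapping also works with the stdlib `xml.etree.ElementTree` (its `find`/`findall` accept an identical `namespaces` dict), which may be worth knowing on a Raspberry Pi. A self-contained sketch pulling the two values the question asks for, with the XML inlined and trimmed down:

```python
import xml.etree.ElementTree as ET

NSMAP = {'aws': 'http://www.aws.com/aws'}

# trimmed-down version of weather.xml, inlined so the sketch runs on its own
xml = """<aws:weather xmlns:aws="http://www.aws.com/aws">
  <aws:ob>
    <aws:ob-date>
      <aws:hour number="10" hour-24="22"/>
      <aws:minute number="26"/>
    </aws:ob-date>
    <aws:temp>41</aws:temp>
  </aws:ob>
</aws:weather>"""

root = ET.fromstring(xml)
temp = root.find('.//aws:ob/aws:temp', NSMAP).text
hour = root.find('.//aws:ob-date/aws:hour', NSMAP).get('hour-24')
minute = root.find('.//aws:ob-date/aws:minute', NSMAP).get('number')
```

With the full file you would use `ET.parse('weather.xml').getroot()` instead of `fromstring`.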
|
Python, first time using Decimal and quantize
Question: I was just wondering if anybody had any input on how to improve this code. My
goal is for it to be as pythonic as possible since I'm trying to really learn
python well. This program works fine, but if you see anything that you think
could be done to improve this program (not major changes, just basic "I'm new
to Python" stuff), please let me know.
#!/usr/bin/python
from decimal import *
print "Welcome to the checkout counter! How many items are you purchasing today?"
numOfItems = int(raw_input())
dictionary = {}
for counter in range(numOfItems):
print "Please enter the name of product", counter + 1
currentProduct = raw_input()
print "And how much does", currentProduct, "cost?"
currentPrice = float(raw_input())
dictionary.update({currentProduct:currentPrice})
print "Your order was:"
subtotal = 0
for key, value in dictionary.iteritems():
subtotal = subtotal + value
stringValue = str(value)
print key, "$" + stringValue
tax = subtotal * .09
total = subtotal + tax
total = Decimal(str(total)).quantize(Decimal('0.01'), rounding = ROUND_DOWN)
stringSubtotal = str(subtotal)
stringTotal = str(total)
print "Your subtotal comes to", "$" + stringSubtotal + ".", " With 9% sales tax, your total is $" + stringTotal + "."
print "Please enter cash amount:"
cash = Decimal(raw_input()).quantize(Decimal('0.01'))
change = cash - total
stringChange = str(change)
print "I owe you back", "$" + stringChange
print "Thank you for shopping with us!"
Answer: 1. Call the product dictionary "products" or some similarly descriptive name, instead of just "dictionary"
2. Generally, if you are iterating over a range, use `xrange` instead of `range` for better performance (though it's a very minor nitpick in an app like this)
3. You can use `subtotal = sum(dictionary.itervalues())` to quickly add up all the item prices, without having to use the loop.
4. You should definitely use Decimal throughout to avoid inaccuracies due to `float`.
5. You can use a formatting string like `'%.2f' % value` (old-style format) or `'{:.2f}'.format(value)` (new-style format) to print out values with two decimal places.
6. The tax value should be a constant, so it can be changed easily (it's used in two places, once for the calculation and once for the display).
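Pulling those suggestions together (product names are invented for the sketch; prices go straight into `Decimal` so no float ever enters the arithmetic):

```python
from decimal import Decimal, ROUND_DOWN

TAX_RATE = Decimal('0.09')  # suggestion 6: tax rate as a single constant

products = {'milk': Decimal('2.49'), 'bread': Decimal('1.99')}

subtotal = sum(products.values())  # suggestion 3: sum() instead of a loop
tax = subtotal * TAX_RATE
total = (subtotal + tax).quantize(Decimal('0.01'), rounding=ROUND_DOWN)

for name, price in products.items():
    print('%s $%.2f' % (name, price))  # suggestion 5: two decimal places
print('Subtotal: $%.2f  Total with tax: $%s' % (subtotal, total))
```

In the interactive version, the `raw_input()` strings would also go straight into `Decimal(...)` rather than through `float()`.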
|
Django REST framework: help on object level permission
Question: Following this tutorial:
<http://django-rest-framework.org/tutorial/1-serialization.html>
through <http://django-rest-framework.org/tutorial/4-authentication-and-
permissions.html>
I have this code:
# models.py
class Message(BaseDate):
"""
Private Message Model
Handles private messages between users
"""
status = models.SmallIntegerField(_('status'), choices=choicify(MESSAGE_STATUS))
from_user = models.ForeignKey(User, verbose_name=_('from'), related_name='messages_sent')
to_user = models.ForeignKey(User, verbose_name=_('to'), related_name='messages_received')
text = models.TextField(_('text'))
viewed_on = models.DateTimeField(_('viewed on'), blank=True, null=True)
# serialisers.py
class MessageSerializer(serializers.ModelSerializer):
from_user = serializers.Field(source='from_user.username')
to_user = serializers.Field(source='to_user.username')
class Meta:
model = Message
fields = ('id', 'status', 'from_user', 'to_user', 'text', 'viewed_on')
# views.py
from permissions import IsOwner
class MessageDetail(generics.RetrieveUpdateDestroyAPIView):
model = Message
serializer_class = MessageSerializer
authentication_classes = (TokenAuthentication, SessionAuthentication)
permission_classes = (permissions.IsAuthenticated, IsOwner)
# permissions.py
class IsOwner(permissions.BasePermission):
"""
Custom permission to only allow owners of an object to edit or delete it.
"""
def has_permission(self, request, view, obj=None):
# Write permissions are only allowed to the owner of the snippet
return obj.from_user == request.user
# urls.py
urlpatterns = patterns('',
url(r'^messages/(?P<pk>[0-9]+)/$', MessageDetail.as_view(), name='api_message_detail'),
)
Then opening the URL of the API i get this error:
**AttributeError at /api/v1/messages/1/
'NoneType' object has no attribute 'from_user'**
Traceback:
File "/var/www/sharigo/python/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/var/www/sharigo/python/lib/python2.6/site-packages/django/views/generic/base.py" in view
48. return self.dispatch(request, *args, **kwargs)
File "/var/www/sharigo/python/lib/python2.6/site-packages/django/views/decorators/csrf.py" in wrapped_view
77. return view_func(*args, **kwargs)
File "/var/www/sharigo/python/lib/python2.6/site-packages/rest_framework/views.py" in dispatch
363. response = self.handle_exception(exc)
File "/var/www/sharigo/python/lib/python2.6/site-packages/rest_framework/views.py" in dispatch
351. self.initial(request, *args, **kwargs)
File "/var/www/sharigo/python/lib/python2.6/site-packages/rest_framework/views.py" in initial
287. if not self.has_permission(request):
File "/var/www/sharigo/python/lib/python2.6/site-packages/rest_framework/views.py" in has_permission
254. if not permission.has_permission(request, self, obj):
File "/var/www/sharigo/sharigo/apps/sociable/permissions.py" in has_permission
17. return obj.from_user == request.user
Exception Type: AttributeError at /api/v1/messages/1/
Exception Value: 'NoneType' object has no attribute 'from_user'
It seems like None is being passed as the value of the parameter "obj" to
IsOwner.has_permission(). What am I doing wrong? I think I followed the
tutorial strictly.
Answer: When `has_permission()` is called with `obj=None` it's supposed to return
whether the user has permission to _any_ object of this type. So you should
handle the case when None is passed.
Your code should be something like:
def has_permission(self, request, view, obj=None):
# Write permissions are only allowed to the owner of the snippet
return obj is None or obj.from_user == request.user
|
Cannot import a python module that is definitely installed (mechanize)
Question: On-going woes with the python (2.7.3) installation on my Ubuntu 12.04 machine
and importing modules.
Here I am having an issue where I have definitely installed mechanize both on
my machine and in various virtual environments.
I have tried installing from pip, easy_install and via `python setup.py
install` from this repo: <https://github.com/abielr/mechanize>.
To no avail, each time, I enter my python interactive and I get:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mechanize
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named mechanize
>>>
Other computers I install this on do not have a problem (a mac, or a windows
machine at work, for instance, it's all good, installs and imports like
normal).
It's pretty much driving me crazy at this point, just want to get some work
done.
**UPDATE INFO (in response to comments)** :
Output of `easy_install mechanize` and paths:
<me>@<host>:~$ sudo easy_install mechanize
[sudo] password for <me>:
Processing mechanize
Writing /home/<me>/mechanize/setup.cfg
Running setup.py -q bdist_egg --dist-dir /home/<me>/mechanize/egg-dist-tmp-zXwJ_d
warning: no files found matching '*.html' under directory 'docs'
warning: no files found matching '*.css' under directory 'docs'
warning: no files found matching '*.js' under directory 'docs'
mechanize 0.2.6.dev-20130112 is already the active version in easy-install.pth
Installed /usr/local/lib/python2.7/dist-packages/mechanize-0.2.6.dev_20130112-py2.7.egg
Processing dependencies for mechanize==0.2.6.dev-20130112
Finished processing dependencies for mechanize==0.2.6.dev-20130112
<me>@<host>:~$ ^C
<me>@<host>:~$ which pip
/home/<me>/bin/pip
<me>@<host>:~$ which python
/home/<me>/bin/python
<me>@<host>:~$ which easy_install
/home/<me>/bin/easy_install
<me>@<host>:~$
**SECOND UPDATE:** It seems to be something specific to mechanize; if I add any
other random package via pip, there is no problem (in this instance `html5lib`).
**THIRD UPDATE (@DSM)**
1)
>>> sys.path
['', '/home/<me>/local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg', '/home/<me>/local/lib/python2.7/site-packages/virtualenvwrapper-2.11-py2.7.egg', '/home/<me>/src/geopy', '/home/<me>/local/lib/python2.7/site-packages/BeautifulSoup-3.2.0-py2.7.egg', '/home/<me>/local/lib/python2.7/site-packages/django_sorting-0.1-py2.7.egg' ... <so on and so forth but mechanize is not here>]
>>>
2) *pretty long output of which most looks like:*
<me>@<host>:~$ ls -laR /usr/local/lib/python2.7/dist-packages/mech*
/usr/local/lib/python2.7/dist-packages/mechanize:
total 1144
...lots of other files, pretty much same permissions...
-rw-r--r-- 1 root staff 24916 Jan 11 01:19 _mechanize.py
...lots of other files...
3)
>>> import imp
>>> imp.find_module("mechanize")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named mechanize
>>>
**FOURTH EDIT** (this is getting ridiculous :/): This is similar to a problem
I've had before ([Complete removal and fresh install of python on Ubuntu
12.04](http://stackoverflow.com/questions/12449161/complete-removal-and-fresh-
install-of-python-on-ubuntu-12-04?rq=1)), if I run everything with sudo, it's
fine, but I don't know if I should have to do that...what's up with the
permissions?
Answer: In my case it was a permission problem: the package had somehow been
installed with read/write permission for root only, so other users simply could
not read (and therefore import) it.
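If the same diagnosis applies here, one way to confirm and fix it (the dist-packages path is taken from the `ls` output in the question; adjust to yours):

```shell
# Can a non-root user read the installed files?
ls -l /usr/local/lib/python2.7/dist-packages/mechanize

# If not, make files world-readable and directories traversable:
sudo chmod -R a+rX /usr/local/lib/python2.7/dist-packages/mechanize*
```

The capital `X` sets the execute bit only on directories (and files that are already executable), which is what lets other users descend into the package.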
|
Create a set of objects when their type is set by ini file (Python)
Question: I'm working on a smart home project. I've got a bunch of pieces, such as a
handful of XBee readios, leds, GPS-synched clocks, water counters etc. I tried
to use OOP approach, so I created many classes and subclasses. Now all you
have to do in code is to define hardware, connect it by class-built-in
function to a parent and enjoy. To get an idea:
coordinator = XBee24ZBCoordinator('/dev/ttyS1', 115200,
"\x00\x13\xA2\x00\x40\x53\x56\x23", 'coord')
spalnya = XBee24ZBRemote('\x00\x13\xA2\x00\x40\x54\x1D\x12', 'spalnya')
spalnya.connectToCoordinator(coordinator)
vannaya = XBee24ZBRemote('\x00\x13\xA2\x00\x40\x54\x1D\x17', 'vannaya')
vannaya.connectToCoordinator(coordinator)
led = LED()
led.connectTo(spalnya.getPin('DO4'), 'DO')
led.on()
led.off()
I, however, don't want to do that in code. I want to have an ini file that
will define the topology of this 'network'. Thus I want this file to be
readable and editable by a human. The logical choice is ini (as opposed to
e.g. JSON, which is not very friendly for manual editing, at least to me).
Now, I have:
[xbee-coordinator]
type = XBee24ZBCoordinator
name = coord
comport = COM4
comspeed = 115200
I can create a function BuildNetwork('my.ini'), that will read and create the
required object instances and connections between them. How do I do it?
There's a class XBee24ZBCoordinator, but what I get from the ini is just a
string...
Answer: You have two options:
* Define all these classes in a module. Modules are just objects, so you can use `getattr()` on them:
import devices  # the module where your hardware classes are defined
instance = getattr(devices, typename)(arguments)
* Store them all in a dictionary and look them up by name; you don't have to type out the name in a string, the class has a `__name__` attribute you can re-use:
types = {}
class XBee24ZBCoordinator():
# class definition
types[XBee24ZBCoordinator.__name__] = XBee24ZBCoordinator
If these are defined in the 'current' module, the `globals()` function returns
a dictionary too, so `globals()['XBee24ZBCoordinator']` is a reference to the
class definition as well.
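A sketch of `BuildNetwork` built on that lookup idea. The section layout comes from the ini in the question; `XBee24ZBCoordinator` is stubbed out here so the example runs on its own (on Python 2 the module is named `ConfigParser` and lacks `read_string`):

```python
import configparser  # "ConfigParser" on Python 2

class XBee24ZBCoordinator(object):
    """Stub standing in for the real hardware class."""
    def __init__(self, comport, comspeed, name):
        self.comport, self.comspeed, self.name = comport, comspeed, name

# name -> class registry; register each hardware class once
TYPES = {cls.__name__: cls for cls in (XBee24ZBCoordinator,)}

def build_network(parser):
    devices = {}
    for section in parser.sections():
        cls = TYPES[parser.get(section, 'type')]  # ini string -> class object
        devices[section] = cls(parser.get(section, 'comport'),
                               parser.getint(section, 'comspeed'),
                               parser.get(section, 'name'))
    return devices

parser = configparser.ConfigParser()
parser.read_string(u"""
[xbee-coordinator]
type = XBee24ZBCoordinator
name = coord
comport = COM4
comspeed = 115200
""")
network = build_network(parser)
```

The real version would call `parser.read('my.ini')` and would need per-type constructor handling (e.g. remotes take an address and a parent rather than a com port), plus a second pass to wire up `connectToCoordinator` relationships.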
|
python scipy leastsq fit with complex numbers
Question: I have a data set of complex numbers, and I'd like to be able to find
parameters that best fit the data. Can you fit data in complex numbers using
leastsq as implemented by scipy in python?
For example, my code is something like this:
import cmath
import numpy as np
from scipy.optimize import leastsq
def residuals(p,y,x):
L,Rs,R1,C=p
denominator=1+(x**2)*(C**2)*(R1**2)
sim=complex(Rs+R1/denominator,x*L-(R1**2)*x*C/denominator)
return(y-sim)
z=<read in data, store as complex number>
x0=np.array([1, 2, 3, 4])
res = leastsq(residuals,x0, args=(z,x))
However, `residuals` doesn't like working with my complex number, I get the
error:
File "/tmp/tmp8_rHYR/___code___.py", line 63, in residuals
sim=complex(Rs+R1/denominator,x*L-(R1**_sage_const_2 )*x*C/denominator)
File "expression.pyx", line 1071, in sage.symbolic.expression.Expression.__complex__ (sage/symbolic/expression.cpp:7112)
TypeError: unable to simplify to complex approximation
I'm guessing that I need to work only with floats/doubles rather than complex
numbers. In that case, how can I evaluate the real and complex parts
separately and then lump them back together into a single error metric for
`residuals` to return?
Answer: The least squares function in scipy wants a real residual returned because it
is difficult to compare complex values (e.g. is 1+2j greater or less than
2+1j?). Remember the residual is essentially a measure of the quality of the
set of parameters passed in, it tells leastsq how close to the true fit it is.
What you can do is add the error (y-sim) in quadrature, appending these lines
after you calculate 'sim' in your residuals function:
a = y-sim
return a.real**2 + a.imag**2
So long as y and sim are both np.array's of complex's then this will work and
is relatively efficient.
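Applied to the question's residuals function it might look like this (numpy-vectorised, so `sim` is built element-wise with `1j` instead of `complex()`, which also avoids the original error; the model formula itself is copied from the question):

```python
import numpy as np

def residuals(p, y, x):
    L, Rs, R1, C = p
    denominator = 1 + (x**2) * (C**2) * (R1**2)
    # complex model value, built element-wise so x can be an array
    sim = (Rs + R1/denominator) + 1j * (x*L - (R1**2)*x*C/denominator)
    a = y - sim
    return a.real**2 + a.imag**2  # real-valued, as leastsq requires

# res = leastsq(residuals, x0, args=(z, x))  # the call itself is unchanged
```

With `y` and `x` as numpy arrays, the returned residual is a real array of the same length, which `leastsq` accepts directly.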
|
How to refresh the multi-line output dynamically
Question: I want to refresh some info dynamically (just like a progress bar). I can do it
with the following code:
#! /usr/bin/env python
import sys
import time
print "start the output"
def loop():
i = 0
while 1:
i += 1
output = "\rFirst_line%s..." % str(i)
sys.stdout.write(output)
sys.stdout.flush()
time.sleep(1)
loop()
This only refreshes single-line info dynamically, though. When I add '\n' to
the output, it doesn't work as expected:
output = "\rFirst_line%s...\n" % str(i)
Is there any way to refresh multi-line content?
Answer: You could do it with [curses](http://docs.python.org/2/library/curses.html),
but it's nontrivial.
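If full curses feels like overkill, a lighter-weight alternative (my addition, not part of curses) is to drive the cursor with ANSI escape sequences, which most terminals understand: move the cursor back up over the previously printed block, then clear and rewrite each line.

```python
import sys
import time

def refresh(lines, first=False):
    if not first:
        sys.stdout.write("\x1b[%dA" % len(lines))  # cursor up N rows
    for line in lines:
        sys.stdout.write("\x1b[2K" + line + "\n")  # clear the row, rewrite it
    sys.stdout.flush()

refresh(["First_line0...", "Second_line0..."], first=True)
for i in range(1, 4):
    time.sleep(1)
    refresh(["First_line%d..." % i, "Second_line%d..." % i])
```

curses buys you portability (including odd terminals) plus input handling and windows; for a simple status display the escape-code approach is often enough.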
|
Implementing this algorithm on Lists - Python
Question: Edit: Seems I made a mistake on my previous description and example so here it
is fixed.
# Latest Version
interactions = [ ['O1', 'O3'],
['O2', 'O5'],
['O8', 'O10'],
['P3', 'P5'],
['P2', 'P19'],
['P1', 'P6'] ]
So same as before, each entry is an interaction between two parts of an
object. For example, think of O and P as organisms, and O1, O8, P4, P6 ... as
sub-sections of the organisms. So each interaction is between sub-sections in
the same organism, and in this list there are many organisms.
Now, the similar list:
similar = [ ['O1', 'P23'],
            ['O3', 'P50'],
            ['P2', 'O40'],
            ['P19', 'O22'] ]
So **O1** is similar to **P23** and **O3** is similar to **P50**, AND ['O1', 'O3']
interact, thus the interaction ['P23', 'P50'] is a transformed interaction.
Likewise, **P2** is similar to **O40** and **P19** is similar to **O22**, AND
['P2', 'P19'] interact, thus the interaction ['O40', 'O22'] is a transformed
interaction.
The transformed interactions will always be from the same organism, eg: [PX,
PX] or [OX, OX].
# Older Version
Let's say I have the following list:
interactions = [ [O1, O3],
[O1, O8],
[O4, O6],
[O9, O2],
[ ... ] ]
what this list is meant to represent is an interaction between two objects, so
`O1 and O3` interact, etc.
Now, let's say I have a second list:
similar = [ [O1, H33],
[O6, O9],
[O4, H1],
[O2, H12],
[ ... ] ]
and what this list is meant to represent is objects that are similar.
If we know objects A and B in the list `interactions` do indeed have an
intereaction, AND we know that we have an object A' that is similar to A, and
an object B' which is similar to B, then we can map the interaction from A to
B to the objects A' to B'.
For example: `O9 and O2 interact.` `O6 is similar to O9.` `H12 is similar to
O2.` `thus [O6, H12] interact.`
Note: `interactions = [ [O1, O3] ]` is the same as `[O3, O1]`, although it
would only be stored in the list `interactions` once, in either format. The
same applies to the list `similar`.
So I guess the algorithm for this would be:
1. for every unique object A in fields [0] and [1] in list `similar`,
2. fetch a list B of interactions from the list `interactions`.
3. check for entries in `similar` where A is similar to some object A', and B is similar to some object B'.
4. map the interaction between A' and B'.
Edit: Code for this version.
# Code
from collections import defaultdict
interactions = [ ['O1', 'O3'],
['O1', 'O8'],
['O4', 'O6'],
['O9', 'O2'] ]
similar = [ ['O1', 'H33'],
['O6', 'O9'],
['O4', 'H1'],
['O2', 'H12'] ]
def list_of_lists_to_dict(list_of_lists):
d = defaultdict(list)
for sublist in list_of_lists:
d[sublist[0]].append(sublist[1])
d[sublist[1]].append(sublist[0])
return d
interactions_dict = list_of_lists_to_dict(interactions)
similar_dict = list_of_lists_to_dict(similar)
for key, values in interactions_dict.items():
print "{0} interacts with: {1}".format(key, ', '.join(values))
if key in similar_dict:
print " {0} is similar to: {1}".format(key, ', '.join(similar_dict[key]))
forward = True
for value in values:
if value in similar_dict:
print " {0} is similar to: {1}".format(value, ', '.join(similar_dict[value]))
reverse = True
if forward and reverse:
print " thus [{0}, {1}] interact!".format(', '.join(similar_dict[key]),
', '.join(similar_dict[value]))
forward = reverse = False
Alright, that's all the background information.
So, I'm pretty new to python and I think I could implement this with a bunch
of nested for loops and conditions, however I was wondering if there is a more
elegant, pythonic way of going about this.
If you read all of this, thank you for your time! :)
Answer: This should do it
import random

_interactions = set([ (O1, O3),
                      (O1, O8),
                      (O4, O6),
                      (O9, O2),
                      ( ... ) ])
interactions = set()
for i,j in _interactions:
    if (i,j) not in interactions and (j,i) not in interactions:
        interactions.add((i,j))

_similar = set([ (O1, H33),
                 (O6, O9),
                 (O4, H1),
                 (O2, H12),
                 ( ... ) ])
similar = set()
for i,j in _similar:
    if (i,j) not in similar and (j,i) not in similar:
        similar.add((i,j))

answer = set()
for i,j in interactions:
    # assuming that everything is similar to at least one thing
    a = random.choice([x for x,y in similar if y==i] + [y for x,y in similar if x==i])
    b = random.choice([x for x,y in similar if y==j] + [y for x,y in similar if x==j])
    if (a,b) not in answer and (b,a) not in answer:
        answer.add((a,b))
|
Python. Generating a random + or - sign using the random command
Question: What is the easiest way to generate a random `+`,`-`,`*`, or `/` sign using
the import random function while assigning this to a letter.
E.G.
g = answeryougive('+','-')
Thanks in advance :)
Answer: You want
[`random.choice`](http://docs.python.org/2/library/random.html#random.choice)
random.choice(['+', '-'])
Or more concisely:
random.choice('+-')
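If the question's full set of four operators is wanted, and the chosen sign then needs to be applied to numbers, a dict mapping each sign to its function in the `operator` module pairs nicely with `random.choice`:

```python
import random
import operator

ops = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

g = random.choice('+-*/')  # the randomly picked sign, assigned to a letter
result = ops[g](6, 3)      # apply it to a pair of numbers
```

This keeps the sign as a printable character while still letting you evaluate it without `eval`.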
|
Syntax error when trying to assign objects to class in Python using Mac OS X Terminal
Question: I am learning python programming and I am just going through easy exercises.
One of them has me create a class as follows:
class MyFirstClass:
Pass
That is it. I save this and then when I try to import the file using python3.3
in a Mac Terminal and assign an object:
a = MyFirstClass()
I get a syntax error. Am I not running the program correctly? I have performed
this task in IDLE but it does not seem to work when I am using Python in the
terminal.
Answer: Python is case-sensitive. `Pass` should be `pass`.
|
Python ctypes: How to modify an existing char* array
Question: I'm working on a Python application that makes use of libupnp which is a C
library. I'm using CTypes to use the library which is easy enough. The problem
I'm having is when I'm registering a callback function for read requests. The
function has a prototype of the following form:
int read_callback(void *pFileHandle, char *pBuf, long nBufLength);
pFileHandle is just some file handle type. pBuf is a writable memory buffer.
This is where the data is output. nBufLength is the number of bytes to read
from the file. A status code is returned.
I have a Python function pointer for this. That was easy enough to make but
when I define a Python function to handle this callback I've found that pBuf
doesn't get written to because Python strings are immutable and when you
assign or modify them they create new instances. This poses a big problem
because the C library expects the char pointer back when the function finishes
with the requested file data. The buffer ends up being empty every time though
because of the way Python strings are. Is there some way around this without
modifying the C library?
The handler should modify the buffer parameter that is given which is my
problem.
So what I want to have happen is that the Python function gets called to
perform a read of some file (could be in memory, a file system handle, or
anything in between). The pBuf parameter is populated by a read of the stream
(again in Python). The callback then returns to the C code with pBuf written
to.
Answer: ctypes can allocate a buffer object that your C library should be able to
write to:
import ctypes
init_size = 256
pBuf = ctypes.create_string_buffer(init_size)
See:
<http://docs.python.org/2/library/ctypes.html#ctypes.create_string_buffer>
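To sketch the whole round trip (prototype names taken from the question; the callback type is my assumption about how the C library declares it, so check the real libupnp header): the C side hands your Python callback a `char*`, and `ctypes.memmove` is what writes your bytes into that C-owned buffer.

```python
import ctypes

# Assumed equivalent of:
#   int read_callback(void *pFileHandle, char *pBuf, long nBufLength)
READ_CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int,
                                 ctypes.c_void_p,
                                 ctypes.POINTER(ctypes.c_char),
                                 ctypes.c_long)

def read_callback(file_handle, buf, buf_length):
    data = b"requested file data"[:buf_length]  # whatever your read produces
    ctypes.memmove(buf, data, len(data))        # write into the C-owned buffer
    return len(data)                            # e.g. bytes read as the status

c_callback = READ_CALLBACK(read_callback)  # hand this object to the C library
```

Using `POINTER(c_char)` rather than `c_char_p` matters here: `c_char_p` arguments get converted to immutable Python bytes on the way in, which is exactly the dead end the question describes.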
|
Can I run a python file within another python file?
Question: > **Possible Duplicate:**
> [Run a python script from another python script, passing in
> args](http://stackoverflow.com/questions/3781851/run-a-python-script-from-
> another-python-script-passing-in-args)
Suppose I have script2.py that returns a int value.
Is it possible to run only script.py on my shell and within script.py gets in
some way the int value returned by script2.py according to parameters I passed
to script.py?
Answer: Yes it's possible. That's what modules are used for :).
First you have to put the method that you use in script2.py into a function.
I'll use the same example as the person above me has used (because it's a good
example :p).
def myFunction(a, b):
return a + b
Now on your script.py, you'll want to put:
import script2
at the top. This will only work if both files are in the same directory.
To use the function, you would type:
# Adding up two numbers
script2.myFunction(1, 2)
and that should return 3.
|
Testing Python C libraries - get build path
Question: When using setuptools/distutils to build C libraries in Python
$ python setup.py build
the `*.so/*.pyd` files are placed in `build/lib.win32-2.7` (or equivalent).
I'd like to test these files in my test suite, but I'd rather not hard code
the `build/lib*` path. Does anyone know how to pull this path from distutils
so I can `sys.path.append(build_path)` \- or is there an even better way to
get hold of these files? (without having installed them first)
Answer: You must get the platform that you are running on and the version of python
you are running on and then assemble the name yourself.
To get the current platform, use `sysconfig.get_platform()`. To get the python
version, use `sys.version_info` (specifically the first three elements of the
returned tuple). On my system (64-bit linux with python 2.7.2) I get:
>>> import sysconfig
>>> import sys
>>> sysconfig.get_platform()
linux-x86_64
>>> sys.version_info[:3]
(2, 7, 2)
The format of the lib directory is "lib.platform-versionmajor.versionminor"
(i.e. only 2.7, not 2.7.2). You can construct this string using python's
string formatting methods:
def distutils_dir_name(dname):
"""Returns the name of a distutils build directory"""
f = "{dirname}.{platform}-{version[0]}.{version[1]}"
return f.format(dirname=dname,
platform=sysconfig.get_platform(),
version=sys.version_info)
You can use this to generate the name of any of distutils build directory:
>>> import os
>>> os.path.join('build', distutils_dir_name('lib'))
build/lib.linux-x86_64-2.7
|
Finding the variables (read or write)
Question: I'd like to develop a small debugging tool for Python programs. In dynamic
slicing, how can I find the variables that are accessed in a statement, and the
type of access (read or write) for those variables (in Python)?

Write: a statement can change the program state.
Read: a statement can read the program state.

For example, in these 4 lines we have:

    (1) x = a+b    => write{x} & reads{a,b}
    (2) y = 6      => write{y} & reads{}
    (3) while(n>1) => write{}  & reads{n}
    (4) n = n-1    => write{n} & reads{n}
Answer: Not sure what your goal is. Perhaps
[`dis`](http://docs.python.org/2/library/dis.html) is what you're looking for?
>>> import dis
>>> dis.dis("x=a+b")
1 0 LOAD_NAME 0 (a)
3 LOAD_NAME 1 (b)
6 BINARY_ADD
7 STORE_NAME 2 (x)
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
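Building on that, here is a rough sketch (Python 3.4+, where `dis.get_instructions` is available) of classifying names in a single statement as reads or writes. It only inspects the `*_NAME`/`*_FAST`/`*_GLOBAL` opcodes, so attribute and subscript accesses are not covered:

```python
import dis

def reads_writes(stmt):
    """Classify plain names in a statement as read or written,
    by scanning the compiled bytecode (a rough sketch only)."""
    code = compile(stmt, '<string>', 'exec')
    reads, writes = set(), set()
    for instr in dis.get_instructions(code):
        if instr.opname in ('LOAD_NAME', 'LOAD_GLOBAL', 'LOAD_FAST'):
            reads.add(instr.argval)
        elif instr.opname in ('STORE_NAME', 'STORE_GLOBAL', 'STORE_FAST'):
            writes.add(instr.argval)
    return reads, writes

reads, writes = reads_writes("x = a + b")
# reads == {'a', 'b'}, writes == {'x'}
```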
|
Using python's pack with arrays
Question: I'm trying to use the pack function in the struct module to encode data into
formats required by a network protocol. I've run into a problem in that I
don't see any way to encode arrays of anything other than 8-bit characters.
For example, to encode "TEST", I can use format specifier "4s". But how do I
encode an array or list of 32-bit integers or other non-string types?
Here is a concrete example. Suppose I have a function `doEncode` which takes
an array of 32-bit values. The protocol requires a 32-bit length field,
followed by the array itself. Here is what I have been able to come up with so
far.
from array import *
from struct import *
def doEncode(arr):
bin=pack('>i'+len(arr)*'I',len(arr), ???)
arr=array('I',[1,2,3])
doEncode(arr)
The best I have been able to come up with is generating a format string dynamically from the length of the array. Is there some way of specifying that I have an array so I don't need to do this, like there is with a string (which e.g. would be `pack('>i'+str(len(arr))+'s', ...)`)?
Even with the above approach, I'm not sure how I would go about actually
passing the elements in the array in a similar dynamic way, i.e. I can't just
say `, arr[0], arr[1], ...` because I don't know ahead of time what the length
will be.
I suppose I could just pack each individual integer in the array in a loop,
and then join all the results together, but this seems like a hack. Is there
some better way to do this? The array and struct modules each seem to do their
own thing, but in this case what I'm trying to do is a combination of both,
which neither wants to do.
Answer:
data = pack('>i', len(arr)) + arr.tostring()
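One caveat: `arr.tostring()` emits the elements in *native* byte order, which on a little-endian machine will not match the big-endian `'>i'` length prefix. If the protocol mandates a byte order throughout, a sketch that packs everything consistently (and answers the "dynamic arguments" part of the question via `*arr` unpacking) is:

```python
from array import array
from struct import pack

def do_encode(arr):
    """Encode a big-endian 32-bit length prefix followed by the
    32-bit elements themselves, in a single pack() call."""
    # The format string is built once from the length; *arr unpacks
    # the elements as individual arguments, so no per-element loop
    # or manual concatenation of packed chunks is needed.
    return pack('>i%dI' % len(arr), len(arr), *arr)

data = do_encode(array('I', [1, 2, 3]))
# data is 16 bytes: the length 3, then 1, 2, 3, each big-endian
```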
|
Parsing dictionary-like URL parameters in Python
Question: I'm working on implementing server-side filtering to serve KendoUI's Grid
component, using Python.
The problem I'm facing is that the AJAX call that it generates by default
seems to be incompatible with both Flask's built-in URL parser and Python's
`urlparse` module.
Here's a contrived sample of the type of query string I'm having trouble with:
`a=b&c=d&foo[bar]=baz&foo[baz]=qis&foo[qis]=bar`
Here's the result I'm going for:
{
'a': 'b',
'c': 'd',
'foo': {
'bar': 'baz',
'baz': 'qis',
'qis': 'bar'
}
}
Unfortunately, here's the `request.args` you get from this, if passed to a
Flask endpoint:
{
'a': 'b',
'c': 'd',
'foo[bar]': 'baz',
'foo[baz]': 'qis',
'foo[qis]': 'bar'
}
Worse yet, in practice, the structure can be several layers deep. A basic call
where you're filtering the column `foo` to only rows where the value is equal
to `'bar'` will produce the following:
{
'filter[logic]': 'and',
'filter[filters][0][value]': 'bar',
'filter[filters][0][field]': 'foo',
'filter[filters][0][operator]': 'eq'
}
I checked the RFC, and it requires that the query string contain only "non-
hierarchical" data. While I believe it's referring to the object the URI
represents, there is no provision for this type of data structure in the
specification that I can find.
I began to write a function that would take a dictionary of params and return the nested construct they represented, but I soon realized that it was a nuanced problem, and that surely someone out there has had this trouble before.
Is anyone aware of either a module that will parse these parameters in the way
I'm wanting, or an elegant way to parse them that I've perhaps overlooked?
Answer: I just wrote a little function to do this:
from collections import defaultdict
import re
params = {
'a': 'b',
'c': 'd',
'foo[bar]': 'element1',
'foo[baz]': 'element2',
'foo[qis]': 'element3',
'foo[borfarglan][bofgl]': 'element4',
'foo[borfarglan][bafgl]': 'element5',
}
def split(string):
    # split on runs of brackets and drop any empty pieces
    return [part for part in re.split(r"[\[\]]+", string) if part]
def mr_parse(params):
results = {}
for key in params:
if '[' in key:
key_list = split(key)
d = results
for partial_key in key_list[:-1]:
if partial_key not in d:
d[partial_key] = dict()
d = d[partial_key]
d[key_list[-1]] = params[key]
else:
results[key] = params[key]
return results
print mr_parse(params)
This should work to any nest level.
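Applied to the Kendo filter example from the question, the result looks like this. Note that a numeric index such as `0` stays a *string key*, not a list position, so a further pass is needed if real lists are wanted. The sketch below is a condensed Python 3 variant of the function above, included so the example is self-contained:

```python
import re

def parse_nested(params):
    """Condensed variant of mr_parse: build nested dicts from
    bracketed keys like 'filter[filters][0][field]'."""
    results = {}
    for key, value in params.items():
        parts = [p for p in re.split(r'[\[\]]+', key) if p]
        d = results
        for part in parts[:-1]:
            d = d.setdefault(part, {})
        d[parts[-1]] = value
    return results

parsed = parse_nested({
    'filter[logic]': 'and',
    'filter[filters][0][value]': 'bar',
    'filter[filters][0][field]': 'foo',
    'filter[filters][0][operator]': 'eq',
})
# parsed['filter']['filters']['0'] == {'value': 'bar', 'field': 'foo', 'operator': 'eq'}
```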
|
Filter a django model by comparing two foreign keys
Question: I need to create a filter in Django by comparing two foreign keys to each
other. The double-underscore syntax only works on the left hand side of the
equation. So whatever is on the right side throws an error:
match = UserProfile.objects.filter(
user__date_joined__gte = group__date
)
Django (or Python here) doesn't interpret `group__date` as a parseable variable name, and complains that it's not defined. I can switch the variables around, and then `user__date_joined` would be undefined. (The variable names here are just an example.)
What I'm trying to achieve would look like this in SQL:
SELECT * FROM profile p, user u, group g WHERE
p.u_id = u.id AND
u.group_id = g.id AND
u.date_joined >= g.date
Answer: You will have to use [F()
expressions](https://docs.djangoproject.com/en/dev/topics/db/queries/#django.db.models.F)
to do this
from django.db.models import F
match = UserProfile.objects.filter(user__date_joined__gte = F('group__date'))
|
django south: problems with path to django
Question: The problem causes this error when I try to use south:
$ python manage.py schemamigration
You must provide an app to create a migration for.
$ python manage.py schemamigration myapp --initial
OSError: [Errno 13] Permission denied: '../myapp/migrations'
$ sudo python manage.py schemamigration myapp --initial
ImportError: No module named django.core.management
$ python
>>> import south
>>> import django.core.management
>>> south.__file__
'/home/mydev/venv/lib/python2.7/site-packages/south/__init__.pyc'
>>> django.__file__
'/home/mydev/venv/lib/python2.7/site-packages/django/__init__.pyc'
It seems to me that `manage.py schemamigration` generates an error message
that appears to be returned by `schemamigration`. But `schemamigration` and
other south commands cannot find django once they are called.
`'/home/mydev/venv/lib/python2.7/site-packages/'` is on my sys.path. The `/south` folder is a symlink to the actual south package, which is in a `/dist-packages` folder. I did put a symlink in the actual `/south` folder back to the django package, but that didn't solve anything.
What could be wrong?
Answer: The problem is due to permissions and the use of virtualenv. You got the 'permission denied' error because your current user does not have write permissions for this project.

You can change the permissions for the entire project and make your current user the owner of all files and folders in it:

sudo chown -R <username>:<username> <project_folder>

When you tried running the migration using sudo, it could not find the django package because that package lives in the virtualenv, which had been activated by the normal user. These steps should solve this in case you don't want to change the permissions:
sudo -i
source /<virtualenv_path>/bin/activate
This should activate the virtualenv for sudo, and you'll then be able to access all packages in the virtualenv.

That said, I think you should go the permissions route.
|
Video Appears Corrupted on Upload via Python ftplib
Question: I'm attempting to use the **ftplib** library in Python to try and FTP some
videos to my website. I've got a script with the basics "working", but there's
a problem with it. When I try to play my video back using VLC player, it
appears that it's corrupt, as it only plays the first 15 seconds of video
(probably until some important keyframe in the video, but that's likely
besides the point) and then the audio only sometimes continues.
The strange thing about it is that if I perform an `ls -l` on both the local
directory and the remote directory, the filesizes are the same (even though
the Mac locally calculates the filesize as 75.6 MB while the cPanel File
Manager shows it as only 71.2 MB). To double-check, the MD5 checksums on both
ends also came out to be the same. But clearly, the data isn't entirely right
on the server side. Odds are, I'm probably not opening the .mov file correctly
in my Python script, as it took me a little while just to wrap my head around
file and file-like objects (meaning I probably still have it wrong).
This is my current upload script:
#!/usr/bin/env python
import os
from ftplib import FTP
# Log in to the FTP server
ftp = FTP('campusweatherservice.com')
ftp.login('myusername', 'mypassword')
# Change directories to the intended video source
os.chdir('/Users/cws/Desktop/CNET')
f = file('cnet.mov', 'rb') # Currently opening the file as a file object
ftp.storbinary('STOR CNET.mov', f) # We then try to store it this way
# Close our file object and then quit the FTP client
f.close()
ftp.quit()
Any thoughts or obvious mistakes on my end here? If you'd like to try the
video file for yourself, it's at
<http://www.campusweatherservice.com/CNET/CNET.mov>
Answer: So hopefully you can see from the comments on the original question that this
issue was a matter of testing, not the code.
Lessons learned here:
1. If your checksums match, make damn sure your files aren't, in fact, identical. That's 1000x more likely to be the case than some weird fringe case (as I originally figured it was)
2. If you provide an actionable item in your Stack Overflow post (like 'download my video to see for yourself'), you should probably test that out yourself.
So I guess this question is now at least good as a) a reference for uploading
files via FTP using Python and b) a good lesson on asking questions on Stack
Overflow!
|
awscli getting started error
Question: I already asked this on the official AWS forum on Jan 2 but got no reply, so I'm posting it here again in the hope of getting the error fixed.
* * *
I installed awscli as stated in this page
<http://docs.amazonwebservices.com/cli/latest/userguide/cli-chap-getting-
started.html>
And the following is the installation details:
millisami at millisami in ~/Code/face_tester on design ✹
± which python
/usr/local/bin/python
millisami at millisami in ~/Code/face_tester on design ✹
± python --version
Python 2.7.3
millisami at millisami in ~/Code/face_tester on design ✹
± pip install awscli --upgrade
Requirement already up-to-date: awscli in /usr/local/lib/python2.7/site-packages
Requirement already up-to-date: botocore>=0.4.0 in /usr/local/lib/python2.7/site-packages/botocore-0.4.1-py2.7.egg (from awscli)
Requirement already up-to-date: six>=1.1.0 in /usr/local/lib/python2.7/site-packages/six-1.2.0-py2.7.egg (from awscli)
Requirement already up-to-date: argparse>=1.1 in /usr/local/lib/python2.7/site-packages/argparse-1.2.1-py2.7.egg (from awscli)
Requirement already up-to-date: requests>=0.12.1,<1.0.0 in /usr/local/lib/python2.7/site-packages/requests-0.14.2-py2.7.egg (from botocore>=0.4.0->awscli)
Requirement already up-to-date: python-dateutil>=2.1 in /usr/local/lib/python2.7/site-packages/python_dateutil-2.1-py2.7.egg (from botocore>=0.4.0->awscli)
Cleaning up...
millisami at millisami in ~/Code/face_tester on design ✹
± aws help
Traceback (most recent call last):
File "/usr/local/share/python/aws", line 15, in <module>
import awscli.clidriver
File "/usr/local/lib/python2.7/site-packages/awscli/__init__.py", line 18, in <module>
import botocore.base
ImportError: No module named botocore.base
millisami at millisami in ~/Code/face_tester on design1 ↵ ✹
Since the pip install succeeds, why is _botocore_ being reported as missing?
Answer: Try this command
sudo pip install awscli --force-reinstall --upgrade
|
DLL load failed with PyQGIS
Question: I've already asked this question in the "gis.stackexchange" forum, but it
seems to be more appropriate for this one.
I am having problems with Python Shell importing the module "qgis.core". When
I type "import qgis.core" the Shell ("idle.pyw") gives me this error:
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import qgis.core
ImportError: DLL load failed: Impossibile trovare il modulo specificato ("Cannot find the specified module").
I have already set the environment to point the right folders following
"PyQGIS cookbook" instructions. In my case, the paths are:
PYTHONPATH=C:\"QGIS_path"\apps\qgis\python; Path=C:\"QGIS_path"\apps\qgis\bin.
Am I missing something? What are supposed to be the right libraries the Shell
is unable to find? I am working with windows 7 64 bit, QGIS Lisboa (version
1.8), Python 2.7.2 (the one that is included in the QGIS package).
Answer: I do not like to mix my python installations, so I add the paths on the fly.
The paths you need are below:
import sys
sys.path.extend([r"C:\Program Files\QGIS Valmiera\apps",r"C:\Program Files\QGIS Valmiera\apps\qgis\bin",r"C:\Program Files\QGIS Valmiera\apps\Python27"])
import qgis.core
|
Is there a way to access an overridden method within a multiple inheritance object in Python?
Question: I have the following Python code. How do I get `c.foo()` to return 2 without changing the class definitions? Or, stated differently: how do I access overridden methods in an object with multiple inheritance?
class A(object):
def foo(self):
return 1
class B(object):
def foo(self):
return 2
class C(A, B):
def __init__(self):
A.__init__(self)
B.__init__(self)
c = C()
c.foo() # Returns 1
Answer: Just after asking this question I got an answer elsewhere so here goes:
import types
c.foo = types.MethodType(B.foo, c)
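For completeness, the overridden implementation can also be called through the class directly, without patching the instance. A minimal sketch mirroring the classes above:

```python
class A(object):
    def foo(self):
        return 1

class B(object):
    def foo(self):
        return 2

class C(A, B):
    pass

c = C()
# c.foo() resolves to A.foo via the MRO (C, A, B, object), returning 1,
# but B's implementation is still reachable explicitly:
result = B.foo(c)
# result == 2
```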
|
python android getcontacts
Question: I read the Python API of Android and found a method called contactsGet(). I used the method as below.
import android
droid = adroid.Android()
cons = droid.contactsGet()
print cons
but I get no contact info. Could you please help me with this? Thanks very much.
Answer: as njzk2 said, try it like this:
import android
droid = android.Android()
cons = droid.contactsGet()
print cons
|
ImportError: No module named _backend_gdk
Question: I am starting to get some insight into interactive plotting with python and
matplotlib using pyGTK+. Therefore I took a look at the example given at the
matplotlib website:
<http://matplotlib.org/examples/user_interfaces/gtk_spreadsheet.html>
This is a short exerpt of the Code:
#!/usr/bin/env python
"""
Example of embedding matplotlib in an application and interacting with
a treeview to store data. Double click on an entry to update plot
data
"""
import pygtk
pygtk.require('2.0')
import gtk
from gtk import gdk
import matplotlib
matplotlib.use('GTKAgg') # or 'GTK'
from matplotlib.backends.backend_gtk import FigureCanvasGTK as FigureCanvas
from numpy.random import random
from matplotlib.figure import Figure
Once I try to run this script in the terminal I get the following error:
Traceback (most recent call last):
File "gtk_spreadsheet.py", line 15, in <module>
from matplotlib.backends.backend_gtk import FigureCanvasGTK as FigureCanvas
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_gtk.py", line 33, in <module>
from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_gdk.py", line 29, in <module>
from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array
ImportError: No module named _backend_gdk
I have python 2.7 , matplotlib 1.2.0 and pygtk 2.24 installed.
Can anyone figure out where the error is located? I think it might be connected to some linking issue.
Thanks a lot!
Answer: Note that the Debian/Ubuntu package you need is not 'pygtk2-devel' but
'python-gtk2-dev':
sudo apt-get install python-gtk2-dev
should fix this error on these platforms.
|
Line-buffering of stdout fails on MINGW/MSYS Python 2.7.3
Question: The problem is illustrated by this simple script:
import time, os, sys
sys.stdout = os.fdopen( sys.stdout.fileno(), 'w', 1 ) # line-buffer stdout
print 'before sleep'
time.sleep( 10 )
print 'after sleep'
If line-buffering is successful, then there will be a 10-sec gap between the
printing of the two lines. If not, both lines will appear virtually at the
same time after a 10-sec pause (once Python starts up); that is, the lines are
printed when the program exits.
On Linux, I see line-buffered behavior to both a file and to the screen if the
"sys.stdout" line is included. Without that line, I see line-buffered behavior
to the screen, but not to a file. This is expected.
In the MSYS/MINGW environment, if I omit the "sys.stdout" line, I see the same
behavior as Linux: line-buffering to the screen but not to a file.
What is weird is that _with_ the "sys.stdout" line, I don't see line-buffering
to _either_ the screen or a file. I expect to see it to both, as in Linux.
Can anyone suggest a workaround?
Here's a bit more information:
> uname -a
> MINGW32_NT-6.0 FOO 1.0.11(0.46/3/2) 2009-05-23 19:33 i686 Msys
Thanks, -W.
Answer: One of my colleagues knew the answer.
Line buffering is not supported on WIN32. If line buffering is specified, it
reverts to full buffering. Unbuffered output is available, and the workaround
is to use it on WIN32. I have tried it in my simple test program, and it
works.
Ref.: <http://msdn.microsoft.com/en-us/library/86cebhfs%28v=vs.71%29.aspx>
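Concretely, the workaround looks like the sketch below. Note that Python 3 additionally requires a *binary* stream for fully unbuffered output, and `os.dup` is used here so the original `sys.stdout` is left untouched; the sleep is shortened from 10 s for the illustration:

```python
import os
import sys
import time

# Line buffering (bufsize 1) silently degrades to full buffering on
# WIN32, but fully unbuffered output (bufsize 0) is honored.
out = os.fdopen(os.dup(sys.stdout.fileno()), 'wb', 0)

out.write(b'before sleep\n')
time.sleep(1)
out.write(b'after sleep\n')
```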
|
Define a route for url ending with integer in python
Question: Here is a portion of my python code:
@app.route("/<int:param>/")
def go_to(param):
return param
The above function routes a url such as `www.example.com/12` to this function.
How can I declare a parameter rule to redirect urls ending with integers, such
as `www.example.com/and/boy/12`, to this function?
I'm using the Flask framework.
Answer: You will need `Werkzeug routing`.
_Complete code:_
from flask import Flask
from werkzeug.routing import BaseConverter
app = Flask(__name__)
class RegexConverter(BaseConverter):
def __init__(self, url_map, *items):
super(RegexConverter, self).__init__(url_map)
self.regex = items[0]
app.url_map.converters['regex'] = RegexConverter
# To get all URLs ending with "/number"
@app.route("/<regex('.*\/([0-9]+)'):param>/")
def go_to_one(param):
return param.split("/")[-1]
# To get all URLs ending with a number
@app.route("/<regex('.*([0-9]+)'):param>/")
def go_to_number(param):  # renamed: duplicate view names raise an error in Flask
    return param.split("/")[-1]
# To get all URLs without a number
@app.route("/<regex('[^0-9]+'):param>/")
def go_to_two(param):
return param
@app.route('/')
def hello_world():
return 'Hello World!'
if __name__ == '__main__':
app.run()
|
is it possible to add syntax synonym to 'def' in python?
Question: I would like to use `let` instead of `def`. I am looking for a sane way of changing the syntax of my own code in this way.

Basically, in C it would be:

#define let def

How can I do the same in Python?
Answer: `def` is a keyword in python, so it can't be changed to anything else.
From the
[docs](http://docs.python.org/2/reference/lexical_analysis.html#keywords):
> The following identifiers are used as reserved words, or keywords of the
> language, and cannot be used as ordinary identifiers. They must be spelled
> exactly as written here:
and       del       from      not       while
as        elif      global    or        with
assert    else      if        pass      yield
break     except    import    print     class
exec      in        raise     continue  finally
is        return    def       for       lambda
try
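A quick check with the stdlib `keyword` module confirms this; using `def` on the right-hand side of an assignment is rejected by the parser itself, before any code runs:

```python
import keyword

# 'def' is a reserved word...
assert keyword.iskeyword('def')

# ...so 'let = def' is a SyntaxError at compile time:
try:
    compile('let = def', '<string>', 'exec')
    aliased = True
except SyntaxError:
    aliased = False
# aliased == False
```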
|
CSV to KML via Python
Question: I have a little problem; I always get the error:
> TypeError: cannot concatenate 'str' and 'list' objects
import csv
import os
fp = "filepath/testfile.csv"
file = open(fp)
lines =file.readlines()
for line in lines:
line = line.strip()
fields = line.split(',') #comma seperated
branch = fields[0].split() #splitting
lat = fields[1].split()
lng = fields[2].split()
web = fields[3].split()
email = fields[4].split()
adress = fields[5].split()
print ("branch: " + branch) #print splitted
print ("lat: " + lat)
print ("lng: " + lng)
print ("web :" + web)
print ("email: " + email)
print ("address: " + address)
f = open('filepath/csv2kml.kml', 'w')
fname = "testing_Actions"
#Writing the kml file.
f.write("<?xml version='1.0' encoding='UTF-8'?>\n")
f.write("<kml xmlns='http://earth.google.com/kml/2.1'>\n")
f.write("<Document>\n")
f.write(" <name>" + fname + '.kml' +"</name>\n")
for row in lines:
f.write(" <Placemark>\n")
f.write(" <ExtendedData>\n")
f.write(" <Data name=the branch name>\n")
f.write(" <value>\n")
f.write(" " + str(branch) + "\n")
f.write(" </value>\n")
f.write(" </Data>\n")
f.write(" </Data name=Web>\n")
f.write(" <value>\n")
f.write(" " + str(web) +"\n")
f.write(" </value>\n")
f.write(" </Data>\n")
f.write(" </Data name=email>\n")
f.write(" <value>\n")
f.write(" " + str(email) + "\n")
f.write(" </value>\n")
f.write(" </Data>\n")
f.write(" <description>" + str(address) + "</description>\n")
f.write(" <Point>\n")
f.write(" <coordinates>" + str(lat) + "," + str(lng) + "</coordinates>\n")
f.write(" </Point>\n")
f.write(" </Placemark>\n")
f.write("</Document>\n")
f.write("</kml>\n")
print ("File Created. ")
f.close
file.close()
I cannot find my error.
Answer: "_I cannot find my error_ "
You can. If you read the error message completely, it tells you also, in which
lines the error occurred.
Was it here?

print ("branch: " + branch) #print splitted

Here, and in the following lines, you try to concatenate a string such as `"branch: "` with a list (`branch`).
Try to replace it with:
print ("branch: {}".format(branch)) #print splitted
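The underlying cause is the `.split()` call on each field, which turns every field into a *list*; the `csv` module already splits each row into plain strings. A sketch of the fix, with a hypothetical sample row standing in for the real `testfile.csv`:

```python
import csv
import io

# Hypothetical sample data standing in for testfile.csv
sample = io.StringIO("main office,48.1,11.5,example.com,info@example.com,Some Street 1\n")

for fields in csv.reader(sample):
    # Each field is already a str here, so no per-field .split() is needed
    branch, lat, lng, web, email, address = fields
    print("branch: " + branch)  # str + str concatenation now works
    print("lat: " + lat)
```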
|
Setting up i18n with python GAE: importError No module named babel
Question: [GAE's webapp2_extra.i18n can't import babel on
MacOSX](http://stackoverflow.com/questions/10898491/gaes-
webapp2-extra-i18n-cant-import-babel-on-macosx) I followed the post above and
added babel and pytz libraries under the file path /lib.
But when I call `from webapp2_extras import i18n`, I still get `ImportError:
No module named babel`
Anyone know what's wrong?
Answer: Include those libraries in your project; ideally, make a symlink to them.
|
Why does Capistrano need modifications to use something like pythonbrew?
Question: As I understand, all that Capistrano does is ssh into the server and execute
the commands we want it to (mostly).
I've used rvm in some past couple of projects, and had to install the rvm-
capistrano gem. Otherwise, it failed to find the executables (or so I recall),
even though we had a proper .rvmrc file (with the correct ruby and the correct
gemset) in the repository.
Similarly, today I was setting up deployment for a project for which I'm using
pythonbrew (yeah I know I could use fabric, but that's not important for this
question), and a simple "cd #{deploy_to}/current && pythonbrew venv use myenv
&& gunicorn_django -c gunicorn.py" gave me an error message saying "cannot
find the executable gunicorn_django". This, I suppose is because the
virtualenv was not activated correctly. But didn't we activate the environment
when we did "pythonbrew venv use myenv"? The complete command works fine if I
ssh into the server and execute it on the shell, but it doesn't when I do it
via Capistrano.
My question is - why does Capistrano need modifications to play along with
programs like rvm and pythonbrew, even though all it's doing is executing a
couple of commands over ssh?
Answer: That's because SSH'ing in doesn't activate your shell's environment, so it's not picking up the source statements that enable the magic. Just do an `rvm use ...` before running commands instead of assuming the `cd` will pick that up automatically. It should be fine then. If you had been using Fabric, there is the env() context manager that you could use to make sure that's run before each command.
|
Python Pandas - Deleting multiple series from a data frame in one command
Question: In short ... I have a Python Pandas data frame that is read in from an Excel
file using 'read_table'. I would like to keep a handful of the series from the
data, and purge the rest. I know that I can just delete what I don't want one-
by-one using 'del data['SeriesName']', but what I'd rather do is specify what
to keep instead of specifying what to delete.
If the simplest answer is to copy the existing data frame into a new data
frame that only contains the series I want, and then delete the existing frame
in its entirety, I would satisfied with that solution ... but if that is
indeed the best way, can someone walk me through it?
TIA ... I'm a newb to Pandas. :)
Answer: You can use the `DataFrame` `drop` function to remove columns. You have to
pass the `axis=1` option for it to work on columns and not rows. Note that it
returns a copy so you have to assign the result to a new `DataFrame`:
In [1]: from pandas import *
In [2]: df = DataFrame(dict(x=[0,0,1,0,1], y=[1,0,1,1,0], z=[0,0,1,0,1]))
In [3]: df
Out[3]:
x y z
0 0 1 0
1 0 0 0
2 1 1 1
3 0 1 0
4 1 0 1
In [4]: df = df.drop(['x','y'], axis=1)
In [5]: df
Out[5]:
z
0 0
1 0
2 1
3 0
4 1
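Since the original goal was to specify what to *keep*, note that indexing the frame with a list of column names achieves the same result without enumerating the columns to delete (same toy frame as above):

```python
from pandas import DataFrame

df = DataFrame(dict(x=[0, 0, 1, 0, 1], y=[1, 0, 1, 1, 0], z=[0, 0, 1, 0, 1]))

# Select only the columns to keep; this returns a new DataFrame
kept = df[['z']]
```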
|
How to send a file using scp using python 3.2?
Question: I'm trying to send a group of files to a remote server through no-Ack's Python bindings for libssh2, but I am totally lost regarding the library's usage due to the lack of documentation.

I've tried using the C docs for libssh2, unsuccessfully.
Since I'm using python 3.2, paramiko and pexpect are out of the question.
Anyone can help?
EDIT: I just found some code in no-Ack's blog comments to his post.
import libssh2, socket, os
SERVER = 'someserver'
username = 'someuser'
password = 'secret!'
sourceFilePath = 'source/file/path'
destinationFilePath = 'dest/file/path'
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((SERVER, 22))
session = libssh2.Session()
session.startup(sock)
session.userauth_password(username, password)
sourceFile = open(sourceFilePath, 'rb')
channel = session.scp_send(destinationFilePath, 0o644, os.stat(sourceFilePath).st_size)
while True:
data = sourceFile.read(4096)
if not data:
break
channel.write(data)
exitStatus = channel.exit_status()
channel.close()
Seems to work fine.
Answer: And here's how to **get** files with libssh2 in Python 3.2. Major kudos to no-
Ack for showing me this. You'll need the Python3 bindings for libssh2
<https://github.com/wallunit/ssh4py>
import libssh2, socket, os
SERVER = 'someserver'
username = 'someuser'
password = 'secret!'
sourceFilePath = 'source/file/path'
destinationFilePath = 'dest/file/path'
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((SERVER, 22))
session = libssh2.Session()
session.startup(sock)
session.userauth_password(username, password)
(channel, (st_size, _, _, _)) = session.scp_recv(sourceFilePath, True)
destination = open(destinationFilePath, 'wb')
got = 0
while got < st_size:
data = channel.read(min(st_size - got, 1024))
got += len(data)
destination.write(data)
exitStatus = channel.get_exit_status()
channel.close()
|
Django DateTimeField() and timezone.now()
Question: OK, weird time zone issues when I'm running function tests (Django 1.4, Python 2.7). Are fractional seconds truncated by DateTimeField() on MySQL? That's the only theory I've got.
model file
from django.db import models
from django.utils import timezone
class Search(models.Model):
query = models.CharField(max_length=200, null=True)
query_date = models.DateTimeField(null=True)
test.py
from django.test import TestCase
from django.utils import timezone
from search.models import Search
class SearchModelTest(TestCase):
def test_creating_a_new_search_and_saving_it_to_the_database(self):
# start by creating a new Search object with its "query" set
search = Search()
search.query = "Test"
search.query_date = timezone.now()
# check we can save it to the database
search.save()
# now check we can find it in the database again
all_search_in_database = Search.objects.all()
self.assertEquals(len(all_search_in_database), 1)
only_search_in_database = all_search_in_database[0]
self.assertEquals(only_search_in_database, search)
# and check that it's saved its two attributes: query and query_date
self.assertEquals(only_search_in_database.query, "Test")
self.assertEquals(only_search_in_database.query_date, search.query_date)
The test fails with this:
self.assertEquals(only_search_in_database.query_date, search.query_date)
AssertionError: datetime.datetime(2013, 1, 16, 21, 12, 35, tzinfo=<UTC>) != datetime.datetime(2013, 1, 16, 21, 12, 35, 234108, tzinfo=<UTC>)
I think what's happening is that the fractional seconds (microseconds) are being truncated after saving to the database. Can that be right? I'm running MySQL v5.5. Is MySQL truncating the date?
Answer: Django ORM converts `DateTimeField` to `Timestamp` in mysql. You can confirm
that by looking at the raw sql doing `./manage.py sqlall <appname>`
In MySQL, `timestamp` does not store fractional seconds:

    The TIMESTAMP data type is used for values that contain both date and time parts. TIMESTAMP has a range of '1970-01-01 00:00:01' UTC to '2038-01-19 03:14:07' UTC.

This is a MySQL limitation that appears to be fixed in v5.6.4 (see [the bug report](http://bugs.mysql.com/bug.php?id=8523)), as noted in the 5.6.4 changelog:
MySQL now supports fractional seconds for TIME, DATETIME, and
TIMESTAMP values, with up to microsecond precision.
|
import error in celeryd task
Question: I am having problems running my celery task because it cannot find one of my
modules:
(ff)bash-3.2$ flipfinder_app/manage.py celeryd
[...]
Traceback (most recent call last):
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/billiard/process.py", line 248, in _bootstrap
self.run()
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/billiard/process.py", line 97, in run
self._target(*self._args, **self._kwargs)
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/billiard/pool.py", line 268, in worker
initializer(*initargs)
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/concurrency/processes/__init__.py", line 51, in process_initializer
app.loader.init_worker()
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/loaders/base.py", line 115, in init_worker
self.import_default_modules()
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/djcelery/loaders.py", line 136, in import_default_modules
super(DjangoLoader, self).import_default_modules()
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/loaders/base.py", line 110, in import_default_modules
self.builtin_modules]
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/loaders/base.py", line 96, in import_task_module
return self.import_from_cwd(module)
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/loaders/base.py", line 104, in import_from_cwd
package=package)
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/utils/imports.py", line 96, in import_from_cwd
return imp(module, package=package)
File "/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/celery/loaders/base.py", line 99, in import_module
return importlib.import_module(module, package=package)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app/apps/tabs/keywords/tasks.py", line 11, in <module>
from apps.util.adsense import has_adsense
ImportError: No module named adsense
It does exist:
(ff)bash-3.2$ pwd
/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app/apps/util
(ff)bash-3.2$ ls | grep adsense
adsense.py
And when I use the django shell, it imports fine.
(ff)bash-3.2$ ff_app/manage.py shell
Python 2.7.3 (default, Jan 9 2013, 09:25:40)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.65))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from apps.util.adsense import has_adsense
>>> has_adsense
<function has_adsense at 0x10d3171b8>
I added this to the task file:
import sys
print sys.path
and see this output when I try to run celery:
['/Users/jasonlfunk/Workspace/Work/csm/ff-app', '/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app/lib',
'/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app/apps', '/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app/lib',
'/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app/apps', '/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg',
'/Users/jasonlfunk/.virtualenvs/ff/src/pywhois', '/Users/jasonlfunk/.virtualenvs/ff/src/django-filter',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python27.zip', '/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/plat-darwin', '/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/plat-mac',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/plat-mac/lib-scriptpackages', '/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/lib-tk',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/lib-old', '/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/lib-dynload',
'/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
'/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
'/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
'/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages', '/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/PIL',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/newrelic-1.5.0.103',
'/Users/jasonlfunk/.virtualenvs/ff/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info']
You can see that `/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app` is in
the path which is where the import should be.
I also added this to the tasks file:
import os
os.chdir('/Users/jasonlfunk/Workspace/Work/csm/ff-app/ff_app')
And everything works fine.
-------------- [email protected] v3.0.11 (Chiastic Slide)
---- **** -----
--- * *** * -- [Configuration]
-- * - **** --- . broker: django://guest@localhost:5672//
- ** ---------- . app: default:0x10ddf5810 (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events: OFF (enable -E to monitor this worker)
- ** ----------
- *** --- * --- [Queues]
-- ******* ---- . celery: exchange:celery(direct) binding:celery
--- ***** -----
[2013-01-16 17:22:53,248: WARNING/MainProcess] [email protected] ready.
What's going on? Why does cd'ing into a directory that is already in my path
fix the problem? Any ideas?
Answer: Is `from apps.util.adsense import has_adsense` inside the task def?
|
Is there a recommended format for multi-line imports?
Question: I have read there are three ways for coding multi-line imports in python
With slashes:
from Tkinter import Tk, Frame, Button, Entry, Canvas, Text, \
LEFT, DISABLED, NORMAL, RIDGE, END
Duplicating statements:
from Tkinter import Tk, Frame, Button, Entry, Canvas, Text
from Tkinter import LEFT, DISABLED, NORMAL, RIDGE, END
With parenthesis:
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END)
Is there a recommended format or a more elegant way to write these statements?
Answer: Personally I go with parentheses when importing more than one component and
sort them alphabetically. Like so:
from Tkinter import (
Button,
Canvas,
DISABLED,
END,
Entry,
Frame,
LEFT,
NORMAL,
RIDGE,
Text,
Tk,
)
Overall though it's a personal preference and I would advise you to go with
whatever looks best to you.
|
Python: Pair alphabets after loop is completed
Question: I tried to generate paired alphabet strings with this
import string
a=string.uppercase
for i in range(0,30):
print a[i%26]*(i / 26+1)
This will print A-Z, and after Z it will print doubled strings like AA, BB.
But I need the sequence to continue like `AA AB AC AD AE` for as long as the
range is defined: first A-Z, then AA, AB, AC, ...
Answer: You can take advantage of the `itertools` module and use a generator to handle
this pretty cleanly:
from itertools import count, product, islice
from string import ascii_uppercase
def multiletters(seq):
for n in count(1):
for s in product(seq, repeat=n):
yield ''.join(s)
gives
>>> list(islice(multiletters('ABC'), 20))
['A', 'B', 'C', 'AA', 'AB', 'AC', 'BA', 'BB', 'BC', 'CA', 'CB', 'CC', 'AAA', 'AAB', 'AAC', 'ABA', 'ABB', 'ABC', 'ACA', 'ACB']
>>> list(islice(multiletters(ascii_uppercase), 30))
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', 'AA', 'AB', 'AC', 'AD']
and you can make an object and get them one by one, if you'd prefer:
>>> m = multiletters(ascii_uppercase)
>>> next(m)
'A'
>>> next(m)
'B'
>>> next(m)
'C'
[Update: I should note though that I pass data between Python and Excel all
the time -- am about to do so, actually -- and never need this function. But
if you have a specific question about the best way to exchange data, it's
probably better to ask a separate question than to edit this one now that
there are several answers to the current question.]
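If you'd rather avoid `itertools`, the same A, B, ..., Z, AA, AB, ... sequence can be produced with plain arithmetic — a sketch using bijective base-26 numbering (the `label` helper is my own, not from the answer):

```python
from string import ascii_uppercase

def label(n):
    # bijective base-26: 0 -> 'A', 25 -> 'Z', 26 -> 'AA', 27 -> 'AB', ...
    s = ''
    n += 1
    while n:
        n, r = divmod(n - 1, 26)
        s = ascii_uppercase[r] + s
    return s

print([label(i) for i in range(30)])
# ['A', 'B', ..., 'Z', 'AA', 'AB', 'AC', 'AD']
```

This is the same numbering spreadsheet columns use, which can be handy if the labels ever need to round-trip to Excel.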
|
extract label value for checkbox input object with beautiful soup instead of mechanize in python
Question: New to mechanize and BeautifulSoup and am loving it.
I used the prototypical way of opening a URL with mechanize, and I now have
the returned object:
def OpenURL(URL, USERAGENT):
br = Browser()# Create a browser
br.set_handle_robots(False) # no robots
br.set_handle_refresh(False) # can sometimes hang without this
br.addheaders = [('User-agent', USERAGENT)]
#open the URL
result = br.open(URL)# Open the login page
return result
In my returned result, I have an input object of type "checkbox", name
"BoxName". The checkbox has a label. The HTML looks like this:
<input type="checkbox" name="BoxName" checked="checked" disabled="disabled" />
<label for="BoxName">DESIREDTEXT</label>
I am able to get the DESIREDTEXT with mechanize as follows: (code paraphrased
to save space)
if control.type == "checkbox":
for item in control.items:
if(control.name == "BoxName"):
DESIREDTEXT = str([label.text for label in item.get_labels()])
Is there an equivalent way to get the label text value with BeautifulSoup? I
am happy to use mechanize to retrieve it, but I just wondered if BeautifulSoup
had the ability as well.
**** addendum ****
HTML from source:
<input id="ctl00_ContentPlaceHolder1_chkCheckedIn" type="checkbox" name="ctl00$ContentPlaceHolder1$chkCheckedIn" checked="checked" disabled="disabled" />
<label for="ctl00_ContentPlaceHolder1_chkCheckedIn">Checked-In 1/17/2013 1:23:01 AM</label>
This is the code where Inbox.read() outputs all the HTML. I verified that the
label is there:
soup = BeautifulSoup(Inbox.read())
print soup.find('label',{'for': 'ctl00_ContentPlaceHolder1_chkCheckedIn'}).text
This is my error:
AttributeError                            Traceback (most recent call last)
/usr/local/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
    176         else:
    177             filename = fname
--> 178         __builtin__.execfile(filename, *where)

/home/ubuntu/testAffinity.py in <module>()
    133 from BeautifulSoup import BeautifulSoup
    134 soup = BeautifulSoup(Inbox.read())
--> 135 print soup.find('label',{'for': 'ctl00_ContentPlaceHolder1_chkCheckedIn'}).text
    136
    137

AttributeError: 'NoneType' object has no attribute 'text'
Not sure what I'm missing here, but it has to be simple. I notice that the
'for' value is different than the checkbox "name" value. I tried using the
checkbox "name" value but received the same error.
**** after upgrading to bs4 ****
    133 from bs4 import BeautifulSoup
    134 soup = BeautifulSoup(Inbox.read())
--> 135 print soup.find('label',{'for': 'ctl00_ContentPlaceHolder1_chkCheckedIn'}).text
    136
    137

AttributeError: 'NoneType' object has no attribute 'text'
**** after upgrading to bs4, I printed soup. The `for` attribute is correct,
and the label is present in the printed HTML:
<input checked="checked" disabled="disabled" id="ctl00_ContentPlaceHolder1_chkCheckedIn" name="ctl00$ContentPlaceHolder1$chkCheckedIn" type="checkbox"/>
<label for="ctl00_ContentPlaceHolder1_chkCheckedIn">Checked-In 1/17/2013 1:23:01 AM</label>
Answer: Yes, just give it a try:
soup.find('label',{'for':'BoxName'}).text
with your function it would look like:
html = OpenURL(URL, USERAGENT).read()
soup = BeautifulSoup(html)
print soup.find('label',{'for':'BoxName'}).text
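For completeness: if BeautifulSoup turns out not to be available, the same extraction can be sketched with only the standard library's `HTMLParser` (a minimal sketch that handles just the specific `<input>`/`<label for=...>` pattern from the question, nothing more):

```python
try:                                      # Python 3
    from html.parser import HTMLParser
except ImportError:                       # Python 2
    from HTMLParser import HTMLParser

class LabelText(HTMLParser):
    """Collect the text of the <label> whose 'for' attribute matches."""
    def __init__(self, target):
        HTMLParser.__init__(self)
        self.target = target
        self.capture = False
        self.text = None

    def handle_starttag(self, tag, attrs):
        # start capturing when we hit <label for="target">
        if tag == 'label' and dict(attrs).get('for') == self.target:
            self.capture = True

    def handle_data(self, data):
        if self.capture:
            self.text = data
            self.capture = False

html = ('<input type="checkbox" name="BoxName" checked="checked" />'
        '<label for="BoxName">DESIREDTEXT</label>')
parser = LabelText('BoxName')
parser.feed(html)
print(parser.text)  # DESIREDTEXT
```

For anything beyond this one pattern, BeautifulSoup's `find` is the more robust choice.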
|
Python: Intersection of full string from list with partial string
Question: Let's say I have a string and a list of strings:
a = 'ABCDEFG'
b = ['ABC', 'QRS', 'AHQ']
How can I pull out the string in list b that matches up perfectly with a
section of the string a? So the return would be something like `['ABC']`
The most important issue is that I have tens of millions of strings, so that
time efficiency is essential.
Answer: If you only want the first match in b:
next((s for s in b if s in a), None)
This has the advantage of short-circuiting as soon as it finds a match whereas
the other list solutions will keep going. If no match is found, it will return
`None`.
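Applied to the sample data from the question, a quick sketch:

```python
a = 'ABCDEFG'
b = ['ABC', 'QRS', 'AHQ']

# first element of b that occurs as a substring of a, or None
match = next((s for s in b if s in a), None)
print(match)  # ABC
```

If all matches are needed rather than the first, a comprehension `[s for s in b if s in a]` returns `['ABC']`. Note that with tens of millions of candidate strings, scanning each one with `in` is a separate pass over `a`; a multi-pattern matcher such as an Aho-Corasick automaton is the usual way to speed that up, though it is not in the standard library.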
|
Truncating files and logging in Python
Question: I'm writing a script that needs to go delete massive movie/media files from
our company's database. I'm developing in a Mac and Python environment both of
which are new to me. I'm trying to make this as resilient as possible since it
can possibly nuke the database of all projects currently in production and not
the old ones that are retired.
I would like to know if there are any severe logical flaws, whether I'm
logging correctly, etc. Any other suggestions to make this as robust and
careful as possible are appreciated.
import os.path
import shutil
import datetime
import logging
root_path = "blah"
age_in_days = 2
truncate_size = 1024
class TruncateOldFiles():
def delete_files(root_path):
if os.path.exists(root_path):
for dirpath, dirnames, filenames in os.walk(root_path):
for file in filenames:
current_path = os.path.join(dirpath, file)
file_modified_time = datetime.date(os.path.getmtime(current_path))
if ((datetime.datetime.now() - file_modified_time) > datetime.timedelta(days = age_in_days)):
count += 1
if count == len(files) and not os.path.isfile("donotdelete.txt"):
for file in filenames:
try:
with open (file, 'w+') as file:
file.truncate(1024)
log()
except IOError:
pass
def log():
format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(filename='myapp.log', level=logging.INFO, format = format)
logging.info('Starting to truncate all files...')
Also, I was only able to compile this in terminal but don't quite know how to
debug logical errors from it. I'm used to coding in C++ and Java in IDEs and
here I'm using Xcode which does not seem as conducive to my style of
development.
Thank you.
Answer: I am not sure where the mentioned database comes into play, you seem only to
be working on filenames in a filesystem.
* You are using `os.path.isfile()` which I would only use for testing if something that exist is a file (and not a directory, a link, etc.). It returns False (I had to look that up) if the name does not exists in the filesystem, so it works. But I would have expected it to throw an IOError. My recommendation is to use `os.path.exists()` instead.
* be careful when comparing `date()` and `datetime()`; they are not the same. And to get a `datetime()` from a timestamp, use `.fromtimestamp()`
* I hope you realise that the script always looks for the 'donotdelete.txt' in the directory that you start the script from. `os.walk` does not do a `os.chdir`. If that is not what you want (and have the `donotdelete.txt` as a safeguard for certain specific directories by having one in each directory not to truncate, you should test for `os.path.exists(os.path.join(dirpath, 'donotdelete.txt'))`
* `len(files)`? Do you mean `len(filenames)`, to see if all the files in the directory are old enough by comparing to `count`?
* You correctly construct a `current_path` from the `dirpath` and the `filename` in the for loop where you test for age. In the truncating for loop you just use `file`, which would try to open in the current directory.
* You are making an old-style class; I would always make classes [new style](http://docs.python.org/2/reference/datamodel.html#newstyle):
class TruncateOldFiles(object): ....
* you should have `self` parameters in each of the methods, then you can call `log` as `self.log()`; as written, your code would not work unless you call `TruncateOldFiles.log()`
* I am not sure where the format info in the log gets filled in from. It writes (after correcting how `log()` is called) only the line `Starting to truncate...` for each file it would truncate, without additional information.
* count is not initialised, just incremented, you need to do `count = 0`
* I would pass in the root path, days and truncate size a parameters to the class creation. The latter two probably as defaults.
* For this kind of destructive, non-reversible operation I add an argument to the class creation to be able to have it run without doing anything except logging. Maybe that is what the test for `donotdelete.txt` is for, but that does not log anything, so you have no indication in the log of what the program would be doing.
* For many classes I have a verbose argument that helps with finding errors, this is for interactive running and different from the log
* You have the 1024 hardcoded instead of using truncate_size, and you are opening and truncating files that are smaller than truncate_size which is unnecessary.
* you use `file` (a python keyword) as a variable name both in the for loop as well in the with statement, it probably works, but it is not very good style and bound to lead to problems when you extend the code in the for loop.
My class would be more like (but `log()` would still need fixing):
class TruncateOldFiles(object):
def __init__(self, age_in_days=2, truncate_size=1024,
verbose=0, for_real=True):
self._age = datetime.timedelta(days = age_in_days)
self._truncate_size = truncate_size
self._verbose = verbose
self._for_real = for_real
def delete_files(self, root_path):
if not os.path.exists(root_path):
if self._verbose > 1:
print 'root_path', self._root_path, 'does not exists'
return
for dirpath, dirnames, filenames in os.walk(root_path):
count = 0
for filename in filenames:
current_path = os.path.join(dirpath, filename)
file_modified_time = datetime.datetime.fromtimestamp(os.path.getmtime(current_path))
if self._verbose > 0:
print file_modified_time, current_path
if ((datetime.datetime.now() - file_modified_time) > self._age):
count += 1
if count == len(filenames) and not os.path.exists(os.path.join(dirpath, "donotdelete.txt")):
for filename in filenames:
current_path = os.path.join(dirpath, filename)
if os.path.getsize(current_path) <= self._truncate_size:
if self._verbose > 0:
print 'not big enough:', current_path
continue
try:
if self._verbose > 0:
print 'truncating:', current_path
if self._for_real:
with open (current_path, 'w+') as fp:
fp.truncate(self._truncate_size)
self.log()
except IOError:
pass
def log(self):
format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
logging.basicConfig(filename='myapp.log', level=logging.INFO, format = format)
logging.info('Starting to truncate all files...')
and the code to test this:
tof = TruncateOldFiles(verbose=1, for_real=False)
tof.delete_files('blah')
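As the answer notes, `log()` would still need fixing: the logger should be configured once at startup rather than on every call, and each log line should carry the per-file information. A minimal sketch (the filename and the message path are placeholders, not from the question):

```python
import logging

def setup_logging(filename='myapp.log'):
    # attach one file handler with the timestamped format, once, at startup
    logger = logging.getLogger('truncate')
    handler = logging.FileHandler(filename)
    fmt = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    handler.setFormatter(logging.Formatter(fmt))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

log = setup_logging()
# log one line per truncated file, with its path, instead of a generic banner
log.info('truncating: %s', '/some/path/file.mov')
```

The `delete_files` method would then call `self.log.info(...)` per file, giving the audit trail that a destructive script like this needs.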
|
Most lightweight way to plot streaming data in python
Question: To give you a sense of what I'm looking for, it looks like
[this](http://youtu.be/xMWIATas_e0?t=1m37s):
Up until now I have used matplotlib for all my plotting, and timing hasn't
been critical (it's been done in postprocessing).
I'm wondering if there is a lighter weight way to plot other than shifting my
data to the left and redrawing the entire plot.
Answer: Have a look at the [Matplotlib Animations
Examples](http://www.scipy.org/Cookbook/Matplotlib/Animations). The main trick
is to not completely redraw the graph but rather use the OO interface of
matplotlib and set the x/ydata of the plot-line you created. If you've
integrated your plot with some GUI, e.g., GTK, then definitely do it like
proposed in the respective section of the plot, otherwise you might interfere
with the event-loop of your GUI toolkit.
For reference, if the link ever dies:
from pylab import *
import time
ion()
tstart = time.time() # for profiling
x = arange(0,2*pi,0.01) # x-array
line, = plot(x,sin(x))
for i in arange(1,200):
line.set_ydata(sin(x+i/10.0)) # update the data
draw() # redraw the canvas
print 'FPS:' , 200/(time.time()-tstart)
|
What are the three arguments to Gdk.threads_add_idle()?
Question: Using python and the `gi.repository` module, I am trying to call
`Gdk.threads_add_idle`, but I get the error message that three arguments are
required. The [documentation](http://developer.gnome.org/gdk/2.22/gdk-
Threads.html#gdk-threads-add-idle), however, only mentions two args.
You can try the function (on a linux only, i guess) by typing the following in
a python interpreter:
from gi.repository import Gdk
Gdk.threads_add_idle(...)
Any ideas what the three args are?
Answer: By looking on a source code search engine, I was able to find a python project
using [that call](http://code.ohloh.net/file?fid=iaOddY7MjZkG_AZM-
XSzCrWAmC8&cid=Z2GcLmDDO0A&s=threads_add_idle&pp=0&fl=Python&ff=1&filterChecked=true&mp=1&ml=0&me=1&md=1&browser=Default#L313).
Gdk.threads_add_idle(GLib.PRIORITY_DEFAULT_IDLE, self._idle_call, data)
It looks like introspection data is wrong, the priority should already default
to `PRIORITY_DEFAULT_IDLE` (as specified in the the documentation you pointed
out). You should file a bug on <http://bugzilla.gnome.org>.
**UPDATE:**
Pouria's bug report has been resolved `NOTABUG`, as this is a [naming
confusion between the C and Python
API](http://bugzilla.gnome.org/show_bug.cgi?id=692280#c1).
|
Getting yesterdays date with time zone
Question: Hi, I was able to find the answer to this question, but now I need it with the timezone
included (http://stackoverflow.com/questions/1712116/formatting-yesterdays-
date-in-python)
This is working fine for me:
>>> import time
>>> time.strftime('/%Z/%Y/%m/%d')
'/EST/2013/01/18'
but is there a way to get the yesterdays date? I need to handle the timezone
change when we switch from EST to EDT, EDT to EST
The datetime module allows the use of timedelta, but naive objects don't
support timezones by default, and I'm not sure how to handle that.
Answer: The do-it-yourself method, though I don't know if there is a better one:
>>> import time
>>> from datetime import date, timedelta
>>> yesterday = date.today() - timedelta(1)
>>> yesterday = yesterday.strftime('%Y/%m/%d')
>>> yesterday = "/%s/%s" % ( time.tzname[0], yesterday )
>>> print yesterday
'/CET/2013/01/17'
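Note that `time.tzname[0]` in the snippet above is always the non-DST name. A variant that lets the C library resolve the zone abbreviation for the shifted instant (so a date on the other side of an EST/EDT switch gets that day's abbreviation):

```python
import time

# localtime() on the shifted timestamp fills in tm_isdst for that instant,
# so %Z yields the zone abbreviation that was in effect yesterday
yesterday = time.localtime(time.time() - 24 * 3600)
path = time.strftime('/%Z/%Y/%m/%d', yesterday)
print(path)
```

Subtracting a fixed 24 hours is only approximately "yesterday" right around a DST transition (one local day is 23 or 25 hours long); for bullet-proof calendar arithmetic, a timezone-aware library such as pytz is the usual answer.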
|
HOGDescriptor with videos to recognize objects
Question: Unfortunately I am both a Python and an OpenCV beginner, so please excuse me
if the question is stupid.
I am trying to use a `cv2.HOGDescriptor` to recognize objects in a video. I am
concerned with a frame-by-frame recognition (i.e. no tracking or so).
* * *
Here is what I am doing:
1. I read the video (currently a `.mpg`) by using
capture = cv.CreateFileCapture(video_path) #some path in which I have my video
#capturing frames
frame = cv.QueryFrame(capture) #returns cv2.cv.iplimage
2. In order to ultimately use the detector on the frames (which I would do by using
found, w = hog.detectMultiScale(frame, winStride, padding, scale)
) I figured that I need to convert `frame` from `cv2.cv.iplimage` to
`numpy.ndarray` which I did by
tmp = cv.CreateImage(cv.GetSize(frame),8,3)
cv.CvtColor(frame,tmp,cv.CV_BGR2RGB)
ararr = np.asarray(cv.GetMat(tmp))
Now I have the following error:
found, w = hog.detectMultiScale(ararr, winStride, padding, scale)
TypeError: a float is required
where
winStride=(8,8)
padding=(32,32)
scale=1.05
I really can't understand which element is the real problem here. I.e. which
number should be the float?
Any help appreciated
Answer: There is no need to perform that extra conversion yourself, that problem is
related to the mixing of the new and old OpenCV bindings for Python. The other
problem regarding `hog.detectMultiScale` is simply due to incorrect parameter
ordering.
The second problem can be directly seen by checking
`help(cv2.HOGDescriptor().detectMultiScale)`:
detectMultiScale(img[, hitThreshold[, winStride[, padding[,
scale[, finalThreshold[, useMeanshiftGrouping]]]]]])
as you can see, every parameter is optional but the first (the image). The
ordering is also important, since you are effectively using `winStride` as the
first, while it is expected to be the second, and so on. You can use named
arguments to pass them. (All this has been observed in the earlier answer.)
The other problem is the code mix, here is a sample code that you should
consider using:
import sys
import cv2
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
hogParams = {'winStride': (8, 8), 'padding': (32, 32), 'scale': 1.05}
video = cv2.VideoCapture(sys.argv[1])
while True:
ret, frame = video.read()
if not ret:
break
result = hog.detectMultiScale(frame, **hogParams)
print result
|
How to make a For loop without using the In keyword in Python
Question: I was just wondering how I could go about using a For loop without using the
"in" keyword in Python?
Since the "in" keyword tests to see if a value is in a list, returning True if
the value is in the list and False if the value is not in the list, it seems
confusing that a For loop uses the In keyword as well when it could just as
well use a word like "of" instead.
However, I tried to use a code like this:
for i of range(5):
print i
It returns a syntax error. So I was wondering if there was any way I could use
a For loop without also using the In keyword since it is confusing.
Answer: No. There is not, it's a part of the language and can't be changed (without
modifying the underlying language implementation).
I would argue that it's not confusing at all, as the syntax reads like
English, and parses fine as an LL(1) grammar. It also reduces the number of
keywords (which is good as it frees up more words for variable naming).
A lot of languages reuse keywords in different contexts, Python does it with
`as` to:
import bar as b
with foo() as f:
...
3.3 does it with `from` too:
from foo import bar
def baz(iter):
yield from iter
Even Java does this, `extends` is usually used to give the base class for a
class, but it's also used to specify an upper bound on a type for generics.
class Foo extends Bar {
void baz(List<? extends Bar> barList) {
...
}
}
Note that even if this was possible, it would be a bad idea, as it would
reduce readability for other Python programmers used to the language as it
stands.
Edit:
As other answers have given replacements for the `for` loop using `while`
instead, I'll add in the best way of doing a bad thing:
iterable = iter(some_data)
while True:
try:
value = next(iterable)
except StopIteration:
break
do_something(value)
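The `while`/`next()` translation above behaves identically to a plain `for` loop; a quick check:

```python
def for_loop(data, fn):
    for value in data:
        fn(value)

def while_loop(data, fn):
    # the desugared equivalent: iter() + next() + StopIteration
    iterable = iter(data)
    while True:
        try:
            value = next(iterable)
        except StopIteration:
            break
        fn(value)

a, b = [], []
for_loop(range(5), a.append)
while_loop(range(5), b.append)
print(a == b)  # True
```

Which also shows why `for ... in` is worth keeping: the explicit version is five lines where one would do.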
|
Dajaxice not found on production server
Question: I have a Django 1.4 project, running on Python 2.7 in which I'm using
[Dajaxice](http://www.dajaxproject.com/) 0.5.4.1. I have set it up on my
development machine (Windows 7) and everything works perfectly. However when I
deploy my app to production server (Ubuntu 12.04) I get 404 error for
`dajaxice.core.js` file and cannot resolve this problem no matter what.
Production server works with exactly the same versions of all software.
My project structure looks like this:
/myproject
/myproject/myproject-static/ <-- all the static files are here
/myproject/myproject-static/css/
/myproject/myproject-static/img/
/myproject/myproject-static/js/
/myproject/templates/
/myproject/myproject/
/myproject/main/
/myproject/app1/
/myproject/app2/
/myproject/app3/
etc.
I was following the Dajaxice installation steps [here](http://django-
dajaxice.readthedocs.org/en/latest/installation.html) and put everything in
its place (in the `settings.py`, `urls.py` and `base.html` files).
My `settings.py` file has also these values:
from unipath import Path
PROJECT_ROOT = Path(__file__).ancestor(3)
STATIC_ROOT = ''
STATIC_URL = '/myproject-static/'
STATICFILES_DIRS = (
PROJECT_ROOT.child('myproject-static'),
)
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'dajaxice.finders.DajaxiceFinder',
)
DAJAXICE_MEDIA_PREFIX = "dajaxice"
DAJAXICE_DEBUG = True
I have an `Alias` directive in my `django.conf` file which looks like this:
Alias /myproject-static/ "/path/to/myproject/myproject-static/"
I did `collectstatic` on my production server and got all static files
collected within few folders in the root of my project. So, now when I look at
my deployed web site, I can see that CSS is properly applied, JavaScript is
working fine and navigation around the site works as intended. Everything is
fine except Ajax is totally broken since `dajaxice.core.js` is never included.
My project folder structure after collecting static looks like this:
/myproject
/myproject/myproject-static/ <-- all the static files are originally here
/myproject/myproject-static/css/
/myproject/myproject-static/img/
/myproject/myproject-static/js/
/myproject/templates/
/myproject/admin/ <-- folder created with 'collectstatic' command
/myproject/css/ <-- folder created with 'collectstatic' command
/myproject/dajaxice/ <-- dajaxice.core.js is located here
/myproject/django_extensions/ <-- folder created with 'collectstatic' command
/myproject/img/ <-- folder created with 'collectstatic' command
/myproject/js/ <-- folder created with 'collectstatic' command
/myproject/myproject/
/myproject/main/
/myproject/app1/
/myproject/app2/
/myproject/app3/
etc.
Am I doing something completely wrong with my static files here?
What else should I try to fix this simple error?
Answer: Have you checked whether, like the rest of the assets, `dajaxice.core.js` is
inside your `static/dajaxice` folder? If not, the issue could be related to a
misconfiguration of `STATICFILES_FINDERS`; check [Installing dajaxice
again](http://django-
dajaxice.readthedocs.org/en/latest/installation.html#installing-dajaxice)
Another common issue with collectstatic and dajaxice is running the former
with the `--link` option. Are you using it?
Hope this helps
|
Python urlparse -- extract domain name without subdomain
Question: Need a way to extract a domain name without the subdomain from a url using
Python urlparse.
For example, I would like to extract `"google.com"` from a full url like
`"http://www.google.com"`.
The closest I can seem to come with `urlparse` is the `netloc` attribute, but
that includes the subdomain, which in this example would be `www.google.com`.
I know that it is possible to write some custom string manipulation to turn
www.google.com into google.com, but I want to avoid by-hand string transforms
or regex in this task. (The reason for this is that I am not familiar enough
with url formation rules to feel confident that I could consider every edge
case required in writing a custom parsing function.)
Or, if `urlparse` can't do what I need, does anyone know any other Python url-
parsing libraries that would?
Answer: You probably want to check out
[tldextract](http://pypi.python.org/pypi/tldextract), a library designed to do
this kind of thing.
It uses the Public Suffix List to try and get a decent split based on known
gTLDs, but do note that this is just a brute-force list, nothing special, so
it can get out of date (although hopefully it's curated so as not to).
>>> import tldextract
>>> tldextract.extract('http://forums.news.cnn.com/')
ExtractResult(subdomain='forums.news', domain='cnn', suffix='com')
So in your case:
>>> extracted = tldextract.extract('http://www.google.com')
>>> "{}.{}".format(extracted.domain, extracted.suffix)
"google.com"
|
Determine if a Python class is an Abstract Base Class or Concrete
Question: My Python application contains many abstract classes and implementations. For
example:
import abc
import datetime
class MessageDisplay(object):
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def display(self, message):
pass
class FriendlyMessageDisplay(MessageDisplay):
def greet(self):
hour = datetime.datetime.now().timetuple().tm_hour
if hour < 7:
raise Exception("Cannot greet while asleep.")
elif hour < 12:
self.display("Good morning!")
elif hour < 18:
self.display("Good afternoon!")
elif hour < 20:
self.display("Good evening!")
else:
self.display("Good night.")
class FriendlyMessagePrinter(FriendlyMessageDisplay):
def display(self, message):
print(message)
`FriendlyMessagePrinter` is a concrete class that we can use...
FriendlyMessagePrinter().greet()
Good night.
...but `MessageDisplay` and `FriendlyMessageDisplay` are abstract classes and
attempting to instantiate one would result in an error:
TypeError: Can't instantiate abstract class MessageDisplay with abstract methods display
How can I check if a given class object is an (uninstantiatable) abstract
class?
Answer:
import inspect
print(inspect.isabstract(object)) # False
print(inspect.isabstract(MessageDisplay)) # True
print(inspect.isabstract(FriendlyMessageDisplay)) # True
print(inspect.isabstract(FriendlyMessagePrinter)) # False
This checks that the internal flag `TPFLAGS_IS_ABSTRACT` is set in the class
object, so it can't be fooled as easily as a hand-rolled check for an
`__abstractmethods__` attribute:
class Fake:
    __abstractmethods__ = 'bluh'
print(inspect.isabstract(Fake))  # False, even though the attribute is set
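For reference, a self-contained version of the check (written with Python 3's `abc.ABC` base class; the question's code uses the Python 2 `__metaclass__` spelling):

```python
import abc
import inspect

class MessageDisplay(abc.ABC):
    @abc.abstractmethod
    def display(self, message):
        pass

class FriendlyMessagePrinter(MessageDisplay):
    def display(self, message):
        print(message)

print(inspect.isabstract(MessageDisplay))          # True
print(inspect.isabstract(FriendlyMessagePrinter))  # False
```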
|
Scrapy: why does my response object not have a body_as_unicode method?
Question: I wrote a spider, that worked brilliantly the first time. The second time I
tried to run it, it didn't venture beyond the `start_urls`. I tried to `fetch`
the url in `scrapy shell` and create a `HtmlXPathSelector` object from the
returned response. That is when I got the error
So the steps were:
[scrapy shell] fetch('http://example.com') #its something other than example.
[scrapy shell] from scrapy.selector import HtmlXPathSelector
[scrapy shell] hxs = HtmlXPathSelector(response)
---------------------------------------------------------------------------
Traceback:
AttributeError Traceback (most recent call last)
<ipython-input-3-a486208adf1e> in <module>()
----> 1 HtmlXPathSelector(response)
/home/codefreak/project-r42catalog/env-r42catalog/lib/python2.7/site-packages/scrapy/selector/lxmlsel.pyc in __init__(self, response, text, namespaces, _root, _expr)
29 body=unicode_to_str(text, 'utf-8'), encoding='utf-8')
30 if response is not None:
---> 31 _root = LxmlDocument(response, self._parser)
32
33 self.namespaces = namespaces
/home/codefreak/project-r42catalog/env-r42catalog/lib/python2.7/site-packages/scrapy/selector/lxmldocument.pyc in __new__(cls, response, parser)
25 if parser not in cache:
26 obj = object_ref.__new__(cls)
---> 27 cache[parser] = _factory(response, parser)
28 return cache[parser]
29
/home/codefreak/project-r42catalog/env-r42catalog/lib/python2.7/site-packages/scrapy/selector/lxmldocument.pyc in _factory(response, parser_cls)
11 def _factory(response, parser_cls):
12 url = response.url
---> 13 body = response.body_as_unicode().strip().encode('utf8') or '<html/>'
14 parser = parser_cls(recover=True, encoding='utf8')
15 return etree.fromstring(body, parser=parser, base_url=url)
Error:
AttributeError: 'Response' object has no attribute 'body_as_unicode'
Am I overlooking something very obvious or stumbled upon a bug in scrapy?
Answer: `body_as_unicode` is a method of
[TextResponse](http://doc.scrapy.org/en/latest/topics/request-
response.html#scrapy.http.TextResponse). TextResponse, or one of its
subclasses such as HtmlResponse, will be created by scrapy if the http
response contains textual content.
In [1]: fetch('http://scrapy.org')
...
In [2]: type(response)
Out[2]: scrapy.http.response.html.HtmlResponse
...
In [3]: fetch('http://www.scrapy.org/site-media/images/logo.png')
...
In [4]: type(response)
Out[4]: scrapy.http.response.Response
In your case, the most likely explanation is that scrapy believes the response
does not contain text.
Does the HTTP response from the server correctly set the Content-Type header?
Does it render correctly in a browser? These questions will help understand if
it's expected behavior or a bug.
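To make the distinction concrete, here is a simplified, hypothetical sketch of how a framework like scrapy might pick the response class from the Content-Type header (this is not scrapy's actual implementation, just an illustration of the class hierarchy described above):

```python
class Response(object):
    """Base response: raw bytes only, no body_as_unicode."""
    def __init__(self, body):
        self.body = body

class TextResponse(Response):
    """Textual response: adds body_as_unicode."""
    def body_as_unicode(self):
        return self.body.decode("utf-8")

class HtmlResponse(TextResponse):
    pass

def response_from_content_type(content_type, body):
    # Simplified stand-in for scrapy's Content-Type based dispatch.
    if content_type.startswith("text/html"):
        return HtmlResponse(body)
    if content_type.startswith("text/"):
        return TextResponse(body)
    return Response(body)  # e.g. image/png: no body_as_unicode
```

In a real spider, guarding with `isinstance(response, TextResponse)` before calling `body_as_unicode()` avoids the `AttributeError`.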
|
ctypes: Cast string to function?
Question: I was reading the article [Tips for Evading Anti-Virus During Pen
Testing](http://pen-testing.sans.org/blog/pen-testing/2011/10/13/tips-for-
evading-anti-virus-during-pen-testing) and was surprised by given Python
program:
from ctypes import *
shellcode = '\xfc\xe8\x89\x00\x00....'
memorywithshell = create_string_buffer(shellcode, len(shellcode))
shell = cast(memorywithshell, CFUNCTYPE(c_void_p))
shell()
The shellcode is shortened. Can someone explain what is going on? I'm familiar
with both Python and C, and I've tried reading up on the
[`ctypes`](http://docs.python.org/2.7/library/ctypes.html#module-ctypes)
module, but two main questions remain:
* What is stored in `shellcode`?
I know this has _something_ to do with C (in the article it is a shellcode
from Metasploit, and a different notation for ASCII was chosen), but I cannot
tell whether it's C source (probably not) or the output of some sort
of compilation (and if so, which?).
* Depending on the first question, what's the magic happening during the cast?
Answer: Have a look at this shellcode; I took it from [here](http://www.shell-
storm.org/shellcode/files/shellcode-648.php) (it pops up a MessageBoxA):
#include <stdio.h>
typedef void (* function_t)(void);
unsigned char shellcode[] =
"\xFC\x33\xD2\xB2\x30\x64\xFF\x32\x5A\x8B"
"\x52\x0C\x8B\x52\x14\x8B\x72\x28\x33\xC9"
"\xB1\x18\x33\xFF\x33\xC0\xAC\x3C\x61\x7C"
"\x02\x2C\x20\xC1\xCF\x0D\x03\xF8\xE2\xF0"
"\x81\xFF\x5B\xBC\x4A\x6A\x8B\x5A\x10\x8B"
"\x12\x75\xDA\x8B\x53\x3C\x03\xD3\xFF\x72"
"\x34\x8B\x52\x78\x03\xD3\x8B\x72\x20\x03"
"\xF3\x33\xC9\x41\xAD\x03\xC3\x81\x38\x47"
"\x65\x74\x50\x75\xF4\x81\x78\x04\x72\x6F"
"\x63\x41\x75\xEB\x81\x78\x08\x64\x64\x72"
"\x65\x75\xE2\x49\x8B\x72\x24\x03\xF3\x66"
"\x8B\x0C\x4E\x8B\x72\x1C\x03\xF3\x8B\x14"
"\x8E\x03\xD3\x52\x33\xFF\x57\x68\x61\x72"
"\x79\x41\x68\x4C\x69\x62\x72\x68\x4C\x6F"
"\x61\x64\x54\x53\xFF\xD2\x68\x33\x32\x01"
"\x01\x66\x89\x7C\x24\x02\x68\x75\x73\x65"
"\x72\x54\xFF\xD0\x68\x6F\x78\x41\x01\x8B"
"\xDF\x88\x5C\x24\x03\x68\x61\x67\x65\x42"
"\x68\x4D\x65\x73\x73\x54\x50\xFF\x54\x24"
"\x2C\x57\x68\x4F\x5F\x6F\x21\x8B\xDC\x57"
"\x53\x53\x57\xFF\xD0\x68\x65\x73\x73\x01"
"\x8B\xDF\x88\x5C\x24\x03\x68\x50\x72\x6F"
"\x63\x68\x45\x78\x69\x74\x54\xFF\x74\x24"
"\x40\xFF\x54\x24\x40\x57\xFF\xD0";
void real_function(void) {
puts("I'm here");
}
int main(int argc, char **argv)
{
function_t function = (function_t) &shellcode[0];
real_function();
function();
return 0;
}
Compile it and run it under any debugger; I'll use gdb:
> gcc shellcode.c -o shellcode
> gdb -q shellcode.exe
Reading symbols from shellcode.exe...done.
(gdb)
>
Disassemble the main function to see the difference between calling
`real_function` and `function`:
(gdb) disassemble main
Dump of assembler code for function main:
0x004013a0 <+0>: push %ebp
0x004013a1 <+1>: mov %esp,%ebp
0x004013a3 <+3>: and $0xfffffff0,%esp
0x004013a6 <+6>: sub $0x10,%esp
0x004013a9 <+9>: call 0x4018e4 <__main>
0x004013ae <+14>: movl $0x402000,0xc(%esp)
0x004013b6 <+22>: call 0x40138c <real_function> ; <- here we call our `real_function`
0x004013bb <+27>: mov 0xc(%esp),%eax
0x004013bf <+31>: call *%eax ; <- here we call the address that is loaded in eax (the address of the beginning of our shellcode)
0x004013c1 <+33>: mov $0x0,%eax
0x004013c6 <+38>: leave
0x004013c7 <+39>: ret
End of assembler dump.
(gdb)
There are two `call` instructions; let's set a breakpoint at `<main+31>` to see what is
loaded in eax:
(gdb) break *(main+31)
Breakpoint 1 at 0x4013bf
(gdb) run
Starting program: shellcode.exe
[New Thread 2856.0xb24]
I'm here
Breakpoint 1, 0x004013bf in main ()
(gdb) disassemble
Dump of assembler code for function main:
0x004013a0 <+0>: push %ebp
0x004013a1 <+1>: mov %esp,%ebp
0x004013a3 <+3>: and $0xfffffff0,%esp
0x004013a6 <+6>: sub $0x10,%esp
0x004013a9 <+9>: call 0x4018e4 <__main>
0x004013ae <+14>: movl $0x402000,0xc(%esp)
0x004013b6 <+22>: call 0x40138c <real_function>
0x004013bb <+27>: mov 0xc(%esp),%eax
=> 0x004013bf <+31>: call *%eax ; now we are here
0x004013c1 <+33>: mov $0x0,%eax
0x004013c6 <+38>: leave
0x004013c7 <+39>: ret
End of assembler dump.
(gdb)
Look at the first 3 bytes of the data that the address in eax points to:
(gdb) x/3x $eax
0x402000 <shellcode>: 0xfc 0x33 0xd2
(gdb) ^-------^--------^---- the first 3 bytes of the shellcode
So the CPU will `call 0x402000`, the beginning of our shellcode; let's
disassemble whatever is at `0x402000`:
(gdb) disassemble 0x402000
Dump of assembler code for function shellcode:
0x00402000 <+0>: cld
0x00402001 <+1>: xor %edx,%edx
0x00402003 <+3>: mov $0x30,%dl
0x00402005 <+5>: pushl %fs:(%edx)
0x00402008 <+8>: pop %edx
0x00402009 <+9>: mov 0xc(%edx),%edx
0x0040200c <+12>: mov 0x14(%edx),%edx
0x0040200f <+15>: mov 0x28(%edx),%esi
0x00402012 <+18>: xor %ecx,%ecx
0x00402014 <+20>: mov $0x18,%cl
0x00402016 <+22>: xor %edi,%edi
0x00402018 <+24>: xor %eax,%eax
0x0040201a <+26>: lods %ds:(%esi),%al
0x0040201b <+27>: cmp $0x61,%al
0x0040201d <+29>: jl 0x402021 <shellcode+33>
....
As you see, shellcode is nothing more than assembled machine instructions; the only
difference is in the way you write these instructions: shellcode uses special
techniques to make it position-independent, for example never using a fixed address.
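The same cast-an-address-to-a-function machinery can be demonstrated with a harmless target: instead of shellcode bytes, take the address of a real libc function and turn it back into something callable. This sketch assumes a POSIX system, where `CDLL(None)` exposes the C library linked into the running process:

```python
import ctypes

# Load the running process's symbols (links libc on POSIX; not Windows).
libc = ctypes.CDLL(None)

# Take a real, harmless function and extract its raw address ...
addr = ctypes.cast(libc.abs, ctypes.c_void_p).value

# ... then cast that bare address back into something callable, exactly
# like the shellcode example casts a buffer address with CFUNCTYPE.
prototype = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
func = prototype(addr)

print(func(-42))  # -> 42, calling libc's abs() through the rebuilt pointer
```

The shellcode version does the same thing, except the address it calls points at a data buffer it filled with machine code itself (on systems with non-executable data pages, that buffer would additionally need to be marked executable).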
The python equivalent to the above program:
#!python
from ctypes import *
shellcode_data = "\
\xFC\x33\xD2\xB2\x30\x64\xFF\x32\x5A\x8B\
\x52\x0C\x8B\x52\x14\x8B\x72\x28\x33\xC9\
\xB1\x18\x33\xFF\x33\xC0\xAC\x3C\x61\x7C\
\x02\x2C\x20\xC1\xCF\x0D\x03\xF8\xE2\xF0\
\x81\xFF\x5B\xBC\x4A\x6A\x8B\x5A\x10\x8B\
\x12\x75\xDA\x8B\x53\x3C\x03\xD3\xFF\x72\
\x34\x8B\x52\x78\x03\xD3\x8B\x72\x20\x03\
\xF3\x33\xC9\x41\xAD\x03\xC3\x81\x38\x47\
\x65\x74\x50\x75\xF4\x81\x78\x04\x72\x6F\
\x63\x41\x75\xEB\x81\x78\x08\x64\x64\x72\
\x65\x75\xE2\x49\x8B\x72\x24\x03\xF3\x66\
\x8B\x0C\x4E\x8B\x72\x1C\x03\xF3\x8B\x14\
\x8E\x03\xD3\x52\x33\xFF\x57\x68\x61\x72\
\x79\x41\x68\x4C\x69\x62\x72\x68\x4C\x6F\
\x61\x64\x54\x53\xFF\xD2\x68\x33\x32\x01\
\x01\x66\x89\x7C\x24\x02\x68\x75\x73\x65\
\x72\x54\xFF\xD0\x68\x6F\x78\x41\x01\x8B\
\xDF\x88\x5C\x24\x03\x68\x61\x67\x65\x42\
\x68\x4D\x65\x73\x73\x54\x50\xFF\x54\x24\
\x2C\x57\x68\x4F\x5F\x6F\x21\x8B\xDC\x57\
\x53\x53\x57\xFF\xD0\x68\x65\x73\x73\x01\
\x8B\xDF\x88\x5C\x24\x03\x68\x50\x72\x6F\
\x63\x68\x45\x78\x69\x74\x54\xFF\x74\x24\
\x40\xFF\x54\x24\x40\x57\xFF\xD0"
shellcode = c_char_p(shellcode_data)
function = cast(shellcode, CFUNCTYPE(None))
function()
|
Internationalization with python gae, babel and i18n. Can't output the correct string
Question:
jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir),extensions=['jinja2.ext.i18n'], autoescape = True)
jinja_env.install_gettext_translations(i18n)
config['webapp2_extras.i18n'] = {
'translations_path': 'locale',
'template_path': 'views'
}
app = webapp2.WSGIApplication([
('/', MainController.MainPageHandler)
], config=config, debug=True)
In the messages.po file.
> "Project-Id-Version: PROJECT VERSION\n" "Report-Msgid-Bugs-To:
> EMAIL@ADDRESS\n" "POT-Creation-Date: 2013-01-19 19:26+0800\n" "PO-Revision-
> Date: 2013-01-19 19:13+0800\n" "Last-Translator: FULL NAME \n" "Language-
> Team: en_US \n" "Plural-Forms: nplurals=2; plural=(n != 1)\n" "MIME-Version:
> 1.0\n" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-
> Encoding: 8bit\n" "Generated-By: Babel 0.9.6\n"
>
> #~ msgid "Hello-World"
>
> #~ msgstr "Hello World"
In the handler:
from webapp2_extras import i18n
from webapp2_extras.i18n import gettext as _
class MainPageHandler(Handler.Handler):
def get(self):
locale = self.request.GET.get('locale', 'en_US')
i18n.get_i18n().set_locale(locale)
logging.info(locale)
message = _('Hello-World')
logging.info(message)
self.render("main.html")
In the html file:
<div id="main">
{{ _("Hello-World") }}
</div>
When navigate to the webpage, it returns the string "Hello-World" instead of
"Hello World". I don't know what's wrong. Anyone can help?
Answer: A couple of things might be wrong, or might just be missing from the
description...

The default 'domain' with webapp2 translation is 'messages', not 'message', so
if your file is actually named 'message.po', that needs to change.
Secondly, the translation works off the compiled .mo file, not the .po, so if
you haven't run the compile step (`pybabel compile -f -d ./locale`), you need
to do that. You should have a file at `locale/en_US/LC_MESSAGES/messages.mo`
|
Dynamically create a list of shared arrays using python multiprocessing
Question: I'd like to share several numpy arrays between different child processes with
python's multiprocessing module. I'd like the arrays to be separately
lockable, and I'd like the number of arrays to be dynamically determined at
runtime. Is this possible?
In [this answer](http://stackoverflow.com/a/7908612/513688), J.F. Sebastian
lays out a nice way to use python's numpy arrays in shared memory while
multiprocessing. The array is lockable, which is what I want. I would like to
do something very similar, except with a variable number of shared arrays. The
number of arrays would be determined at runtime. His example code is very
clear and does almost exactly what I want, but I'm unclear how to declare a
variable number of such arrays without giving each one of them a hard-coded
name like `shared_arr_1`, `shared_arr_2`, et cetera. What's the right way to
do this?
Answer: Turns out this was easier than I thought! Following J.F. Sebastian's
encouragement, here's my crack at an answer:
import time
import ctypes
import logging
import Queue
import multiprocessing as mp
import numpy as np
info = mp.get_logger().info
def main():
logger = mp.log_to_stderr()
logger.setLevel(logging.INFO)
data_pipeline = Image_Data_Pipeline(
num_data_buffers=5,
buffer_shape=(60, 256, 512))
start = time.clock()
data_pipeline.load_buffers(data_pipeline.num_data_buffers)
end = time.clock()
data_pipeline.close()
print "Elapsed time:", end-start
class Image_Data_Pipeline:
def __init__(self, num_data_buffers, buffer_shape):
"""
Allocate a bunch of 16-bit buffers for image data
"""
self.num_data_buffers = num_data_buffers
self.buffer_shape = buffer_shape
pix_per_buf = np.prod(buffer_shape)
self.data_buffers = [mp.Array(ctypes.c_uint16, pix_per_buf)
for b in range(num_data_buffers)]
self.idle_data_buffers = range(num_data_buffers)
"""
Launch the child processes that make up the pipeline
"""
self.camera = Data_Pipeline_Process(
target=child_process, name='Camera',
data_buffers=self.data_buffers, buffer_shape=buffer_shape)
self.display_prep = Data_Pipeline_Process(
target=child_process, name='Display Prep',
data_buffers=self.data_buffers, buffer_shape=buffer_shape,
input_queue=self.camera.output_queue)
self.file_saving = Data_Pipeline_Process(
target=child_process, name='File Saving',
data_buffers=self.data_buffers, buffer_shape=buffer_shape,
input_queue=self.display_prep.output_queue)
return None
def load_buffers(self, N, timeout=0):
"""
Feed the pipe!
"""
for i in range(N):
self.camera.input_queue.put(self.idle_data_buffers.pop())
"""
Wait for the buffers to idle. Here would be a fine place to
feed them back to the pipeline, too.
"""
while True:
try:
self.idle_data_buffers.append(
self.file_saving.output_queue.get_nowait())
info("Buffer %i idle"%(self.idle_data_buffers[-1]))
except Queue.Empty:
time.sleep(0.01)
if len(self.idle_data_buffers) >= self.num_data_buffers:
break
return None
def close(self):
self.camera.input_queue.put(None)
self.display_prep.input_queue.put(None)
self.file_saving.input_queue.put(None)
self.camera.child.join()
self.display_prep.child.join()
self.file_saving.child.join()
class Data_Pipeline_Process:
def __init__(
self,
target,
name,
data_buffers,
buffer_shape,
input_queue=None,
output_queue=None,
):
if input_queue is None:
self.input_queue = mp.Queue()
else:
self.input_queue = input_queue
if output_queue is None:
self.output_queue = mp.Queue()
else:
self.output_queue = output_queue
self.command_pipe = mp.Pipe() #For later, we'll send instrument commands
self.child = mp.Process(
target=target,
args=(name, data_buffers, buffer_shape,
self.input_queue, self.output_queue, self.command_pipe),
name=name)
self.child.start()
return None
def child_process(
name,
data_buffers,
buffer_shape,
input_queue,
output_queue,
command_pipe):
if name == 'Display Prep':
display_buffer = np.empty(buffer_shape, dtype=np.uint16)
while True:
try:
process_me = input_queue.get_nowait()
except Queue.Empty:
time.sleep(0.01)
continue
if process_me is None:
break #We're done
else:
info("start buffer %i"%(process_me))
with data_buffers[process_me].get_lock():
a = np.frombuffer(data_buffers[process_me].get_obj(),
dtype=np.uint16)
if name == 'Camera':
"""
Fill the buffer with data (eventually, from the
camera, dummy data for now)
"""
a.fill(1)
elif name == 'Display Prep':
"""
Process the 16-bit image into a display-ready
8-bit image. Fow now, just copy the data to a
similar buffer.
"""
display_buffer[:] = a.reshape(buffer_shape)
elif name == 'File Saving':
"""
Save the data to disk.
"""
a.tofile('out.raw')
info("end buffer %i"%(process_me))
output_queue.put(process_me)
return None
if __name__ == '__main__':
main()
Background: This is the skeleton of a data-acquisition pipeline. I want to
acquire data at a very high rate, process it for on-screen display, and save
it to disk. I don't ever want display rate or disk rate to limit acquisition,
which is why I think using separate child processes in individual processing
loops is appropriate.
Here's typical output of the dummy program:
C:\code\instrument_control>c:\Python27\python.exe test.py
[INFO/MainProcess] allocating a new mmap of length 15728640
[INFO/MainProcess] allocating a new mmap of length 15728640
[INFO/MainProcess] allocating a new mmap of length 15728640
[INFO/MainProcess] allocating a new mmap of length 15728640
[INFO/MainProcess] allocating a new mmap of length 15728640
[[INFO/Camera] child process calling self.run()
INFO/Display Prep] child process calling self.run()
[INFO/Camera] start buffer 4
[INFO/File Saving] child process calling self.run()
[INFO/Camera] end buffer 4
[INFO/Camera] start buffer 3
[INFO/Camera] end buffer 3
[INFO/Camera] start buffer 2
[INFO/Display Prep] start buffer 4
[INFO/Camera] end buffer 2
[INFO/Camera] start buffer 1
[INFO/Camera] end buffer 1
[INFO/Camera] start buffer 0
[INFO/Camera] end buffer 0
[INFO/Display Prep] end buffer 4
[INFO/Display Prep] start buffer 3
[INFO/File Saving] start buffer 4
[INFO/Display Prep] end buffer 3
[INFO/Display Prep] start buffer 2
[INFO/File Saving] end buffer 4
[INFO/File Saving] start buffer 3
[INFO/MainProcess] Buffer 4 idle
[INFO/Display Prep] end buffer 2
[INFO/Display Prep] start buffer 1
[INFO/File Saving] end buffer 3
[INFO/File Saving] start buffer 2
[INFO/MainProcess] Buffer 3 idle
[INFO/Display Prep] end buffer 1
[INFO/Display Prep] start buffer 0
[INFO/File Saving] end buffer 2
[INFO/File Saving] start buffer 1
[[INFO/MainProcess] Buffer 2 idle
INFO/Display Prep] end buffer 0
[INFO/File Saving] end buffer 1
[INFO/File Saving] start buffer 0
[INFO/MainProcess] Buffer 1 idle
[INFO/File Saving] end buffer 0
[INFO/MainProcess] Buffer 0 idle
[INFO/Camera] process shutting down
[INFO/Camera] process exiting with exitcode 0
[INFO/Display Prep] process shutting down
[INFO/File Saving] process shutting down
[INFO/Display Prep] process exiting with exitcode 0
[INFO/File Saving] process exiting with exitcode 0
Elapsed time: 0.263240348548
[INFO/MainProcess] process shutting down
C:\code\instrument_control>
It seems to do what I want: the data gets processed for display and saved to
disk without interfering with the acquisition rate.
|
Get the "bits" of a float in Python?
Question: I am looking for the Python equivalent of Java's `Float.floatToIntBits`.
I found this [Python: obtain & manipulate (as integers) bit patterns of
floats](http://stackoverflow.com/questions/1922771/python-obtain-manipulate-
as-integers-bit-patterns-of-floats) but does anyone know of a less complicated
way?
Answer: The answer that Alex Martelli gives in that question is really pretty simple
-- you can reduce it to:
>>> import struct
>>>
>>>
>>> def floatToBits(f):
... s = struct.pack('>f', f)
... return struct.unpack('>l', s)[0]
...
...
>>> floatToBits(173.3125)
1127043072
>>> hex(_)
'0x432d5000'
Once you have it as an integer, you can perform any other manipulations you
need to.
You can reverse the order of operations to round-trip:
>>> def bitsToFloat(b):
... s = struct.pack('>l', b)
... return struct.unpack('>f', s)[0]
>>> bitsToFloat(0x432d5000)
173.3125
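The same pack/unpack pattern works for 64-bit doubles — which is what a bare Python `float` actually is — by swapping in the `'>d'`/`'>q'` format codes (a sketch along the same lines):

```python
import struct

def doubleToBits(d):
    # Pack as big-endian IEEE 754 double, reinterpret as signed 64-bit int.
    s = struct.pack('>d', d)
    return struct.unpack('>q', s)[0]

def bitsToDouble(b):
    s = struct.pack('>q', b)
    return struct.unpack('>d', s)[0]

print(hex(doubleToBits(1.0)))  # 0x3ff0000000000000
```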
|
For class using the `with` keyword in python, can I have it __repr__ (print) itself?
Question: I'm trying to create an object that plays nicely with the `with` keyword in
Python. I understand that you have to create `__enter__` and `__exit__`
methods, but I'm not quite sure how to manipulate the object. As a concrete
example I wrote a class that creates local space to work in, and cleans up on
exit:
import tempfile, os, shutil
class temp_workspace(object):
def __enter__(self):
self.local_dir = os.getcwd()
self.temp_dir = tempfile.mkdtemp()
os.chdir(self.temp_dir)
def __exit__(self, exc_type, exc_value, traceback):
os.chdir(self.local_dir)
shutil.rmtree(self.temp_dir)
def __repr__(self):
return self.temp_dir
This works just fine, but when I try to print the local directory name:
with temp_workspace() as T:
print "Temp directory name is ", T
It shows up as `None` and `__repr__` isn't even called! It's really confusing
since `T` is also `NoneType`. What am I doing wrong?
Answer: You're not returning the object from `__enter__` as specified by the [context
manager protocol](http://docs.python.org/2/reference/datamodel.html#context-
managers). Add `return self` to the end of your `__enter__` method.
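Applied to the class from the question, the fix is one line; the rest is unchanged:

```python
import os
import shutil
import tempfile

class temp_workspace(object):
    def __enter__(self):
        self.local_dir = os.getcwd()
        self.temp_dir = tempfile.mkdtemp()
        os.chdir(self.temp_dir)
        return self  # this is the value bound by "with ... as T"

    def __exit__(self, exc_type, exc_value, traceback):
        os.chdir(self.local_dir)
        shutil.rmtree(self.temp_dir)

    def __repr__(self):
        return self.temp_dir
```

Now `with temp_workspace() as T:` binds the instance to `T`, and printing `T` calls `__repr__` as expected.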
|
moving django site to another server
Question: I want to move my django site to another machine. (I want to start with empty
DB)
I thought I'd be done with the following steps:

  * copy all files
  * set up all tools (django, python, ...etc)
  * run syncdb
When I run `manage.py syncdb`, it complains that some table (such as
`django_content_type`) doesn't exist.
I looked at the DB, indeed there are no tables in the DB.
I tried recreating the project (`startproject`) and the app (`startapp`), but
they fail because the project or app name is already taken.
What should I do?
The only reason I can think of is MySQL being upgraded to 5.5.27 (which
defaults to InnoDB).
* * *
$ python manage.py syncdb
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/core/management/base.py", line 231, in execute
self.validate()
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/core/management/base.py", line 266, in validate
num_errors = get_validation_errors(s, app)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/core/management/validation.py", line 30, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/loading.py", line 158, in get_app_errors
self._populate()
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/loading.py", line 64, in _populate
self.load_app(app_name, True)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/loading.py", line 88, in load_app
models = import_module('.models', app_name)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/ubuntu/Documents/aLittleArtist/django/gallery/models.py", line 152, in <module>
ALBUM_IMAGE_TYPE = ContentType.objects.get(app_label="gallery", model="AlbumImage")
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/manager.py", line 131, in get
return self.get_query_set().get(*args, **kwargs)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/query.py", line 361, in get
num = len(clone)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/query.py", line 85, in __len__
self._result_cache = list(self.iterator())
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/query.py", line 291, in iterator
for row in compiler.results_iter():
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 763, in results_iter
for rows in self.execute_sql(MULTI):
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 818, in execute_sql
cursor.execute(sql, params)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/backends/util.py", line 40, in execute
return self.cursor.execute(sql, params)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 114, in execute
return self.cursor.execute(query, args)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
self.errorhandler(self, exc, value)
File "/home/ubuntu/virtualenvs/aLittleArtist/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.DatabaseError: (1146, "Table 'gallery_db.django_content_type' doesn't exist")
Answer: ALBUM_IMAGE_TYPE = ContentType.objects.get(app_label="gallery",
model="AlbumImage")
This line was the culprit.
It seems the line attempts a DB query at import time, before any DB table has
been created.
I removed the line and the relevant code, let syncdb run, and then migrated
with South.
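The general lesson is to defer any database access from import time to first use. A framework-agnostic sketch of the pattern (all names here are made up; the dict stands in for the real database):

```python
import functools

# Stand-in "database": in the real bug, tables only exist after syncdb runs.
_tables = {}

def run_syncdb():
    # Pretend syncdb just created the content-type table.
    _tables["django_content_type"] = {("gallery", "albumimage"): "AlbumImage"}

# Bug pattern (module level, runs at import time, before any table exists):
#   ALBUM_IMAGE_TYPE = lookup_content_type()

# Fix: defer the query to first use, and cache the result.
@functools.lru_cache(maxsize=None)
def get_album_image_type():
    return _tables["django_content_type"][("gallery", "albumimage")]
```

In Django terms, replacing the module-level `ContentType.objects.get(...)` with a function (or a lazy/cached lookup) means `syncdb` can create the tables before the first query runs.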
|
Openerplib error "Method not available execute_kw"
Question: When I try to load data from an OpenERP DB using [OpenERP Client
Lib](http://pypi.python.org/pypi/openerp-client-lib/1.0.3),
I get the error below, from an IPython session.
openerp-server is running, as well as openerp-web, and I get no error in the
log. Config files for both are the defaults.
In [8]: import openerplib
In [9]: connection = openerplib.get_connection(hostname="localhost",database="my_db",login="admin", password="1234")
In [10]: user_model = connection.get_model("res.users")
In [11]: ids = user_model.search([("login", "=", "admin")])
---------------------------------------------------------------------------
Fault Traceback (most recent call last)
/home/vanessa/<ipython-input-11-762f474d37fc> in <module>()
----> 1 ids = user_model.search([("login", "=", "vanessa")])
/usr/local/lib/python2.7/dist-packages/openerp_client_lib-1.1.0-py2.7.egg/openerplib/main.pyc in proxy(*args, **kw)
311 self.model_name,
312 method,
--> 313 args, kw)
314 if method == "read":
315 if isinstance(result, list) and len(result) > 0 and "id" in result[0]:
/usr/local/lib/python2.7/dist-packages/openerp_client_lib-1.1.0-py2.7.egg/openerplib/main.pyc in proxy(*args)
178 """
179 self.__logger.debug('args: %r', args)
--> 180 result = self.connector.send(self.service_name, method, *args)
181 self.__logger.debug('result: %r', result)
182 return result
/usr/local/lib/python2.7/dist-packages/openerp_client_lib-1.1.0-py2.7.egg/openerplib/main.pyc in send(self, service_name, method, *args)
81 url = '%s/%s' % (self.url, service_name)
82 service = xmlrpclib.ServerProxy(url)
---> 83 return getattr(service, method)(*args)
84
85 class XmlRPCSConnector(XmlRPCConnector):
/usr/lib/python2.7/xmlrpclib.pyc in __call__(self, *args)
1222 return _Method(self.__send, "%s.%s" % (self.__name, name))
1223 def __call__(self, *args):
-> 1224 return self.__send(self.__name, args)
1225
1226 ##
/usr/lib/python2.7/xmlrpclib.pyc in __request(self, methodname, params)
1576 self.__handler,
1577 request,
-> 1578 verbose=self.__verbose
1579 )
1580
/usr/lib/python2.7/xmlrpclib.pyc in request(self, host, handler, request_body, verbose)
1262 for i in (0, 1):
1263 try:
-> 1264 return self.single_request(host, handler, request_body, verbose)
1265 except socket.error, e:
1266 if i or e.errno not in (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE):
/usr/lib/python2.7/xmlrpclib.pyc in single_request(self, host, handler, request_body, verbose)
1295 if response.status == 200:
1296 self.verbose = verbose
-> 1297 return self.parse_response(response)
1298 except Fault:
1299 raise
/usr/lib/python2.7/xmlrpclib.pyc in parse_response(self, response)
1471 p.close()
1472
-> 1473 return u.close()
1474
1475 ##
/usr/lib/python2.7/xmlrpclib.pyc in close(self)
791 raise ResponseError()
792 if self._type == "fault":
--> 793 raise Fault(**self._stack[0])
794 return tuple(self._stack)
795
Fault: <Fault Method not available execute_kw: 'Traceback (most recent call last):\n File "/usr/local/lib/python2.7/dist-packages/openerp-server/netsvc.py", line 489, in dispatch\n result = ExportService.getService(service_name).dispatch(method, auth, params)\n File "/usr/local/lib/python2.7/dist-packages/openerp-server/service/web_services.py", line 595, in dispatch\n raise NameError("Method not available %s" % method)\nNameError: Method not available execute_kw\n'>
Answer: `execute_kw` is only available from OpenERP 6.1 onwards.
Thus, OpenERP Client Lib is not compatible with OpenERP 6.0 or earlier.
|
Creating a video using OpenCV 2.4.0 in python
Question: I am trying to create a video using `OpenCV 2.4.0` in `python 2.7.2`, but the
resulting `avi` file size is 0.
My code:
from cv2 import *
im1 = cv.LoadImage("1.jpg")
fps = 20
frame_size = cv.GetSize(im1)
#writer = cv.CreateVideoWriter("out.avi", CV_FOURCC('M', 'J', 'P', 'G'), fps, frame_size, True)
v = VideoWriter()
v.open("out.avi", cv.CV_FOURCC('F', 'M', 'P', '4'), fps, (800,600), True)
print v.isOpened()
`isOpened()` is always returning `false`.
Another try:
#!/usr/bin/env python
import sys
from cv2 import *
im1 = cv.LoadImage("1.jpg")
if not im1:
print "Error loading image"
im2 = cv.LoadImage("2.jpg")
if not im1:
print "Error loading image"
fps = 20
frame_size = cv.GetSize(im1)
writer = cv.CreateVideoWriter("out.avi", cv.CV_FOURCC('M', 'J', 'P', 'G'), fps, frame_size, True)
if not writer:
print "Error in creating video writer"
sys.exit(1)
else:
cv.WriteFrame(writer, im1)
cv.WriteFrame(writer, im2)
del writer
No errors, but the output is empty.
What am I missing?
Answer:
import cv2
img1 = cv2.imread('1.jpg')
img2 = cv2.imread('2.jpg')
img3 = cv2.imread('3.jpg')
height , width , layers = img1.shape
video = cv2.VideoWriter('video.avi',-1,1,(width,height))
video.write(img1)
video.write(img2)
video.write(img3)
cv2.destroyAllWindows()
video.release()
A simple piece of code for what you want to do; for details, see [the
VideoWriter tutorial](http://docs.opencv.org/doc/tutorials/highgui/video-write/video-
write.html#videowritehighgui).
|
Unexpected behavior when using python thread locks and circulair imports
Question: I wrote a simple test program using thread locks. This program does not behave
as expected, and the python interpreter does not complain.
test1.py:
from __future__ import with_statement
from threading import Thread, RLock
import time
import test2
lock = RLock()
class Test1(object):
def __init__(self):
print("Start Test1")
self.test2 = test2.Test2()
self.__Thread = Thread(target=self.myThread, name="thread")
self.__Thread.daemon = True
self.__Thread.start()
self.test1Method()
def test1Method(self):
print("start test1Method")
with lock:
print("entered test1Method")
time.sleep(5)
print("end test1Method")
def myThread(self):
self.test2.test2Method()
if __name__ == "__main__":
client = Test1()
raw_input()
test2.py:
from __future__ import with_statement
import time
import test1
lock = test1.lock
class Test2(object):
def __init__(self):
print("Start Test2")
def test2Method(self):
print("start test2Method")
with lock:
print("entered test2Method")
time.sleep(5)
print("end test2Method")
Both sleeps are executed at the same time! Not what I expected when using the
lock.
When test2Method is moved to test1.py everything works fine. When I create the
lock in test2.py and import it in test1.py everything works fine. When I
create the lock in a separate source file and import it both in test1.py and
test2.py everything works fine.
It probably has to do with circular imports.
But why doesn't Python complain about this?
Answer: In Python, when you execute a script using `$ python test1.py`, your
`test1.py` is imported as `__main__`, not as `test1`. So if you want the lock
defined in the launched script, you shouldn't `import test1`; you should
`import __main__`, because the former re-executes the file and creates another
lock, different from `__main__.lock`
(`test1.lock != __main__.lock`).
So one fix to your problem (**which is far from the best one**), and a way to **see
what is happening**, is to change your two scripts to this:
test1.py:
from __future__ import with_statement
from threading import Thread, RLock
import time
lock = RLock()
class Test1(object):
def __init__(self):
print("Start Test1")
import test2 # <<<<<<<<<<<<<<<<<<<<<<<< Import is done here to be able to refer to __main__.lock.
self.test2 = test2.Test2()
self.__Thread = Thread(target=self.myThread, name="thread")
self.__Thread.daemon = True
self.__Thread.start()
self.test1Method()
def test1Method(self):
print("start test1Method")
with lock:
print("entered test1Method")
time.sleep(5)
print("end test1Method")
def myThread(self):
self.test2.test2Method()
if __name__ == "__main__":
client = Test1()
raw_input()
test2.py:
from __future__ import with_statement
import time
# <<<<<<<<<<<<<<<<<<<<<<<<<<<<< test1 is changed to __main__ to get the same lock as the one used in the launched script.
import __main__
lock = __main__.lock
class Test2(object):
def __init__(self):
print("Start Test2")
def test2Method(self):
print("start test2Method")
with lock:
print("entered test2Method")
time.sleep(5)
print("end test2Method")
HTH,
|
solving `DetachedInstanceError` with event.listens on SQLAlchemy
Question: I have a very similar problem to this
[question](http://stackoverflow.com/questions/9704927/pyramid-sql-alchemy-
detachedinstanceerror); however, even if I try to do something like
...
from my_app.models import Session
user = Session.merge(user)
new_foo = models.Foo(user=user)
...
where I am basically getting the user model object from the request and trying
to create a new `Foo` object that has a relation to the user, it fails with a
`DetachedInstanceError` because (I think) the `event.listens_for` handler I am
using fires later with a different `Session`.
My listener function looks like this:
@event.listens_for(mapper, 'init')
def auto_add(target, args, kwargs):
Session.add(target)
And the `Session` is defined as:
Session = scoped_session(sessionmaker())
If I am relying on `event.listens_for` to add a target to a `Session`, how can I
make sure that objects like the user, which come from the request context, are
handled correctly?
The one thing that allowed me to make this work was to call `sessionmaker`
with `expire_on_commit=False`, but I don't think that is what I should be doing,
as (per the awesome SQLA docs):
> Another behavior of commit() is that by default it expires the state of all
> instances present after the commit is complete. This is so that when the
> instances are next accessed, either through attribute access or by them
> being present in a Query result set, they receive the most recent state.
> To disable this behavior, configure sessionmaker with
> `expire_on_commit=False`.
I want to have the most recent state of the user object. What are my options
to take care of the `merge` in the right place?
The actual traceback (web framework specific lines trimmed) looks like this:
File "/Users/alfredo/python/my_app/my_app/models/users.py", line 31, in __repr__
return '<User %r>' % self.username
File "/Users/alfredo/.virtualenvs/my_app/lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-macosx-10.8-intel.egg/sqlalchemy/orm/attributes.py", line 251, in __get__
return self.impl.get(instance_state(instance), dict_)
File "/Users/alfredo/.virtualenvs/my_app/lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-macosx-10.8-intel.egg/sqlalchemy/orm/attributes.py", line 543, in get
value = callable_(passive)
File "/Users/alfredo/.virtualenvs/my_app/lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-macosx-10.8-intel.egg/sqlalchemy/orm/state.py", line 376, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "/Users/alfredo/.virtualenvs/my_app/lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-macosx-10.8-intel.egg/sqlalchemy/orm/loading.py", line 554, in load_scalar_attributes
(state_str(state)))
DetachedInstanceError: Instance <User at 0x10986d1d0> is not bound to a Session; attribute refresh operation cannot proceed
The actual method where this takes place is this one:
def post(self, *args, **kw):
    # Actually create one
    new_foo = Foo(user=request.context['user'], title=kw['title'], body=kw['body'])
    new_foo.flush()
    redirect('/pages/')
The problem above is that you see that I am fetching the `User` object from
the request context, and that happened in a different `Session` (or at least,
that is what I assume is happening).
EDIT: It seems that the use of `__repr__` is causing me issues, at some point
during the request a string representation of the model is being called and
this section of my model gets me in trouble:
def __repr__(self):
    return '<User %r>' % self.username
If I don't implement that method, all that I had before works as expected.
What can I do to prevent this raising on the repr?
Answer: I saw the same error when I ran nosetest with SQLAlchemy. In my case,
`logging.getLogger('foo').debug('data %s', mydata)` caused this error; 'mydata' is
a SQLAlchemy-mapped instance that was not committed yet. My workaround is
`logging.getLogger('foo').debug('data %s', repr(mydata))`.
Can you change the `__repr__` method as follows in order to figure out your
problem?
def __repr__(self):
    try:
        return '<User %r>' % self.username
    except:
        return 'I got it!'
|
How can I create my custom xmlrpc fault error in python
Question: When using `xmlrpclib` in python an error on the server side is reported by
the client side as an `xmlrpclib.Fault`. A `division by zero` error in a
method on the server side (using `SimpleXMLRPCServer`) usually gives an output
like the following:
xmlrpclib.Fault: <Fault 1: "<type 'exceptions.ZeroDivisionError'>:integer division or modulo by zero">
This is useful as it notes the type of error, but not _where_ it happened. How
is it possible to overwrite/modify the `xmlrpclib.Fault` method (in
`SimpleXMLRPCServer`?) so it reports the error e.g. as follows:
xmlrpclib.Fault: <Fault 1: "<type 'exceptions.ZeroDivisionError'>:integer division or modulo by zero MODULE: myMethod.py LINE: 657">
i.e. to include the name of the module where the error appeared and the line
number. Is that possible to do on the server side, without raising custom
exceptions? ANY error should be reported in the same way on the client side.
Answer: If you use `SimpleXMLRPCServer`, you can override the internal
`_marshaled_dispatch` method to add information to the `Fault()` instance
generated:
This is the original method:
def _marshaled_dispatch(self, data, dispatch_method = None, path = None):
    try:
        params, method = xmlrpclib.loads(data)

        # generate response
        if dispatch_method is not None:
            response = dispatch_method(method, params)
        else:
            response = self._dispatch(method, params)
        # wrap response in a singleton tuple
        response = (response,)
        response = xmlrpclib.dumps(response, methodresponse=1,
                                   allow_none=self.allow_none, encoding=self.encoding)
    except:
        # report low level exception back to server
        # (each dispatcher should have handled their own
        # exceptions)
        exc_type, exc_value = sys.exc_info()[:2]
        response = xmlrpclib.dumps(
            xmlrpclib.Fault(1, "%s:%s" % (exc_type, exc_value)),
            encoding=self.encoding, allow_none=self.allow_none)

    return response
You can subclass `SimpleXMLRPCServer.SimpleXMLRPCServer` and override this
method:
import SimpleXMLRPCServer
import sys
import xmlrpclib

class VerboseFaultXMLRPCServer(SimpleXMLRPCServer.SimpleXMLRPCServer):
    def _marshaled_dispatch(self, data, dispatch_method = None, path = None):
        try:
            params, method = xmlrpclib.loads(data)

            # generate response
            if dispatch_method is not None:
                response = dispatch_method(method, params)
            else:
                response = self._dispatch(method, params)
            # wrap response in a singleton tuple
            response = (response,)
            response = xmlrpclib.dumps(response, methodresponse=1,
                                       allow_none=self.allow_none, encoding=self.encoding)
        except:
            # report low level exception back to server
            # (each dispatcher should have handled their own
            # exceptions)
            exc_type, exc_value, tb = sys.exc_info()
            while tb.tb_next is not None:
                tb = tb.tb_next  # find last frame of the traceback
            lineno = tb.tb_lineno
            code = tb.tb_frame.f_code
            filename = code.co_filename
            name = code.co_name
            response = xmlrpclib.dumps(
                xmlrpclib.Fault(1, "%s:%s FILENAME: %s LINE: %s NAME: %s" % (
                    exc_type, exc_value, filename, lineno, name)),
                encoding=self.encoding, allow_none=self.allow_none)

        return response
Then use `VerboseFaultXMLRPCServer` instead of
`SimpleXMLRPCServer.SimpleXMLRPCServer`.
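The traceback-walking logic in the `except` clause can be exercised on its own, independent of XML-RPC. `last_frame_info` and `boom` are illustrative names, not part of the answer's code:

```python
import sys

def last_frame_info():
    # walk to the innermost frame of the active traceback,
    # mirroring the while loop in _marshaled_dispatch above
    exc_type, exc_value, tb = sys.exc_info()
    while tb.tb_next is not None:
        tb = tb.tb_next
    code = tb.tb_frame.f_code
    return exc_type.__name__, str(exc_value), code.co_filename, tb.tb_lineno, code.co_name

def boom():
    return 1 / 0

try:
    boom()
except ZeroDivisionError:
    info = last_frame_info()

print(info[0], info[4])  # ZeroDivisionError boom
```

The innermost frame names the function where the error actually occurred, which is exactly the extra detail added to the `Fault` message.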
|
Shell - Trying to output last portion of a logfile (Time-stamp is the separator)
Question: I would like to read in a logfile into shell, and output the last logged event
that had occurred. These logs are selenium-python automated test results that
I am using in a larger script. This script requires the last chunk of the log.
Here is an example of one of the last logged events from an example output
file:
2013-01-10 13:49:55
Notes:
FAIL: runTest (__main__.changeContent)
Traceback (most recent call last):
File "demo-about-content-pyunit-w.py", line 294, in runTest
self.assertEqual(browser.find_element_by_id("descriptionAbout").text, "We are a business used for automated testing and pretty much boring.", 'about desc would not change')
AssertionError: about desc would not change
Ran 1 test in 40.954s>
FAILED (failures=1)
The logfile contains other logs similar to this one in chronological order. I
would like to output/scrape the last entry using the timestamp above as a
delimiter/cutoff threshold. I have tried using `tail` and regular expressions,
but am not quite sure how to go about doing this. I would prefer to read the
file from the end, as efficiency is important. I would like to solve this
problem in shell, but a solution in Python might also be useful.
Answer: I'm not sure if this is what you want, but you may try:
tac logfile.log | while read line; do echo ${line};
[[ "${line}" =~ [0-9]{4}(-[0-9]{2}){2}\ [0-9]{2}(:[0-9]{2}){2} ]] && break;
done | tac
This snippet reads the file "logfile.log" backwards, printing each line until a
timestamp -- with the format you gave -- is found. Since the lines have been
printed backwards, another call to `tac` reorders the output.
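Since the question also invited a Python solution, here is an equivalent sketch. It reads the whole file into memory, so it is less efficient than `tac` on huge logs; `last_entry` is a hypothetical helper name:

```python
import re

# matches the timestamp format used as the entry separator
TS = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def last_entry(lines):
    # remember where the most recent timestamped entry starts
    start = 0
    for i, line in enumerate(lines):
        if TS.match(line):
            start = i
    return "".join(lines[start:])

log = [
    "2013-01-10 12:00:00\n", "Notes:\n", "old entry\n",
    "2013-01-10 13:49:55\n", "Notes:\n", "FAILED (failures=1)\n",
]
print(last_entry(log))
```

With a real file, pass `open("logfile.log").readlines()` instead of the sample list.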
|
Two forward slashes in Python
Question: I came across this sample of code from a [radix
sort](http://en.wikipedia.org/wiki/Radix_sort#Example_in_Python):
def getDigit(num, base, digit_num):
# pulls the selected digit
return (num // base ** digit_num) % base
What does the '`//`' do in Python?
Answer: `//` is the integer division operator.
In Python 3 the ordinary `/` division operator returns floating point values
even if both operands are integers, so a different operator is needed for
integer division. This is different from Python 2 where `/` performed integer
division if both operands where integers and floating point division if at
least one of the operands was a floating point value.
The `//` operator was first introduced for forward-compatibility in Python 2.2
when it was decided that Python 3 should have this new ability. Together with
the ability to enable the Python 3 behaviour via `from __future__ import
division` (also introduced in Python 2.2), this enables you to write Python
3-compatible code in Python 2.
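A quick check of both the radix-sort helper from the question (renamed `get_digit` here) and the operator's flooring behaviour:

```python
def get_digit(num, base, digit_num):
    # same expression as the radix-sort helper above
    return (num // base ** digit_num) % base

print(get_digit(375, 10, 0))  # 5
print(get_digit(375, 10, 1))  # 7
print(get_digit(375, 10, 2))  # 3

# // floors the result, which matters for negative operands
print(7 // 2, -7 // 2)    # 3 -4
print(7.0 // 2)           # 3.0 -- also defined for floats
```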
|
Segfault on calling standard windows .dll from python ctypes with wine
Question: I'm trying to call some function from Kernel32.dll in my Python script running
on Linux. As Johannes Weiß pointed out in [How to call Wine dll from python on
Linux?](http://stackoverflow.com/questions/4052690/how-to-call-wine-dll-from-
python-on-linux/4053954#4053954) I'm loading **kernel32.dll.so** library via
**ctypes.cdll.LoadLibrary()** and it loads fine. I can see kernel32 loaded, and
it even has a **GetLastError()** function inside. However, whenever I try to
call the function, I get a segfault.
import ctypes
kernel32 = ctypes.cdll.LoadLibrary('/usr/lib/i386-linux-gnu/wine/kernel32.dll.so')
print kernel32
# <CDLL '/usr/lib/i386-linux-gnu/wine/kernel32.dll.so', handle 8843c10 at b7412e8c>
print kernel32.GetLastError
# <_FuncPtr object at 0xb740b094>
gle = kernel32.GetLastError
# OK
gle_result = gle()
# fails with
# Segmentation fault (core dumped)
print gle_result
First I was thinking about calling-convention differences, but that seems to be
okay after all. I ended up testing the simple GetLastError function without any
params, but I'm still getting a segmentation fault anyway.
My testing system is Ubuntu 12.10, Python 2.7.3 and wine-1.4.1 (everything is
32bit)
**UPD**
I proceeded with my testing and found several functions that I can call via
ctypes without a segfault, for instance Beep() and GetCurrentThread(); many
other functions still give me a segfault. I created a small C application to
test the kernel32.dll.so library without Python, but I got essentially the same
results.
int main(int argc, char **argv)
{
void *lib_handle;
#define LOAD_LIBRARY_AS_DATAFILE 0x00000002
long (*GetCurrentThread)(void);
long (*beep)(long,long);
void (*sleep)(long);
long (*LoadLibraryExA)(char*, long, long);
long x;
char *error;
lib_handle = dlopen("/usr/local/lib/wine/kernel32.dll.so", RTLD_LAZY);
if (!lib_handle)
{
fprintf(stderr, "%s\n", dlerror());
exit(1);
}
// All the functions are loaded e.g. sleep != NULL
GetCurrentThread = dlsym(lib_handle, "GetCurrentThread");
beep = dlsym(lib_handle, "Beep");
LoadLibraryExA = dlsym(lib_handle, "LoadLibraryExA");
sleep = dlsym(lib_handle, "Sleep");
if ((error = dlerror()) != NULL)
{
fprintf(stderr, "%s\n", error);
exit(1);
}
// Works
x = (*GetCurrentThread)();
printf("Val x=%d\n",x);
// Works (no beeping, but no segfault too)
(*beep)(500,500);
// Segfault
(*sleep)(5000);
// Segfault
(*LoadLibraryExA)("/home/ubuntu/test.dll",0,LOAD_LIBRARY_AS_DATAFILE);
printf("The End\n");
dlclose(lib_handle);
return 0;
}
I tried using different calling conventions for the Sleep() function but had no
luck with that either. When I compare the function declarations/implementations
in the Wine sources, they are essentially the same.
Declarations:
HANDLE WINAPI GetCurrentThread(void) // http://source.winehq.org/source/dlls/kernel32/thread.c#L573
BOOL WINAPI Beep( DWORD dwFreq, DWORD dwDur ) // http://source.winehq.org/source/dlls/kernel32/console.c#L354
HMODULE WINAPI DECLSPEC_HOTPATCH LoadLibraryExA(LPCSTR libname, HANDLE hfile, DWORD flags) // http://source.winehq.org/source/dlls/kernel32/module.c#L928
VOID WINAPI DECLSPEC_HOTPATCH Sleep( DWORD timeout ) // http://source.winehq.org/source/dlls/kernel32/sync.c#L95
WINAPI is defined to be __stdcall
However, some of them work and some don't. As I understand it, these sources are
for the kernel32.dll file, and kernel32.dll.so is some kind of proxy that is
supposed to provide access to kernel32.dll for Linux code. Probably I need to
find the exact sources of the kernel32.dll.so file and take a look at the
declarations.
Is there any tool I can use to take a look inside a .so file and find out what
functions and calling conventions are used?
Answer: The simplest way to examine a DLL is to use the `nm` command, i.e.
$ nm kernel32.dll.so | grep GetLastError
7b86aae0 T _GetLastError
As others have pointed out, the default calling convention for Windows C DLLs
is `stdcall`. It has nothing to do with using Python. On the Windows platform,
`ctypes.windll` is available.
However, **I am not even sure what you are trying to do is at all possible**.
Wine is a full-blown Windows emulator and it is safe to guess that at least
you would have to start it with `wine_init` before loading any other
functions. The Windows API probably has some state (set when Windows boots).
The easiest way to continue is probably to install a Windows version of Python
under Wine and run your script from there.
|
How to remove a resource (Google Docs document) from the root collection (folder)
Question: I am using the python gdata library to copy and move a google Docs file in a
specific folder. I am using the MoveResource method from DocsClient, the file
appears in the desired folder, but also stays in the root folder.
I cannot figure out how to remove it from the root folder?!
import gdata.docs.service
import gdata.docs.client
import gdata.docs.data

doc_service = gdata.docs.client.DocsClient()
doc_service.ClientLogin('myId', 'myPassword', 'myAppName')

# this is my source document
doc = doc_service.GetResourceById('ABC123')

# extracting the folder of my source document to move the copy in the same folder
for parent in doc.InCollections():
    if parent.title == 'myFilter':
        destination_folder_id = parent.href.split('%3A')[1]

destination_folder = doc_service.GetResourceById(destination_folder_id)

# making a copy of my source
newdoc = doc_service.CopyResource(doc, 'Test Python')

# moving my copy to the same folder as my source
# but the copy also stays in the root folder!
moveddoc = doc_service.MoveResource(newdoc, destination_folder)
Answer: You can remove the Google Docs file from the root collection using this code:
doc_service.Delete('https://docs.google.com/feeds/default/private/full/folder%3Aroot/contents/' + doc.resource_id.text, force=True)
|
python MySQLdb wont insert into db
Question: Hi, I have an issue with inserting info into my db. It doesn't give
off an error. The code is here:
import MySQLdb as m

def Room(room):
    db = m.connect("localhost","root","password","rooms")
    cur = db.cursor()
    cur.execute('INSERT INTO rooms (name) VALUES("%s");' % (room))

def getRoomDb():
    db = m.connect("localhost","root","password","rooms")
    cur = db.cursor()
    cur.execute("SELECT * FROM rooms;")
    result = cur.fetchall()
    return result

print getRoomDb()
After I run `Room("roomname")` it outputs like it should, but nothing actually
gets put into the db.
Answer: You didn't call `commit()` for the transaction in which you executed the
INSERT.
In Python, the default action is to roll back work, unless you explicitly
commit.
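The same DB-API commit rule can be demonstrated end-to-end with the stdlib `sqlite3` module (used here instead of MySQLdb so the sketch runs without a server; `add_room`/`get_rooms` and the file name are sketch names, not the question's code). The row inserted without `commit()` is lost, exactly like the question's INSERT:

```python
import os
import sqlite3

DB = "rooms_demo.db"
if os.path.exists(DB):
    os.remove(DB)

def add_room(name, commit):
    db = sqlite3.connect(DB)
    cur = db.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS rooms (name TEXT)")
    # parameterized query instead of string interpolation
    cur.execute("INSERT INTO rooms (name) VALUES (?)", (name,))
    if commit:
        db.commit()   # the step missing from the question's code
    db.close()        # without a prior commit, the pending INSERT is rolled back

def get_rooms():
    db = sqlite3.connect(DB)
    rows = db.cursor().execute("SELECT name FROM rooms").fetchall()
    db.close()
    return [r[0] for r in rows]

add_room("kitchen", commit=False)  # silently lost, like the question's INSERT
add_room("lobby", commit=True)     # persists
print(get_rooms())  # ['lobby']
```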
See also:
* [Python MySQL - SELECTs work but not DELETEs?](http://stackoverflow.com/questions/1451782/python-mysql-selects-work-but-not-deletes/)
* [Python MySQLdb update query fails](http://stackoverflow.com/questions/1028671/python-mysqldb-update-query-fails)
* <http://mysql-python.sourceforge.net/FAQ.html#my-data-disappeared-or-won-t-go-away>
|
How to get query string from Python script (using PyISAPI on IIS)
Question: I've done everything from this blog <http://geographika.co.uk/setting-up-
python-on-iis7>. I'm using only Python 2.6 (not Django) and want to get query
string parameters in my python script.
For example, during the execution of
`http://localhost/MyApp/program.py?par=test`
how to get value test in my script program.py:
from Http import *

def Request():
    Header("Content-type: text/html")
    Write(get_param("par"))
Thanks.
Answer: Got it
from Http import *

def Request():
    Header("Content-type: text/html")
    Write(Env.QUERY_STRING)
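`Env.QUERY_STRING` is the raw string; to get individual parameters, the stdlib query parser can be layered on top (a sketch; the PyISAPI `Http` module itself is left out, and the compatibility import covers both Python 2.6 and 3):

```python
try:
    from urlparse import parse_qs        # Python 2.6+
except ImportError:
    from urllib.parse import parse_qs    # Python 3

# parse_qs turns the raw query string into a dict of lists
params = parse_qs("par=test&x=1")
print(params["par"][0])  # test
print(params["x"][0])    # 1
```

Inside the handler, `parse_qs(Env.QUERY_STRING)["par"][0]` would then give the `test` value from the example URL.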
|
Python: Iterating over two lists and replacing elements in one list1 with the element from list2
Question: I have two lists of strings. In list1, which contains around 1000 string
elements, there is a string "Date" that occurs randomly, immediately followed
by a string that contains a particular date: "17/09/2011". This happens about
70 times. In list2 I have around 80 dates, as strings.
Question : I want to write a script that loops through both lists
simultaneously, and replaces the dates in list1, with the dates in list2, in
order. So, obviously, the first 70 dates of list2 will replace the 70
occurrences of dates in list1. Afterwards I want to write the modified
list1 to a .txt file.
I tried this, but I am totally stuck. I am super noob at Python.
def pairwise(lst):
    """ yield item i and item i+1 in lst. e.g.
    (lst[0], lst[1]), (lst[1], lst[2]), ..., (lst[-1], None)
    """
    if not lst: return
    #yield None, lst[0]
    for i in range(len(lst)-1):
        yield lst[i], lst[i+1]
    yield lst[-1], None

for line in file:
    list1.append(line.strip())

for i,j in pairwise(list1):
    for k in list2:
        if i == "Date":
            list1."replace"(j) # Dont know what to do. And i know this double for looping is wrong also.
Answer: Maybe something like this (if there are no 'date' strings without a following
date):
iter2 = iter(list2)
for idx in (idx for idx, s in enumerate(list1) if s == 'Date'):
    list1[idx + 1] = next(iter2)

with open('out.txt', 'w') as f:
    f.write('{}'.format(list1))
@user1998510, here is a bit of explanation:
`enumerate` takes a list and yields tuples of the form (i, i-th element of the
list). In the generator expression (the `(x for y in z if a)` part) I unpack
each tuple into the local variables `idx` and `s`. The generator yields only
the index, because the list item itself (`s`) is needed only for the filter
`if s == 'Date'`. The `for` loop then iterates over this generator, assigning
each yielded value to `idx` (a separate `idx` from the inner one, since
generator expressions in Python no longer leak their local variables). In
short: the generator yields the indices of all elements equal to 'Date', and
for each such index the next date from the second list is assigned to position
`idx + 1` of the old list.
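A minimal worked run of the snippet above, with made-up dates:

```python
list1 = ["foo", "Date", "17/09/2011", "bar", "Date", "01/01/2010"]
list2 = ["01/02/2013", "02/02/2013"]

iter2 = iter(list2)
for idx in (idx for idx, s in enumerate(list1) if s == "Date"):
    list1[idx + 1] = next(iter2)

print(list1)
# ['foo', 'Date', '01/02/2013', 'bar', 'Date', '02/02/2013']
```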
|
coverage on a frozen executable
Question: Is there any way to run coverage against an executable built with pyinstaller?
I tried just running it like it was a Python script, but it didn't like the
executable file as input (I didn't really expect it to work). I suspect the
answer is no, there is no easy way to run coverage against a built executable
(this is a Windows .exe).
the coverage package I am using is just the normal coverage package that you
get with "easy_install coverage" from nedbatchelder.com (
<http://nedbatchelder.com/code/coverage/> )
Answer: This isn't a fully formulated answer but what I have found so far.
From my understanding of how pyinstaller works, a binary is constructed from a
small C program that embeds a Python interpreter and bootstraps loading a
script. The PyInstaller-constructed EXE includes an archive after the end of
the actual binary that contains the resources for the python code. This is
explained here
<http://www.pyinstaller.org/export/develop/project/doc/Manual.html#pyinstaller-
archives>.
There is iu.py from Pyinstaller/loader/iu.py
[Docs](http://www.pyinstaller.org/export/develop/project/doc/Manual.html#iu-
py-an-imputil-replacement). You should be able to create an import hook to
import from the binary. Googling for pyinstaller disassembler found
<https://bitbucket.org/Trundle/exetractor/src/00df9ce00e1a/exetractor/pyinstaller.py>
that looks like it might extract necessary parts.
The other part of this is that all of the resources in the binary archive will
be compiled python code. Most likely, coverage.py will give you unhelpful
output the same way as when hitting any other compiled module when running
under normal conditions.
|
Using matplotlib in GAE
Question: My tags and title quite clearly state my problem. I want to use matplotlib to
create real-time plots in Google App Engine. I've read the
[documentation](https://developers.google.com/appengine/docs/python/tools/libraries27)
and searched on SO and Google. I found a post, pointing to [this working
demo](http://gae-matplotlib-demo.appspot.com/). But when I try it on my own,
it doesn't work for me.
I created a simple application, consisting only of a handler-script
**hello_world.py**
import numpy as np
import os
import sys
import cStringIO
print "Content-type: image/png\n"
os.environ["MATPLOTLIBDATA"] = os.getcwdu() # own matplotlib data
os.environ["MPLCONFIGDIR"] = os.getcwdu() # own matplotlibrc
import matplotlib.pyplot as plt
plt.plot(np.random.random((20))) #imshow(np.random.randint((10,10)))
sio = cStringIO.StringIO()
plt.savefig(sio, format="png")
sys.stdout.write(sio.getvalue())
and a a configuration file **app.yaml**
application: helloworldtak
version: 1
runtime: python27
api_version: 1
threadsafe: no
handlers:
- url: /.*
script: hello_world.py
libraries:
- name: numpy
version: "latest"
- name: matplotlib
version: "latest"
I want to plot something and then return the content as png-image. This
procedure works fine for a normal web-server like Apache or IIS, I did this a
million times.
The problem is rather: when I run my script locally within the development
server, I get an error that is probably due to my MPL version 1.1.1, which is
only "experimental" in GAE. But when I deploy my app to GAE, I get a
completely different, unrelated error.
Looking at the logs, the traceback is:
Traceback (most recent call last):
File "/base/data/home/apps/s~helloworldtak/1.364765672279579252/hello_world.py", line 16, in <module>
import matplotlib.pyplot as plt
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/pyplot.py", line 23, in <module>
from matplotlib.figure import Figure, figaspect
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/figure.py", line 18, in <module>
from axes import Axes, SubplotBase, subplot_class_factory
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/axes.py", line 14, in <module>
import matplotlib.axis as maxis
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/axis.py", line 10, in <module>
import matplotlib.font_manager as font_manager
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/font_manager.py", line 1324, in <module>
_rebuild()
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/font_manager.py", line 1278, in _rebuild
fontManager = FontManager()
File "/python27_runtime/python27_lib/versions/third_party/matplotlib-1.1.1/matplotlib/font_manager.py", line 995, in __init__
self.defaultFont['ttf'] = self.ttffiles[0]
IndexError: list index out of range
It seems to have something to do with the font cache of MPL. I read in the docs
that caching and file access are among the problems with MPL on GAE, but
obviously the import works for others.
What am I doing wrong?
**Edit** Based on the answer below, I changed my code to be
import numpy as np
import cStringIO
import matplotlib.pyplot as plt
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        plt.plot(np.random.random((20)), "r-")
        sio = cStringIO.StringIO()
        plt.savefig(sio, format="png")
        self.response.headers['Content-Type'] = 'image/png'
        self.response.out.write(sio.getvalue())

app = webapp2.WSGIApplication([('/', MainPage)],
                              debug=True)
and like this, it's working.
Answer: I'm not familiar with the `sys` module approach. To answer the question,
I prefer using webapp2. This is a working handler:
import webapp2
import StringIO
import numpy as np
import matplotlib.pyplot as plt

class MainPage(webapp2.RequestHandler):
    def get(self):
        plt.plot(np.random.random((20)))
        sio = StringIO.StringIO()
        plt.savefig(sio, format="png")
        img_b64 = sio.getvalue().encode("base64").strip()
        plt.clf()
        sio.close()
        self.response.write("""<html><body>""")
        self.response.write("<img src='data:image/png;base64,%s'/>" % img_b64)
        self.response.write("""</body> </html>""")

app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
Alternatively, you could write the `sio.getvalue()` to the blobstore with the
Files API and use the `get_serving_url()` method of the Images API to avoid
encoding in base64.
|
gaussian fit with scipy.optimize.curve_fit in python with wrong results
Question: I am having some trouble fitting a Gaussian to data. I think the
problem is that most of the elements are close to zero, and there are not many
points to actually be fitted. But in any case, I think they make a good dataset
to fit, and I don't get what is confusing Python. Here is the program; I have
also added a line to plot the data so you can see what I am trying to fit:
#Gaussian function
def gauss_function(x, a, x0, sigma):
    return a*np.exp(-(x-x0)**2/(2*sigma**2))

# program
from scipy.optimize import curve_fit

x = np.arange(0,21.,0.2)
# sorry about these data!
y = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.2888599818864958e-275, 1.0099964933708256e-225, 4.9869496866403137e-184, 4.4182929795060327e-149, 7.2953754336628778e-120, 1.6214815763354974e-95, 2.5845990267696154e-75, 1.2195550372375896e-58, 5.6756631456872126e-45, 7.2520963306599953e-34, 6.0926453402093181e-25, 7.1075523112494745e-18, 2.1895584709541657e-12, 3.1040093615952226e-08, 3.2818874974043519e-05, 0.0039462011337049593, 0.077653596114448178, 0.33645159419151383, 0.40139213808285212, 0.15616093582013874, 0.0228751827752081, 0.0014423440677009125, 4.4400754532288282e-05, 7.4939123408714068e-07, 7.698340466102054e-09, 5.2805658851032628e-11, 2.6233358880470556e-13, 1.0131613609937094e-15, 3.234727006243684e-18, 9.0031014316344088e-21, 2.2867065482392331e-23, 5.5126221075296919e-26, 1.3045106781768978e-28, 3.1185031969890313e-31, 7.7170036365830092e-34, 2.0179753504732056e-36, 5.6739187799428708e-39, 1.7403776988666581e-41, 5.8939645426573027e-44, 2.2255784749636281e-46, 9.4448944519959299e-49, 4.5331936383388069e-51, 2.4727435506007072e-53, 1.5385048936078214e-55, 1.094651071873419e-57, 8.9211199390945735e-60, 8.3347561634783632e-62, 8.928140776588251e-64, 1.0960564546383266e-65, 1.5406342485015278e-67, 2.4760905399114866e-69, 4.5423744881977258e-71, 9.4921949220625905e-73, 2.2543765002199549e-74, 6.0698995872666723e-76, 1.8478996852922248e-77, 6.3431644488676084e-79, 0.0, 0.0, 0.0, 0.0]
plot(x,y) #Plot the curve, the gaussian is quite clear
plot(x,y,'ok') #Overplot the dots
# Try to fit the result
popt, pcov = curve_fit(gauss_function, x, y)
The problem is that the results for popt is
print popt
array([ 7.39717176e-10, 1.00000000e+00, 1.00000000e+00])
Any hint on why this could be happening?
Thanks!
Answer: Your problem is with the initial parameters of `curve_fit`. By default,
if no other information is given, it will start with an array of ones, but this
obviously leads to a radically wrong result. This can be corrected simply by
giving a reasonable starting vector. To do this, I start from the estimated
mean and standard deviation of your dataset:
#estimate mean and standard deviation as weighted moments of the data
mean = sum(x * y) / sum(y)
sigma = (sum(y * (x - mean)**2) / sum(y)) ** 0.5

#do the fit!
popt, pcov = curve_fit(gauss_function, x, y, p0=[1, mean, sigma])
#plot the fit results
plot(x,gauss_function(x, *popt))
#compare with the given data
plot(x,y,'ok')
This will perfectly approximate your results. Remember that curve fitting in
general cannot work unless you start from a good point (inside the convergence
basin, to be clear), and this doesn't depend on the implementation. Never do a
blind fit when you can use your knowledge!
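The moment formulas only give good starting values when normalised by `sum(y)`. A quick sanity check on synthetic data shaped like the question's (numpy only, so it runs without the fit itself; the peak parameters 0.4, 12.4, 0.5 are made up to resemble the plotted data):

```python
import numpy as np

def gauss_function(x, a, x0, sigma):
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))

# synthetic data: a narrow peak near x = 12.4
x = np.linspace(0, 20, 401)
y = gauss_function(x, 0.4, 12.4, 0.5)

# weighted-moment estimates used as the p0 starting vector
mean = np.sum(x * y) / np.sum(y)
sigma = np.sqrt(np.sum(y * (x - mean)**2) / np.sum(y))
print(round(mean, 2), round(sigma, 2))  # close to 12.4 and 0.5
```

Feeding these as `p0=[y.max(), mean, sigma]` puts `curve_fit` well inside the convergence basin.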
|
Converting numpy array to rpy2 matrix (forecast package, xreg parameter)
Question: I don't seem to be able to get the line `fit = forecast.Arima(series,
order=order, xreg=r_exog_train)` to work. It does work without the xreg
parameter, so I'm pretty sure the problem is in the numpy-array-to-rpy2-matrix
conversion.
Does anyone see a mistake here? Thanks!
This is the error I get (parts are in German; the message says the length of
'dimnames' [2] does not equal the array extent):
Fehler in `colnames<-`(`*tmp*`, value = if (ncol(xreg) == 1) nmxreg else paste(n
mxreg, :
Länge von 'dimnames' [2] ungleich der Arrayausdehnung
Traceback (most recent call last):
File "r.py", line 58, in <module>
res = do_forecast(series, horizon=horizon, exog=(exog_train, exog_test))
File "r.py", line 39, in do_forecast
fit = forecast.Arima(series, order=order, xreg=exog_train)
File "C:\Python27\lib\site-packages\rpy2\robjects\functions.py", line 86, in _
_call__
return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
File "C:\Python27\lib\site-packages\rpy2\robjects\functions.py", line 35, in _
_call__
res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Fehler in `colnames<-`(`*tmp*`, value = if (ncol(
xreg) == 1) nmxreg else paste(nmxreg, :
Lõnge von 'dimnames' [2] ungleich der Arrayausdehnung
Here is the code example:
# Python wrapper for R forecast stuff
import numpy as np
print 'Start importing R.'
from rpy2 import robjects
from rpy2.robjects.packages import importr
from rpy2.robjects.numpy2ri import numpy2ri
robjects.conversion.py2ri = numpy2ri
base = importr('base')
forecast = importr('forecast')
stats = importr('stats')
ts = robjects.r['ts']
print 'Finished importing R.'
def nparray2rmatrix(x):
    nr, nc = x.shape
    xvec = robjects.FloatVector(x.transpose().reshape((x.size)))
    xr = robjects.r.matrix(xvec, nrow=nr, ncol=nc)
    return xr

def nparray2rmatrix_alternative(x):
    nr, nc = x.shape
    xvec = robjects.FloatVector(x.reshape((x.size)))
    xr = robjects.r.matrix(xvec, nrow=nr, ncol=nc, byrow=True)
    return xr

def do_forecast(series, frequency=None, horizon=30, summary=False, exog=None):
    if frequency:
        series = ts(series, frequency=frequency)
    else:
        series = ts(series)
    if exog:
        exog_train, exog_test = exog
        r_exog_train = nparray2rmatrix(exog_train)
        r_exog_test = nparray2rmatrix(exog_test)
        order = robjects.IntVector([1, 0, 2])  # c(1,0,2) # TODO find right model
        fit = forecast.Arima(series, order=order, xreg=r_exog_train)
        forecast_result = forecast.forecast(fit, h=horizon, xreg=r_exog_test)
    else:
        # fit = forecast.auto_arima(series)
        #robjects.r.plot(series)
        fit = stats.HoltWinters(series)
        forecast_result = forecast.forecast(fit, h=horizon)
    if summary:
        modsummary = base.summary(fit)
        print modsummary
    forecast_values = np.array(list(forecast_result.rx2('mean')))
    return forecast_values
# Example
series = np.arange(100)
exog_train = np.ones((100,2))
exog_test = np.ones((100,2))
horizon = 100
res = do_forecast(series, horizon=horizon, exog=(exog_train, exog_test))
print res
Answer: I got it working by explicitly stating `dimnames`, as in `xr =
robjects.r.matrix(xvec, nrow=nr, ncol=nc, dimnames=dimnames)`. Apparently the
forecast package requires this.
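Independently of R, the column-major flattening done by the question's `nparray2rmatrix` can be checked with numpy alone (R matrices are filled column by column):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)           # [[0 1 2]
                                         #  [3 4 5]]
colmajor = x.transpose().reshape(x.size)
print(list(colmajor))  # [0, 3, 1, 4, 2, 5] -- the column-by-column order R uses
```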
|
Why is my python/numpy example faster than pure C implementation?
Question: I have pretty much the same code in python and C. Python example:
import numpy

nbr_values = 8192
n_iter = 100000
a = numpy.ones(nbr_values).astype(numpy.float32)
for i in range(n_iter):
    a = numpy.sin(a)
C example:
#include <stdio.h>
#include <math.h>

int main(void)
{
    int i, j;
    int nbr_values = 8192;
    int n_iter = 100000;
    double x;
    for (j = 0; j < nbr_values; j++){
        x = 1;
        for (i=0; i<n_iter; i++)
            x = sin(x);
    }
    return 0;
}
Something strange happened when I ran both examples:
$ time python numpy_test.py
real 0m5.967s
user 0m5.932s
sys 0m0.012s
$ g++ sin.c
$ time ./a.out
real 0m13.371s
user 0m13.301s
sys 0m0.008s
It looks like Python/numpy is twice as fast as C. Is there any mistake in the
experiment above? How can you explain it?
P.S. I have Ubuntu 12.04, 8G ram, core i5 btw
Answer: First, turn on optimization. Secondly, subtleties matter. Your C code is
definitely not 'basically the same'.
Here is equivalent C code:
sinary2.c:
#include <math.h>
#include <stdlib.h>
float *sin_array(const float *input, size_t elements)
{
int i = 0;
float *output = malloc(sizeof(float) * elements);
for (i = 0; i < elements; ++i) {
output[i] = sin(input[i]);
}
return output;
}
sinary.c:
#include <math.h>
#include <stdlib.h>
extern float *sin_array(const float *input, size_t elements);
int main(void)
{
int i;
int nbr_values = 8192;
int n_iter = 100000;
float *x = malloc(sizeof(float) * nbr_values);
for (i = 0; i < nbr_values; ++i) {
x[i] = 1;
}
for (i=0; i<n_iter; i++) {
float *newary = sin_array(x, nbr_values);
free(x);
x = newary;
}
return 0;
}
Results:
$ time python foo.py
real 0m5.986s
user 0m5.783s
sys 0m0.050s
$ gcc -O3 -ffast-math sinary.c sinary2.c -lm
$ time ./a.out
real 0m5.204s
user 0m4.995s
sys 0m0.208s
The reason the program has to be split in two is to fool the optimizer a bit.
Otherwise it will realize that the whole loop has no effect at all and
optimize it out. Putting things in two files doesn't give the compiler
visibility into the possible side-effects of `sin_array` when it's compiling
`main` and so it has to assume that it actually has some and repeatedly call
it.
Your original program is not at all equivalent for several reasons. One is
that you have nested loops in the C version and you don't in Python. Another
is that you are working with arrays of values in the Python version and not in
the C version. Another is that you are creating and discarding arrays in the
Python version and not in the C version. And lastly you are using `float` in
the Python version and `double` in the C version.
Simply calling the `sin` function the appropriate number of times does not
make for an equivalent test.
Also, the optimizer is a really big deal for C. Comparing C code on which the
optimizer hasn't been used to anything else when you're wondering about a
speed comparison is the wrong thing to do. Of course, you also need to be
mindful. The C optimizer is very sophisticated and if you're testing something
that really doesn't do anything, the C optimizer might well notice this fact
and simply not do anything at all, resulting in a program that's ridiculously
fast.
|
Load in images from file python pygame
Question: I'm working on a pygame project, and in my main directory I have a folder
called "Sprites" for example. Is there a way using python to load all the
images in that file into a list? I know how to put things in list using a for
loop, I'm just not sure if there is a way to go through a folder's images and
load them into a pygame sprite. I also know that if the images were all named
the same with a number following them, I could concatenate a string, but what
if they had completely different names?
Answer: Use the os module to list the files of a directory; it will return a list
import os
fileList = os.listdir("path")
More details: <http://docs.python.org/2/library/os.html#os.listdir>
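A minimal sketch of turning that listing into a list of pygame surfaces, regardless of how the files are named (the `Sprites` directory name and the extension set are assumptions):

```python
import os

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}

def list_image_files(path):
    # Keep only files whose extension looks like an image,
    # whatever their base names are.
    return sorted(f for f in os.listdir(path)
                  if os.path.splitext(f)[1].lower() in IMAGE_EXTS)

# Loading them into pygame surfaces (requires an initialized pygame display):
# import pygame
# sprites = [pygame.image.load(os.path.join("Sprites", f))
#            for f in list_image_files("Sprites")]
```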
|
Creating thumbnails from video files with Python on my website
Question: I need to create thumbnails for a video file which users uploaded to web site
running django.
How would I go about this...which function can do this ? I can display
thumbnail images but I cant capture a view from a video. Thanks for helps.
Answer: Videos are tricky business due to the vastness of codecs, containers, etc. I
would recommend using [`ffmpeg`](http://ffmpeg.org/) due to its vast format
support, and calling it from Python using the
[subprocess](http://docs.python.org/2/library/subprocess.html) module.
Following the [first](http://blog.prashanthellina.com/2008/03/29/creating-
video-thumbnails-using-ffmpeg/) Google hit for `ffmpeg video thumbnail`, you
can do it like:
from subprocess import check_output
check_output('ffmpeg -itsoffset -4 -i test.avi -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 test.jpg', shell=True)
Obviously you have to change the command string but this should get you
started.
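If the video filename comes from user uploads, it is safer to pass `ffmpeg` an argument list rather than a shell string, which avoids `shell=True` and its quoting pitfalls. A sketch of the same invocation (the `thumbnail_cmd` helper name and its defaults are made up for illustration):

```python
from subprocess import check_output

def thumbnail_cmd(video_path, out_path, offset=4, size="320x240"):
    # Build the same ffmpeg invocation as an argument list, so the
    # filename is never interpreted by a shell.
    return ["ffmpeg", "-itsoffset", str(-offset), "-i", video_path,
            "-vcodec", "mjpeg", "-vframes", "1", "-an",
            "-f", "rawvideo", "-s", size, out_path]

# check_output(thumbnail_cmd("test.avi", "test.jpg"))
```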
|
How to securely maintain a persistent SSH connection in PHP?
Question: I am currently working on a VPS panel that uses a master-slave model. One
master server runs a panel written in PHP, and manages multiple slave servers
via SSH. The slave servers are accessed via a limited account, that can sudo
to specific server administration-related commands, and all interaction is
logged in a directory that the account itself does not have access to.
I am currently using PHP-SSH2, but this approach has a few problems:
* Exit codes are not reliably returned, so all commands have to be executed in a wrapper script that packages up stdout, stderr, and the exit code into a JSON object and returns it via stdout. This script has to exist on every slave server.
* The PHP-SSH2 library does not know the concept of a "custom connection timeout", which means I have to probe a server with fsockopen before trying to use PHP-SSH2 to connect - if I don't do that, an unreachable server may delay the pageload for a minute or more. This is made even worse by the next issue.
* Persistent connections are not possible. This causes absolutely ridiculous pageload times in the panel, especially combined with the previous issue with timeouts.
Right now, I'm trying to solve the last problem primarily.
There are several possible solutions I have run across, but all of them have
an issue of some sort:
* Using PHPSecLib, a pure PHP SSH implementation, and replacing all fsockopen calls with pfsockopen calls. This will make the connections persistent, but it's hackier than I'd like and the security implications of persistent sockets in PHP are unclear.
* Setting up a persistent SSH tunnel from the master server to each slave server, and running a simple daemon (bound to localhost) on each slave server that runs whatever it's told to. This is a problem for two reasons. First off it introduces the need for a daemon on the slave servers, which is something I'd rather avoid. The second issue is that if someone were to compromise a limited account on a slave server, they could still run certain system commands simply by connecting to the "command daemon", even if they would not have access to those commands from their own shell. This is a problem.
* Running a daemon on the master server that manages persistent SSH connections to slave servers on behalf of the panel. This would involve writing an SSH client in Python (this is the only suitable language I am familiar with), and would probably come down to using paramiko. Since the documentation of paramiko is poor, this isn't a very attractive option and may even cause security issues because it isn't entirely clear to me how things are _supposed_ to be used in paramiko.
The following are not an option:
* Switching to a different language for the panel itself. I am writing the panel in PHP because that is the language I am most familiar with, and I'm aware of the quirks and potential issues I might encounter. Writing an important public-facing project like this in a language I am not familiar with would be a bad idea.
* Using Twisted for the third "possible solution". Twisted is a very large and convoluted dependency, and the documentation seems even worse than that of paramiko.
* Running a HTTPd or otherwise non-SSH public-facing daemon on the slave servers.
In practice, I am seeing pageload times of sometimes over a minute when
multiple servers have to be contacted for a pageload. This is obviously not
acceptable for a VPS panel.
My goal is to have some kind of implementation that avoids the connection
overhead that is introduced when using PHP-SSH2. What would be the best way to
do this, in a secure manner, while introducing a minimal amount of
dependencies on the slave servers?
Answer: You could use autossh to create reverse (port-forwarding) tunnels, and let
your PHP application talk to those forwarded ports. If the SSH connection
fails, autossh will keep trying to recreate it; your PHP app will simply fail
to connect to the forwarded port instead of hanging until a timeout.
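A sketch of what such a persistent tunnel could look like as a startup config fragment (the port numbers, username, and hostname are made up); run on the master, it keeps a local port forwarded to the slave's SSH port so the panel connects to localhost instead of dialing the slave directly:

```shell
# On the master: local port 2201 -> slave1's SSH port.
# -M 0 disables autossh's monitor port in favor of SSH keepalives.
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -L 2201:localhost:22 paneluser@slave1.example.com
```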
|