Data append in a list using the previous data in Python
Question: I need to create a list in Python. It is a little complicated. The list will
contain items which are appended according to the previous value. For example, suppose
that my list contains
x : 11 (key, value pair)
y : 5
z : 6
if I want to add another "x" item with value "4", it should detect the previous
value "11" of "x" and record "x : 15", the sum with the previous one. I thought
that I might use linked lists but I could not figure
it out. Can you just provide me with other methods, data structures or code
for this purpose?
Answer: Using the
[`defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict)
and calling a function:
from collections import defaultdict

def add_item(d, key, value):
    d[key] += value

d = defaultdict(int)
add_item(d, 'x', 11)
add_item(d, 'x', 4)
print d
>>>
defaultdict(<type 'int'>, {'x': 15})
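If you would rather avoid the import, a plain dict with `dict.get` does the same thing; a minimal sketch of the same idea:

d = {}

def add_item(d, key, value):
    d[key] = d.get(key, 0) + value

add_item(d, 'x', 11)
add_item(d, 'x', 4)
print d  # {'x': 15}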
|
peewee: filter select query results from many to many relationship
Question: I have the following code
#!/usr/bin/env python
"""doc"""
import peewee

db = peewee.SqliteDatabase(":memory:")

class BaseModel(peewee.Model):  # pylint: disable=W0232
    """base model"""
    class Meta:  # pylint: disable=C0111,W0232,R0903
        database = db

class Student(BaseModel):
    """doc"""
    name = peewee.CharField()

class Course(BaseModel):
    """doc"""
    name = peewee.CharField()

class StudentCourse(BaseModel):
    """doc"""
    student = peewee.ForeignKeyField(Student)
    course = peewee.ForeignKeyField(Course)

Student.create_table()
Course.create_table()
StudentCourse.create_table()

s1 = Student(name="Student1")
s1.save()
s2 = Student(name="Student2")
s2.save()
s3 = Student(name="Student3")
s3.save()
s4 = Student(name="Student4")
s4.save()

c1 = Course(name="course1")
c1.save()
c2 = Course(name="course2")
c2.save()
c3 = Course(name="course3")
c3.save()

sc21 = StudentCourse(student=s2, course=c1)
sc21.save()
sc22 = StudentCourse(student=s2, course=c2)
sc22.save()
sc23 = StudentCourse(student=s2, course=c3)
sc23.save()
sc31 = StudentCourse(student=s3, course=c1)
sc31.save()
sc32 = StudentCourse(student=s3, course=c2)
sc32.save()
sc41 = StudentCourse(student=s4, course=c1)
sc41.save()

query = (
    StudentCourse
    .select(StudentCourse, Student, Course)
    .join(Course)
    .switch(StudentCourse)
    .join(Student)
    .order_by(Student.name))

last = None
for student_course in query:
    student = student_course.student
    if student != last:
        last = student
        print "Student: %s" % student.name
    print " - %s" % student_course.course.name
that produces this output:
Student: Student2
- course1
- course2
- course3
Student: Student3
- course1
- course2
Student: Student4
- course1
This is basically the example from [peewee
docs](http://peewee.readthedocs.org/en/latest/peewee/models.html#implementing-
many-to-many)
I need to filter the results and pick only those students who have all the
courses from a given subset:
courses = ["course1", "course2"]
I have tried to use the **IN** operator and modified the select query like
this:
query = (
    StudentCourse
    .select(StudentCourse, Student, Course)
    .join(Course)
    .switch(StudentCourse)
    .join(Student)
    .where(Course.name << courses)
    .order_by(Student.name))
But it produces a wrong result, which means I'm doing it wrong:
Student: Student2
- course1
- course2
Student: Student3
- course1
- course2
Student: Student4
- course1
There are two wrong things here:
1. **course3** for **Student2** is not printed, I need it. I assume this is because **course3** isn't in **courses** list.
2. **Student4** gets selected, but it shouldn't, because **Student4** doesn't have both **course1** and **course2** , yet **course1** is **IN** the **courses** list.
Is it possible to achieve what I want in just one query or I have to iterate
over students, make a **SELECT** query for each of them and match the list of
their courses against my **courses** filter?
My goal is to get this output:
Student: Student2
- course1
- course2
- course3
Student: Student3
- course1
- course2
Answer: This should get you pointed in the right direction: <https://www.simple-
talk.com/sql/t-sql-programming/divided-we-stand-the-sql-of-relational-
division/>
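For reference, that "relational division" can be expressed in peewee itself with a GROUP BY / HAVING COUNT subquery. The following is only a sketch based on the models above (not tested against the original data): it first finds the students enrolled in every course in `courses`, then fetches all of their courses.

from peewee import fn

courses = ["course1", "course2"]

# Subquery: students who have every course named in `courses`.
matching = (Student
            .select(Student.id)
            .join(StudentCourse)
            .join(Course)
            .where(Course.name << courses)
            .group_by(Student.id)
            .having(fn.Count(Course.id) == len(courses)))

# Outer query: all courses, but only for those students.
query = (StudentCourse
         .select(StudentCourse, Student, Course)
         .join(Course)
         .switch(StudentCourse)
         .join(Student)
         .where(StudentCourse.student << matching)
         .order_by(Student.name))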
|
No module named 'x' when reloading with os.execl()
Question: I have a python script that is using the following to restart:
python = sys.executable
os.execl(python, python, * sys.argv)
Most of the time this works fine, but occasionally the restart fails with a "no
module named" error. Examples:
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 49, in <module>
import posixpath as path
File "/usr/lib/python2.7/posixpath.py", line 17, in <module>
import warnings
File "/usr/lib/python2.7/warnings.py", line 6, in <module>
import linecache
ImportError: No module named linecache
* * *
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 49, in <module>
import posixpath as path
File "/usr/lib/python2.7/posixpath.py", line 15, in <module>
import stat
ImportError: No module named stat
Edit: I attempted gc.collect() as suggested by andr0x and this did not work. I
got the same error:
Traceback (most recent call last):
File "/usr/lib/python2.7/site.py", line 68, in <module>
import os
File "/usr/lib/python2.7/os.py", line 49, in <module>
import posixpath as path
ImportError: No module named posixpath
Edit 2: I tried `sys.stdout.flush()` and I'm still getting the same error. I've
noticed I only ever get between 1-3 successful restarts before an
error occurs.
Answer: I believe you are hitting the following bug:
<http://bugs.python.org/issue16981>
As it is unlikely that these modules are actually disappearing, there must be
another error that is really at fault. The bug report lists 'too many open files'
as prone to causing this issue; however, I am unsure whether there are any other
errors that will also trigger it.
I would make sure you are closing any file handles before hitting the restart
code. You can also actually force the garbage collector to run manually with:
import gc
gc.collect()
<http://docs.python.org/2/library/gc.html>
You can try using that before hitting the restart code as well.
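For example, a minimal sketch of what that cleanup might look like just before the restart (the descriptor range used here is an assumption; adjust it to your process):

import gc
import os
import sys

sys.stdout.flush()
sys.stderr.flush()
gc.collect()            # encourage lingering file objects to be finalized
os.closerange(3, 256)   # close any stray descriptors above stdin/stdout/stderr

python = sys.executable
os.execl(python, python, *sys.argv)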
|
Cythonize two small numpy functions, help needed
Question: # The problem
I'm trying to Cythonize two small functions that mostly deal with numpy
ndarrays for some scientific purpose. These two smalls functions are called
millions of times in a genetic algorithm and account for the majority of the
time taken by the algo.
I made some progress on my own and both work nicely, but I get only a tiny
speed improvement (10%). More importantly, cython --annotate shows that the
majority of the code is still going through Python.
# The code
## First function:
The aim of this function is to get back slices of data and it is called
millions of times in an inner nested loop. Depending on the bool in
data[1][1], we either get the slice in the forward or reverse order.
#Ipython notebook magic for cython
%%cython --annotate
import numpy as np
from scipy import signal as scisignal
cimport cython
cimport numpy as np

def get_signal(data):
    #data[0] contains the data structure containing the numpy arrays
    #data[1][0] contains the position to slice
    #data[1][1] contains the orientation to slice, forward = 0, reverse = 1
    cdef int halfwinwidth = 100
    cdef int midpoint = data[1][0]
    cdef int strand = data[1][1]
    cdef int start = midpoint - halfwinwidth
    cdef int end = midpoint + halfwinwidth
    #the arrays we want to slice
    cdef np.ndarray r0 = data[0]['normals_forward']
    cdef np.ndarray r1 = data[0]['normals_reverse']
    cdef np.ndarray r2 = data[0]['normals_combined']
    if strand == 0:
        normals_forward = r0[start:end]
        normals_reverse = r1[start:end]
        normals_combined = r2[start:end]
    else:
        normals_forward = r1[end - 1:start - 1: -1]
        normals_reverse = r0[end - 1:start - 1: -1]
        normals_combined = r2[end - 1:start - 1: -1]
    #return the result as a tuple
    row = (normals_forward,
           normals_reverse,
           normals_combined)
    return row
## Second function
This one gets a list of tuples of numpy arrays, and we want to add up the
arrays element wise, then normalize them and get the integration of the
intersection.
def calculate_signal(list signal):
    cdef int halfwinwidth = 100
    cdef np.ndarray profile_normals_forward = np.zeros(halfwinwidth * 2, dtype='f')
    cdef np.ndarray profile_normals_reverse = np.zeros(halfwinwidth * 2, dtype='f')
    cdef np.ndarray profile_normals_combined = np.zeros(halfwinwidth * 2, dtype='f')
    #b is a tuple of 3 np.ndarrays containing 200 floats
    #here we add them up elementwise
    for b in signal:
        profile_normals_forward += b[0]
        profile_normals_reverse += b[1]
        profile_normals_combined += b[2]
    #normalize the arrays
    cdef int count = len(signal)
    #print "Normalizing to number of elements"
    profile_normals_forward /= count
    profile_normals_reverse /= count
    profile_normals_combined /= count
    intersection_signal = scisignal.detrend(np.fmin(profile_normals_forward, profile_normals_reverse))
    intersection_signal[intersection_signal < 0] = 0
    intersection = np.sum(intersection_signal)
    results = {"intersection": intersection,
               "profile_normals_forward": profile_normals_forward,
               "profile_normals_reverse": profile_normals_reverse,
               "profile_normals_combined": profile_normals_combined,
               }
    return results
Any help is appreciated - I tried using memory views but for some reason the
code got much, much slower.
Answer: After fixing the array cdef (as has been indicated, with the dtype specified),
you should probably put the routine in a cdef function (which will only be
callable by a def function in the same script).
In the declaration of the function, you'll need to provide the type (and the
dimensions if it's a numpy array):
cdef get_signal(numpy.ndarray[DTYPE_t, ndim=3] data):
I'm not sure using a dict is a good idea though. You could make use of numpy's
column or row slices like data[:, 0].
|
Assigning module function returns
Question: I'm relatively new to Python. I'm working on a script that will reassign a
numerical representation of a digit found in a string to its alphabetical
counterpart. Because the function is relatively large in size, I'm adding it
as a module to the script.
I'm having some issues getting the object back from the imported function. I
would like to use the returned object, but it seems to be coming back as None.
I've looked into global variables, but I don't know if that's the right
direction.
Here is what I've been working with:
...
import numassign

for i in CharacterKey:  # i is '0'
    if i.isdigit():
        FoundInt = numassign.NumberAssignment(input = i)
        CharacterKeyList.append(FoundInt)
        raw_input('{} reassigned to {}'.format(FoundInt, i))
    else:
        CharacterKeyList.append(i)
...
Here is the referenced module (numassign):
...
def NumberAssignment(input):
    if input == 0:
        FoundInt = 'Zero'
        return FoundInt
...
Currently returning FoundInt as None.
None reassigned to 0
How can I cross-reference objects from a module function? I'd rather not
clutter up my code with functions if I could import them from a referenced
module.
Answer: You are testing for an integer, but are passing in a string.
They may _print_ the same, but integers and strings never test as equal in
Python. Test for the string instead:
if input == '0':
Because your `if input == 0` fails for `input = '0'`, your function never
reaches a `return` statement, leaving Python to return the default `None`
instead.
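If you would rather keep working with numbers, a hedged alternative is to normalise the argument first and use a lookup table; the `NAMES` mapping below is hypothetical and only covers single digits, extend it as needed:

NAMES = {0: 'Zero', 1: 'One', 2: 'Two', 3: 'Three', 4: 'Four',
         5: 'Five', 6: 'Six', 7: 'Seven', 8: 'Eight', 9: 'Nine'}

def NumberAssignment(input):
    # int() accepts both 0 and '0', so the string/integer mismatch goes away
    return NAMES[int(input)]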
|
Python secure websocket memory consumption
Question: I am writing a web socket server in python. I have tried the approach below
with txws, autobahn, and tornado, all with similar results.
I seem to have massive memory consumption with secure websockets and I cannot
figure out where or why this might be happening. Below is an example in
tornado, but I can provide examples in autobahn or txws.
import tornado.httpserver
import tornado.websocket
import tornado.ioloop
import tornado.web
import json

class AuthHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print 'new connection for auth'

    def on_message(self, message):
        message = json.loads(message)
        client_id = message['client_id']
        if client_id not in app.clients:
            app.clients[client_id] = self
        self.write_message('Agent Recorded')

    def on_close(self):
        print 'auth connection closed'

class MsgHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print 'new connection for msg'

    def on_message(self, message):
        message = json.loads(message)
        to_client = message['client_id']
        if to_client in app.clients:
            app.clients[to_client].write_message('You got a message')

    def on_close(self):
        print 'msg connection closed'

app = tornado.web.Application([
    (r'/auth', AuthHandler),
    (r'/msg', MsgHandler)
])
app.clients = {}

if __name__ == "__main__":
    http_server = tornado.httpserver.HTTPServer(app, ssl_options={
        'certfile': 'tests/keys/server.crt',
        'keyfile': 'tests/keys/server.key'
    })
    http_server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()
After making around 10,000 connections I find I am using around 700MB of
memory with SSL compared to 43MB without, and I never get it back unless I
kill the process. It seems like the problem is closely tied to the amount of
connections made rather than messages sent.
The consumption seems to happen independent of the client (I wrote my own
client and tried other clients).
Are secure websockets really that much more memory intensive than plain
websockets? Or is my server code not implementing them correctly?
Answer: I think the best solution is to use a real web server (nginx, Apache) as a proxy
and let it manage the SSL layer.
|
Complex number troubles with numpy
Question: I'm attempting to translate some matlab code again and I've run into another
pickle. The code itself is very simple, it's just a demonstration of a 4 node
twiddle factor. Here is my attempt:
from numpy import *
from matplotlib import pyplot as plt
x = zeros(4)
x[-1+1] = 0
x[0+1] = 1
x[1+1] = 1
x[2+1] = 0
z = 0 - 1j
W4 = exp(z*2*pi/4)
W0 = W4 ** 0
W1 = W4 ** 1
W2 = W4 ** 2
W3 = W4 ** 3
X = zeros(4)
X[-1+1] = (x[-1+1] + x[1+1]*W0) + W0*(x[0+1] + x[2+1]*W0)
X[0+1] = (x[-1+1] + x[1+1]*W2) + W1*(x[0+1] + x[2+1]*W2)
X[1+1] = (x[-1+1] + x[1+1]*W0) + W2*(x[0+1] + x[2+1]*W0)
X[2+1] = (x[-1+1] + x[1+1]*W2) + W3*(x[0+1] + x[2+1]*W2)
fx = fft.fft(x)
plt.plot(X)
plt.plot(fx, 'ro')
plt.title("Results 4-point hand programmed FFT (blue) and the PYTHON routine (red o)")
plt.show()
Here are the output images. The first one is run with (almost) identical
matlab code, the second one is the image from the python code above. 
 For lines 24 to 27 it gives
me the error "ComplexWarning: Casting complex values to real discards the
imaginary part". Now I'm not used to working with complex numbers in python. I
tried adding a complex component to all the variables, but it gave me a graph
that's way off from the matlab one. Thoughts? If you would like me to post the
matlab code as well, let me know.
Answer: When you create the arrays `x` and `X` you need to make sure they are of complex
data type, i.e.:
x = zeros((4),dtype=complex)
EDIT:
To fix the plot you need to plot both real and imaginary parts:
plt.plot(X.real,X.imag)
plt.plot(fx.real,fx.imag, 'ro')
This gives me:

....which looks like your Matlab graph.
|
Python serialize objects list to JSON
Question: I am trying to serialize to JSON the `__dict__` of an object, which is working
fine, until I append objects to one of the instance attribute of my first
object:
from json import dumps
class A(object):
def __init__(self):
self.b_list = []
class B(object):
def __init__(self):
self.x = 'X'
self.y = 'Y'
def __repr__(self):
return dumps(self.__dict__)
a = A()
print dumps(a.__dict__) # works fine
a.b_list.append(B())
print dumps(a.__dict__)
When calling for the second time `dumps`, I got the following `TypeError`:
TypeError: {"y": "Y", "x": "X"} is not JSON serializable
I don't understand why I keep getting this error while I can't see why this is
not serializable to JSON.
Answer: That's because instances of `B` are not a simple type. Because you gave `B` a
`__repr__` method, the instance is _printed_ as its JSON representation, but
it is not itself a supported JSON type.
Remove the `__repr__` method and the traceback is much less confusing:
>>> class A(object):
...     def __init__(self):
...         self.b_list = []
...
>>> class B(object):
...     def __init__(self):
...         self.x = 'X'
...         self.y = 'Y'
...
>>> a = A()
>>> a.b_list.append(B())
>>>
>>> print dumps(a.__dict__)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/Users/mj/Development/Library/buildout.python/parts/opt/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <__main__.B object at 0x10a753e10> is not JSON serializable
Use the `default` keyword argument to encode custom objects:
def encode_b(obj):
    if isinstance(obj, B):
        return obj.__dict__
    return obj
json.dumps(a, default=encode_b)
Demo:
>>> def encode_b(obj):
...     if isinstance(obj, B):
...         return obj.__dict__
...     return obj
...
>>> dumps(a.__dict__, default=encode_b)
'{"b_list": [{"y": "Y", "x": "X"}]}'
|
Internal Server Error with very simple python script
Question: I'm new to Python, and I'm trying to run a simple script (on a Mac, if that's
important).
Now, this code gives me an Internal Server Error:
#!/usr/bin/python
print 'hi'
But this one works like a charm (only an extra 'print' command):
#!/usr/bin/python
print
print 'hi'
Any explanation? Thanks!
Update:
When I run this script from the Terminal everything is fine. But when I run it
from the browser:
http://localhost/cgi-bin/test.py
I get this error (and again, only if I'm not adding the extra print command).
I am using the Apache server, of course.
Answer: ~~Looks like you're running your script as a CGI-script~~ (your edit confirms
that you're using CGI)
...and the initial (empty) `print` is required to signify the end of the
headers.
Check your Apache's error log (`/var/log/apache2/error.log` probably) to see
if it says '_Premature end of script headers_ ' ([more info
here](http://httpd.apache.org/docs/2.2/howto/cgi.html#troubleshoot)).
**EDIT** : a bit more explanation:
A CGI script in Apache is responsible for generating its own HTTP response.
An HTTP response consists of a header block, _an empty line_, and the so-called
_body_ contents. Even though you should generate _some_ headers, it's
not mandatory to do so. However, you do need to output the empty line; Apache
expects it to be there, and if it's not (or if you only output a body which
can't be parsed as headers), Apache will generate an error.
That's why your first version didn't work, but your second did: adding the
empty `print` added the required empty line that Apache was expecting.
This will also work:
#!/usr/bin/env python
print "Content-Type: text/html" # header block
print "Vary: *" # also part of the header block
print "X-Test: hello world" # this too is a header, but a made-up one
print # empty line
print "hi" # body
|
Install libtiff on Mavericks
Question: I made a Python script that needs a libtiff module to run. Do you have any
suggestions on how to install libtiff? I tried to do it using fink, but I got
the following error:
> Failed: no package found for specification libtiff!
I also installed libtiff using brew, and in this case I get
> ImportError: No module named libtiff
Answer: Homebrew worked fine for me. Have you installed the [Python bindings for
libtiff](https://code.google.com/p/pylibtiff/)? For example, ...
% brew install libtiff
==> Downloading https://downloads.sf.net/project/machomebrew/Bottles/libtiff-4.0.3.mavericks.bottle.tar.gz
######################################################################## 100.0%
==> Pouring libtiff-4.0.3.mavericks.bottle.tar.gz
/usr/local/Cellar/libtiff/4.0.3: 254 files, 3.8M
% brew install python
% pip install --upgrade setuptools
% pip install --upgrade pip
% pip install numpy
% pip install -e svn+http://pylibtiff.googlecode.com/svn/trunk/
% python
Python 2.7.6 (default, Mar 12 2014, 18:28:55)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import libtiff
>>> libtiff
<module 'libtiff' from '/Users/me/src/svn/libtiff/__init__.pyc'>
>>>
|
ipython not producing output graph using matplotlib
Question: So I have recently started trying to use IPython, and I am finding I cannot get it
to produce an output graph. I am running the following code in IPython:
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(x, y)
pl.plot(x, y, 'o')
pl.plot(x_test, regr.predict(x_test))
and I am receiving the output:
[<matplotlib.lines.Line2D at 0x21d453b0>]
With no image attached.
I installed IPython using the pythonxy package. Any thoughts or suggestions on
how to get plots to output correctly in IPython?
**See attached image:**

Answer: Try running in a cell:
%pylab inline # or
%matplotlib inline
After that the plots should be displayed inline. Alternatively start the
notebook using the inline option in the command line:
ipython notebook --pylab=inline
|
'module' object has no attribute 'loads' while parsing JSON using python
Question: I am trying to parse JSON from Python. I recently started working with Python,
so I followed a Stack Overflow tutorial on how to parse JSON using Python and I
came up with the code below:
#!/usr/bin/python
import json
j = json.loads('{"script":"#!/bin/bash echo Hello World"}')
print j['script']
But whenever I run the above code, I always get this error -
Traceback (most recent call last):
File "json.py", line 2, in <module>
import json
File "/cygdrive/c/ZookPython/json.py", line 4, in <module>
j = json.loads('{"script":"#!/bin/bash echo Hello World"}')
AttributeError: 'module' object has no attribute 'loads'
Any thoughts on what I am doing wrong here? I am running Cygwin on Windows, and
that is where I run my Python program. I am using Python 2.7.3.
And is there any better and more efficient way of parsing the JSON as well?
**Update:-**
The code below doesn't work if I remove the single quotes, since I am getting the
JSON string from some other method:
#!/usr/bin/python
import json
jsonStr = {"script":"#!/bin/bash echo Hello World"}
j = json.loads(jsonStr)
shell_script = j['script']
print shell_script
So, before deserializing, how do I make sure it has the single quotes as well?
This is the error I get -
Traceback (most recent call last):
File "jsontest.py", line 7, in <module>
j = json.loads(jsonStr)
File "/usr/lib/python2.7/json/__init__.py", line 326, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
Answer:
File "json.py", line 2, in <module>
import json
This line is a giveaway: you have named your script "json", but you are trying
to import the builtin module called "json". Since your script is in the
current directory, it comes first in sys.path, and so that's the module that
gets imported.
You need to rename your script to something else, preferably not the name of a
standard python module.
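A quick way to confirm which file is shadowing the standard library module is to print the module's path after importing it from a differently named script; purely illustrative:

import json
print json.__file__   # if this points at your own json.py/.pyc, rename that file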
|
Search pandas series for value and split series at that value
Question: Python 3.3.3 Pandas 0.12.0
I have a single column .csv file with hundreds of float values separated by an
arbitrary string (the string contains letters edit: _and will vary run to
run_). I'm a pandas beginner, hoping to find a way to load that .csv file and
split the float values into two columns at the level of that string.
I'm so stuck at the first part (searching for the string) that I haven't yet
been able to work on the second, which I thought should be much easier.
So far, I've been trying to use `raw = pandas.read_csv('myfile.csv',
squeeze=True)`, then something like `raw.str.findall('[a-z]')`, but I'm not
having much luck. I'd really appreciate if someone could lend a hand. I'm
planning to use this process on a number of similar .csv files, so I'd hope to
find a fairly automated way of performing the task.
Example input.csv:
123.4932
239.348
912.098098989
49391.1093
....
This is a fake string that splits the data.
....
1323.4942
2445.34223
914432.4
495391.1093090
Desired eventual DataFrame:
Column A Column B
123.4932 1323.4942
239.348 2445.34223
912.098098989 914432.4
49391.1093 495391.1093090
... ...
Thanks again if you can point me in the right direction.
* * *
20131123 EDIT: Thank you for the responses thus far. Updated to reflect that
the splitting string will not remain constant, hence my statement that I'd
been trying to find a solution employing a regex `raw.str.findall('[a-z]')`
instead of using `.contains`.
My solution at this point is to just read the .csv file and split with `re`,
accumulate into lists, and load those into pandas.
import pandas as pd
import re

raw = open('myfile.csv', 'r').read().split('\n')

df = pd.DataFrame()
keeper = []
counter = 0

# Iterate through the rows. Consecutive rows that can be made into float are accumulated.
for row in raw:
    try:
        keeper.append(float(row))
    except:
        if keeper:
            df = pd.concat([df, pd.DataFrame(keeper, columns = [counter] )], axis = 1)
            counter += 1
            keeper = []

# Get the last column, assuming the file hasn't ended on a line
# that will trigger the exception in the above loop.
if keeper:
    df = pd.concat([df, pd.DataFrame(keeper, columns = [counter] )], axis = 1)

df.describe()
Thank you for any further suggestions.
Answer: If you know you only have two columns, then you could do something like
>>> ser = pd.read_csv("colsplit.csv", header=None, squeeze=True)
>>> split_at = ser.str.contains("fake string that splits").idxmax()
>>> parts = [ser[:split_at], ser[split_at+1:]]
>>> parts = [part.reset_index(drop=True) for part in parts]
>>> df = pd.concat(parts, axis=1)
>>> df.columns = ["Column A", "Column B"]
>>> df
Column A Column B
0 123.4932 ....
1 239.348 1323.4942
2 912.098098989 2445.34223
3 49391.1093 914432.4
4 .... 495391.1093090
5 NaN extra test element
If you have an arbitrary number of places to split at, then you can use a
boolean Series/shift/cumsum/groupby pattern, but if you can get away without
it, so much the better.
(PS: I'm sure there's a better way than `idxmax`, but for the life of me I
can't remember the idiom to find the first True right now.
`split_at[split_at].index[0]` would do it, but I'm not sure that's much
better.)
|
issue with self made random number generator in python. (using time.time())
Question: I am well aware that python has a built in random library, so please do not
include any reference to it in this. I am taking a do-it-yourself approach
using `time.time()`. I made a simple random number generator as is shown
below:
def random(x, y):
    import time
    choices = [i for i in range(x, y)]
    t = time.time()
    num = int(str(t-int(t))[2:])%100
    xnum = [i for i in range(num, choices[-1])]
    ynum = [choices[i] for i in xnum]
    znum = ynum[choices[num-1]]
    ret = choices[znum]
    return ret
This works pretty well for anything like `random(1, 200)`; however, it has
some issues. For example, if i were to do `random(10, 200)`, it would give an
index error. How could I improve/change this to make it handle any two
numbers?
Answer: You would need to have a function, say `randombits()` which generates a random
integer in some range (i.e. 256 or 2^32). Then your `random()` function needs
to first take the difference of `x` and `y`, i.e. the number of possible
outputs of `random(x, y)`. Then it needs to calculate how many times
`randombits()` has to be called in order to have enough entropy. Then it needs
to combine the values of the multiple calls of `randombits()` into one long
integer. One way to do this is to place output of `randombits()` into a
separate digit of the long random integer, i.e. if the outputs of the calls
are `8, 9, 1` then the long random integer will be 891.
Then it needs to scale down the long random integer to the number of possible
outputs (the difference between x and y). One way to do this which
`random.random()` uses is to keep getting a long random integer until it is
within the range. Then it needs to add the long random integer to `x` and that
should be returned.
Code:
RANDOMBITS_RANGE = 256

def randombits():
    return 4  # Replace this code with something that generates real random numbers

def random(x, y):
    num_range = y - x
    # How many randombits() calls are needed so RANDOMBITS_RANGE**num_of_calls covers num_range
    num_of_calls = 1
    while RANDOMBITS_RANGE ** num_of_calls < num_range:
        num_of_calls += 1
    random_integer = num_range + 1  # So that the first time the while loop block will execute
    while random_integer >= num_range:  # Keeps looping until random_integer < num_range
        random_integer = sum(randombits() * RANDOMBITS_RANGE ** i for i in range(num_of_calls))
    return x + random_integer
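To tie this back to the question's time-based approach, `randombits()` could, for instance, be derived from the fractional part of `time.time()`. This is only an illustrative sketch and is not a good source of randomness:

import time

def randombits():
    # Fold the fractional part of the clock into [0, RANDOMBITS_RANGE)
    t = time.time()
    return int((t - int(t)) * 1000000) % RANDOMBITS_RANGE

print random(10, 200)   # now handles any two bounds, e.g. 10 and 200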
|
how to convert json to python class?
Question: I want to convert JSON to a Python class.
Example:
{'channel':{'lastBuild':'2013-11-12', 'component':['test1', 'test2']}}
self.channel.component[0] => 'test1'
self.channel.lastBuild => '2013-11-12'
Do you know of a Python library for this kind of JSON conversion?
Answer: Use `object_hook` special parameter in load functions of json module:
import json

class JSONObject:
    def __init__(self, dict):
        vars(self).update(dict)

# this is a valid json string
data = '{"channel":{"lastBuild":"2013-11-12", "component":["test1", "test2"]}}'
jsonobject = json.loads(data, object_hook=JSONObject)
print(jsonobject.channel.component[0])
print(jsonobject.channel.lastBuild)
This method has some issues; for example, some names in Python are reserved. You can
filter them out inside the `__init__` method.
|
Ruby script executed by os.system() using python
Question: I am facing an issue while writing the output of a Ruby script executed by os.system().
import os

def main():
    os.system('\\build.rb -p test > out1.txt')
    os.system('\\newRelease.rb -l bat > out2.txt')

if __name__ == '__main__':
    main()
When I execute the code without '> out1.txt', it runs and shows output on cmd,
but when I pass the parameter ' > out1.txt' it does not write the output into
out1.txt. I want the output of the Ruby script to be redirected to the txt file.
Answer: I'd do it this way:
from subprocess import check_output

build = check_output(['\\build.rb', '-p', 'test'])
with open('out1.txt', 'w') as out1:
    out1.write(build)

release = check_output(['\\newRelease.rb', '-l', 'bat'])
with open('out2.txt', 'w') as out2:
    out2.write(release)
|
paramiko is installed but mysql workbench saying "ImportError: No module named paramiko"
Question: When I try to open MySQL Workbench it says "ImportError: No module
named paramiko; Operation failed: Cannot start SSH tunnel manager", although I
have installed paramiko. I am using Python 2.7.3 on Ubuntu 12.04.
I started getting this error after trying to upgrade from Python 2.7.3 to Python 3.
I then installed Python 2.7.5, but gedit stopped working, so I went back to
Python 2.7.3. Now everything is back to normal except MySQL Workbench. I am using
Workbench version 6.0.
Error report:
Traceback (most recent call last):
  File "/usr/share/mysql-workbench/sshtunnel.py", line 30, in <module>
    import paramiko
ImportError: No module named paramiko
Operation failed: Cannot start SSH tunnel manager
Answer: Try to create a virtual environment and to install `paramiko` in it.
virtualenv test
source test/bin/activate
pip install paramiko
mysql-workbench
For some reason it solved the problem for me, after `sudo apt-get install
python-paramiko` and `sudo pip install paramiko` both failed. I also tried
installing `paramiko` in a `conda` environment, and it failed as well.
|
Python Homework - file i/o - read file and turn into dictionary
Question: I need to create a function that takes no arguments and reads back the
dictionary that is in a previously-saved file. I must first determine if the
file exists. If it does, I must read the contents of the file and return it as
a dictionary. If not, return `[]`.
I'm fairly new to Python and I've been brain dead looking at this for a couple
hours now. Any help would be much appreciated!
For example:
dave 12
brad 18
stacy 8
This would now be read as `{'dave': 12, 'brad': 18, 'stacy': 8}`.
So far I have this:
def readit():
    file1 = open('save.txt', 'r')
    data = []
    lines = file1.readlines()
    for i in range(len(lines)):
        data.append(lines[i].split('\n'))
    return data
    file1.close()
Answer: This'll do it:
import os.path
def readit():
filename = 'save.txt'
if not os.path.isfile(filename):
return {}
with open(filename) as ifh:
return dict(line.split() for line in ifh)
This first tests if the file exists; if it does not an empty dictionary is
returned.
If there are spaces between the names, use `.rsplit(None, 1)`; this'll split
on the last whitespace within the line only:
def readit():
    filename = 'save.txt'
    if not os.path.isfile(filename):
        return {}
    with open('save.txt') as ifh:
        return dict(line.rsplit(None, 1) for line in ifh)
which will turn:
Martijn Pieters 42
user3014014 38
into
{'Martijn Pieters': '42', 'user3014014': '38'}
Note the `with` statement here also. This uses the file object as a context
manager, meaning that as soon as the block exits (using `return`, due to an
exception or simply because the block ends) then the file is automatically
closed for you. Your `file1.close()` line on the other hand will never be
executed as it is placed after the `return` statement.
The above, of course, gives us _strings_ for values. Lets expand this to
produce integer values instead:
def readit():
    filename = 'save.txt'
    if not os.path.isfile(filename):
        return {}
    with open('save.txt') as ifh:
        return {key: int(value) for line in ifh for key, value in (line.rsplit(None, 1),)}
This produces:
{'Martijn Pieters': 42, 'user3014014': 38}
for my sample input.
|
Django get environment variables from apache
Question: I cannot seem to get Django to read the settings I configure from the
environment variables. I have followed some guides online, and found some
other questions, and as a result have tried configuring as below:
**Apache Config:**
WSGIScriptAlias "/v4" /usr/local/myproject4/myproject4/wsgi.py
WSGIPythonPath /usr/local/myproject4:/usr/local/myproject4/env/lib/python2.7/site-packages

<VirtualHost *:8000>
    SetEnv MYPROJECT_SECRET_KEY 'xxx'
    SetEnv MYPROJECT_DB_USER 'xxxx'
    SetEnv MYPROJECT_DB_PASS 'xxxx'
    <Directory /usr/local/myproject4/myproject4>
        <Files wsgi.py>
            Order deny,allow
            Allow from all
        </Files>
    </Directory>
</VirtualHost>
**My wsgi.py file contains this (to retrieve the settings):**
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject4.settings")

from django.core.handlers.wsgi import WSGIHandler
_application = WSGIHandler()

def application(environ, start_response):
    for key, value in environ:
        if key.startswith('MYPROJECT_'):
            os.environ[key] = value
    return _application(environ, start_response)
**However whenever I try to retrieve the settings I get this:**
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] mod_wsgi (pid=21912): Target WSGI script '/usr/local/myproject4/myproject4/wsgi.py' cannot be loaded as Python module.
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] mod_wsgi (pid=21912): Exception occurred processing WSGI script '/usr/local/myproject4/myproject4/wsgi.py'.
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] Traceback (most recent call last):
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/myproject4/wsgi.py", line 14, in <module>
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] from django.core.handlers.wsgi import WSGIHandler
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 11, in <module>
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] from django.core.handlers import base
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/core/handlers/base.py", line 12, in <module>
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] from django.db import connections, transaction
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/db/__init__.py", line 83, in <module>
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] signals.request_started.connect(reset_queries)
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 88, in connect
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] if settings.DEBUG:
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] self._setup(name)
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/conf/__init__.py", line 49, in _setup
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] self._wrapped = Settings(settings_module)
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/conf/__init__.py", line 128, in __init__
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] mod = importlib.import_module(self.SETTINGS_MODULE)
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/env/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] __import__(name)
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/myproject4/settings.py", line 29, in <module>
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] SECRET_KEY = get_env_variable('MYPROJECT_SECRET_KEY')
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] File "/usr/local/myproject4/myproject4/settings.py", line 23, in get_env_variable
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] raise ImproperlyConfigured(error_msg)
[Wed Nov 20 17:07:08 2013] [error] [client xxx.xxx.xxx.xxx] ImproperlyConfigured: Set the MYPROJECT_SECRET_KEY environment variable
I'd really appreciate it if someone could help me identify what I am doing wrong.
Answer: I needed the same feature to deal with prod/dev environments... and found out
the following article: <http://drumcoder.co.uk/blog/2010/nov/12/apache-
environment-variables-and-mod_wsgi/>
I just tried it, and it worked at once. Pay attention to the handler's name,
which is prefixed with an underscore:
_application = django.core.handlers.wsgi.WSGIHandler()
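In short, the idea from that article is to copy the Apache SetEnv values into os.environ on the first request and only then create the handler. A rough sketch (untested, adapted to the names used in the question) could look like this:

import os
import django.core.handlers.wsgi

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject4.settings")
_application = None

def application(environ, start_response):
    global _application
    # copy the SetEnv values Apache passed in for this request
    for key, value in environ.items():
        if key.startswith('MYPROJECT_'):
            os.environ[key] = value
    # create the Django handler lazily, after the variables are in place
    if _application is None:
        _application = django.core.handlers.wsgi.WSGIHandler()
    return _application(environ, start_response)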
|
Sum Values in a Dictionary w/ respect to the Key - Python 2.7
Question: My dictionary `Dict` is arranged as follows. Each key is associated with a
list of values, where each value is a tuple:
Dict = {
'key1': [('Red','Large',30),('Red','Medium',40),('Blue','Small',45)],
'key2': [('Red','Large',35)],
'key3': [('Yellow','Large',30),('Red','Medium',30)],
}
I then want to sum the integers (index 2 of each tuple) given a new key, Color
in this case.
The resulting new dictionary should look something like:
{
'key1': [('Red', 70), ('Blue', 45)],
'key2': [('Red', 35)],
'key3': [('Yellow', 30), ('Red', 30)],
}
How would I accomplish this?
I was thinking something like the following, but I know this is wrong in
several ways.
sum = 0
new_dict = {}
new_key = raw_input("Enter a new key to search on: ")
for k,v in Dict:
    if v[0] == new_key:
        sum = sum + v[2]
        new_dict[k].append(sum)
    else:
        sum = 0
        new_dict[k] = [sum]
Answer: Use a dict comprehension to produce your new output:
{key: [color, sum(t[2] for t in value if t[0] == color)] for key, value in Dict.iteritems()}
where `color` is the key to search on.
Demo:
>>> Dict = {
... 'key1': [('Red','Large',30),('Red','Medium',40),('Blue','Small',45)],
... 'key2': [('Red','Large',35)],
... 'key3': [('Yellow','Large',30),('Red','Medium',30)],
... }
>>> color = 'Red'
>>> {key: [color, sum(t[2] for t in value if t[0] == color)] for key, value in Dict.iteritems()}
{'key3': ['Red', 30], 'key2': ['Red', 35], 'key1': ['Red', 70]}
To sum _all_ values by color, use a `Counter()` to sum the values:
from collections import defaultdict, Counter

new_dict = {}
for key, values in Dict.iteritems():
    counts = Counter()
    for color, _, count in values:
        counts[color] += count
    new_dict[key] = counts.items()
which gives:
>>> new_dict = {}
>>> for key, values in Dict.iteritems():
... counts = Counter()
... for color, _, count in values:
... counts[color] += count
... new_dict[key] = counts.items()
...
>>> new_dict
{'key3': [('Red', 30), ('Yellow', 30)], 'key2': [('Red', 35)], 'key1': [('Blue', 45), ('Red', 70)]}
|
Passing numpy array to Cython
Question: I am learning Cython. I have a problem with passing numpy arrays to Cython and
don't really understand what is going on. Could you help me?
I have two simple arrays:
a = np.array([1,2])
b = np.array([[1,4],[3,4]])
I want to compute a dot product of them. In python/numpy everything works
fine:
>>> np.dot(a,b)
array([ 7, 12])
I translated the code to Cython (as here:
<http://docs.cython.org/src/tutorial/numpy.html>):
import numpy as np
cimport numpy as np

DTYPE = np.int
ctypedef np.int_t DTYPE_t

def dot(np.ndarray a, np.ndarray b):
    cdef int d = np.dot(a, b)
    return d
It compiled with no problems but returns an error:
>>> dot(a,b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "test.pyx", line 8, in test.dot (test.c:1262)
cdef int d = np.dot(a, b)
TypeError: only length-1 arrays can be converted to Python scalars
Could you tell me why and how to do it correctly? Unfortunately Google was not
helpful...
Thanks!
Answer: Your result is an np.ndarray, not an int. It fails trying to convert the former
to the latter. Do this instead:
def dot(np.ndarray a, np.ndarray b):
    cdef np.ndarray d = np.dot(a, b)
    return d
|
How to read from a text file compressed with 7z in Python
Question: I would like to read (in Python 2.7), line by line, from a csv (text) file,
which is 7z compressed. I don't want to decompress the entire (large) file,
but to stream the lines.
I tried `pylzma.decompressobj()` unsuccessfully. I get a data error. Note that
this code doesn't yet read line by line:
import pylzma

input_filename = r"testing.csv.7z"
with open(input_filename, 'rb') as infile:
    obj = pylzma.decompressobj()
    o = open('decompressed.raw', 'wb')
    obj = pylzma.decompressobj()
    while True:
        tmp = infile.read(1)
        if not tmp: break
        o.write(obj.decompress(tmp))
    o.close()
Output:
o.write(obj.decompress(tmp))
ValueError: data error during decompression
Answer: This will allow you to stream the lines. It's derived from some code I found
in [another answer](http://stackoverflow.com/a/12292596/355230).
import os
import py7zlib

class SevenZFileError(py7zlib.ArchiveError):
    pass

class SevenZFile(object):
    @classmethod
    def is_7zfile(cls, filepath):
        """ Determine if filepath points to a valid 7z archive. """
        is7z = False
        fp = None
        try:
            fp = open(filepath, 'rb')
            archive = py7zlib.Archive7z(fp)
            n = len(archive.getnames())
            is7z = True
        finally:
            if fp: fp.close()
        return is7z

    def __init__(self, filepath):
        fp = open(filepath, 'rb')
        self.filepath = filepath
        self.archive = py7zlib.Archive7z(fp)

    def __contains__(self, name):
        return name in self.archive.getnames()

    def bytestream(self, name):
        """ Iterate stream of bytes from an archive member. """
        if name not in self:
            raise SevenZFileError('member %s not found in %s' %
                                  (name, self.filepath))
        else:
            member = self.archive.getmember(name)
            for byte in member.read():
                if not byte: break
                yield byte

    def readlines(self, name):
        """ Iterate lines from an archive member. """
        linesep = os.linesep[-1]
        line = ''
        for ch in self.bytestream(name):
            line += ch
            if ch == linesep:
                yield line
                line = ''
        if line: yield line
Sample usage:
import csv

if SevenZFile.is_7zfile('testing.csv.7z'):
    sevenZfile = SevenZFile('testing.csv.7z')
    if 'testing.csv' not in sevenZfile:
        print 'testing.csv is not a member of testing.csv.7z'
    else:
        reader = csv.reader(sevenZfile.readlines('testing.csv'))
        for row in reader:
            print ', '.join(row)
|
Can one declare an abstract exception in Python?
Question: I would like to declare a hierarchy of user-defined exceptions in Python.
However, I would like my top-level user-defined class (`TransactionException`)
to be abstract. That is, I intend `TransactionException` to specify methods
that its subclasses are required to define. However, `TransactionException`
should never be instantiated or raised.
I have the following code:
from abc import ABCMeta, abstractmethod

class TransactionException(Exception):
    __metaclass__ = ABCMeta

    @abstractmethod
    def displayErrorMessage(self):
        pass
However, the above code allows me to instantiate `TransactionException`...
a = TransactionException()
In this case `a` is meaningless, and should instead draw an exception. The
following code removes the fact that `TransactionException` is a subclass of
`Exception`...
from abc import ABCMeta, abstractmethod

class TransactionException():
    __metaclass__ = ABCMeta

    @abstractmethod
    def displayErrorMessage(self):
        pass
This code properly prohibits instantiation but now I cannot raise a subclass
of `TransactionException` because it's not an `Exception` any longer.
Can one define an abstract exception in Python? If so, how? If not, why not?
NOTE: I'm using Python 2.7, but will happily accept an answer for Python 2.x
or Python 3.x.
Answer:
class TransactionException(Exception):
    def __init__(self, *args, **kwargs):
        raise NotImplementedError('you should not be raising this')

class EverythingLostException(TransactionException):
    def __init__(self, msg):
        super(TransactionException, self).__init__(msg)

try:
    raise EverythingLostException('we are doomed!')
except TransactionException:
    print 'check'

try:
    raise TransactionException('we are doomed!')
except TransactionException:
    print 'oops'
|
Speed of urllib.urlretrieve vs urllib.urlopen
Question: I am trying to download SEC filings directly from the SEC ftp server. When I
use `urllib.urlretrieve(url,dst)`, it takes significantly longer than when
doing something like `page = urllib.urlopen(url).read()` followed by
`writeFile.write(page)`. As an example:
from time import time
import urllib
url = 'ftp://ftp.sec.gov/edgar/data/886475/0001019056-13-000804.txt'
t0 = time()
urllib.urlretrieve(url,'D:/temp.txt')
t1 = time()
t = t1-t0
print "urllib.urlretrieve time = %s" % t
t0 = time()
writefile = open('D:/temp2.txt','w')
page = urllib.urlopen(url).read()
writefile.write(page)
writefile.close()
t1 = time()
t = t1-t0
print "urllib.urlopen time = %s" % t
When I run this, I get 33 seconds for `urllib.urlretrieve` and 2.6 seconds for
the `urllib.urlopen` block. If I watch the D drive, the full ~5.6MB is
downloaded very quickly, but then it hangs for ~30 seconds. What is going on
here? I can proceed with my project using the `urllib.urlopen` method, but
would like to know for future projects. I am running Windows 7 professional
64-bit and this is Python 2.7. Thanks in advance for your help.
Answer: Timing is a funny thing, especially considering the stateless environment of
the web.
While I don't have a smoking gun, I would recommend you take a look at the
[source for
urllib](http://svn.python.org/view/python/tags/r27/Lib/urllib.py?view=markup)
(as of 2.7).
You can see at line 69: `def urlopen`, and line 87: `def urlretrieve`. Both
create a `FancyURLopener()`, but call separate functions within the class.
My best guess is the delay revolves around either:
1. Windows file handlers, opening, closing, etc.
2. DNS resolution (Less likely, since the file resolves and downloads within 5.6 seconds, as you claim.)
You could always hack your `urllib.py` source to print out timings of each
sub-function call, even if only temporarily to trace down the hangup. To
locate where your installation is storing urllib.py, use the following:
import urllib
print urllib.__file__
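One non-invasive way to see where `urlretrieve` spends its time is its `reporthook` callback, which fires once per block read; a rough sketch against the question's URL:

from time import time
import urllib

url = 'ftp://ftp.sec.gov/edgar/data/886475/0001019056-13-000804.txt'
last = [time()]

def hook(blocknum, blocksize, totalsize):
    # print the time elapsed since the previous block was handled
    now = time()
    print "block %d: %.3f s since previous block" % (blocknum, now - last[0])
    last[0] = now

urllib.urlretrieve(url, 'D:/temp.txt', reporthook=hook)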
|
Arithmetic Operation in a SQL query (nested Select statement) using Python
Question: I am trying to do an arithmetic operation in a SQL query using Python (I am
using sqlite3). My SQL table (TwTbl) has a column geo_count (number). I have
to count the number of entries in which the geo_count column has a number
greater than 0, and also count the number of entries with geo_count = 0, and
then subtract them, i.e.
(number of entries with Geo_count = 0) - (number of entries with Geo_count > 0)
I have to write a nested select statement for that.
import sqlite3
c.execute("SELECT (COUNT(SELECT geo_count FROM TwTbl WHERE geo_id == 0) –
COUNT(SELECT geo_count FROM TwTbl WHERE geo_count IS <> 0)) FROM TwTbl").fetchall()
This is giving me a syntax error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
sqlite3.OperationalError: near "SELECT": syntax error
I tried writing this query in another way. It is not giving me any syntax
error, but not the expected results. If I run the following query I get 0.
c.execute("select count(geo_count) from TwTbl where geo_count == 0 -
(select count(geo_count) from TwTbl where geo_count <> 0)").fetchall()
Although If I run the queries individually the results are as follows:
c.execute("select count(geo_count) from TwTbl where geo_count <> 0").fetchall()
>>> 13
c.execute("select count(geo_count) from TwTbl where geo_count ==0").fetchall()
>>> 880
I am not sure about the correct syntax. Not sure what am I doing wrong.
Answer:
SELECT SUM(geo_count==0) - SUM(geo_count<>0) FROM TwTbl
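For completeness, a sketch of running that from sqlite3 in Python (the connection setup below is assumed; SQLite evaluates `geo_count==0` as 1 or 0, so SUM counts the matching rows):

import sqlite3

conn = sqlite3.connect('mydb.sqlite')   # assumed database file
c = conn.cursor()
diff = c.execute(
    "SELECT SUM(geo_count==0) - SUM(geo_count<>0) FROM TwTbl"
).fetchone()[0]
print diff   # with the counts shown above: 880 - 13 = 867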
|
Why does SVD result of Armadillo differ from NumPy?
Question: In my Python code, I was computing SVD of some data using
[numpy.linalg.svd](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html):
from numpy import linalg
(_, _, v) = linalg.svd(m)
V matrix returned by this was:
[[ 0.4512937 -0.81992002 -0.35222884]
[-0.22254721 0.27882908 -0.93419863]
[ 0.86417981 0.4999855 -0.05663711]]
While porting my code to C++, I switched to using
[Armadillo](http://arma.sourceforge.net/) for computing SVD:
#include <armadillo>
arma::fmat M; // Input data
arma::fmat U;
arma::fvec S;
arma::fmat V;
arma::svd(U, S, V, M);
The resulting V for the same data is:
0.4513 -0.2225 -0.8642
-0.8199 0.2788 -0.5000
-0.3522 -0.9342 0.0566
We can see that the transpose of V from Armadillo matches V from NumPy, except
for the last column of V from Armadillo. Those values have the opposite sign of
the values in the last row of the NumPy result.
What is happening here? Why do the SVD results from two popular libraries
differ like this? And which of the two is the correct result?
Answer: Both are correct... The rows of the `v` you got from numpy are the
eigenvectors of `M.dot(M.T)` (the transpose would be a conjugate transpose in
the complex case). Eigenvectors are in the general case defined only up to a
multiplicative constant, so you could multiply any row of `v` by a different
number, and it will still be an eigenvector matrix.
There is the additional constraint on `v` that it be a [unitary
matrix](http://en.wikipedia.org/wiki/Unitary_matrix), which loosely translates
to its rows being orthonormal. This reduces your available choices for every
eigenvector to only 2: the normalized eigenvector pointing in either
direction. But you still get to multiply any row by -1 and still have a valid
`v`.
If you want to test it for your matrix, which I have loaded as `a`:
>>> u, d, v = np.linalg.svd(a)
>>> D = np.zeros_like(a)
>>> idx = np.arange(a.shape[1])
>>> D[idx, idx] = d
>>> np.allclose(a, u.dot(D).dot(v))
True
>>> v[2] *= -1
>>> np.allclose(a, u.dot(D).dot(v))
True
* * *
Actually, you can only multiply the rows of `v` by -1 in the real domain, but
in the complex case you can multiply them by any complex number of absolute
value 1:
>>> vv = v.astype(np.complex)
>>> vv[0] *= (1+1.j)/np.sqrt(2)
>>> np.allclose(a, u.dot(D).dot(v))
True
|
python pandas convert dataframe to dictionary with multiple values
Question: I have a dataframe with 2 columns, Address and ID. I want to merge IDs with the
same address into a dictionary:
import pandas as pd, numpy as np
df = pd.DataFrame({'Address' : ['12 A', '66 C', '10 B', '10 B', '12 A', '12 A'],
'ID' : ['Aa', 'Bb', 'Cc', 'Dd', 'Ee', 'Ff']})
AS=df.set_index('Address')['ID'].to_dict()
print df
Address ID
0 12 A Aa
1 66 C Bb
2 10 B Cc
3 10 B Dd
4 12 A Ee
5 12 A Ff
print AS
{'66 C': 'Bb', '12 A': 'Ff', '10 B': 'Dd'}
What I want is for the duplicates to store multiple values like:
{'66 C': ['Bb'], '12 A': ['Aa','Ee','Ff'], '10 B': ['Cc','Dd']}
Answer: I think you can use `groupby` and a dictionary comprehension here:
>>> df
Address ID
0 12 A Aa
1 66 C Bb
2 10 B Cc
3 10 B Dd
4 12 A Ee
5 12 A Ff
>>> {k: list(v) for k,v in df.groupby("Address")["ID"]}
{'66 C': ['Bb'], '12 A': ['Aa', 'Ee', 'Ff'], '10 B': ['Cc', 'Dd']}
|
How to get the size/length of sub-elements within an XML tag using python
Question: I'm a newbie to Python and I was wondering how to get the size or number of sub-
elements within a parent tag, let's say `participants`. The idea is to get the
number of `participant` elements within the `participants` tag.
Here is the XML:
<participants>
<participant>
<userId>James</userId>
<role>Author</role>
</participant>
<participant>
<userId>Alex</userId>
<role>Reader</role>
</participant>
</participants>
I'm using the `xml` module:
`import xml.etree.ElementTree as ET`
and the parsed tree is assigned with `dom = ET.fromstring(output)`.
so far, to parse xml, I have written the following code which works:
for participant in dom.iter('participant'):
    userId = participant.find('userId').text
    role = participant.find('role').text
But I want to get the size/length, i.e. the number of `participant` elements in the
`participants` tag, and this is what I'm trying, but it doesn't give me the length:
print 'length', dom.findall('participants').length
The output I want should be:
length 2
Answer: try
print(len(dom.findall('participant')))
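Note that `dom.findall('participants')` returned nothing useful because `ET.fromstring(output)` already gives you the `<participants>` element itself, and `findall` only searches its children; also, Python lists have no `.length` attribute, so use `len()`. For example, with `output` being the XML string from the question:

import xml.etree.ElementTree as ET

dom = ET.fromstring(output)   # `dom` is the <participants> element itself
print 'length', len(dom.findall('participant'))   # length 2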
|
Internal server error while running dev_appserver.py
Question: I am trying to upload my Unity Web Player app to Google App Engine, but when I
start the server using dev_appserver.py I get an Internal Server Error while
browsing the localhost page. The error is: "The server has either erred or is
incapable of performing the requested operation."
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~gcdc2013-space3d/1.371793060412129756/main.py", line 27, in get
self.response.out.write(template.render(path, template_values))
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/webapp/template.py", line 89, in render
t = _load_internal_django(template_path, debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/webapp/template.py", line 163, in _load_internal_django
template = django.template.loader.get_template(file_name)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/_internal/django/template/loader.py", line 157, in get_template
template, origin = find_template(template_name)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/_internal/django/template/loader.py", line 138, in find_template
raise TemplateDoesNotExist(name)
TemplateDoesNotExist: WebPlayer.html
My app.yaml content
application: gcdc2013-space3d
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /favicon\.ico
  static_files: favicon.ico
  upload: favicon\.ico

- url: /WebPlayer\.unity3d
  static_files: WebPlayer/WebPlayer.unity3d
  upload: WebPlayer/WebPlayer\.unity3d

- url: .*
  script: main.app

libraries:
- name: webapp2
  version: "2.5.2"
main.py content
from google.appengine.ext.webapp import template
import webapp2
import os
class MainHandler(webapp2.RequestHandler):
def get(self):
template_values = {
'greetings': 'greetings',
}
path = os.path.join('WebPlayer', 'WebPlayer.html')
self.response.out.write(template.render(path, template_values))
app = webapp2.WSGIApplication([
('/', MainHandler)
], debug=True)
Any help will be appreciated.
Answer: It's probably because Google datastore indexes haven't updated in time. Wait a
little while after deploying, and it should work.
See here: <http://stackoverflow.com/a/21965590/2137369>
|
How to insert values of a whole column in python using xlwt
Question: I have found the intersection of two columns in the same Excel sheet and I
would like to write the result into a third column in the same sheet using xlwt.
How do I do it? The code I am working with is below.
import xlrd
import xlwt
wb=xlrd.open_workbook('try2.xls')
xlsname = 'try2.xls'
book = xlrd.open_workbook(xlsname, on_demand=True)
sheet0=book.sheet_by_name('one')
A = sheet0.col(0)
B = sheet0.col(1)
C = sheet0.col(2)
D = sheet0.col(3)
E = sheet0.col(4)
F = sheet0.col(5)
W = E and F
How do I write W in `G = sheet0.col(6)`? Thanks!!
Answer: How to insert a column before a specified column in an Excel sheet using Python:

    import win32com.client
    xlApp = win32com.client.Dispatch("Excel.Application")

    # Open the workbook
    wkbk = xlApp.Workbooks.Open("C:\\Sunil\\myexcel.xlsx")

    # Activate the sheet you want by number
    wksht = wkbk.Sheets(1)

    # Activate the sheet you want by name
    wksht = wkbk.Worksheets("Sheet2").Activate()
    wksht = wkbk.Worksheets("Sheet1").Activate()

    # Take the object of the activated sheet
    wksht = wkbk.ActiveSheet

    # Insert a column before column B
    wksht.Columns("B").EntireColumn.Insert()
|
SWIG c to python lost function?
Question: considering: <https://github.com/dmichel76/ViSi-Genie-RaspPi-Library>
I've tried a serial read and a write, from Raspbian to the 4D panel, and it all
worked fine.
I'm trying to use a slider controller; this way it works for about a minute, then
goes down, returning -1 at read.
import geniePi as D
import wiringpi2 as W
W.wiringPiSetup()
D.genieSetup("/dev/ttyAMA0", 115200)
while 1:
a = D.genieReadObj(32,0)
print ("a:" % (a))
I'm looking at genieGetReply to solve this issue (to avoid the use of ReadObj).
But looking at the SWIG files, it seems like genieReplyAvail is always considered
to be 0; am I right?
SWIGINTERN PyObject *_wrap_genieReplyAvail(PyObject *SWIGUNUSEDPARM(self), PyObject *args) {
PyObject *resultobj = 0;
int result;
So genieGetReply() isn't called. It appears that to use genieGetReply() I must
create an instance of the struct genieReplyStruct.
x =D.genieReplyStruct()
D.genieGetReply(x)
(if genieReplyAvail never gets a value other than 0, this stays forever in the
delay(1) statement). I cannot understand how this should be implemented and
called, so I kindly ask you for a little advice.
Answer: I experienced exactly the same issue. I fixed it as follows: did you check that
you report a message from the Workshop software? Select the button, go to the
Events tab, and put "report Message" in "On Changed". Build/download and try
again. It should work now. Hope it helps.
|
Problems with importing a python library
Question: Currently I'm trying to use this python library:
<https://github.com/etotheipi/BitcoinArmory/blob/master/armoryd.py>
Essentially, I'm able to run:
python armoryd armory_2BEfTgvpofds_.watchonly.wallet
Only when I pass a .wallet argument.
I want to do the same with a script that I create. But when I import the
library, it's asking for a wallet argument. When I do something like:
import armoryd armory_2BEfTgvpofds_.watchonly.wallet
It complains about invalid syntax.
Is it possible to import this library?
Answer:
from armoryd import armory_2BEfTgvpofds_.watchonly.wallet
Your import statement is invalid, it needs to be `from MODULE import
SOMETHING1, SOMETHING2...etc`
Also you need to ensure that your armoryd library is on the _PYTHONPATH_
## update
<https://github.com/etotheipi/BitcoinArmory/blob/master/extras/sample_armory_code.py>
take a look there - a sample on how to use the armory code in python.
|
yaml and compiling libYaml for python under windows
Question: I wish to write & read data files (big size, 10 MB+), and I'm thinking about
using yaml for that. But, after some testing, it seems that yaml is extremely
slow in both write and read for files of that size. Then I read about libYaml,
the C library that speeds things up when using yaml.CLoader.
I'm using Windows 7 64-bit and I couldn't find any installer for libYaml, so I
rolled up my sleeves and tried (for the first time ever) compiling the
source (using VS2008). I managed to compile the output yaml.dll, but that's not
the file type I need for Python to import/use; I need *.pyd, so I got stuck at
this point and could use some help :)
Any idea how I can compile libYaml for win 64-bit and Python? Or what's your
favorite writer/reader of big dictionary-like files (where speed and
human-readability matter)?
Answer: you can get a 64 bit windows installer here (not me):
<http://www.lfd.uci.edu/~gohlke/pythonlibs/>
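Once PyYAML is built against libyaml (the linked installers include such builds), the C loader/dumper can be used like this; a small sketch with a pure-Python fallback in case the C extension is missing (the file names are just placeholders):

    import yaml

    try:
        from yaml import CLoader as Loader, CDumper as Dumper  # libyaml-backed, much faster
    except ImportError:
        from yaml import Loader, Dumper  # pure-Python fallback

    with open('data.yaml') as f:
        data = yaml.load(f, Loader=Loader)

    with open('out.yaml', 'w') as f:
        yaml.dump(data, f, Dumper=Dumper)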
|
Google App Engine: ImportError: No Module named appengine.tools
Question: When running google app engine and trying to import `google.appengine.tools`,
I receive an uncaught exception complaining that `appengine.tools` is
undefined.
I have confirmed that Google SDK is on the PYTHONPATH:
echo $PYTHONPATH
:/usr/local/google_appengine:/usr/local/google_appengine/lib/django-1.4
Answer: After investigating, I found that there was another `google` package installed
in the `dist-packages` folder, which was in the `PYTHONPATH`, before
`google_appengine` SDK...
Searching for the `google` package, I found `protobuf` inside.
For example, to see everything in the google package, you can go to the
directory (location may vary, depending on system)
cd /usr/lib/python2.7/dist-packages/google
ls -al
You can either:
A) Remove dist-packages from the PYTHONPATH, since you are using GAE, you most
likely don't need it, because 3rd party apps should be included in the app
itself.
B) Remove protobuf and the google package:
sudo pip uninstall protobuf
sudo rm -R /usr/lib/python2.7/dist-packages/google
|
Writing python regex that recognizes all unicode letters
Question: There is no `\p{Ll}\p{Lo}`
[1](http://stackoverflow.com/questions/5224835/what-is-the-proper-regular-expression-to-match-all-utf-8-unicode-lowercase-lette)
in Python, and I'm struggling to write a regular expression that recognizes
Unicode... and doesn't confuse punctuation such as '-' or add funny diacritics
when the script encounters a phonetic mark (like 'ô' or 'طس').
My goal is to label ALL letters (ASCII and any Unicode) as an "A", and a
number [1-9] as a 9.
My current function is:
def multiple_replace(myString):
myString = re.sub(r'(?u)[^\W\d_]|-','A', myString)
myString = re.sub(r'[0-9]', '9', myString)
return myString
The returns I am getting are (notice the inconsistency in how '-' is being
labeled... sometimes as an 'A', sometimes as 'Aœ'):
TX 35-L | AA 99AA
М-21 | AAœA99
A 1 طس | A 9 A~˜A·A~AA
US-50 | AAA99
yeni sinop-erfelek yolu çevre yolu | AAAA AAAAAAAAAAAAA AAAA AƒA§AAAA AAAA
Av Antônio Ribeiro | AA AAAAƒA´AAA AAAAAAA
What I need to get is this:
TX 35-L | AA 99-A
М-21 | A-99
A 1 طس | A 9 AAAAA
US-50 | AA-99
yeni sinop-erfelek yolu çevre yolu | AAAA AAAAAAAAAAAAA AAAA AAAAAAAA AAAA
Av Antônio Ribeiro | AA AAAAAAAAAA AAAAAAA
...is it even possible (with Python re 2.7) to commonly identify ALL UTF-8
characters that ARE NOT common punctuation marks (i.e. '()', ',', '.', '-',
etc.) and NOT 1-9 numbers, without `\p{Ll}\p{Lo}`?
Answer: If using Python 2.7, use Unicode strings. I'm assuming your "What I need"
examples are incorrect, or do you really want `AAAAA` for `طس`? If reading the
strings from a file, decode the strings to Unicode first.
#!python2
#coding: utf8
import re
# Note leading u
data = u'TX 35-L|М-21|A 1 طس|US-50|yeni sinop-erfelek yolu çevre yolu|Av Antônio Ribeiro'.split('|')
for d in data:
r = re.sub(ur'(?u)[^\W\d_]',u'A', d)
r = re.sub(ur'[0-9]', u'9', r)
print d
print r
print
Output:
TX 35-L
AA 99-A
М-21
A-99
A 1 طس
A 9 AA
US-50
AA-99
yeni sinop-erfelek yolu çevre yolu
AAAA AAAAA-AAAAAAA AAAA AAAAA AAAA
Av Antônio Ribeiro
AA AAAAAAA AAAAAAA
|
Adding label to an edge of a graph in nodebox opnegl
Question: I am trying to add a label to each edge in my Graph, below:

Basically the above with labels for each edge at the center:

I've tried to add a label when I add an edge to each graph, like so (for the
graph `g`):
g.add_edge(... label=edge.distance ...)
After some research, I found that such labeling was possible under [Nodebox 1,
which only works for Mac](http://nodebox.net/code/index.php/Graph), there
seems to be no suitable alternative for [Nodebox-
OpenGL](http://www.cityinabottle.org/nodebox/physics/) from the documentation.
The error I receive:
Traceback (most recent call last):
File "C:\foo\bar\baz\Imager.py", line 29, in <module>
g.add_edge(edge.fr, edge.to, length=edge.distance, weight=2, stroke=color(1.0, 0.2, 0.0), label="cheese")
File "C:\Python27\lib\site-packages\nodebox\graphics\physics.py", line 1254, in add_edge
e2 = e2(n1, n2, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'label'
You can reproduce the problem:
from nodebox.graphics import *
from nodebox.graphics.physics import Node, Edge, Graph
# Create a graph with randomly connected nodes.
# Nodes and edges can be styled with fill, stroke, strokewidth parameters.
# Each node displays its id as a text label, stored as a Text object in Node.text.
# To hide the node label, set the text parameter to None.
g = Graph()
# Random nodes.
for i in range(50):
g.add_node(id=str(i+1),
radius = 5,
stroke = color(0),
text = color(0))
# Random edges.
for i in range(75):
node1 = choice(g.nodes)
node2 = choice(g.nodes)
g.add_edge(node1, node2,
length = 1.0,
weight = random(),
stroke = color(0),
label = "Placeholder") #!!!!!!!!!!!!! ADDING THE label HERE
# Two handy tricks to prettify the layout:
# 1) Nodes with a higher weight (i.e. incoming traffic) appear bigger.
for node in g.nodes:
node.radius = node.radius + node.radius*node.weight
# 2) Nodes with only one connection ("leaf" nodes) have a shorter connection.
for node in g.nodes:
if len(node.edges) == 1:
node.edges[0].length *= 0.1
g.prune(depth=0) # Remove orphaned nodes with no connections.
g.distance = 10 # Overall spacing between nodes.
g.layout.force = 0.01 # Strength of the attractive & repulsive force.
g.layout.repulsion = 15 # Repulsion radius.
dragged = None
def draw(canvas):
canvas.clear()
background(1)
translate(250, 250)
# With directed=True, edges have an arrowhead indicating the direction of the connection.
# With weighted=True, Node.centrality is indicated by a shadow under high-traffic nodes.
# With weighted=0.0-1.0, indicates nodes whose centrality > the given threshold.
# This requires some extra calculations.
g.draw(weighted=0.5, directed=True)
g.update(iterations=10)
# Make it interactive!
# When the mouse is pressed, remember on which node.
# Drag this node around when the mouse is moved.
dx = canvas.mouse.x - 250 # Undo translate().
dy = canvas.mouse.y - 250
global dragged
if canvas.mouse.pressed and not dragged:
dragged = g.node_at(dx, dy)
if not canvas.mouse.pressed:
dragged = None
if dragged:
dragged.x = dx
dragged.y = dy
canvas.size = 500, 500
canvas.run(draw)
So, the question remains, how can one add a label to a graph's edge in
Nodebox-OpenGL?
Answer: As you can see in the [source](http://pydoc.net/Python/nodebox-
opengl/1.6/nodebox.graphics.physics/), there is no argument `label` for
`add_edge` (search for `class Edge(object):`).
The best way I can see is to create your own `MyEdge` class derived from the
official `Edge` class, which adds a Text (the `label`) using
txt = Text(str, x=0, y=0, width=None, height=None)
or
textpath(string, x=0, y=0, fontname=None, fontsize=None, fontweight=None)
in the draw() method.
**EDIT** Mind the `add_edge` Methods docstring:
def add_edge(self, id1, id2, *args, **kwargs):
""" Appends a new Edge to the graph.
An optional base parameter can be used to pass a subclass of Edge:
Graph.add_edge("cold", "winter", base=IsPropertyOf)
"""
|
What is the most elegant way to initialize a dictionary consists of chars and digits
Question: I'm looking to create a dictionary in Python whose keys are the chars
'0' to '9', followed by the keys 'a' to 'z', and whose values should be a
counter from 0 to 35,
like this: `dict = {'0':0, '1':1, '2':2, ....., '9':9, 'a':10, .... , 'x':33,
'y':34, 'z':35}`
I managed to write this:
dict = {}
for i in range(10):
dict[str(i)] = i
ord_a = ord('a')
for i in range(0,26):
dict[chr(ord_a + i)] = i+10
Can you help me with a better way to implement it?
And one more thing, `print(dict)` returns an unsorted object:
{'d': 13, 'e': 14, 'f': 15, 'g': 16, 'r': 27, 'a': 10, 'b': 11,
'c': 12, 'l': 21, 'm': 22, 'n': 23, 'o': 24, 'h': 17, 'i': 18,
'j': 19, 'k': 20, '4': 4, '5': 5, '6': 6, '7': 7, '0': 0, '1': 1,
'2': 2, '3': 3, '8': 8, '9': 9, 'z': 35, 't': 29, 'u': 30,
'x': 33, 'v': 31, 'y': 34, 'w': 32, 's': 28, 'p': 25, 'q': 26}
Why's that? I actually initialize it quite sorted, no?
Answer:
import string
keys = string.digits+string.ascii_lowercase
values = range(len(keys))
d = dict(zip(keys,values))
`dicts` have unordered keys. To have ordered keys, use a
[collections.OrderedDict](http://docs.python.org/2/library/collections.html#collections.OrderedDict).
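For example, keeping the same construction as above but preserving insertion order:

    from collections import OrderedDict
    d = OrderedDict(zip(keys, values))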
(Also, never name a variable `dict` or `list`, etc., since this prevents you
from easily accessing the Python built-in of the same name. The built-in is
useful, as you can see above.)
|
how to add rrule to icalendar event in python?
Question: I am trying to create simple recurring events in Python with icalendar
from icalendar import Event
from datetime import datetime
ev = Event()
ev.add('dtstart', datetime(2013,11,22,8))
ev.add('dtend', datetime(2013,11,22,12))
ev.add('rrule', 'freq=daily')
I got this exception: `ValueError: dictionary update sequence element #0
has length 1; 2 is required` on the last line (the one with 'rrule').
Any thoughts? I checked the iCal docs but they don't have many Python examples.
Answer: Looking at src/icalendar/tests/test_timezoned.py :
tzs.add('rrule', {'freq': 'yearly', 'bymonth': 10, 'byday': '-1su'})
# event.add('rrule', u'FREQ=YEARLY;INTERVAL=1;COUNT=10'
So they must have changed their format to a dictionary instead;
ev.add('rrule', {'freq': 'daily'}) works.
|
how to find what events overlap a date in icalendar in python?
Question: The question is pretty much in the title. I have an event:
from icalendar import Event
from datetime import datetime
# every day from 8am to 12pm
ev = Event(dtstart=datetime(2013,11,22,8), dtend=datetime(2013,11,22,12), rrule='freq=daily')
# tomorrow 10am
d = datetime(2013, 11, 23, 10)
Does ev overlap/contain d? What function should I use? Strangely, I
don't find anything in icalendar's unit tests.
Answer: I may be wrong, but IIRC icalendar just does parsing and serialisation of
icalendar files; it doesn't do interpretation of rules and the like.
For that, you want [dateutil](http://labix.org/python-dateutil)'s
dateutil.rrule. And it will only do recurrence rule computation, it doesn't
have an Event interface so you have to perform these steps separately.
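A rough sketch of that recurrence computation with dateutil, assuming the rule, start and duration from the question:

    from datetime import datetime, timedelta
    from dateutil.rrule import rrule, DAILY

    dtstart = datetime(2013, 11, 22, 8)
    duration = timedelta(hours=4)            # 8am - 12pm
    rule = rrule(DAILY, dtstart=dtstart)

    d = datetime(2013, 11, 23, 10)

    # last occurrence starting at or before d
    start = rule.before(d, inc=True)
    print(start is not None and start <= d < start + duration)  # True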
|
Webscraping from directory of HTML files using BS4 and python
Question: I have a website in which each person's details are stored in a separate .html
file. So there are in total 100 people whose details are stored in 100
different .html files, but all have the same HTML structure.
Here is the website link <http://www.coimbatore.com/doctors/home.htm>.
So if you look at this website, there are many categories and the `~all-
doctors.html~` files are in the same directory.
<http://www.coimbatore.com/doctors/cardiology.htm>
has 5 doctors' links. If I click on any doctor's name it takes me to
<http://www.coimbatore.com/doctors/>**thatdoctorname**.htm. So all the files
are in the same directory /doctors/, if I am not wrong. So how do I scrape the
details of each doctor?
I was planning to `wget` all the files from that
<http://www.coimbatore.com/doctors/> URL, save them locally and merge them into
one `whole.html` file using the `join` function in Linux. Is there any better way?
**UPDATE**
letters = ['doctor1','doctor2'...]
for i in range(30):
try:
page = urllib2.urlopen("http://www.coimbatore.com/doctors/{}.htm".format(letters[i]))
except urllib2.HTTPError:
continue
else:
Answer: This code should get you started.
import urllib2
from bs4 import BeautifulSoup
doctors = ['thomas']
for doctor in doctors:
try:
page = urllib2.urlopen("http://www.coimbatore.com/doctors/{}.htm".format(doctor))
soup = BeautifulSoup(page)
except urllib2.HTTPError:
continue
rows = soup.find("table", cellspacing=0).find_all('tr')
for row in rows:
cols = row.find_all('td')
print "%s: %s" % (cols[0].get_text().replace('\n', ' '), cols[1].get_text().replace('\n', ' '))
It has an output of
Name of Doctor: Dr.Thomas Alexander
Qualification: M.D (Internal Medicine), D.M. (Cardiology)
Fellowship & Membership: Fellow of Indian College of Cardiology Associate Fellow
of American College of Cardiology
Address of Clinic / Visiting Hospitals: Kovai Medical Center and Hospital, P.B.N
o.3209, Avanashi Road, Coimbatore-641 014
Telephone Number: +91-422-827784
Consulting Hours: 8am - 5pm
Specialist in: Senior Consultant and Interventional Cardiologist
A few notes that you may wish to deal with differently. I replaced all
newlines (`\n`) with spaces because the code has weird line breaks like so:
<td><b><font face="Arial,Helvetica"><font color="#0000FF"><font size=-1>Name
of Doctor</font></font></font></b></td>
Notice that it forces the break between `Name` and `of`.
If you are attempting to make a CSV out of this, the script can be easily
modified to pull only the second cell on each row.
|
Python error: unorderable types: list()<int()
Question: I keep getting the error unorderable types: list() < int(). What am I doing
wrong and how should I fix it?
My code:
import sys
from List import *
def main():
strings=ArrayToList(sys.argv[1:])
numbers=ListMap(int,strings)
smallest=numbers[0]
for i in range(len(numbers)):
if numbers[i]<smallest:
smallest=numbers[i]
return smallest
print("The smallest is", smallest(numbers))
main()
The error:
Traceback (most recent call last):
File "command.py", line 18, in <module>
main()
File "command.py", line 12, in main
if numbers[i]<smallest:
TypeError: unorderable types: list() < int()
Answer: Looks like you're trying to compare a list with an integer; this is not
possible in Python 3. Make sure that all items of `numbers` are integers.
>>> [] < 1
Traceback (most recent call last):
File "<ipython-input-1-de4ae201066c>", line 1, in <module>
[] < 1
TypeError: unorderable types: list() < int()
|
matplotlib in gtk window with i18n (gettext) support
Question: I am trying to show a matplotlib plot with axes labeled using gettext's
_("label") construct. Trying to create a minimal example, I came up with the
following python code. It runs fine through the NULLTranslations() like this:
python mpl_i18n_test.py
But when I switch to japanese, I get nothing but small squares in the plot --
though on the command-line, the translations look fine:
LANG=ja_JP.utf8 python mpl_i18n_test.py
Here is the file mpl_i18n_test.py. Note that this requires the mona-sazanami
font to be installed, and the various Python modules: pygtk, numpy, matplotlib,
gettext and polib.
So my question: Is there some trick to getting matplotlib play nicely with
gettext? Am I missing something obvious here? Thank you.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import gtk
import numpy as np
import matplotlib as mpl
from matplotlib.figure import Figure
from matplotlib.backends.backend_gtkagg import \
FigureCanvasGTKAgg as FigureCanvas
from matplotlib.backends.backend_gtkagg import \
NavigationToolbar2GTKAgg as NavigationToolbar
import locale
import gettext
import polib
mpl.rcParams['font.family'] = 'mona-sazanami'
def append(po, msg):
occurances = []
for i,l in enumerate(open(__file__,'r')):
if "_('"+msg[0]+"')" in l:
occurances += [(__file__,str(i+1))]
entry = polib.POEntry(msgid=msg[0],
msgstr=msg[1],
occurrences=occurances)
print msg
print occurances
po.append(entry)
def generate_ja_mo_file():
po = polib.POFile()
msgs = [
(u'hello', u'こんにちは'),
(u'good-bye', u'さようなら'),
]
for msg in msgs:
append(po, msg)
po.save('mpl_i18n_test.po')
po.save_as_mofile('mpl_i18n_test.mo')
return 'mpl_i18n_test.mo'
def initialize():
'''prepare i18n/l10n'''
locale.setlocale(locale.LC_ALL, '')
loc,enc = locale.getlocale()
lang,country = loc.split('_')
l = lang.lower()
if l == 'ja':
filename = generate_ja_mo_file()
trans = gettext.GNUTranslations(open(filename, 'rb'))
else:
trans = gettext.NullTranslations()
trans.install()
if __name__ == '__main__':
initialize() # provides _() method for translations
win = gtk.Window(gtk.WINDOW_TOPLEVEL)
win.connect("destroy", lambda x: gtk.main_quit())
win.connect("delete_event", lambda x,y: False)
win.set_default_size(400,300)
win.set_title("Test of unicode in plot")
fig = Figure()
fig.subplots_adjust(bottom=.14)
ax = fig.add_subplot(1,1,1)
xx = np.linspace(0,10,100)
yy = xx*xx + np.random.normal(0,1,100)
ax.plot(xx,yy)
print 'hello --> ', _('hello')
print 'good-bye --> ', _('good-bye')
ax.set_title(u'こんにちは')
ax.set_xlabel(_('hello'))
ax.set_ylabel(_('good-bye'))
can = FigureCanvas(fig)
tbar = NavigationToolbar(can,None)
vbox = gtk.VBox()
vbox.pack_start(can, True, True, 0)
vbox.pack_start(tbar, False, False, 0)
win.add(vbox)
win.show_all()
gtk.main()
Answer: A solution I found was to merely specify unicode when the translation is
"installed." It was a one-line change:
trans.install(unicode=True)
I will add that this is only needed in Python 2.7, but not needed in Python 3.
It looks like Python 2.6 and earlier still have issues with this.
|
How to log everything into a file using RotatingFileHandler by using logging.conf file?
Question: I am trying to use `RotatingFileHandler` for our logging purposes in Python. I
have set the backup count to 500, which means it will create a maximum of 500
files I guess, and the size that I have set is 2000 bytes (not sure what the
recommended size limit is).
If I run my below code, it doesn't log everything into a file. I want to log
everything into a file -
#!/usr/bin/python
import logging
import logging.handlers
LOG_FILENAME = 'testing.log'
# Set up a specific logger with our desired output level
my_logger = logging.getLogger('agentlogger')
# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(LOG_FILENAME, maxBytes=2000, backupCount=100)
# create a logging format
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
my_logger.addHandler(handler)
my_logger.debug('debug message')
my_logger.info('info message')
my_logger.warn('warn message')
my_logger.error('error message')
my_logger.critical('critical message')
# Log some messages
for i in range(10):
my_logger.error('i = %d' % i)
This is what gets printed out in my `testing.log` file -
2013-11-22 12:59:34,782 - agentlogger - WARNING - warn message
2013-11-22 12:59:34,782 - agentlogger - ERROR - error message
2013-11-22 12:59:34,782 - agentlogger - CRITICAL - critical message
2013-11-22 12:59:34,782 - agentlogger - ERROR - i = 0
2013-11-22 12:59:34,782 - agentlogger - ERROR - i = 1
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 2
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 3
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 4
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 5
2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 6
2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 7
2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 8
2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 9
It doesn't print the `INFO` and `DEBUG` messages into the file somehow. Any
thoughts on why it is not working?
Also, right now I have defined everything in this Python file for logging
purposes. I want to define the above things in a `logging conf` file and read it
using the `fileConfig()` function. I am not sure how to use the
`RotatingFileHandler` in the `logging.conf` file.
**UPDATE:-**
Below is my updated Python code that I have modified to use with `log.conf`
file -
#!/usr/bin/python
import logging
import logging.handlers
my_logger = logging.getLogger(' ')
my_logger.config.fileConfig('log.conf')
my_logger.debug('debug message')
my_logger.info('info message')
my_logger.warn('warn message')
my_logger.error('error message')
my_logger.critical('critical message')
# Log some messages
for i in range(10):
my_logger.error('i = %d' % i)
And below is my `log.conf file` \-
[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=DEBUG
handlers=logfile
[logger_zkagentlogger]
level=DEBUG
handlers=logfile
qualname=zkagentlogger
propagate=0
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
level=NOTSET
args=('testing.log',2000,100)
formatter=logfileformatter
But whenever I compile it, this is the error I got on my console -
$ python logtest3.py
Traceback (most recent call last):
File "logtest3.py", line 6, in <module>
my_logger.config.fileConfig('log.conf')
AttributeError: 'Logger' object has no attribute 'config'
Any idea what I am doing wrong here?
Answer: > It doesn't print out INFO, DEBUG message into the file somehow.. Any
> thoughts why it is not working out?
you don't seem to set a loglevel, so the default (warning) is used
from <http://docs.python.org/2/library/logging.html> :
> Note that the root logger is created with level WARNING.
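So for the first script, a minimal fix is to set the level explicitly on your logger before emitting messages, e.g.:

    my_logger.setLevel(logging.DEBUG)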
as for your second question, something like this should do the trick (I
haven't tested it, just adapted from my config which is using the
TimedRotatingFileHandler):
[loggers]
keys=root
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=DEBUG
handlers=logfile
[formatter_logfileformatter]
format=%(asctime)s %(name)-12s: %(levelname)s %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
level=NOTSET
args=('testing.log','a',2000,100)
formatter=logfileformatter
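As for the `AttributeError` in your update: the config file is loaded through the `logging.config` module, not through a `config` attribute on the logger. Roughly:

    import logging
    import logging.config

    logging.config.fileConfig('log.conf')
    my_logger = logging.getLogger()   # the root logger, as configured in log.conf

    my_logger.debug('debug message')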
|
Python: Importing Modules of Modules
Question: I currently have the directory structure
- module
- __init__.py
- foo.py
- bar.py
I want to use function definitions from both `foo.py` and `bar.py`, so I have
written this:
import module
module.foo.fooFunction()
module.bar.barFunction()
However I am getting the error `'module' object has no attribute 'foo'`. What
is the problem here?
Answer: First, it should be `__init__.py`, not `init.py`. Now, `__init__.py` is what
sets up your module's namespace.
If `__init__.py` is empty, then to use `fooFunction` you'd need to import
`foo` too. It doesn't automatically get imported with `module`:
import module.foo
module.foo.fooFunction()
If you don't like that, you could do:
# __init__.py
import foo
import bar
# script
import module
module.foo.fooFunction()
See what happened there? Since `__init__.py` imports `foo`, when you `import
module`, it in turn imports `foo` and `bar` into its namespace. So, when you
go to access it in your script, `module` already has a `foo` submodule
imported into its namespace.
You can even import names directly into the module namespace from foo or bar:
# __init__.py
from foo import fooFunction
# script
import module
module.fooFunction()
|
cant get ndb query results
Question: I just started learning Python ndb. I want to know how I can display students
attending a selected course (filtering Attendance), then mark their attendance
with a radio button for each student, add the attendance value to the previous
one, and finally save the result back to the datastore or print it to a file.
# -*- coding: cp1256 -*-
import webapp2
import os
import cgi
from google.appengine.ext import ndb
from google.appengine.api import users
from google.appengine.api import mail
from google.appengine.ext.webapp import template
class Student(ndb.Model):
id = ndb.IntegerProperty()
name = ndb.StringProperty()
email = ndb.StringProperty()
#courses= ndb.StructuredProperty(Courses, repeated=True) and attendance
class Course(ndb.Model):
code=ndb.StringProperty()
title=ndb.StringProperty()
#time=ndb.TimeProperity
#students= ndb.StructuredProperty(Students, repeated=True)
#attendance
class Attendance(ndb.Model):
courseCode=ndb.StructuredProperty(Course)
#course=ndb.StructuredProperty(Course, repeated=True)
date=ndb.StringProperty()
#studentID=ndb.IntegerProperty(repeated=True)
student=ndb.StructuredProperty(Student, repeated=True)
attendance=ndb.IntegerProperty(repeated=True)# for each student
class MainHandler(webapp2.RequestHandler):
def get(self):
#cerate ndb from file
coursesfile = open('courses.txt', 'r').read()
studentsfile = open('students.txt', 'r').read()
dailyattendancefile = open('dailyattendance.txt','r').read()
for line in coursesfile.split('\n'):
line=coursesfile.split('\t')
#stroe courses to datastore
course=Course(code=line[0],title=line[1])#create Course entity
course.put()
for line in studentsfile.split('\n'):
line=studentsfile.split('\t')
student=Student(id=int(line[0]),name=line[1],email=line[2])
student.put()
for line in dailyattendancefile.split('\n'):
line=dailyattendancefile.split('\t')
attendance=Attendance(courseCode=Course(code=line[0]),date=line[1],student=Student(id=int(line[2])),attendance=int(line[3]))
attendance.put()
#print to html to test
#self.response.out.write("<tr><td>"+ course.code + "</td>")
#self.response.out.write("<td>"+ course.title+ "</td>")
#self.response.out.write("</tr>")
self.response.out.write("""
<html>
<body>
<form method="post" align="left">
<select align="center" name="course_code">
<option value="cs681" selected>CS681</option>
<option value="cs681">CS611</option>
</select>
<input type="submit" value="Submit"/>
""")
def post(self):
#get info from user
coursecode=self.request.get('course_code')
#self.response.out.write(Attendance.courseCode.code)
self.response.out.write("""
<table align="center" >
<tr align="center">
<td>Course code</td>
<td>Student ID</td>
<td>Date</td>
<td>Attendance</td>
</tr>
""")
qry=Attendance.query(Attendance.courseCode.code==coursecode).fetch()
for ent in qry:
self.response.out.write('<tr><td>%s</td></tr>' %ent.courseCode)
self.response.out.write("""
</table>
</form>
</body>
</html>
""")
app = webapp2.WSGIApplication([
('/', MainHandler)
], debug=True)
Answer: Try the following. I think you have a minor issue in your schema definition.
class Course(ndb.Model):
code=ndb.StringProperty(indexed=True)
title=ndb.StringProperty()
|
Scrapy crawl spider stopped working
Question: Prehistory: I'm running Scrapy version 0.16.2 on Python 2.7.2+ and it is on
Linux Mint. A few days ago [I had this
problem](http://stackoverflow.com/questions/20025427/scrapy-crawler-spider-
doesnt-follow-links) and with help I managed to overcome it. For a few moments
the crawler worked as it should:
2013-11-23 01:02:51+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-11-23 01:02:51+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-11-23 01:02:51+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-11-23 01:02:51+0200 [scrapy] DEBUG: Enabled item pipelines:
2013-11-23 01:02:51+0200 [basketsp17] INFO: Spider opened
2013-11-23 01:02:51+0200 [basketsp17] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-11-23 01:02:51+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6024
2013-11-23 01:02:51+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6081
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Redirecting (301) to <GET http://www.euroleague.net/main/results/by-date> from <GET http://www.euroleague.net/main/results/by-date/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date> (referer: None)
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'www.euroleaguebasketball.net': <GET http://www.euroleaguebasketball.net/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'www.eurocupbasketball.com': <GET http://www.eurocupbasketball.com/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'www.euroleague.tv': <GET http://www.euroleague.tv/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'www.euroleaguestore.net': <GET http://www.euroleaguestore.net/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'fantasychallenge.euroleague.net': <GET http://fantasychallenge.euroleague.net/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'www.facebook.com': <GET http://www.facebook.com/TheEuroleague>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'www.youtube.com': <GET http://www.youtube.com/euroleague>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'euroleaguedevotion.ourtoolbar.com': <GET http://euroleaguedevotion.ourtoolbar.com/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'euroleague.synapticdigital.com': <GET http://euroleague.synapticdigital.com/>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'twitter.com': <GET http://twitter.com/Euroleague>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'kort.es': <GET http://kort.es/ulpGt>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Filtered offsite request to 'adserver.itsfogo.com': <GET http://adserver.itsfogo.com/click.aspx?zoneid=136145>
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/> (referer: http://www.euroleague.net/main/results/by-date)
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/devotion/home> (referer: http://www.euroleague.net/main/results/by-date)
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/euroleaguenews/transactions/2013-14-signings> (referer: http://www.euroleague.net/main/results/by-date)
2013-11-23 01:02:51+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/features/blog/2013-2014> (referer: http://www.euroleague.net/main/results/by-date)
But after several runs it stopped crawling. I want to know where the problem is.
If I try the code the next day it works again for a few moments and then stops.
Well, it runs, but it doesn't crawl. If I change start_urls it starts to work
again and stops again with the same code. What could be wrong here?
Here is what I see after it stops:
scrapy crawl basketsp17
2013-11-22 03:07:15+0200 [scrapy] INFO: Scrapy 0.20.0 started (bot: basketbase)
2013-11-22 03:07:15+0200 [scrapy] DEBUG: Optional features available: ssl, http11, boto, django
2013-11-22 03:07:15+0200 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'basketbase.spiders', 'SPIDER_MODULES': ['basketbase.spiders'], 'BOT_NAME': 'basketbase'}
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Enabled item pipelines:
2013-11-22 03:07:16+0200 [basketsp17] INFO: Spider opened
2013-11-22 03:07:16+0200 [basketsp17] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-11-22 03:07:16+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-11-22 03:07:16+0200 [basketsp17] DEBUG: Redirecting (301) to <GET http://www.euroleague.net/main/results/by-date> from <GET http://www.euroleague.net/main/results/by-date/>
2013-11-22 03:07:16+0200 [basketsp17] DEBUG: Crawled (200) <GET http://www.euroleague.net/main/results/by-date> (referer: None)
2013-11-22 03:07:16+0200 [basketsp17] INFO: Closing spider (finished)
2013-11-22 03:07:16+0200 [basketsp17] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 489,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 12181,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/301': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2013, 11, 22, 1, 7, 16, 471690),
'log_count/DEBUG': 8,
'log_count/INFO': 3,
'response_received_count': 1,
'scheduler/dequeued': 2,
'scheduler/dequeued/memory': 2,
'scheduler/enqueued': 2,
'scheduler/enqueued/memory': 2,
'start_time': datetime.datetime(2013, 11, 22, 1, 7, 16, 172756)}
2013-11-22 03:07:16+0200 [basketsp17] INFO: Spider closed (finished)
Here is a code that I am using:
from basketbase.items import BasketbaseItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.http import TextResponse
from scrapy.http import HtmlResponse
class Basketspider(CrawlSpider):
name = "basketsp17"
allowed_domains = ["www.euroleague.net"]
start_urls = ["http://www.euroleague.net/main/results/by-date/"]
rules = (
Rule(SgmlLinkExtractor(allow=('main\/results\/showgame\?gamecode\=/\d$\&seasoncode\=E2013\#!boxscore')),follow=True),
Rule(SgmlLinkExtractor(allow=()),callback='parse_item'),
)
def init_request(self):
return HtmlResponse("http://www.euroleague.net/main/results/by-date/", body = body)
def parse_item(self, response):
sel = HtmlXPathSelector(response)
items=[]
item = BasketbaseItem()
item['date'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Game date
item['time'] = sel.select('//div[@class="gs-dates"]/span[@class="GameScoreTimeContainer"]/text()').extract() # Game time
items.append(item)
return items
Answer: I modified your code to make it work. The changes:

* I don't see the purpose of init_request; at least I don't think anybody is
calling it.
* Override the parse of the CrawlSpider and change the response to
HtmlResponse before passing it to the base parse.
* Again change the response to HtmlResponse in parse_item.

Please understand that we are blindly converting the response to HtmlResponse.
At least you should check that the response is of type "Response" and, if
possible, check for an html tag in the body before converting it to HtmlResponse
(other checks scrapy does, but fails). Also, this conversion might be more
neatly handled in a download middleware: you may try converting it in the
process_response method, given that process_response is handled before the
callback of the spider.
#from basketbase.items import BasketbaseItem
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.http import TextResponse
from scrapy.http import HtmlResponse
class Basketspider(CrawlSpider):
name = "basketsp17"
allowed_domains = ["www.euroleague.net"]
start_urls = ["http://www.euroleague.net/main/results/by-date/"]
rules = (
Rule(SgmlLinkExtractor(allow=('main\/results\/showgame\?gamecode\=/\d$\&seasoncode\=E2013\#!boxscore')),follow=True),
Rule(SgmlLinkExtractor(allow=()),callback='parse_item'),
)
def init_request(self):
print 'init request is called'
return HtmlResponse("http://www.euroleague.net/main/results/by-date/", body = body)
def parse(self,response):
response = HtmlResponse(url=response.url, status=response.status, headers=response.headers, body=response.body)
return super(Basketspider,self).parse(response)
def parse_item(self, response):
response = HtmlResponse(url=response.url, status=response.status, headers=response.headers, body=response.body)
sel = HtmlXPathSelector(response)
items=[]
print 'parse item is called'
#item = BasketbaseItem()
#item['date'] = sel.select('//div[@class="gs-dates"]/text()').extract() # Game date
#item['time'] = sel.select('//div[@class="gs-dates"]/span[@class="GameScoreTimeContainer"]/text()').extract() # Game time
#items.append(item)
return items
I think this problem of yours is a combination of the site not following
standards and scrapy not using the body to build the response. I think we should
raise this issue with scrapy, either as an enquiry or an issue.
|
AlignIO gives 'AssertionError' when reading emboss alignment files
Question: I have been stuck on a problem for three days... searched everywhere, posted
on [Biostar](http://www.biostars.org/post/edit/87226/), still waiting for EMBL
to respond to emails... would make a bounty if I had more rep.
After aligning sequences with EMBOSSwin `needle()` (pairwise global
alignments) I get alignment files in `pair` format, with a `.needle` file
extension. I want to use [Biopython](http://biopython.org/wiki/Main_Page) to
read these alignments for later analysis.
I use `AlignIO.read(open('alignment.needle'),'emboss')` following the
instructions in [Biopython's AlignIO wiki](http://biopython.org/wiki/AlignIO)
but I keep getting an `AssertionError`.
_**My code:**_
>>> from Bio import AlignIO
>>> alignment = AlignIO.read(open("data/all/out/pair1_alignment.needle"), "emboss")
_**My error:**_
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "C:\Python27\lib\Bio\AlignIO\__init__.py", line 423, in read
first = next(iterator)
File "C:\Python27\lib\Bio\AlignIO\__init__.py", line 370, in parse
for a in i:
File "C:\Python27\lib\Bio\AlignIO\EmbossIO.py", line 150, in __next__
assert seq.replace("-", "") != ""
AssertionError
_**Example Alignment File:**_
Download the alignment file
[here](https://www.dropbox.com/s/clxmrsr750xern3/pair1.needle)

_**Versions:**_
* _Windows 7_
* _Python version 2.7.3_
* _Biopython version 1.63_
* _EMBOSS version 2.10.0-0.8_
_**Clues:**_
I suspect this may be related to a warning message I kept getting when
actually making the alignments, which was outputted by EMBOSS `needle()`
function:
Warning: Sequence character string not found in ajSeqCvtKS
Answer: Duplicate post on BioStars, <http://www.biostars.org/p/87226/#87399>
This appears to be down to a subtle change in the EMBOSS output. You have an
extremely old version, EMBOSS version 2.10.0 (February 2005), and your output
file has lines like this:
gag 1288 -------------------------------------------------- 1287
Using a newer version of EMBOSS (e.g. 6.3.0), gives lines like this:
gag 1287 -------------------------------------------------- 1287
The Biopython parser is expecting the latter for alignment sections with no
letters (e.g. when one sequence is much longer than the other), where the
start and end coordinates agree. Please update your copy of EMBOSS, and then
the parser should be happy. The current EMBOSS release is version 6.5.0.
|
How to get the text of a widget/window using python-xlib?
Question: I'm trying to find the whole text that is currently being edited in a gedit
window. First I tried to find out the currently focused gedit tab by
using Xlib.display. Now I have an Xlib.display.window object, and I want to
find out the text that is in that particular window/widget using this window
object.
And my code is like this
import gtk, gobject, Xlib.display
currentFocus=''
def current_focused():
global currentFocus
display = Xlib.display.Display()
window = display.get_input_focus().focus
wmname = window.get_wm_name()
wmclass = window.get_wm_class()
if wmclass is None and wmname is None:
window = window.query_tree().parent
wmname = window.get_wm_name()
if currentFocus!=wmname:
if window.get_wm_class()[0]=='gedit':
print "\nNow you are in : < %s >" % (wmname,)
# Here i have to find the text of the gedit's widget
currentFocus=wmname
return True
gobject.timeout_add(1000, current_focused)
gtk.main()
Is there any API to get the text of a specific widget using
Xlib.display.window? Please help me; I'm completely new in this area.
Thank you.
Answer: WM_CLASS / WM_NAME properties are only used to provide information to a window
manager (hence the WM prefix) and are not usually set on child windows. Check the
[GTK source code](https://git.gnome.org/browse/gtk+/) to see if the edit widget
sets similar properties, but in general it is not possible for an external
process to read edit control text.
|
How to encode stream of bits (not bytes) in Python - is any simple module for it?
Question: I want to encode and decode a **variable**-length, **countable** stream of **bits**
into a **binary string**, a **number**, or a **base64**-encoded string. The maximum
length of the stream will be about 21 + 20 = 41 bits, but it can be a little
longer: 43, 45.
Let's assume that the bits will be represented by some array.
bits = [1]
encoded = someEncoder(bits)
decoded = someDecoder(encoded)
assert bits == decoded
The stream of bits can be longer, for example:
import random
def generateRandomBits(l):
bitsBytes = random.getrandbits(l)
bits = []
for i in range(l):
bitsBytes
bits.append(int(bitsBytes >> i & 1))
return bits
bits = generateRandomBits(21)
# [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
Consider that it is a **variable**-length list of bits and encode/decode must be
**lossless**:
a = someEncoder([0])
b = someEncoder([0, 0])
assert a != b
Answer: To turn a list of integers representing bits into a single integer, use
bit shifting:
result = 0
for bit in bits:
result = result << 1 | bit
or with `reduce()`:
result = reduce(lambda n, b: n << 1 | b, bits)
This can be turned into a binary string with `format(result, 'b')`, optionally
with width and `0` prefixed to pad with zeros to a certain length.
Demo:
>>> bits = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
>>> reduce(lambda n, b: n << 1 | b, bits)
934809
>>> result = 0
>>> for bit in bits:
... result = result << 1 | bit
...
>>> result
934809
>>> format(result, '021b')
'011100100001110011001'
If you need to go straight to the binary string, just map the bits to strings
and join:
>>> ''.join(map(str, bits))
'011100100001110011001'
Padding this further to 64 characters could be done with `str.zfill()`:
>>> ''.join(map(str, bits)).zfill(64)
'0000000000000000000000000000000000000000000011100100001110011001'
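Going back the other way is the same shifting in reverse. A small sketch, assuming you keep track of the original bit length separately (a bare integer alone cannot distinguish `[0]` from `[0, 0]`):

    def decode(n, length):
        # rebuild the bit list from the integer, most significant bit first
        return [(n >> i) & 1 for i in reversed(range(length))]

    bits = [0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1]
    n = reduce(lambda acc, b: acc << 1 | b, bits)
    assert decode(n, len(bits)) == bits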
|
How can I get the number of lives change (pygame)?
Question: I'm new to Python and pygame and so far I've managed to get everything working,
but I can't work out how to make my lives go down.
If you haven't worked it out, it's a simple fruit-catching game.
I've managed to make my score go up. I've tried saying if the fruit is below a
certain x coordinate, take away a life, but it doesn't work.
import time
import random
import pygame
from pygame import*
pygame.init()
myname=input('What is your name')
#set the window size
window= pygame.display.set_mode((800,600) ,0,24)
pygame.display.set_caption("Fruit Catch")
#game variables
myscore=0
mylives=3
mouth_x=300
fruit_x=250
fruit_y=75
fruitlist=['broccoli.gif','chicken.gif']
#prepare for screen
myfont=pygame.font.SysFont("Britannic Bold", 55)
label1=myfont.render(myname, 1, (240, 0, 0))
label3=myfont.render(str(mylives), 1, (20, 255, 0))
#grapchics
fruit=pygame.image.load('data/chicken.png')
mouth=pygame.image.load('data/v.gif')
backGr=pygame.image.load('data/kfc.jpg')
#endless loop
running=True
while running:
if fruit_y>=460:#check if at bottom, if so prepare new fruit
fruit_x=random.randrange(50,530,1)
fruit_y=75
fruit=pygame.image.load('data/'+fruitlist[random.randrange(0,2,1)])
else:fruit_y+=5
#check collision
if fruit_y>=440:
if fruit_x>=mouth_x and fruit_x<=mouth_x+300 :
myscore+=1
fruit_y=600#move it off screen
pygame.mixer.music.load('data/eating.wav')
#detect key events
for event in pygame.event.get():
if (event.type==pygame.KEYDOWN):
if (event.key==pygame.K_LEFT):
mouth_x-=55
if (event.key==pygame.K_RIGHT):
mouth_x+=55
label2=myfont.render(str(myscore), 1, (20, 255, 0))
window.blit(backGr,(0,0))
window.blit(mouth, (mouth_x,440))
window.blit(fruit,(fruit_x, fruit_y))
window.blit(label1, (174, 537))
window.blit(label2, (700, 157))
window.blit(label3, (700, 400))
pygame.display.update()
Answer: When checking for the collision, use `else`.
#check collision
if fruit_y>=440:
if fruit_x>=mouth_x and fruit_x<=mouth_x+300 :
myscore+=1
fruit_y=600#move it off screen
pygame.mixer.music.load('data/eating.wav')
else:
mylives-= 1
That should work.
|
Send html email with python
Question: I tried to send an email with HTML text using Python.
The HTML text is loaded from an HTML file:
ft = open("a.html", "r", encoding = "utf-8")
text = ft.read()
ft.close()
And afterwards, I send the email:
message = "From: %s\r\nTo: %s\r\nMIME-Version: 1.0\nContent-type: text/html\r\nSubject:
%s\r\n\r\n%s" \
% (sender,receiver,subject,text)
try:
smtpObj = smtplib.SMTP('smtp.gmail.com:587')
smtpObj.starttls()
smtpObj.login(username,password)
smtpObj.sendmail(sender, [receiver], message)
print("\nSuccessfully sent email")
except SMTPException:
print("\nError unable to send email")
I got this error:
Traceback (most recent call last):
File "C:\Users\Henry\Desktop\email_prj\sendEmail.py", line 54, in <module>
smtpObj.sendmail(sender, [receiver] + ccn, message)
File "C:\Python33\lib\smtplib.py", line 745, in sendmail
msg = _fix_eols(msg).encode('ascii')
UnicodeEncodeError: 'ascii' codec can't encode character '\xe0' in position 1554:
ordinal not in range(128)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Henry\Desktop\email_prj\sendEmail.py", line 56, in <module>
except SMTPException:
NameError: name 'SMTPException' is not defined
How can I solve this problem? Thanks.
Answer: > NameError: name 'SMTPException' is not defined
This is because in your current context, SMTPException doesn't stand for
anything.
You'll need to do:
except smtplib.SMTPException:
* * *
Also, note that building the headers by hand is a bad idea. Can't you use
inbuilt modules?
The below is a copy-paste of _relevant parts_ from one of my projects.
from email.MIMEMultipart import MIMEMultipart
from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText
....
....
....
msg = MIMEMultipart()
msg['From'] = self.username
msg['To'] = to
msg['Subject'] = subject
msg.attach(MIMEText(text))
mailServer = smtplib.SMTP("smtp.gmail.com", 587)
mailServer.ehlo()
mailServer.starttls()
mailServer.ehlo()
mailServer.login(self.username, self.password)
mailServer.sendmail(self.username, to, msg.as_string())
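For your particular case (an HTML body containing non-ASCII characters such as '\xe0'), you would presumably also want to spell out the subtype and charset when attaching the body, e.g. replacing the `msg.attach(MIMEText(text))` line above with:

    msg.attach(MIMEText(text, 'html', 'utf-8'))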
|
Compatibility with matplotlib, python and pandas on RHEL6
Question: I have a manual install of numpy, matplotlib and pandas; basic tests seem to
work fine.
Versions here:
Numpy 1.8.0
Matplotlib 1.3.1
Python 2.6.6
Pandas 0.12.0
When I run this code on this platform (RHEL 6.4) I get the following stack
trace.
'plot'.format(numeric_data.__class__.__name__))
TypeError: Empty 'DataFrame': no numeric data to plot
The same code runs fine on Fedora 19 without having to deal with any dtype
issues, and on that platform I have matplotlib 1.2.1, numpy 1.7.1 and Python
2.7.4.
So will this not work on the RHEL 6.4 Python version?
## Code snippet
#!/usr/bin/python
### Get the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pandas import *
disk_data = read_csv('collectl.sD.fullday.clean', sep=' ', index_col=1, parse_dates=True)
sda_io = disk_data[['sda-Reads','sda-Writes']]
print sda_io[:50]
sda_io[:1000].plot(grid='on')
plt.show()
## Trace
Traceback (most recent call last):
File "./parse-collectl.py", line 19, in <module>
sda_io[:1000].plot(grid='on')
File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 1636, in plot_frame
plot_obj.generate()
File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 854, in generate
self._compute_plot_data()
File "/usr/lib64/python2.6/site-packages/pandas/tools/plotting.py", line 949, in _compute_plot_data
'plot'.format(numeric_data.__class__.__name__))
TypeError: Empty 'DataFrame': no numeric data to plot
Answer: Thanks to alko, who asked for a smaller dataset, I found an issue with the
dataset that the newer Fedora 19 stack was ignoring.
For other folks, be aware of silent dataset repair that seems to take place on
the newer stack on Fedora 19.
The stack I have on RHEL 6.4 below works fine, so this is solved: Numpy 1.8.0,
Matplotlib 1.3.1, Python 2.6.6, Pandas 0.12.0.
|
Using Soundcloud Python library in Google App Engine - what files do I need to move?
Question: I want to use the soundcloud python library in a web app I am developing in
Google App Engine. However, I can't find any file called "soundcloud.py" in
the soundcloud library files I downloaded. When using pip install it works
fine on my local computer.
What files exactly do I need to move - or what exact steps do I need to take -
in order to be able to "import soundcloud" within Google App Engine?
I already tried moving all the *.py files into my main app directory, but
still got this error: import soundcloud ImportError: No module named
soundcloud
Answer: You need to include your `soundcloud` external library in your app package.
Where exactly to place it within your application is fairly dependent on the
frameworks your application uses, but `soundcloud` must exist on your
PYTHONPATH somewhere.
Also, `soundcloud` isn't a solitary library. You'll also need
[fudge](http://farmdev.com/projects/fudge/) and `simplejson` (included in
App Engine since 1.4.2). Just grab both package folders from your local pip
install, and stick them somewhere in your application that's already on
your path.
To see where your local installed pip folders are, just fake that you're
uninstalling one of the packages. On my machine, this looks like this:
sparker% sudo pip uninstall fudge
Password:
Uninstalling fudge:
/Library/Python/2.7/site-packages/fudge
/Library/Python/2.7/site-packages/fudge-1.0.3-py2.7.egg-info
Proceed (y/n)? n
You only need to grab the `fudge` folder to include in your application, but
I'd take both. It's always nice later to know which versions of the packages
you're bundling with your app, and that's the easiest way to know.
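One common layout, assuming you copy the `soundcloud` and `fudge` package folders into a `lib/` directory inside the app, is to put that directory on the path before importing (the directory name is just a convention):

    import os
    import sys

    # make ./lib importable so the bundled third-party packages resolve
    sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

    import soundcloud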
|
Downloading Links with Python
Question: I have two sets of scripts: one to download a webpage and another to download
links from the webpage. They both run, but the links script doesn't return any
links. Can anyone see or tell me why?
Webpage script:
import sys, urllib
def getWebpage(url):
print '[*] getWebpage()'
url_file = urllib.urlopen(url)
page = url_file.read()
return page
def main():
sys.argv.append('http://www.bbc.co.uk')
if len(sys.argv) != 2:
print '[-] Usage: webpage_get URL'
return
else:
print getWebpage(sys.argv[1])
if __name__ == '__main__':
main()
Links script:
import sys, urllib, re
import getWebpage
def print_links(page):
print '[*] print_links()'
links = re.findall(r'\<a.*href\=.*http\:.+', page)
links.sort()
print '[+]', str(len(links)), 'HyperLinks Found:'
for link in links:
print link
def main():
sys.argv.append('http://www.bbc.co.uk')
if len(sys.argv) != 2:
print '[-] Usage: webpage_links URL'
return
page = webpage_get.getWebpage(sys.argv[1])
print_links(page)
Answer: This will fix most of your problems:
import sys, urllib, re
def getWebpage(url):
print '[*] getWebpage()'
url_file = urllib.urlopen(url)
page = url_file.read()
return page
def print_links(page):
print '[*] print_links()'
links = re.findall(r'\<a.*href\=.*http\:.+', page)
links.sort()
print '[+]', str(len(links)), 'HyperLinks Found:'
for link in links:
print link
def main():
site = 'http://www.bbc.co.uk'
page = getWebpage(site)
print_links(page)
if __name__ == '__main__':
main()
Then you can move on to fixing your regular expression.
While we are on the topic, though, I have two material recommendations:
* [use python library `requests`](http://stackoverflow.com/questions/22676/how-do-i-download-a-file-over-http-using-python/10744565#10744565) for getting web pages
* [use a real XML/HTML library for parsing HTML](http://stackoverflow.com/questions/2782097/python-is-there-a-built-in-package-to-parse-html-into-dom/2782492#2782492) (recommend `lxml`)
|
Django 1.6 upgrade: "cannot import name BaseHandler"
Question: I am trying to upgrade from Django 1.5.5 to Django 1.6. Everything tests fine,
but when I try to run my django project, I get the following error:
ValueError: Unable to configure handler 'mail_admins': Cannot resolve 'vbenergyzone.core.utils.log.StaffSuperuserEmailHandler': cannot import name BaseHandler
Why?
Here is what I have tried:
* I have removed my custom logging handler code completely. The project runs fine without any errors.
* I have replaced my custom logging handler code and replaced it with the default handler code. I get the same error.
* I have tried Django 1.6c1 and get the same error.
* I have created a ticket: <https://code.djangoproject.com/ticket/21502>.
Here is my stacktrace:
20:29:03 web.1 | started with pid 1501
20:29:03 worker.1 | started with pid 1504
20:29:03 web.1 | 2013-11-23 20:29:03 [1503] [INFO] Starting gunicorn 18.0
20:29:03 web.1 | 2013-11-23 20:29:03 [1503] [INFO] Listening at: http://192.168.50.4:5000 (1503)
20:29:03 web.1 | 2013-11-23 20:29:03 [1503] [INFO] Using worker: sync
20:29:03 web.1 | 2013-11-23 20:29:03 [1518] [INFO] Booting worker with pid: 1518
20:29:04 web.1 | 2013-11-23 14:29:04 [1518] [ERROR] Exception in worker process:
20:29:04 web.1 | Traceback (most recent call last):
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
20:29:04 web.1 | worker.init_process()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 106, in init_process
20:29:04 web.1 | self.wsgi = self.app.wsgi()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 114, in wsgi
20:29:04 web.1 | self.callable = self.load()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 62, in load
20:29:04 web.1 | return self.load_wsgiapp()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load_wsgiapp
20:29:04 web.1 | return util.import_app(self.app_uri)
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/util.py", line 354, in import_app
20:29:04 web.1 | __import__(module)
20:29:04 web.1 | File "/vagrant/vbenergyzone/wsgi.py", line 23, in <module>
20:29:04 web.1 | from django.core.wsgi import get_wsgi_application
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/core/wsgi.py", line 1, in <module>
20:29:04 web.1 | from django.core.handlers.wsgi import WSGIHandler
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 11, in <module>
20:29:04 web.1 | from django.core.handlers import base
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 12, in <module>
20:29:04 web.1 | from django.db import connections, transaction
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/db/__init__.py", line 83, in <module>
20:29:04 web.1 | signals.request_started.connect(reset_queries)
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 88, in connect
20:29:04 web.1 | if settings.DEBUG:
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
20:29:04 web.1 | self._setup(name)
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/conf/__init__.py", line 50, in _setup
20:29:04 web.1 | self._configure_logging()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/conf/__init__.py", line 80, in _configure_logging
20:29:04 web.1 | logging_config_func(self.LOGGING)
20:29:04 web.1 | File "/usr/lib/python2.7/logging/config.py", line 777, in dictConfig
20:29:04 web.1 | dictConfigClass(config).configure()
20:29:04 web.1 | File "/usr/lib/python2.7/logging/config.py", line 575, in configure
20:29:04 web.1 | '%r: %s' % (name, e))
20:29:04 web.1 | ValueError: Unable to configure handler 'mail_admins': Cannot resolve 'vbenergyzone.core.utils.log.StaffSuperuserEmailHandler': cannot import name BaseHandler
20:29:04 web.1 | Traceback (most recent call last):
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
20:29:04 web.1 | worker.init_process()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/workers/base.py", line 106, in init_process
20:29:04 web.1 | self.wsgi = self.app.wsgi()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 114, in wsgi
20:29:04 web.1 | self.callable = self.load()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 62, in load
20:29:04 web.1 | return self.load_wsgiapp()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load_wsgiapp
20:29:04 web.1 | return util.import_app(self.app_uri)
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/gunicorn/util.py", line 354, in import_app
20:29:04 web.1 | __import__(module)
20:29:04 web.1 | File "/vagrant/vbenergyzone/wsgi.py", line 23, in <module>
20:29:04 web.1 | from django.core.wsgi import get_wsgi_application
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/core/wsgi.py", line 1, in <module>
20:29:04 web.1 | from django.core.handlers.wsgi import WSGIHandler
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 11, in <module>
20:29:04 web.1 | from django.core.handlers import base
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 12, in <module>
20:29:04 web.1 | from django.db import connections, transaction
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/db/__init__.py", line 83, in <module>
20:29:04 web.1 | signals.request_started.connect(reset_queries)
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 88, in connect
20:29:04 web.1 | if settings.DEBUG:
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
20:29:04 web.1 | self._setup(name)
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/conf/__init__.py", line 50, in _setup
20:29:04 web.1 | self._configure_logging()
20:29:04 web.1 | File "/home/vagrant/.virtualenvs/VBEZ/local/lib/python2.7/site-packages/django/conf/__init__.py", line 80, in _configure_logging
20:29:04 web.1 | logging_config_func(self.LOGGING)
20:29:04 web.1 | File "/usr/lib/python2.7/logging/config.py", line 777, in dictConfig
20:29:04 web.1 | dictConfigClass(config).configure()
20:29:04 web.1 | File "/usr/lib/python2.7/logging/config.py", line 575, in configure
20:29:04 web.1 | '%r: %s' % (name, e))
20:29:04 web.1 | ValueError: Unable to configure handler 'mail_admins': Cannot resolve 'vbenergyzone.core.utils.log.StaffSuperuserEmailHandler': cannot import name BaseHandler
Here is my logging configuration:
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'formatters': {
'verbose': {
'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
},
'simple': {
'format': '%(levelname)s %(message)s'
},
},
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'django.utils.log.NullHandler',
},
'console':{
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'simple'
},
'mail_admins': {
'level': 'ERROR',
# 'class': 'django.utils.log.AdminEmailHandler',
'class': 'vbenergyzone.core.utils.log.StaffSuperuserEmailHandler',
'filters': [],
}
},
'loggers': {
'django': {
'handlers': ['null'],
'propagate': True,
'level': 'INFO',
},
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
'vbenergyzone': {
'handlers': ['console', 'mail_admins'],
'level': 'INFO',
}
}
}
Here is my custom logger:
"""
Logging utilities
"""
import logging
import traceback
from django.contrib.sites.models import Site
from django.views.debug import get_exception_reporter_filter
from vbenergyzone.core import send_message
# ==============================================================================
class StaffSuperuserEmailHandler(logging.Handler):
def emit(self, record):
try:
request = record.request
domain = Site.objects.get_current().domain
subject = 'User %s experienced an error on %s: %s' % \
(request.user, domain, record.getMessage())
filter = get_exception_reporter_filter(request)
request_repr = filter.get_request_repr(request)
except Exception:
subject = '%s: %s' % (
record.levelname,
record.getMessage(),
)
request = None
request_repr = "Request repr() unavailable."
if record.exc_info:
exc_info = record.exc_info
stack_trace = \
'\n'.join(traceback.format_exception(*record.exc_info))
else:
exc_info = (None, record.getMessage(), None)
stack_trace = 'No stack trace available'
context_object = {
'message': record.getMessage(),
'request': request,
'request_repr': request_repr,
'stack_trace': stack_trace,
}
send_message(
context_object=context_object,
html_template='core/email/staff_super_email_handler.html',
send_to_staff_and_superusers=True,
subject=self.format_subject(subject),
text_template='core/email/staff_super_email_handler.txt',
)
def format_subject(self, subject):
"""
Escape CR and LF characters, and limit length.
RFC 2822's hard limit is 998 characters per line. So, minus "Subject: "
the actual subject must be no longer than 989 characters.
"""
formatted_subject = subject.replace('\n', '\\n').replace('\r', '\\r')
return formatted_subject[:989]
Answer: This is a known bug with a patch that will be coming out with Django 1.6.1. In
the meantime, this commit includes the patch:
<https://github.com/django/django/commit/432de546113942be60089a879371073ad09fb4fe>.
Thanks to claudep and bmispelon at the Django project for helping me with
this!
|
Python string compare error
Question: I am getting the following error when converting my binary d.type_str variable
to 'bid' or 'ask'. Thanks for the help guys! I'm using python 2.7
My code:
from itertools import izip_longest
import itertools
import pandas
import numpy as np
all_trades = pandas.read_csv('C:\\Users\\XXXXX\\april_trades.csv', parse_dates=[0], index_col=0)
usd_trades = all_trades[all_trades['d.currency'] == 'USD']
volume = (usd_trades['d.amount_int'])
trades = (usd_trades['d.price_int'])
def cleanup(x):
if isinstance(x, str) and 'e-' in x:
return 0
else:
return float(x)
volume = volume.apply(lambda x: cleanup(x))
volume = volume.astype(float32)
#####
typestr = (usd_trades['d.type_str'])
typestr[typestr == 'bid'] = 0
typestr[typestr == 'ask'] = 1
Error output:
>>> typestr[typestr == 'ask'] = 1
File "C:\Anaconda\lib\site-packages\pandas\core\series.py", line 240, in wrapper
% type(other))
TypeError: Could not compare <type 'str'> type with Series
>>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Answer: As you stated, your `typestr` is binary (numeric). Pandas complains when you
try to compare a string against a Series holding `int` data; see:
>>> s = pd.Series([1], dtype=np.int64)
>>> s == 'a'
Traceback (most recent call last):
...
TypeError: Could not compare <type 'str'> type with Series
From your text I guess you instead want to do:
>>> typestr[typestr == 1] = 'ask'
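Whichever direction you need the conversion, a minimal sketch using `Series.map`
avoids the mixed-type comparison entirely (assuming the column only ever holds
the two values):
    # int -> label, matching the assignment above; swap keys and values for the reverse
    typestr = usd_trades['d.type_str'].map({0: 'bid', 1: 'ask'})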
|
How to make a script execute just after the Python shell starts and before the user's input?
Question: In Pythonxy I can put some Python scripts in a pre-defined folder so that
they will be executed (or imported? I don't know) at startup. For example, if I put a
script in that folder:
import numpy as np
import scipy as sp
Then after I enter python shell, I can conveniently use np and sp as quick
references.
Now the problem is, how can I use such a technique in the standard Python shell,
without Pythonxy's support? Can I write a script to handle the pre-imports? I think it
would be very useful.
Answer: You can do this by setting up a Python startup file and pointing the `PYTHONSTARTUP`
environment variable at that file. In the startup file, just write:
import numpy as np
import scipy as sp
Then, every time you enter the Python shell, it will automatically import the
above modules.
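For example, on a Unix-like system you might add `export PYTHONSTARTUP=~/.pythonstartup`
to your shell profile (the filename is just an illustration) and put something like
this in that file; note that it only runs for interactive sessions, not for scripts:
    # ~/.pythonstartup
    import numpy as np
    import scipy as sp
    print 'numpy and scipy pre-imported as np and sp'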
Here is an example: [Installing pythonstartup file](http://stackoverflow.com/questions/5837259/installing-pythonstartup-file)
|
Using python to read text files and answer questions
Question: I have this file animallog1.txt which contains information that I would like
to use to answer questions using Python. How would I import the file into
Python so I can use it later? I tried
with open('animallog1.txt', 'r') as myfile
but this is not working and just outputs "no such file or directory", even though
I am pretty sure the file exists.
animal_names, dates, locations = [], [], []
with open('animallog1.txt', 'r') as myfile:
for line in myfile:
animal_name, date, location = line.strip().split(':')
animal_names.append(animal_name)
dates.append(date)
locations.append(location)
print(animal_names)
print(dates)
print(locations)
So this is the code that I have; animallog1.txt is the name of the file that I
want to use.
However, my output is:
Traceback (most recent call last):
File "None", line 3, in <module>
builtins.FileNotFoundError: [Errno 2] No such file or directory: 'animallog1.txt'
How can I fix this?
Answer: Make sure your path is correct (relative paths are resolved against the current working directory), or try this:
file = open("sample.txt")
Here `sample.txt` is present in the current working directory; on my Ubuntu machine this works.
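A minimal sketch of how to diagnose and work around the path issue (assuming the
text file sits next to your script):
    import os
    print(os.getcwd())  # the directory Python is actually running from
    # Build an absolute path relative to the script itself, so it works no
    # matter which directory you launch Python from.
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'animallog1.txt')
    with open(path, 'r') as myfile:
        for line in myfile:
            print(line.strip())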
|
Multiple Files handling with Codependency - Python
Question: I just finished the tutorial for making a rogue-like game and I'm on my way to
implementing features.
The problem is, the whole game is a single file with 1k+ lines.
As you can see:
<http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod,_part_13_code>
I want to divide it into different files/folders to make the implementation easier
to manage. Maybe a file for each aspect of the game like map/player/npcs/items...
but at least divide it into Classes/Functions/Main.
The problem is, when I put this in the Main.py:
from Classes import *
from Functions import *
I get
NameError: name 'libtcod' is not defined
Which is a Module used in Main. Then I tried to import the Main.py in the
Classes.py and Functions.py
And get the
NameError: global name 'MAP_WIDTH' is not defined
MAP_WIDTH is a global variable in Main.py
I also tried to import the whole Main.py in Classes.py and Functions.py
But I get:
NameError: name 'Fighter' is not defined
Fighter is a Class inside Classes.py
Can anyone help me sort this out so I can start implementing features?
EDIT: One simple example is:
Main.py
from Functions import *
def plus_one(ab):
return ab +1
a = 1
b = 2
c = add_things()
print plus_one(c)
Functions.py
from Main import *
def add_things():
global a,b
return a + b
It's a simple example, but in the project there is a lot of mutual dependency
between classes/functions and the main file.
Answer: There are many issues with your code and your planned program architecture.
Please read my comment on your post. You need to shore up your knowledge of
object-oriented programming.
First, it is highly recommended to never use `from Classes import *`. You
should use `import Classes`. Then, to access functions or constants from the
module, you would use `Classes.function_name` or `Classes.constant`. See this for
more info on how to properly import in Python: <http://effbot.org/zone/import-confusion.htm>
Second, global variables are not recommended in Python. But if you do need
them, remember that in Python a "global variable" means global to a module,
not to your entire program. Global variables in Python are a strange beast:
if you only need to read a global variable, nothing special is required;
however, if you want to modify a global variable from within a function,
you must use the `global` keyword.
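A minimal sketch of that (names are illustrative):
    counter = 0
    def bump():
        global counter  # needed because the function rebinds the module-level name
        counter += 1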
Thirdly, what you have is called a circular dependency: Module A imports
Module B, and Module B imports Module A. Instead, you can define shared functions,
classes, etc. in a third Module C; then both A and B can import Module C. You
can also define constants like `MAP_WIDTH` in module C and access them
from A or B as `C.MAP_WIDTH`, provided you have an `import C`.
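A minimal sketch of that layout (module and constant names here are purely illustrative):
    # settings.py -- the shared "module C": constants only, imports nothing
    MAP_WIDTH = 80
    MAP_HEIGHT = 45
    # classes.py
    import settings
    class Fighter(object):
        def __init__(self):
            self.x = settings.MAP_WIDTH // 2
    # main.py
    import settings
    import classes
    player = classes.Fighter()
    print player.x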
|
Python CFFI module fails when loading dll: OSError 0x7e
Question: I run Python 3.3 (Anaconda distribution) under Windows 7, 64-bit. I have
attempted to install the Weasyprint app/library, which has a number of
dependencies, including CFFI, which I had to compile from source because no
compatible version of it was available in a binary distribution.
When I run weasyprint, it chokes during the import loading process,
specifically when it calls CFFI in order to load the GTK+ library dll for
Cairo. The error that I get is as follows:
$ weasyprint
Traceback (most recent call last):
File "c:\anaconda\envs\py33\lib\site-packages\cffi-0.8-py3.3-win-amd64.egg\cffi\api.py", line 399, in _make_ffi_library
backendlib = backend.load_library(name, flags)
OSError: cannot load library libcairo-2.dll: error 0x7e
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Anaconda\envs\py33\Scripts\weasyprint-script.py", line 9, in <module>
load_entry_point('WeasyPrint==0.20', 'console_scripts', 'weasyprint')()
File "C:\Anaconda\envs\py33\lib\site-packages\pkg_resources.py", line 343, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "C:\Anaconda\envs\py33\lib\site-packages\pkg_resources.py", line 2355, in load_entry_point
return ep.load()
File "C:\Anaconda\envs\py33\lib\site-packages\pkg_resources.py", line 2061, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "c:\anaconda\envs\py33\lib\site-packages\weasyprint-0.20-py3.3.egg\weasyprint\__init__.py", line 309, in <module>
from .css import PARSER, preprocess_stylesheet
File "c:\anaconda\envs\py33\lib\site-packages\weasyprint-0.20-py3.3.egg\weasyprint\css\__init__.py", line 30, in <module>
from . import computed_values
File "c:\anaconda\envs\py33\lib\site-packages\weasyprint-0.20-py3.3.egg\weasyprint\css\computed_values.py", line 18, in <module>
from .. import text
File "c:\anaconda\envs\py33\lib\site-packages\weasyprint-0.20-py3.3.egg\weasyprint\text.py", line 18, in <module>
import cairocffi as cairo
File "c:\anaconda\envs\py33\lib\site-packages\cairocffi-0.5.1-py3.3.egg\cairocffi\__init__.py", line 39, in <module>
cairo = dlopen(ffi, 'libcairo-2.dll', 'cairo', 'libcairo-2')
File "c:\anaconda\envs\py33\lib\site-packages\cairocffi-0.5.1-py3.3.egg\cairocffi\__init__.py", line 34, in dlopen
return ffi.dlopen(names[0]) # pragma: no cover
File "c:\anaconda\envs\py33\lib\site-packages\cffi-0.8-py3.3-win-amd64.egg\cffi\api.py", line 117, in dlopen
lib, function_cache = _make_ffi_library(self, name, flags)
File "c:\anaconda\envs\py33\lib\site-packages\cffi-0.8-py3.3-win-amd64.egg\cffi\api.py", line 405, in _make_ffi_library
backendlib = backend.load_library(path, flags)
OSError: cannot load library C:\Windows\system32\libcairo-2.dll: error 0x7e
The environment I have is as follows: Windows 7.1 64-bit, Python 3.3 64 bit,
CFFI compiled (by me) under Visual Studio 2010 with a 64-bit environment, and
Cairo's libcairo-2.dll also in a 64-bit version.
I am not a Windows programmer, and am only delving into this mess because I
want to get Weasyprint to work for another (Python language) project. I have
done a minor bit of Windows programming a long time ago under Delphi, so I
have a vague grasp of how this stuff works, but I have been unable to solve
this problem.
Answer: I was getting similar errors (conflicting DLLs), and it was finally resolved
simply by moving the GTK path (e.g. "C:\gtk\bin") to the beginning of my
PATH environment variable.
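If you would rather not change the system-wide setting, a rough sketch of the same
idea done from Python, before anything tries to load the DLL (the GTK path below is
just an example):
    import os
    # Put the GTK runtime's bin directory first so its DLLs win over any
    # conflicting copies elsewhere on the PATH.
    os.environ['PATH'] = r'C:\gtk\bin' + os.pathsep + os.environ.get('PATH', '')
    import cairocffi as cairo  # the dlopen of libcairo-2.dll should now resolve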
|
Python - Pygame Name Error: name 'display_s' is not defined. (bug?) + How do I get variables from inside 'scopes'
Question: I recently began working on a pygame project and came across this error:
Traceback (most recent call last):
File "GameTesting.py", line 50, in <module>
screen.blit(display_s, (space_ship_rect.centerx, space_ship_rect.centery))
NameError: name 'display_s' is not defined
Most of the time it pops up, BUT sometimes it doesn't come up
with an error and runs perfectly fine. Here's the code (I commented the parts
that are important to this thread):
import sys, pygame, math, time;
from pygame.locals import *;
spaceship = ('spaceship.png')
mouse_c = ('crosshair.png')
backg = ('background.jpg')
fire_beam = ('beams.png')
pygame.init()
screen = pygame.display.set_mode((800, 600))
bk = pygame.image.load(backg).convert_alpha()
mousec = pygame.image.load(mouse_c).convert_alpha()
space_ship = pygame.image.load(spaceship).convert_alpha()
f_beam = pygame.image.load(fire_beam).convert_alpha()
f_beam = pygame.transform.scale(f_beam, (50, 50))
f_beam_rect = f_beam.get_rect()
clock = pygame.time.Clock()
pygame.mouse.set_visible(False)
space_ship_rect = space_ship.get_rect()
space_ship_rect.centerx = 375
space_ship_rect.centery = 300
speed = 3.5
pressed_down = 0
while True:
clock.tick(60)
screen.blit(bk, (0, 0))
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
elif event.type == MOUSEBUTTONDOWN and event.button == 1:
global movex
global movey
global degs #HERE is where I need movex, movey, degs from below..
#This probably won't work, don't even know what global does...
elif event.type == MOUSEBUTTONDOWN and event.button == 3:
pressed_down = 1
elif event.type == MOUSEBUTTONUP:
pressed_down = 0
if pressed_down == 1:
x, y = pygame.mouse.get_pos()
x1, y1 = x - space_ship_rect.x, y - space_ship_rect.y
angle = math.atan2(y1, x1)
dx = speed*math.cos(angle)
dy = speed*math.sin(angle)
movex = space_ship_rect.centerx = space_ship_rect.centerx + dx
movey = space_ship_rect.centery = space_ship_rect.centery + dy
if event.type == MOUSEMOTION:
x1, y1 = pygame.mouse.get_pos()
x2, y2 = space_ship_rect.x, space_ship_rect.y
dx, dy = x2 - x1, y2 - y1
rads = math.atan2(dx, dy)
degs = math.degrees(rads)
display_s = pygame.transform.rotate(space_ship, (degs)) #ERROR HERE
screen.blit(display_s, (space_ship_rect.centerx, space_ship_rect.centery))#ERROR HERE
pos = pygame.mouse.get_pos()
screen.blit(mousec, (pos))
pygame.display.update()
Answer: Likely as not, you do NOT get the error when you trigger `event.type ==
MOUSEMOTION`, because that is when `display_s` gets defined.
If, in the first tick of the game running, you move your mouse, `display_s` is
initialised and therefore you do not get an error. If you do NOT move your
mouse, an error occurs.
One solution is to create a default value for `display_s` in your initial
level-loading code.
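A minimal sketch of that fix, placed just before the `while True:` loop (it simply
reuses your existing surface as the default):
    # Defaults so the first frame can blit even if the mouse never moves.
    degs = 0
    display_s = pygame.transform.rotate(space_ship, degs)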
The reason you're having so much trouble is probably your code structure,
or lack thereof.
Try [refactoring](http://en.wikipedia.org/wiki/Code_refactoring) your code, I
like to make my game loops look something like:
def main():
load()
while(playing == True):
handle_input()
update()
draw()
unload()
|
Python pygame window keeps crashing
Question: Whenever I run my code the Python Window that shows up does not respond.
Is there something wrong with my code or do I have to re-install pygame and
python?
I get a black pygame window and then it turns white and says not responding?
Also I am new to this so please make this as simple as possible. I tried
looking everywhere for the answer but could not get it in a way that I could
understand.
Please help me out. Thanks :)
# 1 - Import library
import pygame
from pygame.locals import *
# 2 - Initialize the game
pygame.init()
width, height = 640, 480
screen=pygame.display.set_mode((width, height))
# 3 - Load Images
player = pygame.images.load("resources/images/dude.png")
# 4 - keep looping through
while 1:
# 5 - clear the screen before drawing it again
screen.fill(0)
# 6 - draw the screen elements
screen.blit(player, (100,100))
# 7 - update the screen
pygame.display.flip()
# 8 - loop through the events
for event in pygame.event.get():
# check if the event is the X button
if event.type==pygame.QUIT:
# if it is quit the game
pygame.quit()
exit(0)
Answer: Don't import `pygame.locals`. It is actually unnecessary, since you are
already importing `pygame`.
Also, as @furas said, it should be:
player = pygame.image.load("resources/images/dude.png")
Not:
player = pygame.images.load("resources/images/dude.png")
This will clear up some of the problems in your code.
|
Python how to combine two matrices in numpy
Question: new to Python, struggling in numpy, hope someone can help me, thank you!
from numpy import *
A = matrix('1.0 2.0; 3.0 4.0')
B = matrix('5.0 6.0')
C = matrix('1.0 2.0; 3.0 4.0; 5.0 6.0')
print "A=",A
print "B=",B
print "C=",C
results:
A= [[ 1. 2.]
[ 3. 4.]]
B= [[ 5. 6.]]
C= [[ 1. 2.]
[ 3. 4.]
[ 5. 6.]]
Question: how to use A and B to generate C, like in matlab C=[A;B]? Thank you
so much
Answer: Use
[`numpy.concatenate`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html):
>>> import numpy as np
>>> np.concatenate((A, B))
matrix([[ 1., 2.],
[ 3., 4.],
[ 5., 6.]])
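If you later switch from `matrix` objects to plain arrays,
[`numpy.vstack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html)
does the same row-wise stacking and produces the same 3x2 result here:
    >>> np.vstack((A, B))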
|
Unexpected PHP/jQuery/JSON interaction differences
Question: ## Background:
So, I have the following PHP snippet written to help me debug a much larger
problem, but now I'm even more confused as to how jQuery and PHP expect JSON
to be sent/received, as this code does not seem to be doing what I expect:
I want to be able to _receive_ data as either `application/x-www-form-
urlencoded` **_or_** as `application/json`, while responding only with
`application/json` (for now).
**`test.php`:**
<?php
$content_type = $_SERVER['CONTENT_TYPE'];
switch ($_SERVER['REQUEST_METHOD']) {
case 'GET':
echo <<< _EOF
<html>
<body>
<div id="content"></div>
<script type="text/javascript" src="/jquery-1.10.2.min.js"></script>
</body>
</html>
_EOF;
break;
case 'POST':
header('Content-type: application/json');
if ($content_type == 'application/x-www-form-urlencoded') {
$_POST['message'] = 'debugged';
echo json_encode($_POST);
# Otherwise, get raw body data.
} else {
$data = file_get_contents('php://input');
if ($data == null) {
$data = file_get_contents('php://stdin');
}
$object = json_decode($data, true);
$object['message'] = 'debugged';
echo json_encode($object);
}
break;
default:
echo "These aren't the droids you're looking for. Move along.<br />";
break;
}
## Testing:
I've been testing this code using both the [Python Requests
Library](http://docs.python-requests.org/en/latest/index.html) and [jQuery's
Ajax method](http://api.jquery.com/jQuery.ajax/).
In Python, data can be sent as either `application/x-www-form-urlencoded` or
as raw body data attached to the request, depending on the type of the `data`
argument you supply to the `post()` method (see [here](http://docs.python-
requests.org/en/latest/user/quickstart/#more-complicated-post-requests) for
more information).
## All going well so far...
Using Requests, I get exactly what I expect.
When the `data` parameter of the `post()` method is a **Python dictionary
object,** the data is sent form-encoded:
>>> import requests
>>> r = requests.post('http://example.com/test.php', data={"name": "tester", "message": "bug"})
>>> print r.text
{"message": "debugged", "name": "tester"}
When the `data` parameter is a **literal string (note the single-quotes!)**
the data is sent as the body of the request, and may (or may not) be
interpreted as JSON at the endpoint (I do interpret it as JSON in the sample
PHP code above, and it's guaranteed it will only ever be JSON):
>>> r = requests.post('http://example.com/test.php', data='{"name": "tester", "message": "bug"}')
>>> print r.text
{"message": "debugged", "name": "tester"}
Both of these results are what I expect.
## Here's the weird part...
I seem to be getting different responses from this code, not depending on
**_how_** I send the data, but **_what I use to send it_** (i.e. Requests, or
jQuery). There must be something different between the two that I'm completely
missing.
If I point my browser (Chrome) at <http://example.com/test.php>, triggering
the GET request handler to serve me the tiny HTML page contained in the PHP
code above, and then fire the following jQuery code in the JavaScript console,
this is what I get back as a result:
> var output = null;
> $.ajax({
type: 'POST',
url: 'http://example.com/test.php',
data: {
name: 'tester',
message: 'bug'
},
dataType: 'json',
beforeSend: function(x) {
if (x && x.overrideMimeType) {
x.overrideMimeType('application/json; charset=UTF-8');
}
},
success: function(d) {
$output = d;
}
});
> output
Object {message: "debugged"}
By now I'm thinking, **_where the heck did the 'name' entry go?_**
As it turns out, only the things that I **_explicitly set_** from within the
PHP handler seem to be accessible as output to jQuery. But sometimes I want to
use data from a user's input, modify it, and then return it as output. But
with jQuery acting like this, (or PHP... I'm not exactly sure in which bit of
code my error lies) I'm unable to do what I want.
## Weirder still!
The next thing I did was to change the output message to be "debugged (form-
data)" or "debugged (body-data)" depending on whether the if-statement in the
POST handler was entered or "elsed", and was surprised to find that **jQuery
is sending the data as body data, but is setting the`application/x-www-form-
urlencoded` header.** This is completely not what I expected. I feel like
either one of jQuery (1.10.2) or Python Requests is misbehaving due to this
discrepancy, but I'm not sure which. (Or is sending form-encoded data really
this ad-hoc?)
Any thoughts or insight on what's going on here, what I'm missing, what I
SHOULD be expecting if my expectations or wrong, or even links to relevant
documentation I've managed to overlook while researching this would be greatly
appreciated.
Answer: Have you tried putting the identifier "name" in quotes? It's a reserved word,
and some parsers don't handle it well when it isn't quoted. I ran into a
similar problem a couple of years back.
|
Upgrading from 4.0.4 to 4.0.10 - TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead
Question: I have a (working) Plone 4.0.4 site that uses Dexterity. I am trying to
upgrade it to 4.0.10. When I start an instance on the new (4.0.10) site, I get
the error:
TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead.
(Full backtrace below) This error seems to come from `zope.interface`, and
obviously it must be caused by some problem with the new site's version set,
because everything else is the same. The versions of `plone.app.dexterity` and
`zope.interface` are the same on both sites.
I don't know where to look for a solution, any suggestion welcome!
traceback: <http://pastie.org/8506203>
bin/instance: <http://pastie.org/8506250>
Thanks!
Traceback (most recent call last):
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/OFS/Application.py", line 671, in install_product
initmethod(context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/__init__.py", line 31, in initialize
zcml.load_site()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/zcml.py", line 51, in load_site
_context = xmlconfig.file(file)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 647, in file
include(context, name, package)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/fiveconfigure.py", line 74, in loadProducts
handleBrokenProduct(product)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/fiveconfigure.py", line 72, in loadProducts
xmlconfig.include(_context, zcml, package=product)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 104, in includePluginsDirective
includeZCMLGroup(_context, info, filename)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 30, in includeZCMLGroup
include(_context, filename, includable_package)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 54, in includeDependenciesDirective
includeZCMLGroup(_context, info, 'configure.zcml')
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 30, in includeZCMLGroup
include(_context, filename, includable_package)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 684, in finish
args = toargs(context, *self.argdata)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 1376, in toargs
args[str(name)] = field.fromUnicode(s)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/fields.py", line 139, in fromUnicode
value = self.context.resolve(name)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 180, in resolve
mod = __import__(mname, *_import_chickens)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.app.dexterity-1.2.1-py2.6.egg/plone/app/dexterity/browser/types.py", line 14, in <module>
from plone.z3cform.crud import crud
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.z3cform-0.7.8-py2.6.egg/plone/z3cform/crud/crud.py", line 14, in <module>
import z3c.batching.batch
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.batching-2.0.0-py2.6.egg/z3c/batching/batch.py", line 27, in <module>
class Batch(object):
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.interface-3.5.3-py2.6-linux-x86_64.egg/zope/interface/declarations.py", line 496, in __call__
raise TypeError("Can't use implementer with classes. Use one of "
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/parts/instance1/etc/site.zcml", line 16.2-16.23
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Plone-4.0.10-py2.6.egg/Products/CMFPlone/configure.zcml", line 94.4-98.10
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/src/ecobuilding.content/ecobuilding/content/configure.zcml", line 10.4-10.39
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.app.dexterity-1.2.1-py2.6.egg/plone/app/dexterity/configure.zcml", line 33.4-33.34
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.app.dexterity-1.2.1-py2.6.egg/plone/app/dexterity/browser/configure.zcml", line 47.4-52.51
TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead.
Traceback (most recent call last):
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Zope2/Startup/run.py", line 56, in <module>
run()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Zope2/Startup/run.py", line 21, in run
starter.prepare()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Zope2/Startup/__init__.py", line 87, in prepare
self.startZope()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Zope2/Startup/__init__.py", line 264, in startZope
Zope2.startup()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Zope2/__init__.py", line 47, in startup
_startup()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Zope2/App/startup.py", line 116, in startup
OFS.Application.initialize(application)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/OFS/Application.py", line 251, in initialize
initializer.initialize()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/OFS/Application.py", line 279, in initialize
self.install_products()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/OFS/Application.py", line 492, in install_products
return install_products(app)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/OFS/Application.py", line 523, in install_products
folder_permissions, raise_exc=debug_mode)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/OFS/Application.py", line 671, in install_product
initmethod(context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/__init__.py", line 31, in initialize
zcml.load_site()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/zcml.py", line 51, in load_site
_context = xmlconfig.file(file)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 647, in file
include(context, name, package)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/fiveconfigure.py", line 74, in loadProducts
handleBrokenProduct(product)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Zope2-2.12.20-py2.6-linux-x86_64.egg/Products/Five/fiveconfigure.py", line 72, in loadProducts
xmlconfig.include(_context, zcml, package=product)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 104, in includePluginsDirective
includeZCMLGroup(_context, info, filename)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 30, in includeZCMLGroup
include(_context, filename, includable_package)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 54, in includeDependenciesDirective
includeZCMLGroup(_context, info, 'configure.zcml')
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.autoinclude-0.3.5-py2.6.egg/z3c/autoinclude/zcml.py", line 30, in includeZCMLGroup
include(_context, filename, includable_package)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 685, in finish
actions = self.handler(context, **args)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 546, in include
processxmlfile(f, context)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 378, in processxmlfile
parser.parse(src)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/xmlreader.py", line 123, in parse
self.feed(buffer)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
File "/usr/local/python_git/parts/opt/lib/python2.6/xml/sax/expatreader.py", line 349, in end_element_ns
self._cont_handler.endElementNS(pair, None)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/xmlconfig.py", line 357, in endElementNS
self.context.end()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 537, in end
self.stack.pop().finish()
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 684, in finish
args = toargs(context, *self.argdata)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 1376, in toargs
args[str(name)] = field.fromUnicode(s)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/fields.py", line 139, in fromUnicode
value = self.context.resolve(name)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.configuration-3.6.0-py2.6.egg/zope/configuration/config.py", line 180, in resolve
mod = __import__(mname, *_import_chickens)
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.app.dexterity-1.2.1-py2.6.egg/plone/app/dexterity/browser/types.py", line 14, in <module>
from plone.z3cform.crud import crud
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.z3cform-0.7.8-py2.6.egg/plone/z3cform/crud/crud.py", line 14, in <module>
import z3c.batching.batch
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/z3c.batching-2.0.0-py2.6.egg/z3c/batching/batch.py", line 27, in <module>
class Batch(object):
File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/zope.interface-3.5.3-py2.6-linux-x86_64.egg/zope/interface/declarations.py", line 496, in __call__
raise TypeError("Can't use implementer with classes. Use one of "
zope.configuration.xmlconfig.ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/parts/instance1/etc/site.zcml", line 16.2-16.23
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/Plone-4.0.10-py2.6.egg/Products/CMFPlone/configure.zcml", line 94.4-98.10
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/src/ecobuilding.content/ecobuilding/content/configure.zcml", line 10.4-10.39
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.app.dexterity-1.2.1-py2.6.egg/plone/app/dexterity/configure.zcml", line 33.4-33.34
ZopeXMLConfigurationError: File "/usr/local/zope/prod/Zope-2.12.20/plone4.0.10/eggs/plone.app.dexterity-1.2.1-py2.6.egg/plone/app/dexterity/browser/configure.zcml", line 47.4-52.51
TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead.
Answer: This error refers to the use of 'implementer' as a class decorator, e.g.:
@implementer(IBatch)
class Batch(object):
...
'implementer' as a class decorator was introduced in zope.interface 4.0.0 in
order to support Python 3, where the classic "class advice" mechanism used by
'implements' no longer works.
Plone 4.0 does not use such a new version of zope.interface. Your problem is
happening because you're ending up with too new a version of z3c.batching,
which tries to use 'implementer.' This suggests that you haven't correctly
pinned the set of versions for Dexterity with Plone 4.0.10.
You can find the correct set of pins here: <http://good-py.appspot.com/release/dexterity/1.2.1?plone=4.0.10>
|
Scapy packet sent cannot be received
Question: I'm trying to send UDP Packets with scapy with the following command:
>> send(IP(dst="127.0.0.1",src="111.111.111.111")/UDP(dport=5005)/"Hello")
.
Sent 1 packets.
And from `tcpdump` I can see:
22:02:58.384730 IP 111.111.111.111.domain > localhost.5005: [|domain]
I'm trying to receive this packet with the following code:
import socket
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
sock.bind((UDP_IP, UDP_PORT))
while True:
data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
print "received message:", data
But I cannot receive the message.
I have tested the network by sending udp packets normally with the following
code and the packet can be received:
import socket
import time
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
print "UDP target IP:", UDP_IP
print "UDP target port:", UDP_PORT
sock = socket.socket(socket.AF_INET, # Internet
socket.SOCK_DGRAM) # UDP
num = 0
while True:
sock.sendto(str(num), (UDP_IP, UDP_PORT))
print "Message sent: " + str(num)
num += 1
time.sleep(1)
Any help will be greatly appreciated.
---------------- Updates -----------------------
A packet sent by Scapy that can not be received:
13:22:52.984862 IP (tos 0x0, ttl 64, id 1, offset 0, flags [DF], proto UDP (17), length 33)
127.0.0.1.5555 > 127.0.0.1.12345: [udp sum ok] UDP, length 5
0x0000: 4500 0021 0001 4000 4011 3cc9 7f00 0001 E..!..@.@.<.....
0x0010: 7f00 0001 15b3 3039 000d 9813 4865 6c6c ......09....Hell
0x0020: 6f
o
While a packet sent by normal python script that can be received:
13:20:02.374481 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 33)
127.0.0.1.53143 > 127.0.0.1.12345: [bad udp cksum 0xfe20 -> 0xde2e!] UDP, length 5
0x0000: 4500 0021 0000 4000 4011 3cca 7f00 0001 E..!..@.@.<.....
0x0010: 7f00 0001 cf97 3039 000d fe20 4865 6c6c ......09....Hell
0x0020: 6f
Answer: Looks like you are using Scapy to send the UDP traffic to your localhost
interface. In the `send()` function, specify the appropriate outbound
interface to send the traffic out.
Example:
send(IP(dst="127.0.0.1",src="111.111.111.111")/UDP(dport=5005)/"Hello", iface="lo0")
On my computer, lo0 is my local loopback interface. To see or set the
default interface for scapy, check out the bottom half of this post:
<http://thepacketgeek.com/scapy-p-02-installing-python-and-scapy/>
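If you would rather not pass `iface=` on every call, Scapy also lets you set a default interface once per session — a minimal sketch, assuming the loopback interface is named "lo0" on your system (it is usually "lo" on Linux):

    from scapy.all import conf, send, IP, UDP

    conf.iface = "lo0"   # Scapy's default outgoing interface for this session
    send(IP(dst="127.0.0.1", src="111.111.111.111")/UDP(dport=5005)/"Hello")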
|
Why does django/tastypie with postgresql concatenate models_?
Question: I am using postgresql with django and tastypie. I have my models and resources
set up and working with mongodb for certain models and am trying to use
postgresql for relational data models. For some reason, when the query
executes against postgresql, the folder (module) is concatenated to the table
name and all the fields for said model:
"error_message": "relation \"models_member\" does not exist\nLINE 1: ..._member\".\"dob\", \"models_member\".\"last_login\" FROM \"models_me...\n ^\n",
Member Resource:
from api.models.member import Member
from django.conf.urls import url
from api.helper_methods import HelperMethods
from tastypie.resources import ModelResource
import json
class MemberResource(ModelResource):
class Meta:
max_limit = 0
queryset = Member.objects.all().order_by('id')
allowed_methods = ('get', 'post')
resource_name = 'members'
include_resource_uri = False
def prepend_urls(self):
return [
url(r"^(?P<resource_name>%s)/(?P<pk>[\w\d_.-]+)/$" % self._meta.resource_name, self.wrap_view('get_member'), name="api_get_member"),
]
def get_member(self, request, **kwargs):
member = Member.objects.get(id=kwargs['pk'])
return self.create_response(request, member)
Member Model:
from tastypie.utils.timezone import now
from django.db import models
class Member(models.Model):
id = models.IntegerField()
fname = models.CharField()
lname = models.CharField()
addr1 = models.CharField()
addr2 = models.CharField()
city = models.CharField()
state = models.CharField()
zip = models.CharField()
country = models.CharField()
email = models.CharField()
password = models.CharField()
sex = models.CharField(max_length=6)
dob = models.CharField()
last_login = models.DateTimeField(default=now)
How do I tell my resource or... whatever... to say hey, don't concatenate
anything, just make the call? I'm lost (and new to python/django/tastypie/all
of the above).
Answer: Django prefixes table names with the name of the app in which they're defined
and an underscore. Did you create the table in postgres manually or did you
let django create it? You can tell django what the name of the table should be
by setting db_table in the model's meta. More info [in the
docs](https://docs.djangoproject.com/en/dev/ref/models/options/#table-names).
class Member(models.Model):
id = models.IntegerField()
class Meta:
db_table = 'member'
|
Write multiple lists to a JSON file in python
Question: Assume I have the following lists
list1 = [{"created_at": "2012-01-31T10:00:04Z"},{"created_at": "2013-01-31T10:00:04Z"}]
list2 = [{"created_at": "2014-01-31T10:00:04Z"}]
I can write the first list to a JSON file using
`json.dump(list1,file,indent=2)` and the result is
[
{
"created_at": "2012-01-31T10:00:04Z"
},
{
"created_at": "2013-01-31T10:00:04Z"
}
]
My question is, how do I append the contents of the second list? If I simply
do `json.dump(list2,file,indent=2)`, it results in an invalid JSON file as
below.
[
{
"created_at": "2012-01-31T10:00:04Z"
},
{
"created_at": "2013-01-31T10:00:04Z"
}
][
{
"created_at": "2014-01-31T10:00:04Z"
}
]
**Edit** : The lists are created dynamically by parsing about 8000 files. The
above lists are just examples. I could potentially be writing 8000 lists to the
JSON file, so simple appending will not work.
Answer:
In [1]: import json
In [2]: list1 = [{"created_at": "2012-01-31T10:00:04Z"},{"created_at": "2013-01-31T10:00:04Z"}]
In [3]: list2 = [{"created_at": "2014-01-31T10:00:04Z"}]
In [4]: list1.extend(list2)
In [5]: json.dumps(list1)
Out[5]: '[{"created_at": "2012-01-31T10:00:04Z"}, {"created_at": "2013-01-31T10:00:04Z"}, {"created_at": "2014-01-31T10:00:04Z"}]'
or
In [8]: json.dumps(list1 + list2)
Out[8]: '[{"created_at": "2012-01-31T10:00:04Z"}, {"created_at": "2013-01-31T10:00:04Z"}, {"created_at": "2014-01-31T10:00:04Z"}]'
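For the dynamic case from the edit (thousands of parsed files), the same idea scales: accumulate everything into one list and call `json.dump` exactly once at the end. A minimal sketch — `parse_file` is a hypothetical stand-in for whatever produces each list:

    import json

    all_records = []
    for fname in filenames:                     # the ~8000 input files
        all_records.extend(parse_file(fname))   # each parse returns a list of dicts

    with open('out.json', 'w') as f:
        json.dump(all_records, f, indent=2)     # written once, so the JSON stays valid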
|
Python SOAP Client Nested Request
Question: I got a problem with a Python SOAP request. I tested two python SOAP client
libraries so far: SUDS and pysimplesoap. Both work well for the following
example:
from suds.client import Client
from pysimplesoap.client import SoapClient, SoapFault
# suds example
url = "http://www.webservicex.net/geoipservice.asmx?WSDL"
client = Client(url, cache=None)
print client.service.GetGeoIP((ip))
# pysimplesoap example
client = SoapClient(wsdl="http://www.webservicex.net/geoipservice.asmx?WSDL")
# call the remote method
response = client.GetGeoIP(("10.0.1.152"))
print response
Both work fine and give me the expected response:
{'GetGeoIPResult': {'ReturnCodeDetails': 'Success', 'IP': '10.0.1.152', 'ReturnCode': 1, 'CountryName': 'Reserved', 'CountryCode': 'ZZZ'}}
With a UI SOAP testing program the request looks like this:
-<soap:Envelope>
-<soap:Body>
-<GetGeoIP>
<IPAddress>("10.0.1.152")</IPAddress>
</GetGeoIP>
</soap:Body>
</soap:Envelope>
Now the problem is, I need to contact another WS via SOAP, but that doesn't
work. With the UI SOAP program it works (keys and token can be empty) and
looks like:
-<soap:Envelope>
-<soap:Body>
-<getNews>
-<shrequest>
<data>{'account_number':202VA7, 'track_nr':1757345939}</data>
<function>getnewsdata</function>
<keys/>
<token/>
</shrequest>
</getNews>
</soap:Body>
</soap:Envelope>
But my code doesn't work:
url = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
client = Client(url, cache=None)
data = "{'account_number':202VA7, 'track_nr':1757345939}"
function = "getnewsdata"
keys = ""
token = ""
shrequest = [data,function,keys,token]
response = client.service.getNews(shrequest)
print response
I get:
ValueError('Invalid Args Structure. Errors: %s' % errors)
ValueError: Invalid Args Structure. Errors:
How do I have to nest my request right?
Answer: I finally solved it. The SUDS library offers something nice:
url = "xxxxxxxxxxxxxxxxxxxxxxxxx?wsdl"
client = Client(url, cache=None)
# Creating 'shrequest' obj. before the request
shrequest = client.factory.create('shrequest')
shrequest.data = "{'account_number':202VA7, 'track_nr':1757345939}"
shrequest.function = "getnewsdata"
response = client.service.getShipment(shrequest)
print response
|
regular expression in python between two words
Question: I am trying to get value
l1 = [u'/worldcup/archive/southafrica2010/index.html', u'/worldcup/archive/germany2006/index.html', u'/worldcup/archive/edition=4395/index.html', u'/worldcup/archive/edition=1013/index.html', u'/worldcup/archive/edition=84/index.html', u'/worldcup/archive/edition=76/index.html', u'/worldcup/archive/edition=68/index.html', u'/worldcup/archive/edition=59/index.html', u'/worldcup/archive/edition=50/index.html', u'/worldcup/archive/edition=39/index.html', u'/worldcup/archive/edition=32/index.html', u'/worldcup/archive/edition=26/index.html', u'/worldcup/archive/edition=21/index.html', u'/worldcup/archive/edition=15/index.html', u'/worldcup/archive/edition=9/index.html', u'/worldcup/archive/edition=7/index.html', u'/worldcup/archive/edition=5/index.html', u'/worldcup/archive/edition=3/index.html', u'/worldcup/archive/edition=1/index.html']
I'm trying a regular expression, starting off with something like this below:
m = re.search(r"\d+", l)
print m.group()
but I want value between "archive/" and "/index.html"
I goggled and have tried something like
`(?<=archive/\/index.html).*(?=\/index.html:)`
but it didn't work for me. How can I get my result list as:
result = ['germany2006','edition=4395','edition=1013' , ...]
Answer: If you know for sure that the pattern will match always, you can use this
import re
print [re.search("archive/(.*?)/index.html", l).group(1) for l in l1]
Or you can simply split like this
print [l.rsplit("/", 2)[-2] for l in l1]
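If some of the URLs might not contain that pattern, guard against `re.search` returning `None` instead of calling `.group(1)` unconditionally:

    import re

    result = []
    for l in l1:
        m = re.search("archive/(.*?)/index.html", l)
        if m:
            result.append(m.group(1))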
|
Python i2c write_bus_data usage
Question: I have a number of 4 digit seven segment displays that I am trying to control
using Beaglebone Black (running Ubuntu) and i2c.
The SSD's are Byvac BV4614's and the full datasheet [is available
here](http://www.byvac.co.uk/downloads/datasheets/BV4614%20DataSheet.pdf).
I have wired up the circuit correctly using pins P9_19 and P9_20 on the
Beaglebone. I have included pullup resistors and am using a [i2c logic
converter](http://www.hobbytronics.co.uk/logic-level-i2c?keyword=i2c) for
added safety.
I have verified the device using i2cdetect (it's 0x31 which is correct) and
the device powers up nicely and enters its i2c mode.
However I do not understand how to read or write data to the device using
Python SMBbus. The SSD manual says
> The format used by this device consists of a command, this is a number,
> followed by other bytes depending on that command. The method of writing to
> the device using the I2C protocol follows a consistent format, typically:
> Where S-Addr is the start condition followed by the device address (0x62).
> Command is one of the commands given in the table. Data is one or more bytes
> and Stop is the stop condition.
(on 7-bit addressing the device is at 0x31 which is what I'm using).
So if for example I wanted to get the SSD to display a number on digit 1 the
manual says I have to do
> Command 5 - Name: Send number to digit -
> Format:<S-addr><5><digit><byte><Stop>
My question is, how do I write that command using Python? I think i'm looking
for something like this
import smbus
b = smbus.SMBus(1)
b.write_byte_data(0x31, 0x35, 0x30, 0x38)
which I'd like to mean "send command 5 (0x35) to device 0x31, digit 0 (0x30)
and display the number 8 (0x38)" but write_byte_data does not accept that
number of arguments. I have also tried using write_i2c_block_data() which
looks more hopeful but again I cannot work out how to use the functions
correctly.
I feel that I am facing a BCK problem here but any help would be appreciated.
_edit_ Ok, I have tried
b.write_block_data(0x31, 5, [8, 0x38] )
which makes the number 8 appear on digit 3
b.write_block_data(0x31, 5, [2] )
makes 2 appear on digit 2. I've not yet worked out how to place anything on
digit 1 or 4 yet.
b.write_block_data(0x31, 4, [44]) # just entering random stuff now
increases the brightness and then makes the SSD unresponsive.
So I'm getting closer,
_edit and SOLVED_
In the end it was ridiculously easy.
b.write_i2c_block_data(0x31, 5, [0, 8]) # write number 8 to digit 0
b.write_i2c_block_data(0x31, 5, [4, 5]) # write number 5 to digit 4 etc etc
According to <http://www.raspberry-projects.com/pi/programming-in-
python/i2c-programming-in-python/using-the-i2c-interface-2> the function
write_block_data is not very good.
> write_block_data(int addr,char cmd,long vals[])
>
> Write up to 32 bytes to a device. This function adds an initial byte
> indicating the length of the vals array before the valls array. Use
> write_i2c_block_data instead!
Answer:
b.write_i2c_block_data(0x31, 5, [0, 8]) # write number 8 to digit 0
b.write_i2c_block_data(0x31, 5, [4, 5]) # write number 5 to digit 4 etc etc
Solved the issue for me! All other commands in the spec now function as
intended!
The page at <http://www.raspberry-projects.com/pi/programming-in-
python/i2c-programming-in-python/using-the-i2c-interface-2> informed me that
the function I was using was not ideal.
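Building on that, a tiny helper keeps the datasheet's command format (`<S-addr><5><digit><byte><Stop>`) in one place — a sketch, assuming the device stays at address 0x31 and using an illustrative function name:

    import smbus

    bus = smbus.SMBus(1)
    DEVICE = 0x31

    def set_digit(digit, value):
        # command 5: "send number to digit" -> block is [digit, value]
        bus.write_i2c_block_data(DEVICE, 5, [digit, value])

    set_digit(0, 8)   # show 8 on digit 0
    set_digit(4, 5)   # show 5 on digit 4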
|
Python code for Adding tag in xml where parent tag is multiple with different attributes
Question: I'm parsing the following XML file using `xml.etree.ElementTree`:
<main>
<stream id="1" name="some">
    <inner id="500">
        <sub-inner>
            <inside> 500 </inside>
        </sub-inner>
    </inner>
</stream>
<stream id="2" name="some">
    <inner id="500">
        <sub-inner>
            <inside> 500 </inside>
        </sub-inner>
    </inner>
</stream>
</main>
How do I insert an `<outer>200</outer>` element into the `<sub-inner>` tag of the stream where id="2"?
Answer:
import xml.etree.ElementTree as ET
root = ET.fromstring('''
<main>
<stream id="1" name="some">
<inner>500</inner>
</stream>
<stream id="2" name="some">
<inner>500</inner>
</stream>
</main>''')
stream = root.find('.//stream[@id="2"]')
outer = ET.SubElement(stream, 'outer')
outer.text = '200'
print(ET.tostring(root))
output:
<main>
<stream id="1" name="some">
<inner>500</inner>
</stream>
<stream id="2" name="some">
<inner>500</inner>
<outer>200</outer></stream>
</main>
* * *
If you want `outer` to come before the `inner`:
...
stream = root.find('.//stream[@id="2"]')
outer = ET.Element('outer')
outer.text = '200'
stream.insert(0, outer)
print(ET.tostring(root))
output:
<main>
<stream id="1" name="some">
<inner>500</inner>
</stream>
<stream id="2" name="some">
<outer>200</outer><inner>500</inner>
</stream>
</main>
|
How do I kill unresponsive threads
Question: I've been trying to find a way to kill the threads that are unresponsive at
the end of this program:
Most of the time the code works, but on certain domains some of the threads
will hang, not allowing the program to complete.
Any help would be much appreciated.
#!/usr/bin/python
from socket import gethostbyaddr
import dns.resolver
import sys
import Queue
import threading
import subprocess
import time
exitFlag = 0
lines = ''
class myThread (threading.Thread):
def __init__(self, threadID, name, q):
threading.Thread.__init__(self)
self.threadID = threadID
self.name = name
self.q = q
def run(self):
process_data(self.name, self.q)
class Timer():
def __enter__(self): self.start = time.time()
def __exit__(self, *args):
taken = time.time() - self.start
print " [*] Time elapsed " + str(round(taken,1)) + " seconds at " + str(round(len(subdomains) / taken)) + " lookups per second."
def process_data(threadName, q):
while not exitFlag:
queueLock.acquire()
if not workQueue.empty():
data = q.get()
queueLock.release()
host = data.strip() + '.' + domain.strip()
try:
answers = resolver.query(host)
try:
output = gethostbyaddr(host)
if len(host) < 16:
print str(host) + "\t\t" + str(output[0]) + " " + str(output[2])
found.append(str(host) + "\t\t" + str(output[0]) + " " + str(output[2]))
else:
print str(host) + "\t" + str(output[0]) + " " + str(output[2])
found.append(str(host) + "\t" + str(output[0]) + " " + str(output[2]))
except:
print str(host)
found.append(str(host))
except:
pass
else:
queueLock.release()
if len(sys.argv) < 3:
print
print 'Usage: dnsbrute.py <target.com> <subdomains.txt> (threads)'
exit()
if len(sys.argv) >= 4:
maxthreads = int(sys.argv[3])
else:
maxthreads = int(40)
domain = sys.argv[1]
maked = "mkdir -p logs"
process = subprocess.Popen(maked.split(), stdout=subprocess.PIPE)
poutput = process.communicate()[0]
found = []
subdomains = [line.strip() for line in open(sys.argv[2], 'r')]
dnsservers = ["8.8.8.8", "8.8.4.4", "4.2.2.1", "4.2.2.2", "4.2.2.3", "4.2.2.4", "4.2.2.5", "4.2.2.6", "209.244.0.3", "209.244.0.4" ]
threadList = []
numthreads = 1
resolver = dns.resolver.Resolver()
resolver.nameservers = dnsservers
logfile = open("logs/" + domain + ".log", 'w')
while numthreads <= maxthreads:
threadList.append(str("Thread-") + str(numthreads))
numthreads += 1
print " [*] Starting " + str(maxthreads) + " threads to process " + str(len(subdomains)) + " subdomains."
print
queueLock = threading.Lock()
workQueue = Queue.Queue(len(subdomains))
threads = []
threadID = 1
with Timer():
for tName in threadList:
thread = myThread(threadID, tName, workQueue)
thread.start()
threads.append(thread)
threadID += 1
queueLock.acquire()
for work in subdomains:
workQueue.put(work)
queueLock.release()
while not workQueue.empty():
pass
exitFlag = 1
for t in threads:
t.join()
for item in found:
logfile.write("%s\n" % item)
print
print " [*] All threads complete, " + str(len(found)) + " subdomains found."
print " [*] Results saved to logs/" + domain + ".log"
Answer: Please bear in mind that it is a bad approach to kill threads in whatever language, because resources can be left in an inconsistent state. If you can, try to redesign your program so that the threads close themselves by checking a boolean value. Anyway, there is a very good answer from a fellow here on SO:
[Is there any way to kill a Thread in Python?](http://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python)
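As a concrete illustration of "threads close themselves by checking a value", a `threading.Event` plus a queue timeout avoids both the busy polling and the permanently blocked workers — a sketch of the pattern, not a drop-in patch for the script above:

    import Queue
    import threading

    stop_event = threading.Event()

    def worker(q):
        while not stop_event.is_set():
            try:
                data = q.get(timeout=1)   # wake periodically to re-check stop_event
            except Queue.Empty:
                continue
            # ... do the DNS lookup for `data` here ...

    # in the main thread, once the queue has been drained:
    # stop_event.set()
    # for t in threads: t.join()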
|
Timing accuracy in Python using WXPython
Question: I'm making a simple mp3 player that plays multiple mp3's at a time. It acts
like it is mixing. Timing is critical to ensure a pleasant user experience,
otherwise it just sounds like two cats trying to solve the mid east crisis. I
need accuracy down to 10ms, and probably will need 1ms accuracy if it doesn't
work out.
Currently, I'm using wxpython and I find the timing functions very poor (but
great for audio playback).
import wx
import wx.media
class MediaPanel(wx.Panel):
    INTERVAL = 10
    def tick(self, event):
        self.totalMS += self.INTERVAL
        if mixstart == self.totalMS:
            # play the song
            pass
        elif mixend == self.totalMS:
            # stop the song
            pass
        else:
            pass
* * *
So here's my issue: I find that the timing is inconsistent. I can have the
same songs loaded up in a playlist ready to mix, and it will mix in at what
seems like random times. I'm still new to programming in general, and from
what I can tell, it's the timing function that is not accurate. For debugging
purposes, I will print out a timestamp to let me know where I am in the
functions. For some reason, the timer will just stop randomly, then start up
again.
Question: Is there a better way to do this? Or is wx just broken for something
like this?
Answer: I do not think that you can mix mp3 playbacks reliably in any GUI library. It is not meant to synchronize sound, and if you just call two functions playing mp3s I believe there is no way to control the timing.
The easiest thing that comes to mind is to mix the two tracks first and then play the result (like in [**this question**](http://stackoverflow.com/questions/7629873/how-do-i-mix-audio-files-using-python)). Is that not good enough?
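If pre-mixing is acceptable, one way to do it (a sketch assuming the pydub library, which is not part of wxPython) is to overlay the tracks into a single file and then hand that one file to wx.media for playback:

    from pydub import AudioSegment

    track_a = AudioSegment.from_mp3("a.mp3")
    track_b = AudioSegment.from_mp3("b.mp3")
    mixed = track_a.overlay(track_b, position=30000)  # start b 30 s into a
    mixed.export("mixed.mp3", format="mp3")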
|
Find exact string in tuple
Question: I am using python to access results from another program. The program has a
specific module in order to do so. Unfortunately I do not understand the tuple
format ("stuff") that comes out as a result.
I am familiar with looking up keys/values in dictionaries, but not how this
would work. I am new to tuples and I am using 2.7.5 - any help would be great!
>>> import OrcFxAPI
>>> model = model.OrcFxAPI('C:\#17.sim')
>>> stuff = model.objects
>>> print stuff
(<General Data: 'General'>, <Environment Data: 'Environment'>, <Line Contact Data: 'Line Contact Data'>, <Vessel: 'vesselA'>, <Vessel: 'vesselB'>, <Line: 'A'>, <Line: 'B'>, <Line: 'C'>, <Line: 'D'>, <Line: 'E'>, <Line: 'F'>)
>>> print type(stuff)
<type 'tuple'>
>>> print map(type, stuff)
[<class 'OrcFxAPI.OrcaFlexObject'>, <class 'OrcFxAPI.OrcaFlexObject'>, <class 'OrcFxAPI.OrcaFlexObject'>, <class 'OrcFxAPI.OrcaFlexVesselObject'>, <class 'OrcFxAPI.OrcaFlexVesselObject'>, <class 'OrcFxAPI.OrcaFlexLineObject'>, <class 'OrcFxAPI.OrcaFlexLineObject'>, <class 'OrcFxAPI.OrcaFlexLineObject'>, <class 'OrcFxAPI.OrcaFlexLineObject'>, <class 'OrcFxAPI.OrcaFlexLineObject'>, <class 'OrcFxAPI.OrcaFlexLineObject'>]
I want to look up instances of Line, and output 'A','B','C','D','E','F':
>>> for thing in stuff:
if isinstance(thing, Line) and thing.name == 'C':
line_c = thing
break
else:
raise ValueError('No Line C in stuff!')
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
NameError: name 'Line' is not defined
Answer: It's hard to tell from your vague question, but I think what you want is
something like this:
for thing in stuff:
if isinstance(thing, OrcFxAPI.OrcaFlexLineObject):
print thing.name,
print
Or, equivalently:
lines = (thing for thing in stuff
if isinstance(thing, OrcFxAPI.OrcaFlexLineObject))
names = (line.name for line in lines)
print ' '.join(names)
* * *
If you're looking for the `Line` object that has a specific name, that's just
as easy:
for thing in stuff:
if isinstance(thing, OrcFxAPI.OrcaFlexLineObject) and thing.name == 'C':
line_c = thing
break
else:
raise ValueError('No Line C in stuff!')
… or:
line_c = next(thing for thing in stuff
if isinstance(thing, OrcFxAPI.OrcaFlexLineObject) and thing.name == 'C')
* * *
But in general, you don't want to do this like of "type switching" code. It
would be better to store this information in some way that kept all the lines
separate from all the other things. Maybe this:
{'general': <General Data: 'General'>,
'environment': <Environment Data: 'Environment'>,
'line contact': <Line Contact Data: 'Line Contact Data'>,
'code checks': <Code Checks: 'Code Checks'>,
'shear7': <SHEAR7 Data: 'SHEAR7 Data'>,
'vessels': (<Vessel: 'vesselA'>, <Vessel: 'vesselB'>),
'lines': (<Line: 'A'>, <Line: 'B'>, <Line: 'C'>, <Line: 'D'>, <Line: 'E'>, <Line: 'F'>)
}
And then it would be a matter of just using a dict lookup (which you already
know how to do) and iterating over a sequence (which you already know how to
do):
for line in stuff['lines']:
print line.name,
print
* * *
Or, maybe even better, use a class instead of a dict, so the object has a
`lines` attribute that you can use like this:
for line in stuff.lines:
print line.name,
print
|
implementing add and iadd for custom class in python?
Question: I am writing a `Queue` class that wraps a list for most of its operations. But I do not subclass from `list`, since I do not want to provide all of the `list` APIs. I have my code pasted below. The `add` method seems to work fine, but `__iadd__` seems to go wrong: it is printing None. Here is the code:
import copy
from iterator import Iterator
class Abstractstruc(object):
def __init__(self):
assert False
def __str__(self):
return "<%s: %s>" %(self.__class__.__name__,self.container)
class Queue(Abstractstruc,Iterator):
def __init__(self,value=[]):
self.container=[]
self.size=0
self.concat(value)
def add(self, data):
self.container.append(data)
def __add__(self,other):
return Queue(self.container + other.container)
def __iadd__(self,other):
for i in other.container:
self.add(i)
def remove(self):
self.container.pop(0)
def peek(self):
return self.container[0]
def __getitem__(self,index):
return self.container[index]
def __iter__(self):
return Iterator(self.container)
def concat(self,value):
for i in value:
self.add(i)
def __bool__(self):
return len(self.container)>0
def __len__(self):
return len(self.container)
def __deepcopy__(self,memo):
return Queue(copy.deepcopy(self.container,memo))
if __name__=='__main__':
q5 = Queue()
q5.add("hello")
q6 = Queue()
q6.add("world")
q5 = q5+q6
print q5
q5+=q6
print q5
Output:
<Queue: ['hello', 'world']>
None
Answer: `__iadd__` needs to return `self` when adding in-place:
def __iadd__(self,other):
for i in other.container:
self.add(i)
return self
`__iadd__` needs to return the resulting object; for immutable types the new
object, for mutable types, `self`. Quoting the [in-place operator hooks
documentation](http://docs.python.org/2/reference/datamodel.html#object.__iadd__):
> These methods should attempt to do the operation in-place (modifying `self`)
> and return the result (which could be, but does not have to be, `self`).
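To see why the original printed `None`: augmented assignment rebinds the name to whatever `__iadd__` returns, so without the `return self` the statement `q5 += q6` was effectively `q5 = None`. With the fix in place:

    q5 += q6          # equivalent to q5 = q5.__iadd__(q6)
    print q5          # <Queue: ['hello', 'world', 'world']>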
|
django Context syntax error
Question: I'm new to Django and am trying to create a simple blog, but a syntax error keeps
appearing in the views.py file in the Context line. I use Django 1.6, and the
syntax seems compatible with this version. Here's the simple method from
views.py, where I get the error:
def archive(request):
posts = blogPost.objects.all()
t = loader.get_template("archive.html")
c = Context({'posts': })
return HttpResponse(t.render(c))
Here's the traceback:
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/blog
Django Version: 1.6
Python Version: 2.7.6
Installed Applications:
('django.contrib.auth',
'django.contrib.admin',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admindocs',
'blog')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
90. response = middleware_method(request)
File "C:\Python27\lib\site-packages\django\middleware\common.py" in process_request
71. if (not urlresolvers.is_valid_path(request.path_info, urlconf) and
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in is_valid_path
573. resolve(path, urlconf)
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in resolve
453. return get_resolver(urlconf).resolve(path)
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in resolve
318. for pattern in self.url_patterns:
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in url_patterns
346. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in urlconf_module
341. self._urlconf_module = import_module(self.urlconf_name)
File "C:\Python27\lib\site-packages\django\utils\importlib.py" in import_module
40. __import__(name)
File "C:\Python27\Lib\site-packages\django\bin\myblog\myblog\urls.py" in <module>
12. url(r'^blog/', include('blog.urls')),
File "C:\Python27\lib\site-packages\django\conf\urls\__init__.py" in include
26. urlconf_module = import_module(urlconf_module)
File "C:\Python27\lib\site-packages\django\utils\importlib.py" in import_module
40. __import__(name)
File "C:\Python27\Lib\site-packages\django\bin\myblog\blog\urls.py" in <module>
2. from blog.views import archive
Exception Type: SyntaxError at /blog
Exception Value: invalid syntax (views.py, line 14)
Answer: This line is invalid:
c = Context({'posts': })
There needs to be a value there:
c = Context({'posts': posts})
|
Python AttributeError: 'module' object has no attribute 'atoi'
Question: I tried to run the following program using Python 3.2, and I get the error:
'module' object has no attribute 'atoi'. Can anybody tell me what I should do
to fix this? I really appreciate it!
import string
def converttoint(str):
try:
value = string.atoi(str)
return value
except ValueError:
return None
Answer: `string.atoi` has been deprecated for a _very_ long time. Since [Python
2.0](http://docs.python.org/2/library/string.html#string.atoi), in fact, and
it doesn't exist in Python 3.
Simply use
value = int(s)
instead, and don't call your variable `str`. That's a bad habit, as it shadows
the builtin string type `str`.
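Applied to the original helper, that gives the same behaviour without the `string` module and without shadowing `str`:

    def converttoint(s):
        try:
            return int(s)
        except ValueError:
            return None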
|
Sending data Curl/Json in Python
Question: I'm trying to make these 2 requests in Python:
Request 1:
curl -X POST -H "Content-Type: application/json" -d '{ "auth_token": "auth1", "widget": "id1", "title": "Something1", "text": "Some text", "moreinfo": "Subtitle" }' serverip
Request 2:
vsphere_dict = {}
vsphere_dict['server_name'] = "servername"
vsphere_dict['api_version'] = apiVersion
vsphere_dict['guest_count'] = guestCount
vsphere_dict['guest_on'] = guestOnLen
vsphere_dict['guest_off'] = guestOffLen
#Convert output to Json to be sent
data = json.dumps(vsphere_dict)
curl -X POST -H "Content-Type: application/json" -d 'data' serverip
Neither of them seems to work. Is there any way I can send them in Python?
Update:
The part that I cannot handle is the pass auth and widget. I have tried the
following without success:
import urllib2
import urllib
vsphere_dict = dict(
server_name="servername",
api_version="apiVersion",
guest_count="guestCount",
guest_on="guestOnLen",
guest_off="guestOffLen",
)
url = "http://ip:port"
auth = "authid89"
widget = "widgetid1"
# create request object, set url and post data
req = urllib2.Request(auth,url, data=urllib.urlencode(vsphere_dict))
# set header
req.add_header('Content-Type', 'application/json')
# send request
response = urllib2.urlopen(req)
Resulting in "urllib2.HTTPError: HTTP Error 500: Internal Server Error"
Any ideas how I can pass the auth and widget correctly?
UPDATE:
To see what is different I have started a nc server locally. Here are the
results:
Correct curl request using this code:
curl -X POST -H "Content-Type: application/json" -d '{ "auth_token": "auth", "widget": "widgetid", "title": "Something", "text": "Some text", "moreinfo": "Subtitle" }' http://localhost:8123
sends this **which does work:**
POST / HTTP/1.1
User-Agent: curl/7.21.0 (i386-redhat-linux-gnu) libcurl/7.21.0 NSS/3.12.10.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.4
Host: localhst:8123
Accept: */*
Content-Type: application/json
Content-Length: 165
{ "auth_token": "token", "widget": "widgetid", "title": "Something", "text": "Some text", "moreinfo": "Subtitle" }
And request using this code
import requests
import simplejson as json
url = "http://localhost:8123"
data = {'auth_token': 'auth1', 'widget': 'id1', 'title': 'Something1', 'text': 'Some text', 'moreinfo': 'Subtitle'}
headers = {'Content-type': 'application/json'}
r = requests.post(url, data=json.dumps(data), headers=headers)
sends this which **does not work:**
POST / HTTP/1.1
Host: localhst:8123
Content-Length: 108
Content-type: application/json
Accept-Encoding: gzip, deflate, compress
Accept: */*
User-Agent: python-requests/2.0.1 CPython/2.7.0 Linux/2.6.35.14-106.fc14.i686
{"text": "Some text", "auth_token": "auth1", "moreinfo": "Subtitle", "widget": "id1", "title": "Something1"}
Answer: [Requests](http://www.python-requests.org/en/latest/) provides you with the
simplest and yet (very) powerful way to deal with HTTP requests in Python.
Maybe try something like this:
import requests
import simplejson as json
url = "http://ip:port"
data = {'auth_token': 'auth1', 'widget': 'id1', 'title': 'Something1', 'text': 'Some text', 'moreinfo': 'Subtitle'}
headers = {'Content-type': 'application/json'}
r = requests.post(url, data=json.dumps(data), headers=headers)
If the API requests authentication:
r = requests.post(url, data=json.dumps(data), headers=headers, auth=('user', 'pass'))
See [Requests auth] for details.
|
gobject.MainLoop and tornado.IOLoop at once?
Question: How can you run two event loops in one application?
I need to use
[tornado.IOLoop](http://www.tornadoweb.org/en/stable/ioloop.html#ioloop-
objects) (WebSocket client) and
[gobject.MainLoop](http://www.pygtk.org/pygtk2reference/class-
gobjectmainloop.html) (pygtk) in one Python program.
**Update:** Threadless solutions are preferred.
Answer: You could use a thread
import threading
threading.Thread(target=my_ioloop.start).start()
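A slightly fuller sketch of that approach (still thread-based, so it only fits if the threadless requirement can be relaxed): run Tornado's IOLoop in a background daemon thread and keep the GLib main loop in the main thread.

    import threading
    import gobject
    import tornado.ioloop

    io_loop = tornado.ioloop.IOLoop.instance()
    t = threading.Thread(target=io_loop.start)
    t.daemon = True           # don't keep the process alive just for the IOLoop
    t.start()

    gobject.MainLoop().run()  # GLib/GTK loop stays in the main thread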
|
Python tkFileDialog.askdirectory error
Question: I have the following code in Python:
import Tkinter,tkFileDialog
top=Tkinter.Tk()
from tkFileDialog import askopenfilename
dirname = tkFileDialog.askdirectory(parent=top)
When I print dirname it appears as normal, and gives
F:/Project/Dropbox/My Mtech Main Project/Programming Folder/SPIDER/Ubuntu Spyder/
But when I access dirname otherwise, i.e. by typing dirname at the interactive prompt, I get the following output:
u"F:/Project/Dropbox/My Mtech Main Project/Programming Folder/SPIDER/Ubuntu Spyder/"
How can I avoid this u, so that I can use dirname as an input to another operation like reading a wave file, etc.? Right now I get an error because of that u.
Answer: Well, the `u` before the path is just to show that the string is of `unicode` type.
You can check the type of the data by using the function `type(data)` (check it for `dirname`).
Unicode strings are expressed as instances of the `unicode` type, one of Python's repertoire of built-in types.
More information [here](http://docs.python.org/2/howto/unicode.html)
In case you want to avoid this just change the type.
import Tkinter,tkFileDialog
top=Tkinter.Tk()
from tkFileDialog import askopenfilename
dirname = str(tkFileDialog.askdirectory(parent=top))
hope that helps
|
python-social-auth and impersonate django user
Question: I want to avoid storing personal information in the database (no last names, no email). This is my approach to achieving it:
1. Delegate authentication to social networks authentication service ( thanks to [python-social-auth](https://github.com/omab/python-social-auth) )
2. Change python-social-auth pipeline to anonymize personal information.
Then I replaced `social_details` step on pipeline by this one:
#myapp/mypipeline.py
def social_details(strategy, response, *args, **kwargs):
import md5
details = strategy.backend.get_user_details(response)
email = details['email']
fakemail = unicode( md5.new(email).hexdigest() )
new_details = {
'username': fakemail[:5],
'email': fakemail + '@noreply.com',
'fullname': fakemail[:5],
'first_name': details['first_name'],
'last_name': '' }
return {'details': new_details }
settings.py
SOCIAL_AUTH_PIPELINE = (
'myapp.mypipeline.social_details',
'social.pipeline.social_auth.social_uid',
...
The question:
**Is this the right way to get my purpose?**
Answer: Looks good.
I'm doing something similar to anonymize IP addresses (hash them).
|
Can't find my error in tt060.py in "Thinking in Tkinter" tutorial
Question: The following code is the source code from the [tutorial "Thinking in
Tkinter"](http://www.ferg.org/thinking_in_tkinter/all_programs.html).
The file is called `tt060.py`, a small tutorial on event binding. Below the
code is the traceback that I get from IDLE (Py/IDLE ver2.7.3 - Tk ver 8.5).
What is wrong with the following code that makes it not run correctly and give
an error?
from Tkinter import *
class MyApp:
def __init__(self, parent):
self.myParent = parent
self.myContainer1 = Frame(parent)
self.myContainer1.pack()
self.button1 = Button(self.myContainer1)
self.button1.configure(text="OK", background= "green")
self.button1.pack(side=LEFT)
self.button1.bind("<Button-1>", self.button1Click) #
self.button2 = Button(self.myContainer1)
self.button2.configure(text="Cancel", background="red")
self.button2.pack(side=RIGHT)
self.button2.bind("<Button-1>", self.button2Click) #
def button1Click(self, event):
if self.button1["background"] == "green":
self.button1["background"] = "yellow"
else:
self.button1["background"] = "green"
def button2Click(self, event):
self.myParent.destroy()
root = Tk()
myapp = MyApp(root)
root.mainloop()
Traceback:
Traceback (most recent call last):
File "C:/Current/MY_PYTHON/ThinkingInTkinter/tt060.py", line 29, in <module>
myapp = MyApp(root)
File "C:/Current/MY_PYTHON/ThinkingInTkinter/tt060.py", line 12, in __init__
self.button1.bind("<Button-1>", self.button1Click) #
AttributeError: MyApp instance has no attribute 'button1Click'
The first thing I tried, as suggested in the tutorial, was to comment out the
`root.mainloop()` line (no go - I put the line back in). I then removed
`self.` from the event names (lines 12 & 17) to see if that had any effect
(nope). I then tried putting the 2 method definitions before the `.bind` lines
to see if that had any effect (nope). I can make it work if I just use the
command options, but the tutorial is on event binding, so I'm wondering why
the above code will not work?
Answer: You have an indentation problem. You need to start each `def` at the same
column:
from Tkinter import *
class MyApp:
def __init__(self, parent):
self.myParent = parent
self.myContainer1 = Frame(parent)
self.myContainer1.pack()
self.button1 = Button(self.myContainer1)
self.button1.configure(text="OK", background= "green")
self.button1.pack(side=LEFT)
self.button1.bind("", self.button1Click) #
self.button2 = Button(self.myContainer1)
self.button2.configure(text="Cancel", background="red")
self.button2.pack(side=RIGHT)
self.button2.bind("", self.button2Click) #
def button1Click(self, event):
if self.button1["background"] == "green":
self.button1["background"] = "yellow"
else:
self.button1["background"] = "green"
def button2Click(self, event):
self.myParent.destroy()
root = Tk()
myapp = MyApp(root)
root.mainloop()
|
How to determine pid of process started via os.system
Question: I want to start several subprocesses with a programm, i.e. a module `foo.py`
starts several instances of `bar.py`.
Since I sometimes have to terminate the process manually, I need the process
id to perform a kill command.
Even though the whole setup is pretty “dirty”, is there a good pythonic way to
obtain a process’ `pid`, if the process is started via `os.system`?
`foo.py:`
import os
import time
os.system("python bar.py \"{0}\ &".format(str(argument)))
time.sleep(3)
pid = ???
os.system("kill -9 {0}".format(pid))
`bar.py:`
import time
print("bla")
time.sleep(10) # within this time, the process should be killed
print("blubb")
Answer: `os.system` return exit code. It does not provide pid of the child process.
Use [`subprocess`](http://docs.python.org/3/library/subprocess.html) module.
import subprocess
import time
argument = '...'
proc = subprocess.Popen(['python', 'bar.py', argument])  # pass an argument list without shell=True
time.sleep(3) # <-- There's no time.wait, but time.sleep.
pid = proc.pid # <--- access `pid` attribute to get the pid of the child process.
To terminate the process, you can use
[`terminate`](http://docs.python.org/3/library/subprocess.html#subprocess.Popen.terminate)
method or
[`kill`](http://docs.python.org/3/library/subprocess.html#subprocess.Popen.kill).
(No need to use external `kill` program)
proc.terminate()
|
How to measure execution time of this dinning philosopher program(python)?
Question:
from __future__ import print_function
from threading import Semaphore, Lock, Thread
from time import sleep
from random import random
import argparse
from timeit import Timer
(THINKING, EATING) = (0, 1) #philosopher states
def left_fork(id):
return id
def right_fork(id):
return (id+1) % NUM_PHILOSOPHER
def right(id):
return (id+1) % NUM_PHILOSOPHER
def left(id):
return (id+NUM_PHILOSOPHER-1) % NUM_PHILOSOPHER
def get_fork(id):
global mutex
global tstate
global sem
mutex.acquire()
tstate[id] = 'hungry'
test(id)
mutex.release()
sem[id].acquire()
def put_fork(id):
global mutex
global tstate
global sem
mutex.acquire()
tstate[id] = 'thinking'
test(right(id))
test(left(id))
mutex.release()
def test(id):
global tstate
if tstate[id] == 'hungry' and tstate[left(id)] != 'eating' and tstate[right(id)] != 'eating':
tstate[id] = 'eating'
sem[id].release()
def philosophize_footman(id,meal):
global forks
global footman
state = THINKING
for i in range(meal):
sleep(random())
if(state == THINKING):
msg = "Philosopher " + str(id) + " is thinking."
#print(msg)
footman.acquire()
forks[right_fork(id)].acquire()
forks[left_fork(id)].acquire()
state = EATING
else:
msg = "Philosopher " + str(id) + " is eating."
#print(msg)
forks[right_fork(id)].release()
forks[left_fork(id)].release()
state = THINKING
footman.release()
print("Finish philosophize_footman")
def philosophize_lefthand(id,meal):
global forks
state = THINKING
for i in range(meal):
sleep(random())
if(state == THINKING):
#define the left hand user.
if(id == 3):
forks[left_fork(id)].acquire()
forks[right_fork(id)].acquire()
state = EATING
else:
forks[right_fork(id)].acquire()
forks[left_fork(id)].acquire()
state = EATING
else:
if(id == 3):
forks[left_fork(id)].release()
forks[right_fork(id)].release()
state == THINKING
else:
forks[right_fork(id)].release()
forks[left_fork(id)].release()
state == THINKING
print("Finish philosophize_lefthand")
def philosophize_Tanenbaum(id,meal):
for i in range(meal):
get_fork(id)
sleep(random())
put_fork(id)
print("Finish philosophize_Tanenbaum")
def run_c(numP,numM):
for m in range(numP):
phil1 = Thread(target = philosophize_Tanenbaum,args = (m,numM))
phil1.start()
def run_a():
global NUM_PHILOSOPHER
global MEAL
for i in range(NUM_PHILOSOPHER):
phil = Thread(target = philosophize_footman, args = (i,MEAL))
phil.start()
def run_b(numP,numM):
for n in range(numP):
phil2 = Thread(target = philosophize_lefthand, args = (n,numM))
phil2.start()
if __name__ == '__main__':
parser = argparse.ArgumentParser(description = 'Philosopher dining')
parser.add_argument('--nphi','-n',
type = int,
default = 5,
help = 'add num_phi',
metavar = 'number of philosophers')
parser.add_argument('--meal','-m',
type = int,
default = 100,
help = 'number of meals',
metavar = 'meal')
args = parser.parse_args()
NUM_PHILOSOPHER = args.nphi #define number fo philosophers
MEAL = args.meal #define number of meals
forks = [Semaphore(1) for i in range(NUM_PHILOSOPHER)] #defines forks
sem = [Semaphore(0) for i in range(NUM_PHILOSOPHER)] #semaphores
footman = Semaphore(4) #limit the number of philosophers
mutex = Semaphore(1) #mutex
tstate = ['thinking'] * NUM_PHILOSOPHER #T-states
run_a()
# run_b(args.nphi,args.meal)
# run_c(args.nphi,args.meal)
timer = Timer(run_a)
print("Time:{:0.3f}s".format(timer. timeit(100)/100))
This is a dining philosophers problem solution in Python; the code is listed above. I want to measure the running time of the function run_a(). But when using Timer, I found it doesn't work well: it prints the time result immediately (e.g. 0.001s) while the code is still running. So please help me with it! Thank you very much.
Answer: You need to wait for the threads to finish; call
[`Thread.join()`](http://docs.python.org/2/library/threading.html#threading.Thread.join)
on each thread:
def run_a():
global NUM_PHILOSOPHER
global MEAL
threads = []
for i in range(NUM_PHILOSOPHER):
phil = Thread(target = philosophize_footman, args = (i,MEAL))
phil.start()
threads.append(phil)
for t in threads:
t.join()
The `Thread.join()` method blocks until the thread has completed, or you can
specify a timeout.
|
tcp socket, select tells writable, but write() blocks
Question: I wrote a little tcp socket server program, using select() to check if a
client socket is writable. If the client is writable, I will write data to it.
The client is written in Python, for testing only, it connect to the server,
and never read from the connect.
The result is, server write() finally blocks.
**Server(C):**
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <arpa/inet.h>
#include <netinet/tcp.h>
#include <sys/select.h>
int main(int argc, char **argv){
int serv_sock;
struct sockaddr_in addr;
const char *ip = "0.0.0.0";
int opt = 1;
int port = 7000;
if(argc > 2){
port = atoi(argv[1]);
}
bzero(&addr, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_port = htons((short)port);
inet_pton(AF_INET, ip, &addr.sin_addr);
serv_sock = socket(AF_INET, SOCK_STREAM, 0);
setsockopt(serv_sock, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
bind(serv_sock, (struct sockaddr *)&addr, sizeof(addr));
listen(serv_sock, 1024);
printf("server listen on port: %d\n", port);
int sock = accept(serv_sock, NULL, NULL);
printf("accepted\n");
while(1){
fd_set w_set;
FD_ZERO(&w_set);
int maxfd = sock;
FD_SET(sock, &w_set);
int ret = select(maxfd + 1, NULL, &w_set, NULL, NULL);
if(ret < 0){
if(errno == EINTR){
continue;
}else{
printf("select error! %s\n", strerror(errno));
exit(0);
}
}
char buf[128 * 1024];
if(FD_ISSET(sock, &w_set)){
printf("write begin\n");
int n = write(sock, buf, sizeof(buf));
printf("write %d\n", n);
}
}
return 0;
}
**Client(Python):**
import socket
s = socket.socket()
s.connect(('127.0.0.1', 7000))
**Output:**
$ gcc a.c; ./a.out
server listen on port: 7000
accepted
write begin
write 131072
write begin
write 131072
write begin
write 131072
write begin
write 131072
write begin
write 131072
write begin
My question is: **why does the write() operation block even if select says the socket is writable?** The manual of select() says:
> A file descriptor is considered ready if it is possible to perform the
> corresponding I/O operation (e.g., read(2)) without blocking.
Answer: Select() doesn't know how many bytes you are going to try to write, it only
knows that the socket's outgoing-data buffer isn't full.
If you want write() not to block, set the socket to non-blocking mode.
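The same idea expressed on the Python side, for illustration (for the C server above the analogous change is setting `O_NONBLOCK` with fcntl): on a non-blocking socket, `send()` either accepts what fits in the buffer or raises `EWOULDBLOCK` rather than blocking. Here `sock` and `buf` are placeholders:

    import socket, errno

    sock.setblocking(False)              # non-blocking mode
    try:
        sent = sock.send(buf)            # may accept only part of buf
    except socket.error as e:
        if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
            raise
        sent = 0                         # buffer full; wait for select() again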
|
Python: Associating Function Output to Strings, then Combining into Dictionary
Question: I have a list of computer nodes called node_names, and I want to find the
amount of free ram in each node, and store that in a second list. I then want
to combine these lists into a dictionary.
I have:
for i in range(0, number_of_nodes):
sys_output = [commands.getoutput('ssh %s \'free -m\'' % node_names[i])]
free_memory = [x.split()[9] for x in sys_output]
print free_memory
For 4 nodes, this returns `[mem1],[mem2],[mem3],[mem4]`.
How can I combine each memory value into a single list? I'm having difficulty
assigning `free_memory` as a list instead of a string which is replaced after
each loop iteration.
Once I have a memory list, I should be able to combine it with the node_names
list to make a dictionary file and do any necessary sorting.
Answer: I would recommend just building the dictionary directly:
import commands
node_free_mem = {}
for n in node_names:
sys_output = commands.getoutput("ssh %s 'free -m'" % n)
free_memory = sys_output.split()[9]
node_free_mem[n] = int(free_memory)
Here's code that does exactly what you asked: it builds a list, then uses the
list to make a dictionary. Discussion after the code.
import commands
def get_free_mem(node_name):
sys_output = commands.getoutput('ssh %s \'free -m\'' % node_name)
free_memory = sys_output.split()[9]
return int(free_memory)
free_list = [get_free_mem(n) for n in node_names]
node_free_mem = dict(zip(node_names, free_list))
Note that in both code samples I simply iterate over the list of node names,
rather than using a `range()` to get index numbers and indexing the list. It's
simplest and best in Python to just ask for what you want: you want the names,
so ask for those.
I made a helper function for the code to get free memory. Then a simple list
comprehension builds a parallel list of free memory values.
The only tricky bit is building the dict. This use of `zip()` is actually
pretty common in Python and is discussed here:
[Map two lists into a dictionary in Python](http://stackoverflow.com/questions/209840/map-two-lists-into-a-dictionary-in-python)
For large lists in Python 2.x you might want to use `itertools.izip()` instead
of the built-in `zip()`, but in Python 3.x you just use the built-in `zip()`.
EDIT: cleaned up the code; it should work now.
`commands.getoutput()` returns a string. There is no need to package up the
string inside a list, so I removed the square braces. Then in turn there is no
need for a list comprehension to get out the free_memory value; just split the
string. Now we have a simple string that may be passed to `int()` to convert
to integer.
|
Perl's Inline::Python fails on pyephem
Question:
#!/bin/perl
use Inline Python;
$s = new Sun();
print "SUN: $s\n";
$m = new Moon();
__END__
__Python__
from ephem import Sun as Sun;
from ephem import Moon as Moon;
The code above yields:
SUN: <Sun "Sun" at 0x9ef6f14>
Can't bless non-reference value at /usr/local/lib/perl5/site_perl/5.10.0/i386-linux-thread-multi/Inline/Python.pm line 317.
What's wrong? I've tried this with many other objects (eg:
from ephem import Observer as Observer;
and then
$o= new Observer();
in the body of my code) and it works fine for everything I've tried _EXCEPT_
Moon.
EDIT (probably useless information):
In <https://github.com/brandon-rhodes/pyephem/tree/master/libastro-3.7.5> :
* The routines for calculating Sun, Mercury, Venus, Mars (the ones that work fine) are done in vsop87.c, function vsop87()
* The routines for calculating Jupiter, Saturn, etc (the ones that don't work) are done in chap95.c, function chap95()
* vsop87() "returns" an array of 6 doubles, which appear to be some sort of spherical coordinates.
* chap95() "returns" an array of 6 doubles, which appear to be Cartesian coordinates, ie, rectangular and NOT spherical.
* planpos() in plans.c calls one of the two functions above, depending on which planet you choose. What's odd is that planpos() treats the function results the same (sort of), even though they return very different things.
* After planpos(), all planets are treated the same. planpos() is called by plans() (also in plans.c), which is in turn called by obj_planet() in circum.c which is then called by obj_cir() also in circum.c
* obj_planet() and obj_cir() define the planet. Since planets are treated the same after planpos(), there should be no difference between them.
Answer: It is indeed the different handling for the Moon, Jupiter and Saturn bodies,
as pointed out by Slaven in the comments. In fact, you're running into the
Python 2 issue that there is a difference between `types` and `classes`. I
can't give you the details, but there is
[quite](http://stackoverflow.com/questions/4479819/types-and-classes-in-
python) [a bit of](http://docs.python.org/2/reference/datamodel.html)
[material](http://www.python.org/download/releases/2.2/descrintro/) on the
subject.
Suffice to say, that the Python wrapper provided by PyEphem turns the bodies
into a proper class, which `Python::Inline` can handle. The Python-C wrapper,
`_libastro`, provides types instead, and thus setting `Moon` to
`_libastro.Moon` makes `Moon` a type instead of a class. Why `Python::Inline`
can deal with classes and not types, I don't know.
This, however, provides enough information for a work-around: turn
`ephem.Moon` into a class. Thus, the following may work:
#!/usr/bin/env perl
use Inline Python;
$s = new Sun();
print "SUN: $s\n";
$m = new Moon();
print "Moon: $m\n";
__END__
__Python__
from ephem import Sun
from ephem import Moon
class Moon(Moon):
pass
which for me results in:
SUN: <Sun "Sun" at 0x1f450b0>
Moon: <Moon "Moon" at 0x20eec50>
You can apply the same trick to Saturn and Jupiter of course.
(I've "Pythonized" the import statements a bit: no need for `as` or semi-
colons.)
In case you'd like to verify that the Moon is still a special body even after
turning it into a class, try to use on of its special attributes, for example
`libration_lat`:
$m->compute()
$mllat = $m->{libration_lat};
print "Moon: ${mllat}\n";
Moon: 5:50:29.6
which will fail for any other type such as the Sun.
(I found these special attributes in `test_bodies.py` in the PyEphem package,
though I presume these are documented as well. In case you'd like to test for
Saturn and Jupiter, you can find them there.)
|
Python os.geteuid() for windows
Question: I saw that os.geteuid() is only available on Unix; how can I replace its usage on Windows? I need this because celery uses the function, and for celery to run on Windows I need an alternative for this function. Please do help.
Answer: User id in Windows? I'm not sure
[getpass](http://docs.python.org/2/library/getpass.html) is what you want.
import getpass
getpass.getuser()
>>> 'HelloWorld'
But be careful: the function returns the value of various **environment variables**, and environment variables can be changed.
|
How to reconstruct Python function from memory address?
Question:
>>> def spam():
... print("top secret function")
...
>>> print(spam)
<function spam at 0x7feccc97fb78>
>>> spam = "spam"
So I lose the reference to `spam` function. Can I get it back from that memory
address: 0x7feccc97fb78?
>>> orig_spam_function = get_orig_func_from_memory_address("0x7feccc97fb78")
**Edit** (_responding to thefourtheye_):
Sorry for the lousy question, consider this case:
>>> from collections import defaultdict
>>> d = defaultdict(spam)
>>> d
defaultdict(<function spam at 0x7f597572c270>, {})
So the function is not garbaged collected yet. Can I recover it? Of course, in
this case, you can use `default_factory` attribute.
>>> d.default_factory
<function spam at 0x7f597572c270>
But imagine `defaultdict` without `default_factory` attribute.
Answer: When you assign
spam = "spam"
the last reference to the `spam` function is gone, the reference count becomes
0 and that will be garbage collected later. So, there is no way, we can get it
back. We can check that with this program
def spam():
print("top secret function")
import sys
print id(spam), sys.getrefcount(spam)
spam = "spam"
print id(spam), sys.getrefcount(spam)
**Output on my machine**
140068817052928 2
140068817075440 12
The actual address of `spam` was different than the one which we see after the
assignment statement. So, it is pointing to a different object now. But,
originally, the reference count is 1 (`getrefcount` will always be [one higher
than the actual
count](http://docs.python.org/2/library/sys.html#sys.getrefcount)). When we
reassign `spam`, now no one is actually pointing to that function. So, it will
be ready for garbage collection.
|
"Optional feature not implemented (106) (SQLBindParameter)" error with pyodbc
Question: I'm being driven nuts trying to figure this one out. I'm using Python for the
first time, and trying to write data collected from twitter out to an Access
2010 database.
The command I'm using is:
cursor.execute('''insert into core_data(screen_name,retweet_count) values (?,?,)''', (sname,int(rcount)))
The error message being returned is:
Traceback (most recent call last): File "C:/Documents and Settings/Administrator/PycharmProjects/clientgauge/tw_scraper.py", line 44, in <module>
cursor.execute('''insert into core_data(screen_name,retweet_count) values (?,?,)''', (sname,int(rcount)))
pyodbc.Error: ('HYC00', '[HYC00] [Microsoft][ODBC Microsoft Access Driver]Optional feature not implemented (106) (SQLBindParameter)')
I've tried various permutations of passing the data into the db. If I remove
the int(rcount) entry, it will post the first value, sname, without any
issues. As soon as I try to pass in more than one parameter though, this is
when the problems start.
I have a feeling I'm missing something really basic, but I can't find any
examples of this which actually have a similar look to what I'm trying to do,
and what I'm trying is NOT difficult...user error probably :)
Any help would be much appreciated.
Cheers, Kev
Complete code is:
from twython import Twython
import pyodbc
ACCESS_DATABASE_FILE = 'C:\\data\\ClientGauge.accdb'
ODBC_CONN_STR = 'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s;' %ACCESS_DATABASE_FILE
cnxn = pyodbc.connect(ODBC_CONN_STR, autocommit=True)
cursor = cnxn.cursor()
APP_KEY = '<removed>'
APP_SECRET = '<removed>'
# Authenticate on twitter using keys above
t = Twython(APP_KEY, APP_SECRET, oauth_version=2)
# Obtain new access token for this session
ACCESS_TOKEN = t.obtain_access_token()
# Authenticate using new access token
t = Twython(APP_KEY, access_token=ACCESS_TOKEN)
# Carry out search
search = t.search(q='<removed>', #**supply whatever query you want here**
count=1, result_type='recent')
tweets = search['statuses']
for tweet in tweets:
sname=tweet['user']['screen_name']
rcount=int(tweet['retweet_count'])
fcount=tweet['favorite_count']
coord=tweet['coordinates']
tzone=tweet['user']['time_zone']
cdate=tweet['created_at']
htags=tweet['entities']['hashtags']
sql = "insert into core_data(screen_name,retweet_count,favourited_count) values (?,?,?)", (str(sname),rcount,fcount)
print(sql)
cursor.execute('''insert into core_data(screen_name,retweet_count) values (?,?)''', (sname,rcount))
cursor.commit()
cnxn.close()
I'm using MS Access 2010, pyodbc-3.0.7.win32-py3.3.exe, Python 3.3 & PyCharm.
Don't judge my coding prowess :) Python is new to me. You'll be able to see
that I've tried setting the INSERT statement up as a string initially (sql),
and I was calling the cursor using:
cursor.execute(sql)
Unfortunately, this didn't work for me either! If I replace the second
parameter with a number such as 1...it still doesn't work. Frustrating.
Answer: You've got an extra comma in your parameters list that is messing you up. The
following code works for me:
import pyodbc
sname = 'Gord'
rcount = 3
cnxn = pyodbc.connect(
'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};' +
'DBQ=C:\\Users\\Public\\Database1.accdb;')
cursor = cnxn.cursor()
cursor.execute('''insert into core_data(screen_name,retweet_count) values (?,?)''', (sname,int(rcount)))
cursor.commit()
cnxn.close()
Edit:
If the above sample code does not work on your system, then perhaps you could
try downloading and installing [pypyodbc](http://code.google.com/p/pypyodbc/)
and then trying this code (which works for me, too):
import pypyodbc
sname = 'Gord'
rcount = 3
cnxn = pypyodbc.connect(
'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};' +
'DBQ=C:\\Users\\Public\\Database1.accdb;')
cursor = cnxn.cursor()
cursor.execute('''insert into core_data(screen_name,retweet_count) values (?,?)''', (sname,rcount))
cursor.commit()
cnxn.close()
Edit:
This issue was resolved by using
[pypyodbc](http://code.google.com/p/pypyodbc/).
|
Python module: how to prevent importing modules called by the new module
Question: I am new in `Python` and I am creating a module to re-use some code. My module
(`impy.py`) looks like this (it has one function so far)...
import numpy as np
def read_image(fname):
....
and it is stored in the following directory:
custom_modules/
__init.py__
impy.py
As you can see it uses the module numpy. The problem is that when I import it
from another script, like this...
import custom_modules.impy as im
and I type `im.` I get the option of calling not only the function
`read_image()` but also the module `np`.
**How can I do to make it only available the functions I am writing in my
module and not the modules that my module is calling (numpy in this case)?**
Thank you very much for your help.
Answer: You could import numpy inside your function
def read_image(fname):
import numpy as np
....
making it locally available to the `read_image` code, but not globally
available.
One caveat: the `import` statement now runs every time `read_image` is called.
After the first call Python finds `numpy` in `sys.modules`, so the cost is only a
dictionary lookup rather than a full re-import, but it is still a small overhead
if you call `read_image` very often.
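An alternative sketch keeps the import at module level but hides it from
`from custom_modules.impy import *` and from most tab-completion tools; the
`loadtxt` body is only a placeholder for whatever `read_image` really does:
    # impy.py
    import numpy as _np          # the leading underscore marks it as module-private

    __all__ = ['read_image']     # only names listed here are exported by a star import

    def read_image(fname):
        # placeholder body: load the file into an array
        return _np.loadtxt(fname)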
|
How to make list of datetimes using rrule
Question: I am creating my own .ics parser.
I am using icalendar python module. It works great but I would like to get
list of datetimes for events which have RRULE set.
I have starting date as datetime object instance and RRULE parsed like this:
CaselessDict({'FREQ': ['MONTHLY'], 'INTERVAL': [1], 'BYDAY': ['4TH']})
But I cannot figure out how to make a list of datetimes from these two things.
Thank you
Answer: You can use the [python-dateutil](http://labix.org/python-dateutil) library
for generating `rrule`s, eg:
from dateutil.rrule import rrule, MONTHLY
dts = list(rrule(MONTHLY, interval=10, byweekday=4, count=3))
# [datetime.datetime(2013, 11, 29, 15, 44, 45), datetime.datetime(2014, 9, 5, 15, 44, 45), datetime.datetime(2014, 9, 12, 15, 44, 45)]
Adjust arguments as needed.
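For the specific rule in the question (FREQ=MONTHLY, INTERVAL=1, BYDAY=4TH, i.e.
the 4th Thursday of each month), a sketch might look like this, where `start` is
assumed to be the event's parsed DTSTART:
    from datetime import datetime
    from dateutil.rrule import rrule, MONTHLY, TH

    start = datetime(2013, 11, 1, 9, 0)   # hypothetical DTSTART taken from the event
    dts = list(rrule(MONTHLY, interval=1, byweekday=TH(4), dtstart=start, count=6))
    # -> the 4th Thursday of each month, starting from `start`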
|
python generator endless stream without using yield
Question: I'm trying to generate an endless stream of results given a function f and an
initial value x: the first value should be the initial value, the second should
be f(x), the third f(x2) where x2 is the previous result of f(x), and so on.
What I have come up with:
def generate(f, x):
return itertools.repeat(lambda x: f(x))
This does not seem to work. Any ideas? I can't use yield in my code, and I can't
use more than one line of code for this problem. Any help would be appreciated.
Also note that in a previous exercise I was asked to use yield, with no
problems:
while True:
yield x
x = f(x)
This works fine, but now I have no clue how to do it without yield.
Answer: In Python 3.3, you can use `itertools.accumulate`:
import itertools
    def generate(f, x):
        # accumulate yields the first item of the input (x) unchanged, then feeds each
        # previous result back through the lambda (ignoring the repeated x): x, f(x), f(f(x)), ...
        return itertools.accumulate(itertools.repeat(x), lambda v, _: f(v))
for i, val in enumerate(generate(lambda x: 2*x, 3)):
print(val)
if i == 10:
break
|
from past import print_statement
Question: Is there some equivalent to `from __future__ import print_function` that
forward-ports the `print` statement from python 2.x?
An answer involving some `ipython` magic that lets me print without need of
surrounding parens during prototyping is also acceptable.
Answer: Some suggestions for IPython:
%autocall
print "Hi"
Define magic with autocall on
from IPython.core.magic import register_line_magic
@register_line_magic
def p(line):
print(line)
p "Hi"
Define magic with autocall off
from IPython.core.magic import register_line_magic
@register_line_magic
def p(line):
print(eval(line))
%p "Again"
You could create a `.config/ipython/profile_default/startup/autoprint.py` file
for your line magic functions so they load automatically.
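For example, the startup file could contain just the magic definition (a sketch;
the filename is only a suggestion, since any `.py` file in that directory is run
at startup):
    # ~/.config/ipython/profile_default/startup/autoprint.py
    from IPython.core.magic import register_line_magic

    @register_line_magic
    def p(line):
        print(line)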
|
Python Pandas: How to filter a dataframe with more than one expression stored in different variables?
Question: I am building a multi-purpose user interface, and I am adding pandas to it.
For this, I need to build expressions from components (stored in variables) which
are defined by the user's choices.
All seems to work fine, but I have hit a dead end. I want the user to be able
to pick several expressions and then combine them to form the new
dataframe. If I only use one expression, everything works:
from pandas import read_csv
df = read_csv("SomeCsv.csv")
b= df[r'ID']
a=(b==r'p')
    Value=df[a] # Works, returning the rows in df whose column 'ID' equals r'p'
But if I want to include more expressions:
from pandas import read_csv
df = read_csv("SomeCsv.csv")
b= df[r'ID']
c=(b==r'p')
d=(b==r'ul')
a=c or d #Breaks at this line
    Value=df[a] # Doesn't work. I would expect the rows in df whose column 'ID' equals r'p' or r'ul'
And throws the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Before asking, I tried all the .any and .all combinations of the expressions I
could think of, and all of them failed.
How to filter this dataframe by columns matching more than one expression
stored in variables?
Answer: As a newcomer to numpy I struggled a bit (no pun intended) with this too. I
believe you want something like this:
>>> df[(df['ID'] == 'p') | (df['ID'] == 'ul')]
The expression must evaluate to a boolean (and the terms must be connected
through **bitwise** operations), which then is used to mask or filter the
corresponding elements.
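Applied to the variables from the question, a minimal sketch (assuming the same
`df`, `c` and `d`) would be:
    a = c | d                                 # bitwise OR combines the boolean Series element-wise
    Value = df[a]

    # or match a column against a list of values directly:
    Value = df[df['ID'].isin(['p', 'ul'])]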
* * *
See also:
* <http://pandas.pydata.org/pandas-docs/dev/indexing.html#boolean-indexing>
* <http://docs.scipy.org/doc/numpy/user/basics.indexing.html#boolean-or-mask-index-arrays>
* <http://stackoverflow.com/a/13572798/89391>
* <http://pandas.pydata.org/pandas-docs/dev/generated/pandas.Series.isin.html>
|
How to check size of the files in a directory with python?
Question: I have a folder that contains 4 text files. I want to write code that
checks the size of the files in my folder and only opens those that have
equal sizes. Does anyone have any idea?
I have already tried this:
import os
d=os.stat('H:/My Documents/211').st_size
Answer: I can't reproduce your error. This
import os
print os.path.getsize('mydata.csv')
print os.stat('mydata.csv').st_size
Yields
359415
359415
I'm guessing that the filename you provide is wrong. This will print the size
of all files in a folder
my_dir = r'path/to/subdir/'
for f in os.listdir(my_dir):
path = os.path.join(my_dir, f)
if os.path.isfile(path):
print os.path.getsize(path)
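To go one step further and only open the files that share a size, one possible
sketch (assuming plain text files) is to group the paths by size first:
    import os
    from collections import defaultdict

    my_dir = r'path/to/subdir/'
    by_size = defaultdict(list)
    for f in os.listdir(my_dir):
        path = os.path.join(my_dir, f)
        if os.path.isfile(path):
            by_size[os.path.getsize(path)].append(path)

    for size, paths in by_size.items():
        if len(paths) > 1:            # at least two files have this exact size
            for path in paths:
                with open(path) as fh:
                    print(fh.read())  # or process the equally-sized files as needed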
|
API tkinter python entry
Question: I have a problem with the Entry widget. I want the user to type any city they
want into the API link. I get an error that I can't convert to a str. Whenever
the user enters a city in the Entry and clicks the Forecast button, the weather
should be printed:
from tkinter import *
import requests
import json
class Application(Frame):
def __init__(self, master=None):
Frame.__init__(self, master)
self.root = master
self.pack()
self.create_widgets()
def create_widgets(self):
self.v = StringVar()
self.e = Entry(self, textvariable=self.v)
self.e.pack(side="left")
self.v.set("Enter City")
self.s = self.v.get()
self.e.focus_set()
self.butn = Button(self)
self.butn["text"] = "Forecast"
self.butn["command"] = self.make_request
self.butn.pack(side="left")
self.QUIT = Button(self, text="QUIT", command=self.root.destroy)
self.QUIT.pack(side="right")
def make_request(self):
r = requests.get("http://api.wunderground.com/api/ab78bcbaca641959/forecast/q/Sweden/" + ???? + ".json")
data = r.json()
for day in data['forecast']['simpleforecast']['forecastday']:
print (day['date']['weekday'] + ":")
print ("Conditions: ", day['conditions'])
print ("High: ", day['high']['celsius'] + "C", "Low: ", day['low']['celsius'] + "C", '\n')
return data
rot = Tk()
rot.geometry("900x650+200+50")
rot.title("The Trip")
app = Application(master=rot)
app.mainloop()
Answer: Use `self.v.get()` in place of `????` and it works
r = requests.get("http://api.wunderground.com/api/ab78bcbaca641959/forecast/q/Sweden/" + self.v.get() + ".json")
By the way: you don't need `self.s = self.v.get()` anymore. That line read the
Entry only once, when the widgets were created, before the user had typed anything.
|
P4Python - p4.run_changes returning empty list
Question: The following code prints an empty list "[]". I am expecting a list of all
changelists between the dates specified. What do I need to fix to get the
changelists?
from P4 import P4,P4Exception
p4 = P4()
p4.port = "perforce:1666"
p4.user = "kvenkatraman"
p4.client = "kvenkatraman_tp"
p4.cwd = "C:\sv"
try:
p4.connect()
#info = p4.run("info")
info = p4.run("changes","-s","submitted","\"//depot/psp/dev/...%402013/06/01,%40now\"")
print info
p4.disconnect()
except P4Exception:
for e in p4.errors:
print e
Regards Kumar
Answer: Try changing `"\"//depot/psp/dev/...%402013/06/01,%40now\""` to
`"//depot/psp/dev/...@2013/06/01,@now"`. P4Python passes each argument straight
to the server (no shell is involved), so the extra quoting and the %40
URL-encoding of @ are sent literally and end up matching nothing.
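Put back into the script from the question, the call would look like:
    info = p4.run("changes", "-s", "submitted", "//depot/psp/dev/...@2013/06/01,@now")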
|
Converting a string representation of an array to an actual array in python
Question: Hi, I'm doing some stuff over a network and wondering if there is any way of
converting the string representation of a Python list back into an actual list. For example
x = "[1,2,3,4]"
converting x to
x_array = [1,2,3,4]
Bonus if it can also work for numpy multidimensional arrays!
Answer: For the normal arrays, use
[`ast.literal_eval`](http://docs.python.org/2/library/ast.html#ast.literal_eval):
>>> from ast import literal_eval
>>> x = "[1,2,3,4]"
>>> literal_eval(x)
[1, 2, 3, 4]
>>> type(literal_eval(x))
<type 'list'>
>>>
`numpy.array`'s though are a little tricky because of how Python renders them
as strings:
>>> import numpy as np
>>> x = [[1,2,3], [4,5,6]]
>>> x = np.array(x)
>>> x
array([[1, 2, 3],
[4, 5, 6]])
>>> x = str(x)
>>> x
'[[1 2 3]\n [4 5 6]]'
>>>
One hack you could use though for simple ones is replacing the whitespace with
commas using [`re.sub`](http://docs.python.org/2/library/re.html#re.sub):
>>> import re
>>> x = re.sub("\s+", ",", x)
>>> x
'[[1,2,3],[4,5,6]]'
>>>
Then, you can use `ast.literal_eval` and turn it back into a `numpy.array`:
>>> x = literal_eval(x)
>>> np.array(x)
array([[1, 2, 3],
[4, 5, 6]])
>>>
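If you control both ends of the connection, a more robust approach (a sketch) is
to serialize the list form of the numpy array instead of its display string:
    >>> import numpy as np
    >>> from ast import literal_eval
    >>> x = np.array([[1, 2, 3], [4, 5, 6]])
    >>> payload = repr(x.tolist())    # '[[1, 2, 3], [4, 5, 6]]' - unambiguous to parse back
    >>> np.array(literal_eval(payload))
    array([[1, 2, 3],
           [4, 5, 6]])
    >>>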
|