mysql-connector-python GBK encoding error
Question: I have a data table that uses GBK encoding. Sometimes an INSERT
statement with a unicode string fails with this exception:
mysql.connector.errors.ProgrammingError: Failed processing pyformat-
parameters; 'gbk' codec can't encode character u'\u2022' in position 14:
illegal
It is caused by the mysql-connector-python library encoding a unicode object
without the 'ignore' error handler. But I cannot modify the library code. How
can I solve this problem?
Answer: The comment by hago already mentioned filtering out Unicode characters
which are not part of GBK, but I would like to give a full example using MySQL
Connector/Python.
# -*- coding: utf-8 -*-
import mysql.connector
cnx = mysql.connector.connect(
database='test', charset='gbk', use_unicode=False
)
cur = cnx.cursor()
cur.execute("DROP TABLE IF EXISTS gbktest")
table = (
"CREATE TABLE gbktest ("
"id INT AUTO_INCREMENT KEY, "
"c1 VARCHAR(40)"
") CHARACTER SET 'gbk'"
)
cur.execute(table)
data = {
'c1': u'\u2022国家标准'.encode('gbk', 'ignore')
}
cur.execute("INSERT INTO gbktest (c1) VALUES (%(c1)s)", data)
cnx.commit()
cur.execute("SELECT id, c1 FROM gbktest")
rows = cur.fetchall()
# Terminal using UTF-8 encoding:
#print rows[0][1].decode('gbk')
# Terminal using GBK encoding:
print rows[0][1]
The last two lines need to be commented/uncommented depending on whether your
Terminal is using UTF-8 or GBK encoding.
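The key step is the explicit `.encode('gbk', 'ignore')` before the value reaches
the connector; the 'ignore' error handler silently drops any character that has
no GBK encoding, such as the bullet from the question:
>>> u'\u2022abc'.encode('gbk', 'ignore')
'abc'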
|
Pycharm and bitbucket plugin
Question: I have installed the Bitbucket plugin to connect PyCharm with
Bitbucket. In the VCS menu in PyCharm I tried: Import into Version Control ->
Share project (with the Bitbucket icon) -> named it like my project -> marked
that it is a Git repository -> clicked OK, and I then get the error message
"Share project on bitbucket - push failed"
Log 11:11:33.157: cd /Users/apple/Documents/Projects/Python/Study_python2
11:11:33.157: git show --name-status -M --pretty=format:%x01%h%x02%H%x02%ct%x02%an%x02%at%x02%ae%x02%cn%x02%ce%x02%p%x02%d%x02%s%x02%b%x02%B%x03 --encoding=UTF-8 5847233
11:11:33.066: cd /Users/apple/Documents/Projects/Python/Study_python2
11:11:33.066: git log HEAD --branches --remotes --tags --max-count=340 --date-order --pretty=format:%x01%h%x02%ct%x02%p%x02%an%x03 --encoding=UTF-8 --full-history --sparse -- .
* I have tried Import into Version Control -> Share project on GitHub - and that works
* Also I have a local .git in the project dir
**[Update]** After setting up my repo and making the initial commit I hit a new
error in PyCharm when I tried to push something: `Push failed: fatal: The remote
end hung up unexpectedly`. I added my public key to Bitbucket but that didn't
solve the problem. The solution was to connect to Bitbucket via HTTPS instead of SSH:
* Go to Bitbucket and copy the HTTPS link to your repo
* Edit the `url` in the `.git/config` file in your project dir to use the HTTPS link
Answer: I've faced the same problem, here is the solution:
<http://www.youtube.com/watch?v=klfLSRXUOzY>
The main problem here is that no remote is added to the project by PyCharm /
PHPStorm / IntelliJ. This means you have to manually add the remote via `git
remote add <your remote here>`.
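For example (the repository URL below is a placeholder for your own):
git remote add origin https://bitbucket.org/<user>/<repo>.git
git push -u origin master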
|
Error on Tkinter import
Question: I'm writing a Tkinter app in Python 2.7, but I'm running into some troubles
that I haven't had before. From what I can tell, it looks like the Tkinter
module is getting imported for the `__init__` function in my class, but not
for the other functions. Here's a simplified version of what I've got:
from Tkinter import *
class App:
def __init__(self):
self.master = Tk()
self.window = Frame(self.master)
self.window.grid()
self.BuildFrames()
self.master.mainloop()
def BuildFrames(self):
frames = []
frames.append(Frame(self.window,borderwidth=2,padx=10,pady=10))
# more code follows...
for Frame in frames:
Frame.grid()
App()
When I run this, I get the following error:
Traceback (most recent call last):
File "myApp.py", line 131, in <module>
App()
File "myApp.py", line 12, in __init__
self.BuildFrames()
File "myApp.py", line 26, in BuildFrames
frame1 = Frame(self.window,borderwidth=2,padx=10,pady=10)
UnboundLocalError: local variable 'Frame' referenced before assignment
From what I can tell, the `Frame` function isn't being recognized as a Tkinter
method within the `BuildFrames()` function. How on earth could it be
recognized in `__init__` but not within `BuildFrames`???
I can fix the problem by changing the import to:
import Tkinter as Tk
and then adding a `Tk.` in front of all the Tkinter methods, but would rather
avoid it (and I shouldn't have to do this anyway!)
I must be missing something big about the way the import works, but I could
swear this same type of code has worked for me before. Can someone help me out
with this one?
Answer: Somewhere in your code you assign to `Frame`; here it is the loop `for
Frame in frames:` in `BuildFrames`, which makes `Frame` a local variable for the
whole function. Rename that variable to something other than `Frame`.
You are doing something like the following code:
>>> def f():
... a + 1
... a = 0
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in f
UnboundLocalError: local variable 'a' referenced before assignment
[Why am I getting an UnboundLocalError when the variable has a
value?](http://docs.python.org/2/faq/programming.html#why-am-i-getting-an-
unboundlocalerror-when-the-variable-has-a-value)
**EDIT**
Change your BuildFrames as follows:
def BuildFrames(self):
frames = []
frames.append(Frame(self.window,borderwidth=2,padx=10,pady=10))
# more code follows...
for frame in frames:
frame.grid()
|
more efficient way to calculate distance in numpy?
Question: I have a question on how to calculate distances in numpy as fast as possible:
def getR1(VVm,VVs,HHm,HHs):
t0=time.time()
R=VVs.flatten()[numpy.newaxis,:]-VVm.flatten()[:,numpy.newaxis]
R*=R
R1=HHs.flatten()[numpy.newaxis,:]-HHm.flatten()[:,numpy.newaxis]
R1*=R1
R+=R1
del R1
print "R1\t",time.time()-t0, R.shape, #11.7576191425 (108225, 10500)
print numpy.max(R) #4176.26290975
# uses 17.5Gb ram
return R
def getR2(VVm,VVs,HHm,HHs):
t0=time.time()
precomputed_flat = numpy.column_stack((VVs.flatten(), HHs.flatten()))
measured_flat = numpy.column_stack((VVm.flatten(), HHm.flatten()))
deltas = precomputed_flat[None,:,:] - measured_flat[:, None, :]
#print time.time()-t0, deltas.shape # 5.861109972 (108225, 10500, 2)
R = numpy.einsum('ijk,ijk->ij', deltas, deltas)
print "R2\t",time.time()-t0,R.shape, #14.5291359425 (108225, 10500)
print numpy.max(R) #4176.26290975
# uses 26Gb ram
return R
def getR3(VVm,VVs,HHm,HHs):
from numpy.core.umath_tests import inner1d
t0=time.time()
precomputed_flat = numpy.column_stack((VVs.flatten(), HHs.flatten()))
measured_flat = numpy.column_stack((VVm.flatten(), HHm.flatten()))
deltas = precomputed_flat[None,:,:] - measured_flat[:, None, :]
#print time.time()-t0, deltas.shape # 5.861109972 (108225, 10500, 2)
R = inner1d(deltas, deltas)
print "R3\t",time.time()-t0, R.shape, #12.6972110271 (108225, 10500)
print numpy.max(R) #4176.26290975
#Uses 26Gb
return R
def getR4(VVm,VVs,HHm,HHs):
import scipy.spatial.distance as spdist
t0=time.time()
precomputed_flat = numpy.column_stack((VVs.flatten(), HHs.flatten()))
measured_flat = numpy.column_stack((VVm.flatten(), HHm.flatten()))
R=spdist.cdist(precomputed_flat,measured_flat, 'sqeuclidean') #.T
print "R4\t",time.time()-t0, R.shape, #17.7022118568 (108225, 10500)
print numpy.max(R) #4176.26290975
# uses 9 Gb ram
return R
def getR5(VVm,VVs,HHm,HHs):
import scipy.spatial.distance as spdist
t0=time.time()
precomputed_flat = numpy.column_stack((VVs.flatten(), HHs.flatten()))
measured_flat = numpy.column_stack((VVm.flatten(), HHm.flatten()))
R=spdist.cdist(precomputed_flat,measured_flat, 'euclidean') #.T
print "R5\t",time.time()-t0, R.shape, #15.6070930958 (108225, 10500)
print numpy.max(R) #64.6240118667
# uses only 9 Gb ram
return R
def getR6(VVm,VVs,HHm,HHs):
from scipy.weave import blitz
t0=time.time()
R=VVs.flatten()[numpy.newaxis,:]-VVm.flatten()[:,numpy.newaxis]
blitz("R=R*R") # R*=R
R1=HHs.flatten()[numpy.newaxis,:]-HHm.flatten()[:,numpy.newaxis]
blitz("R1=R1*R1") # R1*=R1
blitz("R=R+R1") # R+=R1
del R1
print "R6\t",time.time()-t0, R.shape, #11.7576191425 (108225, 10500)
print numpy.max(R) #4176.26290975
return R
results in the following times:
R1 11.7737319469 (108225, 10500) 4909.66881791
R2 15.1279799938 (108225, 10500) 4909.66881791
R3 12.7408981323 (108225, 10500) 4909.66881791
R4 17.3336868286 (10500, 108225) 4909.66881791
R5 15.7530870438 (10500, 108225) 70.0690289494
R6 11.670968771 (108225, 10500) 4909.66881791
R5 gives sqrt((VVm-VVs)^2+(HHm-HHs)^2), while the others give
(VVm-VVs)^2+(HHm-HHs)^2. This is not really important, since further on in my
code I take the minimum of R[i,:] for each i, and sqrt does not influence which
value is the minimum anyway (if I am interested in the distance itself, I just
take sqrt(value) instead of applying sqrt over the entire array), so there is
really no timing difference due to that.
The question remains: how come the first solution is the best? (The reason the
second and third are slower is that deltas=... takes 5.8 seconds, which is also
why those two methods take 26 GB.) And why is sqeuclidean slower than
euclidean?
sqeuclidean should just do (VVm-VVs)^2+(HHm-HHs)^2, but I think it does
something different. Does anyone know how to find the source code (C or whatever
is at the bottom) of that method? I suspect it does
sqrt((VVm-VVs)^2+(HHm-HHs)^2)^2 (the only reason I can think of why it would be
slower than (VVm-VVs)^2+(HHm-HHs)^2 - I know it is a weak reason; does anyone
have a more logical one?).
Since I know nothing of C, how would I inline this with scipy.weave? And is that
code compilable normally, like you do with Python, or do I need special stuff
installed for that?
Edit: OK, I tried it with scipy.weave.blitz (the R6 method), and that is slightly
faster, but I assume someone who knows more C than me can still improve this
speed? I just took the lines of the form a+=b or a*=b, looked up how they would
be written in C, and put them in the blitz statements. I guess that if I also
moved the lines with flatten and newaxis into C, it would go faster still, but I
don't know how to do that (someone who knows C, can you maybe explain?). Right
now, the difference between the blitz version and my first method is not big
enough to really be caused by C vs numpy, I guess?
I guess the other methods, like the one with deltas=..., could also go much
faster if I put them in C?
Answer: Whenever you have multiplications and sums, try to use one of the dot product
functions or `np.einsum`. Since you are preallocating your arrays, rather than
having different arrays for horizontal and vertical coordinates, stack them
both together:
precomputed_flat = np.column_stack((svf.flatten(), shf.flatten()))
measured_flat = np.column_stack((VVmeasured.flatten(), HHmeasured.flatten()))
deltas = precomputed_flat - measured_flat[:, None, :]
From here, the simplest would be:
dist = np.einsum('ijk,ijk->ij', deltas, deltas)
You could also try something like:
from numpy.core.umath_tests import inner1d
dist = inner1d(deltas, deltas)
* * *
There is of course also SciPy's spatial module
[`cdist`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html):
from scipy.spatial.distance import cdist
dist = cdist(precomputed_flat, measured_flat, 'euclidean')
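If you need the squared distances that the rest of your code expects, `cdist`
also accepts `'sqeuclidean'` directly (though, oddly, your own timings show it
coming out slower than `'euclidean'`):
dist_sq = cdist(precomputed_flat, measured_flat, 'sqeuclidean')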
* * *
**EDIT** I cannot run tests on such a large dataset, but these timings are
rather enlightening:
len_a, len_b = 10000, 1000
a = np.random.rand(2, len_a)
b = np.random.rand(2, len_b)
c = np.random.rand(len_a, 2)
d = np.random.rand(len_b, 2)
In [3]: %timeit a[:, None, :] - b[..., None]
10 loops, best of 3: 76.7 ms per loop
In [4]: %timeit c[:, None, :] - d
1 loops, best of 3: 221 ms per loop
For the above smaller dataset, I can get a slight speed up over your method
with `scipy.spatial.distance.cdist` and match it with `inner1d`, by arranging
data differently in memory:
precomputed_flat = np.vstack((svf.flatten(), shf.flatten()))
measured_flat = np.vstack((VVmeasured.flatten(), HHmeasured.flatten()))
deltas = precomputed_flat[:, None, :] - measured_flat
import scipy.spatial.distance as spdist
from numpy.core.umath_tests import inner1d
In [13]: %timeit r0 = a[0, None, :] - b[0, :, None]; r1 = a[1, None, :] - b[1, :, None]; r0 *= r0; r1 *= r1; r0 += r1
10 loops, best of 3: 146 ms per loop
In [14]: %timeit deltas = (a[:, None, :] - b[..., None]).T; inner1d(deltas, deltas)
10 loops, best of 3: 145 ms per loop
In [15]: %timeit spdist.cdist(a.T, b.T)
10 loops, best of 3: 124 ms per loop
In [16]: %timeit deltas = a[:, None, :] - b[..., None]; np.einsum('ijk,ijk->jk', deltas, deltas)
10 loops, best of 3: 163 ms per loop
|
pymssql (python module) losing item when fetching data
Question: I have a database named "sina2013", and its columns are Title and
Content. Now I want to use the pymssql module to get the data, using the Title
as the filename of a txt file and the Content as the content of that txt file.
The strange thing is that the number of files created is less than the number of
items in the database. Where is the error? The code I have tried is:
import pymssql
conn = pymssql.connect(...)
cur = conn.cursor()
cur.execute('SELECT Title,Content FROM sina2013')
count=len(cur.fetchall()) #Will return the right number :5913
for Title,Content in cur:
filename=file(str(Title)+r'.txt',r'w')
filename.write(Content )
filename.close()
cur.close()
The number of txt files is less than it should be. What is the reason?
Answer: The earlier `cur.fetchall()` call (used to get the count) already
consumes all rows of the result set, so iterating over `cur` afterwards yields
little or nothing. Dropping that call and changing your for loop into this:
# cursor fetchall() method returns all rows from a query
for Title,Content in cur.fetchall():
... will fix the issue.
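A minimal sketch of the corrected loop (note also that duplicate titles, or
titles containing characters that are illegal in file names, would silently
reduce the number of files written):
rows = cur.fetchall() # fetch once; the cursor is exhausted afterwards
print len(rows) # should report 5913
for Title, Content in rows:
    f = file(str(Title) + '.txt', 'w')
    f.write(Content)
    f.close()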
|
Python PIL TypeError: integer argument expected, got float
Question: I keep getting this error when running a paste script in Python 3.x:
TypeError: integer argument expected, got float
from PIL import Image
img=Image.open('C:\Mine.jpg','r')
img_w,img_h=img.size
background = Image.new('RGBA', (1440,900), (255, 255, 255, 255))
bg_w,bg_h=background.size
offset=((bg_w-img_w)/2,(bg_h-img_h)/2)
background.paste(img,offset)
background.save('C:\new.jpg')
Error MSG:
Traceback (most recent call last):
File "C:\Users\*****\workspace\Canvas Imager\src\Imager.py", line 7, in <module>
background.paste(img,offset)
File "C:\Python33\lib\site-packages\PIL\Image.py", line 1127, in paste
self.im.paste(im, box)
TypeError: integer argument expected, got float
I see that an integer is expected but a float is produced in the end. What can
I do to make it an int instead?
Answer: In Python 3, to get an integer result from a division you need to use `//`
instead of `/`:
offset=((bg_w-img_w)//2,(bg_h-img_h)//2)
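A quick comparison of the two operators in Python 3:
>>> (900 - 401) / 2 # true division always returns a float
249.5
>>> (900 - 401) // 2 # floor division keeps it an int
249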
|
Can't import turtle module in Python 2.x and Python 3.x
Question: I want to play with the [turtle](http://docs.python.org/2/library/turtle.html)
module in Python. But when I import the turtle module, I get the following
error:
$ python
Python 2.7.3 (default, Sep 26 2012, 21:51:14)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import turtle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "turtle.py", line 3, in <module>
myTurtle = turtle.Turtle()
AttributeError: 'module' object has no attribute 'Turtle'
and for Python 3.x:
$ python3
Python 3.2.3 (default, Sep 30 2012, 16:41:36)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import turtle
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "turtle.py", line 3, in <module>
myTurtle = turtle.Turtle()
AttributeError: 'module' object has no attribute 'Turtle'
I am working under Kubuntu Linux 12.10. I've played with the Tkinter GUI with
no problem. What is happening with the turtle module?
Answer: You've called a script `turtle.py`, which is shadowing the `turtle` module in
the standard library. Rename it.
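After renaming it, you can confirm which file Python actually loads; a path
inside your project directory would mean a local file is still shadowing the
standard library (the path shown here is just an example):
>>> import turtle
>>> turtle.__file__
'/usr/lib/python2.7/lib-tk/turtle.py'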
|
Issue with python module importing
Question: My directory tree looks like this:
PyPong
+ Main.py
+ Rectangle.py
Now, I have imported Rectangle.py like this in Main.py
import pygame, sys, Rectangle
However, whenever I try making an instance of the class from Rectangle.py, like here
rectangles.append(Rectangle(400 + x * rectangleWidth + x * 10, 30 + y * rectangleHeight + y * 10, rectangleWidth, rectangleHeight, (randint(0, 255), randint(0, 255), randint(0, 255)), screen))
into this array
rectangles = []
I receive this error:
TypeError: 'module' object is not callable
Any help is greatly appreciated
Also, here is the full Rectangle.py
class Rectangle:
y = 0
x = 0
width = 0
height = 0
color = 0
screen = 0
GO_UP = 1
GO_DOWN = 2
GO_LEFT = 3
GO_RIGHT = 4
closeX = 0
closeY = 0
removed = False
def __init__(self, x, y, width, height, color, screen):
self.x = x
self.y = y
self.height = height
self.width = width
self.color = color
self.screen = screen
def render(self):
pygame.draw.rect(self.screen, self.color, (self.x, self.y, self.width, self.height), 0)
pass
def intersects(self, x, y, r):
#TOP SIDE
self.closeX = 0
self.closeY = 0
intersectsTop = True;
if x <= self.x: self.closeX = self.x
elif x >= self.x + self.width: self.closeX = self.x + self.width
else: self.closeX = self.x
self.closeY = self.y
if abs(x - self.closeX) >= r: intersectsTop = False
if abs(y - self.closeY) >= r: intersectsTop = False
if intersectsTop:
self.remove()
return self.GO_UP
#LEFT SIDE
self.closeX = 0
self.closeY = 0
intersectsLeft = True
if y <= self.y: self.closeY = self.y
elif y >= self.y + self.height: self.closeY = self.y + self.height
else: self.closeY = y;
self.closeX = self.x
if abs(x - self.closeX) >= r: intersectsLeft = False
if abs(y - self.closeY) >= r: intersectsLeft = False
if intersectsLeft:
self.remove()
return self.GO_LEFT
#RIGHT SIDE
self.closeX = 0
self.closeY = 0
intersectsRight = True
if y <= self.y: self.closeY = self.y
elif y >= self.y + self.height: self.closeY = self.y + self.height
else: self.closeY = y;
self.closeX = self.x + self.width
if abs(x - self.closeX) >= r: intersectsRight = False
if abs(y - self.closeY) >= r: intersectsRight = False
if intersectsRight:
self.remove()
return self.GO_RIGHT
#BOTTOM SIDE
self.closeX = 0
self.closeY = 0
intersectsBottom = True;
if x <= self.x: self.closeX = self.x
elif x >= self.x + self.width: self.closeX = self.x + self.width
else: self.closeX = self.x
self.closeY = self.y + self.height
if abs(x - self.closeX) >= r: intersectsBottom = False
if abs(y - self.closeY) >= r: intersectsBottom = False
if intersectsBottom:
self.remove()
return self.GO_DOWN
pass
def remove(self):
self.removed = True
pass
Answer: You need to import the class from the module:
from Rectangle import Rectangle
or refer to the class as an attribute of the module you imported:
rectangles.append(Rectangle.Rectangle(400 + x * rectangleWidth + x * 10, 30 + y * rectangleHeight + y * 10, rectangleWidth, rectangleHeight, (randint(0, 255), randint(0, 255), randint(0, 255)), screen))
This is one reason for the [Python style guide
PEP-8](http://www.python.org/dev/peps/pep-0008/#package-and-module-names) to
recommend that you use all-lowercase names for your module files, to avoid
confusing the module with the contents of the module.
|
Refactor Python Code
Question: I was wondering if anyone could help me refactor the following Python code:
In this example, `endDate` is a string like such: `"2012-08-22"`
dateArray = [int(x) for x in endDate.split('-')]
event.add('dtend', datetime(dateArray[0], dateArray[1], dateArray[2]))
I appreciate it!
Answer:
from datetime import datetime
event.add('dtend', datetime.strptime(endDate, '%Y-%m-%d'))
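For example:
>>> from datetime import datetime
>>> datetime.strptime("2012-08-22", '%Y-%m-%d')
datetime.datetime(2012, 8, 22, 0, 0)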
|
Look ahead without itertools
Question: I am looking for a way to look at the next line in a text file when
the first characters of the current line are only the letters A, G, C, U or N. I
created a dict of all possibilities in which I can look. I have tried itertools,
but to no avail, and I have heard that itertools would keep everything in
memory, which would be most unproductive since my files are rather large (>10GB
sometimes). I would really appreciate help; I have wandered here for days
looking for an answer. I was thinking of trying regex, but I do not know how to.
I really want to find the most productive way for big files. Here is my
(pitiful) attempt.
I have taken part of an answer found at [Python for-loop look-
ahead](http://stackoverflow.com/questions/4197805/python-for-loop-look-ahead):
f2 = open(path to file)
from itertools import tee
from itertools import permutations
def pairwise(iter):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iter)
next(b, None)
return zip(a, b)
p = permutations(['A','G','U','C','N'])
for per in p:
per = ''.join(per)
dic={'a':[]}
dic['a'].append(per)
for line, next_line in pairwise(f2):
if line in dic['a']:
letter= next_line.split()
unilist.append('%s' %next_line)
print (unilist)
It appears the problem lies in the `for line, next_line in pairwise(f2)` line. I
would be truly grateful for any tips and advice.
Edit: I meant the characters in the line and not the ones in the next_line.
Answer: You have several problems with your code:
* You discard `dic` each permutation loop iteration and re-create it from scratch. Build it once:
dic={'a':[''.join(per) for per in permutations('AGUCN')]}
but for fast membership tests (`if something in sequence`), use a `set`
instead:
dic={'a': set(''.join(per) for per in permutations('AGUCN'))}
Note that it is not clear why you need a dictionary with one key; a simple
variable would do fine here.
* You read a file but don't strip the newlines from the lines. With a newline, your `if line in dic['a']` test will never return True, because the permutations you generate contain no newlines.
Just treat the file as an iterable, call `next()` on it to get the next line:
from itertools import permutations
patterns = set(''.join(per) for per in permutations('AGUCN'))
unilist = []
for line in f2:
if line.strip() in patterns:
unilist.append(next(f2).strip())
or even:
from itertools import permutations
patterns = set(''.join(per) for per in permutations('AGUCN'))
unilist = [next(f2).strip() for line in f2 if line.strip() in patterns]
You are not really looking ahead. You are looking behind; if the previous line
matched a condition, the next line is appended.
|
How to convert UTC-4 to US/Eastern in python?
Question: I read time stamps from a text file. These time stamps are in UTC-4.
I need to convert them to US/Eastern.
import datetime
datetime_utc4 = datetime.datetime.strptime("12/31/2012 16:15", "%m/%d/%Y %H:%M")
How do I convert it to US/Eastern? A one-line answer would be best.
Note: my original question stated EST to EDT. But it does not change the
essence of the question, which is how to go from one time zone to another.
Upon some reading (following comments) I gather that python (pytz in
particular) does not treat EST and EDT as separate time zones, rather as two
flavors of US/Eastern. But this is an implementation detail. It is common to
refer to EST and EDT as two different time zones, see e.g.
[here](http://www.timeanddate.com/library/abbreviations/timezones/na/edt.html).
Answer: Based on your update and comments, I now understand that you have data that is
fixed at UTC-4 and you want to correct this so that it is valid in US Eastern
Time, including both EST/EDT where appropriate. Here is how you do that with
[pytz](http://pytz.sourceforge.net/).
from datetime import datetime
import pytz
dt = datetime.strptime("12/31/2012 16:15", "%m/%d/%Y %H:%M") \
.replace(tzinfo = pytz.FixedOffset(-240)) \
.astimezone(pytz.timezone('America/New_York'))
Note that I used the `America/New_York` time zone id. This is the most correct
form of identifier. You could instead use `US/Eastern` and it would work just
fine, but be aware that this is an alias, and it is just there for backwards
compatibility.
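For the sample timestamp this yields 15:15, since late December falls under EST
(UTC-5), one hour behind the fixed UTC-4 of the input:
>>> print dt
2012-12-31 15:15:00-05:00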
|
How to install a package using the python-apt API
Question: I'm quite a newbie when it comes to Python, thus I beg forgiveness
beforehand :). That said, I'm trying to make a script that, among other things,
installs some Linux packages. First I tried to use subprocess as explained
[here](http://stackoverflow.com/questions/8481943/using-apt-get-install-xxx-
inside-python-script). While this can eventually work, I stumbled upon the
[python-apt API](http://apt.alioth.debian.org/python-apt-doc/) and since I'm
not a big fan of re-inventing the wheel, I decided to give it a try.
Problem comes when trying to find examples/tutorials on installing a package
using python-apt. Searching the documentation I found the
[PackageManager](http://apt.alioth.debian.org/python-apt-
doc/library/apt_pkg.html#apt_pkg.PackageManager) class that has some methods
to install a package. I tried some simple code to get this working:
apt_pkg.PackageManager.install("python")
This does not seem to work that easily; the install method expects an
apt_pkg.PackageManager instance rather than a plain string. Looking a bit more,
[I found this example](http://mancoosi.org/~abate/aptget-installation-plan) that
looks promising, but I'm reluctant to use it since I don't really understand
some of what is happening there.
So, has anyone installed a package using python-apt, or should I go for the
plain-old subprocess style?
Thanks!
Answer: It's recommended to use the `apt` module from the `python-apt` Debian package.
This is a higher level wrapper around the underlying C/C++ `libapt-xxx`
libraries and has a Pythonic interface.
Here's an example script which will install the `libjs-yui-doc` package:
#!/usr/bin/env python
# aptinstall.py
import apt
import sys
pkg_name = "libjs-yui-doc"
cache = apt.cache.Cache()
cache.update()
pkg = cache[pkg_name]
if pkg.is_installed:
print "{pkg_name} already installed".format(pkg_name=pkg_name)
else:
pkg.mark_install()
try:
cache.commit()
except Exception, arg:
print >> sys.stderr, "Sorry, package installation failed [{err}]".format(err=str(arg))
As with the use of `apt-get`, this must be run with superuser privileges to
access and modify the APT cache.
$ sudo ./aptinstall.py
If you're attempting a package install as part of a larger script, it's
probably a good idea to only raise to root privileges for the minimal time
required.
You can find a small example in the
`/usr/share/pyshared/apt/progress/gtk2.py:_test()` function showing how to
install a package using a GTK front-end.
|
Getting an "expected-doctype-but-got-chars" error when using html5lib in Python
Question: This is my code:
from html5lib import treebuilders, HTMLParser
parser = HTMLParser(tree=treebuilders.getTreeBuilder("lxml"))
parser.parse("hello world!")
print parser.errors
What causes the error? The html5lib documentation itself uses this:
import html5lib
parser = html5lib.HTMLParser(tree=html5lib.getTreeBuilder("dom"))
minidom_document = parser.parse("<p>Hello World!")
Answer: `HTMLParser.errors` contains all parse errors from parsing the document.
html5lib handles all parse errors gracefully by default (and yes, the
documentation does contain examples that generate parse errors; the aim is to
document the API, not to show good HTML usage!). Hence, unless you are for some
reason concerned about parse errors (and unless you have a good reason to be,
don't be), its value is totally irrelevant.
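For example, feeding a complete document with a doctype makes that particular
entry go away (a sketch; the exact contents of `errors` vary with the html5lib
version):
parser = html5lib.HTMLParser(tree=html5lib.getTreeBuilder("dom"))
parser.parse("<!DOCTYPE html><html><body><p>Hello World!</p></body></html>")
print parser.errors # no expected-doctype-but-got-chars entry this time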
|
Python: Appending API Calls to Spreadsheet
Question: I'll start out by saying I'm very new to Python and programming in general but
am very hands on in my learning style.
I would like to use Python to:
1. Gather an entire column of a spreadsheet into a list
2. Call to Klout's API to (a) Get the Klout User ID and (b) Get the Klout Score
3. Append those two variables in columns in the same spreadsheet
I have an API key, the spreadsheet data, Python, and the klout Python scripts
Thanks for your help!
UPDATE
Thanks to Lonely for his help with getting me this far. Now I just need to
write my score results to a spreadsheet.
from xlrd import open_workbook
from klout import *
k=Klout('my_API_key')
book=open_workbook('path_to_file')
sheet0=book.sheet_by_index(0)
List1=sheet0.col_values(0)
for screen_name in List1:
kloutId = k.identity.klout(screenName=screen_name).get('id')
score = k.user.score(kloutId=kloutId).get('score')
print screen_name, score
UPDATE 2
I have successfully put Twitter screen names back into a new spreadsheet, but
can't seem to get the scores to display correctly. It's also stopping at 30
(which happens to be Klout's requests-per-second limit). Here's what I have
right now.
from xlrd import open_workbook
import xlwt
from klout import *
k=Klout('My_API_Key')
book=open_workbook('Path_to_My_File')
sheet0=book.sheet_by_index(0)
List1=sheet0.col_values(0)
for screen_name in List1:
kloutId = k.identity.klout(screenName=screen_name).get('id')
score = k.user.score(kloutId=kloutId).get('score')
wbk = xlwt.Workbook()
sheet = wbk.add_sheet('sheet 1')
i = -1
for n in List1:
i = i+1
sheet.write(i,0,n)
b = 0
score1 = int(score)
for x in xrange(score1):
b = b+1
sheet.write(b,1,x)
wbk.save("KScores.xls")
FINAL WORKING VERSION
With a ton of help from a personal contact, to whom I credit most of the
writing of this script, I now have a completed .py script.
from xlrd import open_workbook
import xlwt
from time import sleep
from klout import *
klout_con = Klout('API_KEY_HERE', secure=True)
book = open_workbook('PATH_TO_YOUR_FILE_HERE')
sheet0 = book.sheet_by_index(0)
List1 = sheet0.col_values(0)
wbk = xlwt.Workbook()
sheet = wbk.add_sheet('sheet 1')
row = 0
for screen_name in List1:
klout_id = klout_con.identity.klout(screenName=screen_name).get('id')
score = klout_con.user.score(kloutId=klout_id).get('score')
sheet.write(row, 0, screen_name)
sheet.write(row, 1, score)
row += 1
print screen_name
sleep(1)
wbk.save('KScores.xls')
Thanks to the community and Adam who both helped me put this thing together.
Was an excellent starter project.
Answer: To read Excel, use xlrd.
Take all the ids in a list.
Read each of them by iterating, e.g. `for i in list`, and call the Klout API
like:
score = k.user.score(kloutId=kloutId).get('score')
However, a sample of your data would have been great...
|
Compile thread-safe tcl for python on Windows
Question: I'm doing a project with Python and I need to put something in a
thread. It turned out that if you do something that uses Tk in a thread, it will
somehow crash. The error is:
TclError: out of stack space (infinite loop?)
I searched on Google and I think this is perhaps because Tcl is not thread-safe.
I checked whether my Tcl build is threaded by running:
import Tkinter
Tkinter.Tk().getvar("tcl_platform(threaded)")
It is said that recompiling Tcl with --enable-threads could fix this problem. My
question is how to recompile Tcl on Windows, and how to replace the current one
with the compiled one. I'm using Python 2.7 and Tcl 8.5.
Thanks
Answer: **Summary:** Each Tk widget must only be used from a single thread; there's a
lot of thread-specific data in use inside the implementation, so that's a
really hard requirement. Your hacking is not going to get around this.
**Details:** Python communicates with Tcl under the covers to work with Tk,
and threaded Tcl is designed to be strongly thread-bound (so as to avoid
having things like the GIL). It's possible to use a non-threaded build, but
then you get hit by problems with the code to guard against stack overflows (a
very nasty hack under the covers) which gets confused by the existence of
multiple C stacks when it thinks it is unthreaded. **_This is where the
particular error you report comes from._** The stack check code is removed in
Tcl 8.6 (which uses a “stackless” implementation) but that's not likely to
help as no attempts are made to deal with inter-thread locking issues unless
you're in a threaded build (and that takes you back into thread-specific data
issues).
Tk in unthreaded mode makes lots of assumptions about global shared data and
is so really unsafe to use from multiple threads (the quality of the threading
code involved is not what you might wish, with much of it dating from a time
when the underlying OS libraries were often not very good at thread handling
either). Tk in threaded mode uses TSD extensively; the only way to use Tk in
threaded mode is for each thread to have its own main window and event loop,
and to _never_ mix things up between them.
The easiest thing is to keep one thread dedicated to managing the GUI (often
the main application thread) and to move all non-GUI work into other threads.
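A minimal sketch of that pattern in Python 2 (hence the `Tkinter` and `Queue`
module names): all widget calls stay on the GUI thread, and worker threads hand
results over through a queue that the GUI thread polls with `after`:
import Tkinter
import threading
import Queue
q = Queue.Queue()
def worker():
    # non-GUI work happens here; results go into the queue, never into widgets
    q.put("result from worker")
def poll_queue():
    try:
        label.config(text=q.get_nowait()) # safe: runs on the GUI thread
    except Queue.Empty:
        pass
    root.after(100, poll_queue) # re-schedule the poll
root = Tkinter.Tk()
label = Tkinter.Label(root, text="waiting...")
label.pack()
threading.Thread(target=worker).start()
poll_queue()
root.mainloop()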
|
How to count number of files available in a directory recursively in Python using rsync?
Question: I want to count the number of files recursively in a remote server
path using rsync in Python. I tried doing it like this:
def find_remote_files(source, password):
cmdline = ['sshpass', '-p', password, 'rsync', '--recursive', source]
with open(os.devnull, "w") as devnull:
proc = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=devnull)
try:
for entry in proc.stdout:
items = entry.strip().split(None, 4)
if not items[0].startswith("d"):
yield items[4]
proc.wait()
except:
# On any exception, terminate process and re-raise exception.
proc.terminate()
proc.wait()
raise
It works in situations where I have few files. But if there are more than 3000
files, rsync takes a long time, since I store everything in a list and then
calculate its length. That's why I wanted to know if there is an `rsync`
command just to count the files.
Answer: I would use a different approach using fabric, which is a great tool for
executing remote commands.
from fabric.api import run, env
env.host_string = 'example.org'
output = run('find /tmp -type f | wc -l')
num_files = int(output)
Now you have the number of files in your variable `num_files`. I was just
using the `find` command to search for files recursively beginning at
directory `/tmp`, and counted the returned lines with `wc -l`.
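If you would rather not add a fabric dependency, the same `find | wc -l` trick
works over plain ssh with the subprocess module (a sketch reusing the sshpass
approach from the question; host and path are placeholders):
import subprocess
def count_remote_files(host, path, password):
    # run find/wc on the remote side and read back the single-line count
    output = subprocess.check_output(
        ['sshpass', '-p', password, 'ssh', host,
         'find %s -type f | wc -l' % path])
    return int(output)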
|
Python how to filter string based on substring
Question: I'm new to Python, coming from the Java world.
1. I'm trying to write a simple Python function that prints out only the data rows of a CSV or "arff" file. The non-data rows begin with one of these 3 patterns (@ , [@ , [%), and such rows should not be printed.
2. Example data file snippet:
% 1. Title: Iris Plants Database
%
% 2. Sources:
% (a) Creator: R.A. Fisher
% (b) Donor: Michael Marshall (MARSHALL%[email protected])
% (c) Date: July, 1988
@RELATION iris
@ATTRIBUTE sepallength REAL
@ATTRIBUTE sepalwidth REAL
@ATTRIBUTE petallength REAL
@ATTRIBUTE petalwidth REAL
@ATTRIBUTE class {Iris-setosa,Iris-versicolor,Iris-virginica}
@DATA
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
Python script:
import csv
def loadCSVfile (path):
csvData = open(path, 'rb')
spamreader = csv.reader(csvData, delimiter=',', quotechar='|')
for row in spamreader:
if row.__len__ > 0:
#search the string from index 0 to 2 and if these substrings(@ ,'[\'%' , '[\'@') are not found, than print the row
if (str(row).find('@',0,1) & str(row).find('[\'%',0,2) & str(row).find('[\'@',0,2) != 1):
print str(row)
loadCSVfile('C:/Users/anaim/Desktop/Data Mining/OneR/iris.arff')
actual output:
['% 1. Title: Iris Plants Database']
['% ']
['% 2. Sources:']
['% (a) Creator: R.A. Fisher']
['% (b) Donor: Michael Marshall (MARSHALL%[email protected])']
['% (c) Date: July', ' 1988']
['% ']
[]
['@RELATION iris']
[]
['@ATTRIBUTE sepallength\tREAL']
['@ATTRIBUTE sepalwidth \tREAL']
['@ATTRIBUTE petallength \tREAL']
['@ATTRIBUTE petalwidth\tREAL']
['@ATTRIBUTE class \t{Iris-setosa', 'Iris-versicolor', 'Iris-virginica}']
[]
['@DATA']
['5.1', '3.5', '1.4', '0.2', 'Iris-setosa']
['4.9', '3.0', '1.4', '0.2', 'Iris-setosa']
['4.7', '3.2', '1.3', '0.2', 'Iris-setosa']
['4.6', '3.1', '1.5', '0.2', 'Iris-setosa']
['5.0', '3.6', '1.4', '0.2', 'Iris-setosa']
['5.4', '3.9', '1.7', '0.4', 'Iris-setosa']
['4.6', '3.4', '1.4', '0.3', 'Iris-setosa']
['5.0', '3.4', '1.5', '0.2', 'Iris-setosa']
Desired output:
['5.1', '3.5', '1.4', '0.2', 'Iris-setosa']
['4.9', '3.0', '1.4', '0.2', 'Iris-setosa']
['4.7', '3.2', '1.3', '0.2', 'Iris-setosa']
['4.6', '3.1', '1.5', '0.2', 'Iris-setosa']
['5.0', '3.6', '1.4', '0.2', 'Iris-setosa']
['5.4', '3.9', '1.7', '0.4', 'Iris-setosa']
['4.6', '3.4', '1.4', '0.3', 'Iris-setosa']
['5.0', '3.4', '1.5', '0.2', 'Iris-setosa']
Answer: To test if a row was empty, just use it in a boolean context; empty lists are
false.
To test if a string starts with some specific characters, use
`str.startswith()`, which can take either a single string or a tuple of
strings:
import csv
def loadCSVfile (path):
with open(path, 'rb') as csvData:
spamreader = csv.reader(csvData, delimiter=',', quotechar='|')
for row in spamreader:
if row and not row[0].startswith(('%', '@')):
print row
Because you are really testing for fixed-width character strings, you can also
just slice the first column and test with `in` against a sequence; a set would
be most efficient:
def loadCSVfile (path):
ignore = {'@', '%'}
with open(path, 'rb') as csvData:
spamreader = csv.reader(csvData, delimiter=',', quotechar='|')
for row in spamreader:
if row and not row[0][:1] in ignore:
print row
Here the `[:1]` slice notation returns the first character of the `row[0]`
column (or an empty string if that first column is empty).
I used the open file object as a context manager (`with ... as ...`) so that
Python automatically closes the file for us when the code block is done (or an
exception is raised).
You should never call double-underscore methods ("dunder" methods, or special
methods) directly, the proper API call would be `len(row)` instead.
Demo:
>>> loadCSVfile('/tmp/iris.arff')
['5.1', '3.5', '1.4', '0.2', 'Iris-setosa']
['4.9', '3.0', '1.4', '0.2', 'Iris-setosa']
['4.7', '3.2', '1.3', '0.2', 'Iris-setosa']
['4.6', '3.1', '1.5', '0.2', 'Iris-setosa']
['5.0', '3.6', '1.4', '0.2', 'Iris-setosa']
['5.4', '3.9', '1.7', '0.4', 'Iris-setosa']
|
Vertical text in Tkinter Canvas
Question: Is there a way to draw vertical text with the Tkinter library? (Python recommended)
textID = w1.create_text(5, 5, anchor="nw")
w1.itemconfig(textID, text = "This is some text")
Answer: If you are asking whether
[`tkinter.Canvas.create_text`](http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.create_text-
method) has something like this:
textID = w1.create_text(5, 5, anchor="nw", orient=tkinter.VERTICAL)
then the answer is no. The `create_text` method can only create horizontal
text.
* * *
However, you _can_ use
[`str.join`](http://docs.python.org/3/library/stdtypes.html#str.join) to
create vertical text:
from tkinter import Tk, Canvas
root = Tk()
canvas = Canvas()
canvas.grid()
canvas.create_text((10, 5), text="\n".join("This is some text"), anchor="nw")
root.mainloop()
Example: (screenshot omitted; each character of the text is rendered on its own
line.)
While this may not be as elegant as simply setting an option on the
`create_text` method, it does work.
|
Sqlite3 / python - Export from sqlite to csv text file does not exceed 20k
Question: I am attempting to export a sqlite table to a text file and I found
some great help at this site. It works great for smaller outputs, but once the
output reaches around 20k it appears to be cut off.
# first attempt was:
Mark Bells UniCodeWriter as found in [It is possible export table sqlite3
table to csv or similiar?](http://stackoverflow.com/questions/4264379/it-is-
possible-export-table-sqlite3-table-to-csv-or-similiar)
My table has 15 columns; I just listed 5 here to make it easier to read.
writer = UnicodeWriter(open("Export8.csv", "wb"))
writer.writerow(["RunID","JobNumber","StartTime","EndTime","Period"])
writer.writerows(results)
# second attempt was:
response = cursor.execute("SELECT RunID, JobNumber, StartTime, EndTime, strftime('%s',substr(endtime,1,19)) - strftime('%s',substr(starttime,1,19)) FROM tblTest WHERE RunID <>0")
strfile = open('_output1.csv','wb')
for row in response:
print >> strfile,row
# third attempt was:
strfile = open('_output3.csv','wb')
while True:
row = cursor.fetchone()
if row == None:
break
print >> strfile,row
# 4th attempt/test:
response = cursor.execute("SELECT RunID, JobNumber, StartTime, EndTime, Period FROM tblTest WHERE RunID <>0")
print response
# Result
In attempt 1: I get an output of 183 full records and the very first column of
the 184th record.
In attempts 2 and 3: I get an output of 181 full records and some columns of
the 182nd.
In attempt 4: I get all my data on the screen.
When I check the sqlite database I see 205 records. I am aware that I can just
output 100 lines at a time, but I am wondering why I am not getting all my
rows in the output.
Answer: You can try using [pandas](http://pandas.pydata.org/) to load the SQL
data and then dump it into a csv. You'd have to install the dependencies
(notably NumPy) to use it, though. It's really simple then:
import sqlite3
import pandas.io.sql as sql
con = sqlite3.connect('database.db')
table = sql.read_frame('select * from some_table', con)
table.to_csv('output.csv')
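Incidentally, the truncation in the original attempts looks like an output file
that was never flushed or closed (a guess at the root cause, independent of the
pandas approach); a `with` block would rule that out:
with open('_output1.csv', 'wb') as strfile:
    for row in response:
        print >> strfile, row
# the file is flushed and closed automatically when the block exits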
|
MATLAB twice as fast as Numpy
Question: I am an engineering grad student currently making the transition from MATLAB
to Python for the purposes of numerical simulation. I was under the impression
that for basic array manipulation, Numpy would be as fast as MATLAB. However,
it appears for two different programs I write that MATLAB is a little under
twice as fast as Numpy. The test code I am using for Numpy (Python 3.3) is:
import numpy as np
import time
a = np.random.rand(5000,5000,3)
tic = time.time()
a[:,:,0] = a[:,:,1]
a[:,:,2] = a[:,:,0]
a[:,:,1] = a[:,:,2]
toc = time.time() - tic
print(toc)
Whereas for MATLAB 2012a I am using:
a = rand(5000,5000,3);
tic;
a(:,:,1) = a(:,:,2);
a(:,:,3) = a(:,:,1);
a(:,:,2) = a(:,:,3);
toc
The algorithm I am using is the one used on a NASA
[website](https://modelingguru.nasa.gov/docs/DOC-1762) comparing Numpy and
MATLAB. The website shows that Numpy surpasses MATLAB in terms of speed for
this algorithm. Yet my results show a 0.49 s simulation time for Numpy and a
0.29 s simulation time for MATLAB. I also have run a Gauss-Seidel solver on
both Numpy and Matlab and I get similar results (16.5 s vs. 9.5 s)
I am brand new to Python and am not extremely literate in terms of
programming. I am using the WinPython 64 bit Python distribution but have also
tried Pythonxy to no avail.
One thing I have read which should improve performance is building Numpy using
MKL. Unfortunately I have no idea how to do this on Windows. Do I even need to
do this?
Any suggestions?
Answer: That comparison ends up being apples to oranges due to caching, because it is
more efficient to transfer or do some work on contiguous chunks of memory.
This particular benchmark is memory bound, since in fact no computation is
done, and thus the percentage of cache hits is key to achieve good
performance.
Matlab lays out the data in column-major order (Fortran order), so `a(:,:,k)` is
a contiguous chunk of memory, which is fast to copy.
Numpy defaults to row-major order (C order), so in `a[:,:,k]` there are big
jumps between elements, and that slows down the memory transfer. Actually, the
data layout can be chosen. On my laptop, creating the array with `a =
np.asfortranarray(np.random.rand(5000,5000,3))` leads to a 5x speed up (1 s vs
0.19 s).
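A sketch of the same benchmark with the layout changed (only the array
construction differs from the code in the question):
import numpy as np
import time
a = np.asfortranarray(np.random.rand(5000, 5000, 3)) # column-major, like MATLAB
tic = time.time()
a[:, :, 0] = a[:, :, 1]
a[:, :, 2] = a[:, :, 0]
a[:, :, 1] = a[:, :, 2]
print(time.time() - tic) # each a[:, :, k] slice is now one contiguous block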
This result should be very similar both for numpy-MKL and plain numpy because
MKL is a fast LAPACK implementation and here you're not calling any function
that uses it (MKL definitely helps when solving linear systems, computing dot
products...).
I don't really know what's going on on the Gauss Seidel solver, but some time
ago I wrote an answer to a question titled [Numpy running at half the speed of
MATLAB](http://stackoverflow.com/questions/16178471/numpy-running-at-half-the-
speed-of-matlab) that talks a little bit about MKL, FFT and Matlab's JIT.
|
Python group and splice: splicing the result returned from itertools.groupby
Question: I am trying to read a csv file using numpy genfromtxt into a structured array.
I plan to sort it and then use groupby to separate the file into groups based
on the string values of one of the columns. Finally, I will splice the columns
from each group for additional processing.
Here is a small example where I want to then have a specific column returned
for each of the groups.
import numpy as np
from itertools import groupby
food1 = [[" vegetable", "tomato"], [" vegetable", "spinach"], [" fruit", "watermelon"], [" fruit", "grapes"], [" meat", "beef"]]
for key, group in groupby(food1, lambda x: x[0]):
print key
group[:1]
# In the line above, TypeError: 'itertools._grouper' object is unsubscriptable, I have tried it with food1 or food2
for thing in group:
print key + ": " + thing[1];
print " "
The output I would like is several arrays of the second column's values,
grouped by the first column's values.
So vegetable: ["tomato", "spinach"], fruit: ["watermelon", "grapes"] ...
etc.
I tried to splice the group returned from groupby, but as it is an iterator, I
get TypeError: 'itertools._grouper' object is unsubscriptable.
I know I could splice the data loaded from genfromtxt, but it is the
combination of grouping first and then splicing that is giving me trouble.
data = np.genfromtxt("file.txt", delimiter=',', skiprows=3)
# splicing a column from the ndarray read from the csv file
column2 = data[:,2];
Any other ideas how I could accomplish this group-then-splice?
Thanks.
Answer: I think you are trying to do this:
from itertools import groupby
food1 = [[" vegetable", "tomato"], [" vegetable", "spinach"], [" fruit", "watermelon"], [" fruit", "grapes"], [" meat", "beef"]]
data={}
for key, group in groupby(sorted(food1), key=lambda x: x[0]):
data[key.strip()]=[v[1] for v in group]
data then is:
{'vegetable': ['tomato', 'spinach'],
'fruit': ['watermelon', 'grapes'],
'meat': ['beef']}
|
python csv list by rows instead of columns
Question: I have a script that takes the final redirection URL and saves it
into a CSV file.
The script writes the codes down one column, for example A1, then A3, then A5.
How can I make it write the codes across a row instead, for example A1 B1 C1 D1?
Please see [this](http://i.imgur.com/Gl5jdrf.jpg): the red color is what I
want; the blue color is the current result, which I don't want (the list goes
down one column A1 A3 A5, and there are spaces between every cell!).
This is my final script:
import urllib2
import csv
import sys
url = 'http://www.test.com'
u = urllib2.urlopen(url)
localfile = open('C:\\test\\file.csv', 'a')
writer = csv.writer(localfile)
writer.writerow([u.geturl()])
localfile.close()
Answer: Why not just create CSV by yourself if it will have only one row?
import urllib2
url = 'http://www.google.com'
u = urllib2.urlopen(url)
localFile = open('C:\\file.csv', 'ab')
localFile.write(u.geturl() + ",")
localFile.close()
|
Calculating power for Decimals in Python
Question: I want to calculate power for `Decimal` in Python like:
from decimal import Decimal
Decimal.power(2,2)
The above should return `Decimal('4')`.
How can I calculate powers of `Decimal`s?
EDIT: This is what I did:
y = Decimal('10')**(x-deci_x+Decimal(str(n))-Decimal('1'))
x and deci_x are of Decimal type, but the above expression throws this error:
decimal.InvalidOperation: x ** (non-integer)
Stacktrace:
Traceback (most recent call last):
File "ha.py", line 28, in ?
first_k_1=first_k(2,n-1,k)
File "ha.py", line 18, in first_k
y = Decimal('10')**(x-deci_x+Decimal(str(n))-Decimal('1'))
File "/usr/lib64/python2.4/decimal.py", line 1709, in __pow__
return context._raise_error(InvalidOperation, 'x ** (non-integer)')
File "/usr/lib64/python2.4/decimal.py", line 2267, in _raise_error
raise error, explanation
Answer: You can calculate powers using `**`:
2**3 # yields 8
a = Decimal(2)
a**2 # yields Decimal(4)
Following your update, it seems okay for me (note that your traceback shows
Python 2.4, whose decimal module only supports integer exponents; Python 2.6
and later also accept non-integer exponents):
>>> x = Decimal(2)
>>> deci_x = Decimal(1)
>>> n=4
>>> y = Decimal('10')**(x-deci_x+Decimal(str(n))-Decimal('1'))
>>> y
Decimal('10000')
|
What's the fastest way to create an API over a RESTful JSON interface for MongoDB?
Question: My technical expertise is restricted to **JavaScript** and **Python**.
How can I create an API for MongoDB that I may use with my client-side
JavaScript MVC framework?
Answer: If you are working with Django I'd recommend a stack consisting of an API
library plus a decent MongoDB schema layer (if necessary).
For instance:
* Define your models with MongoEngine (<http://mongoengine.org/>)
* Structure your API with Django-tastypie (<https://github.com/toastdriven/django-tastypie>)
That being said, I feel like MongoDB is not a perfect match for Django. Django
provides a lot of facilities like database syncing, which is set in place to
work around the very same issues NoSQL databases readily solve.
Some of the extra features that Django provides, like the Admin UI, may not even
work out-of-the-box with NoSQL. I'm aware there is Django-nonrel which is
trying to bridge this gap (<https://github.com/django-nonrel>), but to be
honest, I'm not sure if it's very stable or if it's still being developed.
A more approachable alternative might be to simply use Flask
(<http://flask.pocoo.org/>) with MongoEngine and Flask-RESTful
(<https://github.com/twilio/flask-restful>).
A proof-of-concept structure for such an application:
from flask import Flask
from flask.ext import restful
from mongoengine import connect, Document, StringField
# MongoEngine model
class User(Document):
email = StringField(required=True)
app = Flask(__name__)
api = restful.Api(app)
connect('yourdb') # connect to Mongo
class MyAPI(restful.Resource):
def get(self):
return User.objects
api.add_resource(MyAPI, '/')
if __name__ == '__main__':
app.run(debug=True)
etc.
|
Pexpect throws unicode decode error when make command is run to compile C libraries
Question: I am running make to compile C libraries in a Python project and using
Python 3.3 pexpect for the automation part. The output of the make command is
read in chunks by pexpect, and for one such chunk it throws the following error
when pexpect tries to convert (Python 3 bytes) to (Python 3's str) type. The
main problem is that this issue is intermittent, not occurring frequently.
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 1998-1999:
unexpected end of data
--> The sample code below shows that when the data contains a multibyte
character (i.e. a special character or any unicode data), pexpect fails to
decode when it is processing partial data of a multibyte character.
#!/usr/bin/python
# -*- coding: utf-8 -*-
from base import pexpect
MAX_READ_CHUNK = 8
def run(cmd):
child = pexpect.spawn(cmd, maxread=MAX_READ_CHUNK)
while True:
i = child.expect([pexpect.EOF,pexpect.TIMEOUT])
if child.before:
print(child.before)
if i == 0: # EOF
break
elif i == 1: # TIMEOUT
continue
child.close()
return child.exitstatus
############## Main ################
data='“HELLO WORLD”'
#i.e. data = b'\xe2\x80\x9cabcd\xe2\x80\x9d'
print("Data in readable form = %s "%data)
print("Data in bytes = %s \n\n"%data.encode('utf-8'))
run("echo %s"%data)
Following Traceback error is coming:
Data in readable form = “HELLO WORLD”
Data in bytes = b'\xe2\x80\x9cHELLO WORLD\xe2\x80\x9d'
_cast_unicode() enc=[utf-8] s=[b'\xe2\x80\x9cHELLO']
_cast_unicode() enc=[utf-8] s=[b' WORLD\xe2\x80']
Traceback (most recent call last):
File "test.py", line 33, in <module>
run("echo %s"%data)
File "test.py", line 11, in run
i = child.expect([pexpect.EOF,pexpect.TIMEOUT])
File "/home/test/Downloads/base/pexpect.py", line 1358, in expect
return self.expect_list(compiled_pattern_list, timeout, searchwindowsize)
File "/home/test/Downloads/base/pexpect.py", line 1372, in expect_list
return self.expect_loop(searcher_re(pattern_list), timeout, searchwindowsize)
File "/home/test/Downloads/base/pexpect.py", line 1425, in expect_loop
c = self.read_nonblocking (self.maxread, timeout)
File "/home/test/Downloads/base/pexpect.py", line 1631, in read_nonblocking
return super(spawn, self).read_nonblocking(size=size, timeout=timeout)\
File "/home/test/Downloads/base/pexpect.py", line 868, in read_nonblocking
s2 = self._cast_buffer_type(s)
File "/home/test/Downloads/base/pexpect.py", line 1614, in _cast_buffer_type
return _cast_unicode(s, self.encoding)
File "/home/test/Downloads/base/pexpect.py", line 156, in _cast_unicode
return s.decode(enc)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 6-7:
unexpected end of data
When MAX_READ_CHUNK value is changed to 9 in above code, it is working fine.
# Output When "MAX_READ_CHUNK = 9"
Data in readable form = “HELLO WORLD”
Data in bytes = b'\xe2\x80\x9cHELLO WORLD\xe2\x80\x9d'
_cast_unicode() enc=[utf-8] s=[b'\xe2\x80\x9cHELLO ']
_cast_unicode() enc=[utf-8] s=[b'WORLD\xe2\x80\x9d\r']
_cast_unicode() enc=[utf-8] s=[b'\n']
“HELLO WORLD”
How can I handle this "UnicodeDecodeError: 'utf-8' codec can't decode bytes in
position: unexpected end of data" in pexpect during make?
Answer: What's happening is that pexpect fails to process bytes of a Unicode code
point that span different buffers; in your example, the `\xe2\x80\x9d` can't
be decoded because the `\x9d` byte is missing when the chunk size is a
multiple of 8.
Unfortunately I'm not familiar enough with pexpect to know how to solve this,
but I can imagine a few ways:
* Try setting `maxread` to 1 (unbuffered).
* (This is dirty) Catch the exception, buffer the output, and process it along with the next output window; see the sketch after this list.
* If you are processing buffers of known size, set `maxread` to the buffer size.
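The second option is essentially what the incremental decoders in the `codecs`
module do for you: they hold back a trailing partial multibyte sequence until
the rest of it arrives. A small demonstration with the chunks from the question:
import codecs
decoder = codecs.getincrementaldecoder('utf-8')()
for chunk in (b'\xe2\x80\x9cHELLO', b' WORLD\xe2\x80', b'\x9d'):
    # incomplete trailing bytes are buffered until the next chunk completes them
    print(decoder.decode(chunk))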
|
Printing all elements of graph one by one in python
Question: I tried to traverse a graph with the algorithm below in Python. What
changes should I make if I want to print all elements of the graph one by one,
i.e. traverse the whole graph?
Any help will be very much appreciated. Thanks.
graph={'A': ['B', 10, 'B', 10, 'B', 10, 'C', 15], 'C': [1001, 'OUT'], 'B':
[1000, 'IN', 1000, 'IN']}
print "Path:",find_all_paths(graph,'A','IN')
def find_all_paths(graph, start, end, path=[]):
path = path + [start]
if start == end:
return [path]
if not graph.has_key(start):
return []
paths = []
for node in graph[start]:
if node not in path:
newpaths = find_all_paths(graph, node, end, path)
for newpath in newpaths:
paths.append(newpath)
return paths
Answer: Your "Graph" is a dictionary, dictionaries in Python are unordered, if you
want to use an [ordered
dictionary](http://docs.python.org/2/library/collections.html#collections.OrderedDict),
you can import it from the
[`collections`](http://stackoverflow.com/questions/17571438/test-case-
execution-order-in-pytest) module.
from collections import OrderedDict
graph = OrderedDict({'A': ['B', 10, 'B', 10, 'B', 10, 'C', 15], 'C': [1001, 'OUT'], 'B': [1000, 'IN', 1000, 'IN']})
Proof that it is ordered:
>>> for key, value in graph.items():
print key, value
A ['B', 10, 'B', 10, 'B', 10, 'C', 15]
C [1001, 'OUT']
B [1000, 'IN', 1000, 'IN']
Notice that since the items were passed in the order "A, C, B", that is the
order they will keep in the OrderedDict.
|
Improve python performance for array operations
Question: I have a python script that reads two tiff images, finds the unique
combinations of values, counts the observations and saves the counts to a txt
file.
You can find the full script at [www.spatial-ecology.net](http://spatial-
ecology.net/dokuwiki/doku.php?id=wiki%3ageotools_uniq)
The result is:
tif1
2 2 3
0 0 3
2 3 3
tif2
2 2 3
3 3 4
1 1 1
result
2 2 2
3 3 1
0 3 2
3 4 1
2 1 1
3 1 2
The script works fine. This is how it is implemented.
1. read line by line (for irows in range(rows):) in order not to load the full image into memory (eventually a flag option can be inserted to read 10 lines at a time)
2. go through the arrays and create a tuple
3. check if the tuple is already stored in the dict()
My question is: what are the tricks in this case to speed up the process?
I tested saving the results in a 2-dimensional array rather than a dict(), but
it slowed down the process. I checked [this
link](http://wiki.python.org/moin/PythonSpeed/PerformanceTips) and maybe the
python map function can improve the speed. Is this the case?
Thanks in advance Giuseppe
Answer: If you want some real advice on performance you need to post the parts of your
code that are _directly_ relevant to what you want to do. If you won't at
least do that there's very little we can do. That said, if you are having
trouble discovering where exactly your inefficiencies are, I would _highly_
recommend using python's `cProfile` module. Usage is as follows:
import cProfile
def foo(*args, **kwargs):
#do something
cProfile.run("foo(*args, **kwargs)")
This will print a detailed time profile of your code that lets you know which
steps of your code take up the most time. Usually it'll be a couple of methods
that either get called far more frequently than they should be, or do some
silly extra processing, leading to a performance bottleneck.
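As a concrete baseline to profile against: counting unique value pairs with
`collections.Counter` and tuple keys is concise and usually fast. A minimal
sketch using the sample rows from the question (Counter is not part of your
current script, just a suggestion):
from collections import Counter
tif1 = [[2, 2, 3], [0, 0, 3], [2, 3, 3]] # sample rows from the question
tif2 = [[2, 2, 3], [3, 3, 4], [1, 1, 1]]
counts = Counter()
for row1, row2 in zip(tif1, tif2):
    counts.update(zip(row1, row2)) # key: the (tif1, tif2) value pair
for (a, b), n in counts.items():
    print a, b, n # reproduces the "result" table above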
|
Looping in Python
Question: I am trying to make a sprite in pygame throw a grenade. What I would
like is for it to move forward a bit and then stop. My problem is getting the
grenade to move forward smoothly. What the following code does is have it move
to the point of interest immediately, not smoothly.
[Grenade Class]
#Grenade Class
class Explosive(pygame.sprite.Sprite):
    def __init__(self, location):
        pygame.sprite.Sprite.__init__(self)
        self.pos = location
        self.image = Nade
        self.rect = self.image.get_rect()
        self.rect.right = self.image.get_rect().right
        self.rect.left = self.image.get_rect().left
        self.rect.top = self.image.get_rect().top
        self.rect.bottom = self.image.get_rect().bottom
        self.rect.center = location

    def move(self):
        if Player.direction == 0:
            self.rect.centery = self.rect.centery - 5
        if Player.direction == 180:
            self.rect.centery = self.rect.centery + 5
        if Player.direction == 90:
            self.rect.centerx = self.rect.centerx + 5
        if Player.direction == 270:
            self.rect.centerx = self.rect.centerx - 5
[Throwing the Grenade(in main loop)]
if event.key == pygame.K_e and grenadeNum > 0:
    Grenade = Explosive([Player.rect.centerx, Player.rect.centery])
    for i in range(1, 10):
        Grenade.move()
Answer: While you do need to update the display, there is a second issue. Right now,
you are not telling it to wait at all, so it will make all ten movements with
no delay. This will be just as abrupt, so you may want to import time and add
a time.sleep() somewhere in there. Also, because the grenade moves in a self-
contained "for" loop, everything else going on will stop while it moves. If
you have a while loop to manage everything, call an "update" function on the
grenades and give them counters so they know whether they should move during
this round. If other sprites need to do things, you can then call their
methods as well. Finally, just to speed up the program, you can update only
the portion of the screen that the sprite moves in. When you put this all
together, it is something along these lines:
import time

grenades = pygame.sprite.RenderUpdates() #initializes a pygame group for sprites

class Explosive(pygame.sprite.Sprite):
    def __init__(self, location):
        pygame.sprite.Sprite.__init__(self)
        self.pos = location
        self.image = Nade
        self.rect = self.image.get_rect()
        self.rect.right = self.image.get_rect().right
        self.rect.left = self.image.get_rect().left
        self.rect.top = self.image.get_rect().top
        self.rect.bottom = self.image.get_rect().bottom
        self.rect.center = location
        self.move_counter = 10 #counter for how many rounds of movement the grenade has left

    def update(self):
        if self.move_counter > 0: #checks that self still has turns left to move
            <insert screen variable name>.fill(<insert background color>, self.rect) #erases self in old location
            if Player.direction == 0:
                self.rect.centery = self.rect.centery - 5
            if Player.direction == 180:
                self.rect.centery = self.rect.centery + 5
            if Player.direction == 90:
                self.rect.centerx = self.rect.centerx + 5
            if Player.direction == 270:
                self.rect.centerx = self.rect.centerx - 5
            <insert screen variable name>.blit(self.image, self.rect) #draws self in new location
            pygame.display.update(self.rect) #refreshes just the area self occupies
            self.move_counter -= 1 #subtracts one from the counter to mark that another turn of moving has passed
while True:
    grenades.update()
    #anything else you want to do in real time here
    time.sleep(0.1)
|
Using python to issue command prompts
Question: I have been teaching myself python over the past few months and am finally
starting to do some useful things.
What I am trying to ultimately do is have a python script that acts as a
queue. That is, I would like to have a folder with a bunch of input files that
another program uses to run calculations (I am a theoretical physicist and do
many computational jobs a day).
The way I must do this now is put all of the input files on the box that has
the computational software. Then I have to convert the dos input files to unix
(dos2unix), following this I must copy the new input file to a file called
'INPUT'. Finally I run a command that starts the job.
All of these tasks are handled in a command prompt. My question is: how do I
interface my program with the command prompt? Then, how can I monitor the
process (which I normally do via cpu usage and the TOP command), and have
python start the next job as soon as the last job finishes.
Sorry for rambling, I just do not know how to control a command prompt from a
script, and then have it automatically 'watch' the job.
Thanks
Answer: The [subprocess](http://docs.python.org/2/library/subprocess.html) module has
many tools for executing system commands in python.
from subprocess import call
call(["ls", "-l"])
[source](http://stackoverflow.com/a/89243/2502012)
call will wait for the command to finish and return its returncode, so you can
call another one afterwards knowing that the previous one has finished.
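If you also want to "watch" the job while it runs, `subprocess.Popen` is the
more flexible interface. A minimal sketch; the job command here is just a
placeholder for whatever actually starts your calculation:

import time
from subprocess import Popen

p = Popen(["./run_job"]) # hypothetical command that starts the calculation
while p.poll() is None:  # poll() returns None while the process is running
    # monitor things here, e.g. check output files or CPU usage
    time.sleep(5)
print "job finished with return code", p.returncode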
`os.system` is an older way to do it, but has fewer tools and isn't
recommended:
import os
os.system('"C:/Temp/a b c/Notepad.exe"')
**edit** FvD left a comment explaining how to "watch" the process below
|
Python selenium error when trying to launch firefox
Question: I am getting an error when trying to open Firefox using Selenium in ipython
notebook. I've looked around and have found similar errors but nothing that
exactly matches the error I'm getting. Anybody know what the problem might be
and how I fix it? I'm using Firefox 22.
The code I typed in was as follows:
from selenium import webdriver
driver = webdriver.Firefox()
The error the code returns is as follows:
WindowsError Traceback (most recent call last)
<ipython-input-7-fd567e24185f> in <module>()
----> 1 driver = webdriver.Firefox()
C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\webdriver.pyc in __init__(self, firefox_profile, firefox_binary, timeout, capabilities, proxy)
56 RemoteWebDriver.__init__(self,
57 command_executor=ExtensionConnection("127.0.0.1", self.profile,
---> 58 self.binary, timeout),
59 desired_capabilities=capabilities)
60 self._is_remote = False
C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\extension_connection.pyc in __init__(self, host, firefox_profile, firefox_binary, timeout)
45 self.profile.add_extension()
46
---> 47 self.binary.launch_browser(self.profile)
48 _URL = "http://%s:%d/hub" % (HOST, PORT)
49 RemoteConnection.__init__(
C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\firefox_binary.pyc in launch_browser(self, profile)
45 self.profile = profile
46
---> 47 self._start_from_profile_path(self.profile.path)
48 self._wait_until_connectable()
49
C:\Anaconda\lib\site-packages\selenium\webdriver\firefox\firefox_binary.pyc in _start_from_profile_path(self, path)
71
72 Popen(command, stdout=PIPE, stderr=STDOUT,
---> 73 env=self._firefox_env).communicate()
74 command[1] = '-foreground'
75 self.process = Popen(
C:\Anaconda\lib\subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
677 p2cread, p2cwrite,
678 c2pread, c2pwrite,
--> 679 errread, errwrite)
680
681 if mswindows:
C:\Anaconda\lib\subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
894 env,
895 cwd,
--> 896 startupinfo)
897 except pywintypes.error, e:
898 # Translate pywintypes.error to WindowsError, which is
WindowsError: [Error 2] The system cannot find the file specified
Answer: Try specifying your Firefox binary when initializing `Firefox()`:
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
binary = FirefoxBinary('path/to/binary')
driver = webdriver.Firefox(firefox_binary=binary)
The default path FirefoxDriver looks for is `%PROGRAMFILES%\Mozilla
Firefox\firefox.exe`. See
[FirefoxDriver](https://code.google.com/p/selenium/wiki/FirefoxDriver)
Or add your path of Firefox binary to Windows'
[PATH](http://www.computerhope.com/issues/ch000549.htm#0).
|
Search an id in python with BeautifulSoup
Question: I need help with a problem... I am writing code to get the content of a tag,
but what can I do to get the content if the tag has an id?
from bs4 import BeautifulSoup
import urllib2
code = '<span class="vi-is1-prcp" id="v4-27"> 15,00 EUR </span>'
soup = BeautifulSoup(code)
price = soup.find('a', id='v4-27') # <-- PROBLEM
print price
Answer: If that is the HTML code, then you should replace the `'a'` tag with a `'span'`
tag. It should look something like this...
...
price = soup.find('span', id="v4-27")
print price # optional: price.string will give you just the 15,00 EUR
            # instead of the entire html line
|
Multiple, specific, regex substitutions in Python
Question: What I would like to do is to make specific substitions in a given text. For
example, '<' should be changed to '[', '>' to ']', and so forth. It is similar
to the solution given here: [How can I do multiple substitutions using regex
in python?](http://stackoverflow.com/questions/15175142/how-can-i-do-multiple-
substitutions-using-regex-in-python), which is
import re

def multiple_replace(dict, text):
    # Create a regular expression from the dictionary keys
    regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys())))
    # For each match, look-up corresponding value in dictionary
    return regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], text)
Now, the problem is that I would also like to replace regex-matched patterns.
For example, I want to replace 'fo.+' with 'foo' and 'ba[rz]*' with 'bar'.
Removing the map(re.escape in the code helps, so that the regex actually
matches, but I then receive key errors, because, for example, 'barzzzzzz'
would be a match, and something I want to replace, but 'barzzzzzz' isn't a key
in the dictionary, the literal string 'ba[rz]*' is. How can I modify this
function to work?
(On an unrelated note, where do these 'foo' and 'bar' things come from?)
Answer: Just do multiple `sub` calls.
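A minimal sketch of that approach, reusing the `multiple_replace` name from the
question but taking (pattern, replacement) pairs instead of a dict:

import re

def multiple_replace(pairs, text):
    # apply each substitution in turn; later patterns see earlier results
    for pattern, repl in pairs:
        text = re.sub(pattern, repl, text)
    return text

print multiple_replace([(r'fo.+', 'foo')], 'fond of it')   # foo
print multiple_replace([(r'ba[rz]*', 'bar')], 'barzzzzzz') # bar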
On an unrelated note, Jargon File to the rescue: [Metasyntactic
variables](http://www.catb.org/jargon/html/M/metasyntactic-variable.html),
[foo](http://www.catb.org/jargon/html/F/foo.html).
|
I am having difficulty using mincemeat in python for map-reduce to calculate wordcount of different files
Question: Here is the code:
import glob
import mincemeat
import re
text_files = glob.glob('finalcount/1/*')
def file_contents(file_name):
    f = open(file_name)
    try:
        return f.read()
    finally:
        f.close()

source = dict((file_name, file_contents(file_name))
              for file_name in text_files)

def mapfn(key, value):
    for line in value.splitlines():
        list1 = [ ]
        for temp in re.split('[\t]+',line):
            list1.append(temp)
        x = int(list1[1].strip());
        yield [list1[0],x]

def reducefn(key, value):
    return key, sum(value)

s = mincemeat.Server()
s.datasource = source
s.mapfn = mapfn
s.reducefn = reducefn
results = s.run_server(password="wola")
print results
This code is supposed to compute word counts of multiple files. But it keeps
returning an error:
error: uncaptured python exception, closing channel <__main__.Client connected at 0x25c1990>
(<type 'exceptions.ValueError'>:invalid literal for int() with base 10: ''
[C:\Python27\lib\asyncore.py|read|83]
[C:\Python27\lib\asyncore.py|handle_read_event|444]
[C:\Python27\lib\asynchat.py|handle_read|140]
[mincemeat.py|found_terminator|97]
[mincemeat.py|process_command|195]
[mincemeat.py|call_mapfn|171]
[projcount.py|mapfn|21])
The input files that I am working on look like this; I want to collect the
words and sum the numbers next to them across the different files.
fawn 24
gai 1
nunnery 11
sowell 3
sonja 29
woods 591
clotted 1
spiders 84
hanging 522
After replacing `re.split` with `line.split()`, I got this error.
error: uncaptured python exception, closing channel <__main__.Client connected at 0x2531990>
(<type 'exceptions.IndexError'>:list index out of range
[C:\Python27\lib\asyncore.py|read|83]
[C:\Python27\lib\asyncore.py|handle_read_event|444]
[C:\Python27\lib\asynchat.py|handle_read|140]
[mincemeat.py|found_terminator|97]
[mincemeat.py|process_command|195]
[mincemeat.py|call_mapfn|171]
[projcount.py|mapfn|21])
Answer: I got this error on a different occasion; I figured out that the problem
comes when you are using Python 3.3. I removed 3.3 and installed
2.7.5 (<http://python.org/download/>) and it works fine now. :)
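As a side note, the ValueError (int('') failing) and the later IndexError in
the tracebacks are also exactly what blank or malformed lines in the input
files would produce. A minimal sketch of a mapfn that guards against them;
this is an assumption about the inputs, not something I can verify:

def mapfn(key, value):
    for line in value.splitlines():
        parts = line.split()
        if len(parts) < 2: # skip blank or malformed lines
            continue
        yield parts[0], int(parts[1])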
|
What does hash do in python?
Question: I saw an example of code where the `hash` function is applied to a tuple. As a
result it returns a negative integer. I wonder what this function does.
Google does not help. I found a page that explains how a hash is calculated,
but it does not explain why we need this function.
Answer: [A hash is a fixed-size integer that identifies a particular
value](http://en.wikipedia.org/wiki/Hash_function). Each value needs to have
its own hash, so for the same value you will get the same hash even if it's
not the same object.
>>> hash("Look at me!")
4343814758193556824
>>> f = "Look at me!"
>>> hash(f)
4343814758193556824
Hash values need to be created in such a way that the resulting values are
evenly distributed to reduce the number of hash collisions you get. Hash
collisions are when two different values have the same hash. Therefore,
relatively small changes often result in very different hashes.
>>> hash("Look at me!!")
6941904779894686356
These numbers are very useful, as they enable quick look-up of values in a
large collection of values. Examples of they use is in Python's `set` and
`dict`. In a `list`, if you want to check if a value is in the list, with `if
x in values:`, Python needs to go through the whole list and compare `x` with
each value in the list `values`. This can take a long time for a long `list`.
In a `set`, Python keeps track of each hash, and when you do `if x in
values:`, Python will get the hash value for `x`, look that up in an internal
structure and then only compare `x` with the values that have the same hash as
`x`.
The same methodology is used for dictionary lookup. This makes lookup in `set`
and `dict` very fast, while lookup in `list` is slow. It also means you can
have non-hashable objects in a `list`, but not in a `set` or as keys in a
`dict`. The typical example of a non-hashable object is any object that is
mutable, i.e., one that you can change. If you have a mutable object it should
not be hashable, as its hash would then change over its lifetime, which would
cause a lot of confusion, as an object could end up under the wrong hash value
in a dictionary.
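A quick interpreter demo of that last point (the exact hash value will differ
on your machine):

>>> hash((1, 2, 3)) # tuples are immutable, so they are hashable
2528502973977326415
>>> hash([1, 2, 3]) # lists are mutable, so they are not
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'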
Note that the hash of a value only needs to be the same for one run of Python.
In Python 3.3 it will in fact change for every new run of Python:
$ /opt/python33/bin/python3
Python 3.3.2 (default, Jun 17 2013, 17:49:21)
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> hash("foo")
1849024199686380661
>>>
$ /opt/python33/bin/python3
Python 3.3.2 (default, Jun 17 2013, 17:49:21)
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> hash("foo")
-7416743951976404299
This is to make it harder to guess what hash value a certain string will have,
which is an important security feature for web applications etc.
Hash values should therefore not be stored permanently. If you need to use
hash values in a permanent way you can take a look at the more "serious" types
of hashes, [cryptographic hash
functions](http://en.wikipedia.org/wiki/Cryptographic_hash_function), that can
be used for making verifiable checksums of files etc.
|
Drawing text in python
Question: I have the following code (derived from [this
answer](http://stackoverflow.com/a/17556210/35070)) with my attempt to add the
text drawing of numbers. It doesn't work: it doesn't create an image, and the
cmd prompt closes too fast to see which error it is throwing.
#!/usr/bin/env python
import os.path
import sys
from time import strftime
import Image
import ImageDraw
import ImageFont
row_size = 3
margin = 3
def generate_montage(filenames, output_fn):
    images = [Image.open(filename) for filename in filenames]
    width = max(image.size[0] + margin for image in images)*row_size
    height = sum(image.size[1] + margin for image in images)
    montage = Image.new(mode='RGBA', size=(width, height), color=(0,0,0,255))
    image_font = ImageFont.truetype('font/Helvetica.ttf', 18)
    draw = ImageDraw.Draw(montage)
    max_x = 0
    max_y = 0
    offset_x = 0
    offset_y = 0
    for i,image in enumerate(images):
        montage.paste(image, (offset_x, offset_y))
        text_coords = offset_x + image.size[0] - 45, offset_y + 120
        draw.text(text_coords, '#{0}'.format(i+1), font=image_font)
        max_x = max(max_x, offset_x + image.size[0])
        max_y = max(max_y, offset_y + image.size[1])
        if i % row_size == row_size-1:
            offset_y = max_y + margin
            offset_x = 0
        else:
            offset_x += margin + image.size[0]
    montage = montage.crop((0, 0, max_x, max_y))
    montage.save(output_fn)

if __name__ == '__main__':
    basename = strftime("Montage %Y-%m-%d at %H.%M.%S.png")
    exedir = os.path.dirname(os.path.abspath(sys.argv[0]))
    filename = os.path.join(exedir, basename)
    generate_montage(sys.argv[1:], filename)
Answer: You can open a command line window by pressing `Win`+`R`, typing `cmd` and
hitting `Enter`. Once in there, you can execute your program and still see its
output. Another option would be to wrap the `generate_montage` call, like this:
try:
    generate_montage(sys.argv[1:], filename)
except:
    import traceback,time
    traceback.print_exc()
    time.sleep(600)
In any case, the most likely problem is that the font isn't being found, as
you're loading it from a directory relative to the cwd. Pass in the base
directory, like this:
base = os.path.dirname(os.path.abspath(__file__))
try:
    fn = os.path.join(base, 'font', 'Helvetica.ttf')
    image_font = ImageFont.truetype(fn, 18)
except:
    try:
        fn = os.path.join(base, 'font', 'Helvetica-18.pil')
        image_font = ImageFont.load(fn)
    except:
        image_font = ImageFont.load_default()
The reason that TTF font loading [doesn't
work](http://i.imgur.com/tqgUVfP.png) is probably that [your PIL has
been compiled without TTF support](http://stackoverflow.com/a/4011715/35070).
|
Port Python virtualenv to another system
Question: I am using many python packages like numpy, bottleneck, h5py, ... for my daily
work on my computer. Since I am root on this machine it is no problem to
install these packages. However I would like to use my "environment" of
different packages also on a server machine where I only have a normal user
account. So I thought about creating a virtual environment (with virtualenv)
on my machine by installing all needed packages in there. Then I just copy the
whole folder to the server and can run everything from it?
My machine uses Fedora 19 whereas the server uses Ubuntu. Is this a problem? I
could not find any information on how to move such a virtual environment to
another system. The reason I would like to create the virtual environment on
my machine first is that there are a lot of tools missing on the server like
python-dev, so I can't compile numpy for instance.
I looked into Anaconda and Enthought Python distributions, but they don't
include a couple of packages I need. Also, shouldn't there be a completely
"open" way to solve this problem?
Moving the virtual environment to the server failed: it complains about some
missing files when I import the packages. This is probably not surprising...
Answer: You shouldn't move your virtualenv since it is essentially linked to your
system python and the binary won't work on other machines.
However... you can export a list of installed packages and install them in
another virtualenv through a `requirements.txt` file.
Basically, what I usually do with most of my projects:
# Generate a requirements file:
pip freeze > requirements.txt
On the new machine:
# This uses virtualenvwrapper, but you can do it without as well
mkproject my_project_name
git clone git://..../ .
pip install -r requirements.txt
|
Can I get a list of the variables that reference an other in Python 2.7?
Question: Imagine I have:
X = [0,1]
Y = X
Z = Y
Is there a function like referenced_by(X) that returns something like `['Y',
'Z']`? And a function like points_to(Y) that returns `'X'`?
I know there is `is` to test whether two objects are the same; I just would
like a quick way to get the names, though.
Answer: Yes, and no. You can get a list of _global_ variables:
for name, val in globals().items():
    if val is obj:
        yield name
You can also get a list of _local_ variables:
for name, val in locals().items():
    if val is obj:
        yield name
However, with this you will miss all variables in contexts other than local to
your function or global to the module. You can find variables in calling
contexts with frame-magic, but you won't be able to find anything that is
global to other modules, for example.
What you would use this for, I don't know.
You will also not find any attributes that reference the object, but
attributes aren't variables so maybe that's OK.
You can get all objects that reference your object though. And that will
include the globals and locals for all the functions. But you can't get the
name of the variables in that case. You can do
>>> import gc
>>> gc.get_referrers(obj)
To get a list of all objects referencing the object `obj`. Once again this is
pretty useless. :-)
If you want the names you can look up the keys in the cases that the referrer
is a dictionary or a stack frame:
import gc
import types

def find_ref_names(obj):
    for ref in gc.get_referrers(obj):
        if isinstance(ref, types.FrameType):
            look_in = [ref.f_locals, ref.f_globals]
        elif isinstance(ref, dict):
            look_in = [ref]
        else:
            continue
        for d in look_in:
            for k, v in d.items():
                if v is obj:
                    yield k

def main():
    a = "heybaberiba"
    b = a
    c = b
    print list(find_ref_names(b))

if __name__ == '__main__':
    main()
This will print:
['a', 'c', 'b', 'obj']
But as you don't know which context the variables `a`, `b`, `c` and `obj` are
defined in, it's yet again pretty useless. As an example, move the definition
of `a` to the module level and you get this result:
['c', 'b', 'a', 'obj', 'a', 'a']
Where one of these `a`s is the global one, and the others are copies in local
contexts.
As for your second question:
> And a function like points_to(Y) that returns 'X'?
That's the same function. Both `X` and `Y` are just names pointing to the same
object, in this case a list. `X` is no different from `Y`, and `Y` does not
point to `X`. `Y` points to `[0,1]` and so does `X`.
|
Python Function returns wrong value
Question:
periodsList = []
su = '0:'
Su = []
sun = []
SUN = ''
I'm formatting timetables by converting
extendedPeriods = ['0: 1200 - 1500',
'0: 1800 - 2330',
'2: 1200 - 1500',
'2: 1800 - 2330',
'3: 1200 - 1500',
'3: 1800 - 2330',
'4: 1200 - 1500',
'4: 1800 - 2330',
'5: 1200 - 1500',
'5: 1800 - 2330',
'6: 1200 - 1500',
'6: 1800 - 2330']
into `'1200 - 1500/1800 - 2330'`
* su is the day identifier
* Su, sun store some values
* SUN stores the converted timetable
for line in extendedPeriods:
    if su in line:
        Su.append(line)

for item in Su:
    sun.append(item.replace(su, '', 1).strip())

SUN = '/'.join([str(x) for x in sun])
Then I tried to write a function to apply my "converter" to the other days as
well...
def formatPeriods(id, store1, store2, periodsDay):
    for line in extendedPeriods:
        if id in line:
            store1.append(line)
    for item in store1:
        store2.append(item.replace(id, '', 1).strip())
    periodsDay = '/'.join([str(x) for x in store2])
    return periodsDay
But the function returns 12 misformatted strings...
'1200 - 1500', '1200 - 1500/1200 - 1500/1800 - 2330',
Answer: Your function most likely returns accumulated results because the store lists
you pass in are reused between calls, so each call appends to what is already
there. You can avoid the manual bookkeeping with `collections.OrderedDict`; if
order doesn't matter, use `collections.defaultdict` instead:
>>> from collections import OrderedDict
>>> dic = OrderedDict()
>>> for item in extendedPeriods:
...     k,v = item.split(': ')
...     dic.setdefault(k,[]).append(v)
...
>>> for k,v in dic.iteritems():
... print "/".join(v)
...
1200 - 1500/1800 - 2330
1200 - 1500/1800 - 2330
1200 - 1500/1800 - 2330
1200 - 1500/1800 - 2330
1200 - 1500/1800 - 2330
1200 - 1500/1800 - 2330
To access a particular day you can use:
>>> print "/".join(dic['0']) #sunday
1200 - 1500/1800 - 2330
>>> print "/".join(dic['2']) #tuesday
1200 - 1500/1800 - 2330
|
GAE: Exceeded maximum allocated IDs
Question: It seems GAE assigns very high IDs to the models. When I download my entities,
I get very big numbers for some entries. These were autogenerated in the first
place. Downloading them as CSV is no problem, but deleting the existing data
and re-uploading the same data throws an exception.
`Exceeded maximum allocated IDs`
**Trace:**
Traceback (most recent call last):
File "/opt/eclipse/plugins/org.python.pydev_2.7.5.2013052819/pysrc/pydevd.py", line 1397, in <module>
debugger.run(setup['file'], None, None)
File "/opt/eclipse/plugins/org.python.pydev_2.7.5.2013052819/pysrc/pydevd.py", line 1090, in run
pydev_imports.execfile(file, globals, locals) #execute the script
File "/home/kave/workspace/google_appengine/appcfg.py", line 171, in <module>
run_file(__file__, globals())
File "/home/kave/workspace/google_appengine/appcfg.py", line 167, in run_file
execfile(script_path, globals_)
File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 4247, in <module>
main(sys.argv)
File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 4238, in main
result = AppCfgApp(argv).Run()
File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 2396, in Run
self.action(self)
File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 3973, in __call__
return method()
File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 3785, in PerformUpload
run_fn(args)
File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 3676, in RunBulkloader
sys.exit(bulkloader.Run(arg_dict))
File "/home/kave/workspace/google_appengine/google/appengine/tools/bulkloader.py", line 4379, in Run
return _PerformBulkload(arg_dict)
File "/home/kave/workspace/google_appengine/google/appengine/tools/bulkloader.py", line 4244, in _PerformBulkload
loader.finalize()
File "/home/kave/workspace/google_appengine/google/appengine/ext/bulkload/bulkloader_config.py", line 384, in finalize
self.increment_id(high_id_key)
File "/home/kave/workspace/google_appengine/google/appengine/tools/bulkloader.py", line 1206, in IncrementId
unused_start, end = datastore.AllocateIds(high_id_key, max=high_id_key.id())
File "/home/kave/workspace/google_appengine/google/appengine/api/datastore.py", line 1965, in AllocateIds
return AllocateIdsAsync(model_key, size, **kwargs).get_result()
File "/home/kave/workspace/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 612, in get_result
return self.__get_result_hook(self)
File "/home/kave/workspace/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1863, in __allocate_ids_hook
self.check_rpc_success(rpc)
File "/home/kave/workspace/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1236, in check_rpc_success
raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: Exceeded maximum allocated IDs
Usually my IDs are around `26002`, but the new IDs since a few days ago are
as big as `4948283361329150`. These are causing problems now. (If I change
them to lower values, it's all fine, but I didn't generate these IDs in the
first place.) Why does GAE have such problems with its own generated IDs?
Many Thanks
Answer: This is a known issue, fixed in the 1.8.2 or later SDKs.
Note, if you use bulkloader against the dev appserver those SDKs (1.8.2,
1.8.3) unfortunately have a separate bulkloader issue with that use case (see
[appcfg-py-upload-data-fails-in-google-app-engine-
sdk-1-8-2](http://stackoverflow.com/questions/18060579/appcfg-py-upload-data-
fails-in-google-app-engine-sdk-1-8-2)) but not in production.
|
Installing matplotlib on Ubuntu: ImportError
Question: My platform:
Ubuntu 13.04, Python 2.7.4.
Installing matplotlib failed, ImportError: No module named pyplot.
I have tried many ways such as
$ sudo apt-get install python-matplotlib
and easy_install, installing from source..., I'm following
<http://matplotlib.org/faq/installing_faq.html>
But none of them works; this ImportError always happens. Can anyone help?
**EDIT** The traceback:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-82be63b7783c> in <module>()
----> 1 import matplotlib
/home/wuhuijia/matplotlib.py in <module>()
1 import numpy as np
----> 2 import matplotlib.pyplot as plt
3 import scipy.optimize as so
4
5 def find_confidence_interval(x, pdf, confidence_level):
ImportError: No module named pyplot
Answer: Your script is named `matplotlib.py`. Python will first look locally when
importing modules, that is, on the directory itself. Thus, Python imports
_your script_ (and not the installed matplotlib) when you execute `import
matplotlib.pyplot`, and since your script has no submodule `pyplot`, it fails.
Rename your script to something else (e.g., `testmpl.py`) and you should be
fine.
|
correct style for element-wise operations on lists without numpy (python)
Question: I would like to operate on lists element by element without using numpy. For
example, I want `add([1,2,3], [2,3,4]) = [3,5,7]` and `mult([1,1,1],[9,9,9]) =
[9,9,9]`, but I'm not sure which way of doing this is considered 'correct'
style.
The two solutions I came up with were:
def add(list1, list2):
    list3 = []
    for x in xrange(0, len(list1)):
        list3.append(list1[x] + list2[x])
    return list3

def mult(list1, list2):
    list3 = []
    for x in xrange(0, len(list1)):
        list3.append(list1[x] * list2[x])
    return list3

def div(list1, list2):
    list3 = []
    for x in xrange(0, len(list1)):
        list3.append(list1[x] / list2[x])
    return list3

def sub(list1, list2):
    list3 = []
    for x in xrange(0, len(list1)):
        list3.append(list1[x] - list2[x])
    return list3
where each operator is given a separate function
and
def add(a, b):
    return a + b

def mult(a, b):
    return a * b

def div(a, b):
    return a / b

def sub(a, b):
    return a - b

def elementwiseoperation(list1, list2, function):
    list3 = []
    for x in xrange(0, len(list1)):
        list3.append(function(list1[x], list2[x]))
    return list3
where all the basic functions are defined, and I have a separate function to
use them on each element. I skimmed through PEP8, but didn't find anything
directly relevant. Which way is better?
Answer: The normal way to do this would be to use `map` or `itertools.imap`:
import operator
multiadd = lambda a,b: map(operator.add, a,b)
print multiadd([1,2,3], [2,3,4]) #=> [3, 5, 7]
Ideone: <http://ideone.com/yRLHxW>
`map` is a c-implemented version of your `elementwiseoperation`, with the
advantage of having the standard name, working with any iterable type and
being faster.
Alternatively, you could use `partial` and `map` for a pleasingly pointfree
style:
import operator
import functools
multiadd = functools.partial(map, operator.add)
print multiadd([1,2,3], [2,3,4]) #=> [3, 5, 7]
Ideone: <http://ideone.com/BUhRCW>
Anyway, you've taken the first steps in functional programming yourself. I
suggest you read around the topic.
As a general matter of style, iterating by index using `range` is generally
considered the wrong thing, if you want to visit every item. The usual way of
doing this is simply to iterate the structure directly. Use `zip` or
`itertools.izip` to iterate in parallel:
for x in l:
    print x

for a,b in zip(l,k):
    print a+b
And the usual way to iterate to create a list is not to use `append`, but a
list comprehension:
[a+b for a,b in itertools.izip(l,k)]
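Putting those pieces together, a minimal sketch of a single generic helper
covering all four of your operations (the names are just illustrative):

import operator
from itertools import izip

def elementwise(op, list1, list2):
    # apply op pairwise; izip stops at the end of the shorter list
    return [op(a, b) for a, b in izip(list1, list2)]

print elementwise(operator.add, [1, 2, 3], [2, 3, 4]) # [3, 5, 7]
print elementwise(operator.mul, [1, 1, 1], [9, 9, 9]) # [9, 9, 9]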
|
Installing MySQL-python on mac
Question: I am using OSX 10.8 and PyCharm to work on a Python development project. I
have installed MySQL-python for the mac using the instructions on the website
<http://blog.infoentropy.com/MySQL-
python_EnvironmentError_mysql_config_not_found>
However, running the project gives me this error:
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/ashishagarwal/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so, 2): Symbol not found: _mysql_affected_rows
Referenced from: /Users/ashishagarwal/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so
Expected in: flat namespace
in /Users/ashishagarwal/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so
The file mentioned in the error exists at the location -
/Users/ashishagarwal/.python-
eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so
The entire error message is -
/usr/local/bin/python2.7-32 /Users/ashishagarwal/Optimus/MashPotato/backend/mashpotato/manage.py testserver --addrport 8000
Running on development server
Traceback (most recent call last):
File "/Users/ashishagarwal/Optimus/MashPotato/backend/mashpotato/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
utility.execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/core/management/__init__.py", line 77, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/south/management/commands/__init__.py", line 10, in <module>
import django.template.loaders.app_directories
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/template/loaders/app_directories.py", line 23, in <module>
mod = import_module(app)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 3, in <module>
from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/helpers.py", line 4, in <module>
from django.contrib.admin.util import (flatten_fieldsets, lookup_field,
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/contrib/admin/util.py", line 6, in <module>
from django.db import models
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 40, in <module>
backend = load_backend(connection.settings_dict['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 93, in __getitem__
backend = load_backend(db['ENGINE'])
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 27, in load_backend
return import_module('.base', backend_name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: dlopen(/Users/ashishagarwal/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so, 2): Symbol not found: _mysql_affected_rows
Referenced from: /Users/ashishagarwal/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so
Expected in: flat namespace
in /Users/ashishagarwal/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.6-intel.egg-tmp/_mysql.so
Process finished with exit code 1
Answer: You should install MySQL through [Homebrew](http://brew.sh/) first to get
MySQL-python working properly on OS X:
pip uninstall MySQL-python
brew install mysql
pip install MySQL-python
|
Dynamically choosing class to inherit from
Question: My Python knowledge is limited, I need some help on the following situation.
Assume that I have two classes `A` and `B`, is it possible to do something
like the following (conceptually) in Python:
import os

if os.name == 'nt':
    class newClass(A):
        # class body
else:
    class newClass(B):
        # class body
So the problem is that I would like to create a class `newClass` such that it
will inherit from different base classes based on platform difference, is this
possible to do in Python? Thanks.
Answer: You can use a [conditional
expression](http://docs.python.org/2/reference/expressions.html#conditional-
expressions):
class newClass(A if os.name == 'nt' else B):
    ...
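For what it's worth, the if/else version in the question is also valid Python
(class statements are just executable code), so an equivalent sketch picks the
base class up front; `A` and `B` are assumed to be defined already:

import os

Base = A if os.name == 'nt' else B # choose the base once, at import time

class newClass(Base):
    # class body
    pass

Either way, the base class is chosen once, when the class statement runs, not
per instance.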
|
Commit file in svn with python3
Question: Is there any simple way to commit a file (.txt) that my script creates to svn?
I found a lot of tools, but they seem complicated to use.
Answer: I would need to know which OS you are using to answer this properly.
Anyway, if you are using linux/unix you can use this:
Suppose all the files that need to be updated are stored in a file names.txt,
with absolute paths.
import os

f = open('/root/names.txt')
line = f.readline()
f.close()

words = line.split()
for word in words:
    cmd1 = 'svn up ' + word
    if os.system(cmd1) == 0:
        print('Updated ' + word)
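For the commit itself you can shell out to the `svn` client the same way. A
minimal sketch using `subprocess`; the path and message are placeholders:

import subprocess

ret = subprocess.call(['svn', 'commit', '/path/to/results.txt',
                       '-m', 'add generated results file'])
if ret == 0:
    print('Commit succeeded')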
|
Better way to initialize python ctypes structure field
Question: Is there a better way to initialize a ctypes field that is meant to be
static/constant than what I have below?
from ctypes import *
class foo(LittleEndianStructure):
    _fields_ = [
        ("signature", c_ulonglong),
    ]
    def __init__(self):
        super(LittleEndianStructure,self).__init__()
        self.signature = 0x896489648964
f = foo()
print hex(f.signature)
For example, I was hoping I could do something similar to how you could do it
with a normal python object:
class bar:
    signature = 0x896489648964
b = bar()
print hex(b.signature)
Answer: The short answer is no, you can't do this, and shouldn't want to.
* * *
Your normal Python object sample doesn't do what you think. It's not
automatically initializing an instance attribute; it's creating a class
attribute instead.
They work similarly in some cases, but they're not the same thing. For
example, compare:
>>> class Foo(object):
...     bar=[]
...     def __init__(self):
...         self.baz=[]
...
>>> f1 = Foo()
>>> f2 = Foo()
>>> f1.bar.append(100)
>>> f1.baz.append(100)
>>> f2.bar
[100]
>>> f2.baz
[]
Here, `f1` and `f2` each initialize their own `baz`, but they do _not_
automatically initialize their own `bar`—they share a single `bar` with every
other instance.
And, more directly relevant to this case:
>>> f1.__dict__
{'baz': [100]}
The `bar` class attribute is not part of `f1`'s dictionary.
So, translating the same thing to `ctypes`, your `"signature"` would not be a
member of your structure if you made it a class attribute—that is, it wouldn't
be laid out in memory as part of each instance. Which would defeat the entire
purpose of having it.
* * *
If you know C++, it may help to look at it in C++ terms.
A class attribute, like `bar` above, is sort of* like a static member variable
in C++, while an instance attribute is like a normal instance member variable.
In this C++ code:
struct bar {
    static const long signature = 0x896489648964;
};
… each `bar` is actually an empty structure; there's a single `bar::signature`
stored somewhere else in memory. You can reference it through `bar` instances,
but only because the compiler turns `b1.signature` into `bar::signature`.
* * *
* The reason I say "sort of" is that Python class attributes can be overridden by subclasses, while C++ static members can't; they really are just global variables, and they can only be "hidden" by subclasses.
|
How to pass a list of strings to an opencl kernel using pyopencl?
Question: How to pass list of strings to an opencl kernel the right way?
I tried this way using buffers (see following code), but I failed.
OpenCL (struct.cl):
typedef struct{
    uchar uc[40];
} my_struct9;

inline void try_this7_now(__global const uchar * IN_DATA ,
                          const uint IN_len_DATA ,
                          __global uchar * OUT_DATA){
    for (unsigned int i=0; i<IN_len_DATA ; i++) OUT_DATA[i] = IN_DATA[i];
}

__kernel void try_this7(__global const my_struct9 * pS_IN_DATA ,
                        const uint IN_len ,
                        __global my_struct9 * pS_OUT){
    uint idx = get_global_id(0);
    for (unsigned int i=0; i<idx; i++) try_this7_now(pS_IN_DATA[i].uc, IN_len, pS_OUT[i].uc);
}
Python (opencl_struct.py):
# -*- coding: utf-8 -*-
import pyopencl as cl
import pyopencl.array as cl_array
import numpy
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
# --------------------------------------------------------
LIMIT = 40
mf = cl.mem_flags
import ctypes,sys,struct
"""
typedef struct{
uchar uc[40];
} my_struct9;
"""
INlist = []
INlist.append("That is VERY cool!")
INlist.append("It is a list!")
INlist.append("A big one!")
#INlist.append("But it failes to output. :-(") # PLAY WITH THOSE
INlist.append("WTF is THAT?") # PLAY WITH THOSE
print "INlist : "+str(INlist)
print "largest string "+str( max( len(INlist[iL]) for iL in range(len(INlist)) ) )
strLIMIT=str(LIMIT)
s7 = struct.Struct( (str(strLIMIT+'s') *len(INlist)) )
IN_host_buffer = ctypes.create_string_buffer(s7.size)
s7.pack_into(IN_host_buffer, 0, *INlist)
IN_dev_buffer = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=IN_host_buffer)
OUT_host_buffer = ctypes.create_string_buffer(s7.size)
OUT_dev_buffer = cl.Buffer(ctx, mf.WRITE_ONLY, len(OUT_host_buffer))
print "> len(OUT_host_buffer) "+str(len(OUT_host_buffer))
# ========================================================================================
f = open("struct.cl", 'r')
fstr = "".join(f.readlines())
prg = cl.Program(ctx, fstr).build()
#cl.enqueue_copy(queue, IN_dev_buffer, IN_host_buffer, is_blocking=True) # copy data to device
cl.enqueue_write_buffer(queue, IN_dev_buffer, IN_host_buffer).wait()
prg.try_this7(queue, (1,), None, IN_dev_buffer, numpy.uint32(LIMIT), OUT_dev_buffer)
# ========================================================================================
cl.enqueue_copy(queue, OUT_host_buffer, OUT_dev_buffer).wait()
SSS = s7.unpack_from(OUT_host_buffer,0)
# unpack here OUT_host_buffer
print "(GPU) output : "+str( SSS )+" "
for s in range(len(SSS)):
    print ">>> (GPU) output : "+str( SSS[s] )
I ran the program the first time with "But it failes to output. :-(" as the
4th list element. Then I played around by increasing and decreasing the
elements of the list. Finally, this problem appeared: **The output of the
program is supposed to be** (short version)
> > > > (GPU) output : That is VERY cool!
>>>>
>>>> (GPU) output : It is a list!
>>>>
>>>> (GPU) output : A big one!
>>>>
>>>> (GPU) output : WTF is THAT?
But it is:
> python opencl_struct.py
>
> INlist : ['That is VERY cool!', 'It is a list!', 'A big one!', 'WTF is
> THAT?']
>
> largest string 18
>
>> len(OUT_host_buffer) 160 (GPU) output : ('That is VERY
cool!\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'It is a
list!\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'A big
one!\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
'But it failes to output.
:-(\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
>>
>>> > (GPU) output : That is VERY cool!
>>>>
>>>> (GPU) output : It is a list!
>>>>
>>>> (GPU) output : A big one!
>>>>
>>>> (GPU) output : But it failes to output. :-(
As you can see, the 4th list element differs.
So maybe my approach is wrong, or there is a bug in pyopencl or somewhere
else.
I am using a NVidia 9400 GPU.
Rambo
Answer: Your code seems very complicated to me, and some parts are not clear to me.
For instance, I don't see why you create only one work item:
    prg.try_this7(queue, (1,), None,...)
which forces you to loop through your strings (in the kernel) instead of using
the available parallelism. Anyhow, if I understand correctly, you want to send
some strings to the GPU, copy them into another buffer, get them back on the
host side and display them.
If that's the case, here is a version using only numpy and of course pyopencl:
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

#The kernel uses one workitem per char transfer
prog_str = """kernel void foo(global char *in, global char *out, int size){
    int idx = get_global_id(0);
    if (idx < size){
        out[idx] = in[idx];
    }
}"""

prog = cl.Program(ctx, prog_str).build()

#Note that the dtype of the array of strings is '|S40' since the length of
#the third element is 40, the shape is (3,) and the nbytes is 120 (3 * 40)
original_str = np.array(('this is an average string',
                         'and another one',
                         "let's push even more with a third string"))
str_size = original_str.nbytes

mf = cl.mem_flags
in_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=original_str)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, size=str_size)
copied_str = np.zeros_like(original_str)

#here launch the kernel with str_size workitems, in this case 120;
#this means that some of the workitems won't process any meaningful char
#(not all strings have a length of 40) but it's no biggie
prog.foo(queue, (str_size,), None, in_buf, out_buf, np.int32(str_size))
cl.enqueue_copy(queue, copied_str, out_buf).wait()
print copied_str
And the displayed result:
['this is an average string' 'and another one'
"let's push even more with a third string"]
|
Regex findall start() and end() ? Python
Question: I'm trying to get the start and end positions of a query in a sequence by
using re.findall
import re
sequence = 'aaabbbaaacccdddeeefff'
query = 'aaa'
findall = re.findall(query,sequence)
>>> ['aaa','aaa']
How do I get something like findall.start() or findall.end()?
I would like to get
start = [0,6]
end = [2,8]
I know that
search = re.search(query,sequence)
print search.start(),search.end()
>>> 0,2
would give me only the first instance
Answer: Use [`re.finditer`](http://docs.python.org/2/library/re.html#re.finditer):
>>> import re
>>> sequence = 'aaabbbaaacccdddeeefff'
>>> query = 'aaa'
>>> r = re.compile(query)
>>> [[m.start(),m.end()] for m in r.finditer(sequence)]
[[0, 3], [6, 9]]
From the docs:
> Return an `iterator` yielding `MatchObject` instances over all non-
> overlapping matches for the RE pattern in string. The string is scanned
> left-to-right, and matches are returned in the order found.
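Note that `end()` is an exclusive index in Python, so if you want the
inclusive positions from your example (start = [0,6], end = [2,8]), subtract
one from each end:

>>> [[m.start(), m.end() - 1] for m in r.finditer(sequence)]
[[0, 2], [6, 8]]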
|
Python Overwriting text in Tkinter
Question: I'm having some trouble updating the countdown timer text printed on a canvas.
My current code leaves the area where the text should be blank.
I've tried placing it in various places, but they all lead to no effect or the
same effect I described above. Removing the `canvas.delete` function just
leads to the new time being written on top of the old one.
Here is my current code:
def cdtimerr():
    global cdtimer
    cdtimer -= 1
    cdtext = canvas.create_text(510, 6, text=cdtimer, font="Ubuntu 29 bold", anchor = NW)
    if cdtimer == 0 :
        canvas.delete(ALL)
        scorescreen()
    else:
        canvas.after(1000, cdtimerr)
        canvas.after(1000, canvas.delete(cdtext))
Also, how would I go about terminating the timer prematurely if I pressed an
exit button on the canvas which loads another screen? Currently, if I load
another screen the timer still continues.
Answer: To cancel the timer, save the return value of `after` and call `after_cancel`
with the saved value:
from Tkinter import *
remain = 11
_timer = None
def cdtimerr():
    global remain, _timer
    remain -= 1
    cdtext = canvas.create_text(510, 6, text=remain, font="Ubuntu 29 bold", anchor=NW)
    if remain == 0:
        canvas.delete(ALL)
    else:
        _timer = canvas.after(1000, lambda: (canvas.delete(cdtext), cdtimerr()))
root = Tk()
root.geometry('1024x768')
canvas = Canvas(root)
canvas.pack(expand=1, fill=BOTH)
_timer = canvas.after(0, cdtimerr)
Button(root, text='Cancel', command=lambda: canvas.after_cancel(_timer)).pack()
root.mainloop()
|
python script to remove reversed repeated lines
Question: I got a python code that removes lines if they are similar when reversed. For
example if I have a document that contains:
1,2 3,4
5,6 7,8
2,1 4,3
5,6 8,7
After executing the script, the output is
5,6 7,8
2,1 4,3
5,6 8,7
Consider a line whose first column is 1,2 and second column is 7,8; if another
line contains the reversed values for each column, i.e. 2,1 AND 8,7, it is
considered reversed.
However, I noticed that the script doesn't keep the order of the lines. The
line order is important for me. Also, I need to remove the second (reversed)
duplicate line, not the first one. The code is:
import sys

with open(sys.argv[1]) as inf:
    keys = set()
    for line in inf:
        ports, ips = line.split()
        port1, port2 = ports.split(",")
        ip1, ip2 = ips.split(",")
        if ip1 < ip2:
            keys.add((ip1, port1, ip2, port2))
        else:
            keys.add((ip2, port2, ip1, port1))

with open('results', 'w') as outf:
    for result in keys:
        outf.write("{1},{3}\t{0},{2}\n".format(*result))
Any ideas? Any suggestions on whether we can do it with bash scripting instead?
Thanks
Answer: You can use `collections.OrderedDict` here:
>>> from collections import OrderedDict
>>> dic = OrderedDict()
>>> with open('file.txt') as f:
...     for line in f:
...         key = tuple(tuple(x.split(',')) for x in line.split())
...         rev_key = tuple(x[::-1] for x in key)
...         if key not in dic and rev_key not in dic:
...             dic[key] = line.strip()
...
>>> for v in dic.itervalues():
...     print v
...
1,2 3,4
5,6 7,8
5,6 8,7
|
Haystack indexing error
Question: I am trying to implement the haystack [tutorial](https://django-
haystack.readthedocs.org/en/latest/tutorial.html#installation), but I am
facing problems:
If I already have data in my DB and try to build the index using
`python manage.py rebuild_index`, it gives the following error:
vaibhav@ubuntu:~/temp/HayStackDemo$ python manage.py rebuild_index -v2
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
All documents removed.
Skipping '<class 'django.contrib.auth.models.Permission'>' - no index.
Skipping '<class 'django.contrib.auth.models.Group'>' - no index.
Skipping '<class 'django.contrib.auth.models.User'>' - no index.
Skipping '<class 'django.contrib.contenttypes.models.ContentType'>' - no index.
Skipping '<class 'django.contrib.sessions.models.Session'>' - no index.
Skipping '<class 'django.contrib.sites.models.Site'>' - no index.
Skipping '<class 'django.contrib.admin.models.LogEntry'>' - no index.
Indexing 1 notes
indexed 1 - 1 of 1 (by 30508).
ERROR:root:Error updating demoApp using default
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 256, in update_backend
do_update(backend, index, qs, start, end, total, self.verbosity)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 78, in do_update
backend.update(index, current_qs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 155, in update
prepped_data = index.full_prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 196, in full_prepare
self.prepared_data = self.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 187, in prepare
self.prepared_data[field.index_fieldname] = field.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 152, in prepare
return self.convert(super(CharField, self).prepare(obj))
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 73, in prepare
return self.prepare_template(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 129, in prepare_template
t = loader.select_template(template_names)
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 193, in select_template
raise TemplateDoesNotExist(', '.join(not_found))
TemplateDoesNotExist: search/indexes/demoApp/note_text.txt
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/rebuild_index.py", line 15, in handle
call_command('update_index', **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 150, in call_command
return klass.execute(*args, **defaults)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 184, in handle
return super(Command, self).handle(*items, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 341, in handle
label_output = self.handle_label(label, **options)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 210, in handle_label
self.update_backend(label, using)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 256, in update_backend
do_update(backend, index, qs, start, end, total, self.verbosity)
File "/usr/local/lib/python2.7/dist-packages/haystack/management/commands/update_index.py", line 78, in do_update
backend.update(index, current_qs)
File "/usr/local/lib/python2.7/dist-packages/haystack/backends/elasticsearch_backend.py", line 155, in update
prepped_data = index.full_prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 196, in full_prepare
self.prepared_data = self.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/indexes.py", line 187, in prepare
self.prepared_data[field.index_fieldname] = field.prepare(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 152, in prepare
return self.convert(super(CharField, self).prepare(obj))
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 73, in prepare
return self.prepare_template(obj)
File "/usr/local/lib/python2.7/dist-packages/haystack/fields.py", line 129, in prepare_template
t = loader.select_template(template_names)
File "/usr/local/lib/python2.7/dist-packages/django/template/loader.py", line 193, in select_template
raise TemplateDoesNotExist(', '.join(not_found))
django.template.base.TemplateDoesNotExist: search/indexes/demoApp/note_text.txt
And if I remove all of the data and then try, I get this:
vaibhav@ubuntu:~/temp/HayStackDemo$ python manage.py rebuild_index -v2
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
All documents removed.
Skipping '<class 'django.contrib.auth.models.Permission'>' - no index.
Skipping '<class 'django.contrib.auth.models.Group'>' - no index.
Skipping '<class 'django.contrib.auth.models.User'>' - no index.
Skipping '<class 'django.contrib.contenttypes.models.ContentType'>' - no index.
Skipping '<class 'django.contrib.sessions.models.Session'>' - no index.
Skipping '<class 'django.contrib.sites.models.Site'>' - no index.
Skipping '<class 'django.contrib.admin.models.LogEntry'>' - no index.
Indexing 0 notes
My search_indexes.py:
import datetime
from haystack import indexes
from demoApp.models import Note
#------------------------------------------------------------------------------
class NoteIndex(indexes.SearchIndex, indexes.Indexable):
    author = indexes.CharField(model_attr='user')
    pub_date = indexes.DateTimeField(model_attr='pub_date')
    text = indexes.CharField(document=True, use_template=True)

    def get_model(self):
        return Note

    def index_queryset(self, using=None):
        """Used when the entire index for model is updated."""
        return self.get_model().objects.filter(pub_date__gte=datetime.datetime.now())
I have also tried using this class and these methods, but nothing worked:
import datetime
from haystack import indexes
from demoApp.models import Note
#------------------------------------------------------------------------------
#All Fields
class AllNoteIndex(indexes.ModelSearchIndex, indexes.Indexable):
class Meta:
model = Note
And this :
import datetime
from haystack import indexes
from demoApp.models import Note
#------------------------------------------------------------------------------
class NoteIndex(indexes.SearchIndex, indexes.Indexable):
author = indexes.CharField(model_attr='user')
pub_date = indexes.DateTimeField(model_attr='pub_date')
text = indexes.CharField(document=True, use_template=True)
def get_model(self):
return Note
def load_all_queryset(self):
# Pull all objects related to the Note in search results.
return Note.objects.all().select_related()
But every time, the same issue. If I change the time zone setting in my project
settings file and try to update or rebuild the index again, I get the same error.
My DIR structure :
vaibhav@ubuntu:~/temp/HayStackDemo$ tree
.
├── demoApp
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── models.py
│ ├── models.pyc
│ ├── search_indexes.py
│ ├── search_indexes.pyc
│ ├── templates
│ │ └── search
│ │ ├── indexes
│ │ │ └── demoApp
│ │ │ └── note_text.txt
│ │ └── search.html
│ ├── tests.py
│ └── views.py
├── HayStackDemo
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
├── manage.py
└── sqlite.db
settings.py
# Django settings for HayStackDemo project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
# ('Your Name', '[email protected]'),
)
MANAGERS = ADMINS
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': '/home/vaibhav/temp/HayStackDemo/sqlite.db', # Or path to database file if using sqlite3.
'USER': '', # Not used with sqlite3.
'PASSWORD': '', # Not used with sqlite3.
'HOST': 'localhost', # Set to empty string for localhost. Not used with sqlite3.
'PORT': '', # Set to empty string for default. Not used with sqlite3.
}
}
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.4/ref/settings/#allowed-hosts
ALLOWED_HOSTS = []
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'America/Chicago'
#'Asia/Kolkata'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/home/media/media.lawrence.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/home/media/media.lawrence.com/static/"
STATIC_ROOT = ''
# URL prefix for static files.
# Example: "http://media.lawrence.com/static/"
STATIC_URL = '/static/'
# Additional locations of static files
STATICFILES_DIRS = (
# Put strings here, like "/home/html/static" or "C:/www/django/static".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'kg5kd%92#5*ybo-$92ci$u349s$1*xhmhnq68!oue%r=^fq#yz'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'HayStackDemo.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'HayStackDemo.wsgi.application'
TEMPLATE_DIRS = (
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'haystack',
'demoApp',
)
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
'URL': 'http://127.0.0.1:9200/',
'INDEX_NAME': 'haystack',
},
}
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
Can someone please help me...
Answer:
class PersonIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True, template_name="person_text.txt")
name = indexes.CharField(model_attr='name')
def get_model(self):
return Person
You have to add the name of the template explicitly; otherwise Haystack looks for it at the default path:
template_name="person_text.txt"
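Adapted to the Note index from the question, that would look something like this (a sketch; I'm assuming template_name takes the template path relative to your template directories, so the file from the tree above is referenced by its full search/indexes/... path):

    from haystack import indexes
    from demoApp.models import Note

    class NoteIndex(indexes.SearchIndex, indexes.Indexable):
        # point Haystack at the template explicitly instead of the default lookup
        text = indexes.CharField(document=True, use_template=True,
                                 template_name='search/indexes/demoApp/note_text.txt')
        author = indexes.CharField(model_attr='user')
        pub_date = indexes.DateTimeField(model_attr='pub_date')

        def get_model(self):
            return Note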
|
Why is my python output delayed to the end of the program?
Question: I've got an extremely simple application:
import sys
from time import sleep
for i in range(3):
sys.stdout.write('.')
sleep(1)
print('Welcome!')
I expect it to print out a dot every second (3 times), after which it should
display "Welcome!". Unfortunately, it simply waits three seconds, and then
prints out everything at once. I'm on a mac running regular Python 2.7 and I
have no clue why this code behaves like this. Any suggestions?
Answer: It's because `sys.stdout` is buffered. Use `flush`:
import sys
from time import sleep
for i in range(3):
sys.stdout.write('.')
sys.stdout.flush()
sleep(1)
print('Welcome!')
|
python check if colour exists on mouse click
Question: I am trying to automate part of a game. I've done all but one part of what I
need and its proving to be difficult. My code so far:
import win32api, win32con, time
import win32com.client as comclt
counter = 0
holder = 611
def click(x,y):
win32api.SetCursorPos((x,y))
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)
click(100,470)#click auto
time.sleep(1)
click(336,222)#click private parking
time.sleep(0.5)
click(1200,330)#click steal
time.sleep(1)
click(429,y)#select car
The last line, "select car", is where I am having an issue. Basically I cannot
always steal a car, because the attempt can fail, so I need a way to check
whether a car has been stolen. Once you steal a car, it gives you some info in
a row, and at the end is a colour bar. What I would like to do is this:
After stealing a car, check to see if the colour red exists at the x,y
coordinates. If so, do something else (which is already done). If not, I will
set the y coordinate back. What is the best way to go about checking whether
the colour red is present?
Answer: Solved using:

    color = win32gui.GetPixel(win32gui.GetDC(win32gui.GetActiveWindow()), 1242, 523)
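To turn that into the red check from the question, one approach is to unpack the COLORREF that GetPixel returns (a sketch; the 0x00BBGGRR layout is the standard Win32 COLORREF format, and the coordinates and thresholds here are illustrative):

    import win32gui

    def is_red(x, y):
        # GetPixel returns a COLORREF packed as 0x00BBGGRR
        color = win32gui.GetPixel(win32gui.GetDC(win32gui.GetActiveWindow()), x, y)
        r = color & 0xFF
        g = (color >> 8) & 0xFF
        b = (color >> 16) & 0xFF
        return r > 200 and g < 100 and b < 100  # "mostly red"

    if is_red(1242, 523):
        pass  # steal succeeded: proceed
    else:
        pass  # set the y coord back and retry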
|
Python: Running a strip of code unless imported
Question: I have a file that I'm importing into my program (say a file with
dictionaries). At the beginning of this file I want to put a strip of code
which prints that this is not the main file and then calls `exit()`. The
problem I find is that this code is run on import of the dictionaries module,
which I don't want to happen. How do I prevent that?
I tried this but it doesn't work:
if not Main_file:
print('These aren\'t the droids you\'re looking for')
exit()
In the main file there would, of course, be `Main_file = True` before the import.
Answer: You can use the `__name__` special variable to check if your module is used as
main:
if __name__ == '__main__':
print('These aren\'t the droids you\'re looking for')
exit()
|
Import a .mdb to SQLServer via Stored Procedure
Question: **Context**
We need to import a .mdb archive to our local database so that we can
manipulate all the DATA.
**DATA**
that .mdb file Always have the same amount of tables (58) and the same table
structure, those tables may have 109.000 to 10million entries
**Actual situation**
Now we have a python program that do the migration (called Migrathon) that is
actually old and pretty damn slow, it takes more than 10 hours to import
16.000 entries to our local data base so here they wanted to change it.
**What i have to do**
First of all i work for ppl that use GeneXus Evo1, this tool can execute SP
from a datasource , so what i need or what i want to do is a Procedure that
can Take from the .mdb source an migrate every table that is into that file to
a local DataBase where i manipulate everything as i please
**My question**
Is there any chance of doing it? its SQLServer2008 and the Access Files are
from AC2003, the data structure as i said before are always the same
structure, same tables, same name the only diference are the amount of
entries, Thanks in advance
Answer: You can use an OLEDB driver in a T-SQL procedure like this:

    SELECT * INTO #yourWorkTable FROM OPENDATASOURCE('Microsoft.Jet.OLEDB.4.0', 'Data Source=\\server-name\mdbs\test.mdb')...[tableName]

With this query you get everything you need; just add some logic to iterate
through the tables and you are done.
|
Add files from one tar into another tar in python
Question: I would like to make a copy of a tar, with some files removed (based on their
name and possibly other properties, like being a symlink). As I already have the
tar file open in python, so I would like to do this in python. I understood
that TarFile.getmembers() returns a list of TarInfo objects and
TarFile.addfile(tarinfo) accepts a TarInfo object. But when I feed one into
the other, a corrupted tar is created (without errors).
import tarfile
oldtar=tarfile.open('/tmp/old.tar',"r")
newtar=tarfile.open('/tmp/new.tar',"w")
for member in oldtar.getmembers():
if not member.name == 'dev/removeme.txt':
newtar.addfile(member)
else:
print "Skipped", member.name
newtar.close()
oldtar.close()
Answer: You have to pass the `fileobj`-argument to
[`addfile()`](http://docs.python.org/2/library/tarfile.html#tarfile.TarFile.addfile):
newtar.addfile(member, oldtar.extractfile(member.name))
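One caveat: `extractfile()` returns None for members that are not regular files (directories, symlinks, devices), so a slightly more defensive version of the loop might be:

    for member in oldtar.getmembers():
        if member.name == 'dev/removeme.txt':
            print "Skipped", member.name
        elif member.isfile():
            # regular file: copy metadata and contents
            newtar.addfile(member, oldtar.extractfile(member.name))
        else:
            # directory, symlink, etc.: metadata only, no file object needed
            newtar.addfile(member)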
|
Python: "if closest to 1"
Question: I am doing some calculations on a dictionary. The important thing is
that I want to make an if-condition that, roughly, says

"if **x** has a value that is closer to 1 (or equal to 1) than **variable** "

It's kind of hard to explain, but I hope you understand.
Answer: You can use [absolute
value](http://docs.python.org/2/library/functions.html#abs):
if abs(x-1) < abs(variable-1):
...
since absolute value of `x-1` is the distance between `x` and `1`, and
similarly absolute value of `variable-1` is the distance between `variable`
and `1`.
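Since the question mentions a dictionary: if instead you want the single value closest to 1 out of all dictionary values, `min()` with a key function does it in one line (`d` here is a stand-in for your dictionary):

    d = {'a': 0.5, 'b': 1.2, 'c': 3.0}  # example data
    closest = min(d.values(), key=lambda v: abs(v - 1))  # 1.2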
|
Python Sigma Sums
Question: I have a list of values x=`[1,-1,-1,1,1,-1,1,-1,1,-1]` and I have another
blank list `y=[ ]`
I am trying to create a function that will take a sigma sum of values in `x`
and store them in `y`.
For instance, `y[0]` should be the sum of `x[0]*x[0] + x[0]*x[1] + x[0]*x[2] +
... + x[0]*x[9]` .
Similarly, `y[1]` should be the sum of `x[1]*x[0] + x[1]*x[1] + x[1]*x[2]+ ...
+ x[1]*x[9]`.
This has to be done for `y[0] through y[9]`.
Also, in the sums, `x[i]*x[i]` must be zero. So for instance in `y[0]`,
`x[0]*x[0]` has to be zero. Similarly, in the sum for `y[1]`, `x[1]*x[1]` must
be zero.
This is my code, but it always gives me some sort of error regarding indices:
x=[1,-1,-1,1,1,-1,1,-1,1,-1]
y=[]
def list_extender(parameter):
for i in parameter:
parameter[i]*parameter[i]==0
variable=numpy.sum(parameter[i]*parameter[:])
if variable>0:
variable=1
if variable<0:
variable=-1
y.append(variable)
return y
Then I run `print list_extender(x)`, which should print list `y` with the sigma
sums described above, but I always get an error. What am I doing wrong? Any
help would be highly appreciated!
Answer: You're doing way too much typing and computation here. Your function could be
shorter and simpler if you computed the sum of `x` first, then used that to
compute the elements of `y`. It'd also run faster.
Just do this:
x_sum = sum(x)
y = [item * (x_sum - item) for item in x]
# or, if you really want to store the results into an existing list y
# y[:] = [item * (x_sum - item) for item in x]
Replace `sum` and the list comprehension with numpy operations if you're using
numpy:
import numpy as np
x = np.array([1,-1,-1,1,1,-1,1,-1,1,-1])
y = x * (x.sum() - x)
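As a quick sanity check of the identity used above (each `y[i]` is the sum of `x[i]*x[j]` over all `j != i`), a brute-force version can be compared against it:

    y_check = [sum(xi * xj for j, xj in enumerate(x) if i != j)
               for i, xi in enumerate(x)]
    assert list(y) == y_check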
|
py2app ImportError with watchdog
Question: I am attempting to use py2app to bundle a small Python app that I've made in
Python 2.7 on Mac. My app uses the [Watchdog
library](http://pythonhosted.org/watchdog/), which is imported at the top of
my main file:
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
When running my program, these import statements work just fine, and the
program works as expected. However, after running py2app, launching the
bundled application generates the following error:
ImportError: No module named watchdog.observers
At first I thought it was something to do with the `observers` module being
nested inside `watchdog`, but to test that, I added the line
import watchdog
to the top of my program, and then upon running the app, got the error
ImportError: No module named watchdog
so it seems that it actually can't find the `watchdog` package, for some
reason.
I tried manually adding the `watchdog` package using py2app's `--packages`
option:
$ python setup.py py2app --packages watchdog
but it had no effect.
My unbundled Python program runs just fine from the command line; other
downloaded modules I've imported are giving no errors; and I have successfully
bundled a simple "Hello World!" app using py2app, so I believe my setup is
correct.
But I'm kind of out of ideas for how to get py2app to find the `watchdog`
package. Any ideas or help would be greatly appreciated.
Edit: Here is the text of my `setup.py`, as generated by py2applet. I haven't
modified it.
from setuptools import setup
APP = ['watcher.py']
DATA_FILES = []
OPTIONS = {'argv_emulation': True}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app'],
)
Answer: Try manually including the desired packages in the setup.py file:
from setuptools import setup
APP = ['watcher.py']
DATA_FILES = []
PKGS = ['watchdog']  # plus whatever other packages you want to include
OPTIONS = {
'argv_emulation': True,
'packages' : PKGS,
}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app'],
)
|
How to get a Python console to access the vim module
Question: Recently I have been looking into vim plugin development, and I found I missed
the ability to use a Python REPL (ipython/bpython for eg) to inspect the vim
module, and generally the environment (current open document, line number,
selection etc).
This is in principle - not very advanced and something I've done from other
applications that embed Python...
Typically you could do this:
import code
code.interact(local=locals())
Or with IPython:
import IPython
IPython.embed()
However, when vim embeds Python, it replaces `sys.stdin/stdout/stderr`. I
managed to temporarily restore these, and it very nearly works, but there are
still some problems: scrambled line endings, and stdin isn't reading properly.
eg,
std_back = sys.stderr, sys.stdin, sys.stdout
sys.stderr = sys.__stderr__
sys.stdout = sys.__stdout__
sys.stdin = sys.__stdin__
import IPython
IPython.embed()
sys.stderr, sys.stdin, sys.stdout = std_back
I tried with both vim and gvim on linux, and the stdin/stdout was not working
right (hard to explain, but only every second key input was accepted, newlines
were not printed, and text wrapped oddly).
So my final attempt was to use idle, which bypasses the terminal and opens up
a tk interface with a command line.
import idlelib
import idlelib.PyShell
idlelib.PyShell.main()
This loads the console, but in a sub-process so no access to `vim` module, I
checked on the idlelib source code, and found you can disable subprocess use
by faking a command line argument which idle would normally access from being
launched from the command line directly.
import sys
sys.argv.append("-n")
import idlelib
import idlelib.PyShell
idlelib.PyShell.main()
OK, so this works, **but**, the Python devs are going to remove idlelib's option
to run inside the process (currently it's deprecated). Do any other devs know a
way of using a Python REPL within vim, called from `pyfile` / `py3file`?
Answer: This is a simple interpreter shell that listens on a network socket:
import socket
import sys
import code
from threading import Thread
def remote_shell(local, host='localhost', port=5555):
serv = socket.socket()
serv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serv.bind((host, port))
serv.listen(1)
streams = sys.stdout, sys.stderr
print('listening on %s:%d' % (host, port))
try:
sock, addr = serv.accept()
f = sock.makefile('rw')
sys.stdout = sys.stderr = f
def read(prompt):
f.write(prompt)
f.flush()
return f.readline().rstrip('\n')
code.interact(readfunc=read, local=local)
finally:
sys.stdout, sys.stderr = streams
serv.close()
sock.close()
if __name__ == '__main__':
remote_shell(local=locals())
# or if it should run in background:
# Thread(target=remote_shell, kwargs={'local': locals()}).start()
To connect, simply use `nc localhost 5555` - but it's probably too
restricted for advanced use. With a tool like `rlwrap` at least it would be
possible to have history, but the more advanced readline features like tab
completion won't work. But at least it's something...
|
Variables while reading a file with multiple rows in a list - python or shell
Question: I am looking to do this in python or a basic shell script.
I have a file with multiple entries whose data I would like to manipulate and
store in variables.
The file has rows with multiple columns. The first column is a person's name
(i.e., Joe, Mary, etc). The second (after the comma) is an ID. I would like to
store each ID into a variable and then construct some links as shown below.
The problem is that one name can have only one ID or multiple, as you can see
below:
Joe, 21142 21143 21909 24125
Mary, 22650 23127
John, 24325
Mike, 24683 24684 26973
How can I store each value in the "second column" into a variable so I can
then construct links like this:
http://example/Joe/21142
http://example/Joe/21143
http://example/Joe/21909
http://example/Joe/24125
http://example/Mary/22650
http://example/Mary/23127
Thank you in advance!
* Omar
Answer: can be done with `GNU awk`
awk -F'[, ]+' '{for (i=2; i<=NF; ++i) print "http://example/"$1"/"$i }' input.txt
http://example/Joe/21142
http://example/Joe/21143
http://example/Joe/21909
http://example/Joe/24125
http://example/Mary/22650
http://example/Mary/23127
http://example/John/24325
http://example/Mike/24683
http://example/Mike/24684
http://example/Mike/26973
Or in Python
s = '''Joe, 21142 21143 21909 24125
Mary, 22650 23127
John, 24325
Mike, 24683 24684 26973
'''
from StringIO import StringIO
from contextlib import closing
with closing(StringIO(s)) as f:
for line in f:
x, y = line.split(',')
x = x.strip()
y = y.strip().split()
leader = 'http://example/{}'.format(x)
print '\n'.join('{}/{}'.format(leader, z) for z in y)
|
python: several functions in one file, AttributeError 'module' object has no attribute
Question: I am a beginner in Python, and I just created a module file which
includes several functions. When I call the first function defined in the
file, it is fine. But when I try to call the second function, it says:
`AttributeError: 'module' object has no attribute 'file2file'` (file2file is a
function I defined by myself)
Here is the code of the file
import sys
import scipy as sci
import scipy.sparse as sp
import numpy as np
def file2map(inf):
dic = dict()
with open(inf, "r") as fin:
for line in fin:
s = line.split("\t")
dic[(int(s[0]),int(s[1]))] = float(s[2])
return dic
def file2file(inf,outf):
with open(inf, "r") as fin:
with open(outf, "w") as fout:
for line in fin:
s = line.split("\t")
fout.write("t{0}\t{1}\t{2}\n",s[0],s[1],s[2])
the file's name is `dataprocessing.py`, when I typed
`dataprocessing.file2map('xxx.data')`, it is fine, but generated an error
message of `AttributeError: 'module' object has no attribute 'file2file'` when
I typed `dataprocessing.file2file('xxx.data','out.data')`.
Thank you very much!
Answer: Maybe this is not the thing that causes your error, but I see that you missed
an indent in the innermost for loop in `file2file(inf,outf)`.
It should be:

    def file2file(inf,outf):
        with open(inf, "r") as fin:
            with open(outf, "w") as fout:
                for line in fin:
                    s = line.split("\t")
                    fout.write("t{0}\t{1}\t{2}\n".format(s[0], s[1], s[2]))

(Note also that `file.write()` takes a single string, so the values have to go
through `str.format()` rather than being passed to `write()` as extra
arguments.)

If the indentation is already correct, make sure you are not importing a stale
compiled copy: delete `dataprocessing.pyc` (or `reload(dataprocessing)` in your
session), since Python keeps using the previously imported version of the
module.
|
NLTK with flask import error
Question: My folder directory is as such
/maindir
__init__.py
settings.py
start
/run.py
/venv
.. other directories for flask here bin,include..etc
/app
__init__.py
main.py
views.py
/nbc
/__init__.py
naivebayesclassifier.py
The naivebayesclassifier.py module uses the nltk library as such
from nltk.probability import ELEProbDist, FreqDist
import nltk
from collections import defaultdict
from os import listdir
from os.path import isfile, join
I'm having an issue. If I run the program directly by going into /app and
running

    python main.py

which goes on to import nbc and use it, I have no problems.
However, when I try to deploy this along with Flask, I move one directory out
and run ./start, which has the following:
. venv/bin.activate
./run.py
and run.py has the following
#!venv/bin/python
from app import app
app.run(debug = True)
This worked before I included the nltk library; however, now it gives me an
error saying:
ImportError: No module named nltk.probability
I had installed nltk using
sudo pip install -U pyyaml nltk
but I feel I'm missing some installation somewhere to make it work while
deploying.
Answer: It looks like the activation of your virtualenv is causing the problem. Did
you activate the virtualenv before running `sudo pip install -U pyyaml nltk`?
If not, they were installed globally. Remember that by default, when you
create a virtualenv environment, it will ignore all packages not installed
directly into the environment itself (in other words, it will ignore the
packages you installed globally using `apt-get install`). So, you have two
options:
1. Install your dependencies into your virtualenv (by activating the virtualenv then doing `pip install nltk`). If nltk depends on any development libraries, you will need to install those development libraries as well. Those can be installed using your package manager (`apt-get`).
2. Rebuild your virtualenv, this time using the [`--system-site-packages`](http://www.virtualenv.org/en/latest/#the-system-site-packages-option) option. This will allow you to use packages installed outside of the virtualenv environment.
|
Something like python timedelta in golang
Question: I want to get a datetime, counting weeks from a date, days from a week and
seconds from 00:00 time.
With Python I can use this:
BASE_TIME = datetime.datetime(1980,1,6,0,0)
tdelta = datetime.timedelta(weeks = 1722,
days = 1,
seconds = 66355)
mydate = BASE_TIME + tdelta
I'm trying to get it with Go, but I have some problems to reach it:
package main
import (
"fmt"
"time"
)
var base = time.Date(1980, 1, 6, 0, 0, 0, 0, time.UTC)
func main() {
weeks := 1722
days := 1
seconds := 66355
weeksToSecs := 7 * 24 * 60 * 60
daysToSecs := 24 * 60 * 60
totalSecs := (weeks * weeksToSecs) + (days * daysToSecs) + seconds
nanosecs := int64(totalSecs) * 1000000000
//delta := time.Date(0, 0, 0, 0, 0, totalSecs, 0, time.UTC)
date := base.Add(nanosecs)
fmt.Printf("Result: %s", date)
}
prog.go:21: cannot use nanosecs (type int64) as type time.Duration in function
argument
<http://play.golang.org/p/XWSK_QaXrQ>
What I'm missing?
Thanks
Answer: In Go, `time.Duration` is just an `int64` count of nanoseconds, and multiplying untyped constants by `time.Hour` or `time.Second` already yields a `time.Duration`, so you can build the offset directly. (Your original version would also compile with an explicit conversion: `base.Add(time.Duration(nanosecs))`.)
package main
import (
"fmt"
"time"
)
func main() {
baseTime := time.Date(1980, 1, 6, 0, 0, 0, 0, time.UTC)
date := baseTime.Add(1722*7*24*time.Hour + 24*time.Hour + 66355*time.Second)
fmt.Println(date)
}
[Playground](http://play.golang.org/p/l2pV8wQBQC)
* * *
Output
2013-01-07 18:25:55 +0000 UTC
|
Setting font in tkinter returns errors
Question: I'm in the middle of rewriting the code for my first tkinter application, in
which I'd avoided using classes. That was a dead end and I have to finally
learn class programming in python. I've encountered a very weird error and I
have no idea how to fix it. I've tried, but to no effect. What I'm trying to
do is specify a font for two labels in my app. It worked well in my previous,
class-free code but now it gives me an error:
(...) line 56, in create_widgets
TimeFont = font.Font(family='Palatino', size=88, weight='bold')
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/tkinter/font.py", line 71, in __init__
root = tkinter._default_root
AttributeError: 'module' object has no attribute '_default_root'
Here's the function I'm using for creating widgets:
def create_widgets(self):
self.set_timer = ttk.Entry(self, textvariable=self.timer)
self.start = ttk.Button(self, text='Start', command=self.start)
TimeFont = font.Font(family='Palatino', size=88, weight='bold') #the infamous line 56
self.display1 = ttk.Label(self, textvariable=self.player1, font=TimeFont)
self.display2 = ttk.Label(self, textvariable=self.player2, font=TimeFont)
And some more code "from above" in case its relevant:
from decimal import *
from tkinter import *
from tkinter import ttk
from tkinter import font
import time, _thread, subprocess
class Chclock(ttk.Frame):
@classmethod
def main(cls):
NoDefaultRoot()
root = Tk()
app = cls(root)
app.grid(sticky=NSEW)
root.grid_columnconfigure(0, weight=1)
root.grid_rowconfigure(0, weight=1)
root.resizable(True, False)
root.mainloop()
def __init__(self, root):
super().__init__(root)
root.bind('q', self.player1move)
root.bind('p', self.player2move)
root.bind('b', self.pause)
root.bind('a', self.undo)
root.bind('l', self.undo)
self.create_variables()
self.create_widgets() #here I call the function containing the error
self.grid_widgets()
self.grid_columnconfigure(0, weight=1)
It's probably something silly but I just can't _understand_ what's causing
this problem. It used to work fine...
Thanks!
Answer: Perhaps the code "NoDefaultRoot()" and the error message "object has no
attribute '_default_root'" might have something to do with each other? Notice
a correlation? First rule of debugging is to assume the error message is
telling you something useful.
The problem is that you are creating a font object without telling that object
what window it belongs to. Since you aren't telling it, it chooses to use the
default root window. However, you've explicitly requested no default root
window.
This is a somewhat strange way to structure your Tkinter program. I recommend
reading the answers in the question [Python Tkinter Program
Structure](http://stackoverflow.com/q/17466561/7432)
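A minimal fix along those lines: pass the owning widget explicitly, since `tkinter.font.Font` accepts the widget to tie itself to as its `root` argument (a sketch against the create_widgets method above):

    # tie the font to this Frame's Tk instance instead of the (absent) default root
    TimeFont = font.Font(root=self, family='Palatino', size=88, weight='bold')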
|
SQLAutoCode - error when attempting to generate schema
Question: I'm trying to auto generate a schema for use in SQLalchemy, I'm using
sqlautocode to do this, I use the following command
D:~ admin$ sqlautocode mysql://'user':"pass"@xx.xx.xx.xx:3306/db_name -o tables.py
but I keep getting the following error..
Traceback (most recent call last):
File "/usr/local/bin/sqlautocode", line 9, in <module>
load_entry_point('sqlautocode==0.7', 'console_scripts', 'sqlautocode')()
File "/Library/Python/2.7/site-packages/distribute-0.6.45-py2.7.egg/pkg_resources.py", line 343, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/Library/Python/2.7/site-packages/distribute-0.6.45-py2.7.egg/pkg_resources.py", line 2354, in load_entry_point
return ep.load()
File "/Library/Python/2.7/site-packages/distribute-0.6.45-py2.7.egg/pkg_resources.py", line 2060, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/Library/Python/2.7/site-packages/sqlautocode/main.py", line 4, in <module>
from declarative import ModelFactory
File "/Library/Python/2.7/site-packages/sqlautocode/declarative.py", line 17, in <module>
from sqlalchemy.ext.declarative import _deferred_relation as _deferred_relationship
ImportError: cannot import name _deferred_relation
<https://pypi.python.org/pypi/sqlautocode>
Answer: Use [sqlacodegen](https://pypi.python.org/pypi/sqlacodegen) instead:
`D:~ admin$ sqlacodegen mysql://'user':"pass"@xx.xx.xx.xx:3306/db_name
--outfile tables.py`
|
Python -- special method arithmetic using existing class methods
Question: This class takes a finite field polynomial string, parses it, operates
(+-*/%), then outputs in the same format as the input. It works fine (so far).
However, now I'm trying to implement special methods on the arithmetic
operators, and I can't get it past the point of simply concatenating the
strings. Generally, the idea is to initialize the input into a class instance,
but in this case there is a regex on input that seems to complicate any
attempts to do this. I'm teaching myself Python so this is a horror movie for
me, but probably just a toy for any seasoned Python programmer.
These seem to have a lot of info, but I'm not sure how much they're helpful in
this situation:
http://stackoverflow.com/questions/10842166/programmatically-create-arithmetic-special-methods-in-python-aka-factory-funct
http://rosettacode.org/wiki/S-Expressions
http://www.greenteapress.com/thinkpython/thinkCSpy/html/chap14.html
http://docs.cython.org/src/userguide/special_methods.html
Here's the class and the examples I'm using:
import re
class gf2pim:
def id(self,lst):
"""returns modulus 2 (1,0,0,1,1,....) for input lists"""
return [int(lst[i])%2 for i in range(len(lst))]
def listToInt(self,lst):
"""converts list to integer for later use"""
result = self.id(lst)
return int(''.join(map(str,result)))
def parsePolyToListInput(self,poly):
"""performs regex on raw string and converts to list"""
c = [int(i.group(0)) for i in re.finditer(r'\d+', poly)]
return [1 if x in c else 0 for x in xrange(max(c), -1, -1)]
def prepBinary(self,x,y):
"""converts to base 2; bina,binb are binary values like 110100101100....."""
x = self.parsePolyToListInput(x); y = self.parsePolyToListInput(y)
a = self.listToInt(x); b = self.listToInt(y)
bina = int(str(a),2); binb = int(str(b),2)
return bina,binb #
def add(self,a,b):
"""a,b are GF(2) polynomials like x**7 + x**3 + x**0 ...; returns binary string"""
bina,binb = self.prepBinary(a,b)
return self.outFormat(bina^binb)
def subtract(self,x,y):
"""same as addition in GF(2)"""
return self.add(x,y)
def quotient(self,a,b):
"""a,b are GF(2) polynomials like x**7 + x**3 + x**0 ...; returns quotient formatted as polynomial"""
a,b = self.prepBinary(a,b)
return self.outFormat(a/b)
def remainder(self,a,b):
"""a,b are GF(2) polynomials like x**7 + x**3 + x**0 ...; returns remainder formatted as polynomial"""
a,b = self.prepBinary(a,b)
return self.outFormat(a%b)
def outFormat(self,raw):
"""process resulting values into polynomial format"""
raw = "{0:b}".format(raw); raw = str(raw[::-1]); g = [] #reverse binary string for enumeration
g = [i for i,c in enumerate(raw) if c == '1']
processed = "x**"+" + x**".join(map(str, g[::-1]))
if len(g) == 0: return 0 #return 0 if list empty
return processed #returns result in gf(2) polynomial form
def __add__(self,other):
return gf2pim.add(self,other)
The last example at the bottom shows the problem:
obj = gf2pim()
a = "x**14 + x**1 + x**0"; b = "x**6 + x**2 + x**1"
c = "x**2 + x**1 + x**0"; d = "x**3 + x**1 + x**0"
e = "x**3 + x**2 + x**1 + x**0"; f = "x**2"; g = "x**1 + x**0"; h = "x**3 + x**2 + x**0"
p = "x**13 + x**1 + x**0"; q = "x**12 + x**1"; j = "x**4 + x**3 + x**1 + x**0"
print "add: [%s] + [%s] = %s "%(a,b,obj.add(a,b))
print "add: [%s] + [%s] = %s "%(c,d,obj.add(c,d))
print "quotient (max(a,b)/min(a,b): [%s] / [%s] = %s "%(a,b,obj.quotient(a,b))
print "remainder (max(a,b) mod min(a,b)): [%s] mod [%s] = %s "%(a,b,obj.remainder(a,b))
print "quotient (max(a,b)/min(a,b): [%s] / [%s] = %s "%(c,d,obj.quotient(c,d))
print "remainder (max(a,b) mod min(a,b): [%s] mod [%s] = %s "%(c,d,obj.remainder(c,d))
print "quotient (max(a,b)/min(a,b): [%s] / [%s] = %s "%(q,q,obj.quotient(q,q))
print "remainder (max(a,b) mod min(a,b): [%s] mod [%s] = %s "%(q,q,obj.remainder(q,q))
print "modular_inverse: [%s] * [%s] mod [%s] = 1 [%s]"%(p,valuemi2[0],q,valuemi2[1])
sum1 = obj.add(a,b); quotient1 = obj.quotient(sum1,c)
### HERE THE PROBLEM IS CLEAR
print "[(a+b)/c] = ",quotient1
smadd1 = a+b
print "smadd1 ",smadd1
And the output:
>>>
add: [x**14 + x**1 + x**0] + [x**6 + x**2 + x**1] = x**14 + x**6 + x**2 + x**0
add: [x**2 + x**1 + x**0] + [x**3 + x**1 + x**0] = x**3 + x**2
quotient (max(a,b)/min(a,b): [x**14 + x**1 + x**0] / [x**6 + x**2 + x**1] = x**7 + x**6 + x**5 + x**3 + x**1
remainder (max(a,b) mod min(a,b)): [x**14 + x**1 + x**0] mod [x**6 + x**2 + x**1] = x**2 + x**1 + x**0
quotient (max(a,b)/min(a,b): [x**2 + x**1 + x**0] / [x**3 + x**1 + x**0] = 0
remainder (max(a,b) mod min(a,b): [x**2 + x**1 + x**0] mod [x**3 + x**1 + x**0] = x**2 + x**1 + x**0
quotient (max(a,b)/min(a,b): [x**12 + x**1] / [x**12 + x**1] = x**0
remainder (max(a,b) mod min(a,b): [x**12 + x**1] mod [x**12 + x**1] = 0
[(a+b)/c]*d = x**14 + x**12 + x**9 + x**1
smadd1 x**14 + x**1 + x**0x**6 + x**2 + x**1
>>>
So as you can see by `smadd1` I need to add these 2 using + and not just
concatenate. Also, I'd like to know if I'll need to use an S-expression tree
in this situation.
EDIT:
Multiply() that was working but isn't now:
def __mul__(self,other):
"""
__multiply__ is the special method for overriding the - operator
returns product of 2 polynomials in gf2; self,other are values 10110011...
"""
self = int(str(self),2)
bitsa = reversed("{0:b}".format(self))
g = [(other<<i)*int(bit) for i,bit in enumerate(bitsa)]
return gf2infix(self.outFormat(reduce(lambda x,y: x^y,g)))
It's original form was:
def multiply(self,a,b):
"""a,b are GF(2) polynomials like x**7 + x**3 + x**0... ; returns product of 2 polynomials in gf2"""
a = self.prepBinary(a); b = self.prepBinary(b)
bitsa = reversed("{0:b}".format(a))
g = [(b<<i)*int(bit) for i,bit in enumerate(bitsa)]
return self.outFormat(reduce(lambda x,y: x^y,g))
* * *
Disregard that issue with `multiply()`, I fixed it. The line that was changed
was:
bitsa = reversed("{0:b}".format(self.bin))
and the line before that was taken out.
Answer: It looks like you are confusing two concepts: strings that represent finite
field polynomials and objects of the class gf2pim, which also represents
finite field polynomials. You should instantiate a gf2pim object for each
polynomial you wish to manipulate, and the operations on gf2pim objects should
return other gf2pim objects. Currently you are trying to use the + operator on
2 strings, which is why your `__add__` method is not being called. There are
also several other problems with your definition of the gf2pim class. I've
refactored your code a bit below, although it is still not perfect. I also
left out the division methods, but I think what I did should get you on the
right track. See section 3.4.8 of this
[link](http://docs.python.org/2/reference/datamodel.html) for more special
method names for operator overloading.
import re
class gf2pim(object):#Your classes should generally inherit from object
def __init__(self, binary):
'''__init__ is a standard special method used to initialize objects. Here __init__
will initialize a gf2pim object based on a binary representation.'''
self.bin = binary
@classmethod
def from_string(cls, string):
return cls(cls._string_to_binary(string))
def to_string(self):
raw = "{0:b}".format(self.bin); raw = str(raw[::-1]); g = [] #reverse binary string for enumeration
g = [i for i,c in enumerate(raw) if c == '1']
processed = "x**"+" + x**".join(map(str, g[::-1]))
if len(g) == 0: return 0 #return 0 if list empty
return processed #returns result in gf(2) polynomial form
@classmethod
def id(cls, lst):
"""returns modulus 2 (1,0,0,1,1,....) for input lists"""
return [int(lst[i])%2 for i in range(len(lst))]
@classmethod
def _list_to_int(self, lst):
"""converts list to integer for later use"""
result = self.id(lst)
return int(''.join(map(str,result)))
@classmethod
def _string_to_list(cls, string):
"""performs regex on raw string and converts to list"""
c = [int(i.group(0)) for i in re.finditer(r'\d+', string)]
return [1 if x in c else 0 for x in xrange(max(c), -1, -1)]
@classmethod
def _string_to_binary(cls, string):
"""converts to base 2; bina,binb are binary values like 110100101100....."""
x = cls._string_to_list(string)
a = cls._list_to_int(x)
bina = int(str(a),2)
return bina #
def __add__(self,other):
"""
__add__ is another special method, and is used to override the + operator. This will only
work for instances of gf2pim and its subclasses.
a,b are GF(2) polynomials like x**7 + x**3 + x**0 ...; returns binary string"""
return gf2pim(self.bin^other.bin)
def __sub__(self,other):
"""
__sub__ is the special method for overriding the - operator
same as addition in GF(2)"""
return self.__add__(other)
def __str__(self):
return self.to_string()
if __name__ == '__main__':
a = gf2pim.from_string("x**14 + x**1 + x**0")
b = gf2pim.from_string("x**6 + x**2 + x**1")
smadd1 = a+b
print "smadd1 ",smadd1
|
How to set a date restriction for returned events in Google Calendar and put them in order - Python
Question: Ok, so I've seen similar questions to this, but most of them are in regards to
different languages, none seem to be within the realm of Python. I've also
searched the documentation on Google for the different options with listing
events in a Calendar. There is no method listed by Google to restrict the
results by order of state date, only start time.
So, I was hoping someone could help me with a way to retrieve events from a
calendar get them in order, and at the same time, restrict the results to a
specific date. Preferably I'd like to not get any events farther than 30 days
out and order them by start time and date. My guess, while I'm not sure of the
best way to do it, would be to get all the events, put them in a list or
dictionary maybe, then maybe enumerate that list or dictionary into the order
I want and then do some if statements to remove anything from the list or
dictionary that's farther than 30 days out. Below is the code I'm using and
then the response I get in the terminal. As you can see I get dates that are a
year out and they're all out of order. Oh and just so you aren't confused,
this code is pulling from multiple parts of my overall calendar in Google.
Additionally, you'll notice I limited the results to a maximum of 5 to make
testing easier so I wasn't returning 100's of results.
try:
# ----- Facebook Birthday Calendar -----
page_token = None
interval = 0
while True:
events = service.events().list(calendarId='my calendar id', pageToken=page_token).execute()
for event in events['items']:
if interval < 5:
dt = dateutil.parser.parse(event['start']['date'])
print event['summary'], dt.strftime('%d %m %Y')
page_token = events.get('nextPageToken')
interval += 1
else:
break
if not page_token:
break
print "-----"
# ----- My Gmail Calendar -----
page_token_two = None
interval_two = 0
while True:
events_two = service.events().list(calendarId='my calendar id', pageToken=page_token_two).execute()
for event_two in events_two['items']:
if interval_two < 5:
dt_two = dateutil.parser.parse(event_two['start']['dateTime'])
dstime_two = dateutil.parser.parse(event_two['start']['dateTime'])
detime_two = dateutil.parser.parse(event_two['end']['dateTime'])
print event_two['summary'] + " " + dt_two.strftime('%d %m %Y') + " " + dstime_two.strftime('%H%M') + "-" + detime_two.strftime('%H%M')
# print event_two['summary'], dt_two.strftime('%d %m %Y')
page_token_two = events_two.get('nextPageToken')
interval_two += 1
else:
break
if not page_token_two:
break
print "-----"
# ----- US Holidays Calendar -----
page_token_three = None
interval_three = 0
while True:
events_three = service.events().list(calendarId='my calendar id', pageToken=page_token_three).execute()
for event_three in events_three['items']:
if interval_three < 5:
dt_three = dateutil.parser.parse(event_three['start']['date'])
print event_three['summary'], dt_three.strftime('%d %m %Y')
page_token_three = events_three.get('nextPageToken')
interval_three += 1
else:
break
if not page_token_three:
break
print "-----"
Results:
Person's Birthday 17 03 2014
Person's Birthday 20 09 2013
Person's Birthday 10 09 2013
Person's Birthday 17 04 2014
Person's Birthday 29 04 2014
-----
Work 12 04 2013 1430-2330
Work 15 04 2013 1415-2315
Work 22 04 2013 1415-2315
Work 25 04 2013 1405-2305
Work 29 04 2013 0640-0230
-----
Patriot Day 11 09 2012
Thanksgiving 28 11 2013
Groundhog Day 02 02 2014
Memorial Day 28 05 2012
Lincoln's Birthday 12 02 2013
-----
Answer: Python datetime objects are very easy to work with. For instance, you could
collect your event and datetime information into a list of tuples, like this:
all_events = []
for <loop over events from server>:
all_events.append((dateutil.parser.parse(event['start']['date']), event['summary']))
And then you can filter and sort those tuples, using the fact that you can
add/subtract datetimes and get timedelta objects. You can simply compare to a
timedelta object which represents a difference of 30 days:
from datetime import datetime, timedelta
max_td = timedelta(days=30)
now = datetime.now()
# Remove events that are too far into the future
filtered_events = filter(lambda e: e[0] - now <= max_td, all_events)
# Sort events in ascending order of start time
filtered_events.sort()
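As an alternative, the Calendar API can do the restricting and ordering server-side: events().list also accepts timeMin/timeMax (RFC3339 timestamps) and, together with singleEvents=True, orderBy='startTime'. A sketch against the code above ('my calendar id' stays a placeholder, and `service` is the same service object):

    from datetime import datetime, timedelta

    now = datetime.utcnow()
    events = service.events().list(
        calendarId='my calendar id',
        timeMin=now.isoformat() + 'Z',
        timeMax=(now + timedelta(days=30)).isoformat() + 'Z',
        singleEvents=True,
        orderBy='startTime').execute()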
|
Python doesn't detect my unittest
Question: I have following code snippet -
import unittest
class SimpleWidgetTestCase(unittest.TestCase):
def setUp(self):
print 'setup'
def method_test(self):
print 'test method'
def tearDown(self):
print 'tear down'
if __name__ == "__main__":
unittest.main()
Output -
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Answer: test method name should start with `test`. Replace `method_test` to
`test_method`, and try again.
From [unittest
documentation](http://docs.python.org/2/library/unittest.html#basic-example):
> A testcase is created by subclassing unittest.TestCase. The three individual
> tests are defined with methods whose names start with the letters test. This
> naming convention informs the test runner about which methods represent
> tests.
|
Python Relative Path Import from subfolder
Question: first post to SO, so if I'm missing some details, please forgive me.
Is there a way to use relative paths from another subfolder without resorting
to modifying sys.path via os? Eventually this will be run from a cgi webserver
so I'd rather stay away from any -m arguments to python.exe.
I'm using Python 2.7.3 and have a file/directory structure of the following :
| myprog.py
|
+---functions
| myfunctions.py
| __init__.py
|
\---subfolder
mysub.py
In the root, I have a single .py file, called myprog.py :
#file .\myprog.py
from functions import *
hello("Hi from Main")
In the functions folder I have two files, **init**.py, myfunctions.py :
#The File: functions\__init__.py :
from myfunctions import *
#The File: functions\myfunctions.py :
def hello(sometext):
print sometext
And finally, in the subfolder, I have :
#The File: subfolder\mysub.py :
from ..functions import *
hello("Hi From mysubprogram")
The myprog.py executes fine (when running python.exe myprog.py from the parent
folder), printing "Hi From Main", however, the mysub.py (when executed from
the subfolder) keeps putting the error: _ValueError: Attempted relative import
in non-package_
I have tried varying combinations in mysub.py such as from
..functions.myfunctions import * yet none yields the desired result.
I have read a few relevant articles :
[using __init__.py](http://stackoverflow.com/questions/2361124)
[How to import classes defined in
__init__.py](http://stackoverflow.com/questions/582723)
<http://docs.python.org/2/tutorial/modules.html#packages-in-multiple-
directories>
But I just can't figure this out. Oh, and once I get this working, I would like
to remove the import *'s wherever possible; however, I'd rather not have to
put the full path to the hello function each time it's called, so any advice
there, or on cleaning up the __init__.py (using __all__ or in another manner),
would be a bonus.
Thanks Blckknght. EDIT: I also found the below: [Python relative imports for
the billionth time](http://stackoverflow.com/questions/14132789/python-
relative-imports-for-the-billionth-time?rq=1)
Which, if what I'm requesting isn't possible, perhaps I'm asking the wrong
thing. If this is just outright bad practice, is the right way to accomplish
my goal using sys.path or is there something else someone can recommend (like
not calling functions from ../folders) ?
Answer: I think the issue has to do with how you are running the `mysub.py` script.
Python doesn't tend to do well with scripts in packages, since the main script
module is always named `__main__` rather than its usual name.
I think you can fix this by running `mysub` with `python -m subfolder.mysub`,
or by manipulating the `__package__` variable in `mysub.py` (as described by
[PEP 366](http://www.python.org/dev/peps/pep-0366/)). It's not neat,
unfortunately.
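For reference, the __package__ route from PEP 366 looks roughly like this at the top of subfolder/mysub.py. Note the caveat: the package still has to be importable, so this boilerplate does touch sys.path, just in a contained way (a sketch):

    if __name__ == '__main__' and __package__ is None:
        import sys, os
        # make the parent of 'subfolder' importable, then claim our package name
        sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
        import subfolder
        __package__ = 'subfolder'

    from ..functions import *
    hello("Hi From mysubprogram")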
|
replacing only single instances of a character with python regexp
Question: I am trying to replace single `$` characters with something else, and want to
ignore multiple `$` characters in a row, and I can't quite figure out how. I
tried using lookahead:
s='$a $$b $$$c $d'
re.sub('\$(?!\$)','z',s)
This gives me:
'za $zb $$zc zd'
when what I want is
'za $$b $$$c zd'
What am I doing wrong?
Answer: notes, if not using a callable for the replacement function:
* you would need look-ahead because you must not match if followed by `$`
* you would need look-behind because you must not match if preceded by `$`
not as elegant but this is very readable:
>>> def dollar_repl(matchobj):
... val = matchobj.group(0)
... if val == '$':
... val = 'z'
... return val
...
>>> import re
>>> s = '$a $$b $$$c $d'
>>> re.sub('\$+', dollar_repl, s)
'za $$b $$$c zd'
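For completeness, a single pattern combining both assertions also works here:

    >>> re.sub(r'(?<!\$)\$(?!\$)', 'z', s)
    'za $$b $$$c zd'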
|
Disjoint set of records from two pandas DataFrames
Question: Is there an easy way to find the disjoint set of records (what would be left
on each of the two original dataframes that is not included in the resulting
inner join) between two pandas dataframes based on a MultiIndex?
Am I missing something rather obvious or do I have to spend some time
implementing this kind of functionality myself?
I attempted to do this by finding the symmetric difference between the set of
muliIndex keys of the two dataframes, but this has proved difficult. I have
been struggling to get this to work. My other option, which seems like it
might be a bit easer is to add a dummy column of integers that can act as a
different single index that is preserved even after I do the multiIndex merge
so I that I can use the python set operators on this de facto single key.
[Note that this is related to but slightly different than this question
because this merge is not based on a MultiIndex object, but on the values in
columns of the dataframe: [How do I do a SQL style disjoint or set difference
on two Pandas DataFrame
objects?](http://stackoverflow.com/questions/14405975/how-do-i-do-a-sql-style-
disjoint-or-set-difference-on-two-pandas-dataframe-objec) ]
Answer: I think your approach of finding the symmetric difference is the way to go.
In [97]: from numpy import random
In [98]: arrays1 = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
....: ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
In [99]: arrays2 = [['bar', 'baz', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], [
....: 'one', 'one', 'two', 'three', 'one', 'two', 'one', 'three']]
In [100]: tuples1 = zip(*arrays1)
In [101]: tuples2 = zip(*arrays2)
In [102]: index1 = MultiIndex.from_tuples(tuples1, names=['first', 'second'])
In [103]: index2 = MultiIndex.from_tuples(tuples2, names=['first', 'second'])
In [104]: df1 = pd.DataFrame(random.randn(8, 2), index=index1)
In [105]: df2 = pd.DataFrame(random.randn(8, 2), index=index2)
In [106]: df1
Out[106]:
0 1
first second
bar one 0.613378 -0.400247
baz one -3.005834 0.004879
two 0.066539 -0.289100
three -0.020099 0.644226
foo one -0.461458 -1.621812
two 0.286655 0.110588
qux one 0.363648 -0.271281
three 1.707787 -1.832602
In [107]: df2
Out[107]:
0 1
first second
bar one -1.010482 -0.023373
baz one -0.040335 1.553905
two -0.080283 -0.571686
three -0.985722 -0.795481
foo one 0.623122 2.124316
two -0.493333 -0.343462
qux one -1.346753 -1.343945
three -0.053497 -0.382402
In [108]: sym_diff = (df1.index - df2.index).union(df2.index - df1.index)
In [109]: sym_diff
Out[109]:
MultiIndex
[(u'baz', u'three'), (u'qux', u'three')]
I'm not sure why there isn't a symmetric difference method on MultiIndex.
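And to pull back the actual rows rather than just the keys, sticking with the same old-style index arithmetic (a sketch):

    In [110]: left_only = df1.reindex(df1.index - df2.index)   # rows only in df1
    In [111]: right_only = df2.reindex(df2.index - df1.index)  # rows only in df2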
|
python inside vim to obtain file list from current directory
Question: How would I use python script to communicate with vim..
Using ultisnips plugin, I have option to include shell script or python script
inside snippet definition, using `!p` for python for example. Now, what I am
trying to do is to get list of files in current directory and put them in
between `<files>` tag. something like this.
snippet lsf
!p
import glob
cwd = vim.eval("getcwd()") # or maybe vim.eval("expand('%:h')")
snip.rv = "\n".join([ "\t<files>%s</files>" % file for file in glob(cwd + "/*")])
endsnippet
Dont pay attention to first and last lines as they are syntax for creating a
snippet using ultisnips, what counts is the whole script I have in syntax
highlighting.
but it throws errors. where I might be wrong, any suggestion?
Answer: You have simply imported the glob module not the glob function and have tried
calling the module as a function.
!p
from glob import glob #Change this line as shown
cwd = vim.eval("getcwd()") # or maybe vim.eval("expand('%:h')")
snip.rv = "\n".join([ "\t<files>%s</files>" % file for file in glob(cwd + "/*")])
This is at least the initial problem.
|
Saving data in Python without a text file?
Question: I have a python program that just needs to save one line of text (a path to a
specific folder on the computer).
I've got it working to store it in a text file and read from it; however, I'd
much prefer a solution where the python file is the only one.
And so, I ask: is there any way to save text in a python program even after
its closed, without any new files being created?
**EDIT:** I'm using py2exe to make the program an .exe file afterwards: maybe
the file could be stored in there, and so it's as though there is no text
file?
Answer: You can save the file name in the Python script and modify it in the script
itself, if you like. For example:
import re,sys
savefile = "widget.txt"
x = input("Save file name?:")
lines = list(open(sys.argv[0]))
out = open(sys.argv[0],"w")
for line in lines:
if re.match("^savefile",line):
line = 'savefile = "' + x + '"\n'
out.write(line)
This script reads itself into a list then opens itself again for writing and
amends the line in which `savefile` is set. Each time the script is run, the
change to the value of `savefile` will be persistent.
I wouldn't necessarily recommend this sort of self-modifying code as good
practice, but I think this may be what you're looking for.
|
Python3 multipartmime email (text, email, and attachment)
Question: I'm creating some emails in Python and I'd like to have HTML, text, and an
attachment. My code is 'working', though its outputs are shown by Outlook as
EITHER HTML or text, while showing the other 'part' (HTML or plain text) as an
attachment. I'd like to have the robustness of both HTML and text versions
along with the file attachment.
Is there a fundamental limitation or am I making a mistake?
#!/usr/bin/env python3
import smtplib,email,email.encoders,email.mime.text,email.mime.base
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# me == my email address
# you == recipient's email address
me = "[email protected]"
you = "[email protected]"
# Create message container - the correct MIME type is multipart/alternative.
msg = MIMEMultipart('mixed')
msg['Subject'] = "msg"
msg['From'] = me
msg['To'] = you
# Create the body of the message (a plain-text and an HTML version).
text = "Hi\nThis is text-only"
html = """\
<html> This is email</html>
"""
part1 = MIMEText(text, 'plain')
part2 = MIMEText(html, 'html')
#attach an excel file:
fp = open('excelfile.xlsx', 'rb')
file1=email.mime.base.MIMEBase('application','vnd.ms-excel')
file1.set_payload(fp.read())
fp.close()
email.encoders.encode_base64(file1)
file1.add_header('Content-Disposition','attachment;filename=anExcelFile.xlsx')
# Attach parts into message container.
# According to RFC 2046, the last part of a multipart message, in this case
# the HTML message, is best and preferred.
msg.attach(part2)
msg.attach(part1)
msg.attach(file1)
composed = msg.as_string()
fp = open('msgtest.eml', 'w')
fp.write(composed)
fp.close()
Answer: I found this has in fact been answered. Strange how the search feature is less
effective than the 'related' boxes.
[Python: Sending Multipart html emails which contain embedded
images](http://stackoverflow.com/questions/920910/python-sending-multipart-
html-emails-which-contain-embedded-images?lq=1)
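For reference, the core of the fix there is to nest a multipart/alternative part (holding the plain-text and HTML bodies) inside the multipart/mixed container, so a client picks one body to render and still treats the file as an attachment. Sketched against the code above:

    msg = MIMEMultipart('mixed')
    body = MIMEMultipart('alternative')
    body.attach(part1)  # text/plain first...
    body.attach(part2)  # ...HTML last (preferred, per RFC 2046)
    msg.attach(body)    # one body with two renderings
    msg.attach(file1)   # the attachment stays at the mixed level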
|
Does the python code executes in order
Question: I am creating a file and then doing a diff on it.
I want to do the diff on the file which is created in the previous step, but I
get an error that the file doesn't exist.
This is my code:
os.popen("mysqldump --login-path=server1_mysql -e --opt --skip-lock-tables --skip-extended-insert -c %s > %s.sql" % (database, filename))
os.popen("diff %s %s > %s" % (weekly, filename, filename+".PATCH"))
Answer: [`os.popen` is deprecated. Use the subprocess
module](http://docs.python.org/2/library/subprocess.html). `os.popen` starts
the command asynchronously and only blocks when you read from the returned
pipe, so the dump file may not be complete (or even exist) by the time the
`diff` runs. `subprocess.call` will block the main process until the command
is finished. You should inspect the returncode, `retval`, in case there was an
error while executing the `mysqldump` command; in that case, you can not
continue with the `diff`:
import subprocess
import shlex
with open("{f}.sql".format(f=filename), 'w') as fout:
retval = subprocess.call(
shlex.split(
"""mysqldump --login-path=server1_mysql -e --opt --skip-lock-tables
--skip-extended-insert -c %s""" % (database, )),
stdout=fout)
if not retval == 0:
raise Exception('Error executing command: {r}'.format(r=retval))
else:
with open("{f}.PATCH".format(f=filename), 'w') as fout:
retval = subprocess.call(
shlex.split("diff {w} {f}".format(w=weekly, f=filename)),
stdout=fout)
|
Using Python in vimscript: How to export a value from a python script back to vim?
Question: I'm struggling with Python in vim.
I still haven't found out how I can get a value from a python script (in a
vim function) back to vim, e.g.
function! myvimscript()
python << endpython
import vim, random, sys
s = vim.eval("mylist")
# do operation with variable "s" in python
endpython
" import variable "s" from above
" do operation with "s" in vimscript
endfunction
1) How can I use `"s"` again in vim (how can I import `"s"` from the python
code back to vim)?
I also can't figure out how to use vim.current.buffer with a selection.
function! myvimscript()
let startline = line("'<")
let endline = line("'>")
python << endpython
start = vim.eval("startline")
end = vim.eval("endline")
cb = vim.current.buffer
l = cb[start:end]
endpython
endfunction
2) How can I assign the dynamic value `"start"` and `"end"` to `"l"`
Answer: First of all, please define your function name starting with an uppercase
letter.
Here is an example covering your two questions; I hope it helps:
function! TestPy() range
let startline = line("'<")
let endline = line("'>")
echo "vim-start:".startline . " vim-endline:".endline
python << EOF
import vim
s = "I was set in python"
vim.command("let sInVim = '%s'"% s)
start = vim.eval("startline")
end = vim.eval("endline")
print "start, end in python:%s,%s"% (start, end)
EOF
echo sInVim
endfunction
First, the output of a small test: I visually selected three lines (3, 4, 5)
and ran `:call TestPy()`.
The output I had:
vim-start:3 vim-endline:5
start, end in python:3,5
I was set in python
Let me explain the output a bit; you may need to read the example function
code to follow the comments below.
vim-start:3 vim-endline:5 # this line was printed in vim, by vim's echo.
start, end in python:3,5 # this line was printed in python, using the vim variables startline and endline; this answers your question two.
I was set in python # this line was printed in vim, but the variable's value was set in python; it answers your question one.
I added a `range` to your function because, without it, vim calls your
function once for each visually selected line; in my example, the function
would be executed 3 times (for lines 3, 4, 5). With `range`, vim handles the
visual selection as a single range. That is sufficient for this example; if
your real function does something else, you could remove the `range`.
With `range`, it is better to use `a:firstline` and `a:lastline`. I used
`line("'<")` just to keep it the same as your code.
**EDIT** with list variable:
check this function:
function! TestPy2()
python << EOF
import vim
s = range(10)
vim.command("let sInVim = %s"% s)
EOF
echo type(sInVim)
echo sInVim
endfunction
if you call it, the output is:
3
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
the "3" means type list (check type() function). and one line below is the
string representation of list.
|
Python OpenCV extremely high CPU usage after 10 second runtime
Question: I'm currently doing a project where I'm building an autonomous driving car. So
far I have sorted out the image processing parts as well as training the SVM
(libSVM). I'm getting the video feed from an IP camera but even using a video
file I'm encountering this same problem. After a few seconds of runtime CPU
usage spikes to 100% and the frame rate drops to below 1FPS. At first I
thought it could be disk I/O, but created a ramdisk and the problem still
persisted. Can someone help me find the problem in my code?
#!/usr/bin/env python
import socket
import os
import cv2.cv as cv
import cv as _cv
import numpy
import time
import serial
from subprocess import call
TCP_IP = '192.168.1.101'
TCP_PORT = 23
BUFFER_SIZE = 1
serial = serial.Serial('/dev/ttyACM0',9600)
def run():
prevDirection = 'e'
directions = { 0:'d',
1:'w',
2:'w',
3:'w',
4:'a',
5:'a',
6:'a',
7:'a',
8:'a',
9:'a',
10:'d',
11:'d',
12:'d',
13:'d',
14:'d',
15:'d',
16:'d',
17:'d',
18:'d',
19:'d',
20:'d',
21:'d',
22:'d',
23:'d',
24:'e',
25:'e',
26:'e',
27:'e'
}
vidFile = cv.CaptureFromFile('vvv.mp4')
hist = cv.CreateHist([180], cv.CV_HIST_ARRAY, [(0,180)], 1)
#selection = (270,460,100,20)
selection = (1,1,100,20)
framesToDrop = 5;
while True:
frame = None
frame = cv.QueryFrame(vidFile)
cv.ShowImage("selected", frame)
cv.Smooth(frame, frame, cv.CV_BLUR, 5, 5)
sub = cv.GetSubRect(frame, selection)
cv.ShowImage("selected", sub)
cv.Smooth(sub, sub, cv.CV_BLUR, 5, 5)
_hsv = cv.CreateImage(cv.GetSize(sub), 8, 3)
cv.CvtColor(sub, _hsv, cv.CV_BGR2HSV)
_hue = cv.CreateImage(cv.GetSize(sub), 8, 1)
cv.Split(_hsv, _hue, None, None, None)
# Convert to HSV and keep the hue
hsv = cv.CreateImage(cv.GetSize(frame), 8, 3)
cv.CvtColor(frame, hsv, cv.CV_BGR2HSV)
hue = cv.CreateImage(cv.GetSize(frame), 8, 1)
cv.Split(hsv, hue, None, None, None)
# Compute back projection
backproject = cv.CreateImage(cv.GetSize(frame), 8, 1)
cv.CalcArrBackProject([hue], backproject, hist)
x,y,w,h = selection
cv.Rectangle(frame, (x,y), (x+w,y+h), (255,255,255))
cv.CalcArrHist( [_hue], hist, 0)
(_, max_val, _, _) = cv.GetMinMaxHistValue(hist)
threshold=100
colour=255
cv.Threshold(backproject,backproject, threshold,colour,cv.CV_THRESH_BINARY)
cv.Smooth(backproject, backproject, cv.CV_BLUR, 5, 5)
cv.Dilate(backproject,backproject,None,5)
cv.Smooth(backproject, backproject, cv.CV_BLUR, 5, 5)
cv.Dilate(backproject,backproject,None,5)
cv.Smooth(backproject, backproject, cv.CV_BLUR, 5, 5)
cv.Dilate(backproject,backproject,None,5)
cv.Rectangle(backproject, (0,0), (640,280), cv.RGB(0, 0, 0), -1)
SVMPrediction = predict(backproject)
moveDirection = directions[SVMPrediction]
#sendCommandToCarWifi(moveDirection,prevDirection)
sendCommandToCarSerial(moveDirection,prevDirection)
#print moveDirection
cv.ShowImage("Live", frame)
cv.ShowImage("Backproject", backproject)
c = cv.WaitKey(7) % 0x100
if c == 27:
break
def sendCommandToCarSerial(direction,prevDirection):
serial.write(direction)
def sendCommandToCarWifi(direction,prevDirection):
if(prevDirection != direction):
print'Sent: ' , direction
prevDirection = direction
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
s.send(direction)
s.close()
return prevDirection
def predict(inputFrame):
resizedImage = cv.CreateImage((40,30),inputFrame.depth, inputFrame.nChannels)
cv.Resize(inputFrame, resizedImage)
createBinary(resizedImage, 0)
trash = os.system('/tmp/ramdisk/svm-predict /tmp/ramdisk/currentImageBinaryData.txt /tmp/ramdisk/currentImageBinaryData.txt.model /tmp/ramdisk/output')
output = open('/tmp/ramdisk/output','r')
result = output.read()
output.close()
return int(result)
def createBinary(image, number):
threshold=100
colour=255
cv.Threshold(image,image, threshold,colour,cv.CV_THRESH_BINARY)
width,height = cv.GetSize(image)
pixelNum = 1
pixelValues = []
for i in range(height):
for j in range(width):
pixel = image[i,j]
value = 2
if(pixel == 0.0):
value = 0
if(pixel == 255.0):
value = 1
temp = ("%s:%s") % (pixelNum, value)
pixelNum += 1
pixelValues.append(temp)
f = open('/tmp/ramdisk/currentImageBinaryData.txt','w')
numberString = ('%d ') % (number)
f.write(numberString)
t = ' '.join(pixelValues)
f.write(t)
f.write(' \n')
f.flush()
f.close()
if __name__=="__main__":
run()
cv.DestroyAllWindows()
Answer: You should profile your code with cProfile and see what's chewing up
your resources. The official docs on profiling are here:
<http://docs.python.org/2/library/profile.html>
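For example (assuming `run()` above is the entry point), either of these
should work:
# from the command line:
#   python -m cProfile -s cumulative yourscript.py
# or inside the script:
import cProfile
cProfile.run('run()', sort='cumulative')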
|
Specify the return type for a ctypes call (in python) to a fortran function that returns an array of doubles
Question: I'm trying to use the ctypes module to call, from within a python program, a
(fortran) library of linear algebra routines that I have written. I have
successfully imported the library and can call my _subroutines_ and functions
that return a single value. My problem is calling functions that return an
array of doubles. I can't figure out how to specify the return type. As a
result, I get segfaults whenever I call a function like that.
Here's a minimum working example, a routine to take the cross product between
two 3-vectors:
!****************************************************************************************
! Given vectors a and b, c = a x b
function cross_product(a,b)
real(dp) a(3), b(3), cross_product(3)
cross_product = (/a(2)*b(3) - a(3)*b(2), &
a(3)*b(1) - a(1)*b(3), &
a(1)*b(2) - a(2)*b(1)/)
end function cross_product
Here's my python script:
#!/usr/bin/python
from ctypes import byref, cdll, c_double
testlib = cdll.LoadLibrary('/Users/hart/codes/celib/trunk/libutils.so')
cross = testlib.vector_matrix_utilities_mp_cross_product_
a = (c_double * 3)()
b = (c_double * 3)()
a[0] = c_double(0.0)
a[1] = c_double(1.0)
a[2] = c_double(2.0)
b[0] = c_double(1.0)
b[1] = c_double(3.0)
b[2] = c_double(2.0)
print a,b
cross.restype = c_double * 3
print cross.restype
print cross(byref(a),byref(b))
And here's the output:
goku:~/python/ctypes> ./test_example.py
<__main__.c_double_Array_3 object at 0x10399b710> <__main__.c_double_Array_3 object at 0x10399b7a0>
<class '__main__.c_double_Array_3'>
Segmentation fault: 11
goku:~/python/ctypes>
I've tried different permutations for the line "cross.restype = ..." but I
can't figure out what should actually go there. Thanks for reading this
question. --Gus
Answer: The compiler may return a pointer to the array, or an array
descriptor... So, when mixing languages, you should always use `bind(C)`,
except when the wrapper specifically supports Fortran. And (not surprisingly)
`bind(C)` functions cannot return arrays. You could theoretically allocate
the array and return a `type(c_ptr)` to it, but how would you deallocate it
after use?
So my suggestion is to use a subroutine that fills an output argument.
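A sketch of what that could look like; the Fortran signature in the comment is
an assumption about how you would rewrite `cross_product` with `bind(C)`:
# Assumed Fortran side:
#   subroutine cross_product(a, b, c) bind(C, name='cross_product')
#     use iso_c_binding
#     real(c_double), intent(in)  :: a(3), b(3)
#     real(c_double), intent(out) :: c(3)
from ctypes import cdll, c_double

lib = cdll.LoadLibrary('/Users/hart/codes/celib/trunk/libutils.so')
a = (c_double * 3)(0.0, 1.0, 2.0)
b = (c_double * 3)(1.0, 3.0, 2.0)
c = (c_double * 3)()          # output buffer, filled by Fortran
lib.cross_product(a, b, c)    # ctypes arrays are passed as pointers
print list(c)                 # [-4.0, 2.0, -1.0] for these inputs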
|
import MySQLdb ImportError
Question: I think I installed MySQL correctly. Almost positive, except for the fact that
it isn't working
$ python
>>> import MySQLdb
returns
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "MySQLdb/__init__.py", line 19, in <module>
import _mysql
ImportError: dlopen(/Users/msmith/Documents/dj/mysite/venv/lib/python2.7/site-packages/MySQL_python-1.2.4b4-py2.7-macosx-10.8-intel.egg/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib
Referenced from: /Users/msmith/Documents/dj/mysite/venv/lib/python2.7/site-packages/MySQL_python-1.2.4b4-py2.7-macosx-10.8-intel.egg/_mysql.so
Reason: image not found
Does anyone have any ideas on how to fix this? Thanks
Answer: Create a symbolic link to the library:
sudo ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib
or add this path to your profile
export DYLD_LIBRARY_PATH=/usr/local/mysql-5.5.15-osx10.6-x86/lib/:$DYLD_LIBRARY_PATH
|
Cannot connect to mssql db using pymssql
Question: I have FreeTDS installed and configured correctly. My freetds.conf file as
this appended to the end:
[myserver]
host = myserver
port = 1433
tds version = 7.0
And running the following command gives me a SQL prompt:
tsql -S myserver -U username
My python script is extremely minimal, in an attempt to successfully connect
to the database:
#! /path/to/python/bins
import pymssql
conn = pymssql.connect(host='myserver', user='username', password='password', database='database', as_dict=True)
conn.close()
But when I run it I receive the following error:
Traceback (most recent call last):
File "./test.py", line 5, in <module>
conn = pymssql.connect(host='myserver', user='username', password='password', database='database', as_dict=True)
File "pymssql.pyx", line 456, in pymssql.connect (pymssql.c:6017)
pymssql.InterfaceError: Connection to the database failed for an unknown reason.
What could cause this? From what I've searched, most people who run into this
problem have the freetds.conf file configured incorrectly; however, I can
successfully connect (with tsql). Does anyone know what I'm doing wrong, or
how I can fix this?
Answer: I have just looked through the `pymssql` code, and most likely you have a
problem with the MSSQL driver.
<https://code.google.com/p/pymssql/source/browse/pymssql.pyx?name=1.9.908#456>
Try configuring logging in FreeTDS to see "unknown reason": see
<http://freetds.schemamania.org/userguide/logging.htm>
([mirror](http://web.archive.org/web/20150403090737/http://freetds.schemamania.org/userguide/logging.htm))
Basically:
$ export TDSDUMP=/tmp/freetds.log
|
Understanding routing error (Python)
Question: I'm trying to make a program work that calls files in a specific folder.
However, for some reason, I keep getting an error. I'll post the relevant code
and error message.
Code:
def objmask(inimgs, inwhts, thresh1='20.0', thresh2='2.0', tfdel=True,
xceng=3001., yceng=3001., outdir='.', tmpdir='tmp'):
# initial detection of main galaxy with SExtractor for re-centering purposes
if outdir!='.':
if not os.path.exists(outdir):
os.makedirs(outdir)
if not os.path.exists(tmpdir):
os.makedirs(tmpdir)
for c in range(np.size(inimgs)):
print 'Creating Aperture Run:', c
subprocess.call(['sex',inimgs[c],'-c','/home/vidur/se_files/gccg.sex',
'-CATALOG_NAME','/home/vidur/se_files/_tmp_seobj'+str(c)+'.cat',
'-PARAMETERS_NAME','/home/vidur/se_files/gccg_ell.param',
'-FILTER_NAME','/home/vidur/se_files/gccg.conv',
'-STARNNW_NAME','/home/vidur/se_files/gccg.nnw',
'-CHECKIMAGE_TYPE','APERTURES',
'-VERBOSE_TYPE','QUIET',
'-DETECT_THRESH',thresh1,
'-ANALYSIS_THRESH',thresh2,
'-WEIGHT_IMAGE',inwhts[c]],shell=True
)
Error:
Creating Aperture Run: 0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "fetch_swarp2.py", line 110, in objmask
'-WEIGHT_IMAGE',inwhts[c]],
File "/usr/lib/python2.7/subprocess.py", line 493, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I have a folder named se_files in my home directory. Its path is
/home/username/se_files. This is on Ubuntu, 12.04 32-bit.
Answer: The problem is that `subprocess` can't find `sex`, presumably because it's
not on your PATH.
The [`call`](http://docs.python.org/2/library/subprocess.html#subprocess.call)
function won't ever raise an exception just because the program it ran
returned failure; it simply returns that failure to you as a number. It only
raises if it can't run the program in the first place.
You can see the difference pretty easily:
>>> import subprocess
>>> subprocess.call(['nosuchprogram'])
[long traceback skipped]
FileNotFoundError: [Errno 2] No such file or directory: 'nosuchprogram'
>>> subprocess.call(['ls', 'nosuchfile'])
ls: nosuchfile: No such file or directory
1
The first one raises; the second returns `1`.
So, put the absolute path to `sex` in the call, or make sure it's installed
properly, or make sure your script is running with the right environment
(e.g., maybe you added `/opt/sex/bin` to your `PATH` only in interactive
scripts, or you only added it for your own user but you're trying to run the
script as `nobody`, or…).
|
no module named requests
Question: I will first state I have searched for this problem, and found the exact same
problem here ( [ImportError: No module named
'requests'](http://stackoverflow.com/questions/16265368/importerror-no-module-
named-requests) ) but that hasn't helped me.
I am using macports on osx (mountain lion). I have successfully installed and
run a few scripts without any issues.
From the MacPorts page, I have installed `requests` via the method it
detailed, and as far as I can tell, it has installed successfully:
daves-mbp:~ Dave$ port search requests
arpwatch @2.1a15 (net)
Monitor ARP & RARP requests
http_ping @29jun2005 (net, www)
Sends HTTP requests every few seconds and times how long they take
httping @2.0 (net, www)
Ping-like tool for http-requests
py-requests @1.2.3 (python, devel)
Python HTTP for Humans.
py26-requests @1.2.3 (python, devel)
Python HTTP for Humans.
py27-requests @1.2.3 (python, devel)
Python HTTP for Humans.
py31-requests @1.2.3 (python, devel)
Python HTTP for Humans.
py32-requests @1.2.3 (python, devel)
Python HTTP for Humans.
py33-requests @1.2.3 (python, devel)
Python HTTP for Humans.
webredirect @0.3 (www)
small webserver which redirects all requests
Found 10 ports.
I have python 2.7, so I installed it via:
daves-mbp:~ Dave$ sudo port install py27-requests
Password:
---> Computing dependencies for py27-requests
---> Fetching archive for py27-requests
---> Attempting to fetch py27-requests-1.2.3_0.darwin_12.noarch.tbz2 from http://jog.id.packages.macports.org/macports/packages/py27-requests
---> Attempting to fetch py27-requests-1.2.3_0.darwin_12.noarch.tbz2.rmd160 from http://jog.id.packages.macports.org/macports/packages/py27-requests
---> Installing py27-requests @1.2.3_0
---> Activating py27-requests @1.2.3_0
---> Cleaning py27-requests
---> Updating database of binaries: 100.0%
---> Scanning binaries for linking errors: 100.0%
---> No broken files found.
daves-mbp:~ Dave$
I think that looks good. Using MacPorts, is there something else I have to do
before using it? I thought the `python setup.py install` (in the
aforementioned post) might solve my problem; however, when I search for
`requests` in my filesystem, the only reference is buried in a path (one that
MacPorts says is a store for user-installed modules), and besides, there is no
setup.py within that directory or its parent.
I have restarted my terminal window (that fixed another problem earlier), but
it made no difference here.
Any help is appreciated
edit:
`which python` reports /opt/local/bin/python
The first lines of the python interpreter DID report:
daves-mbp:~ Dave$ python
Python 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple
Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
but now I have done something and it's responding with new errors:
daves-mbp:~ Dave$ python
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 548, in <module>
main()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 530, in main
known_paths = addusersitepackages(known_paths)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 266, in addusersitepackages
user_site = getusersitepackages()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 241, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 231, in getuserbase
USER_BASE = get_config_var('userbase')
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 516, in get_config_var
return get_config_vars().get(name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 449, in get_config_vars
import re
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_parse.py", line 17, in <module>
from sre_constants import *
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_constants.py", line 18, in <module>
from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT
Answer: In trying to sort this out, I have broken python, and eventually I got it
going again.
I think initially I had not run one of the `port select --set...` commands.
Once I realised this might be the case, I did so, but that produced the errors
at the top. MAXREPEAT, a circular reference perhaps? No idea.
I have read here ([macports didn't place python_select in
/opt/local/bin](http://stackoverflow.com/questions/6152765/macports-didnt-
place-python-select-in-opt-local-bin)) and here ([How do I uninstall python
from OSX Leopard so that I can use the MacPorts
version?](http://stackoverflow.com/questions/118813/how-do-i-uninstall-python-
from-osx-leopard-so-that-i-can-use-the-macports-versio)) about the `--set`
command not working and to try `sudo port select python python26` (i used
python27) instead.
I checked the PATH and python didn't appear, so I updated that as well.
I got my python interpreter back and, lo and behold, `import requests` now
works.
I think at the end of it all, there were two errors:
1. I used `--set` instead of the newer command, and
2. my path wasn't set
edit: Actually, after more debugging, I found the error was on the first line
of my script: the shebang line pointed at the default Apple python, which
doesn't include the module. Once I updated the shebang line, it worked.
|
Releasing a python package - should you include doc and tests?
Question: So, I've released a small library on pypi, more as an exercise (to "see how
it's done") than anything else.
I've uploaded the documentation on readthedocs, and I have a test suite in my
git repo.
Since I figure anyone who might be interested in running the test will
probably just clone the repo, and the doc is already available online, I
decided not to include the doc and test directories in the released package,
and I was just wondering if that was the "right" thing to do.
I know answers to this question will be rather subjective, but I felt it was a
good place to ask in order to get a sense of what the community considers to
be the best practice.
Answer: It is not required, but it is recommended, to include documentation as well
as unit tests in the package.
**Regarding documentation:**
Old-fashioned or better to say old-school source releases of open source
software contain documentation, this is a (de facto?) standard (have a look at
GNU software, for example). Documentation is part of the code and should be
part of the release, simply because once you download the source release you
are independent. Ever been on a train somewhere, needing a quick look into the
documentation of module X, but without internet access? And then you realized,
relieved, that the docs are already there, locally.
Another important point in this regard is that the documentation that you
bundle together with the code for sure applies to the code version. Code and
docs are in sync.
One more thing especially regarding Python: you can write your docs using
Sphinx and then build beautiful HTML output based on the documentation source
in the process of installing the package. I have seen various Python packages
doing exactly this.
**Regarding tests:**
Imagine the tests are bundled in the source release and are easy for the user
to run (you should document how to do this). Then, if the user observes a
problem with your code which is not so easy to track down, he can simply run
the unit tests **in his environment** and see if at least those are passing.
If not, you've probably made a wrong assumption when specifying the behavior
of your code, which is good to know about. What I want to say is: it can be
very good for you as a developer if you make it very simple for the user to
execute unit tests.
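For what it's worth, if you do decide to bundle them, the sdist mechanism for
this is a MANIFEST.in file next to setup.py. A sketch, assuming the docs live
in doc/ and the tests in tests/:
include README.rst
recursive-include doc *
recursive-include tests *.py
prune doc/_build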
|
How to find by _id in ming?
Question: I have a mapped class in [ming](http://merciless.sourceforge.net/)
from ming import Session, create_datastore
from ming import schema
from ming.odm import ODMSession
from ming.odm.mapper import MapperExtension
from ming.odm.property import ForeignIdProperty
from ming.odm.property import FieldProperty, RelationProperty
from ming.odm.declarative import MappedClass
import config
bind = create_datastore(config.DATABASE_NAME)
session = Session(bind)
odm_session = ODMSession(doc_session=session)
class Document(MappedClass):
class __mongometa__:
session = odm_session
name = 'document'
_id = FieldProperty(schema.ObjectId)
Now, I want to do a simple query to it as
> Document.query.get(_id="51e46f782b5f9411144f0efe")
But it doesn't work. Documentation is not quite clear about it. I know that in
mongodb shell we have to wrap the id in an ObjectId object, but I can't get it
to work in Python
Answer: You should try the query with ObjectId
from bson.objectid import ObjectId
Document.query.get(_id=ObjectId('51e46f782b5f9411144f0efe'))
With naked pymongo
from bson.objectid import ObjectId
from pymongo import Connection
connection = Connection()
db = connection['lenin']
collection = db.document
collection.find_one({'_id': '51e35ee82e3817732b7bf3c1'}) # returns None
collection.find_one({'_id': ObjectId('51e35ee82e3817732b7bf3c1')}) # returns the object
|
import wx not working in uncompiled scripts
Question: I have installed Python 2.7.5 and wxPython 2.8.12.1 on my new Windows 7
machine, and the 'import wx' statement doesn't work when I try to run the
containing .py script directly from the Windows command prompt or from the
Windows explorer. (It does work in the compiled .pyc file, or if I run the
script from the interactive interpreter using import, or using the python
command at the Windows command prompt.)
The script looks like this:
import wx
print wx.version()
raw_input("Test runs OK - hit Enter to exit")
In the failure case, the output looks like this:
Traceback (most recent call last): File "C:\First Python
Project\src\root\nested\test.py", line 2, in ? print wx.version()
AttributeError: 'module' object has no attribute 'version'
I suspect this has something to do with my wxPython installation, because
'import os' works fine however I run the script.
Thanks for any help. I've looked but can't find this question elsewhere.
Answer: My guess is you may have named one of your scripts "wx.py" in C:\First Python
Project\src\root\nested. If so, you are shadowing wxPython itself. Python will
import your wx.py because it's first on the path, and it won't even try to
import the right one. That's my guess anyway.
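You can confirm this from the interpreter; if the printed path points into
your project instead of the wxPython installation, you've found the culprit:
import wx
print wx.__file__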
|
Create a continuous distribution in python
Question: I am having trouble creating a continuous distribution in python, and it's
really beginning to annoy me. I have read and re-read [this python guide
(scipy
guide)](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous)
and it hasn't helped my problem.
My code reads:
import sys
import scipy.stats
import numpy
def CDF_Random(N,NE,E,SE,S,SW,W,NW,Iterations):
WindDir = [0,45,90,135,180,225,270,315]
Freq = N,NE,E,SE,S,SW,W,NW
mydist = scipy.stats.rv_continuous(#My problem is what to write here)
cdf_rand=mydist.rvs(size=Iterations)
return (cdf_rand)
if __name__ == '__main__':
N = float(sys.argv[1])
NE = float(sys.argv[2])
E = float(sys.argv[3])
SE = float(sys.argv[4])
S = float(sys.argv[5])
SW = float(sys.argv[6])
W = float(sys.argv[7])
NW = float(sys.argv[8])
Iterations = float(sys.argv[9])
numpy.set_printoptions(threshold=Iterations)
sys.stdout.write(str(CDF_Random(N,NE,E,SE,S,SW,W,NW,Iterations)))
As you can see if you read the code, my problem is knowing what to put in the
brackets to create the continuous distribution.
`scipy.stats.rv_continuous(#what to put here)`.
I have tried a lot of different things, mainly the ones suggested in [this
document (scipy
guide)](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous),
like setting my upper and lower range values `a=, b=`, or setting it to a `pdf`
or a `ppf`. I have tried `[arrays]`, using the ones entered on the command
line or ones I wrote into the code itself.
From the command line I run `python C:\Users\...\CDF.py 0.01 0.01
0.01 0.01 0.01 0.93 0.01 0.01 10` and every time I get `RuntimeError: maximum
recursion depth exceeded`. I have tried resetting the recursion limit to
different values with `sys.setrecursionlimit(10000)`, but this didn't work or
crashed python.
So basically what should be entered in the brackets after
`scipy.stats.rv_continuous()` to create a continuous distribution of the
`[array]` called `WindDir` for a given distribution `freq`? I have honestly
had a good look through Google and the stackoverflow website, searching using
keywords, keywords with tags and tags alone and couldn't find a solution.
**Edit 1-Desired outcome** I would like the output to be a real number between
`0,360` or `0,2pi`
Answer: Alright, so in order to use `rv_continuous` you need to provide a probability
density function of some sort. In the example below, I implement a cumulative
distribution function for the given wind direction interval of [0,360). I do
this by interpolating the probability density function between the nearest two
wind directions specified in the input. Note the parameters `a` and `b`
specified in the `rv_continuous` base class constructor: these specify the
minimum and maximum values of the interval under consideration. Try the code
out, and if you have any questions, please ask and I'll try to help clarify.
_Edit_ I've modified the code for python 3, as well as updated the cdf to more
accurately interpolate between the frequencies given at the cardinal
directions.
import scipy.stats
class rvc(scipy.stats.rv_continuous):
def __init__(self, freqs):
super().__init__(a=0,b=359.9999)
self.WindDir = [0.,45.,90.,135.,180.,225.,270.,315.,360.]
self.Freqs = freqs
def _cdf(self, x):
return [self.do_cdf(i) for i in x]
def do_cdf(self, x):
if x < 0: return 0.0
if x >= 360: return 1.0
v = 0.0
for i in range(9):
if x >= self.WindDir[i]:
v += self.Freqs[i]
else:
v += (self.Freqs[i]-self.Freqs[i-1])*(x-self.WindDir[i-1])/45.
break
return v
rv = rvc([0.01,0.01, 0.01, 0.01, 0.01, 0.01, 0.92, 0.01, 0.01])
print(rv.rvs(size=100))
Running this generates output like
[ 305.76400244 271.49390379 285.5514374 280.4422151 302.52158889
273.85068415 278.8377575 287.05260894 270.66219523 301.97131911
277.77725392 293.29633122 273.60832876 278.90703722 271.12590324
288.38285244 299.89362412 276.68015144 278.46429959 304.33532312
313.42248626 272.10226608 311.00385667 284.48822492 298.134523
300.08266465 272.17850056 270.07995773 359.9999 289.13032076
206.58066169 308.95821915 288.998036 280.93451276 294.33703562
278.02828894 307.96335447 292.50172701 313.67335738 291.99562061
275.98551449 307.72620259 283.11307243 309.6052904 283.65619152
305.80365684 303.43088801 314.74811251 302.8390404 288.16580807
299.84487396 302.34636138 291.09465231 310.74393544 279.23446355
309.83353391 292.62761642 273.22324646 291.16193395 298.51520679
299.87754111 280.29779055 276.76741796 285.5186257 301.7074023
274.9771402 280.6619726 276.53202603 289.50757382 313.61213159
98.86469637 298.41091812 83.62474126 290.42538277 306.70218844
270.81939255 290.25089647 305.3277742 278.03965968 280.5979916
307.9492377 284.30184233 307.14788891 283.33779011 270.86398644
297.70610336 277.48672772 300.35147777 271.07308885 304.41450287
312.5640489 273.54296504 311.62527023 298.95246144 275.199183
302.6063864 359.9999 311.25861396 306.83491033 313.52858514]
|
How can I add context to an exception in Python
Question: I would like to add context to an exception like this:
def process(vals):
for key in vals:
try:
do_something(vals[key])
except Exception as ex: # base class. Not sure what to expect.
raise # with context regarding the key that was being processed.
I found a way, but it is uncharacteristically long-winded for Python. Is there
a better way than this?
try:
do_something(vals[key])
except Exception as ex:
args = list(ex.args)
if len(args) > 1:
args[0] = "{}: {}".format(key, args[0])
ex.args = tuple(args)
raise # Will re-throw ValueError with new args[0]
Answer: The first item in `ex.args` is always the message -- if there is any. (Note
for some exceptions, such as the one raised by `assert False`, `ex.args` is an
empty tuple.)
I don't know of a cleaner way to modify the message than reassigning a new
tuple to `ex.args`. (We can't modify the tuple since tuples are immutable).
The code below is similar to yours, except it constructs the tuple without
using an intermediate list, it handles the case when `ex.args` is empty, and
to make the code more readable, it hides the boilerplate inside a context
manager:
import contextlib
def process(val):
with context(val):
do_something(val)
def do_something(val):
# assert False
return 1/val
@contextlib.contextmanager
def context(msg):
try:
yield
except Exception as ex:
msg = '{}: {}'.format(msg, ex.args[0]) if ex.args else str(msg)
ex.args = (msg,) + ex.args[1:]
raise
process(0)
yields a stack trace with this as the final message:
ZeroDivisionError: 0: division by zero
|
Flask instantiation app = Flask()
Question: I intentionally removed `__name__` in `app = Flask(__name__)` and I get this
error:
Traceback (most recent call last):
File "routes.py", line 4, in <module>
app = Flask()
TypeError: __init__() takes at least 2 arguments (1 given)
this is my code from [nettuts](http://net.tutsplus.com/tutorials/python-
tutorials/an-introduction-to-pythons-flask-framework/) and here is my code:
from flask import Flask, render_template
app = Flask()
@app.route('/')
def home():
return render_template('home.html')
@app.route('/about')
def about():
return render_template('about.html')
if __name__ == '__main__':
app.run(debug=True)
My question is: Where is this `__init__` method that takes at least 2 arguments?
Answer: If you understand the concept of class and objects, then `__init__` is the
constructor which initializes an instance of the class. In this case, the
class is Flask and when you do the following, you are initializing the
instance of a Flask object:
app = Flask(__name__)
Now, your question: "Where is this `__init__` method that takes at least 2
arguments?"
That can be explained by the constructor's definition in the Flask source
code:
def __init__(self, import_name, static_path=None, static_url_path=None,
static_folder='static', template_folder='templates',
instance_path=None, instance_relative_config=False):
As you can see above, `self` and `import_name` are the required parameters,
and the rest all have defaults. `self` is needed by Python, even though you
can name it something else; read this blog post by the creator of Python
himself for why: <http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-
stay.html>
|
Multiple linear regression with python
Question: I would like to calculate multiple linear regression with python. I found this
code for simple linear regression
import numpy as np
from matplotlib.pyplot import *
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 3, 4, 4, 5])
n = np.max(x.shape)
X = np.vstack([np.ones(n), x]).T
a = np.linalg.lstsq(X, y)[0]
So, a holds the coefficients, but I don't see what `[0]` means?
And how can I change the code to obtain multiple linear regressions ?
Answer: To extend it to multiple linear regression, all you have to do is create a
multi-dimensional x instead of a one-dimensional x, i.e.,
x = np.array([[1, 2, 3, 4, 5], [4, 5, 6, 7, 8]], np.int32)
<http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html>
As for `[0]`: `np.linalg.lstsq` returns a tuple `(solution, residuals, rank,
singular_values)`, so `[0]` selects the array of fitted coefficients. Within
that array, `a[0]` is the intercept of the linear regression, i.e.,
y = a + bx + error, a[0] = a, a[1] = b
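Putting it together, a sketch with two made-up predictors; each predictor
becomes a column of the design matrix next to the column of ones:
import numpy as np

x1 = np.array([1, 2, 3, 4, 5])
x2 = np.array([4, 2, 6, 1, 8])
y = np.array([2, 3, 4, 4, 5])
X = np.vstack([np.ones(len(y)), x1, x2]).T   # columns: intercept, x1, x2
a = np.linalg.lstsq(X, y)[0]                 # a[0] intercept, a[1], a[2] slopes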
|
same implementation on java and python, two very different running times
Question: My friend and I made a program to do the same thing, his was in java and mine
in python. The question we had to solve was "What is the smallest positive
number that is evenly divisible by all of the numbers from 1 to 20?" We both
got the right answers, and as far as I can tell implemented pretty similarly,
but his one finished in a about three seconds while mine took over a minute.
Can anyone possibly see why this happened? Just to be clear, I know that both
ways aren't the best, but they both worked and what I'm interested in is
seeing why there was such a significant time difference. Thanks!
Java:
for (int i = 1; i > 0; i++) {
for (int j = 1; j < 21; j++) {
if (i % j != 0) {
break;
}
if (j == 20) {
System.out.println("ANSWER: " + i);
System.exit(0);
}
}
}
python:
e=1
while e > 0 :
num =1
while num < 21:
if e % num != 0:
break
num += 1
if num == 21:
print e
break
e += 1
Answer: While your implementation in python looks similar, the indentation is very
important. In the Java version, this block:
if(j == 20){
System.out.println("ANSWER: " + i);
System.exit(0);
}
is inside the second for loop. In the python version, the block:
if num == 21:
print e
break
is outside the second while loop.
|
How to Use unicode with a list or a string in Python
Question: So I have a list of Irish (Gaelic) words that I want to use Unicode with, so
that RDFLib will be able to understand the accents above some of the letters.
I don't know whether to apply the Unicode decoding before the words are in the
list or after. Here is the code I have so far:
sample line in file = `00001740 n 3 eintiteas aonán beith 003 ~ 00001930 n
0000`
def process_file(self):
self.file = open("testing_line_ir.txt", "r")
return self.file
def line_for_loop(self, file):
for line in file:
self.myline = unicode(line, 'utf-8')
for line in self.myline:
............here is where other processes are ran.......
This is giving out the error:
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe1 in position 26: invalid continuation byte
and i have also tried this:
def get_words_list(self, word_part, num_words):
self.word = word_part[3:3 + num_words:1]
self.myword = [unicode(i) for i in self.word]
return self.myword
In this case, 'word' is the list of words ['eintiteas', 'aonán', 'beith'] and
I tried using myword as the encoded list, with the same error as above.
EDIT: Here is the source code where the error occurs; it occurs on the
graph.parse line. The variables passed in, like block1 and namespaces, are
just lines of text:
def compose_printout(self, namespaces, block1, block2, close_rdf):
self.printout += namespaces + block1 + block2 + close_rdf
self.tabfile = StringIO(self.printout)
return self.tabfile
def serialize(self, graph, tabfile):
""" This will serialize with RDFLib """
graph.parse(tabfile, publicID=None, format="xml")
Some of these words are getting added to an RDFLib graph, so any help here
would be great!
Answer: You do not have UTF-8 data. From the exception message I'd say you have
Latin-1 encoded data instead:
>>> print '\xe1'.decode('latin1')
á
You can use the [`codecs.open()`
function](http://docs.python.org/2/library/codecs.html#codecs.open) to create
a file object that returns file data ready-decoded:
import codecs
def process_file(self):
self.file = codecs.open("testing_line_ir.txt", "r", 'latin-1')
return self.file
def line_for_loop(self, file):
for line in file:
# line is *already* unicode
|
Referencing a RegEx Variable
Question: I'm using python to loop through a large list of self-reported locations to
try to match them to their home states. The regex I'm using is:
Basically, I'm trying to find a pattern that looks like `XXXCITYXXX,
[Statecode]`, where _statecode_ is only two letters.
My issue is that I don't know how to reference the varying state code once I
find a matching string. I know in Perl that I could use:
$state = uc($1)
However, I don't know the equivalent Python syntax. Anyone know?
Answer: You can do it with re.search, which returns a `match` object (if the regex
matches at all) whose `groups()` method gives the captured groups:
import re
match = re.search('^[^\s]+,\s*([a-zA-Z]{2})$', my_string)
if match:
    print match.groups()[0].upper()  # equivalent of Perl's uc($1)
|
How to make DNS queries in dnspython like dig (with additional records section)?
Question: I'm trying to use `dnspython` and want to get all records with an `ANY` query:
import dns.name
import dns.message
import dns.query
domain = 'google.com'
name_server = '8.8.8.8'
domain = dns.name.from_text(domain)
if not domain.is_absolute():
domain = domain.concatenate(dns.name.root)
request = dns.message.make_query(domain, dns.rdatatype.ANY)
response = dns.query.udp(request, name_server)
print response.answer
print response.additional
print response.authority
but it returns:
[]
[]
[]
When I make this request with `dig`:
$ dig @8.8.8.8 google.com -t ANY
; <<>> DiG 9.9.2-P1 <<>> @8.8.8.8 google.com -t ANY
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2848
;; flags: qr rd ra; QUERY: 1, ANSWER: 25, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;google.com. IN ANY
;; ANSWER SECTION:
google.com. 299 IN A 173.194.40.14
google.com. 299 IN A 173.194.40.1
google.com. 299 IN A 173.194.40.7
google.com. 299 IN A 173.194.40.4
google.com. 299 IN A 173.194.40.3
google.com. 299 IN A 173.194.40.0
google.com. 299 IN A 173.194.40.8
google.com. 299 IN A 173.194.40.6
google.com. 299 IN A 173.194.40.5
google.com. 299 IN A 173.194.40.2
google.com. 299 IN A 173.194.40.9
google.com. 299 IN AAAA 2a00:1450:4002:804::1000
google.com. 21599 IN TYPE257 \# 23 0009697373756577696C6473796D616E7465632E636F6D
google.com. 21599 IN TYPE257 \# 19 0005697373756573796D616E7465632E636F6D
google.com. 21599 IN NS ns2.google.com.
google.com. 21599 IN NS ns3.google.com.
google.com. 599 IN MX 50 alt4.aspmx.l.google.com.
google.com. 599 IN MX 10 aspmx.l.google.com.
google.com. 3599 IN TXT "v=spf1 include:_spf.google.com ip4:216.73.93.70/31 ip4:216.73.93.72/31 ~all"
google.com. 599 IN MX 20 alt1.aspmx.l.google.com.
google.com. 21599 IN SOA ns1.google.com. dns-admin.google.com. 2013070800 7200 1800 1209600 300
google.com. 599 IN MX 30 alt2.aspmx.l.google.com.
google.com. 21599 IN NS ns1.google.com.
google.com. 599 IN MX 40 alt3.aspmx.l.google.com.
google.com. 21599 IN NS ns4.google.com.
;; Query time: 52 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Jul 16 18:23:46 2013
;; MSG SIZE rcvd: 623
When I checked the requests with `wireshark`, I found that `dig` and
`dnspython` send different packets:
`dig`:
0000 c8 64 c7 3a e3 40 50 46 5d a5 70 99 08 00 45 00 .d.:.@PF ].p...E.
0010 00 43 9f 60 00 00 40 11 09 8f c0 a8 01 03 08 08 .C.`..@. ........
0020 08 08 8e 9e 00 35 00 2f 71 cf ef 49 01 20 00 01 .....5./ q..I. ..
0030 00 00 00 00 00 01 06 67 6f 6f 67 6c 65 03 63 6f .......g oogle.co
0040 6d 00 00 ff 00 01 00 00 29 10 00 00 00 00 00 00 m....... ).......
0050 00
`dnspython`:
0000 c8 64 c7 3a e3 40 50 46 5d a5 70 99 08 00 45 00 .d.:.@PF ].p...E.
0010 00 38 00 00 40 00 40 11 68 fa c0 a8 01 03 08 08 .8..@.@. h.......
0020 08 08 b8 62 00 35 00 24 23 6b 3d 31 01 00 00 01 ...b.5.$ #k=1....
0030 00 00 00 00 00 00 06 67 6f 6f 67 6c 65 03 63 6f .......g oogle.co
0040 6d 00 00 ff 00 01 m.....
In the DNS query section, `dig` has the `AD bit: Set` flag:
`002C-002D`: `01 20` for `dig` and `01 00` for `dnspython`,
and an `Additional records` section that is missing from `dnspython`:
`0046-0050`: `00 00 29 10 00 00 00 00 00 00 00`.
This happens not only for `google.com` but also for `logitech.com`, and maybe
other domains.
So how can I make requests with `dnspython` like `dig` does, with this
additional section?
Answer: I found a solution: I made the request the way `dig` does:
import dns.name
import dns.message
import dns.query
import dns.flags
domain = 'google.com'
name_server = '8.8.8.8'
ADDITIONAL_RDCLASS = 65535
domain = dns.name.from_text(domain)
if not domain.is_absolute():
domain = domain.concatenate(dns.name.root)
request = dns.message.make_query(domain, dns.rdatatype.ANY)
request.flags |= dns.flags.AD
request.find_rrset(request.additional, dns.name.root, ADDITIONAL_RDCLASS,
dns.rdatatype.OPT, create=True, force_unique=True)
response = dns.query.udp(request, name_server)
print response.answer
print response.additional
print response.authority
With `ADDITIONAL_RDCLASS = 4096`, as `dig` uses, everything works too, but I
set it to the maximum to be on the safe side. It works pretty well.
|
Passing a Python list to PHP - Only achievable with JSON?
Question: I've been doing some research and haven't found a way to solve the
situation I'm faced with now.
I need to pass a Python list to PHP. I've been reading about doing it with
JSON but I was wondering if it was possible without it.
My list looks something like this:
a_list = [0,['A1', 'A2', ['A3','A4']], ['B5', 'B2', ['B3','B4']]]
I have found how to pass simple values between Python and PHP but this is a
bit more complicated.
Also, I have found on another question asked here something that works for
dictionaries:
Python side:
import json
D = {'foo':1, 'baz': 2}
print json.dumps(D)
PHP side:
<?php
$result = json_decode(exec('python myscript.py'), true);
echo $result['foo'];
Any help would be appreciated.
Thanks!
Answer: You'll need to serialize the data somehow to send it across. Here are a
couple of ways to do it:
1. JSON
2. Protobuf
3. Write the data out to a text file in python and read it back in with PHP
4. Start a server in your PHP program and have your python program write the data to it in some format over a socket
|
Extract email sub-strings from large document
Question: I have a very large .txt file with hundreds of thousands of email addresses
scattered throughout. They all take the format:
...<[email protected]>...
What is the best way to have Python cycle through the entire .txt file
looking for all instances of a certain @domain string, and then grab the
entirety of the address within the <...>'s and add it to a list? The trouble
I have is with the variable length of different addresses.
Answer: This [code](https://developers.google.com/edu/python/regular-expressions)
extracts the email addresses in a string. Use it while reading the file line
by line:
>>> import re
>>> line = "why people don't know what regex are? let me know [email protected]"
>>> match = re.search(r'[\w\.-]+@[\w\.-]+', line)
>>> match.group(0)
'[email protected]'
If you have several email addresses use `findall`:
>>> line = "why people don't know what regex are? let me know [email protected] dssdadsa [email protected]"
>>> match = re.findall(r'[\w\.-]+@[\w\.-]+', line)
>>> match
['[email protected]', '[email protected]']
* * *
The regex above matches the most common real-world email addresses. If you
want to be completely aligned with [RFC
5322](http://www.ietf.org/rfc/rfc5322.txt), you should read it, since it
defines the exact allowed patterns for email addresses. Check
[this](http://stackoverflow.com/questions/201323/using-a-regular-expression-
to-validate-an-email-address) out to avoid any bugs in finding email addresses
correctly.
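Applied to the original question (collecting every <...> address at one
domain from a large file), a sketch with the filename and domain as
placeholders:
import re

addresses = []
with open('huge.txt') as f:              # placeholder filename
    for line in f:
        addresses.extend(re.findall(r'<([\w\.-]+@example\.com)>', line))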
|
Python subprocess: stderr only saving the first line. Why?
Question: I am running `tcpdump` from within **Python** and I would like to know how
many **packets** are **dropped by the kernel**.
When run on a command line, tcpdump looks like this:
me@mypc:$ sudo tcpdump -w myPackets.cap -i eth0 ip
tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
^C28 packets captured
28 packets received by filter
0 packets dropped by kernel
This is how I call `tcpdump` in my Python script:
f_out = open("tcpdumpSTDOUT", "w")
f_err = open("tcpdumpSTDERR", "w")
tcpdumpProcess = subprocess.Popen(['tcpdump',
'-w', 'myPackets.cap', '-i', 'eth0', '-n','ip'],
stdout=f_out,
stderr=f_err)
# a few seconds later:
tcpdumpProcess.kill()
f_err.close()
f_out.close()
Now, if I look at `tcpdumpSTDERR`, I **only** see the **first of the usual
output lines**:
> tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535
> bytes
Where's all the rest?
**EDIT** I tried a different approach:
>>> myProcess = subprocess.Popen("tcpdump -w myPackets.cap -i eth2 ip", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> myProcess.communicate()
Then I killed tcpdump from a different shell, and the output of communicate()
was displayed:
('', 'tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes\n')
... still the first line only!
**EDIT 2** Interestingly:
>>> import shlex
>>> a = subprocess.Popen(shlex.split("tcpdump -w myPackets.cap -i eth2 ip"), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> a.terminate()
>>> a.communicate()
('', 'tcpdump: listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes\n221 packets captured\n221 packets received by filter\n0 packets dropped by kernel\n')
Answer: Use `proc.terminate()` instead of `proc.kill()`. tcpdump catches SIGTERM and
prints its statistics before exiting, whereas SIGKILL cannot be caught, so the
process dies before it can write the summary lines:
import shlex
import subprocess
import time
with open("tcpdumpSTDERR", "wb") as f_err: # close the file automatically
proc = subprocess.Popen(shlex.split("tcpdump -w myPackets.cap -i eth2 ip"),
stderr=f_err)
time.sleep(2) # wait a few seconds
proc.terminate() # send SIGTERM instead of SIGKILL
proc.wait() # avoid zombies
|
Python convert csv to xlsx
Question: In [this post](http://superuser.com/questions/301431/how-to-batch-convert-csv-
to-xls-xlsx) there is a Python example to convert from csv to xls.
However, my file has more than 65536 rows so xls does not work. If I name the
file xlsx it doesn't make a difference. Is there a Python package to convert to
xlsx?
Answer: Here's an example using
[xlsxwriter](https://xlsxwriter.readthedocs.org/en/latest):
import os
import glob
import csv
from xlsxwriter.workbook import Workbook
for csvfile in glob.glob(os.path.join('.', '*.csv')):
workbook = Workbook(csvfile + '.xlsx')
worksheet = workbook.add_worksheet()
with open(csvfile, 'rb') as f:
reader = csv.reader(f)
for r, row in enumerate(reader):
for c, col in enumerate(row):
worksheet.write(r, c, col)
workbook.close()
FYI, there is also a package called
[openpyxl](http://pythonhosted.org/openpyxl/), that can read/write Excel 2007
xlsx/xlsm files.
Hope that helps.
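A minimal sketch of the openpyxl route, assuming a version where
`Workbook().active` and `Worksheet.append()` are available:
import csv
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
with open('input.csv', 'rb') as f:   # 'rb' to match the Python 2 csv module
    for row in csv.reader(f):
        ws.append(row)
wb.save('input.csv.xlsx')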
|
Python: testing for utf-8 character in string
Question: I need to test whether a string that has already been encoded with
str.encode('utf-8') is right-to-left. I tried
if u'\u200f' in str.decode('utf-8'):
print 'found it'
It neither complains nor works.
Q: What is the correct syntax to test for the occurrence of a single non-ASCII
character in a string? I'm on Python 2.6 and can't use 3.
Q: I remember reading that predominantly right-to-left characters default to
RTL even without an explicit RLM. Does anyone know a way to test such a string
without knowing which language to expect (i.e. the string can be in Arabic,
Hebrew or any other RTL language)?
Thanks for all help.
Answer: Every unicode character has a "bidirectional" class. You can find the
bidirectional class using
[unicodedata.bidirectional](http://docs.python.org/2/library/unicodedata.html#unicodedata.bidirectional).
The function returns a string, e.g. 'L', 'R', 'AL', etc. with the [following
meaning](http://www.unicode.org/reports/tr44/tr44-4.html):
| L | Left_To_Right | any strong left-to-right character |
| LRE | Left_To_Right_Embedding | U+202A: the LR embedding control |
| LRO | Left_To_Right_Override | U+202D: the LR override control |
| R | Right_To_Left | any strong right-to-left (non-Arabic-type) character |
| AL | Arabic_Letter | any strong right-to-left (Arabic-type) character |
| RLE | Right_To_Left_Embedding | U+202B: the RL embedding control |
| RLO | Right_To_Left_Override | U+202E: the RL override control |
| PDF | Pop_Directional_Format | U+202C: terminates an embedding or override control |
| EN | European_Number | any ASCII digit or Eastern Arabic-Indic digit |
| ES | European_Separator | plus and minus signs |
| ET | European_Terminator | a terminator in a numeric format context, includes currency signs |
| AN | Arabic_Number | any Arabic-Indic digit |
| CS | Common_Separator | commas, colons, and slashes |
| NSM | Nonspacing_Mark | any nonspacing mark |
| BN | Boundary_Neutral | most format characters, control codes, or noncharacters |
| B | Paragraph_Separator | various newline characters |
| S | Segment_Separator | various segment-related control codes |
| WS | White_Space | spaces |
| ON | Other_Neutral | most other symbols and punctuation marks |
For instance:
In [3]: import unicodedata as UD
In [5]: UD.bidirectional(u'\u0688')
Out[5]: 'AL'
In [6]: UD.bidirectional(u'\u200f')
Out[6]: 'R'
In [7]: UD.bidirectional(u'H')
Out[7]: 'L'
* * *
So you might be able to _guess_ if a string is right-to-left by determining if
it is composed mainly of characters whose bidirectional class is `R` or `AL`.
For example,
# coding: utf-8
import unicodedata as UD
texts = ['ڈوگرى'.decode('utf-8'),
u'Hello']
for text in texts:
x = len([None for ch in text if UD.bidirectional(ch) in ('R', 'AL')])/float(len(text))
print('{t} => {c}'.format(t=text.encode('utf-8'), c='RTL' if x>0.5 else 'LTR'))
yields
ڈوگرى => RTL
Hello => LTR
* * *
Regarding the first question:
> Q: What is the correct syntax to test for the occurrence of a single non-
> ASCII character in a string? Python 2.6 and I can't use 3.
Your method for testing if a character is in a `unicode` is correct. If
`u'\u200f' in str.decode('utf-8')` neither complains nor works, then
`u'\u200f'` is not in the `unicode`.
|
Install a package with pip, ImportError
Question: When I install a package with pip (for example patsy)
[sudo] pip install patsy
Downloading/unpacking patsy
Downloading patsy-0.1.0.tar.gz (258kB): 258kB downloaded
Running setup.py egg_info for package patsy
no previously-included directories found matching 'doc/_build'
Requirement already up-to-date: numpy in /usr/lib/python2.7/dist-packages (from patsy)
Installing collected packages: patsy
Running setup.py install for patsy
no previously-included directories found matching 'doc/_build'
Successfully installed patsy
Cleaning up...
And pip installs into /usr/local/lib/python2.7/dist-packages and not into
/usr/lib/python2.7/dist-packages, and when I try to import the package in
IPython I get this error:
import patsy
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-9de22d189b17> in <module>()
----> 1 import patsy
ImportError: No module named patsy
I have Linux Mint MATE 15. What am I doing wrong?
Answer: Try modifying your PYTHONPATH; I recommend reading the following:
<http://www.stereoplex.com/blog/understanding-imports-and-pythonpath>
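For example, assuming pip really did install into
/usr/local/lib/python2.7/dist-packages, adding that directory to PYTHONPATH in
your shell profile should make IPython see it:
export PYTHONPATH=/usr/local/lib/python2.7/dist-packages:$PYTHONPATH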
|
Get the number of friends a facebook user has
Question: I have 2 unrelated questions.
1. How many posts does facebook allow you to get with an api?
2. Using facepy, facebook, or any other api, how do I get the number of friends a user has? (This user is not my friend). The user id is provided.
This is how I currently get the number of my friends with facebook api:
>>> from facebook import *
>>> token = 'whatevertheinfo'
>>> graph = facebook.GraphAPI(token)
>>> friends = graph.get_connections("me", "friends")
>>> len(friends['data'])
1
If I try `>>> friends = graph.get_connections("100000549223625", "friends")` I
get this error:
Traceback (most recent call last):
File "<pyshell#55>", line 1, in <module>
friends = graph.get_connections("100000549223625", "friends")
File "C:\Documents and Settings\visolank\Desktop\Python\programs\facebook.py", line 112, in get_connections
return self.request(id + "/" + connection_name, args)
File "C:\Documents and Settings\visolank\Desktop\Python\programs\facebook.py", line 298, in request
raise GraphAPIError(response)
GraphAPIError: Unsupported operation
I don't need to know who the friends are, I just need a number. I know it's
possible, because facebook claims:
"Public Profile and Friend List
The public profile and friend list is the basic information available to an
app. All other permissions and content must be explicitly asked for."
Answer: This is intentional behaviour. There was a [bug
report](https://developers.facebook.com/bugs/356511554434996) filed for this
problem, and this was the official Facebook response:
> This is intentional, you cannot retrieve friends lists for non-users of the
> app.
I suspect that if you used a userid of someone that had installed your app it
might work, but not just any arbitrary user.
|