Python: How to force a "print" to use __unicode__ instead of __str__, or otherwise naturally "print" the message without explicitly calling unicode()
Question: Basically I just want to be able to create instances using a class called
Bottle: eg `class Bottle(object):...` and then in another module be able to
simply "print" any instance **without** having to hack code to explicitly call
a character encoding routine.
In summary, when I try:
obj=Bottle(u"味精")
print obj
Or to an "in place" "print":
print Bottle(u"味精")
I get:
"UnicodeEncodeError: 'ascii' codec can't encode characters"
Similar stackoverflow questions:
* [unicode class in Python](http://stackoverflow.com/questions/2189156/unicode-class-in-python)
* [how to print chinese word in my code.. using python](http://stackoverflow.com/questions/2688020/how-to-print-chinese-word-in-my-code-using-python)
* [Python string decoding issue](http://stackoverflow.com/questions/2389410/python-string-decoding-issue)
* [python 3.0, how to make print() output unicode?](http://stackoverflow.com/questions/507123/python-3-0-how-to-make-print-output-unicode)
It's currently not feasible to switch to Python 3.
A solution or hint (and explanation) on how to do an in place utf-8 print
(just like class U does **successfully** below) would be muchly appreciated.
:-)
ThanX N
--
Sample code:
-------- 8>< - - - - cut here - - - -
#!/usr/bin/env python
# -*- coding: utf-8 -*-
def setdefaultencoding(encoding="utf-8"):
    import sys, codecs
    org_encoding = sys.getdefaultencoding()
    if org_encoding == "ascii": # not good enough
        print "encoding set to "+encoding
        sys.stdout = codecs.getwriter(encoding)(sys.stdout)
        sys.stderr = codecs.getwriter(encoding)(sys.stderr)
setdefaultencoding()
msg=u"味精" # the message!
class U(unicode): pass
m1=U(msg)
print "A)", m1 # works fine, even with unicode, but
class Bottle(object):
    def __init__(self,msg): self.msg=msg
    def __repr__(self):
        print "debug: __repr__",self.msg
        return '{{{'+self.msg+'}}}'
    def __unicode__(self):
        print "debug: __unicode__",self.msg
        return '{{{'+self.msg+'}}}'
    def __str__(self):
        print "debug: __str__",self.msg
        return '{{{'+self.msg+'}}}'
    def decode(self,arg): print "debug: decode",self.msg
    def encode(self,arg): print "debug: encode",self.msg
    def translate(self,arg): print "debug: translate",self.msg
m2=Bottle(msg)
#print "B)", str(m2)
print "C) repr(x):", repr(m2)
print "D) unicode(x):", unicode(m2)
print "E)",m2 # gives: UnicodeEncodeError: 'ascii' codec can't encode characters
-------- 8>< - - - - cut here - - - - Python 2.4 output:
encoding set to utf-8
A) 味精
C) repr(x): debug: __repr__ 味精
{{{\u5473\u7cbe}}}
D) unicode(x): debug: __unicode__ 味精
{{{味精}}}
E) debug: __str__ 味精
Traceback (most recent call last):
File "./uc.py", line 43, in ?
print "E)",m2 # gives: UnicodeEncodeError: 'ascii' codec can't encode characters
UnicodeEncodeError: 'ascii' codec can't encode characters in position 3-4: ordinal not in range(128)
-------- 8>< - - - - cut here - - - - Python 2.6 output:
encoding set to utf-8
A) 味精
C) repr(x): debug: __repr__ 味精
Traceback (most recent call last):
File "./uc.py", line 41, in <module>
print "C) repr(x):", repr(m2)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 3-4: ordinal not in range(128)
Answer: if you use `sys.stdout = codecs.getwriter(encoding)(sys.stdout)` then you
should pass Unicode strings to `print`:
>>> print u"%s" % Bottle(u"魯賓遜漂流記")
debug: __unicode__ 魯賓遜漂流記
{{{魯賓遜漂流記}}}
As @bobince points out in the comments: avoid changing `sys.stdout` in such a
manner, otherwise it might break any library code that works with `sys.stdout`
and doesn't expect to print Unicode strings.
In general:
`__unicode__()` should return Unicode strings:
def __init__(self, msg, encoding='utf-8'):
    if not isinstance(msg, unicode):
        msg = msg.decode(encoding)
    self.msg = msg

def __unicode__(self):
    return u"{{{%s}}}" % self.msg
`__repr__()` should return ascii-friendly `str` object:
def __repr__(self):
return "Bottle(%r)" % self.msg
`__str__()` should return `str` object. Add _optional_ `encoding` to document
what encoding is used. There is no good way to choose encoding here:
def __str__(self, encoding="utf-8")
return self.__unicode__().encode(encoding)
Define `write()` method:
def write(self, file, encoding=None):
    encoding = encoding or getattr(file, 'encoding', None)
    s = unicode(self)
    if encoding is not None:
        s = s.encode(encoding)
    return file.write(s)
It should cover cases when the file has its own encoding or it supports
Unicode strings directly.
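Putting those pieces together, a minimal sketch (Python 2; the `Bottle` name and
the `{{{...}}}` formatting come from the question):
# -*- coding: utf-8 -*-
class Bottle(object):
    def __init__(self, msg, encoding='utf-8'):
        if not isinstance(msg, unicode):
            msg = msg.decode(encoding)
        self.msg = msg
    def __unicode__(self):
        return u"{{{%s}}}" % self.msg
    def __str__(self, encoding="utf-8"):
        return self.__unicode__().encode(encoding)
    def __repr__(self):
        return "Bottle(%r)" % self.msg

obj = Bottle(u"味精")
print u"%s" % obj   # goes through __unicode__, as in the answer's example above
print repr(obj)     # ascii-safe: Bottle(u'\u5473\u7cbe')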
|
How to set up a cronjob that accesses a website every 5 minutes
Question: I am hosting a site on ep.io and heroku. The way they work is that when your
website isn't accessed for some amount of time, the server is 'shut off', only
to be restarted when someone visits your domain. I noticed that this generally
means it can take up to 15 seconds for the page to load.
To counter this problem I want to make a cron job on another hosting I have,
that tries to access my ep.io/heroku webpage every 5 minutes for example. I
however have no clue how this would work as I have never used cronjobs.
I think I should add something like this to the crontab:
*/5 * * * * /usr/bin/curl --silent http://myapp.heroku.com
Is this a valid crontab line? If not, how should it be?
**Answer:**
Ok, after some searching around I found a solution; I'll post it here so
others can benefit.
Make a python script (e.g. access_page.py) that loads a URL:
#!/usr/bin/env python
import urllib2
if __name__ == '__main__':
    f = urllib2.urlopen('http://myapp.heroku.com')
    print f.read(10)
Then upload it somewhere on your second, not heroku, server (/webapps/myapp/
for me) and add the following line to your crontab (`crontab -e`):
*/5 * * * * ~/webapps/myapp/access_page.py > $HOME/cron.log 2>&1
This will run the script every five minutes, accessing your webpage defined in
your python script. To test if it works it will output the first 10 characters
of the webpage in the file cron.log in your $HOME directory.
If it says something like access denied, chmod your .py file with `chmod +x
access_page.py`.
Answer: Add something for the request to actually do on the page at your URL -- in your
example, the <http://myapp.heroku.com/index.php> file. Writing to a database or a
file would be a better check that the request really reached the app.
|
What do you call the elements of a module?
Question: I have this code from Learn Python The Hard Way and I need to comment it:
from sys import argv
So I did: #imports argv from the sys module
I know what argv does, but I am having trouble figuring out what to call this
element of the sys module? A variable, a method, a function? Argument
variable?
Also, given that line of code, does it mean that all the sys module is
imported in my program or only argv?
Answer: `argv` is a variable in the `sys` module's namespace. It happens to be a list.
The statement
from sys import argv
causes Python to import the entire `sys` module (assuming it has not already
been imported, in which case it is _not_ re-imported), and bind the name `argv`
in the current module's namespace to that same list.
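A small illustration of the difference (a sketch, saved as a script and run with a few arguments):
# example.py  (hypothetical file name)
from sys import argv      # the whole sys module gets imported...
print(argv)               # ...but only the name 'argv' is bound in this module

import sys                # sys is already loaded; this just binds the name 'sys' as well
print(sys.argv is argv)   # True -- both names refer to the same list object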
|
Why is semicolon allowed in this python snippet?
Question: Python does not warrant the use of semicolons to end statements. So why is
this (below) allowed?
import pdb; pdb.set_trace()
Answer: Python does not _require_ semicolons to terminate statements. Semicolons
_can_ be used to delimit statements if you wish to put multiple statements on
the same line.
Now, _why_ is this allowed? It's a simple design decision. I don't think
Python needs this semi-colon thing, but somebody thought it would be nice to
have and added it to the language.
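For illustration, the one-liner from the question is simply two statements sharing a line:
import pdb; pdb.set_trace()   # two statements, delimited by ';'

# exactly equivalent to:
import pdb
pdb.set_trace()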
|
SVN: repository structure when two different major versions are developed in parallel?
Question: I am quite new to Subversion, and I want to know how to structure a
repository. As I have read, the 'trunk' directory is for the main development, 'tags'
is for snapshotting a version, and 'branches' is for doing big changes/testing
without disturbing the trunk.
The problem is when you have two major versions to develop in parallel: I do
not see very well how to structure that. I take the example of the Python
language, where both version 2 and version 3 are under development. I see these structure
possibilities:
1st one :
===========
repos/
    python2/
        trunk/
        tags/
            V2.5/
            V2.6/
            V2.7/
        branches/
            big_modif1/
            testing2/
    python3/
        trunk/
        tags/
            V3.0/
            V3.1/
            V3.2/
        branches/
            big_modif43/
            testing37/
2nd one :
===========
repos/
    python/
        trunk/
            V2/
            V3/
        tags/
            V2.5/
            V2.6/
            V2.7/
            V3.0/
            V3.1/
            V3.2/
        branches/
            big_modif_on_v2.x/
            testing2_on_v2.x/
            big_modif43_on_v3.x/
            testing37_on_v3.x/
3rd one :
===========
repos/
    python/
        trunk/
        tags/
            V2.5/
            V2.6/
            V2.7/
            V3.0/
            V3.1/
            V3.2/
        branches/
            V2_trunk/
            V3_trunk/
            big_modif_on_v2.x/
            testing2_on_v2.x/
            big_modif43_on_v3.x/
            testing37_on_v3.x/
What will you choose (of course, you can propose something else) ?
Answer: I think a combination could be the best. Let me explain it with your example:
* Python 2 and Python 3 are developed in the same project, from the same team (so should be developed at least in one repository).
* Python 3 is the future (main) development version, Python 2 is not actively developed further (not sure about that).
* Both are released to the public and should stay in sync, but no Python 3 feature should leak into Python 2.
So I would follow the ["single project repo layout"](http://svnbook.red-
bean.com/en/1.6/svn-book.html#svn.tour.importing.layout) (described in the
[SVN red book](http://svnbook.red-bean.com/en/1.6/svn-book.html)):
repos/
    python/
        trunk/
        branches/
            V2/
        tags/
            ...
            V2.7/
            ...
            V3.2/
The main point here is that V2 was branched when development on version 3
began. And you should stick to the following merging rules:
* Merge only bug fixes from V2 to trunk, and only if they are compatible with it.
* Don't merge from trunk (== V3) to V2.
|
Django uwsgi import error
Question: I have a Django project with one app called `subscribe`. In the root `urls.py` I
include `subscribe`'s `urls.py`.
I put `subscribe` into `INSTALLED_APPS`, and in `subscribe`'s `urls.py` I use
`subscribe.views.<name>` to refer to my views. When the server is run locally with `python
manage.py runserver`, everything works fine. But when the server runs on
nginx+uwsgi with a virtualenv, I get `ImportError: No module named
subscribe`. When I change `subscribe` to `project.subscribe` in
`INSTALLED_APPS` and change `subscribe.views.<name>` to
`project.subscribe.views.<name>` in `subscribe`'s `urls.py`, everything works fine.
uwsgi config:
uwsgi config:
[uwsgi]
socket = 127.0.0.1:9003
workers = 2
master = true
virtualenv = /home/user/python
chdir = /home/user
env = DJANGO_SETTINGS_MODULE=project.settings
module = django.core.handlers.wsgi:WSGIHandler()
daemonize = /home/user/uwsgi.log
Why do I have to use absolute imports here, and how can I change it back to the
shorter form on nginx+uwsgi with virtualenv?
Answer: Your uwsgi config should include a `pythonpath=/path/where/lives/settings.py/`
directive, so the Python interpreter will know where to find your apps (see the sketch below the links).
Find more information about uwsgi config options:
* <http://projects.unbit.it/uwsgi/wiki/Doc>
* <http://projects.unbit.it/uwsgi/wiki/Example>
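Applied to the config in the question, that might look like this (a sketch; it
assumes the `project` package lives directly under `/home/user`, which is what
`chdir` and `DJANGO_SETTINGS_MODULE=project.settings` suggest):
[uwsgi]
socket = 127.0.0.1:9003
workers = 2
master = true
virtualenv = /home/user/python
chdir = /home/user
# assumption: /home/user is the directory that contains the "project" package
pythonpath = /home/user
env = DJANGO_SETTINGS_MODULE=project.settings
module = django.core.handlers.wsgi:WSGIHandler()
daemonize = /home/user/uwsgi.log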
|
Using Python networkx for exploring network properties
Question: I'm trying to write some code for getting Twitter network properties,
but I got an error and I don't know why it happened.
The error is this:
Traceback (most recent call last):
File "Network_property.py", line 14, in <module>
followee = line.strip().split('\t')[1]
IndexError: list index out of range
The code is this:
import os, sys
import time
import networkx as nx
DG = nx.DiGraph()
ptime = time.time()
j = 1
#for line in open("./US_Health_Links.txt", 'r'):
for line in open("./test_network.txt", 'r'):
    follower = line.strip().split('\t')[0]
    followee = line.strip().split('\t')[1]
    DG.add_edge(follower, followee)
    if j%1000000 == 0:
        print j*1.0/1000000, "million lines done", time.time() - ptime
        ptime = time.time()
    j += 1
print nx.number_connected_components(DG)
I gathered some links data like this:
1000 1001
1000 1020191
1000 10267352
1000 10957902
1000 11039092
1000 1118691
1000 11882
1000 1228281
1000 1247041
1000 12965332
1000 13027572
1000 13075072
1000 13183162
1000 13250162
1000 13326292
1000 13452672
1000 13844892
1000 14061830
1000 1406481
1000 14134703
1000 14216951
1000 14254402
1000 14258044
1000 14270791
1000 14278978
1000 14313332
1000 14392970
1000 14441172
1000 14497568
1000 14502775
1000 14595635
1000 14620544
1000 14632615
1000 14680596
1000 14956164
1000 14998341
1000 15132211
1000 15145450
1000 15285998
1000 15288974
1000 15300187
1000 1532061
1000 15326300
"1000" is a follower and others are followee.
+
I wanna get results of (1) number of connected component, (2) fraction of
nodes in the largest connected component, (3) average and median of in-degree,
(4) average and median of out-degree, (5) diameter, and (6) clustering
coefficient
But the site "networkx.lanl.gov" doesn't work.
Is there anybody who help me out?
Answer: The error has nothing specifically to do with networkx. What is happening is
that for some line `line.strip().split('\t')` is returning only a single
field. I'd guess that the problem is with blank lines in your file. Compare:
>>> ''.split("\t")
['']
>>> ''.split("\t")[1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list index out of range
>>>
Thus, an empty line could cause the problem. You could check this explicitly,
for example, by adding
if not line:
    continue
at the beginning of your `for` loop.
Also take a look at
[networkx.read_edgelist](http://networkx.lanl.gov/reference/generated/networkx.read_edgelist.html),
which should be simplest if you don't need to have the `print` statement
showing progress.
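A sketch of that route (it assumes the tab-delimited file from the question and
no blank lines; note that connected components are defined for undirected
graphs, so the digraph is converted first):
import networkx as nx

DG = nx.read_edgelist("./test_network.txt", delimiter='\t',
                      create_using=nx.DiGraph(), nodetype=str)
print DG.number_of_nodes(), DG.number_of_edges()
print nx.number_connected_components(DG.to_undirected())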
|
python: urllib2 using different network interface
Question: I have the following code:
f = urllib2.urlopen(url)
data = f.read()
f.close()
It's running on a machine with two network interfaces. I'd like to specify
which interface I want the code to use. Specifically, I want it to use the one
other than the one it is using by default... but I can figure out which is
which if I can just pick the interface.
What's the easiest/best/most pythonic way to do this?
Answer: Not yet a complete solution, but if you were using only simple socket objects,
you could do what you need this way :
import socket
s = socket.socket()
s.bind(("127.0.0.1", 0)) # replace "127.0.0.1" by the local IP of the interface to use
s.connect(("remote_server.com", 80))
Thus, you will force the system to bind the socket to the wanted network
interface.
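One way to extend that idea to urllib2 is to install a handler whose connections
bind to the chosen local address (a sketch, assuming Python 2.7, where
`httplib.HTTPConnection` accepts a `source_address` argument; the IP below is a
placeholder):
import urllib2
import httplib

LOCAL_IP = "192.168.0.10"   # placeholder: local IP of the interface to use

class BoundHTTPConnection(httplib.HTTPConnection):
    def __init__(self, *args, **kwargs):
        # bind outgoing connections to the chosen interface (port 0 = any port)
        kwargs["source_address"] = (LOCAL_IP, 0)
        httplib.HTTPConnection.__init__(self, *args, **kwargs)

class BoundHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(BoundHTTPConnection, req)

opener = urllib2.build_opener(BoundHTTPHandler())
data = opener.open("http://example.com/").read()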
|
Python trying to write and read class from file but something went horribly wrong
Question: Considering this is only for my homework I don't expect much help but I just
can't figure this out and honestly I can't get my head around what's going
wrong. Usually I have an idea where the problem is but now I just don't get
it.
_Long story short_: I'm trying to create a valid-looking telephone number
within a class, load the instances into a list, and later save all of them as
strings to a file. When I start the program again I want it to read the file,
re-create my class instances and load them back into the list (basically a very
simple repository).
The problem is that even though I evaluate the stored phone number in exactly the same
way I validate it as input data ... I get an error which makes no sense.
Another small problem is that when I re-use the data it somehow creates
whitespace in the file, which in turn messes my program up badly.
Here I validate phone numbers:
def validateTel(call_ID):
    if isinstance (call_ID, str) == True:
        call_ID = call_ID.replace (" ", "")
        if (len (call_ID) != 10):
            print ("Telephone numbers are 10 digits long")
            return False
        for item in call_ID:
            try:
                int(item)
            except:
                print ("Telephone numbers should contain non-negative digits")
                return False
            else:
                if (int(item) < 0):
                    print ("Digits are non-negative")
After this I use it and other (not relevant to this discussion) data to create
an object (class instance) and move it to a list.
Inside my class I have a "load from string" and a "load to string". They take
everything from my class object so that I can write it to a file, using
`"+"` as a separator so that I can later use `string.split("+")` when reading it back.
This works nicely, but when I read it ... well, it's not working.
def load_data():
f = open ("data.txt", "r")
ch = f.read()
contact = agenda.contact () # class object
if ch in (""," ","None"," None"):
f.close()
return [] # if the file is empty or has None in some way I pass an empty stack
else:
stack = [] # the list where I load all my class objects
f.seek(0,0)
for line in f:
contact.loadFromString(line) # explained bellow
stack.append(deepcopy(contact))
f.close()
return stack
In `loadFromString(line)` all I do is validate the line (see if the data
inside it at least looks OK).
Now here is the place where I validate the string I just read from the file:
def validateString (load_string):
string = string.split("+")
if len (string) != 4:
print ("System error in loading from file: Program skipping segment of corrupt data")
return False
if string[0] == "" or string[0] == " " or string[0] == None or string[0] == "None" or string[0] == " None":
print ("System error in loading from file: Name field cannot be empty")
try:
int(string[1])
except:
print("System error in loading from file: ID is not integer")
return False
if (validateTel(str(string[2])) == False):
print ("System error in loading from file: Call ID (telephone number)")
return False
return True
**Small recap**: I try to load the data from the file using `loadFromString()`.
The only relevant thing it does is try to validate my data with
`validateString(string)`, and in there the only thing that messes me up is
`validateTel`. But my input data gets validated in the same way my stored data
does. They are perfectly identical, but it gives a "System error" -- and to give
such an error it should also have given an error in the validate sub-program,
but it doesn't.
I hope this is enough info, because my program is kind of big (for me anyway);
however, the bug should be here somewhere.
I thank anyone brave enough to sift through this mess.
EDIT:
The class is very simple, it looks like this:
class contact:
    def __init__ (self, name = None, ID = None, tel = None, address = None):
        self.__name = name
        self.__id = ID
        self.__tel = tel
        self.__address = address
After this I have a series of setters and getters (to modify contacts and to
return parts of the abstract data)
Here I also have my loadFromString and loadToString but those work just fine
(except maybe they cause a small jump after each line (an empty line) which
then kills my program, but that I can deal with)
My problem is somewhere in the validate or a way the repository interacts with
it. The point is that even if it gives an error in the loading of the data,
first the validate should print an error ... but it doesn't -_-
Answer: You said `I just can't figure this out and honestly I can't get my head around
what's going wrong`. I think this is a great quote which sums up a large part
of programming and software development in general -- dealing with crazy,
weird problems and spending a lot of time trying to wrap your head around
them.
Figuring out how to turn ridiculously complicated problems into small,
manageable problems is the hardest part of programming, but also arguably the
most important and valuable.
* * *
Here's some general advice which I think might help you:
* use meaningful names for functions and variables (`validateString` doesn't tell me anything about what the function does; `string` tells me nothing about the meaning of its contents)
* break down problems into small, well-defined pieces
* specify your data -- **what is a phone number?** 10 positive digits, no spaces, no punctuation?
* document/comment the input/output from functions if it's not obvious
Specific suggestions:
* `validateTel` could probably be replaced with a simple regular expression match
* try using `json` for serialization
* if you're using `json`, then it's easy to use lists. I would strongly recommend this over using `+` as a separator -- that looks highly questionable to me
* * *
Example: using a regex
import re
def validateTel(call_ID):
    phoneNumberRegex = re.compile("^\d{10}$") # match a string of 10 digits
    return phoneNumberRegex.match(call_ID)
Example: using json
import json
phoneNumber1, phoneNumber2, phoneNumber3 = ... whatever ...
mylist = [phoneNumber1, phoneNumber2, phoneNumber3]
print json.dumps(mylist)
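For example, a whole contact record could round-trip through JSON without any
hand-rolled separators (a sketch; the field names are made up):
import json

record = {"name": "John", "id": 7, "tel": "0123456789", "address": "Somewhere"}
line = json.dumps(record)      # one line of text, safe to write to the file
restored = json.loads(line)    # parse it back into a dict when loading
print restored["tel"]          # 0123456789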
|
A slider for curses based UI
Question: As a learning project, I'd like to set-out to make an ncurses-based UI for a
program I had in mind, written in python.
After looking at urwid documentation, I cannot see anyway to create a simple
slider (I need it to make a volume slider) that can be adjusted with the
mouse.
Am I missing something in urwid, or is there a more convenient curses module
to make such a slider?
Answer: Curses has a very low-level API, going back to 1980s C programming.
The Python wrappers have some higher-level support for keyboard input and some
other niceties, but they are few and far between and not nicely documented.
The Python niceties do not include mouse support (ok, you get your mouse state
back in a tuple instead of having to create a C structure for it, so it is
somewhat better).
The idea is that one has to:
* enable "keypad" on a curses window, so that Python gives you full key codes;
* enable a "mousemask", so that mouse events are sent to your app;
* detect the special "mouse key" code in the getch function, so that you can call "getmouse" to get the coordinates and button state.
So there are no pre-made nice callbacks; you have to set up the main loop of
your application to detect mouse events yourself.
This sample code performs the above steps for reading the mouse events and
printing the mouse state to the screen - it should be enough to get one
started in building some useful mouse handling with curses:
# -*- coding: utf-8 -*-
import curses

screen = curses.initscr()
curses.noecho()
curses.mousemask(curses.ALL_MOUSE_EVENTS)
screen.keypad(1)
char = ""
try:
    while True:
        char = screen.getch()
        screen.addstr( str(char) + " ")
        if char == curses.KEY_MOUSE:
            screen.addstr (" |" + str(curses.getmouse()) + "| ")
finally:
    screen.keypad(0)
    curses.endwin()
    curses.echo()
|
Secure Copy File from remote server via scp and os module in Python
Question: I'm pretty new to Python and programming. I'm trying to copy a file between
two computers via a Python script. However the code
os.system("ssh " + hostname + " scp " + filepath + " " + user + "@" + localhost + ":" + cwd)
won't work. I think it needs a password, as described in [How do I copy a file
to a remote server in python using scp or
ssh?](http://stackoverflow.com/questions/68335/how-do-i-copy-a-file-to-a-
remote-server-in-python-using-scp-or-ssh). I didn't get any error logs; the
file just won't show up in my current working directory.
However every other command with `os.system("ssh " + hostname + "command")` or
`os.popen("ssh " + hostname + "command")` does work. -> `command = e.g. ls`
When I try `ssh hostname scp file user@local:directory` in the commandline it
works without entering a password.
I tried to combine `os.popen` commands with getpass and pxssh module to
establish a ssh connection to the remote server and use it to send commands
directly (I only tested it for an easy command):
import pxssh
import getpass
ssh = pxssh.pxssh()
ssh.force_password = True
hostname = raw_input("Hostname: ")
user = raw_input("Username: ")
password = getpass.getpass("Password: ")
ssh.login(hostname, user, password)
test = os.popen("hostname")
print test
But I'm not able to pass commands through to the remote server (`print test`
shows that hostname = local and not the remote server), although I'm sure the
connection is established. I thought it would be easier to establish a
connection than to always use `"ssh " + hostname` in the bash commands. I also
tried some of the workarounds in [How do I copy a file to a remote server in
python using scp or ssh?](http://stackoverflow.com/questions/68335/how-do-i-
copy-a-file-to-a-remote-server-in-python-using-scp-or-ssh), but I must admit
that due to lack of experience I didn't get them to work.
Thanks a lot for helping me.
Answer: I think the easiest (to avoid having to enter a password) and most secure way
to go about this is to first set [public/private key
authentication](http://www.ece.uci.edu/~chou/ssh-key.html). Once that is done,
and you can log in to the remote system by doing `ssh user@hostname`, the
following bash command would do the trick:
scp some/complete/path/to/file user@remote_system:some/remote/path
The corresponding Python code would be:
import subprocess
filepath = "some/complete/path/to/file"
hostname = "user@remote_system"
remote_path = "some/remote/path"
subprocess.call(['scp', filepath, ':'.join([hostname,remote_path])])
|
Python : 2 sockets, sending from A to B and from B to A
Question: I've been working on this for 2 days and I still don't manage to get it to
work properly.
I wanted to write an application that uses 2 sockets with a medium in the
middle. This medium is this script, which should read from socket A and write to
socket B, and read from socket B and write to socket A.
However it seems I cannot nail it.
When my script is running it accepts connections, but it will not allow me to input
anything on the telnet screen.
I am using 2 shared lists between the sockets to pass data.
#!/usr/bin/env python
import sys
import arduinoReadThread
import arduinoWriteThread
import socket
import thread
bolt = 0
socketArray=list()
HOST =""
HOST2=""
PORT1 =50115
PORT2 =50125
s1=socket.socket(socket.AF_INET, socket.SOCK_STREAM ) #create an INET, STREAMing socket
s1.bind((HOST,PORT1)) #bind to that port
s1.listen(2) #listen for user input and accept 1 connection at a time.
socketArray.append(s1)
s2=socket.socket(socket.AF_INET, socket.SOCK_STREAM ) #create an INET, STREAMing socket
s2.bind((HOST2,PORT2)) #bind to that port
s2.listen(2) #listen for user input and accept 1 connection at a time.
socketArray.append(s2)
print "sockets set up"
s1ToWriteList = list()
s2ToWriteList = list()
def socketFunctionWrite1():
    while(bolt == 0):
        client, address = s1.accept()
        print "Writing connections"
        if len(s1ToWriteList) > 0:
            client.send(s1ToWriteList.pop(0))

def socketFunctionRead1():
    while(bolt == 0):
        client2, address = s2.accept()
        f = client2.recv(1024)
        print "reading connection"
        s1ToWriteList.append(f)
        print len(s1ToWriteList)

def socketFunctionWrite2():
    while(bolt == 0):
        client2, address = s2.accept()
        print "Writing connections"
        if len(s2ToWriteList) > 0:
            client2.send(s2ToWriteList.pop(0))

def socketFunctionRead2():
    while(bolt == 0):
        client, address = s1.accept()
        f = client.recv(1024)
        print "reading connection"
        s2ToWriteList.append(f)
        print len(s2ToWriteList)

def shutDown():
    test = raw_input("Quit ?")
    if(test =="y"):
        bolt = 1
    else:
        shutDown()

def spreadSockets():
    thread.start_new_thread(socketFunctionRead1,())
    print "launch 1"
    thread.start_new_thread(socketFunctionRead2,())
    print "launch 2"
    thread.start_new_thread(socketFunctionWrite1,())
    print "launch 3"
    thread.start_new_thread(socketFunctionWrite2,())
    print "launch 4"

spreadSockets()
while(True):
    pass
Answer: I used your exact code and it worked for me. I think what you might be doing
wrong is telnet'ing to the wrong IP. Don't use 'localhost' or 127.0.0.1; you
need to use the actual (internal) IP of your box.
If on Linux, you can see it with `ifconfig -a`, or `ipconfig /all` on Windows.
Running your code exactly, with no modification (except for removing the 2 unknown
imports at the top):
Launched script:
[ 15:01 [email protected] ~/SO/python ]$ ./sock.py
sockets set up
launch 1
launch 2
launch 3
launch 4
Writing connections
Writing connections
^CTraceback (most recent call last):
File "./sock.py", line 93, in <module>
time.sleep(1)
KeyboardInterrupt
Then telnet'd:
[ 15:01 [email protected] ~ ]$ telnet 10.10.1.11 50115
Trying 10.10.1.11...
Connected to 10.10.1.11.
Escape character is '^]'.
Hello, World!
Hello 2
^]
telnet> quit
Connection closed.
[ 15:02 [email protected] ~ ]$ telnet 10.10.1.11 50125
Trying 10.10.1.11...
Connected to 10.10.1.11.
Escape character is '^]'.
Hello 50125!
Hi!
^]
telnet> quit
Connection closed.
[ 15:02 [email protected] ~ ]$
My internal interface config (`inet addr:10.10.1.11`):
[ 15:07 [email protected] ~/SO/python ]$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr **:**:**:**:**:**
inet addr:10.10.1.11 Bcast:10.10.1.255 Mask:255.255.255.0
...
|
Selenium python do test every 10 seconds
Question: I am using selenium (python) testing and I need to test my application
automatically every 10 seconds.
How can I do this?
Answer: You could use
[threading.Timer](http://docs.python.org/library/threading.html#timer-
objects):
import threading
import logging
import time

logger = logging.getLogger(__name__)

def print_timer(count):
    if count:
        t = threading.Timer(10.0, print_timer, args=[count-1])
        t.start()
    logger.info("Begin print_timer")
    time.sleep(15)
    logger.info("End print_timer")

def using_timer():
    t = threading.Timer(0.0, print_timer, args=[3])
    t.start()

if __name__=='__main__':
    logging.basicConfig(level=logging.DEBUG,
                        format='%(threadName)s: %(asctime)s: %(message)s',
                        datefmt='%H:%M:%S')
    using_timer()
yields
Thread-1: 06:46:18: Begin print_timer --
| 10 seconds
Thread-2: 06:46:28: Begin print_timer --
Thread-1: 06:46:33: End print_timer | 10 seconds
Thread-3: 06:46:38: Begin print_timer --
Thread-2: 06:46:43: End print_timer | 10 seconds
Thread-4: 06:46:48: Begin print_timer --
Thread-3: 06:46:53: End print_timer
Thread-4: 06:47:03: End print_timer
Note that this will spawn a new thread every 10 seconds. Be sure to provide
some way for the thread-spawning to cease before the number of threads becomes
intolerable.
|
Error: No such file or directory
Question: I am trying to extract data from a XML file with python. I tried the following
code.
from xml.etree.ElementTree import ElementTree
tree = ElementTree()
tree.parse("data_v2.xml")
Error message:
IOError: [Errno 2] No such file or directory: 'data_v2.xml'.
Answer: This is not an XML error. It means that `data_v2.xml` does not exist -- the
system (operating system) cannot find it. Maybe the name is wrong, or maybe you need to
provide the full path.
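A quick way to check what is going on (a sketch; `__file__` assumes the code
lives in a script rather than the interactive prompt):
import os
from xml.etree.ElementTree import ElementTree

print(os.getcwd())                     # the directory the script is actually run from
print(os.path.exists("data_v2.xml"))   # False means the relative name does not resolve from here

# One option: resolve the file relative to the script's own location
path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "data_v2.xml")
tree = ElementTree()
tree.parse(path)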
|
Video file reading by OpenCV is very slow in Python
Question: I'm trying to use OpenCV from Python for some video processing and it works
extremely slow for me. For example a simple reading and showing of all frames
works at about 1 fps:
import cv2
cap = cv2.VideoCapture("out1.avi")
cv2.namedWindow("input")
while(True):
    f, img = cap.read()
    cv2.imshow("input", img)
    cv2.waitKey(1)
The same video file in C++ is rendered without any problems at about 30 fps.
Are there any ideas why Python version is so slow?
And there is another interesting thing about Python version: it doesn't show
.wmv files which C++ version can process (for my Python can open raw video
only).
I use OpenCV 2.3.1 and Python 2.7
Thanks for help!
Answer: The code runs normally (fast) on my machine (OpenCV 2.3.0 & Python 2.6.4 on
win7-64, playing an uncompressed avi file).
Have you tried comparing the performance with the older Python interface (cv instead of
cv2)?
Regarding .wmv playback, the Python interface (or, more specifically, ffmpeg)
has a problem there; it can't play anything other than uncompressed .avi
files.
|
Create a list (of tuples?) from two lists of different sizes
Question: I am stuck trying to perform this task and while trying I can't help thinking
there will be a nicer way to code it than the way I have been trying.
I have a line of text and a keyword. I want to make a new list going down each
character in each list. The keyword will just repeat itself until the end of
the list. If there are any non-alpha characters the keyword letter will not be
used.
For example:
Keyword="lemon"
Text="hi there!"
would result in
('lh', 'ei', ' ', 'mt' , 'oh', 'ne', 'lr', 'ee', '!')
Is there a way of telling python to keep repeating over a string in a loop, ie
keep repeating over the letters of lemon?
I am new to coding so sorry if this isn't explained well or seems strange!
Answer: You've got two questions mashed into one. The first is: how do you remove non-
alphanumeric chars from a string? You can do it a few ways, but regular
expression substitution is a nice way.
import re
def removeWhitespace( s ):
    return re.sub( '\s', '', s )
The second part of the question is about how to keep looping through the
keyword, until the text line is consumed. You can write this as:
def characterZip( keyword, textline ):
    res = []
    textline = removeWhitespace(textline)
    textlen = len(textline)
    for i in xrange(textlen):
        res.append( '%s%s' % (keyword[i%len(keyword)], textline[i]) )
    return res
Most pythonistas will look at this and see an opportunity for refactoring. The
pattern that this code is trying to achieve is, in functional-programming terms,
a `zip`. The quirk is that in this case you're doing something slightly non-
normative with the repeating characters of the keyword; this too has an
equivalent, the
[cycle](http://docs.python.org/library/itertools.html#itertools.cycle)
function in the itertools module.
from itertools import cycle, islice, izip
def characterZip( keyword, textline ):
    textline = removeWhitespace(textline)
    textlen = len(textline)
    it = islice( izip(cycle(keyword), textline), textlen )
    return [ '%s%s' % val for val in it ]
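A quick check with the question's inputs (note that only whitespace is stripped
here, so the '!' still consumes a keyword letter, unlike the question's expected
output):
print characterZip("lemon", "hi there!")
# ['lh', 'ei', 'mt', 'oh', 'ne', 'lr', 'ee', 'm!']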
|
wsgi application is using an older python version
Question: I have deployed a Pyramid app using mod_wsgi.
I have setup the python path in the virtualhost:
WSGIDaemonProcess MyApp user=myUser group=staff threads=4 python-path=/home/myapp/env/lib/python2.7/site-packages
WSGIScriptAlias / /home/myapp/env/pyramid.wsgi
for debugging purposes, in pyramid.wsgi, I have also put:
import sys
print(sys.path)
print(sys.version)
When I visit the app I can see the app is using python 2.6 instead of 2.7!
The sys.path outputs this:
['/home/myapp/env/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg',
'/home/myapp/env/lib/python2.7/site-packages/pip-1.0.1-py2.7.egg',
'/home/myapp/env/lib/python2.7/site-packages',
'/home/myapp/env/lib/python2.7/site-packages/PIL',
'/opt/local/lib/python2.6/site-packages/setuptools-0.6c12dev_r88846-py2.6.egg',
'/opt/local/lib/python2.6/site-packages/virtualenv-1.6.1-py2.6.egg',
'/opt/local/lib/python26.zip',
'/opt/local/lib/python2.6',
'/opt/local/lib/python2.6/plat-sunos5',
'/opt/local/lib/python2.6/lib-tk',
'/opt/local/lib/python2.6/lib-old',
'/opt/local/lib/python2.6/lib-dynload',
'/opt/local/lib/python2.6/site-packages']
You can see the Python 2.6 paths are there, but if I ssh to the server and
execute `python` it launches Python 2.7.
Where does 2.6 come from? Which user (apache?) is calling this wsgi app, so I
can change its Python environment?
Please help!
Answer: mod_wsgi doesn't care what version `python` is. It's built against the Python
library itself, so if you want it to use a different version then you need to
rebuild it.
|
Why is the http request hanging in my python script?
Question: One of my scripts runs perfectly on an XP system, but the exact same script hangs on
a 2003 system. I always use mechanize to send the HTTP request; here's an
example:
import socket, mechanize, urllib, urllib2
socket.setdefaulttimeout(60) #### No idea why it's not working
MechBrowser = mechanize.Browser()
Header = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 GTB7.1 (.NET CLR 3.5.30729)', 'Referer': 'http://www.porn-w.org/ucp.php?mode=login'}
Request = urllib2.Request("http://google.com", None, Header)
Response = MechBrowser.open(Request)
I don't think there's anything wrong with my code, but each time it comes
to a certain HTTP POST request to a specific URL, it hangs on that 2003
computer (only on that URL). What could be the reason for all this, and how
should I debug it?
By the way, the script ran all right until several hours ago, and no settings
were changed.
Answer: You could use [Fiddler](http://www.fiddler2.com/fiddler2/) or [Wire
Shark](http://www.wireshark.org/) to see what is happening at the HTTP level.
It is also worth checking whether the machine has been blocked from making
requests to the host you are trying to access. Use a regular browser (with
your own HTML form), and then the HTTP library used by Mechanize, and see if you can
manually construct the request. Fiddler can also help you do this.
|
How to open files given as command line arguments in python?
Question: I want my .py file to accept file I give as input in command line. I used the
sys.argv[] and also fileinput but I am not getting the output.
Answer: If you will write the following script:
#!/usr/bin/env python
import sys
with open(sys.argv[1], 'r') as my_file:
    print(my_file.read())
and run it, it will **display the content of the file** whose name you pass
**in the first argument** like that:
./my_script.py test.txt
(in the above example this file will be `test.txt`).
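Since the question also mentions `fileinput`, here is an equivalent sketch with
that module; it iterates over the lines of every file named on the command line
(or over stdin when no file names are given):
#!/usr/bin/env python
import fileinput
import sys

for line in fileinput.input():
    sys.stdout.write(line)
You run it the same way, e.g. `./my_script.py test.txt`.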
|
It appears I've run out of 32-bit address space. What are my options?
Question: I'm trying to take the covariance of a large matrix using `numpy.cov`. I get
the following error:
Python(22498,0xa02e3720) malloc: *** mmap(size=1340379136) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Process Python bus error
It seems that this is not uncommon for 32-bit machines/builds (I have a 64-bit
mac os x 10.5, but using a 32-bit python and numpy build as I had trouble
building numpy+scipy+matplotlib on a 64-bit installation).
So at this point what would be the recommended course of action that will
allow me to proceed with the analysis, if not switching machines (none others
are available to me at the moment)? Export to fortran/C? Is there a simple(r)
solution? Thanks for your suggestions.
Answer: In your place, I would try to "pickle" (save) the matrix to my hard
drive, close Python, then from the command line re-open the pickled file and do the
computation on a "fresh" Python instance.
I would do that because maybe your problem occurs before computing the covariance.
import cPickle
import numpy
import os

M = numpy.array([[1,2],[3,4]]) # here it will be your matrix
cPickle.dump( M, open( os.path.expanduser("~/M.pic"), "w") ) # here is where you pickle the file
Here you close Python. Your file should be saved in your home directory as
"M.pic".
import cPickle
import numpy
import os

M = cPickle.load( open( os.path.expanduser("~/M.pic"), "r") )
M = numpy.cov( M )
If it still does not work, try setting a "good" dtype for your data. numpy
seems to use dtype 'float64' or 'int64' by default. This is huge, and if you do
not need this precision you might want to reduce it to 'int32' or 'float32':
import numpy
M = numpy.array([[1,2],[3,4]], dtype=numpy.float32)
Indeed, I can guarantee you that C/Fortran is not an option for you. Numpy is
already written in C/Fortran, and probably by people cleverer than you and me
;)
Out of curiosity, how big is your matrix? How big is your pickled file?
|
ml-py svm converges but classifying wrongly
Question: I am trying to do a classification task with Python and SVM.
From the collected data I extracted the feature vectors for each class and created
a training set. The feature vectors have n dimensions (39 or more). So, say for
2 classes I have a set of 39-d feature vectors and a single array of class
labels corresponding to each entry in the feature vectors. Currently, I am using
mlpy and doing something like this:
import numpy as np
import mlpy

svm = mlpy.Svm('gaussian') # tried a linear kernel too but not having the convergence

instance = np.vstack((featurevector1,featurevector1))
label = np.hstack((np.ones((1,len(featurevector1)), dtype=int),
                   -1*np.ones((1,len(featurevector2)), dtype=int)))
# Assigning a label (+1/-1) for each entry in instance (+1 for entries coming
# from featurevector1 and -1 for featurevector2)

svm.compute(instance,label) # it converges and outputs 1
svm.predict(testdata) # This one says all class labels are 1 only, whereas I have testing data from both classes
Am I doing some mistake here? Or should I use some other library? Please help.
Answer: I don't use mlpy, but `np.ones((1,len(featurevector1)))` should perhaps be just
`np.ones(len(featurevector1))` --
print the `.shape` of each to see the difference.
(If you have a link to public data anything like yours, could you post it
please?)
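The shape difference looks like this:
import numpy as np

print np.ones((1, 3)).shape   # (1, 3) -- a 2-D array with a single row
print np.ones(3).shape        # (3,)   -- a flat 1-D array
print np.hstack((np.ones(3, dtype=int), -1 * np.ones(2, dtype=int)))
# [ 1  1  1 -1 -1]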
|
Plone 4: how to get the context for a sub-folder with a hyphen in the shortname
Question: I am writing a python script to import content from another CMS into Plone
4.1. For a number of reasons I am running it like so: `bin/instance run
path/to/myscript`
The question I have is how to get the correct context for a folder with a
hyphen in the ID/shortname. For example from the root of a plone site called
mysite, I can work with a folder called "sub-folder" like so:
from Products.CMFCore.utils import getToolByName
urltool = getToolByName(app.mysite, "portal_url")
portal = urltool.getPortalObject()
folder = getattr(portal, 'sub-folder')
But if I then want to create a folder or page within that sub-folder, the
following throws an error: "AttributeError: sub"
urltool = getToolByName(app.mysite.sub-folder, "portal_url")
portal = urltool.getPortalObject()
And performing the same on the News folder, (which has no hyphen) produces no
error:
urltool = getToolByName(app.mysite.news, "portal_url")
portal = urltool.getPortalObject()
Simply trying portal.sub-folder throws the same Error.
So what would be the python code to get the proper context of
"http://localhost:8080/mysite/sub-folder" so that I can then successfully call
the invokeFactory method and create a folder or page within mysite/sub-folder?
What if I needed to find the context of "http://localhost:8080/mysite/sub-
folder/2nd-level" ?
The online documentation I have found seems to only account for folders named
dog or news, which have no hyphen in the ID/Shortname. However, if you create
these items by hand in Plone, the shortnames obviously have hyphens, and so
there must be a way to get the correct folder context.
Answer: That's because if you use:
app.mysite.sub-folder
Python thinks that you're trying to compute the difference between `app.mysite.sub`
and `folder`. Instead you have to use this syntax:
secondlevel = mysite['sub-folder']['2nd-level']
or
secondlevel = mysite.restrictedTraverse('/mysite/sub-folder/2nd-level')
|
Get first N key pairs from an Ordered Dictionary to another one in python
Question: I have an ordered dictionary (OrderedDict) sorted by value. How can I get the
top (say 25) key values and add them to a new dictionary? For example: I have
something like this
dictionary={'a':10,'b':20,'c':30,'d':5}
ordered=OrderedDict(sorted(dictionary.items(), key=lambda x: x[1],reverse=True))
Now ordered is an ordered dictionary, I want to create a dictionary, say by
taking the top 2 most frequent items and their keys
frequent={'c':30,'b':20}
Answer: The primary purpose of OrderedDict is retaining the order in which the
elements were created. What you want here is
[`collections.Counter`](http://docs.python.org/library/collections.html#collections.Counter),
which has the n-most-frequent functionality built-in:
>>> dictionary={'a':10,'b':20,'c':30,'d':5}
>>> import collections
>>> collections.Counter(dictionary).most_common(2)
[('c', 30), ('b', 20)]
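If you still need the result as a dictionary (the `frequent` dict from the
question), the pairs can be fed straight back into `dict` or `OrderedDict`:
>>> from collections import Counter, OrderedDict
>>> OrderedDict(Counter(dictionary).most_common(2))
OrderedDict([('c', 30), ('b', 20)])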
|
Python Threading With Concurrency
Question: I wrote the following code while trying to learn threading in python.
import threading
import time
def printWorker(x,y):
    t = time.time()
    while time.time() - t < 10:
        print "Name:%s Time:%s" %(y,str(time.time() - t))
        time.sleep(x)
t1 = threading.Thread(target = printWorker(2,'Thread-1'))
t2 = threading.Thread(target = printWorker(3,'Thread-2'))
t1.start()
t2.start()
I'm trying to get an output where both Thread-1 and Thread-2 start at the same
time, i.e. print
Thread-1 Stuff, Thread-2 Stuff, Thread-1 Stuff, Thread-2 Stuff, instead of
Thread-1 Stuff, Thread-1 Stuff, Thread-1 Stuff, Thread-1 Stuff, Thread-2
Stuff, Thread-2 Stuff, Thread-2 Stuff, Thread-2 Stuff.
Instead, Thread-2 only starts after Thread-1 finishes. I've checked online examples but
I don't understand what I'm doing wrong mechanically.
Answer: To pass arguments you need to do this:
t1 = threading.Thread(target=printWorker, args=(2, 'Thread-1'))
t2 = threading.Thread(target=printWorker, args=(3, 'Thread-2'))
Your code is invoking printWorker on the main thread and starting two threads
with target=None (the return value of printWorker).
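Putting it together, a sketch of the corrected snippet (with `join` calls added
so the script waits for both workers before exiting):
t1 = threading.Thread(target=printWorker, args=(2, 'Thread-1'))
t2 = threading.Thread(target=printWorker, args=(3, 'Thread-2'))
t1.start()
t2.start()
t1.join()   # wait for both workers to finish their 10-second loops
t2.join()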
|
wxpython cut copy and paste throughout the application
Question: I'm developing a small application with multiple TextCtrl and ComboBox
widgets. I want pressing Ctrl-C, Ctrl-V and Ctrl-X to give the
usual behaviour of copying, pasting and cutting in the appropriate entry.
What I get right now is that, while I can right-click and have the text
copied/pasted/cut, I can't do it through the keybindings or the menu entries. How can I
obtain this in a simple way?
Answer: Menu keybindings work by default with Alt-first_menu_letter ->
submenu_first_letter.
The menu event of the selected item should bind the corresponding event
handler:
self.Bind(wx.EVT_MENU, self.on_copy, self.copy)
For a copy method, you first select the text you want to copy with the mouse.
Then you can get the widget that is focused (the specific textctrl with the
selected string to be copied) with:
widget = self.FindFocus()
In this way you can now get the selected string from that widget:
self.copied = widget.GetStringSelection()
The same has to be done for pasting the copied text into the textctrl where you
place the cursor. Here you have a working example:
import wx
class MyFrame(wx.Frame):
    def __init__(self, *args, **kwds):
        kwds["style"] = wx.DEFAULT_FRAME_STYLE
        wx.Frame.__init__(self, *args, **kwds)
        self.tctrl_1 = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE)
        self.tctrl_2 = wx.TextCtrl(self, -1, "", style=wx.TE_MULTILINE)
        self.menubar = wx.MenuBar()
        self.test = wx.Menu()
        self.copy = wx.MenuItem(self.test, wx.NewId(), "copy", "is_going to copy", wx.ITEM_NORMAL)
        self.test.AppendItem(self.copy)
        self.paste = wx.MenuItem(self.test, wx.NewId(), "paste", "will paste", wx.ITEM_NORMAL)
        self.test.AppendItem(self.paste)
        self.menubar.Append(self.test, "Test")
        self.SetMenuBar(self.menubar)
        self.__set_properties()
        self.__do_layout()
        self.Bind(wx.EVT_MENU, self.on_copy, self.copy)
        self.Bind(wx.EVT_MENU, self.on_paste, self.paste)

    def __set_properties(self):
        self.SetTitle("frame_1")

    def __do_layout(self):
        sizer_1 = wx.BoxSizer(wx.VERTICAL)
        sizer_2 = wx.BoxSizer(wx.HORIZONTAL)
        sizer_2.Add(self.tctrl_1, 1, wx.EXPAND, 0)
        sizer_2.Add(self.tctrl_2, 1, wx.EXPAND, 0)
        sizer_1.Add(sizer_2, 1, wx.EXPAND, 0)
        self.SetSizer(sizer_1)
        sizer_1.Fit(self)
        self.Layout()

    def on_copy(self, event):
        widget = self.FindFocus()
        self.copied = widget.GetStringSelection()

    def on_paste(self, event):
        widget = self.FindFocus()
        widget.WriteText(self.copied)

if __name__ == "__main__":
    app = wx.PySimpleApp(0)
    frame = MyFrame(None, -1, "")
    frame.Show()
    app.MainLoop()
|
Python Popen WHOIS OS command fail test
Question: Prefacing this with 'just another beginner'. When you have the result
of a whois command via Popen, how do you test whether it's good?
Normally when Python returns a list you can test its length, and that has
usually sufficed for me, but this is a little more arbitrary.
E.g. I'm testing for a domain's country of origin, but sometimes the domains that
gethostbyaddr gives me are not recognised by the WHOIS server. So I thought I
would fall back to sending it an IP in case of failure, but I've ended up with this
not-so-pretty "less than 70 characters" test. Just wondering if anyone knows
what the 'standard' way of doing this is.
w = Popen(['whois', domain], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
whois_result = w.communicate()[0]
print len(whois_result)
if len(whois_result) <= 70:
    w = Popen(['whois', p_ip], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
    whois_result = w.communicate()[0]
    print len(whois_result)
    if len(whois_result) <= 70:
        print "complete and utter whois failure, its you isnt it, not me."

test = re.search("country.+([A-Z].)",whois_result)
countryid = test.group(1)
Answer: To answer your direct question, look for this string in the output of a
`whois` command to see whether there was a problem...
> No match for "insert_domain_here"
To address other meaningful issues to your task... your `Popen` command is
going at things the hard way... you don't need a `PIPE` for `stdin` and you
can call `.communicate()` directly on the `Popen` to make this a bit more
efficient... I rewrote with what I think you have in mind...
from subprocess import Popen, PIPE, STDOUT
import re
## Text result of the whois is stored in whois_result...
whois_result = Popen(['whois', domain], stdout=PIPE,
                     stderr=STDOUT).communicate()[0]

if 'No match for' in whois_result:
    print "Processing whois failure on '%s'" % domain
    whois_result = Popen(['whois', p_ip], stdout=PIPE,
                         stderr=STDOUT).communicate()[0]
    if 'No match for' in whois_result:
        print "complete and utter whois failure, its you isnt it, not me."

test = re.search("country.+([A-Z].)",whois_result)
countryid = test.group(1)
|
How to use multiple variables read from file with looping subprocess/popen
Question: I am using python to read 2 files from my linux os. One contains a single
entry/number 'DATE':
>
> 20111125
>
the other file contains many entries, 'TIME':
>
> 042844UTC
> 044601UTC
> ...
> 044601UTC
>
I am able to read the files to assign to proper variables. I would like to
then use the variables to create folder paths, move files etc... such as:
>
> $PATH/20111125/042844UTC
> $PATH/20111125/044601UTC
> $PATH/20111125/044601UTC
>
and so on.
Somehow this doesn't work with multiple variables passed at once:
import subprocess, sys, os, os.path
DATEFILE = open('/Astronomy/Sorted/2-Scratch/MAPninox-DATE.txt', "r")
TIMEFILE = open('/Astronomy/Sorted/2-Scratch/MAPninox-TIME.txt', "r")
for DATE in DATEFILE:
    print DATE,
    for TIME in TIMEFILE:
        os.popen('mkdir -p /Astronomy/' + DATE + '/' + TIME) # this line works for DATE only
        os.popen('mkdir -p /Astronomy/20111126/' + TIME) # this line works for TIME only
        subprocess.call(['mkdir', '-p', '/Astronomy/', DATE]), #THIS LINE DOESN'T WORK
Answer: I would suggest using `os.makedirs` (which does the same thing as `mkdir -p`)
instead of `subprocess` or `popen`:
import sys
import os
DATEFILE = open(os.path.join(r'/Astronomy', 'Sorted', '2-Scratch', 'MAPninox-DATE.txt'), "r")
TIMEFILE = open(os.path.join(r'/Astronomy', 'Sorted', '2-Scratch', 'MAPninox-TIME.txt'), "r")
for DATE in DATEFILE:
    print DATE,
    for TIME in TIMEFILE:
        os.makedirs(os.path.join(r'/Astronomy', DATE, TIME))
        astrDir = os.path.join(r'/Astronomy', '20111126', TIME)
        try:
            os.makedirs(astrDir)
        except os.error:
            print "Dir %s already exists, moving on..." % astrDir
        # etc...
Then use [`shutil`](http://docs.python.org/library/shutil.html) for any
`cp`/`mv`/etc operations.
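For example (a sketch continuing the loop above; the source file name is
hypothetical):
import shutil

src = '/Astronomy/incoming/capture001.dat'        # hypothetical file to sort
dst = os.path.join(r'/Astronomy', DATE, TIME)
shutil.copy(src, dst)     # like "cp src dst"
#shutil.move(src, dst)    # like "mv src dst"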
* * *
From the [`os` Docs](http://docs.python.org/library/os.html):
> **`os.makedirs(path[, mode])`**
> Recursive directory creation function. Like `mkdir()`, but makes all
> intermediate-level directories needed to contain the leaf directory. Raises
> an error exception if the leaf directory already exists or cannot be
> created. The default mode is 0777 (octal). On some systems, mode is ignored.
> Where it is used, the current umask value is first masked out.
|
How to get Fabric to automatically (instead of user-interactively) interact with shell commands? Combine with pexpect?
Question: Seeking means to get [Fabric](http://fabfile.org) to automatically (instead of
user-interactively) interact with shell commands (and not just requests for
passwords, but also requested user input when no "stdin/interactive override"
like `apt-get install -y` is available).
[This question](http://stackoverflow.com/questions/5182857/can-i-use-fabric-
to-perform-interactive-shell-commands) along with these [Fabric
docs](http://docs.fabfile.org/en/1.3.3/usage/interactivity.html?highlight=interactive)
suggest that Fabric can only "push the interactivity" back to the human user
that's running the Fabric program. Seeking to instead fully automate without
any human presence. Don't yet have a "real," current problem to solve, just
preparing for possible, future obstacle.
Possibly useful to combine with [pexpect](http://www.noah.org/python/pexpect/)
(or similar, alternative mechanism) if Fabric can't exclusively handle all
stdin/prompts automatically? Hoping it doesn't need to be an ["either/or" kind
of thing](http://stackoverflow.com/questions/4200267/fabric-vs-pexpect). Why
not leverage both (pexpect and Fabric) where appropriate, if applicable, in
same program/automation?
Answer: As Glenn says, I would use pexpect; in addition,
have a look at this wrapper I wrote to script the pexpect behaviour from
fabric:
from ilogue.fexpect import expect, expecting, run
prompts = []
prompts += expect('What is your name?','John')
prompts += expect('Where do you live?','New York')
with expecting(prompts):
    run('command')
See also my blogpost on [fexpect or how to handle prompts in fabric with
pexpect](http://ilogue.com/jasper/blog/fexpect--dealing-with-prompts-in-
fabric-with-pexpect/)
|
Python 3.x and TestLink xmlrpc
Question: I appreciate your help first; I am new to Python 3.x. When I try to use
Python 3.x to talk to the TestLink xmlrpc server, I get the error below, but I can
run the code under Python 2.x. Any idea?
import xmlrpc.client
server = xmlrpc.client.Server("http://172.16.29.132/SITM/lib/api/xmlrpc.php") # here is my testlink server
print (server.system.listMethods()) # I can print the methods list here
print (server.tl.ping()) # Got error.
Here is the error:
['system.multicall', 'system.listMethods', 'system.getCapabilities', 'tl.repeat', 'tl.sayHello', 'tl.ping', 'tl.setTestMode', 'tl.about', 'tl.checkDevKey', 'tl.doesUserExist', 'tl.deleteExecution', 'tl.getTestSuiteByID', 'tl.getFullPath', 'tl.getTestCase', 'tl.getTestCaseAttachments', 'tl.getFirstLevelTestSuitesForTestProject', 'tl.getTestCaseCustomFieldDesignValue', 'tl.getTestCaseIDByName', 'tl.getTestCasesForTestPlan', 'tl.getTestCasesForTestSuite', 'tl.getTestSuitesForTestSuite', 'tl.getTestSuitesForTestPlan', 'tl.getLastExecutionResult', 'tl.getLatestBuildForTestPlan', 'tl.getBuildsForTestPlan', 'tl.getTotalsForTestPlan', 'tl.getTestPlanPlatforms', 'tl.getProjectTestPlans', 'tl.getTestPlanByName', 'tl.getTestProjectByName', 'tl.getProjects', 'tl.addTestCaseToTestPlan', 'tl.assignRequirements', 'tl.uploadAttachment', 'tl.uploadTestCaseAttachment', 'tl.uploadTestSuiteAttachment', 'tl.uploadTestProjectAttachment', 'tl.uploadRequirementAttachment', 'tl.uploadRequirementSpecificationAttachment', 'tl.uploadExecutionAttachment', 'tl.createTestSuite', 'tl.createTestProject', 'tl.createTestPlan', 'tl.createTestCase', 'tl.createBuild', 'tl.setTestCaseExecutionResult', 'tl.reportTCResult']
Traceback (most recent call last):
File "F:\SQA\Python\Testlink\Test.py", line 5, in <module>
print (server.tl.ping())
File "C:\Python31\lib\xmlrpc\client.py", line 1029, in __call__
return self.__send(self.__name, args)
File "C:\Python31\lib\xmlrpc\client.py", line 1271, in __request
verbose=self.__verbose
File "C:\Python31\lib\xmlrpc\client.py", line 1070, in request
return self.parse_response(resp)
File "C:\Python31\lib\xmlrpc\client.py", line 1164, in parse_response
p.feed(response)
File "C:\Python31\lib\xmlrpc\client.py", line 454, in feed
self._parser.Parse(data, 0)
xml.parsers.expat.ExpatError: junk after document element: line 2, column 0
Answer: When I've seen this message before, it happened because the contents of the
transported data wasn't escaped for XML transport. The solution was to wrap
the data in an _[XMLRPC Binary
object](http://docs.python.org/py3k/library/xmlrpc.client.html#binary-
objects)_.
In your case, you don't control the server side, so the above isn't a solution
for you but it may suggest what the actual problem is.
Also, the Python 2 versus Python 3 difference suggests that there is a
text/bytes issue at work.
To help diagnose the issue, set `verbose=True` so you can see the actual HTTP
request/response headers and the XML request/response. That may show you what
is at `line 2: column 0`. You may find that the issue may be with the PHP
script not wrapping up binary data in base64 encoding as required by the
[XMLRPC spec](http://xmlrpc.scripting.com/spec.html).
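A sketch of turning on that transport debugging (using the URL from the question):
import xmlrpc.client

server = xmlrpc.client.ServerProxy(
    "http://172.16.29.132/SITM/lib/api/xmlrpc.php", verbose=True)
print(server.tl.ping())   # the raw HTTP request/response is dumped before the parse error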
|
Pass arguments to a slot using QTimer
Question: I have written a pyQt client-server application. (python:3.2.2 , pyQT:4.8.6)
The sender sends a message to the listening receivers, and the receivers send
a response to the sender. I dont want the response to be sent instantly, but
after a small delay.
This is part of the receiver code:
----------------------------- msghandler.py-----------------------------------
class MsgHandler(QObject):
    def __init__(self):
        QObject.__init__(self)
        self.mSec1Timer = None

    def setParentUI(self, p):
        self.parentUI = p

    def handle_ask(self, ID, stamp, length, cargo, peerSocket):
        print("Incoming:ASK")
        #self.mSec1Timer.timeout.connect(lambda:self.send_msg_reply(peerSocket))
        self.mSec1Timer.timeout.connect(self.dummyFunc) #Works
        self.mSec1Timer.timeout.connect(lambda:self.dummyFunc())#See Rem-1
        self.mSec1Timer.timeout.connect(lambda:self.shouldGiveError())#See Rem-2
        self.parentUI.timerStart.emit(5)

    @pyqtSlot(tuple)
    def send_msg_reply(self, peerSocket):
        print("This is not printed")
        self.mSec1Timer.timeout.disconnect()

    @pyqtSlot()
    def dummyFunc(self):
        print("dummy @ ",QDateTime.currentMSecsSinceEpoch())
        self.mSec1Timer.timeout.disconnect()
------------------------------------------------------------------------------
from msghandler import *

class DialogUIAgent(QDialog):
    timerStart = pyqtSignal(int)

    def __init__(self):
        QDialog.__init__(self)
        self.myHandler = MsgHandler()
        self.myHandler.setParentUI(self)
        self.myTimer = QTimer()
        self.myTimer.setSingleShot(True)
        self.myHandler.mSec1Timer = self.myTimer
        self.timerStart.connect(self.startMyTimer)

    @pyqtSlot(int)
    def startMyTimer(self, msec):
        self.myTimer.start(msec)
For testing the behaviour first, I used
`self.mSec1Timer.timeout.connect(self.dummyFunc)`, and the output was as
expected:
Incoming:ASK
dummy @ 1322491256315
Incoming:ASK
dummy @ 1322491260310
Incoming:ASK
dummy @ 1322491265319
Incoming:ASK
dummy @ 1322491270323
Incoming:ASK
dummy @ 1322491275331
But when I used
`self.mSec1Timer.timeout.connect(lambda:self.send_msg_reply(peerSocket))`, the
slot was never called. Output:
Incoming:ASK
Incoming:ASK
Incoming:ASK
Incoming:ASK
Why is this happening, and what can I do to fix it?
Thanks in advance.
* * *
**EDIT :**
Remark-1:
dummyFunc worked before, but it does not work with `lambda:self.dummyFunc()`
Remark-2:
I was expecting an error with `lambda:self.shouldGiveError()`, because there
is no such function, but instead I get nothing.
Is this a problem with the way I use lambda?
Answer: Have you taken a look at
[QObject.invokeMethod()](http://www.riverbankcomputing.com/static/Docs/PyQt4/html/qmetaobject.html)?
It has been supported since PyQt 4.4 and it should allow you to call a slot
with arguments when you don't have a signal that matches.
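For example, a minimal sketch of calling a slot with an argument this way (in PyQt the call lives on `QtCore.QMetaObject`; the class, slot name and the `int` argument here are made up for illustration, not taken from your code):
import sys
from PyQt4 import QtCore
class Handler(QtCore.QObject):
    @QtCore.pyqtSlot(int)
    def delayed_reply(self, value):
        print("reply with", value)
        QtCore.QCoreApplication.quit()
app = QtCore.QCoreApplication(sys.argv)
handler = Handler()
# Queue a call to the slot, passing the argument explicitly; it is
# delivered through the receiver's event loop once app.exec_() runs.
QtCore.QMetaObject.invokeMethod(handler, 'delayed_reply',
                                QtCore.Qt.QueuedConnection,
                                QtCore.Q_ARG(int, 42))
app.exec_()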
|
Ultimate answer to relative python imports
Question: I know that there are lots of questions about the same import issues in Python
but it seems that nobody managed to provide a clear example of correct usage.
Let's say that we have a package `mypackage` with two modules `foo` and `bar`.
Inside `foo` we need to be able to access `bar`.
Because we are still developing it, `mypackage` is not in `sys.path`.
We want to be able to:
* import `mypackage.foo`
* run `foo.py` as a script and execute the sample usage or tests from the `__main__` section.
* use Python 2.5
How do we have to do the import in foo.py in order to be sure it will work in
all these cases.
# mypackage/__init__.py
...
# mypackage/foo/__init__.py
...
# mypackage/bar.py
def doBar():
print("doBar")
# mypackage/foo/foo.py
import bar # fails with module not found
import .bar #fails due to ValueError: Attempted relative import in non-package
def doFoo():
print(doBar())
if __name__ == '__main__':
doFoo()
Answer: Take a look at the following info from [PEP
328](http://www.python.org/dev/peps/pep-0328/#relative-imports-and-name):
> Relative imports use a module's `__name__` attribute to determine that
> module's position in the package hierarchy. If the module's name does not
> contain any package information (e.g. it is set to `'__main__'`) then
> relative imports are resolved as if the module were a top level module,
> regardless of where the module is actually located on the file system.
When you run `foo.py` as a script, that module's `__name__` is `'__main__'`,
so you cannot do relative imports. This would be true even if `mypackage` was
on `sys.path`. Basically, you can only do relative imports from a module if
that module was imported.
Here are a couple of options for working around this:
1) In `foo.py`, check if `__name__ == '__main__'` and conditionally add
`mypackage` to `sys.path`:
if __name__ == '__main__':
import os, sys
# get an absolute path to the directory that contains mypackage
foo_dir = os.path.dirname(os.path.join(os.getcwd(), __file__))
sys.path.append(os.path.normpath(os.path.join(foo_dir, '..', '..')))
from mypackage import bar
else:
from .. import bar
2) Always import `bar` using `from mypackage import bar`, and execute `foo.py`
in such a way that `mypackage` is visible automatically:
$ cd <path containing mypackage>
$ python -m mypackage.foo.foo
|
Is it possible to precompile an entire python package?
Question: We have a significant (~50 kloc) tree of packages/modules (approx. 2200 files)
that we ship around to our cluster with each job. The jobs run for ~12 hours,
so the overhead of untarring/bootstrapping (i.e. resolving PYTHONPATH for each
module) usually isn't a big deal. However, as the number of cores in our
worker nodes has increased, we've increasingly hit the case where the
scheduler will have 12 jobs land simultaneously, which will grind the poor
scratch drive to a halt servicing all the requests (worse, for reasons beyond
our control, each job requires a separate loopback filesystem, so there are 2
layers of indirection on the drive).
Is there a way to hint to the interpreter the proper location of each file
(without decorating the code with paths strewn throughout (maybe overriding
import?)) or bundle up all the associated .pyc files into some sort of binary
blob that can just be read once?
Thanks!
Answer: We had problems like this on our cluster. (The Lustre filesystem was slow for
metadata operations.) Our solution was to use the "[zip
import](http://docs.python.org/library/zipimport.html)" facilities in Python.
In our case we made a single zip of the stdlib (placed in the name given
already in sys.path, like "/usr/lib/python26.zip") and another zip of our
project, with the latter added to the PYTHONPATH.
This is much faster because it's a single filesystem metadata read, followed
by a quick zip file read of the table-of-contents to figure out what's inside,
and cache for later lookups.
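A rough sketch of that setup (the paths and names here are placeholders, not the ones we actually used):
import os
import zipfile
# Bundle the package tree into a single archive once, at deploy time.
zf = zipfile.ZipFile('myproject.zip', 'w', zipfile.ZIP_DEFLATED)
for root, dirs, files in os.walk('myproject'):
    for name in files:
        if name.endswith('.py') or name.endswith('.pyc'):
            path = os.path.join(root, name)
            zf.write(path, path)  # keep the package-relative path inside the zip
zf.close()
# At run time the archive only needs to be on the import path, e.g.
#   export PYTHONPATH=/scratch/myproject.zip:$PYTHONPATH
# or, from inside the job's bootstrap script:
#   import sys; sys.path.insert(0, '/scratch/myproject.zip')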
|
Undefined Symbol in C++ When Loading a Python Shared Library
Question: I have been trying to get a project of mine to run but I have run into
trouble. After much debugging I have narrowed down the problem but have no
idea how to proceed.
Some background: I am using a Python script inside C++ code. This is somewhat
documented for Python, and I managed to get it running very well in my basic
executable - an `#include <Python.h>`, a -lpython2.6, and everything was grand.
However, difficulty has arisen when running this python script from a shared
library(.so). This shared library is "loaded" as a "module" by a simulation
system (OpenRAVE). The system interacts with this module using a virtual
method for "modules" called SendCommand. The module then starts a
boost::thread, giving python its own thread, and returns to the simulation
system. However, when python begins importing its modules and thus loading its
dynamic libraries it fails, I assume due to the following error:
ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so: undefined symbol: _Py_ZeroStruct
I have run ldd on my executable and the shared library, and there doesn't seem to
be a difference. I have also run nm -D on the file above; the _Py_ZeroStruct
symbol is indeed undefined. If you would like printouts of the commands I would
be glad to supply them. Any advice would be greatly appreciated, thank you.
Here is the full python error:
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/numpy/__init__.py", line 130, in
import add_newdocs
File "/usr/lib/python2.6/dist-packages/numpy/add_newdocs.py", line 9, in
from lib import add_newdoc
File "/usr/lib/python2.6/dist-packages/numpy/lib/__init__.py", line 4, in
from type_check import *
File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line 8, in
import numpy.core.numeric as _nx
File "/usr/lib/python2.6/dist-packages/numpy/core/__init__.py", line 5, in
import multiarray
ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so: undefined symbol: _Py_ZeroStruct
Traceback (most recent call last):
File "/home/constantin/workspace/OpenRAVE/src/grasp_behavior_2.py", line 3, in
from openravepy import *
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 35, in
openravepy_currentversion = loadlatest()
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 16, in loadlatest
return _loadversion('_openravepy_')
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 19, in _loadversion
mainpackage = __import__("openravepy", globals(), locals(), [targetname])
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/_openravepy_/__init__.py", line 29, in
from openravepy_int import *
ImportError: numpy.core.multiarray failed to import
Answer: I experienced the same problem with my application and solved it **without
linking** python to the executable.
The setup is as follows:
Executable --links--> library --dynamically loads--> plugin --loads--> Python
interpreter
The solution to avoid the ImportErrors was to change the parameters of dlopen,
with which the plugin was loaded to `RTLD_GLOBAL`.
dlopen("plugin.so", RTLD_NOW | RTLD_GLOBAL)
This makes the symbols available to other things loaded afterwards, i.e. other
plugins or the python interpreter.
It can, however, happen that symbol clashes occur, because a plugin later
exports the same symbols.
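If you cannot change how the host loads the plugin, a workaround that sometimes works is to re-open libpython with global symbol visibility from inside the embedded interpreter, before the failing imports run. This is only a sketch; the library name is an assumption and depends on how Python was built on your system:
import ctypes
# Re-load the Python runtime with RTLD_GLOBAL so extension modules loaded
# afterwards (numpy's multiarray.so, openravepy_int, ...) can resolve
# interpreter symbols such as _Py_ZeroStruct.
ctypes.CDLL("libpython2.6.so.1.0", mode=ctypes.RTLD_GLOBAL)
import numpy  # should now import without the undefined-symbol error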
|
ValueError: Cannot assign in django
Question: I have encountered a problem when trying to add/post data to my
models. This is what I have done in my `python manage.py shell`:
>>> from booking.models import *
>>> qa = Product.objects.get(id=5)
>>> sd = Booking.objects.create(
... date_select = '2011-11-29',
... product_name = qa.name,
... quantity = 1,
... price = qa.price,
... totalcost = 20,
... first_name = 'lalala',
... last_name = 'sadsd',
... contact = '2321321',
... product = qa.id)
Traceback (most recent call last):
File "<console>", line 10, in <module>
File "/usr/local/lib/python2.7/dist-packages/django/db/models/manager.py", line 138, in create
return self.get_query_set().create(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 358, in create
obj = self.model(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/base.py", line 352, in __init__
setattr(self, field.name, rel_obj)
File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/related.py", line 331, in __set__
self.field.name, self.field.rel.to._meta.object_name))
ValueError: Cannot assign "5": "Booking.product" must be a "Product" instance.
I don't have any idea why that is happening... isn't `product = qa.id`
equivalent to `Booking.product`?
Here is my models.py:
from django.db import models
# Create your models here.
class Product(models.Model):
name = models.CharField(max_length=50, null=True, unique=False)
quantity = models.IntegerField(max_length=10, null=True)
price = models.FloatField()
def __unicode__(self):
return self.name
class Booking(models.Model):
date_select = models.DateField(auto_now=False, auto_now_add=False)
product_name = models.CharField(max_length=50, blank=True, null=True)
quantity = models.IntegerField(max_length=10, blank=True, null=True)
price = models.FloatField()
totalcost = models.FloatField()
first_name = models.CharField(max_length=50, null=True, blank=True, unique=False)
last_name = models.CharField(max_length=50, null=True, blank=True, unique=False)
contact = models.CharField(max_length=50, null=True, blank=True, unique=False)
product = models.ForeignKey(Product)
def __unicode__(self):
return self.first_name
and my handlers.py
from django.utils.xmlutils import SimplerXMLGenerator
from piston.handler import BaseHandler
from booking.models import *
from piston.utils import rc, require_mime, require_extended, validate
class BookingHandler(BaseHandler):
allowed_method = ('GET', 'POST', 'PUT', 'DELETE')
fields = ('id', 'date_select', 'product_name', 'quantity', 'price','totalcost', 'first_name', 'last_name', 'contact', 'product')
model = Booking
def read(self, request, id):
if not self.has_model():
return rc.NOT_IMPLEMENTED
try:
inst = self.model.objects.get(id=id)
return inst
except self.model.MultipleObjectsReturned:
return rc.DUPLICATE_ENTRY
except self.model.DoesNotExist:
return rc.NOT_HERE
def create(self, request, *args, **kwargs):
if not self.has_model():
return rc.NOT_IMPLEMENTED
attrs = self.flatten_dict(request.POST)
if attrs.has_key('data'):
ext_posted_data = SimplerXMLGenerator(request.POST.get('data'))
attrs = self.flatten_dict(ext_posted_data)
try:
inst = self.model.objects.get(**attrs)
return rc.DUPLICATE_ENTRY
except self.model.DoesNotExist:
inst = self.model(**attrs)#error??
inst.save()
return inst
except self.model.MultipleObjectsReturned:
return rc.DUPLICATE_ENTRY
Can anyone give me a hand with my situation?
Thanks in advance...
Answer: You assign the product as
product = qa.id
While you should do
product = qa
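With that single change, the `create()` call from your shell session becomes:
sd = Booking.objects.create(
    date_select='2011-11-29',
    product_name=qa.name,
    quantity=1,
    price=qa.price,
    totalcost=20,
    first_name='lalala',
    last_name='sadsd',
    contact='2321321',
    product=qa)  # pass the Product instance itself, not qa.id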
|
Converting excel files to python to frequency
Question: Essentially I've got an Excel file with voltage in the first column and time
in the second. I want to find the period of the voltage, as it produces a
graph of voltage on the y axis against time on the x axis with a periodicity,
looking similar to a sine function.
To find the frequency I have uploaded my Excel file to Python, as I think this
will make it easier - there may be something I've missed that will simplify
this.
So far in python I have:
import xlrd
import numpy as N
import numpy.fft as F
import matplotlib.pyplot as P
wb = xlrd.open_workbook('temp7.xls') #LOADING EXCEL FILE
wb.sheet_names()
sh = wb.sheet_by_index(0)
first_column = sh.col_values(1) #VALUES FROM EXCEL
second_column = sh.col_values(2) #VALUES FROM EXCEL
Now how do I find the frequency from this?
Answer: I'm not sure how much you know about the Fourier transform, so forgive me if
this is too much background.
Your signal does not have "a frequency"; rather, it can be thought of as the
sum of many frequencies. The Fourier transform will tell you the weights of
all the frequencies that make up your signal. Unfortunately information may be
lost when sampling from the analog (continuous time) to digital (discrete
time) domain. This puts a constraint on the information we can get about
frequency - namely that the maximum frequency component we can determine is
related to the digital sampling rate ([Nyquist-Shannon
criterion](http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem)):
fs > 2B
Where `fs` is your sampling rate (samples/unit time, typically in Hz or
something like it), and `B` is the maximum frequency of your signal. If your
signal _actually_ has frequencies higher than `B` they will be "aliased" to
some value lower than `B`.
For your problem, all you have to do is this:
x = N.array(first_column)
X = F.fft(x)
Now `X` is the frequency-domain representation of your voltage signal. The
corresponding frequency axis covers `[0, fs)`, based on the sampling theorem.
So, what is `fs`? You need to calculate that by looking at the number of
samples you have divided by the total duration of your sampled signal (note
your units here):
fs = len(second_column) / second_column[-1]
Note that this representation of your signal will also (probably) be complex,
i.e. each frequency will have an associated amplitude and phase.
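For example, a rough sketch of picking out the dominant frequency (and hence the period) from `X`, skipping the DC component; it continues from the `x`, `X` and `fs` defined above:
magnitude = N.abs(X[1:len(X)//2])     # positive-frequency half, DC bin skipped
freqs = F.fftfreq(len(x), d=1.0/fs)   # frequency value of each FFT bin
peak_bin = N.argmax(magnitude) + 1    # +1 compensates for the skipped DC bin
print("Dominant frequency: %s" % abs(freqs[peak_bin]))
print("Period: %s" % (1.0 / abs(freqs[peak_bin])))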
Hopefully this helps, and hopefully I didn't cover a bunch of stuff you
already knew.
|
QTableView and custom ORM
Question: Do you have any ideas or examples of how to use `QTableView` with a custom ORM
written in Python (for example `Web2Py DAL`)?
So I have the result of a query and fields describing the properties of the
columns in that result:
ID (int) Name (str)
1 Lisa
2 Maria
I want to make a class `ResultSetModel` which can be tied to a `QTableView`. I
can have many objects of this class, each with its own query - like with
`QSqlQueryModel`. However, `QSqlQueryModel` deals with the SQL infrastructure in
Qt, while I have my own ORM to deal with the database.
Thank you
**UPDATE :**
Let's suppose I have a table with a lot of rows. I don't want to request all
of them and keep them in a model. I need a model that works together with
QTableView, requesting the next or previous records when the user scrolls down
or up the view.
QAbstractItemModel.fetchMore is interesting but doesn't do what I want.
You can see the Fetch More example in `examples/itemviews/fetchmore.py`. When
you scroll down to the end, it requests an additional portion of data, but
keeps the old records too. And it doesn't do the same when you scroll up.
Imagine that I have several million persons in the Persons table. I want to
request and keep in my model/memory only the records shown in the view.
What I am trying to achieve is shown here:
<http://www.youtube.com/watch?v=hQlE0rrr7wI>
I.e. once the view is shown, the underlying model requests as many rows as are
needed to show on the screen. As you scroll down/up, the other rows are
requested from the DB incrementally.
Answer: Here is a working example using Elixir and PySide. There should be a
`session.commit()` in there somewhere (on a "save" button click or something like
that). Other than that it's fully functional:
from elixir import *
from PySide import QtGui, QtCore
import operator, sys
class ColumnDescriptor(object):
# This holds properties controlling how each field looks/behaves in the GUI
def __init__(self, field_id):
self.id = field_id
self.verbose_name = self.id.capitalize().replace('_', ' ')
self.comment = None
class Person(Entity): #ORM entity class
#ORM entity fields
id_number = Field(Integer)
name = Field(Unicode(50))
def __init__(self, name, id_number):
self.name = name
self.id_number = id_number
class PersonView():
columns = []
col = ColumnDescriptor('id_number')
col.comment = "Person's identification code"
columns.append(col)
col = ColumnDescriptor('name')
col.verbose_name = 'Full name'
col.comment = "Person's full name"
columns.append(col)
def __init__(self):
self.total_records = Person.query.count()
def get_items(self, limit, offset = 0):
return Person.query.offset(offset).limit(limit).all()
class TableModel(QtCore.QAbstractTableModel):
#A one-size-fits-all model based on a view descriptor
numberPopulated = QtCore.Signal(int)
def __init__(self, view, editable = False, limit = 50):
super(TableModel, self).__init__()
self.view = view
self.editable = editable
self.current_page = 1
self.items_per_page = limit
self.items = view.get_items(self.items_per_page)
def columnCount(self, index):
return len(self.view.columns)
def rowCount(self, index):
return len(self.items)
def loadPage(self):
self.beginResetModel()
self.items = []
self.endResetModel()
self.items = self.view.get_items(self.items_per_page,
self.current_page * self.items_per_page)
self.beginInsertRows(QtCore.QModelIndex(), 0, len(self.items))
self.endInsertRows()
self.numberPopulated.emit(len(self.items))
def prevPage(self):
self.current_page = self.current_page - 1
self.loadPage()
def nextPage(self):
self.current_page = self.current_page + 1
self.loadPage()
def headerData(self, column, orientation, role):
if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
return self.view.columns[column].verbose_name
def data(self, index, role):
if index.isValid():
if (role == QtCore.Qt.DisplayRole) or (role == QtCore.Qt.EditRole):
field_name = self.view.columns[index.column()].id
value = self.items[index.row()].__getattribute__(field_name)
if value:
return unicode(value)
else:
return ''
def flags(self, index):
if not index.isValid():
return QtCore.Qt.ItemIsEnabled
else:
if self.editable:
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable | QtCore.Qt.ItemIsEditable
else:
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
class MyWindow(QtGui.QWidget):
def __init__(self):
super(MyWindow, self).__init__()
self.layout = QtGui.QVBoxLayout(self)
self.grid = QtGui.QTableView(self)
self.grid.setModel(TableModel(PersonView(), True))
self.layout.addWidget(self.grid)
self.layoutButtons = QtGui.QHBoxLayout(self)
self.layout.addLayout(self.layoutButtons)
self.btnPrevious = QtGui.QPushButton("Previous", self)
self.btnNext = QtGui.QPushButton("Next",self)
self.layoutButtons.addWidget(self.btnPrevious)
self.layoutButtons.addWidget(self.btnNext)
self.btnPrevious.clicked.connect(self.grid.model().prevPage)
self.btnNext.clicked.connect(self.grid.model().nextPage)
if __name__ == "__main__":
metadata.bind = "sqlite:///persons.sqlite"
setup_all(True)
#fill the table up, if empty
if not Person.query.all():
for n in range(1,1000):
p = Person(u'Person', n)
session.commit()
app = QtGui.QApplication(sys.argv)
win = MyWindow()
win.show()
app.exec_()
|
scrapy log handler
Question: I seek your help with the following 2 questions. First, how do I set handlers for
the different log levels, like in Python's logging module? Currently, I have
STATS_ENABLED = True
STATS_DUMP = True
LOG_FILE = 'crawl.log'
But the debug messages generated by Scrapy are also added to the log file.
Those are very long and, ideally, I would like the DEBUG level messages to be left
on standard error and the INFO messages to be dumped to my `LOG_FILE`.
Secondly, in the docs, it says `The logging service must be explicitly started
through the scrapy.log.start() function.` My question is, where do I run this
`scrapy.log.start()`? Is it inside my spider?
Answer: > Secondly, in the docs, it says `The logging service must be explicitly
> started through the scrapy.log.start() function`. My question is, where do I
> run this scrapy.log.start()? Is it inside my spider?
If you run a spider using `scrapy crawl my_spider`, the log is started
automatically if `STATS_ENABLED = True`.
If you start the crawler process manually, you can do `scrapy.log.start()`
before starting the crawler process.
from scrapy.crawler import CrawlerProcess
from scrapy.conf import settings
settings.overrides.update({}) # your settings
crawlerProcess = CrawlerProcess(settings)
crawlerProcess.install()
crawlerProcess.configure()
crawlerProcess.crawl(spider) # your spider here
log.start() # depends on LOG_ENABLED
print "Starting crawler."
crawlerProcess.start()
print "Crawler stopped."
**What little I know about your first question:**
Because you have to start the Scrapy log manually, you are free to use your
own logger.
I think you can copy the module `scrapy/scrapy/log.py` from the Scrapy sources, modify
it, import it instead of `scrapy.log` and run `start()` - Scrapy will then use
your log. In it there is a line in the function `start()` which says
`log.startLoggingWithObserver(sflo.emit, setStdout=logstdout)`.
Make your own observer (<http://docs.python.org/howto/logging-cookbook.html#logging-to-multiple-destinations>) and use it there.
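As a plain-Python illustration of that "multiple destinations" idea (this is just the standard `logging` module, not Scrapy's Twisted-based observer, so treat it as a sketch of the level-splitting logic only):
import logging
import sys
logger = logging.getLogger('mycrawler')
logger.setLevel(logging.DEBUG)
# DEBUG and above go to standard error...
console = logging.StreamHandler(sys.stderr)
console.setLevel(logging.DEBUG)
# ...while only INFO and above end up in the log file.
logfile = logging.FileHandler('crawl.log')
logfile.setLevel(logging.INFO)
logger.addHandler(console)
logger.addHandler(logfile)
logger.debug('only on stderr')
logger.info('on stderr and in crawl.log')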
|
python module import issues in command prompt
Question: I have installed some Python packages which I am able to access using IDLE but
not through the command shell window.
Here is the output from IDLE:
Python 2.7.2+ (default, Oct 4 2011, 20:03:08)
[GCC 4.6.1] on linux2
Type "copyright", "credits" or "license()" for more information.
==== No Subprocess ====
>>> import whoosh
Here is the output from my terminal:
pradeep@ubuntu:~$ python
Python 2.7.2 (default, Nov 28 2011, 23:56:33)
[GCC 4.6.1] on linux3
Type "help", "copyright", "credits" or "license" for more information.
>>> import whoosh
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named whoosh
How can I point the terminal Python to the IDLE Python packages? Why is the terminal
showing 'linux3' whereas IDLE is showing 'linux2'? Please help me with this path
issue. Thanks.
**Update1:**
Thanks all. Like most of you guessed, I have two different versions installed.
My IDLE path shows:
['/home/pradeep', '/usr/bin', '/usr/local/lib/python2.7/dist-packages/Whoosh-2.3.0-py2.7.egg', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntu-sso-client', '/usr/lib/python2.7/dist-packages/ubuntuone-client', '/usr/lib/python2.7/dist-packages/ubuntuone-control-panel', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-installer', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol']
My terminal path shows:
['', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7', '/usr/local/lib/python2.7/plat-linux3', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages']
How do I remove the terminal version and make the IDLE version the one the terminal uses?
Thanks.
Answer: You're running two different Python installs, one dated 10/4/2011 and the
other dated 11/28/2011. The second one doesn't have _whoosh_ installed.
Your options are:
1. Look for the version that IDLE uses and run it from the command-line. To find it, turn on IDLE and run `import sys; print sys.executable`. That will show you the location of the version with the packages installed.
2. Or you can beef up your command-line version by installing those same packages at the command-line (i.e. run `python setup.py install` for the various packages you want to load).
|
More functional os.walk
Question: Since I need to do many traversals of directories, with some complex
filtering, I thought I would create a wrapper around os.walk,
which is something like this:
def fwalk(root, pred_dir, pred_files, walk_function=walk):
"""Wrapper function around the standard os.walk, that filter out
the directories visited using a filtering predicate
"""
for base, dirs, files in walk_function(root):
# ignore also the root directory when not needed, which is
# actually more important than the subdirectories
dirs = [d for d in dirs if pred_dir(path.join(base, d))]
files = [f for f in files if pred_files(path.join(base, f))]
if _ignore_dirs_predicate(base) and (dirs or files):
yield base, dirs, files
Basically it behaves as os.walk, but takes two predicates to make it a bit
nicer to compose in higher-level functions. For example this will only go
through the python modules:
ISA_PY = lambda f: f[-3:] == '.py'
# I can make it a class or maybe even a module if it's better
def walk_py(src):
# should not be in the list
return fwalk(src, _ignore_dirs_predicate, ISA_PY)
It also takes a walk function which for example can be just a dummy walk, used
for testing.
def dummy_walk(_):
test_dir = [
('/root/', ['d1, .git'], []),
('/root/d1', [], ['setup.py']),
('/root/test', [], ['test1.py']),
('/root/.git', [], [])
]
# returns a function which skips the parameter and return the iterator
return iter(test_dir)
The problem now is that I find it very hard to trust this function; apart from
some unit testing using the dummy walk, it is quite hard to make sure it's
correct.
Any suggestions about how I can improve this and make it nicer?
Answer: You need to modify `dirs` in place in order to avoid recursive traversal of the
removed directories. Use:
dirs[:] = [d for d in dirs if pred_dir(path.join(base, d))]
this will remove the need to check `_ignore_dirs_predicate(base)` (and remove
the `NameError` caused by the use of `_ignore_dirs_predicate` instead of
`pred_dir`)
You should also rewrite `ISA_PY` to use `str.endswith()`
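Putting those two changes together, a sketch of the corrected wrapper (keeping your `dirs or files` guard, which is now the only check needed):
from os import path, walk
def fwalk(root, pred_dir, pred_files, walk_function=walk):
    """Wrapper around os.walk that prunes directories and filters files
    using the two predicates."""
    for base, dirs, files in walk_function(root):
        # slice assignment mutates the list os.walk is iterating over,
        # so pruned directories are never descended into
        dirs[:] = [d for d in dirs if pred_dir(path.join(base, d))]
        files = [f for f in files if pred_files(path.join(base, f))]
        if dirs or files:
            yield base, dirs, files
ISA_PY = lambda f: f.endswith('.py')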
|
Python, string format, newline (C++ - std::endl)
Question: I am formatting a string (when overloading `__str__`) and I don't want to use raw
\n or \r\n tags. Does Python have a cross-platform newline identifier like std::endl
in C++?
I tried to google it, but didn't find an answer.
Answer: How about using `os.linesep`? It contains the appropriate line separator for
your OS:
>>> import os
>>> os.linesep
'\n'
>>> print "line one" + os.linesep + "line two"
line one
line two
|
Is the routes Dispatcher broken in CherryPy for Mac?
Question: Is CherryPy broken? I just set it up and tried to use the routes dispatcher,
but it raises an import error. My code is as follows:
import cherrypy
mapper = cherrypy.dispatch.RoutesDispatcher()
The error is:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jwesonga/environments/cherrypy/lib/python2.6/site-packages/CherryPy-3.2.2-py2.6.egg/cherrypy/_cpdispatch.py", line 463, in __init__
import routes
ImportError: No module named routes
I'm on a Mac and I tried both 3.2.2 and 3.0 using virtualenv for the latter.
Answer: I have successfully used CherryPy with the routes dispatcher under OS X.
The error you've shown is:
ImportError: No module named routes
This is pretty clear -- Python can't find the `routes` modules. Have you
installed it? This is not part of CherryPy, it's a separate module that you
will need to install. If you're using MacPorts, you should be able to:
port install py-routes
(Or `py25-routes` or `py26-routes` depending on which Python you're using). If
you're using virtualenv, you can simply run:
easy_install routes
|
Strange pygame image error
Question: I have a small problem in my code. I have the following code:
import os,pygame
class npc:
ntype = 0
image = None
x = 0
y = 0
text = ""
name = ""
def Draw(self,screen):
screen.blit(self.image, [self.x*32,self.y*32])
def __init__(self,name,nx,ny):
f = open(name)
z = 0
for line in f:
if z == 0:
name = line
if z == 1:
ntype = line
if z == 2:
text = line
if z == 3:
self.image = pygame.image.load(os.path.join('img', line))
self.x = nx
self.y = ny
z=z+1
The file which I am loading from is in the following format:
The Shadow
0
Hello. I am evil.
shadow.png
It is the last line which has the problem. When I try to load that png using
pygame.image.load, I get an error saying that it can't load that image. If I
change the pygame load code to `self.image =
pygame.image.load(os.path.join('img', "shadow.png"))` however, it works
perfectly. I have looked over the files several times, and I can't find any
reason for this error. Can someone see what I am doing wrong?
Traceback:
Traceback (most recent call last):
File "./main.py", line 26, in <module>
rmap = g.Load_Map("l1.txt",char)
File "/home/josiah/python/rpg/generate.py", line 31, in Load_Map
npcs.append(npc.npc(str.split(bx,',')[1],x,y))
File "/home/josiah/python/rpg/npc.py", line 23, in __init__
self.image = pygame.image.load(os.path.join('img', line))
pygame.error: Couldn't open img/shadow.png
Answer: You might have a trailing newline character. Try stripping the line:
self.image = pygame.image.load(os.path.join('img', line.strip()))
Better yet, load your file differently. Instead of the loop, you could do
something like this (assuming every file is formatted identically, and has at
least the same number of lines):
name, ntype, text, filename = [l.strip() for l in open(name).readlines()[:4]]
# Now use the variables normally, for example:
self.image = pygame.image.load(os.path.join('img', filename))
|
Python's urllib2.urlopen() hanging with local connection to a Java Restlet server
Question: I'm trying to connect to a locally running Restlet server from Python, but the
connection hangs indefinitely (or times out if I set a timeout).
import urllib2
handle = urllib2.urlopen("http://localhost:8182/contact/123") # hangs
If I use `curl` from a shell to open the above URL, the results return
quickly. If I use urllib2 to open a different local service (e.g. a Django web
server on port 8000), urllib2 works fine.
I've tried disabling firewall (I'm doing this on OS X). I've tried changing
localhost to 127.0.0.1. The logs from Restlet for both the curl and urllib2
connection appear the same aside from the user-agent.
My workaround would be to just call `curl` via `subprocess`, but I'd rather
understand why this is failing.
Here's how my Restlet Resource looks:
public class ContactResource extends ServerResource {
@Get
public String represent() throws Exception {
return "<contact details>";
}
//....
}
Let me know if you want more info/code
Answer: I encountered similar issues and ended up using the [Requests
package](http://docs.python-requests.org/en/latest/index.html).
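For example, a minimal sketch against your URL (the explicit timeout is optional but means a dead connection fails fast instead of hanging forever):
import requests
response = requests.get("http://localhost:8182/contact/123", timeout=5)
print(response.status_code)
print(response.text)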
|
ZMQ Pub-Sub Program Failure When Losing Network Connectivity
Question: I have a simple pub-sub setup on a mid-sized network, using ZMQ 2.1. Although
some subscribers are using C# bindings, others are using Python bindings, and
the issue I'm having is the same for either.
If I pull the network cable from a machine running a subscriber, I get an un-
catchable error that immediately terminates that subscriber.
Here's a very simple example of a subscriber in Python (not actual production
code, but enough to reproduce the problem):
import zmq
def main(server_address, port):
context = zmq.Context()
sub_socket = context.socket(zmq.SUB)
sub_socket.connect("tcp://" + server_address + ":" + str(port))
sub_socket.setsockopt(zmq.SUBSCRIBE, "KITH1S2")
while True:
msg = sub_socket.recv()
print msg
if __name__ == "__main__": main("company-intranet", 4000)
In C# the program simply terminates silently. In Python I at least get this:
> Assertion failed: rc == 0 (....\src\zmq_connector.cpp:48)
>
> This application has requested the Runtime to terminate it in an unusual
> way. Please contact the application's support team for more information.
I've tried non-blocking versions, and poller versions, but in either case this
instant termination problem persists. Is there something obvious I _should_ be
doing but I'm not? (That is, obvious to someone else :) ).
**EDIT:**
Found the following: <https://zeromq.jira.com/browse/LIBZMQ-207>
Seems as though it is/was a known issue.
That link further links to Github, where a change log for 2.1.10 has this
note:
> * Fixed issue 207, assertion failure in zmq_connecter.cpp:48, when an
> invalid zmq_connect() string was used, or the hostname could not be
> resolved. The zmq_connect() call now returns -1 in both those cases.
>
Although _connect()_ does indeed throw an Invalid Argument exception in Python
(not C# apparently?), _recv()_ still fails. If the subscriber machine suddenly
loses the network, that subscriber will simply stop functioning.
So - I'm going to try using IP addresses instead of named addresses to see if
this will bypass the issue. Not ideal, but better than insta-crash.
Answer: Original question: _Is there something obvious I should be doing but I'm not?_
No.
The workaround for now is to use IP addressing. This does not cause program
failure upon network disconnect for ZMQ 2.1.x.
|
wxPython: scrollbars interfering with formatting
Question: The following code is intended to demonstrate a problem I'm having with
wxPython. When I substitute a `wx.Panel` with a `wx.ScrolledWindow` and then
run the program, the window that is opened is about as small as it could
possibly be. Once the frame has been manually resized the program works okay,
but obviously I'd prefer the window to open with a sensible size - as it does
if I use a subclass of `wx.Panel` instead of a `wx.ScrolledWindow`. I've tried all
the obvious stuff like `SetBestSize` and `SetInitialSize`, but to no avail.
import wx
class MyApp(wx.App):
def OnInit(self):
self.frame = Example(None, title="Top frame")
self.frame.SetInitialSize()
self.SetTopWindow(self.frame)
self.frame.Show()
return True
class Example(wx.Frame):
def __init__(self, parent, title, ):
super(Example, self).__init__(parent, title=title,size=(300, 350))
self.panelOne = MyPanel(self)
self.frameSizer = wx.BoxSizer(wx.VERTICAL)
self.frameSizer.Add(self.panelOne, 1, wx.EXPAND)
self.SetSizer(self.frameSizer)
self.frameSizer.Fit(self)
self.Centre()
self.Show()
class MyPanel(wx.ScrolledWindow):
def __init__(self, parent):
super(MyPanel, self).__init__(parent)
self.mainSizer = wx.BoxSizer(wx.VERTICAL)
self.SetScrollbars(1,1,400,200)
self.entryGrid = wx.FlexGridSizer(cols = 8, rows = 10)
for i in range(80):
x = wx.StaticText(self, id=-1, label=str(i), size=(-1,-1), pos=(-1,-1), style=0, name="")
self.entryGrid.Add(x, 1, wx.ALL, 20)
### widgets here
self.mainSizer.Add(self.entryGrid)
# set optimum layout for mainsizer...
self.SetSizer(self.mainSizer)
# ...then fit main sizer to the panel.
self.mainSizer.Fit(self)
if __name__ == '__main__':
app = MyApp(False)
app.MainLoop()
Answer: Try the following code and see if it does what you want.
HTH,
Andrea.
import wx
class MyApp(wx.App):
def OnInit(self):
self.frame = Example(None, title="Top frame")
self.frame.SetInitialSize()
self.SetTopWindow(self.frame)
self.frame.Show()
return True
class Example(wx.Frame):
def __init__(self, parent, title, size=(300, 350)):
super(Example, self).__init__(parent, title=title)
self.panelOne = MyPanel(self, size)
self.frameSizer = wx.BoxSizer(wx.VERTICAL)
self.frameSizer.Add(self.panelOne, 1, wx.EXPAND)
self.SetSizer(self.frameSizer)
self.frameSizer.Layout()
self.Centre()
self.Show()
class MyPanel(wx.ScrolledWindow):
def __init__(self, parent, size):
super(MyPanel, self).__init__(parent)
self.mainSizer = wx.BoxSizer(wx.VERTICAL)
self.SetScrollbars(1, 1, 400, 200)
self.entryGrid = wx.FlexGridSizer(cols=8, rows=10)
for i in range(80):
x = wx.StaticText(self, label=str(i))
self.entryGrid.Add(x, 1, wx.ALL, 20)
self.mainSizer.Add(self.entryGrid)
# set optimum layout for mainsizer...
self.SetSizer(self.mainSizer)
self.SetSizeHints(*size)
if __name__ == '__main__':
app = MyApp(False)
app.MainLoop()
|
Python I/O: How to use the wsgi.input stream from io module
Question: In a WSGI application we can read the raw input data from the _wsgi.input_
field:
def application(env, start_response):
.....
data = env['wsgi.input'].read(num_bytes)
.....
However, I want to wrap the file-like object using the new _io_ module:
import io
def application(env, start_response):
.....
f = io.open(env['wsgi.input'], 'rb')
buffer = bytearray(buff_size)
read = f.readinto(buffer)
.....
The problem is that `io.open` doesn't accept this kind of file object. Any
idea on how to do that? I need to read from `env['wsgi.input']` into a buffer.
Answer: The `io.open()` function does not accept a file object as its first parameter.
However, it accepts an integer representing the handle to an open file, so you
may have some success using:
f = io.open(env['wsgi.input'].fileno(), 'rb')
**addendum:**
The io module is written for Python 3, where string handling is quite
different from Python 2. Calling `read()` on a file opened in binary mode
returns a `bytes` object in Python 3 but a `str` in Python 2; when
wrapping a file using the `io` module in binary mode, the io module
expects `read()` to return `bytes`.
You can try fixing your original file object by making it return bytes():
def fix(file):
# wrap 'func' to convert its return value to bytes using the specified encoding
def wrap(func, encoding):
def read(*args, **kwargs):
return bytes(func(*args, **kwargs), encoding)
return read
file.read = wrap(file.read, 'ascii')
fix(env['wsgi.input'])
f = io.open(env['wsgi.input'].fileno(), 'rb')
The above function wraps the `read()` method, but it can be extended to wrap
`readline()` as well. Also, some additional work is required to wrap
`readlines()`...
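Another route, if you would rather not rely on `fileno()` or on patching `read()`, is to wrap `wsgi.input` in a small `io.RawIOBase` adapter and hand that to `io.BufferedReader`. This is only a sketch and assumes the underlying stream hands back byte strings:
import io
class WSGIInputWrapper(io.RawIOBase):
    """Expose a WSGI input stream as a raw binary stream for the io module."""
    def __init__(self, stream):
        self._stream = stream
    def readable(self):
        return True
    def readinto(self, b):
        # fill the caller-supplied buffer and report how many bytes were written
        data = self._stream.read(len(b))
        b[:len(data)] = data
        return len(data)
def application(env, start_response):
    f = io.BufferedReader(WSGIInputWrapper(env['wsgi.input']))
    buffer = bytearray(8192)
    read = f.readinto(buffer)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [bytes(buffer[:read])]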
|
How do you deal with multiple Python classes in the same module depending on each other?
Question: I'm using an ODM library and I'm defining documents as classes within the same
module, when they are related. I've hit a circular dependency problem and
because I haven't come across this before in Python, I don't know how to
inform the classes of the existence of each other. Example:
''' docs.py '''
from mongoengine import Document
from mongoengine.fields import StringField, ReferenceField, ListField
class Base(Document):
some_field = StringField()
class Foo(Base):
other_field = StringField()
another_field = ReferenceField(Bar)
class Bar(Base):
other_field = StringField()
another_field = ListField(ReferenceField(Foo))
As it stands, Python will throw a `NameError` because `Bar` is not defined
when the interpreter gets to a reference to it in the file, within the class
`Foo`. How do I tell Python not to worry and that the class definition will be
along shortly?
Answer: `ReferenceField` accepts class name as well.
another_field = ReferenceField('Bar')
|
Python cannot open file
Question:
#!/usr/bin/python
data = open("/home/mia/Desktop/results/all-nodup.txt", "r")
fd = open("/home/mia/Desktop/results/all-filter.txt", "w")
last_time = 0.0
last_ip = None
last_hash = None
row = data.read()
for line in row:
timestamp, ip, hash_value = line.split()
if ip==last_ip and hash_value==last_hash:
if float(timestamp) - float(last_time) >= 5.0:
fd.write("%s\t%s\t%s\n" % (str(timestamp), str(ip), str(hash_value)))
last_time, last_ip, last_hash = timestamp, ip, hash_value
else:
fd.write("%s\t%s\t%s\n" % (str(timestamp), str(ip), str(hash_value)))
last_time, last_ip, last_hash = timestamp, ip, hash_value
fd.close()
This is my entire code. I go to the results/ directory and run `python filter.py`,
but I get an error message: `python: can't open file 'filter.py': [Errno 2] No
such file or directory`
But every other script can be executed, so Python works fine. Maybe I should
import something in this case?
Answer: Python can't even find your filter.py script file, so changing your code won't
help. To fix that you'll need to do one of the following:
* put `filter.py` inside `results/` directory
* use absolute path, e.g. `python /path/to/script/filter.py`
* figure out the correct relative path, e.g. `python ../../blah/filter.py`
* put the path where filter.py resides into your PATH variable
|
regex - how to recognise a pattern until a second one is found
Question: I have a file, named a particular way. Let's say it's:
tv_show.s01e01.episode_name.avi
It's the standard way a video file of a TV show's episode is named on the net.
The pattern is much the same all over the web, so I want to extract some
information from a file named this way. Basically I want to get:
1. the show's title;
2. the season number `s01`;
3. the episode number `e01`;
4. the extension.
I'm using a Python 3 script to do so. This test file is pretty simple because
all I have to do is this
import re
def acquire_info(f="tv_show.s01e01.episode_name.avi"):
tvshow_title = title_p.match(f).group()
numbers = numbers_p.search(f).group()
season_number = numbers.split("e")[0].split("s")[1]
ep_number = numbers.split("e")[1]
return [tvshow_title, season_number, ep_number]
if __name__ == '__main__':
# re.I stands for the option "ignorecase"
title_p = re.compile("^[a-z]+", re.I)
numbers_p = re.compile("s\d{1,2}e\d{1,2}", re.I)
print(acquire_info())
and the output is as expected `['tv_show', '01', '01']`. But what if my file
name is like this other one? `some.other.tv.show.s04e05.episode_name.avi`.
How can I build a regex that gets all the text **BEFORE** the
`"s\d{1,2}e\d{1,2}"` pattern is found?
P.S. I know I didn't put the code to get the extension in the example, but
that's not my problem, so it does not matter.
Answer: Try this:
show_p=re.compile("(.*)\.s(\d*)e(\d*)")
show_p.match(x).groups()
where x is your string
**Edit** (I forgot to include the extension, here is the revision):
show_p=re.compile("^(.*)\.s(\d*)e(\d*).*?([^\.]*)$")
show_p.match(x).groups()
* * *
And Here is the test result
>>> show_p=re.compile("(.*)\.s(\d*)e(\d*).*?([^\.]*)$")
>>> x="tv_show.s01e01.episode_name.avi"
>>> show_p.match(x).groups()
('tv_show', '01', '01', 'avi')
>>> x="tv_show.s2e1.episode_name.avi"
>>> show_p.match(x).groups()
('tv_show', '2', '1', 'avi')
>>> x='some.other.tv.show.s04e05.episode_name.avi'
>>> show_p.match(x).groups()
('some.other.tv.show', '04', '05', 'avi')
>>>
|
Python: How do I make a button freeze when I press it? - Tkinter
Question: This is a memory game where the user selects two buttons (cells). If the two
words behind the cells match, the words stay visible; otherwise the cells go
hidden and you can't see the words behind them. What I want to do is to make a
cell freeze if it shows the word behind it when it is pressed. I want to apply
that function to both choice1 and choice2 and even to the pairs that the user
has already found.
I have tried to change `self.hidden = not self.hidden` on line 16 to
if self.hidden:
self.hidden = False
else:
self.hidden = False
but that didn't work out perfectly.
Here is the important part:
from tkinter import *
import random
class Cell:
def __init__(self, word, hidden = True):
self.word = word
self.hidden = hidden
def show_word(self):
""" Shows the word behind the cell """
if self.hidden:
self.hidden = False
else:
self.hidden = False
self.button["text"] = str(self)
if mem.choice1 is None:
mem.choice1 = self
elif mem.choice2 is None:
mem.choice2 = self
mem.update_tries()
else:
choice1, choice2 = mem.choice1, mem.choice2
mem.choice1, mem.choice2 = self, None
self.check(choice1, choice2)
def check(self, choice1, choice2):
""" Checks if the chosen words are a pair """
if choice1.word != choice2.word:
for cell in (choice1, choice2):
cell.hidden = True
cell.button['text'] = str(cell)
def __str__(self):
""" Displays or hides the word """
if self.hidden:
return "---"
else:
return self.word
class Memory(Frame):
""" GUI application that creates a Memory game """
def __init__(self, master):
super(Memory, self).__init__(master)
self.grid()
self.create_widgets()
self.tries = 0
self.choice1 = None
self.choice2 = None
Answer: You could disable the button when it should not be clickable anymore:
cell.button.config(state = DISABLED)
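One natural place to wire that in is your existing `check()` method, for example (just a sketch, assuming you want to freeze both buttons of a matched pair):
def check(self, choice1, choice2):
    """ Checks if the chosen words are a pair """
    if choice1.word != choice2.word:
        for cell in (choice1, choice2):
            cell.hidden = True
            cell.button['text'] = str(cell)
    else:
        # matched pair: keep the words visible and freeze both buttons
        for cell in (choice1, choice2):
            cell.button.config(state=DISABLED)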
|
how to add/subtract 2 days in the target day in python?
Question: I have a problem adding or subtracting 2 days to/from the target day... here is
my code:
import datetime
target_date = datetime.date(2011,2,7)
thanks
Answer: Use
[`datetime.timedelta`](http://docs.python.org/library/datetime.html#datetime.timedelta):
import datetime
target_date = datetime.date(2011,2,7)
delta = datetime.timedelta(days=2)
new_date = target_date - delta
print new_date # 2011-02-05
|
Python for android - import modules error
Question: I am trying to import a python module in the "Python For Android" application
in my emulator. I am able to push the zipped file and extract it but once the
extraction is complete, I get the following error:
Error: java.io.FileNotFoundException: /data/data/com.googlecode.pythonforandroid/files/python/egg-info/hachoir-1.3.3.zip/files.txt (No such file or directory)
Any help is appreciated !
Answer: I had the same problem and solved it just by pressing the uninstall button
within the Python application (the first one). Mine didn't uninstall; instead it
successfully installed the modules. Good luck!
|
Parsing puppet-api yaml with python
Question: I am creating a script which needs to parse the YAML output that Puppet
produces.
When I make a request against, for example,
_https://puppet:8140/production/catalog/my.testserver.no_ I get some YAML
back that looks something like:
--- &id001 !ruby/object:Puppet::Resource::Catalog
aliases: {}
applying: false
classes:
- s_baseconfig
...
edges:
- &id111 !ruby/object:Puppet::Relationship
source: &id047 !ruby/object:Puppet::Resource
catalog: *id001
exported:
and so on... The problem is that when I do a yaml.load(yamlstream), I get an
error like:
yaml.constructor.ConstructorError: could not determine a constructor for the tag '!ruby/object:Puppet::Resource::Catalog'
in "<string>", line 1, column 5:
--- &id001 !ruby/object:Puppet::Reso ...
^
As far as I know, this &id001 part is supported in YAML.
Is there any way around this? Can I tell the YAML parser to ignore these tags? I
only need a couple of lines from the YAML stream, so maybe regex is my friend
here? Has anyone done any YAML cleanup regexes before?
You can get the yaml output with curl like:
curl --cert /var/lib/puppet/ssl/certs/$(hostname).pem --key /var/lib/puppet/ssl/private_keys/$(hostname).pem --cacert /var/lib/puppet/ssl/certs/ca.pem -H 'Accept: yaml' https://puppet:8140/production/catalog/$(hostname)
I also found some info about this on the Puppet mailing list at
<http://www.mail-archive.com/[email protected]/msg24143.html>, but
I can't get it to work correctly...
Answer: I emailed Kirill Simonov, the creator of PyYAML, to get help parsing
Puppet YAML files.
He gladly helped with the following code. This code is for parsing Puppet logs,
but I'm sure you can modify it to parse other Puppet YAML files.
The idea is to create the correct loader for the Ruby object, then PyYAML can
read the data after that.
Here goes:
#!/usr/bin/env python
import yaml
def construct_ruby_object(loader, suffix, node):
return loader.construct_yaml_map(node)
def construct_ruby_sym(loader, node):
return loader.construct_yaml_str(node)
yaml.add_multi_constructor(u"!ruby/object:", construct_ruby_object)
yaml.add_constructor(u"!ruby/sym", construct_ruby_sym)
stream = file('201203130939.yaml','r')
mydata = yaml.load(stream)
print mydata
|
String reverse in Python
Question: Write a simple program that reads a line from the keyboard and outputs the
same line where every word is reversed. A word is defined as a continuous
sequence of alphanumeric characters or hyphen (‘-’). For instance, if the
input is “Can you help me!” the output should be “naC uoy pleh em!”
I just tried the following code, but there is a problem with it:
print"Enter the string:"
str1=raw_input()
print (' '.join((str1[::-1]).split(' ')[::-2]))
It prints "naC uoy pleh !em", just look the exclamation(!), it is the problem
here. Anybody can help me???
Answer: The easiest is probably to use the `re` module to split the string:
import re
pattern = re.compile('(\W)')
string = raw_input('Enter the string: ')
print ''.join(x[::-1] for x in pattern.split(string))
When run, you get:
Enter the string: Can you help me!
naC uoy pleh em!
|
How can I write a generic Python 2.2 function that returns a list of unset parameters?
Question: I have a function with many input parameters, and I need a function that will
return a list of parameter names (not values) for each parameter whose value
is '' or None
Normally I'd throw an exception in such a method. If anyone wants to crack the
problem by throwing an exception, that is fine. I still have the requirement
that the function return the list of parameter names.
To summarize
1. Return a list of parameter names for parameters that are unset
2. "unset" means the parameter's value is not empty string or None
3. Accept a single parameter: a single dimension list or dict
4. The list should contain the complete set of empty parameter names
5. I need it to be backward compatible with Python 2.2 and Jython
6. 2.2 is non-negotiable. The code must run on legacy systems that we have no authority to upgrade. Sucks to be us.
7. The parameters are not command line arguments, but parameters to a function.
8. The parameters are stored in individual variables, but I can manually put them into a dict if necessary.
9. Instead of returning a list of Python variable names, return a list of user-friendly descriptions for each empty variable. Example: "Database Name" vs "db_name".
Answers to questions raised:
1. What if an unknown parameter is encountered? We don't care. We create the list of parameters to validate and select only those which are mandatory by virtue of the system's logic. Thus we'd never put an unknown parameter into the list of ones to validate
2. What about UI parameters that are not mandatory or which must be validated in other ways (int vs. string, etc)? We would not put the non-mandatory params in the list we pass to the validation function. For other more complex validations, we handle these individually, adhoc. The reason this function seemed convenient is because empty parameters are the most common validation we do, and writing an `if not foo:` for each one gets tedious across functions, of which we have many.
3. Please explain """By nature of our platform""". Also """it arrives in individual variables""" ... individual variables in what namespace? And what does """(preprocessing)""" mean? – John Machin 2 days ago. Answer: The variables are in the global namespace. We use code injection (similar to how a C preprocessor would substitute code for macro names, except we are substituting variable values for tags), similar to this:
DATABASE_NAME = ^-^Put the variable the user entered for database name here^-^
which ends up like this after the preprocessor runs:
DATABASE_NAME = "DB1"
Here is a concrete example showing why a simple method throwing an exception
would not work. I have rewritten to use an exception rather than returning a
value, by request:
def validate_parameters(params_map):
"""
map is like {foo: "this is foo"}
"""
missing_params_info = []
for k,v in params_map.items():
if not k:
missing_params_info.append(v)
if missing_params_info:
raise TypeError('These parameters were unset: %s' % missing_params_info)
params = {}
params['foo'] = '1'
params['bar'] = '2'
params['empty'] = ''
params['empty2'] = ''
params['None'] = None
params_map = {
params['foo']: 'this is foo',
params['bar']: 'this is bar',
params['empty']: 'this is empty',
params['empty2']: 'this is empty2',
params['None']: 'this is None',
}
print validate_parameters(params_map)
bash-3.00# python /var/tmp/ck.py
Traceback (most recent call last):
File "/var/tmp/ck.py", line 26, in ?
print validate_parameters(params_map)
File "/var/tmp/ck.py", line 10, in validate_parameters
raise TypeError('These parameters were unset: %s' % missing_params_info)
TypeError: These parameters were unset: ['this is empty2', 'this is None']
Two reasons it doesn't work for us: It only prints empty2, even though there
is another empty parameter, "empty". "empty" is overwritten by "empty2"
because they use the same key in the map.
Second reason: I need to get the list of descriptions into a variable at some
point after running this function. Maybe this is possible with exceptions, but
I don't know how right now.
I've posted an answer that seems to solve all these problems, but is not
ideal. I marked the question answered, but will change that if someone posts a
better answer.
Thanks!
Answer: I'm pretty sure I don't understand the question or how what you posted as your
'best solution' meets the requirements, but working just from:
> I have a function with many input parameters, and I need a function that
> will return a list of parameter names (not values) for each parameter whose
> value is '' or None
Here's an easy way to do what that line seems to ask for:
def validate_parameters(args):
unset = []
for k in args:
if args[k] is None or args[k]=="":
unset.append(k)
return unset
and then just call validate_parameters from the _first_ line of a function:
def foo(a, b, c):
print "Unset:", validate_parameters(locals())
>>> foo(1, None, 3)
Unset: ['b']
>>> foo(1, None, "")
Unset: ['c', 'b']
If it wasn't for the Python 2.2 requirement you could do it all in a single
line list comprehension. The important thing is that you have to call it from
the very first line of the function to ensure that `locals()` only picks up
parameters and not any other local variables.
|
how to convert string to datetime.timedelta()?
Question: how can i convert my string of date to a datetime.timedelta() in python?
I have this code :
import datetime
date_select = '2011-12-1'
delta = datetime.timedelta(days=1)
target_date = date_select + delta
print target_date
thanks in advance ...
Answer: You wouldn't convert `date_select` to a `timedelta`, instead, you need a
`datetime` object, which can be added to a `timedelta` to produce an updated
`datetime` object:
from datetime import datetime, timedelta
date_select = datetime.strptime('2011-12-1', '%Y-%m-%d')
delta = timedelta(days=1)
target_date = date_select + delta
print target_date
Or, if you prefer, without the fancy `from ... import ...` import line:
import datetime # <- LOOK HERE, same as in your example
date_select = datetime.datetime.strptime('2011-12-1', '%Y-%m-%d')
delta = datetime.timedelta(days=1)
target_date = date_select + delta
print target_date
|
Python: argparse subcommand subcommand?
Question: I have a program that has many available options. For example a configuration
option to change settings.
./app config -h
gives me the help using normal argparse subcommands
Now I would like to add another subcommand to the config subcommand, called
list, to list config values:
./app config list
Additionally, that command should accept another option so that I could say
./app config list CATEGORY
to list the config of only one category.
My code right now is basically this, just with more commands:
>>> parser = argparse.ArgumentParser()
>>> subparsers = parser.add_subparsers(title='subcommands',
... description='valid subcommands',
... help='additional help')
>>> subparsers.add_parser('foo')
>>> subparsers.add_parser('bar')
>>> parser.parse_args(['-h'])
usage: [-h] {foo,bar} ...
optional arguments:
-h, --help show this help message and exit
subcommands:
valid subcommands
{foo,bar} additional help
So far I could not find any way to use a subcommand in a subcommand. If this
is possible, how? If not, is there any other way to accomplish this goal?
Thanks in Advance
Answer:
#file: argp.py
import argparse
parser = argparse.ArgumentParser(prog='PROG')
parser_subparsers = parser.add_subparsers()
sub = parser_subparsers.add_parser('sub')
sub_subparsers = sub.add_subparsers()
sub_sub = sub_subparsers.add_parser('sub_sub')
sub_sub_subparsers = sub_sub.add_subparsers()
sub_sub_sub = sub_sub_subparsers.add_parser('sub_sub_sub')
Seems to work.
In [392]: run argp.py
In [393]: parser.parse_args('sub sub_sub sub_sub_sub'.split())
Out[393]: Namespace()
In [400]: sys.version_info
Out[400]: sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0)
|
Setup Eclipse to work with Python bindings for Subversion
Question: I'm new to the Python world, coming from PHP. So, this question may sound silly to experienced Python developers, but I'm really confused.
So, I want to implement a simple Python app to work with Subversion. For that purpose I want to use the Python bindings for Subversion, so I installed the Ubuntu python-subversion and python-subversion-dbg packages.
$ ll /usr/lib/pyshared/python2.6/libsvn
total 6460
drwxr-xr-x 2 root root 4096 2011-12-03 17:01 ./
drwxr-xr-x 23 root root 4096 2011-12-03 09:47 ../
-rw-r--r-- 1 root root 790331 2011-08-05 19:59 _client_d.so
-rw-r--r-- 1 root root 320844 2011-08-05 20:00 _client.so
-rw-r--r-- 1 root root 900465 2011-08-05 19:59 _core_d.so
-rw-r--r-- 1 root root 379804 2011-08-05 20:00 _core.so
-rw-r--r-- 1 root root 300336 2011-08-05 19:59 _delta_d.so
-rw-r--r-- 1 root root 115932 2011-08-05 20:00 _delta.so
-rw-r--r-- 1 root root 228879 2011-08-05 19:59 _diff_d.so
-rw-r--r-- 1 root root 89532 2011-08-05 20:00 _diff.so
-rw-r--r-- 1 root root 345484 2011-08-05 19:59 _fs_d.so
-rw-r--r-- 1 root root 137400 2011-08-05 20:00 _fs.so
-rw-r--r-- 1 root root 582390 2011-08-05 19:59 _ra_d.so
-rw-r--r-- 1 root root 231864 2011-08-05 20:00 _ra.so
-rw-r--r-- 1 root root 491500 2011-08-05 19:59 _repos_d.so
-rw-r--r-- 1 root root 196668 2011-08-05 20:00 _repos.so
-rw-r--r-- 1 root root 1038898 2011-08-05 19:59 _wc_d.so
-rw-r--r-- 1 root root 426008 2011-08-05 20:00 _wc.so
I tried to add /usr/lib/pyshared/python2.6/libsvn as library in Eclipse from
PyDev > Interpreter Python > Libraries > New Folder. But I still can't import
anything from svn package. I also see that there are no .py files, just .so.
I just want to be able to use it like on <http://svnbook.red-
bean.com/en/1.1/ch08s02.html>
My code:
from svn import fs
Error I get:
File "/home/umpirsky/EclipseWorkspace/test/src/test.py", line 1, in <module> ImportError: cannot import name fs
How can I import this?
Answer: I remember installing this a while ago. Did you follow all the steps of the install guide? The ones that you have to get right are:
* cd Source
* Create the Makefile using 'python setup.py configure'
* make
* cd Tests
* Test pysvn by running make
If that runs you know you are ok with the build. Then install pysvn by copying
the following from Extension/Source to python site-specific directory.
mkdir python-libdir/site-packages/pysvn
cp pysvn/__init__.py python-libdir/site-packages/pysvn
cp pysvn/_pysvn*.so python-libdir/site-packages/pysvn
By default your site-packages should be under: /usr/local/lib/pythonX.Y/site-packages
Once you've copied that, on Eclipse:
Window->Preferences->Pydev->Interpreter Python
Under System PYTHONPATH add the folder you created above.
I've found that sometimes PyDev won't pick up the new source folder for whatever reason. So, I just remove the interpreter and add it again. When you do that, PyDev will pick everything up under site-packages.
Edit: Here are the download
[instructions](http://pysvn.tigris.org/project_downloads.html) of what you
need. I thought you had downloaded the same package as python-svn. I actually haven't used the distribution you downloaded. But I think pysvn will do the trick for you, and it has good documentation if you are just starting.
The install guide should get you going with the install. If you get lost with
it refer to the notes that I have above.
site-packages is just the standard location for installed Python modules.
|
How can I fire a Traits static event notification on a List?
Question: I am working through the [`traits`](https://github.com/enthought/traits)
[presentation from PyCon
2010](http://python.mirocommunity.org/video/1690/pycon-2010-introduction-to-
tra). At about 2:30:45 the presenter starts covering [trait event
notifications](http://code.enthought.com/projects/traits/docs/html/traits_user_manual/notification.html),
which allow (among other things) the ability to automatically call a
subroutine any time a [`trait`](https://github.com/enthought/traits) has
changed.
I am running a modified copy of the example he gave... In this trial, I am
trying to see whether I can fire a static event whenever I make a change to
`volume` or `inputs`.
from traits.api import HasTraits, Range, List, Float
import traits
class Amplifier(HasTraits):
"""
Define an Amplifier (a la Spinal Tap) with Enthought's traits. Use traits
to enforce values boundaries on the Amplifier's objects. Use events to
notify via the console when the volume trait is changed and when new volume
traits are added to inputs.
"""
volume = Range(value=5.0, trait=Float, low=0.0, high=11.0)
inputs = List(volume) # I want to fire a static trait event notification
# when another volume element is added
def __init__(self, volume=5.0):
super(Amplifier, self).__init__()
self.volume = volume
self.inputs.append(volume)
def _volume_changed(self, old, new):
# static event listener for self.volume
if not (new in self.inputs):
self.inputs.append(self.volume)
if new == 11.0:
print "This one goes to eleven... so far, we have seen", self.inputs
def _inputs_changed(self, old, new):
# static event listener for self.inputs
print "Check it out!!"
if __name__=='__main__':
spinal_tap = Amplifier()
spinal_tap.volume = 11.0
print "DIRECTLY adding a new volume input..."
spinal_tap.inputs.append(4.0)
try:
print "NEGATIVE Test... adding 12.0"
spinal_tap.inputs.append(12.0)
except traits.trait_errors.TraitError:
print "Test passed"
When I run this script, I can see `This one goes to eleven... so far, we have
seen [5.0, 11.0]` in the console output, so I know that `_volume_changed()`
gets fired when I assign `11.0` to `spinal_tap.volume`.
However, I never see any events from `_inputs_changed()`. No matter what
example I cook up, I can't get a
[`List`](http://code.enthought.com/projects/traits/docs/html/traits_user_manual/defining.html#other-
predefined-traits) to fire an event.
This is the output I am seeing... note that there is no evidence that
`_inputs_changed()` ever fires.
[mpenning@Bucksnort ~]$ python spinaltap.py
This one goes to eleven... so far, we have seen [5.0, 11.0]
DIRECTLY adding a new volume input...
NEGATIVE Test... adding 12.0
Test passed
[mpenning@Bucksnort ~]$
I have run this both under Python2.6 / Cygwin / Windows 7 and Python 2.5 /
Linux (all using [`traits`](https://github.com/enthought/traits) version 4.0.0
that I `easy_install` directly off [Enthought's site](http://enthought.com/)).
The results are the same no matter what I have tried so far.
Should a
[`List`](http://code.enthought.com/projects/traits/docs/html/traits_user_manual/defining.html#other-
predefined-traits) be able to fire a static event when using traits? If so, am
I doing something wrong?
Answer: After browsing their unit tests, I found a test for `Dict` traits in
enthought's [event unittest
coverage](https://github.com/enthought/traits/blob/master/traits/tests/container_events_test_case.py)...
it looks like when you have a container like a `Dict` or `List` that you need
to set up the magic event listener method like this:
## Broken method definition: def _inputs_changed(self, old, new):
# container event static listeners must be in the form of _foo_items_changed()
def _inputs_items_changed(self, old, new):
# static event listener for self.inputs
if len(new.added) > 0:
print "Check it out, we added %s to self.items" % new.added
elif len(new.removed) > 0:
print "Check it out, we removed %s from self.items" % new.removed
Likewise, I also discovered that the `on_trait_change` decorator (used for
dynamic `traits` event notification) requires similar nomenclature if you are
calling it with a `traits.api.List` or `traits.api.Dict`... so I could also
write the code above as:
from traits.api import on_trait_change
# ...
@on_trait_change('inputs_items')
def something_changed(self, name, new):
# static event listener for self.inputs
if len(new.added) > 0:
print "Check it out, we added %s to self.items" % new.added
elif len(new.removed) > 0:
print "Check it out, we removed %s from self.items" % new.removed
Either way, when I run the code, I get expected output:
[mpenning@Bucksnort ~]$ python spinaltap.py
Check it out, we added [5.0] to self.items
Check it out, we added [11.0] to self.items
This one goes to eleven... so far, we have seen [5.0, 11.0]
DIRECTLY adding a new volume input...
Check it out, we added [4.0] to self.items
NEGATIVE Test... adding 12.0
Test passed
[mpenning@Bucksnort ~]$
|
Scrapy read list of URLs from file to scrape?
Question: I've just installed scrapy and followed their simple dmoz
[tutorial](http://doc.scrapy.org/en/latest/intro/tutorial.html) which works. I
just looked up basic file handling for python and tried to get the crawler to
read a list of URL's from a file but got some errors. This is probably wrong
but I gave it a shot. Would someone please show me an example of reading a
list of URL's into scrapy? Thanks in advance.
from scrapy.spider import BaseSpider
class DmozSpider(BaseSpider):
name = "dmoz"
allowed_domains = ["dmoz.org"]
f = open("urls.txt")
start_urls = f
def parse(self, response):
filename = response.url.split("/")[-2]
open(filename, 'wb').write(response.body)
Answer: You were pretty close.
f = open("urls.txt")
start_urls = [url.strip() for url in f.readlines()]
f.close()
...better still would be to use the context manager to ensure the file's
closed as expected:
with open("urls.txt", "rt") as f:
start_urls = [url.strip() for url in f.readlines()]
|
Putting a function into a local namespace to speed up access, taking descriptors into account
Question: I use the `numpy.random.normal` function in a **tight** loop in a class.
class MyClass(MyBaseClass):
def run(self):
while True:
...
self.L.append(numpy.random.normal(0,1))
I know that it's pretty slow in Python to use multiple lookups. In
`numpy.random.normal` there are 3 lookups: first `numpy` is looked up, then
`random`, and then `normal`.
So I decided to address this problem by assigning `numpy.random.normal` to a
local variable `_normal`.
Here we go:
class MyClass(MyBaseClass):
_normal = numpy.random.normal
def run(self):
while True:
...
self.L.append(MyClass._normal(0,1))
What I'm really concerned about is descriptors. When a variable in a class is accessed, all the base classes are searched for a **data descriptor** with the same name. It's described
[here](http://www.cafepy.com/article/python_attributes_and_methods/python_attributes_and_methods.html#attribute-
search-summary):
> Check `objectname.__class__.__dict__`for attrname. If it exists and is a
> data-descriptor, return the descriptor result. Search all bases of
> `objectname.__class__` for the same case.
So, I guess, if I put `_normal` in the local namespace as I did above, it will cause a lookup through all the base classes for the data descriptor. And I am wary of it becoming a source of slowdown.
Are my concerns justified?
Should I worry about the time it takes to look up the descriptors in base classes?
And is there a better way to speed up access to a function located deep into a
module when it's used in a class?
* * *
There was a discussion in the comments to the answers.
I decided to give some additional details of implementation that appeared to
be important (for my particular case).
Actually, the code is closer to this (it's very very simplified):
class MyClass(MyBaseClass):
def __iter__(self):
return self
def next(self):
self.L.append(numpy.random.normal(0,1))
def run(self):
while True:
self.next()
Answer: If you must do something like this (is function lookup actually the dominant
cost? Random number generation is not cheap) you should realize that one
global + one attr lookup (`MyClass._normal`) is not that much cheaper than one
global + three attr lookups (`numpy.random.normal`). What you really want is
to get _zero_ global or attr lookups inside the loop, which you can only do by
defining `_normal` _inside the function_. If you're really desperate to shave
cycles you should also prebind the list append call:
class MyClass(MyBaseClass):
def run(self):
_normal = numpy.random.normal
_Lappend = self.L.append
while True:
...
_Lappend(_normal(0,1))
Contrast disassembly output (just for the `append` statement):
LOAD_FAST 0 (self)
LOAD_ATTR 1 (L)
LOAD_ATTR 2 (append)
LOAD_GLOBAL 3 (numpy)
LOAD_ATTR 4 (random)
LOAD_ATTR 5 (normal)
LOAD_CONST 1 (0)
LOAD_CONST 2 (1)
CALL_FUNCTION 2
CALL_FUNCTION 1
POP_TOP
vs
LOAD_FAST 2 (_Lappend)
LOAD_FAST 1 (_normal)
LOAD_CONST 1 (0)
LOAD_CONST 2 (1)
CALL_FUNCTION 2
CALL_FUNCTION 1
What would be even better is to vectorize -- generate many random normal deviates and append them to the list in one go -- which you can do with the `size` argument to `numpy.random.normal`.
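To make that concrete, here is a rough sketch of the vectorized approach; the class is a simplified stand-in and the batch sizes are arbitrary, chosen only for illustration:
import numpy

class Demo(object):
    # simplified stand-in for the original class, just to show the idea
    def __init__(self):
        self.L = []
    def run(self, batches=10, batch_size=1000):
        for _ in range(batches):
            # one call draws batch_size deviates; extend() appends them all at once
            self.L.extend(numpy.random.normal(0, 1, size=batch_size))

d = Demo()
d.run()
print len(d.L)   # 10000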
|
Python: Unable to import mechanize module on Mac
Question: I have the mechanize module installed using easy_install, but when I try to import it I get the following error:
Python 2.6.7 (r267:88850, Nov 21 2011, 14:59:21)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import mechanize
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named mechanize
Just to confirm that I have installed mechanize, I did easy_install again and it confirmed that I have mechanize:
easy_install mechanize
Searching for mechanize
Best match: mechanize 0.2.5
Processing mechanize-0.2.5-py2.6.egg
mechanize 0.2.5 is already the active version in easy-install.pth
Using /Library/Python/2.6/site-packages/mechanize-0.2.5-py2.6.egg
Processing dependencies for mechanize
Finished processing dependencies for mechanize
I realize that not only mechanize but most of the external modules that I install using easy_install don't become available for import. Is it because I have MacPorts installed?
This is what I get from `echo $PATH`:
/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/Users/N-H/DevApps/android-sdk-mac_x86/platform-tools:/Users/N-H/DevApps/android-sdk-mac_x86/tools:/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:/opt/subversion/bin/:/opt/subversion/bin:/usr/bin/java:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/Users/N-H/DevApps/android-sdk-mac_86/tools:/Library/grails-1.3.6/bin:/opt/subversion/bin:/usr/bin/java:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/Users/N-H/DevApps/android-sdk-mac_86/tools:/usr/bin/gcc-4.2
I did `which python`, and it looks like MacPorts installs Python under the /opt directory... (not really sure)
$which python
/opt/local/bin/python
Answer: Looks like you have installed mechanize into the Python 2.6 provided with OS X, but you are running the Python interpreter installed from MacPorts.
You can run easy_install for python from macports with (for python 2.7):
/opt/local/bin/easy_install-2.7
|
I have a twisted reactor running, how do I connect to it?
Question: I've been following the tutorials and now have a twisted reactor running. I've
used telnet to test that it does stuff but I've not managed to find anything
in the twisted tutorials on how to connect to the reactor.
My assumption was there would be something within twisted to do this, should I
instead use the built in socket?
Edit:
This is the Server script:
import time
import multiprocessing
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor
class TTT(LineReceiver):
def __init__(self, users):
self.users = users
self.name = None
self.state = "GETNAME"
def connectionMade(self):
self.sendLine("You are connected")
def connectionLost(self, reason):
if self.users.has_key(self.name):
del self.users[self.name]
def lineReceived(self, line):
if line == "quit":
reactor.stop()
if self.state == "GETNAME":
self.handle_GETNAME(line)
else:
self.handle_CHAT(line)
def handle_GETNAME(self, name):
if self.users.has_key(name):
self.sendLine("Name taken, please choose another.")
return
self.sendLine("Welcome, %s!" % (name,))
self.name = name
self.users[name] = self
self.state = "CHAT"
def handle_CHAT(self, message):
message = "<%s> %s" % (self.name, message)
for name, protocol in self.users.iteritems():
if protocol != self:
protocol.sendLine(message)
class TTTFactory(Factory):
def __init__(self):
self.state = [0 for x in range(9)]
self.turn = -1
self.users = {} # maps user names to Chat instances
def make_move(self, player, x, y):
if player != self.turn:
return "Not your turn"
i = x + y * 3
if self.state[i] != 0:
return "Invalid move"
self.state[i] = player
# Horrizontal
if self.state[0] == self.state[1] == self.state[2]: return "Win"
if self.state[3] == self.state[4] == self.state[5]: return "Win"
if self.state[6] == self.state[7] == self.state[8]: return "Win"
# Vertical
if self.state[0] == self.state[3] == self.state[6]: return "Win"
if self.state[1] == self.state[4] == self.state[7]: return "Win"
if self.state[2] == self.state[5] == self.state[8]: return "Win"
# Diagonal
if self.state[0] == self.state[4] == self.state[8]: return "Win"
if self.state[6] == self.state[4] == self.state[2]: return "Win"
# Swap turn
self.turn = 0 - self.turn
return "Next move"
def buildProtocol(self, addr):
return TTT(self.users)
# def reactor_runner():
def new_server(conn):
port_num = 8007
conn.send(port_num)
reactor.listenTCP(port_num, TTTFactory())
reactor.run()
I want to have another Python program/process send and receive messages from it. The idea behind the project is to create a multiplayer tic-tac-toe game.
I want to have a server process and 1 or more client processes. For ease of
running I'm currently using multiprocessing to run them at the same time. When
complete the client process needs to be able to connect over a network as it
may not be on the same computer as the host.
Answer: Here's a small example of a client capable of connecting to your server above.
from twisted.internet.protocol import ClientFactory
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor
class TTTClientProtocol(LineReceiver):
def lineReceived(self, line):
line = line.strip()
if line == 'You are connected':
self.sendLine(self.factory.username)
else:
print 'SERVER SAYS:', line
class TTTClientFactory(ClientFactory):
protocol = TTTClientProtocol
def __init__(self, name):
self.username = name
name = raw_input('Please enter your name: ')
print 'Connecting...'
reactor.connectTCP('localhost', 8007, TTTClientFactory(name))
reactor.run()
I kept it as simple as I could so you could understand it easily, but to
implement the chat part I would need code to read from `stdin` without
blocking the reactor. Since you mentioned you're using a GUI instead of
terminal standard input/output, then that is a lot easier actually - just
[choose a reactor compatible with your GUI
library](http://twistedmatrix.com/documents/current/core/howto/choosing-
reactor.html) and then use your normal GUI events.
Hope that helps...
|
Python: Import Exception
Question: Is it possible to import everything except one module from a package?
I need a lot of modules from a particular library that I use in my class, but
it looks like it used the same module name for one of the modules that I need.
I need to use set operation and intersection, but when I import that library
from my class, it gives me an error because of that.
I didn't want to import it separately or put the name in front of every method since I'm using it a lot.
Is there a way for Python to import everything except for a particular method like `set`? Or maybe import the `set` part again later?
Answer: No, there is no terminology to `from ... import * except blah, bleh, bluh`.
You can either write your own import function to support it, or do something
like:
from xyz import *
del set
which will stop shadowing the built-in `set` so you can use it again. Then if
you need the `xyz.set` function you can do:
from xyz import set as xyzset
Note: `from ... import *` is not _usually_ good practice, and you should make
sure the modules you are using this way support it -- if they don't explicitly
say they were designed to be used this way, then you shouldn't (unless you
enjoy debugging weird problems later ;).
|
How to quickly send a mail to hundreds of thousands of recipients in Python, with each recipient shown as the receiver in the mail header?
Question: I'm looking for solutions to send a huge number of mails (hundreds of thousands) in Python. I have a list of recipients (in a file), and I want to send a mail to all of them. I want each recipient to be shown as the receiver in the mail header, not bcc or cc. My solution: send a separate mail to each of them, not one mail to the whole list. I did some work on this below (I used _smtplib_ and _threading_):
class SendMail(threading.Thread):
    def __init__(self, from_addr, to, subject, message):
        threading.Thread.__init__(self)
        self.from_addr = from_addr
        self.to = to
        self.subject = subject
        self.message = message
    def run(self):
        try:
            msg = MIMEMultipart('alternative')
            msg['Subject'] = self.subject
            msg['From'] = self.from_addr
            msg['To'] = self.to
            msg.attach(MIMEText(self.message, 'html'))
            server = smtplib.SMTP()
            server.connect('xxxxx', 25)
            server.login('cxxxxx', 'yyyyyy')
            server.sendmail(self.from_addr, self.to, msg.as_string())
        except:
            pass
def sendmail():
    f = open('recipients', 'r')
    from_addr = "[email protected]"
    subject = "hello"
    message = "Hello Hello"
    for line in f.readlines():
        t = SendMail(from_addr, line, subject, message)
        t.run()
    f.close()
It works but is very slow (about 6 mails/second). So please help me make it faster, or suggest another solution to do this job. Thank you very much!
Answer: Fastest would be to install a Mail Transfer Agent such as
[Postfix](http://en.wikipedia.org/wiki/Postfix_%28software%29) on your machine
and hand all your emails to it for delivery using the `/usr/sbin/sendmail`
mailing interface. Most reasonable mail servers can accept thousands of mail
messages for delivery per second and may perform some SMTP pipelining to send
a message to multiple recipients on a target domain in a single connection,
drastically reducing traffic overhead and improving message throughput. (This
wouldn't influence how your users see your emails.)
Most mail servers can also handle temporarily down servers extraordinarily
well, which is extremely important since many sites use
[greylisting](http://en.wikipedia.org/wiki/Graylisting) to combat spam.
If, however, you really want to be contacting an SMTP server via a network
connection in your Python, it would be a very good idea to use a [thread
pool](http://en.wikipedia.org/wiki/Thread_pool) of sending threads that will
take addresses off a queue, create and send the email, and then return to the
queue for another address to service. Your current code creates and destroys a
new thread for every single mail delivery. Threads take time to create and
take time to destroy, and all that overhead is time that could have been spent
servicing your mails.
Furthermore, the thread pool would limit the total number of active
connections. There's no point in creating 1000 separate but simultaneous
connections to a single mail server. The [three-way
handshakes](http://en.wikipedia.org/wiki/Three-
way_handshake#Connection_establishment) that set up each session take three
times the latency to your server to establish the TCP session before you can
send any SMTP traffic. So create ten threads with ten connections _and reuse
those connections for sending emails_. (Even ten is probably overkill for this
-- two or three would probably work better. Heck, _one_ thread might be best
of all, though if that connection goes down (per-connection mail limits?) you
would have a period of sending _nothing_ over the wire until that connection
is re-established.)
What you've created now is very similar to the [thundering herd
problem](http://en.wikipedia.org/wiki/Thundering_herd_problem) \-- you start
hundreds or thousands of threads but might not have sufficient memory to keep
them all in RAM simultaneously. You might have introduced enough swapping to
drastically reduce the performance of your sending system, where a single
thread of execution might fit entirely in memory and run without stalling for
swapping.
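To make the worker-pool suggestion concrete, here is a rough sketch using only the standard library; the worker count is arbitrary, send_one() is a placeholder for whatever per-message code you settle on, and the host and credentials are the same placeholders as in the question:
import smtplib
import threading
import Queue   # named "queue" on Python 3

NUM_WORKERS = 3
tasks = Queue.Queue()

def send_one(server, to_addr):
    # placeholder: build the MIME message for to_addr and call server.sendmail() here
    pass

def worker():
    server = smtplib.SMTP('xxxxx', 25)   # one connection per worker, reused for every mail
    server.login('cxxxxx', 'yyyyyy')
    while True:
        to_addr = tasks.get()
        if to_addr is None:              # sentinel meaning "no more work"
            break
        send_one(server, to_addr)
    server.quit()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for line in open('recipients'):
    tasks.put(line.strip())
for _ in threads:
    tasks.put(None)                      # one sentinel per worker
for t in threads:
    t.join()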
|
How do I deploy web2py on PythonAnywhere?
Question: How do I get a basic web2py server up and running on
[PythonAnywhere](http://www.pythonanywhere.com)?
Answer: [update - 29/05] We now have a big button on the web tab that will do all this
stuff for you. Just click where it says _Web2Py_ , fill in your admin
password, and you're good to go.
Here's the old stuff for historical interest...
I'm a PythonAnywhere developer. We're not massive web2py experts (yet?) but
I've managed to get web2py up and running like this:
First download and unpack web2py:
wget http://www.web2py.com/examples/static/web2py_src.zip
unzip web2py_src.zip
Go to the PythonAnywhere "Web" panel and edit your `wsgi.py`. Add these lines:
import os
import sys
path = '/home/my_username/web2py'
if path not in sys.path:
sys.path.append(path)
from wsgihandler import application
replacing `my_username` with your username.
You will also need to **comment out the last two lines** in wsgi.py, where we
have the default hello world web.py application...
# comment out these two lines if you want to use another framework
#app = web.application(urls, globals())
#application = app.wsgifunc()
Thanks to Juan Martinez for his instructions on this part, which you can view
here: <http://web2py.pythonanywhere.com/>
then open a _Bash_ console, and `cd` into the main `web2py` folder, then run
python web2py.py --port=80
enter admin password
press ctrl-c
(this will generate the `parameters_80.py` config file)
then go to your _Web_ panel on PythonAnywhere, click **reload web app** , and
things should work!
|
Cython ImportError: No module named parallel
Question: I am trying to access the new parallel features of Cython 0.15 (using Cython
0.15.1). However, if I try this minimal example (testp.pyx), taken from
<http://docs.cython.org/src/userguide/parallelism.html>:
from cython.parallel import prange, parallel, threadid
cdef int i
cdef int sum = 0
for i in prange(n, nogil=True):
sum += i
print sum
with this setup.py:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy
ext = Extension("testp", ["testp.pyx"], include_dirs=[numpy.get_include()],
extra_compile_args=['-fopenmp'], extra_link_args=['-fopenmp'])
setup(ext_modules=[ext], cmdclass={'build_ext': build_ext})
when I `import testp`, Python tells me: `ImportError: No module named
parallel`. And in fact, if I browse the Cython package in the site-packages, I
cannot find any file or directory that is called `parallel`. But I thought it
should be included somewhere in the release? Could someone please clarify for
a confused user?
Answer: You can check all of your python modules in python command-line using:
>>> help('modules')
And then try to install/reinstall cython using easy_install or pip.
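A quick way to see which Cython the interpreter actually picks up (cython.parallel only ships with 0.15 and later, so an older copy earlier on the path would explain the error):
import Cython
print Cython.__version__   # should be 0.15.1 or newer for cython.parallel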
|
set up python emacs environment mac os x
Question: I have followed this tutorial for setting up a Python Emacs environment:
<http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/>
I'm able to get python-mode for Emacs and syntax highlighting, but when I try to use rope and ropemacs (for example the command C-c d for the documentation) I get that the command is not defined.
rope and ropemacs were installed with easy_install, and my .emacs file looks like the following. Am I missing something?
(custom-set-variables
;; custom-set-variables was added by Custom.
;; If you edit it by hand, you could mess it up, so be careful.
;; Your init file should contain only one such instance.
;; If there is more than one, they won't work right.
'(custom-enabled-themes (quote (tango-dark))))
(custom-set-faces
;; custom-set-faces was added by Custom.
;; If you edit it by hand, you could mess it up, so be careful.
;; Your init file should contain only one such instance.
;; If there is more than one, they won't work right.
)
(add-hook 'find-file-hook 'flymake-find-file-hook)
(provide 'init_python)
(autoload 'pymacs-apply "pymacs")
(autoload 'pymacs-call "pymacs")
(autoload 'pymacs-eval "pymacs" nil t)
(autoload 'pymacs-exec "pymacs" nil t)
(autoload 'pymacs-load "pymacs" nil t)
(eval-after-load "pymacs"
'(add-to-list 'pymacs-load-path "~/.emacs.d/vendor/pymacs-0.24-beta2"))
(pymacs-load "ropemacs" "rope-")
(setq ropemacs-enable-autoimport t)
(add-to-list 'load-path "~/.emacs.d/vendor/auto-complete-1.2")
(require 'auto-complete-config)
(add-to-list 'ac-dictionary-directories "~/.emacs.d/vendor/auto-complete-1.2/dict")
(ac-config-default)
Answer: You need to add your ropemacs dir to the load path, so that Emacs can find the package and load those commands for you:
(setq load-path (cons "/PATH-OF-ROPEEMACS" load-path))
To be honest, I don't think ropemacs is that useful for me; it slows down the Emacs startup quite a lot, but the benefit is not that obvious. When I want to look up issues, I simply go to SO.
|
How to implement a Lock with a timeout in Python 2.7
Question: Is there a way to implement a lock in Python for multithreading purposes whose
`acquire` method can have an arbitrary timeout? The only working solutions I
found so far use polling, which
* I find inelegant and inefficient
* Doesn't preserve the bounded waiting / progress guarantee of the lock as a solution to the critical section problem
Is there a better way to implement this?
Answer: to elaborate on Steven's comment suggestion:
import threading
import time
lock = threading.Lock()
cond = threading.Condition(threading.Lock())
def waitLock(timeout):
with cond:
current_time = start_time = time.time()
while current_time < start_time + timeout:
if lock.acquire(False):
return True
else:
cond.wait(timeout - current_time + start_time)
current_time = time.time()
return False
Things to notice:
* there are two `threading.Lock()` objects, one is internal to the `threading.Condition()`.
* when manipulating `cond`, its lock is acquired; the `wait()` operation unlocks it, though, so any number of threads can watch it.
* the wait is embedded inside a while loop that keeps track of the time. `threading.Condition` can become notified for reasons other than timeouts, so you still need to track the time if you really want it to expire.
* even with the condition, you still 'poll' the real lock, because it's possible for more than one thread to wake and race for the lock. if the lock.acquire fails, the loop returns to waiting.
* callers of this `waitLock` function should follow a `lock.release()` with a `cond.notify()` so that other threads waiting on it are notified that they should retry acquiring the lock. This is not shown in the example, but a short sketch follows below.
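A minimal sketch of that release pattern, reusing the lock and cond objects from the snippet above:
def releaseLock():
    lock.release()
    with cond:
        cond.notify()    # wake one waiter so it can retry lock.acquire()

# usage: a thread that obtained the lock via waitLock() releases it this way
if waitLock(5.0):
    try:
        pass             # critical section would go here
    finally:
        releaseLock()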
|
Newbie Apache / Python error
Question: Trying to configure apache on my laptop to execute a python script for a small
assignment:
* I've created a /scripts folder in root and granted it all permissions 777.
* In my virtual hosts file I have added `ScriptAlias /scripts/ /scripts/`
* Added a directory handler also in my conf file:
Options +ExecCGI FollowSymLinks Indexes MultiViews
AllowOverride All
Order allow,deny
Allow from all
AddHandler cgi-script .py
The script I'm trying to run is (a sample python test script):
#!/usr/bin/python
print "Content-type: text/html"
print
print "<pre>"
import os, sys
from cgi import escape
print "<strong>Python %s</strong>" % sys.version
keys = os.environ.keys()
keys.sort()
for k in keys:
print "%s\t%s" % (escape(k), escape(os.environ[k]))
print "</pre>"
When I access it via <http://127.0.0.1/scripts/result.py> I get an Internal
Server Error and in my error log I get the following error:
> [Mon Dec 05 20:58:30 2011] [error] [client 127.0.0.1] (2)No such file or
> directory: exec of '/scripts/result.py' failed
>
> [Mon Dec 05 20:58:30 2011] [error] [client 127.0.0.1] Premature end of
> script headers: result.py
Apache does have the suexec module loaded, from what I've found when running apachectl -v, and I suspect that this may have something to do with the problem. Also, running /usr/bin/python /scripts/result.py executes fine, but since Apache runs under a different user I guess this doesn't mean much.
Also I'm running this on OSX Lion, and I wasn't able to find how to run the
script from cli as apache, during my debugging.
Any help would be appreciated.
Answer: I don't have access to OSX, but I'd probably try something like this:
ScriptAlias /cgi-bin/ "/scripts/"
<Directory "/scripts">
Options +ExecCGI FollowSymLinks Indexes MultiViews
AllowOverride All
Order allow,deny
Allow from all
AddHandler cgi-script .py
</Directory>
|
python networking: asynchat handshake
Question: I am using Python asynchat to implement a network protocol. At connection time I need to send a command, and the server answers with a session.
My main problem is that I need to wait until I get the session response, but I am not sure how to implement this. Should I use socket.recv for the connection setup? Is that a good idea?
Answer: When writing a network application using asynchronous techniques, you _wait_
by recording your state somewhere and then letting the main loop continue. At
some future time, the data you're waiting for will become available, the main
loop will notify you of that fact, and you can combine the new data with the
recorded state to complete whatever task you are working on. Depending on the
specific task, you may need to go through this cycle many times before your
task is actually done.
These ideas are basically the same regardless of what asynchronous system
you're using. However, [Twisted](http://twistedmatrix.com/) is [a vastly
superior system](http://stackoverflow.com/questions/4384360/which-python-
async-library-would-be-best-suited-for-my-code-asyncore-twisted) to
[asynchat](http://docs.python.org/library/asynchat.html), so I'm not going to
try to explain any of the asynchat details. Instead, here's an example that
does the kind of thing you're asking about, using Twisted:
from twisted.internet.defer import Deferred
from twisted.internet.protocol import Protocol, Factory
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.internet import reactor
# Stream-oriented connections like TCP are handled by an instance
# of a Protocol subclass
class SomeKindOfClient(Protocol):
# When a new connection is established, the first thing that
# happens is this method is called.
def connectionMade(self):
# self.transport is set by the superclass, and lets us
# send data over the connection
self.transport.write("GREETING")
# a Deferred is a generic, composable API for specifying
# callbacks
self.greetingComplete = Deferred()
# Here's some local state
self._buffer = ""
# Whenever bytes arrive on the TCP connection, they're passed
# to this method
def dataReceived(self, bytes):
# Incorporate the network event data into our local state.
# This kind of buffering is always necessary with TCP, because
# there are no guarantees about how many bytes will be delivered
# at once (except that it will be at least 1), regardless of
# the size of the send() the peer did.
self._buffer += bytes
# Figure out if we're done - let's say the server response is 32
# bytes of something
if len(self._buffer) >= 32:
# Deliver it to whomever is waiting, by way of the Deferred
# object
greeting, self._buffer = self._buffer[:32], self._buffer[32:]
complete = self.greetingComplete
self.greetingComplete = None
complete.callback(greeting)
# Otherwise we'll keep waiting until dataReceived is called again
# and we have enough bytes.
# One of the normal ways to create a new client connection
f = Factory()
f.protocol = SomeKindOfClient
e = TCP4ClientEndpoint(reactor, "somehost", 1234)
# Connect returns one of those Deferreds - letting us specify a function
# to call when the connection is established. The implementation of
# connect is also doing basically the same kind of thing as you're asking
# about.
d = e.connect(f)
# Execution continues to this point before the connection has been
# established. Define a function to use as a callback when the connection
# does get established.
def connected(proto):
# proto is an instance of SomeKindOfClient. It has the
# greetingComplete attribute, which we'll attach a callback to so we
# can "wait" for the greeting to be complete.
d = proto.greetingComplete
def gotGreeting(greeting):
# Note that this is really the core of the answer. This function
# is called *only* once the protocol has decided it has received
# some necessary data from the server. If you were waiting for a
# session identifier of some sort, this is where you might get it
# and be able to proceed with the remainder of your application
# logic.
print "Greeting arrived", repr(greeting)
# addCallback is how you hook a callback up to a Deferred - now
# gotGreeting will be called when d "fires" - ie, when its callback
# method is invoked by the dataReceived implementation above.
d.addCallback(gotGreeting)
# And do the same kind of thing to the Deferred we got from
# TCP4ClientEndpoint.connect
d.addCallback(connected)
# Start the main loop so network events can be processed
reactor.run()
To see how this behaves, you can launch a simple server (eg `nc -l 1234`) and
point the client at it. You'll see the greeting arrive and you can send some
bytes back. Once you've sent back 32, the client will print them (and then
hang around indefinitely, because I implemented no further logic in that
protocol).
|
Does the Rdio Desktop API give tracks unique IDs?
Question: I've built a small desktop app for myself that logs listens from iTunes and
Rdio so I can create powerful playlists based on how I listen to music over
time, but it requires each track to have a unique ID that will never change,
regardless of which app I used to listen. I currently retrieve unique IDs from
iTunes using appscript in Python:
from appscript import *
it = app('iTunes')
it.current_track.persistent_ID()
However, when poking through the Rdio Suite AppleScript Dictionary I don't see
any kind of unique ID attached to tracks. Suggestions?
Answer: Rdio has an [API](http://developer.rdio.com/). Here's some info on how to get
the currently playing track: <http://groups.google.com/group/rdio-
api/browse_thread/thread/23d52c77b4e56a55/3b8a897f0835fd90>
You can also get the url of the currently playing track via Applescript. This
should serve as a sort of unique ID for the track:
osascript -e 'tell app "Rdio" to get the rdio url of the current track'
|
Custom include tag
Question: I want to make a custom include tag (like `{% smart_include something %}` ),
which realize what kind a thing we want to include and then call regular `{%
include %}` tag. That's should be something like this:
@register.simple_tag
def smart_include(something):
if something == "post":
template_name = "post.html"
return regular_include_tag(template_name)
Is there a way to use `{% include %}` tag in python code, and how exactly?
**UPD.** Turns out, the best way to solve this problem is to just use the `render_to_string` shortcut.
Answer: If you look into **django.template.loader_tags** you will find a function **do_include**, which is basically the function that is called when we use {% include %}.
So you should be able to import it and call the function itself in Python.
I have not tried this, but I think it should work.
Why does this work in Python 3 IDLE on Windows and not in the terminal on Ubuntu?
Question: I have a program where I use `input()` to take input from STDIN.
I use the input to read the first word from a line and use it as a dictionary
key, with every subsequent word added to a list which is the value of the
aformentioned key.
The input is in a file `names.txt`:
Victor Bertha Amy Diane Erika Clare
Wyatt Diane Bertha Amy Clare Erika
Xavier Bertha Erika Clare Diane Amy
Yancey Amy Diane Clare Bertha Erika
Zeus Bertha Diane Amy Erika Clare
Amy Zeus Victor Wyatt Yancey Xavier
Bertha Xavier Wyatt Yancey Victor Zeus
Clare Wyatt Xavier Yancey Zeus Victor
Diane Victor Zeus Yancey Xavier Wyatt
Erika Yancey Wyatt Zeus Xavier Victor
So for instance, `men["Victor"] = ["Bertha","Amy","Diane","Erika","Clare"]`.
The code is in the file `GS.py` (an implementation of Gale-Shapley):
if __name__ == "__main__":
## Data Dictionary
''' Name : Preferences '''
men = dict()
women = dict()
''' List of unmatched men '''
freeMen = list()
''' Name : How far down in preferences '''
count = dict()
''' Name : Current Match '''
wife = dict()
husband = dict()
## Reading Input
data = input("").split("\n")
print(data)
readingMen = True
for l in data:
line = l.split()
print(line)
if len(line) > 1:
newPerson = line[0]
newPersonPreferences = list()
for i in range(1,len(line)):
newPersonPreferences.append(line[i])
if readingMen:
print("man")
print(newPersonPreferences)
men[newPerson] = newPersonPreferences
wife[newPerson] = 0
count[newPerson] = 0
freeMen.append(newPerson)
else:
print("woman")
print(newPersonPreferences)
women[newPerson] = newPersonPreferences
husband[newPerson] = 0
elif len(line) == 1:
raise IOError(l + "\nis an invalid line.")
else:
readingMen = False
## Proposing
while len(freeMen) != 0:
m = freeMen[0]
w = men[m][count[m]]
count[m] += 1
if husband[w] == 0:
husband[w] = m
wife[m] = w
freeMen.remove(m)
else:
try:
if women[w].index(husband[w], women[w].index(m)):
freeMen.append(husband[w])
wife[husband[w]] = 0
husband[w] = m
wife[m] = w
freeMen.remove(m)
except ValueError:
pass
## Match Printing
print()
for m in wife:
print(m, wife[m])
When using IDLE on Windows, I just paste the contents of this file and hit
enter, and it works.
But using Ubuntu, I do `python3 GS.py < names.txt` and I get this:
me@glados:~$ python3 GS.py < names.txt
['Victor Bertha Amy Diane Erika Clare']
['Victor', 'Bertha', 'Amy', 'Diane', 'Erika', 'Clare']
man
['Bertha', 'Amy', 'Diane', 'Erika', 'Clare']
Traceback (most recent call last):
File "GS.py", line 83, in <module>
if husband[w] == 0:
KeyError: 'Bertha'
_(edited)_ Now when I do `cat names.txt | python3 GS.py` I get this:
ajg9132@glados:~$ cat names.txt | python GS.py
Traceback (most recent call last):
File "GS.py", line 50, in <module>
data = input("").split("\n")
File "<string>", line 1
Victor Bertha Amy Diane Erika Clare
^
SyntaxError: invalid syntax
I've no idea what to do - kind of ignorant regarding I/O. Any help?
_Edit note:_ I thought the two different bash commands I gave were equivalent,
but then again, I'm a total noob, so an explanation for why they're different
would help too...
To clear up ambiguity, this is for an Algo homework... (sad that I understand
the algorithm but not the low-level details of the OS) and I need to have a
specific input and output scheme. e.g.
spock $ java GS
Victor Bertha Amy Diane Erika Clare
Wyatt Diane Bertha Amy Clare Erika
Xavier Bertha Erika Clare Diane Amy
Yancey Amy Diane Clare Bertha Erika
Zeus Bertha Diane Amy Erika Clare
Amy Zeus Victor Wyatt Yancey Xavier
Bertha Xavier Wyatt Yancey Victor Zeus
Clare Wyatt Xavier Yancey Zeus Victor
Diane Victor Zeus Yancey Xavier Wyatt
Erika Yancey Wyatt Zeus Xavier Victor
Victor Amy
Wyatt Clare
Xavier Bertha
Yancy Erika
Zeus Diane
spock $
The only reason why I wasn't doing this was because pasting several lines of
text into PuTTY made bash try to interpret each line as a command. I can't
even.
Answer: The meaning of `input()` has changed.
In Python 3.2: <http://docs.python.org/py3k/library/functions.html#input>
In Python 2.7.2: <http://docs.python.org/library/functions.html#input>
You can see this far easier with two small testing programs. The only
difference is one uses the Python 2.7 interpreter and the other uses the
Python 3.2 interpreter:
$ cat input27.py
#!/usr/bin/python2.7
data = input("")
for l in data.split("\n"):
print(l)
$ cat input32.py
#!/usr/bin/python3.2
data = input("")
for l in data.split("\n"):
print(l)
$ ./input27.py < names.txt
Traceback (most recent call last):
File "./input27.py", line 2, in <module>
data = input("")
File "<string>", line 1
Victor Bertha Amy Diane Erika Clare
^
SyntaxError: invalid syntax
$ ./input32.py < names.txt
Victor Bertha Amy Diane Erika Clare
$
Note that even though the Python 3.2 version doesn't throw errors, it also
doesn't print all the lines in `names.txt` as one might expect.
I don't think the `input()` method is worth using. It would be easier to use the new-fangled [`for line in file:`](http://docs.python.org/library/stdtypes.html#file.next) approach instead:
$ cat fixed_input27.py
#!/usr/bin/python2.7
import sys
for line in sys.stdin:
print(line.split()[0])
$ cat fixed_input32.py
#!/usr/bin/python3.2
import sys
for line in sys.stdin:
print(line.split()[0])
$ ./fixed_input27.py < names.txt
Victor
Wyatt
Xavier
Yancey
Zeus
Amy
Bertha
Clare
Diane
Erika
$ ./fixed_input32.py < names.txt
Victor
Wyatt
Xavier
Yancey
Zeus
Amy
Bertha
Clare
Diane
Erika
$
(I removed the one blank line from the `names.txt` because it caused this
simple program to throw an error. It won't actually be an issue in your full-
fledged program, because you properly handle the blank line.)
I can't explain why `input()` _did_ work under Windows, but `input()` feels
like a horrible enough interface (who thought running the user-supplied input
through `eval` was a good idea?!? sheesh) to just re-write it.
**Update**
Okay, I was intrigued enough to solve this all the way. I took all your
debugging code back out and switched to using the `for l in sys.stdin:`
approach:
$ ./GS.py
Victor Bertha Amy Diane Erika Clare
Wyatt Diane Bertha Amy Clare Erika
Xavier Bertha Erika Clare Diane Amy
Yancey Amy Diane Clare Bertha Erika
Zeus Bertha Diane Amy Erika Clare
Amy Zeus Victor Wyatt Yancey Xavier
Bertha Xavier Wyatt Yancey Victor Zeus
Clare Wyatt Xavier Yancey Zeus Victor
Diane Victor Zeus Yancey Xavier Wyatt
Erika Yancey Wyatt Zeus Xavier Victor
Wyatt Clare
Xavier Bertha
Yancey Erika
Zeus Diane
Victor Amy
$ cat GS.py
#!/usr/bin/python3.2
if __name__ == "__main__":
import sys
## Data Dictionary
''' Name : Preferences '''
men = dict()
women = dict()
''' List of unmatched men '''
freeMen = list()
''' Name : How far down in preferences '''
count = dict()
''' Name : Current Match '''
wife = dict()
husband = dict()
## Reading Input
readingMen = True
for l in sys.stdin:
line = l.split()
if len(line) > 1:
newPerson = line[0]
newPersonPreferences = list()
for i in range(1,len(line)):
newPersonPreferences.append(line[i])
if readingMen:
men[newPerson] = newPersonPreferences
wife[newPerson] = 0
count[newPerson] = 0
freeMen.append(newPerson)
else:
women[newPerson] = newPersonPreferences
husband[newPerson] = 0
elif len(line) == 1:
raise IOError(l + "\nis an invalid line.")
else:
readingMen = False
## Proposing
while len(freeMen) != 0:
m = freeMen[0]
w = men[m][count[m]]
count[m] += 1
if husband[w] == 0:
husband[w] = m
wife[m] = w
freeMen.remove(m)
else:
try:
if women[w].index(husband[w], women[w].index(m)):
freeMen.append(husband[w])
wife[husband[w]] = 0
husband[w] = m
wife[m] = w
freeMen.remove(m)
except ValueError:
pass
## Match Printing
print()
for m in wife:
print(m, wife[m])
$
Note that you have to hit `^D` when you're done pasting in the input if you
run it this way. (I _much_ prefer IO redirection `./GS.py < names.txt`, but if
your professor will copy and paste, then make sure your prof knows to hit `^D`
to signal the end of input.)
|
Map multiple lists of values to a list of keys in a python dictionary?
Question: I want to map some values (a list of lists) to some keys (a list) in a Python
dictionary. I read [Map two lists into a dictionary in
Python](http://stackoverflow.com/questions/209840/map-two-lists-into-a-
dictionary-in-python)
and figured I could do it this way:
headers = ['name', 'surname', 'city']
values = [
['charles', 'rooth', 'kentucky'],
['william', 'jones', 'texas'],
['john', 'frith', 'los angeles']
]
data = []
for entries in values:
data.append(dict(itertools.izip(headers, entries)))
But I was just wondering if there is a nicer way to go?
Thanks
PS: I'm on Python 2.6.7
Answer:
from functools import partial
from itertools import izip, imap
data = map(dict, imap(partial(izip, headers), values))
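The same result can also be written as a plain list comprehension, which may read more simply (still fine on Python 2.6):
data = [dict(zip(headers, row)) for row in values]
print data[0]   # e.g. {'name': 'charles', 'surname': 'rooth', 'city': 'kentucky'} (key order may vary)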
|
Trouble installing pyserial with macports python (python27) on Mac
Question: I installed py-serial using macports (on my mac) but it installed python24
instead of using my existing python27.
So if I use the python24 that MacPorts py-serial installed, then I can import py-serial:
# /opt/local/bin/python2.4 -c 'import serial'
But I cannot import it into python27 using either of these
# /opt/local/bin/python2.7 -c 'import serial'
# python -c 'import serial'
I get this error
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named serial
I made sure I have the correct version selected:
# port select --set python python27
I tried uninstall and reinstall it still gives the same error as above
# sudo port uninstall
# port install py-serial
I think this is a problem related to my macports/python, not py-serial. Any
help is appreciated.
Answer: py-serial is the port for python2.4 for later versions of python use the
numbered version e.g.
sudo port install py27-serial
It is an historical error - originally they thought they should just have one
version of python packages then it was realised that you could have multiple
versions of python and that many packages are version dependant.
|
How to use std::vector in PHP using SWIG
Question: I am working on wrapping a C++ API in PHP using SWIG. I am most of the way
there but I am having problems with a function that returns a vector. The
header looks something like this:
#include <vector>
namespace STF
{
class MyClass
{
public:
const std::vector<MyOtherClass> &getList();
};
}
The interface file looks like this:
%include <std_vector.i>
%import "STF_MyOtherClass.i"
%{
#include "STF_MyOtherClass.h"
#include "STF_MyClass.h"
%}
%include "STF_MyClass.h"
I seem to be able to call the function fine but it is returning a PHP resource
instead of an object. Specifically it is a resource of type:
"_p_std__vectorT_STF__MyClass_t".
How can I either get this to return an object that I can iterate through
(preferably with a foreach loop) or how can I iterate through this resource?
**Update:**
I have been working on a solution based off of what I read here:
<http://permalink.gmane.org/gmane.comp.programming.swig/16817>
Basically I am trying to convert the vector into a PHP array:
%typemap(out) std::vector<STF::MyOtherClass>
{
array_init( return_value );
std::vector<STF::MyOtherClass>::iterator itr;
itr = $1.begin();
for( itr; itr != $1.end(); itr++ )
{
zval* tmp;
MAKE_STD_ZVAL( tmp );
SWIG_SetPointerZval( tmp, &*itr, $descriptor(STF::MyOtherClass*), 2 );
add_next_index_zval( return_value, tmp );
}
}
This is very close to working. I put a breakpoint inside the wrapper code
within SWIG_ZTS_SetPointerZval. When it goes to initialize the object it does
a zend_lookup_class for "stf__myotherclass" which fails (it doesn't find a
class). I am not sure why it can't find the class.
Answer: You're almost there, but as well as `%include <std_vector.i>` you'll also need
something like:
%template (MyVector) std::vector<MyOtherClass>;
This instructs SWIG to expose vectors of `MyOtherClass` to the target language
as a type called `MyVector`. Without doing this SWIG doesn't know which types
you want to instantiate `std::vector` for, so it's reduced to the default
wrapping.
Side note:
Is there a reason you've got the `const` in `const std::vector<MyOtherClass>
getList();` when it's not a reference? I'd either make it a reference and make
the method `const` also (`const std::vector<MyOtherClass>& getList() const;`)
or drop the `const` entirely since it does nothing there.
|
Easier way to create a JSON object from an SQLObject
Question: EDIT -- took the code from below and made it so it can handle ForeignKeys and Decimal numbers (although I'm doing a very forced float conversion). It returns a dict now so it can be recursive.
from sqlobject import SQLObject
from decimal import Decimal
def sqlobject_to_dict(obj):
json_dict = {}
cls_name = type(obj)
for attr in vars(cls_name):
if isinstance(getattr(cls_name, attr), property):
attr_value = getattr(obj, attr)
attr_class = type(attr_value)
attr_parent = attr_class.__bases__[0]
if isinstance(getattr(obj, attr), Decimal):
json_dict[attr] = float(getattr(obj, attr))
elif attr_parent == SQLObject:
json_dict[attr] = sqlobject_to_dict(getattr(obj, attr))
else:
json_dict[attr] = getattr(obj, attr)
return json_dict
EDIT -- changed to add the actual data model -- there are generated values
that need to be accessed and Decimal() columns that need dealing with as well.
So I've seen this: [return SQL table as JSON in
python](http://stackoverflow.com/questions/3286525/return-sql-table-as-json-
in-python) but it's not really what I'm looking for -- that's "brute force" --
you need to know the names of the attributes of the object in order to
generate the JSON response.
What I'd like to do is something like this (the name of the class and its attributes are not important):
class BJCPStyle(SQLObject):
name = UnicodeCol(length=128, default=None)
beer_type = UnicodeCol(length=5, default=None)
category = ForeignKey('BJCPCategory')
subcategory = UnicodeCol(length=1, default=None)
aroma = UnicodeCol(default=None)
appearance = UnicodeCol(default=None)
flavor = UnicodeCol(default=None)
mouthfeel = UnicodeCol(default=None)
impression = UnicodeCol(default=None)
comments = UnicodeCol(default=None)
examples = UnicodeCol(default=None)
og_low = SGCol(default=None)
og_high = SGCol(default=None)
fg_low = SGCol(default=None)
fg_high = SGCol(default=None)
ibu_low = IBUCol(default=None)
ibu_high = IBUCol(default=None)
srm_low = SRMCol(default=None)
srm_high = SRMCol(default=None)
abv_low = DecimalCol(size=3, precision=1, default=None)
abv_high = DecimalCol(size=3, precision=1, default=None)
versions = Versioning()
def _get_combined_category_id(self):
return "%s%s" % (self.category.category_id, self.subcategory)
def _get_og_range(self):
low = self._SO_get_og_low()
high = self._SO_get_og_high()
if low == 0 and high == 0:
return "varies"
else:
return "%.3f - %.3f" % (low, high)
def _get_fg_range(self):
low = self._SO_get_fg_low()
high = self._SO_get_fg_high()
if low == 0 and high == 0:
return "varies"
else:
return "%.3f - %.3f" % (low, high)
def _get_srm_range(self):
low = self._SO_get_srm_low()
high = self._SO_get_srm_high()
if low == 0 and high == 0:
return "varies"
else:
return "%.1f - %.1f" % (low, high)
def _get_abv_range(self):
low = self._SO_get_abv_low()
high = self._SO_get_abv_high()
if low == 0 and high == 0:
return "varies"
else:
return "%.2f%% - %.2f%%" % (low, high)
def _get_ibu_range(self):
low = self._SO_get_ibu_low()
high = self._SO_get_ibu_high()
if low == 0 and high == 0:
return "varies"
else:
return "%i - %i" % (low, high)
Is there an easy, Pythonic way to write that magic to_json() function?
Answer: You can use the python [json module](http://docs.python.org/library/json.html)
with the SQLObject [sqlmeta](http://sqlobject.org/SQLObject.html#class-
sqlmeta) class. Like this:
def to_json(obj):
return json.dumps(dict((c, getattr(obj, c)) for c in obj.sqlmeta.columns))
When I run this with your class `Foo` I get:
>>> print to_json(f)
{"bar": "test", "lulz": "only for the", "baz": true}
**Edit:** if you want to include [magic
attributes](http://sqlobject.org/SQLObject.html#adding-magic-attributes-
properties) in your json string and you don't mind using something of a hack,
you could abuse the fact that the attributes of your object are python
properties. For example, if I add a magic attribute `foo` to your original
sample class:
class Foo(SQLObject):
bar = UnicodeCol(length=128)
baz = BoolCol(default=True)
lulz = UnicodeCol(length=256)
def _get_foo(self):
return "foo"
Then I can define the `to_json()` function like this:
def to_json(obj):
cls = type(obj)
d = dict((c, getattr(obj, c)) for c in vars(cls) if isinstance(getattr(cls, c), property))
return json.dumps(d)
Now, if I do this:
f = Foo(bar = "test", lulz = "only for the")
print to_json(f)
I get the following result:
{"baz": true, "lulz": "only for the", "bar": "test", "foo": "foo"}
|
How to import modules from alternate locations when using Python IDLE?
Question: I've been trying to figure this out for more than 2 days, scouring the
internet and the tutorial, but I still haven't solved my problem. I'm a real
newb and don't yet really know what I'm doing.
Software I use: Mac OS X 10.6 Python v3.2.2 Interactive interpreter (IDLE)
Problem: IDLE's default directory is /Users/ME/Documents/. Files with the
extension .py can only be opened when located in this directory. However, I
made a folder where I would like to save all the .py files etc. that have to do
with this software. Currently, IDLE cannot load .py files from the directory I
chose.
What I did first was to run this in IDLE:
import sys
sys.path.append('Users/Mydir/')
sys.path
However, in an already existing thread from 2010 I read sys.path is for the
Interpreter ONLY, and that if I am to change this I need to modify the
PYTHONPATH environment variable:
**PYTHONPATH="/Me/Documents/mydir:$PYTHONPATH" export PYTHONPATH**
However, I'm confused about how to use this and cannot find answers to the
following questions: 1) Did PYTHONPATH (.py?) already exist on my computer when
I installed the program? If YES, where is it? I cannot find it anywhere. If NO,
I need to create one. But where, and what should its content be so that IDLE
can load files from a non-default directory? Should it contain only the words
in bold?
I hope I made my problem clear.
Cheers
Answer: It's not totally clear to me what you mean by `load`. That could mean `Open`
and `Close` files in the IDLE editor. Or it could mean being able to use the
Python `import` statement to load existing Python modules from other files.
I'll assume the latter, that by `load` you mean `import`.
There are two general ways to launch IDLE on Mac OS X. One is from the command
line of a terminal session; if you installed Python 3.2 using the python.org
installers, by default typing `/usr/local/bin/idle3.2` will work. The other
way is by launching `IDLE.app` from `/Applications/Python 3.2`, i.e. by
double-clicking its icon. Because you say the default directory for files is
your `Documents` folder, I'm assuming you are using the second method because
`IDLE.app` sets `Documents` as its current working directory, which becomes
the default directory for *Open*s and *Save*s and is automatically added as
the first directory on Python's `sys.path`, the list of directories that
`Python` uses to search for modules when `import`ing.
If you want to add other directories to `sys.path`, as you've noted you can
use the `PYTHONPATH` environment variable to do so. The standard way to do
this is to add an `export PYTHONPATH=...` definition to a shell startup
script, like `.bash_profile`. However, if you use `IDLE.app`, no shell is
involved so commands in `.bash_profile` have no effect.
While there are ways to modify the environment variables for OS X GUI apps, in
this case, a simpler solution is to use the other method to invoke IDLE, from
the command line of a shell session, using either `/usr/local/bin/idle3.2` or,
if you've run the `Update Shell Profile` command in the `/Applications/Python
3.2` folder (and opened a new terminal session), just `idle3`. Then, a
PYTHONPATH environment variable you set up will be inherited by that IDLE.
BTW, there is no direct way to modify the initial current working directory of
`IDLE.app` from `Documents` other than modifying the code in IDLE. If you
start IDLE from a command line, it inherits the current working directory of
the shell.
[UPDATE] But rather than fooling around with defining `PYTHONPATH`, here is
another even simpler, and probably better, approach that should work with
either `IDLE.app` or the command line `idle`. It takes advantage of [Python
path configuration (`.pth`)
files](http://docs.python.org/py3k/library/site.html) and [user site-package
directories](http://docs.python.org/py3k/library/site.html#site.USER_BASE).
Assuming you are using a standard Python framework build of 3.2 (like from a
python.org installer) on Mac OS X, create a path file for the directory you
want to permanently add to `sys.path`. In a terminal session:
mkdir -p ~/Library/Python/3.2/lib/python/site-packages
cd ~/Library/Python/3.2/lib/python/site-packages
cat >my_paths.pth <<EOF
/Users/YOUR_USER_NAME/path/to/your_additional_python_directory_1
/Users/YOUR_USER_NAME/path/to/your_additional_python_directory_2
EOF
Now, whenever you run that Python 3.2 or IDLE under your user name, the
directories you have added to the `.pth` file will automatically be added to
`sys.path`.
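As a quick sanity check (a sketch; the directories are the placeholder ones from
the `.pth` example above), you can confirm the effect from IDLE's shell or any
Python 3.2 session:
import sys
# the directories listed in my_paths.pth should now appear in the search path
for p in sys.path:
    print(p)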
BTW, the exact path location of the user site-packages directory for versions
of Python earlier than 3.2 or 2.7 may be slightly different. Also, on other
Unix-y systems, the default location for the user site-package directory is
`~/.local/lib/python3.2/site-packages`.
|
html forms to django
Question: > **Possible Duplicate:**
> [python/django adder](http://stackoverflow.com/questions/8423687/python-
> django-adder)
I am new to django and tried to make a site that takes my number (input from
an HTML form) and adds/subtracts it to another.
So I tried this in views.py:
def calc(request):
try:
i1 = int(request.GET['i1'])
i2 = int(request.GET['i2'])
except MultiValueDictKeyError:
raise Http404()
u = i2 + i1
return ("answer.html", u)
I also showed this to some other people and they told me to make a 'link/bond'
to the HTML; how can I make such a link without using a database?
edit: I am very sorry guys, I started to learn django from The Djangobook and
they didn't explain it as clearly as you guys or the documentation, my sincere
apologies
Answer: Ok, I'm going to answer this simply so we can be done with this question,
which was also posted yesterday. I'm not trying to be rude here, but you
really need to do some research, read some tutorials, etc, _before_ asking a
question that doesn't really make a whole lot of sense. I'm sticking the save
logic of the form in the view so you can see what's going on.
#forms.py
from django import forms
class AdditionForm(forms.Form):
first_number = forms.IntegerField()
second_number = forms.IntegerField()
#views.py
from django.shortcuts import render
from [your_app].forms import AdditionForm
def calc(request):
form = AdditionForm(request.POST or None)
answer = None
if request.method == 'POST':
if form.is_valid():
first_number = form.cleaned_data.get('first_number', 0)
second_number = form.cleaned_data.get('second_number', 0)
answer = first_number + second_number
return render(request, 'some-file.html', {'form' : form, 'answer' : answer})
#some-file.html
<html>
<head>
<title>Simple Calculator</title>
</head>
<body>
<form action="." method="post" enctype="application/x-www-form-urlencoded">
<fieldset>
<ol>
{{ form.as_ul }}
<li><input type="submit" value="Add the Numbers" />
</ol>
</fieldset>
</form>
{% if answer %}
<p>Your answer was: {{ answer }}</p>
{% endif %}
</body>
</html>
|
Colorbar does not show values when using LogNorm()
Question: I am trying to make a contour plot with the contour levels scaled by the log
of the values. However, the colorbar does not show enough values next to the
colors. Here is a simple example.
import numpy as N
import matplotlib as M
import matplotlib.pyplot as PLT
# Set up a simple function to plot
values = N.empty((10,10))
for xi in range(10):
for yi in range(10):
values[xi,yi] = N.exp(xi*yi/10. - 1)
levels = N.logspace(-1, 4, 10)
log_norm = M.colors.LogNorm()
# Currently not used - linear scaling
linear_norm = M.colors.Normalize()
# Plot the function using the indices as the x and y axes
PLT.contourf(values, norm=log_norm, levels=levels)
PLT.colorbar()
If you switch log_norm for linear_norm in the contourf call, you'll see that
the colorbar does have values. Of course, using linear_norm means the colors
are scaled linearly and the contours are not well distributed for this
function.
I'm using python 2.7.2, enthought edition which comes with matplotlib, on Mac
OS 10.7.
Answer: Add a format to the call to `PLT.colorbar`:
import numpy as N
import matplotlib as M
import matplotlib.pyplot as PLT
# Set up a simple function to plot
x,y = N.meshgrid(range(10),range(10))
values = N.exp(x*y/10. - 1)
levels = N.logspace(-1, 4, 10)
log_norm = M.colors.LogNorm()
# Currently not used - linear scaling
linear_norm = M.colors.Normalize()
# Plot the function using the indices as the x and y axes
PLT.contourf(values, norm=log_norm, levels=levels)
PLT.colorbar(format='%.2f')
PLT.show()

|
Call a python main from another python script
Question: I have been given some huge command line tool by a colleague. The main
reads a bunch of arguments, parses those using the elegant `OptionParser` later
on, and does the job.
if __name__ == '__main__':
main(sys.argv)
I can either dig into the code and copy-paste loads of code, or find a way to
use a "command line" call from my python script. I guess the second option is
preferable as it prevents me from randomly extracting code. Would you agree?
Answer: You don't need to do cut and paste or launch a new Python interpreter. You
should be able to [`import` the other
script](http://docs.python.org/reference/simple_stmts.html#import).
For example, if your colleague's script is called `somescript.py` you could
do:
import somescript
args = ['one','two']
somescript.main(args)
|
python - tkinter -- Having trouble with resizing multiple grids
Question: I have the following code and I can't manage to resize the frames properly.
When I resize, everything disappears. When I run it, it's OK, but when I resize
it, I want to keep the original view.
from Tkinter import *
import os,sys
ALL=N+S+E+W
class Application(Frame):
def __init__(self,master=None):
Frame.__init__(self,master)
self.master.rowconfigure(0,weight=1)
self.master.columnconfigure(0,weight=1)
self.grid(sticky=ALL)
self.rowconfigure(0,weight=1)
myframe1=Frame(self,bg='green')
myframe1.bind("<Button-1>",self.handler1)
myframe1.grid(row=0,column=0,rowspan=1,columnspan=2,sticky=ALL)
self.rowconfigure(1,weight=1)
myframe2=Frame(self,bg='blue')
myframe2.bind("<Button-1>",self.handler2)
myframe2.grid(row=1,column=0,rowspan=1,columnspan=2,sticky=ALL)
buttons=('Red','Blue','Green','Black')
button=[0]*4
for c in range(4):
self.rowconfigure(c+2,weight=1)
self.columnconfigure(c,weight=1)
button[c]=Button(self,text="{0}".format(buttons[c]),command=lambda x=buttons[c]:self.colors(x))
button[c].grid(row=2,column=c,sticky=E+W)
self.columnconfigure(4,weight=1)
self.rowconfigure(6,weight=1)
button1=Button(self,text='{0}'.format('Open'),command=self.content)
button1.grid(row=2,column=4,sticky=E+W)
f=Frame(self,bg='red')
self.myentry=Entry(f)
self.myentry.grid(row=0,column=4,sticky=ALL)
self.text=Text(f)
self.text.grid(row=1,column=4,sticky=ALL)
f.grid(row=0,column=2,rowspan=2,columnspan=3,sticky=ALL)
.......
I tried many combinations of rowconfigure, columnconfigure, rowspan and
columnspan, but I failed!
(Screenshots omitted: the original view, the view after resizing in one
direction, and after resizing in another direction.)
The white area is the text widget, which I want to be resizable (as well as the
blue and green areas).
UPDATED---------------
f=Frame(self,bg='red')
self.myentry=Entry(f)
self.myentry.grid(row=0,column=4,sticky=ALL)
self.text=Text(f)
self.text.grid(row=1,column=4,sticky=ALL)
f.grid(row=0,column=2,rowspan=2,columnspan=3,sticky=ALL)
Answer: Your problem is that you seem to not quite understand how grid works. For
example, you are putting only two widgets in the red frame (self.myentry and
self.text) yet you are putting them in column 2 and 4. Are you aware that the
columns are relative to their parent, not the GUI as a whole? You want them in
column 0 _of the red frame_, then you want the red frame in the second column
of _its_ parent.
The way to solve this is to divide and conquer. First, divide the main screen
up into its logical parts, and lay out those logical parts so they resize
properly. Then, for anything inside each part: lather, rinse, repeat. Using
frames for organization is the way to go.
Here's how I would tackle your problem (though there's certainly more than one
way to solve this problem). First, you have two major areas of the screen: the
top portion which has the green, blue and red frames and their contents, and
the bottom part which holds the buttons. The top area should grow and shrink
in all directions, the bottom area only grows in the X direction. I would
create two frames for this, one for each part, and use pack since pack is the
simplest geometry manager. The top frame should be configured to fill both
directions and expand. The bottom part (with the buttons) should only fill in
the X direction.
You now have two areas that are independent of each other and have proper
resize behavior: the "main" area and the "toolbar" area. You are free to
arrange the inner contents of these frames however you wish without having to
worry about how that affects the main layout.
In the bottom frame, if you want all the widgets to be the same size, use pack
and have them all fill X and expand, and they will equally fill the area. If
you want them to be different sizes, use grid so you can control each column
separately.
For the top part, it has three sub-sections: the red, green and blue frames.
Since they are not all arranged horizontally or vertically I would use grid.
Place green in cell 0,0, blue in cell 0,1, and red in cell 1,1 spanning two
rows. Give row 0 and column 1 a weight of 1 so it takes up all the slack.
As I wrote earlier, this isn't the only way to "divide and conquer" this
specific problem. Instead of seeing the main app as two parts -- top and
bottom, with the top part having three sub-parts, another choice is to see
that your main window has four parts: green, blue, red and toolbar. The key
isn't to pick the perfect definition, but to break the layout problem down
into chunks working from the outside in.
Here is a working example:
from Tkinter import *
ALL=N+S+E+W
class Application(Frame):
def __init__(self,master=None):
Frame.__init__(self,master)
# the UI is made up of two major areas: a bottom row
# of buttons, and a top area that fills the result of
# UI
top_frame = Frame(self)
button_frame = Frame(self)
button_frame.pack(side="bottom", fill="x")
top_frame.pack(side="top", fill="both", expand=True)
# top frame is made up of three sections: two smaller
# regions on the left, and a larger region on the right
ul_frame = Frame(top_frame, background="green", width=200)
ll_frame = Frame(top_frame, background="blue", width=200)
right_frame = Frame(top_frame, background="red")
ul_frame.grid(row=0, column=0, sticky=ALL)
ll_frame.grid(row=1, column=0, sticky=ALL)
right_frame.grid(row=0, column=1, rowspan=2, sticky=ALL)
top_frame.columnconfigure(1, weight=1)
top_frame.rowconfigure(0, weight=1)
top_frame.rowconfigure(1, weight=1)
# the right frame is made up of two widgets, an entry
# on top and a text below
entry = Entry(right_frame)
text = Text(right_frame)
entry.pack(side="top", fill="x")
text.pack(side="top", fill="both", expand=True)
# the button frame has five equally spaced buttons
for color in ('Red', 'Blue', 'Green', 'Black'):
b = Button(button_frame, text=color)
b.pack(side="left", fill="x", expand=True)
quit_button = Button(button_frame, text="Quit")
quit_button.pack(side="left", fill="x", expand=True)
root = Tk()
app = Application(root)
app.pack(fill="both", expand=True)
root.mainloop()
|
How to use paramiko to talk interactivity with a remote application?
Question: I am trying to use paramiko to interactively speak with a command line
application, but I'm doing something wrong.
# that's the echo.py, the script I am connecting to via SSH
import sys, time
while 1:
x = sys.stdin.readline()
sys.stdout.write("got-" + x) # x already contains newline
# client.py
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='user', password='pass')
(stdin, stdout, stderr) = ssh.exec_command(r"python C:\test\echo.py")
stdin.write("xxx\n")
print "got back [%s]" % stdout.read() # <<< here the code got stuck, getting nothing back
Note: the only case that worked for me was to have the `echo.py` quit and do
`stdout.readlines()` on the client, but obviously this is not what I was
looking for.
I need to be able to **send and receive messages by using stdin and stdout** ,
optionally with some kind of timeout support.
Answer: I believe the problem is that when you run `sys.stdout.write` it writes to the
STDOUT buffer, but the buffer is not flushed until done so manually or closed
(stdout is a [File Object](http://docs.python.org/library/stdtypes.html#file-
objects), which you can confirm yourself with `type(sys.stdout)`). Since your
loop is infinite, the buffer is never flushed.
Changing your loop in echo.py to this should do the trick:
while 1:
x = sys.stdin.readline()
sys.stdout.write("got-" + x) # x already contains newline
sys.stdout.flush() # flush the buffer
|
simple thread management within python classes
Question: I'm trying to write a module for Python that prints out text for my program
and displays a progress bar while I do something in the background. I'm using
the 'threading' module currently, but I'm open to suggestions if something else
will make it easier.
What I want to know is twofold: **_how should I call this class elegantly, and
how should I stop these threads I'm creating?_**
This is what I'm currently doing:
tmp1 = textprint("hello this is text")
tmp1.start()
# do something
tmp1.stop()
These are the options I've considered and
[looked](http://docs.python.org/library/threading.html) into so far:
* using thread.name to find the name of the thread, or having the thread return a name to kill afterwards, OR passing a number for similar thread identification afterwards (a bit cumbersome and not my favourite solution)
* sending a thread.event? From reading the docs I see an event can be sent; perhaps that can be used to stop it?
* a [with](http://www.python.org/dev/peps/pep-0343/) statement, but I'm unclear how to use it in this context, plus I find most of the python docs extremely confusing and not written for me at all.
What I would _like_ to do is something like this: `echo('hello')` (prints a
progress bar etc.) and then, when I want to stop it, `echo.stop()`.
The obvious problem there, though, is that the stop function doesn't know which
thread it is trying to stop.
Here is a skeleton of what I'm trying to do:
import time
import string
import threading
class print_text(threading.Thread):
def __init__(self,arg=None):
super(print_text,self).__init__()
self._stop = False
self.arg=arg
def run (self):
# start thread for text
print self.txt
while not self._stop:
print "rude words"
def echo (self,txt):
self.txt=txt
self.start()
def stop(self):
self._stop = True
def stopped(self):
return self._stop == True
def __enter__(self):
print "woo"
return thing
def __exit__(self, type, value, traceback):
return isinstance(value, TypeError)
if __name__ == '__main__':
print_text.start.echo('this is text') # dunt werk
with print_text.echo('this is text'):
time.sleep(3)
print "done"
and then call it like so:
echo('this is text')
I also guess that to do this I would have to
from print_text import echo
The WITH way of doing things suggests putting an `__enter__` and `__exit__`
bit in. I tried them and they didn't work, and also I didn't know what I was
doing. I'd really appreciate any help, thanks.
Answer: The best way to stop a thread in Python is to politely ask it to stop. The
best way to pass new data to a thread is with the `Queue` module.
Both are used in [the code in this
post](http://eli.thegreenplace.net/2011/05/18/code-sample-socket-client-
thread-in-python/), which demonstrates socket communication from a Python
thread but is otherwise relevant to your question. If you read the code
carefully you'll notice:
1. Using `threading.Event()` which is set by a method call from outside, and which the thread periodically checks to know if it was asked to die.
2. Using `Queue.Queue()` for both passing commands to the thread and receiving responses from it.
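A minimal sketch of the first idea (the `threading.Event()` stop flag) applied
to your progress-printer; the names here are illustrative, not taken from the
linked post:
import threading
import time

class PrintText(threading.Thread):
    def __init__(self, txt):
        super(PrintText, self).__init__()
        self.txt = txt
        self._stop_event = threading.Event()  # the "please stop" flag

    def run(self):
        while not self._stop_event.is_set():
            print self.txt                    # or redraw the progress bar here
            time.sleep(0.5)

    def stop(self):
        self._stop_event.set()                # politely ask the thread to stop

if __name__ == '__main__':
    t = PrintText("hello this is text")
    t.start()
    time.sleep(3)                             # do the real work here
    t.stop()
    t.join()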
|
linking between two modules with distutils
Question: I have recently [exposed a
problem](http://stackoverflow.com/questions/8443652/dependencies-between-
compiled-modules-in-python) when working with several compiled C++ modules and
would like to rephrase the question.
I have two modules 'mod1' and 'mod2'. They are compiled as two distinct
'ext_modules' in my setup.py, as shown here :
#!/usr/bin/python2
from setuptools import setup, Extension
mod1 = Extension('mod1',
sources = ['mod1.cpp'],
libraries = ['boost_python'])
mod2 = Extension('mod2',
sources = ['mod2.cpp'],
libraries = ['boost_python'])
setup(name='foo',
version='0.0',
description='',
ext_modules=[mod1,mod2],
install_requires=['distribute'])
But internally, 'mod2.hpp' is including 'mod1.hpp', as the first module is
defining stuff that is used by the second module.
EDIT : this will compile fine, but then :
$> cd build/lib.linux-i686-2.7
$> python2 -c "import mod1 ; import mod2"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: ./mod2.so: undefined symbol: _ZN6ParentD2Ev
Here, "Parent" is the name of a class defined in mod1 and used in mod2.
EDIT2 : another weird behaviour I don't understand :
$> cd build/lib.linux-i686-2.7
$> python2
Python 2.7.2 (default, Nov 21 2011, 17:24:32)
[GCC 4.6.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mod2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: ./mod2.so: undefined symbol: _ZN6ParentD2Ev
>>> import mod1
>>> import mod2
Segmentation fault
Here, importing mod2 first fails. But if I try again after importing mod1, I
get a segfault.
* * *
Both modules are built as shared libraries - but if I'm not mistaken, mod2
needs to be linked to mod1 in order to work, as it needs stuff defined in
mod1. Is it possible to define such a dependency with setuptools/distribute?
Something like :
mod2 = Extension('mod2',
sources = ['mod2.cpp'],
libraries = ['boost_python',mod1])
From my various readings, it looks like it is possible to do something like
this with boost's bjam utility - unfortunately, I didn't manage to use it
(even to compile the example) on my system.
Things I have tried:
* adding 'mod1.cpp' to the sources of mod2. It works (kind of: I must import mod1 before mod2 to make it work), but I'm losing the benefit of having the modules as separate shared objects.
Workarounds:
* importing mod1 as a regular python module in mod2, but that would put an extra layer of python within my C++ code
What do you think ?
Answer: In the end, I could not find out how to do that with distutils / distribute, so
I switched to cmake, and so far it works flawlessly!
With cmake, you can easily compile one module as a shared library, and later
add this shared library to the dependencies of another module.
|
python threads, how to return results generated during the execution of a multithread code
Question: I'm trying to learn how to use threads in python to save a list of objects.
I'm starting from this code:
import threading
import urllib
from tempfile import NamedTemporaryFile
singlelock = threading.Lock()
class download(threading.Thread):
def __init__(self, sitecode, lista):
threading.Thread.__init__(self)
self.sitecode = sitecode
self.status = -1
def run(self):
url = "http://waterdata.usgs.gov/nwis/monthly?referred_module=sw&site_no="
url += self.sitecode
url += "&PARAmeter_cd=00060&partial_periods=on&format=rdb&submitted_form=parameter_selection_list"
tmp = NamedTemporaryFile(delete=False)
urllib.urlretrieve(url, tmp.name)
print "loaded Monthly data for sitecode : ", self.sitecode
lista.append(tmp.name)
print lista
sitecodelist = ["01046500", "01018500", "01010500", "01034500", "01059000", "01066000", "01100000"]
lista = []
for k in sitecodelist:
get_data = download(k,lista)
get_data.start()
It just prints out the list generated during the thread execution, while I'm
trying to return it.
Reading the documentation, I'm looking at how to use `threading.Lock()` and its
methods `acquire()` and `release()`, which seem to be the solution to my issue
... but I'm far from understanding how to implement it in my example code.
thanks so much for any hints!
Answer: First of all we should all quickly review what threads are
<http://en.wikipedia.org/wiki/Thread_%28computer_science%29>.
Ok, so threads share memory. So this should be easy! Which is also the good
and bad thing about threads, it's easy and dangerous! (also lightweight for
the OS).
Now, if you are using python with CPython, you should familiarize yourself with
the global interpreter lock:
<http://docs.python.org/glossary.html#term-global-interpreter-lock>
Also, from <http://docs.python.org/library/threading.html>:
> CPython implementation detail: Due to the Global Interpreter Lock, in
> CPython only one thread can execute Python code at once (even though certain
> performance-oriented libraries might overcome this limitation). If you want
> your application to make better use of the computational resources of
> multi-core machines, you are advised to use multiprocessing. However,
> threading is still an appropriate model if you want to run multiple
> I/O-bound tasks simultaneously.
What does this mean? If your task isn't IO-bound, threading won't gain you
anything from the OS, since any time you do anything with python code only a
single thread will be able to do anything: it holds the global lock and no
other thread can get it. With IO-bound tasks the OS will schedule other
threads, since the global lock will be released while waiting for the IO to
complete.
There is the caveat though that you could be calling into code that does not
fall under the GIL and in that case threading will also perform well (hence
the reference to "performance oriented libraries" above.)
Thankfully, python makes managing the shared memory a simple task and there is
already good documentation on how to do so, though it took me a small bit to
find it. If you have any further questions let us know.
In [83]: import _threading_local
In [84]: help(_threading_local)
Help on module _threading_local:
NAME
_threading_local - Thread-local objects.
FILE
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_threading_local.py
MODULE DOCS
http://docs.python.org/library/_threading_local
DESCRIPTION
(Note that this module provides a Python version of the threading.local
class. Depending on the version of Python you're using, there may be a
faster one available. You should always import the `local` class from
`threading`.)
Thread-local objects support the management of thread-local data.
If you have data that you want to be local to a thread, simply create
a thread-local object and use its attributes:
>>> mydata = local()
>>> mydata.number = 42
>>> mydata.number
42
You can also access the local-object's dictionary:
>>> mydata.__dict__
{'number': 42}
>>> mydata.__dict__.setdefault('widgets', [])
[]
>>> mydata.widgets
[]
What's important about thread-local objects is that their data are
local to a thread. If we access the data in a different thread:
>>> log = []
>>> def f():
... items = mydata.__dict__.items()
... items.sort()
... log.append(items)
... mydata.number = 11
... log.append(mydata.number)
>>> import threading
>>> thread = threading.Thread(target=f)
>>> thread.start()
>>> thread.join()
>>> log
[[], 11]
we get different data. Furthermore, changes made in the other thread
don't affect data seen in this thread:
>>> mydata.number
42
Of course, values you get from a local object, including a __dict__
attribute, are for whatever thread was current at the time the
attribute was read. For that reason, you generally don't want to save
these values across threads, as they apply only to the thread they
came from.
You can create custom local objects by subclassing the local class:
>>> class MyLocal(local):
... number = 2
... initialized = False
... def __init__(self, **kw):
... if self.initialized:
... raise SystemError('__init__ called too many times')
... self.initialized = True
... self.__dict__.update(kw)
... def squared(self):
... return self.number ** 2
This can be useful to support default values, methods and
initialization. Note that if you define an __init__ method, it will be
called each time the local object is used in a separate thread. This
is necessary to initialize each thread's dictionary.
Now if we create a local object:
>>> mydata = MyLocal(color='red')
Now we have a default number:
>>> mydata.number
2
an initial color:
>>> mydata.color
'red'
>>> del mydata.color
And a method that operates on the data:
>>> mydata.squared()
4
As before, we can access the data in a separate thread:
>>> log = []
>>> thread = threading.Thread(target=f)
>>> thread.start()
>>> thread.join()
>>> log
[[('color', 'red'), ('initialized', True)], 11]
without affecting this thread's data:
>>> mydata.number
2
>>> mydata.color
Traceback (most recent call last):
...
AttributeError: 'MyLocal' object has no attribute 'color'
Note that subclasses can define slots, but they are not thread
local. They are shared across threads:
>>> class MyLocal(local):
... __slots__ = 'number'
>>> mydata = MyLocal()
>>> mydata.number = 42
>>> mydata.color = 'red'
So, the separate thread:
>>> thread = threading.Thread(target=f)
>>> thread.start()
>>> thread.join()
affects what we see:
>>> mydata.number
11
>>> del mydata
And just in case... an example using your style above.
In [40]: class TestThread(threading.Thread):
...: report = list() #shared across threads
...: def __init__(self):
...: threading.Thread.__init__(self)
...: self.io_bound_variation = random.randint(1,100)
...: def run(self):
...: start = datetime.datetime.now()
...: print '%s - io_bound_variation - %s' % (self.name, self.io_bound_variation)
...: for _ in range(0, self.io_bound_variation):
...: with open(self.name, 'w') as f:
...: for i in range(10000):
...: f.write(str(i) + '\n')
...: print '%s - finished' % (self.name)
...: end = datetime.datetime.now()
...: print '%s took %s time' % (self.name, end - start)
...: self.report.append(end - start)
...:
And a run of three threads with output.
In [43]: threads = list()
...: for i in range(3):
...: t = TestThread()
...: t.start()
...: threads.append(t)
...:
...: for thread in threads:
...: thread.join()
...:
...: for thread in threads:
...: print thread.report
...:
Thread-28 - io_bound_variation - 76
Thread-29 - io_bound_variation - 83
Thread-30 - io_bound_variation - 80
Thread-28 - finished
Thread-28 took 0:00:08.173861 time
Thread-30 - finished
Thread-30 took 0:00:08.407255 time
Thread-29 - finished
Thread-29 took 0:00:08.491480 time
[datetime.timedelta(0, 5, 733093), datetime.timedelta(0, 6, 253811), datetime.timedelta(0, 6, 440410), datetime.timedelta(0, 4, 342053), datetime.timedelta(0, 5, 520407), datetime.timedelta(0, 5, 948238), datetime.timedelta(0, 8, 173861), datetime.timedelta(0, 8, 407255), datetime.timedelta(0, 8, 491480)]
[datetime.timedelta(0, 5, 733093), datetime.timedelta(0, 6, 253811), datetime.timedelta(0, 6, 440410), datetime.timedelta(0, 4, 342053), datetime.timedelta(0, 5, 520407), datetime.timedelta(0, 5, 948238), datetime.timedelta(0, 8, 173861), datetime.timedelta(0, 8, 407255), datetime.timedelta(0, 8, 491480)]
[datetime.timedelta(0, 5, 733093), datetime.timedelta(0, 6, 253811), datetime.timedelta(0, 6, 440410), datetime.timedelta(0, 4, 342053), datetime.timedelta(0, 5, 520407), datetime.timedelta(0, 5, 948238), datetime.timedelta(0, 8, 173861), datetime.timedelta(0, 8, 407255), datetime.timedelta(0, 8, 491480)]
You may wonder why report has more than three elements... that is because I
ran the above for-loop code three times in my interpreter. If I wanted to fix
this "bug", I would need to make sure to set the shared variable to an empty
list before running:
TestThread.report = list()
This illustrates why threads can become unwieldy.
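Coming back to the original goal of getting the temp-file names out of the
download threads: a common, simple pattern (sketched here with a placeholder
worker standing in for the real urlretrieve code) is to push each result onto a
shared `Queue.Queue` and drain it after `join()`:
import threading
import Queue  # 'queue' in Python 3

results = Queue.Queue()

def fetch(sitecode):
    # ... urlretrieve into a NamedTemporaryFile as in the question, then:
    tmp_name = "/tmp/%s.rdb" % sitecode   # placeholder for the real temp file name
    results.put((sitecode, tmp_name))

threads = [threading.Thread(target=fetch, args=(code,))
           for code in ["01046500", "01018500", "01010500"]]
for t in threads:
    t.start()
for t in threads:
    t.join()

lista = []
while not results.empty():
    lista.append(results.get())
print lista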
|
Like FourWaySplitter but different
Question: Using Python 2.6, wx.Python:
Wonder if someone might know how to build a four-panel window without using
FourWaySplitter. (Unless I am wrong, FourWaySplitter behaves as I am about to
describe: with FourWaySplitter, any change in the size of any sub-panel changes
the size of ALL sub-panels.)
Here is what I need:
ONE vertical split and two **INDEPENDENT** horizontal splits one on each of
the newly split vertical parts. (Sounds simple, but after a day of trying every
permutation of splitters and sizers, I have finally admitted defeat.)
Anyway the idea is that only the vertical splitter would change the size of
all four windows. The horizontal splitter on the left side of the vertical one
would change only the two windows/panels on the left of the vertical splitter,
and the horizontal splitter splitter on the right of the vertical splitter
would change only the two windows/panels on the right of the vertical
splitter.
A concrete working example would be a lot of help and greatly appreciated and
if you come to my house I will make you some pancakes.
Answer: This was pretty confusing for me too. Fortunately, I wanted to write an
article on the subject and I finally figured it out after a couple of hours a
month or three ago. I still haven't written the article, but I pared down my
example a little for this answer. Hopefully you can follow it:
import random
import wx
########################################################################
class RandomPanel(wx.Panel):
""""""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
wx.Panel.__init__(self, parent)
color = random.choice(["red", "green", "blue", "yellow"])
self.SetBackgroundColour(color)
########################################################################
class MainPanel(wx.Panel):
""""""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
wx.Panel.__init__(self, parent)
# create the sizers
sizer = wx.BoxSizer(wx.VERTICAL)
twoSplitSizer = wx.BoxSizer(wx.HORIZONTAL)
verticalSplitter = wx.SplitterWindow(self)
# create the left side
leftSplitter = wx.SplitterWindow(verticalSplitter)
panelOne = RandomPanel(leftSplitter)
panelTwo = RandomPanel(leftSplitter)
leftSplitter.SplitHorizontally(panelOne, panelTwo)
leftSplitter.SetSashGravity(0.5)
# create the remote side
rightSplitter = wx.SplitterWindow(verticalSplitter)
panelThree = RandomPanel(rightSplitter)
panelFour = RandomPanel(rightSplitter)
rightSplitter.SplitHorizontally(panelThree, panelFour)
rightSplitter.SetSashGravity(0.5)
verticalSplitter.SplitVertically(leftSplitter, rightSplitter)
verticalSplitter.SetSashGravity(0.5)
sizer.Add(verticalSplitter, 1, wx.EXPAND)
self.SetSizer(sizer)
########################################################################
class MainFrame(wx.Frame):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
wx.Frame.__init__(self, None, title="4-Way Split", size=(800,600))
panel = MainPanel(self)
self.Show()
#----------------------------------------------------------------------
if __name__ == "__main__":
app = wx.App(False)
frame = MainFrame()
app.MainLoop()
I use the random colors just to make it easier to distinguish between panels.
Normally, you'd have different widgets on each panel. Anyway, this works for
me on Windows with wxPython 2.8.11
|
python map string split list
Question: I am trying to map the `str.split` function to an array of strings. Namely, I
would like to split all the strings in a string array that follow the same
format. Any idea how to do that with `map` in python? For example, let's assume
we have a list like this:
>>> a = ['2011-12-22 46:31:11','2011-12-20 20:19:17', '2011-12-20 01:09:21']
I want to split the strings by space (split(" ")) using map, to get a list like:
>>> [['2011-12-22', '46:31:11'], ['2011-12-20', '20:19:17'], ['2011-12-20', '01:09:21']]
Answer: Though it isn't well known, there is a function designed just for this
purpose,
[operator.methodcaller](http://docs.python.org/library/operator.html#operator.methodcaller):
>>> from operator import methodcaller
>>> a = ['2011-12-22 46:31:11','2011-12-20 20:19:17', '2011-12-20 01:09:21']
>>> map(methodcaller("split", " "), a)
[['2011-12-22', '46:31:11'], ['2011-12-20', '20:19:17'], ['2011-12-20', '01:09:21']]
This technique is faster than equivalent approaches using lambda expressions.
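For comparison, the same result with a lambda or a list comprehension
(equivalent output; in CPython 2.x these are typically a touch slower than
`methodcaller`):
>>> map(lambda s: s.split(" "), a)
[['2011-12-22', '46:31:11'], ['2011-12-20', '20:19:17'], ['2011-12-20', '01:09:21']]
>>> [s.split(" ") for s in a]
[['2011-12-22', '46:31:11'], ['2011-12-20', '20:19:17'], ['2011-12-20', '01:09:21']]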
|
Masks in python opencv cv2 not working?
Question: While in general the new python bindings for opencv (cv2) are a beauty,
"masks" don't seem to be working properly - unless I really get something
wrong:
For example "cv2.add" still works properly without a mask:
import cv2
from numpy import *  # for the bare ones(), zeros() and uint8 used below
a = ones((2,2,3), dtype=uint8)
cv2.add(a,a)
correctly gives
array([[[2, 2, 2],
[2, 2, 2]],
[[2, 2, 2],
[2, 2, 2]]], dtype=uint8)
But when you add a mask (and an out array "b" - which is required, but for some
reason is not assigned either), you get a RANDOM result, i.e. the result
changes when you run the command multiple times:
myMask = zeros(a.shape[0:2], dtype = uint8)
myMask[1,1] = 255
b = zeros(a.shape)
cv2.add(a,a,b,myMask)
cv2.add(a,a,b,myMask)
gives on my machine (Win7, 32bit,Python 2.7, opencv 2.3.1)
In [34]: cv2.add(a,a,b,myMask)
Out[34]:
array([[[ 26, 0, 143],
[ 5, 216, 245]],
[[156, 5, 104],
[ 2, 2, 2]]], dtype=uint8)
In [35]: cv2.add(a,a,b,myMask)
Out[35]:
array([[[35, 0, 0],
[ 0, 3, 0]],
[[ 0, 0, 3],
[ 2, 2, 2]]], dtype=uint8)
... and something new on the next trial. Now either I am getting something
seriously wrong, or there is a serious problem with the cv2 bindings.
Any suggestions?
Answer: It's an interesting question. I am seeing the same problem. I posted a bug and
got a reply: <http://code.opencv.org/issues/1748>
The solution is simple. The dst array is undefined on creation, and the
operation changes only those destination array pixels p for which mask(p) != 0.
So the only mechanism that works is to pre-make dst before the addition, i.e.
dst = np.zeros(...)
dst = cv2.add(a, a, dst=dst, mask=mask)
The next release will clear newly created images in operations such as
cv2.add, cv2.subtract, cv2.bitwise_and/or/xor - so it will work without
problem.
my code looks like:
import cv2
import numpy as np
import time
a = np.ones((2,2,3), dtype=np.uint8)
print "simple add"
t = time.time()
for i in range(10000):
b = cv2.add(a,a)
print "%5.4f seconds" % (time.time()-t)
print b
print "\nnumpy add"
t = time.time()
for i in range(10000):
b = a+a
print "%5.4f seconds" % (time.time()-t)
print b
# make mask same dimensions but 1 byte deep(not three)
mask = np.zeros(a.shape[:-1], dtype=np.uint8)
mask[1,1] = 255
print "\nmask", mask.shape
print mask
print "\nmasked add - uninitialised"
t = time.time()
for i in range(10000):
b = cv2.add(a,a,mask=mask)
print "%5.4f seconds" % (time.time()-t)
print b
print "uninitialised entries are unmodified - so random.\n Inconsistent when run more than once."
print "same calc a second time..."
b = cv2.add(a,a,mask=mask)
print b
print "\nmasked add - using preinitialised dst"
t = time.time()
b = a.copy()
for i in range(10000):
b = cv2.add(a,a,b,mask=mask)
print "%5.4f seconds" % (time.time()-t)
print b
print "Consistent when run more than once."
print "same calc a second time..."
b = a.copy()
b = cv2.add(a,a,b,mask=mask)
print b
FYI: timings (10k repeats):
cv2.add - no mask 0.0120 seconds
cv2.add - with mask 0.0160 seconds
np.add 0.0190 seconds
cv2.add - uninitialised mask 0.0220 seconds
FYI: Submit bugs following instructions here:
<http://code.opencv.org/projects/OpenCV/wiki/WikiStart>
|
how to read a file that can be saved as either ansi or unicode in python?
Question: I have to write a script that supports reading a file which can be saved as
either Unicode or ANSI (using MS's notepad).
I don't have any indication of the encoding format in the file; how can I
support both encoding formats? (A kind of generic way of reading files without
knowing the format in advance.)
Answer: MS Notepad gives the user a choice of 4 encodings, expressed in clumsy
confusing terminology:
"Unicode" is UTF-16, written little-endian. "Unicode big endian" is UTF-16,
written big-endian. In both UTF-16 cases, this means that the appropriate BOM
will be written. Use `utf-16` to decode such a file.
"UTF-8" is UTF-8; Notepad explicitly writes a "UTF-8 BOM". Use `utf-8-sig` to
decode such a file.
"ANSI" is a shocker. This is MS terminology for "whatever the default legacy
encoding is on this computer".
Here is a list of Windows encodings that I know of and the languages/scripts
that they are used for:
cp874 Thai
cp932 Japanese
cp936 Unified Chinese (P.R. China, Singapore)
cp949 Korean
cp950 Traditional Chinese (Taiwan, Hong Kong, Macao(?))
cp1250 Central and Eastern Europe
cp1251 Cyrillic ( Belarusian, Bulgarian, Macedonian, Russian, Serbian, Ukrainian)
cp1252 Western European languages
cp1253 Greek
cp1254 Turkish
cp1255 Hebrew
cp1256 Arabic script
cp1257 Baltic languages
cp1258 Vietnamese
cp???? languages/scripts of India
If the file has been created on the computer where it is being read, then you
can obtain the "ANSI" encoding by `locale.getpreferredencoding()`. Otherwise
if you know where it came from, you can specify what encoding to use if it's
not UTF-16. Failing that, guess.
Be careful using `codecs.open()` to read files on Windows. The docs say:
"""Note Files are always opened in binary mode, even if no binary mode was
specified. This is done to avoid data loss due to encodings using 8-bit
values. This means that no automatic conversion of '\n' is done on reading and
writing.""" This means that your lines will end in `\r\n` and you will
need/want to strip those off.
Putting it all together:
Sample text file, saved with all 4 encoding choices, looks like this in
Notepad:
The quick brown fox jumped over the lazy dogs.
àáâãäå
Here is some demo code:
import locale
def guess_notepad_encoding(filepath, default_ansi_encoding=None):
with open(filepath, 'rb') as f:
data = f.read(3)
if data[:2] in ('\xff\xfe', '\xfe\xff'):
return 'utf-16'
if data == u''.encode('utf-8-sig'):
return 'utf-8-sig'
# presumably "ANSI"
return default_ansi_encoding or locale.getpreferredencoding()
if __name__ == "__main__":
import sys, glob, codecs
defenc = sys.argv[1]
for fpath in glob.glob(sys.argv[2]):
print
print (fpath, defenc)
with open(fpath, 'rb') as f:
print "raw:", repr(f.read())
enc = guess_notepad_encoding(fpath, defenc)
print "guessed encoding:", enc
with codecs.open(fpath, 'r', enc) as f:
for lino, line in enumerate(f, 1):
print lino, repr(line)
print lino, repr(line.rstrip('\r\n'))
and here is the output when run in a Windows "Command Prompt" window using the
command `\python27\python read_notepad.py "" t1-*.txt`
('t1-ansi.txt', '')
raw: 'The quick brown fox jumped over the lazy dogs.\r\n\xe0\xe1\xe2\xe3\xe4\xe5
\r\n'
guessed encoding: cp1252
1 u'The quick brown fox jumped over the lazy dogs.\r\n'
1 u'The quick brown fox jumped over the lazy dogs.'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5\r\n'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5'
('t1-u8.txt', '')
raw: '\xef\xbb\xbfThe quick brown fox jumped over the lazy dogs.\r\n\xc3\xa0\xc3
\xa1\xc3\xa2\xc3\xa3\xc3\xa4\xc3\xa5\r\n'
guessed encoding: utf-8-sig
1 u'The quick brown fox jumped over the lazy dogs.\r\n'
1 u'The quick brown fox jumped over the lazy dogs.'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5\r\n'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5'
('t1-uc.txt', '')
raw: '\xff\xfeT\x00h\x00e\x00 \x00q\x00u\x00i\x00c\x00k\x00 \x00b\x00r\x00o\x00w
\x00n\x00 \x00f\x00o\x00x\x00 \x00j\x00u\x00m\x00p\x00e\x00d\x00 \x00o\x00v\x00e
\x00r\x00 \x00t\x00h\x00e\x00 \x00l\x00a\x00z\x00y\x00 \x00d\x00o\x00g\x00s\x00.
\x00\r\x00\n\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\r\x00\n\x00'
guessed encoding: utf-16
1 u'The quick brown fox jumped over the lazy dogs.\r\n'
1 u'The quick brown fox jumped over the lazy dogs.'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5\r\n'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5'
('t1-ucb.txt', '')
raw: '\xfe\xff\x00T\x00h\x00e\x00 \x00q\x00u\x00i\x00c\x00k\x00 \x00b\x00r\x00o\
x00w\x00n\x00 \x00f\x00o\x00x\x00 \x00j\x00u\x00m\x00p\x00e\x00d\x00 \x00o\x00v\
x00e\x00r\x00 \x00t\x00h\x00e\x00 \x00l\x00a\x00z\x00y\x00 \x00d\x00o\x00g\x00s\
x00.\x00\r\x00\n\x00\xe0\x00\xe1\x00\xe2\x00\xe3\x00\xe4\x00\xe5\x00\r\x00\n'
guessed encoding: utf-16
1 u'The quick brown fox jumped over the lazy dogs.\r\n'
1 u'The quick brown fox jumped over the lazy dogs.'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5\r\n'
2 u'\xe0\xe1\xe2\xe3\xe4\xe5'
Things to be aware of:
(1) "mbcs" is a file-system pseudo-encoding which has no relevance at all to
decoding the _contents_ of files. On a system where the default encoding is
`cp1252`, it makes like `latin1` (aarrgghh!!); see below
>>> all_bytes = "".join(map(chr, range(256)))
>>> u1 = all_bytes.decode('cp1252', 'replace')
>>> u2 = all_bytes.decode('mbcs', 'replace')
>>> u1 == u2
False
>>> [(i, u1[i], u2[i]) for i in xrange(256) if u1[i] != u2[i]]
[(129, u'\ufffd', u'\x81'), (141, u'\ufffd', u'\x8d'), (143, u'\ufffd', u'\x8f')
, (144, u'\ufffd', u'\x90'), (157, u'\ufffd', u'\x9d')]
>>>
(2) `chardet` is very good at detecting encodings based on non-Latin scripts
(Chinese/Japanese/Korean, Cyrillic, Hebrew, Greek) but not much good at Latin-
based encodings (Western/Central/Eastern Europe, Turkish, Vietnamese) and
doesn't grok Arabic at all.
|
Python: os.chdir() not working within a for loop?
Question: I'm trying to get a homemade path navigation function working - basically I
need to go through one folder, and explore every folder within it, running a
function within each folder.
I reach a problem when I try to change directories within a for loop. I've got
this "findDirectories" function:
def findDirectories(list):
for files in os.listdir("."):
print (files)
list.append(files)
os.chdir("y")
That last line causes the problems. If I remove it, the function just compiles
a list with all the folders in that folder. Unfortunately, this means I have
to run this each time I go down a folder, I can't just run the whole thing
once. I've specified the folder "y" as that's a real folder, but the program
crashes upon opening even with that. Doing os.chdir("y") outside of the for
loop has no issues at all.
I'm new to Python, but not to programming in general. How can I get this to
work, or is there a better way? The final result I need is running a Function
on each single "*Response.xml" file that exists within this folder, no matter
how deeply nested it is.
Answer: Well, you don't post the traceback of the actual error, but clearly it doesn't
work because you have specified `y` as a relative path.
Thus it may be able to change to `y` in the first iteration of the loop, but in
the second it will be trying to change to a **subdirectory** of `y` that is
also called `y`, which you probably do not have.
You want to be doing something like
import os
for dirName, subDirs, fileNames in os.walk(rootPath):
# it's not clear which files you want; I assume anything that ends with Response.xml?
for f in fileNames:
if f.endswith("Response.xml"):
# this is the path you will want to use
filePath = os.path.join(dirName, f)
# now do something with it!
doSomethingWithFilePath(filePath)
That's untested, but you get the idea...
|
Using mock patch to mock an instance method
Question: I'm trying to mock something while testing a Django app using the
imaginatively named [Mock testing
library](http://www.voidspace.org.uk/python/mock/index.html). I can't seem to
quite get it to work, I'm trying to do this:
models.py
from somelib import FooClass
class Promotion(models.Model):
foo = models.ForeignKey(FooClass)
def bar(self):
print "Do something I don't want!"
test.py
class ViewsDoSomething(TestCase):
view = 'my_app.views.do_something'
def test_enter_promotion(self):
@patch.object(my_app.models.FooClass, 'bar')
def fake_bar(self, mock_my_method):
print "Do something I want!"
return True
self.client.get(reverse(view))
What am I doing wrong?
Answer: Ah, I was confused about where to apply that patch decorator. Fixed:
class ViewsDoSomething(TestCase):
view = 'my_app.views.do_something'
@patch.object(my_app.models.FooClass, 'bar')
def test_enter_promotion(self, mock_method):
self.client.get(reverse(self.view))
|
Scrolling windows with wxPython
Question: I am making a frame with a scrollbar and some images inside. The scrollbar
works fine when the frame is empty. However, when I add a picture in, the
scrollbars seem to get pushed up into the top left corner of the frame. How
can I implement my code so that the scrollbars stay where they are after I add
pictures?
Working Code:
import wx
import wx.animate
class ScrollbarFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None, -1, 'Scrollbar Example', pos = (100, 50), size=(1000, 1000))
self.scroll = wx.ScrolledWindow(self, -1)
self.scroll.SetScrollbars(1, 1, 1000, 1000)
#self.button = wx.Button(self.scroll, -1, "Scroll Me", pos=(50, 20))
#self.Bind(wx.EVT_BUTTON, self.OnClickTop, self.button)
#self.button2 = wx.Button(self.scroll, -1, "Scroll Back", pos=(500, 350))
#self.Bind(wx.EVT_BUTTON, self.OnClickBottom, self.button2)
self.SetBackgroundColour("gray")
imageName = "01 background.png"
gifName = "Jill.gif"
backgroundImage = wx.Image(imageName, wx.BITMAP_TYPE_ANY).ConvertToBitmap()
wx.StaticBitmap(self, -1, backgroundImage,(10,5),(backgroundImage.GetWidth(), backgroundImage.GetHeight()))
gifImage = wx.animate.GIFAnimationCtrl(self, 0, gifName, pos=(160, 74))
# clears the background
gifImage.GetPlayer().UseBackgroundColour(True)
gifImage.Play()
def update(self, imageName, gifName):
backgroundImage = wx.Image(imageName, wx.BITMAP_TYPE_ANY).ConvertToBitmap()
wx.StaticBitmap(self, -1, backgroundImage,(10,5),(backgroundImage.GetWidth(), backgroundImage.GetHeight()))
gifImage = wx.animate.GIFAnimationCtrl(self, 0, gifName, pos=(100, 100))
# clears the background
gifImage.GetPlayer().UseBackgroundColour(True)
gifImage.Play()
def OnClickTop(self, event):
self.scroll.Scroll(600, 400)
def OnClickBottom(self, event):
self.scroll.Scroll(1, 1)
app = wx.PySimpleApp()
frame = ScrollbarFrame()
frame.Show()
app.MainLoop()
* * *
If you comment out this part:
gifName = "Jill.gif"
backgroundImage = wx.Image(imageName, wx.BITMAP_TYPE_ANY).ConvertToBitmap()
wx.StaticBitmap(self, -1, backgroundImage,(10,5),(backgroundImage.GetWidth(), backgroundImage.GetHeight()))
gifImage = wx.animate.GIFAnimationCtrl(self, 0, gifName, pos=(160, 74))
# clears the background
gifImage.GetPlayer().UseBackgroundColour(True)
gifImage.Play()
the window displays properly with the scrollbar. But include either (or both)
of the image files, and the problem occurs.
Answer: If you want your images inside the scrolled window panel, then you have to put
your static bitmap and gifImage inside it. So the parent of your images should
not be `self` (the `wx.Frame` instance) but `self.scroll`.
Modify the 4 lines indicated:
...................
wx.StaticBitmap(self.scroll, -1, backgroundImage,(10,5),(backgroundImage.GetWidth(), backgroundImage.GetHeight())) # <- this one
gifImage = wx.animate.GIFAnimationCtrl(self.scroll, 0, gifName, pos=(160, 74)) # <- this one
# clears the background
gifImage.GetPlayer().UseBackgroundColour(True)
gifImage.Play()
def update(self, imageName, gifName):
backgroundImage = wx.Image(imageName, wx.BITMAP_TYPE_ANY).ConvertToBitmap()
wx.StaticBitmap(self.scroll, -1, backgroundImage,(10,5),(backgroundImage.GetWidth(), backgroundImage.GetHeight())) # <- this one
gifImage = wx.animate.GIFAnimationCtrl(self.scroll, 0, gifName, pos=(100, 100)) # <- this one
...................
This puts your two images one over the other. If you want to lay them out
separately (in a column or row), then you should add them to a sizer inserted
in your scrolled window.
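For example (a sketch, assuming you first assign the `wx.StaticBitmap` to a
variable such as `background` instead of discarding it):
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(background, 0, wx.ALL, 5)  # the wx.StaticBitmap created above
sizer.Add(gifImage, 0, wx.ALL, 5)
self.scroll.SetSizer(sizer)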
|
Matching strings for multiple data set in Python
Question: I am working in python and I need to match the strings of several data files.
First I used pickle to unpack my files, and then I place each of them into a
list. I only want to match strings that have the same conditions. These
conditions are indicated at the end of the string.
My working script looks approximately like this:
import pickle
f = open("data_a.dat")
list_a = pickle.load( f )
f.close()
f = open("data_b.dat")
list_b = pickle.load( f )
f.close()
f = open("data_c.dat")
list_c = pickle.load( f )
f.close()
f = open("data_d.dat")
list_d = pickle.load( f )
f.close()
for a in list_a:
for b in list_b:
for c in list_c:
for d in list_d:
if a.GetName()[12:] in b.GetName():
if a.GetName()[12:] in c.GetName():
if a.GetName()[12:] in d.GetName():
"do whatever"
This seems to work fine for these lists. The problems begin when I try to add
8 or 9 more data files for which I also need to match the same conditions. The
script simply won't finish; it gets stuck. I appreciate your help.
Edit: Each of the lists contains histograms named after the parameters that
were used to create them. The name of the histograms contains these parameters
and their values at the end of the string. In the example I did it for 2 data
sets, now I would like to do it for 9 data sets without using multiple loops.
Edit 2: I just expanded the code to reflect more accurately what I want to do.
Now if I try to do that for 9 lists, it not only looks horrible, but it also
doesn't work.
Answer: out of my head:
files = ["file_a", "file_b", "file_c"]
sets = []
for f in files:
f = open("data_a.dat")
sets.append(set(pickle.load(f)))
f.close()
intersection = sets[0].intersection(*sets[1:])
EDIT: Well I overlooked your mapping to x.GetName()[12:], but you should be
able to reduce your problem to set logic.
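One way to apply that to the histograms themselves (just a sketch; it assumes
the condition string starts at the same offset, 12, in every list's names --
adjust the slice per list if it doesn't) is to index each list by its condition
and intersect the key sets instead of nesting loops:
lists = [list_a, list_b, list_c, list_d]  # extend with the other data sets

# map condition-string -> histogram, one dict per list
indexed = [dict((h.GetName()[12:], h) for h in lst) for lst in lists]

# conditions present in every data set
common = set(indexed[0]).intersection(*indexed[1:])

for cond in common:
    matched = [idx[cond] for idx in indexed]
    # "do whatever" with matched[0], matched[1], ...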
|
how do i write to a python subprocess' stdin
Question: I'm trying to write a python script that starts a subprocess, and writes to
the subprocess stdin. I'd also like to be able to determine an action to be
taken if the subprocess crashes.
The process I'm trying to start is a program called nuke which has its own
built in version of python which I'd like to be able to submit commands to,
and then tell it to quit after the commands execute. So far I've worked out
that if I start python in the command prompt and then start nuke as a
subprocess, I can type commands in to nuke. But I'd like to be able to put this
all in a script so that the master python program can start nuke and then
write to its stdin (and thus into its built-in version of python) and tell it
to do snazzy things, so I wrote a script that starts nuke like this:
subprocess.call(["C:/Program Files/Nuke6.3v5/Nuke6.3", "-t", "E:/NukeTest/test.nk"])
Then nothing happens, because nuke is waiting for user input. Does anyone know
how I'd now write to its stdin?
I'm doing this because I'm running a plugin with nuke that causes it to crash
intermittently when rendering multiple frames. So I'd like this script to be
able to start nuke, tell it to do something and then if it crashes try again.
So if there is a way to catch a crash and still be ok then that'd be great.
Answer: It might be better to use
[`communicate`](http://docs.python.org/library/subprocess.html#subprocess.Popen.communicate):
from subprocess import Popen, PIPE, STDOUT
p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=PIPE)
stdout_data = p.communicate(input='data_to_write')[0]
"Better", because of this warning:
> Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to
> avoid deadlocks due to any of the other OS pipe buffers filling up and
> blocking the child process.
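Since you also want to retry when nuke crashes, one option (a sketch; `myapp`
and the command string are placeholders, not Nuke-specific) is to loop on the
process return code after `communicate()`:
from subprocess import Popen, PIPE, STDOUT

commands = "print 'snazzy things'\nquit()\n"   # placeholder commands to feed to stdin
for attempt in range(3):
    p = Popen(['myapp'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
    out = p.communicate(input=commands)[0]
    if p.returncode == 0:      # clean exit: done
        break
    print "attempt %d failed with return code %s, retrying" % (attempt + 1, p.returncode)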
|
PyQt4's udp broadcast doesn't seem to work
Question: My application written in PyQt4 doesn't seem to get any data; the readyRead
event isn't even fired. The applications are used as follows:
python server.py -s -p 50000 #(server on port 50000)
python server.py -c -p 50000 #(client sending data to port 50000)
//
import sys
import time
from PyQt4 import QtNetwork, QtCore
from optparse import OptionParser
class Server(object):
def __init__(self, port):
self.port = port
try:
self.socket = QtNetwork.QUdpSocket()
self.socket.bind(QtNetwork.QHostAddress.Broadcast, int(self.port), QtNetwork.QUdpSocket.ShareAddress)
self.socket.readyRead.connect(self.receiver)
except QtNetwork.QUdpSocket.NetworkError:
print "EXCEPTION DURING INITIALIZING SERVER'S SOCKET"
sys.exit(1)
def receiver(self):
print "DEBUG: RECEIVE"
while(self.socket.hasPendingDatagrams()):
try:
size = self.socket.pendingDatagramSize()
msg, host, port = self.socket.readDatagram(size)
except:
print "EXCEPTION DURING RECEIVEING AND READING DATAGRAM"
else:
print "HOST %s:%s MSG: %s" % (str(host), str(port), str(msg))
def __del__(self):
print "DESTRUCTOR"
self.socket.close()
class Client(object):
def __init__(self, port):
self.port = port
try:
self.socket = QtNetwork.QUdpSocket()
except:
print "EXCEPTION DURING INITIALIZING CLIENT'S SOCKET"
self.main_loop()
def main_loop(self):
for i in range(20):
self.debug_msg()
time.sleep(0.5)
print "EXITING"
self.socket.close()
def debug_msg(self):
msg = "DEBUG"
self.socket.writeDatagram(msg, QtNetwork.QHostAddress.Broadcast, int(self.port))
if __name__ == "__main__":
parser = OptionParser()
parser.add_option("-p", "", action="store", type="string", dest="port")
parser.add_option("-c", "", action="store_true", dest="client")
parser.add_option("-s", "", action="store_true", dest="server")
options, args = parser.parse_args()
if not (options.server or options.client):
print "Client/Server not specified. Could not continue..."
sys.exit(1)
elif not options.port:
print "Server's port not specified. Could not continue..."
sys.exit(1)
else:
if options.server:
serv = Server(options.port)
App = QtCore.QCoreApplication(sys.argv)
sys.exit(App.exec_())
else:
client = Client(options.port)
Answer: In your client, you are not starting Qt's event loop. You should create an
application object and either call exec_() on it and drive the debug_msg calls
from a timer, or pump the event loop yourself using
QCoreApplication.processEvents().
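A minimal sketch of the timer approach, keeping the 0.5 s interval and the 20-message limit from your loop; the port 50000 is taken from your usage example:
from PyQt4 import QtCore, QtNetwork
import sys

class Client(QtCore.QObject):
    def __init__(self, port, parent=None):
        QtCore.QObject.__init__(self, parent)
        self.port = int(port)
        self.count = 0
        self.socket = QtNetwork.QUdpSocket(self)
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.send_msg)
        self.timer.start(500)                     # every 0.5 s, as in the question

    def send_msg(self):
        self.socket.writeDatagram("DEBUG", QtNetwork.QHostAddress.Broadcast, self.port)
        self.count += 1
        if self.count >= 20:                      # stop after 20 datagrams
            print "EXITING"
            QtCore.QCoreApplication.quit()

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)
    client = Client(50000)                        # port from the usage example
    sys.exit(app.exec_())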
Another option is to use flush():
<http://doc.qt.nokia.com/latest/qabstractsocket.html#flush>
> `bool QAbstractSocket::flush ()`
>
> This function writes as much as possible from the internal write buffer to
> the underlying network socket, without blocking. If any data was written,
> this function returns true; otherwise false is returned.
>
> Call this function if you need QAbstractSocket to start sending buffered
> data immediately. The number of bytes successfully written depends on the
> operating system. In most cases, you do not need to call this function,
> because QAbstractSocket will start sending data automatically once control
> goes back to the event loop. In the absence of an event loop, call
> waitForBytesWritten() instead.
|
How to create a new window button in PySide/PyQt?
Question: I'm having problems with a "New Window" function in PyQt4/PySide with Python
2.7. I connected an `initNewWindow()` function, which creates a new window, to an
action and put the action in a menu bar, a common feature of desktop software.
Instead of giving me a new persistent window alongside the existing one, the new
window pops up and immediately closes. The code I'm working on is proprietary, so
I created the example below, which fails in the same way. Is there any way to get
this to work? It runs in PySide with Python 2.7 and was written and tested on
Windows.
from PySide.QtCore import QSize
from PySide.QtGui import QAction
from PySide.QtGui import QApplication
from PySide.QtGui import QLabel
from PySide.QtGui import QMainWindow
from PySide.QtGui import QMenuBar
from PySide.QtGui import QMenu
from sys import argv
def main():
application = QApplication(argv)
window = QMainWindow()
window.setWindowTitle('New Window Test')
menu = QMenuBar(window)
view = QMenu('View')
new_window = QAction('New Window', view)
new_window.triggered.connect(initNewWindow)
view.addAction(new_window)
menu.addMenu(view)
label = QLabel()
label.setMinimumSize(QSize(300,300))
window.setMenuBar(menu)
window.setCentralWidget(label)
window.show()
application.exec_()
def initNewWindow():
window = QMainWindow()
window.setWindowTitle('New Window')
window.show()
if __name__ == '__main__':
main()
Answer: If a function creates a PyQt object that the application needs to continue
using, you will have to ensure that a reference to it is kept somehow.
Otherwise, it could be deleted by the Python garbage collector immediately
after the function returns.
So either give the object a parent, or keep it as an attribute of some other
object. (In principle, the object could also be made a global variable, but
that is usually considered bad practice).
Here's a revised version of your example script that demonstrates how to fix
your problem:
from PySide import QtGui, QtCore
class Window(QtGui.QMainWindow):
def __init__(self):
QtGui.QMainWindow.__init__(self)
menu = self.menuBar().addMenu(self.tr('View'))
action = menu.addAction(self.tr('New Window'))
action.triggered.connect(self.handleNewWindow)
def handleNewWindow(self):
window = QtGui.QMainWindow(self)
window.setAttribute(QtCore.Qt.WA_DeleteOnClose)
window.setWindowTitle(self.tr('New Window'))
window.show()
# or, alternatively
# self.window = QtGui.QMainWindow()
# self.window.setWindowTitle(self.tr('New Window'))
# self.window.show()
if __name__ == '__main__':
import sys
app = QtGui.QApplication(sys.argv)
window = Window()
window.resize(300, 300)
window.show()
sys.exit(app.exec_())
|
Python's trigonometric functions return unexpected values
Question:
import math
print "python calculator"
print "calc or eval"
while 0 == 0:
check = raw_input() #(experimental evaluation or traditional calculator)
if check == "eval":
a = raw_input("operator\n") #operator
if a == "+":
b = input("arg1\n") #inarg1
c = input("arg2\n") #inarg2
z = b + c
print z
elif a == "-":
b = input("arg1\n") #inarg1
c = input("arg2") #inarg2
z = b - c
print z
elif a == "/":
b = input("arg1\n") #inarg1
c = input("arg2\n") #inarg2
z = b / c
print z
elif a == "*":
b = input("arg1\n") #inarg1
c = input("arg2]n") #inarg2
z = b * c
print z
elif a == "^":
b = input("arg1\n") #inarg1
c = input("arg2\n") #inarg2
z = b ** c
print z
elif a == "sin":
b = input("arg1\n") #inarg1
var = math.degrees(math.sin(b))
print var
elif a == "asin":
b = input("arg1\n") #inarg1
var = math.degrees(math.asin(b))
print var
elif a == "cos":
b = input("arg1\n") #inarg1
var = math.degrees(math.cos(b))
print var
elif a == "acos":
b = input("arg1\n") #inarg1
var = math.degrees(math.acos(b))
print var
elif a == "tan":
b = input("arg1\n") #inarg1
var = math.degrees(math.tan(b))
print var
elif a == "atan":
b = input("arg1\n") #inarg1
var = math.degrees(math.atan(b))
print var
elif check == "calc" :
x = input() #takes input as expression
print x #prints expression's result
Isn't the sine of 90 degrees 1? With this it shows up as something around 51.2,
and Google's calculator does this too. By the way, this is my Python calculator;
the problem snippet is:
b = input("arg1\n") #inarg1
var = math.degrees(math.sin(b))
print var
This and the other trig functions are the problem. For the most part this is
just a simple Python calculator, but I wanted to add some trig functions.
Answer: You don't want to convert the return value of `sin()` to degrees; the return
value isn't an angle. You instead want to convert the argument to radians,
since `math.sin()` expects radians:
>>> math.sin(math.radians(90))
1.0
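Applied to the calculator, the `sin` branch should convert its degree input to radians before calling `math.sin()`, while `asin` (which returns an angle) is the case where `math.degrees()` belongs on the result. A small standalone sketch of both directions:
import math

angle_deg = 90
# sin expects radians, so convert the degree input before the call
print math.sin(math.radians(angle_deg))    # -> 1.0

ratio = 1.0
# asin returns an angle in radians, so convert the result back to degrees
print math.degrees(math.asin(ratio))       # -> 90.0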
|
How to extract Ethernet level data from pcap file?
Question: I have some pcap files; previously I used tshark combined with Python to extract
the source IP address, timestamp, etc.
However, when I now open these pcap files in Wireshark, I can see they also
contain VLAN info, and the VLAN ID (VID) is what I want to extract.
When I use `tshark -r xx.pcap` in the terminal, it only shows the TCP-level
info and I cannot get the VLAN ID. Does anyone know how to do this in Python,
with some library or tool?
Answer: The perfect solution is [Scapy](http://www.secdev.org/projects/scapy/).
In this example I create a packet with a VLAN tag and then print the VLAN ID:
from scapy.all import *
pkt=Ether()/Dot1Q(vlan=0x32)/IP(dst="192.168.1.66")/ICMP()
print pkt[Dot1Q].vlan
And this example shows how to read a pcap file and print the VLAN ID of each tagged packet:
from scapy.all import *
from scapy.utils import *
pkts=rdpcap("filename.pcap")
for pkt in pkts:
if pkt.haslayer(Dot1Q):
print pkt[Dot1Q].vlan
Tested, and it works perfectly.
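Since the question also mentions extracting the source IP and timestamp, here is a sketch that pulls all three from a capture; the file name is the one from the question, and `pkt.time` is Scapy's per-packet capture timestamp (epoch seconds):
from scapy.all import rdpcap, Dot1Q, IP

pkts = rdpcap("xx.pcap")
for pkt in pkts:
    if pkt.haslayer(Dot1Q) and pkt.haslayer(IP):
        # timestamp, source address and VLAN ID for each tagged IP packet
        print pkt.time, pkt[IP].src, pkt[Dot1Q].vlan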
|