Why can't I access the private variables of the superclass in Python?
Question: I know that I _should_ use the access methods. I see in the `datetime` module
that the class `datetime` inherits from date.
class datetime(date):
<some other code here....>
self = date.__new__(cls, year, month, day)
self._hour = hour
self._minute = minute
self._second = second
self._microsecond = microsecond
self._tzinfo = tzinfo
return self
I also see that datetime is able to access the members of date, as in
`__repr__`:
def __repr__(self):
"""Convert to formal string, for repr()."""
L = [self._year, self._month, self._day, # These are never zero
self._hour, self._minute, self._second, self._microsecond]
I tried to subclass datetime to add some information to it and then write a
similar `__repr__` function:
def __repr__(self):
"""Convert to formal string, for repr()."""
L = [self._year, self._month, self._day, # These are never zero
self._hour, self._minute, self._second, self._microsecond,
self._latitude, self._longitude]
The debugger complained that self._year didn't exist. (`self.year` works,
however.)
I know that I _should_ be using the access function. I just want to understand
why `datetime` is able to access the private variables of `date` but my
subclass isn't able.
Answer: If you look at the [end of
`datetime.py`](http://hg.python.org/cpython/file/1bee3da9a305/Lib/datetime.py#l2094),
you'll see this:
try:
from _datetime import *
except ImportError:
pass
This imports, among other things, the C version of the previously defined Python
classes; that version is the one that actually gets used, and its objects don't
have the private members you're trying to access.
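A quick way to confirm which implementation you ended up with (a small sketch):
import datetime
d = datetime.datetime(2012, 5, 1)
# The C implementation keeps its state in C struct fields, so the private
# Python-level attributes simply don't exist on its instances:
print(hasattr(d, '_year'))  # False with the C version, True with the pure-Python one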
|
bash: syntax error near unexpected token `(' - Python
Question:
# from lxml import etree;
import module2dbk;
print module2dbk.xsl_transform(etree.parse('test-ccap/col10614/index.cnxml'), []);
Error: bash: syntax error near unexpected token `('
Answer: Add `#!/usr/bin/env python` at the top of your script, or call it with
`python myscript.py`. As it stands, bash itself is trying to execute the Python
source, which is why you see a shell syntax error.
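For example, the top of the script would become the following (note the `from lxml
import etree` line also needs to be uncommented, since `etree.parse` is used below
it); then either mark the file executable or run it with `python myscript.py`:
#!/usr/bin/env python
from lxml import etree
import module2dbk
print module2dbk.xsl_transform(etree.parse('test-ccap/col10614/index.cnxml'), [])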
|
Convert float to comma-separated string
Question: How would I convert a float into its 'accounting form' --
100028282.23 --> 100,028,282.23
100028282 --> 100,028,282.00
Is there a python method that does this?
Answer: You can use the
[`locale.format()`](http://docs.python.org/library/locale.html#locale.format)
function to do this:
>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'en_US.utf8')
'en_US.utf8'
>>> locale.format("%.2f", 100028282.23, grouping=True)
'100,028,282.23'
Note that you have to give the precision: `%.2f`
Alternatively you can use the
[`locale.currency()`](http://docs.python.org/library/locale.html#locale.currency)
function, which follows the
[`LC_MONETARY`](http://docs.python.org/library/locale.html#locale.LC_MONETARY)
settings:
>>> locale.currency(100028282.23)
'$100028282.23'
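If you want the thousands separators from `locale.currency()` as well, pass
`grouping=True` there too:
>>> locale.currency(100028282.23, grouping=True)
'$100,028,282.23'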
|
Biopython local BLAST database error
Question: I am trying to run blastx locally with the "nr" database using Biopython's
NcbiblastxCommandline tool but I always get the following error regarding the
protein database search path:
>>> from Bio.Blast.Applications import NcbiblastxCommandline
>>> nr = "/Users/Priya/Documents/Python/ncbi-blast-2.2.26+/bin/nr.pal"
>>> infile = "/Users/Priya/Documents/Python/Tutorials/opuntia.txt"
>>> blastx = "/Users/Priya/Documents/Python/ncbi-blast-2.2.26+/bin/blastx"
>>> outfile = "/Users/Priya/Documents/Python/Tutorials/opuntia_python_local.xml"
>>> blastx_cline = NcbiblastxCommandline(blastx, query = infile, db = nr, evalue = 0.001, out = outfile)
>>> stdout, stderr = blastx_cline()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Bio/Application/__init__.py", line 443, in __call__
stdout_str, stderr_str)
Bio.Application.ApplicationError: Command '/Users/Priya/Documents/Python/ncbi-blast-2.2.26+/bin/blastx -out /Users/Priya/Documents/Python/Tutorials/opuntia_python_local.xml -query /Users/Priya/Documents/Python/Tutorials/opuntia.txt -db /Users/Priya/Documents/Python/ncbi-blast-2.2.26+/bin/nr.pal -evalue 0.001' returned non-zero exit status 2, 'BLAST Database error: No alias or index file found for protein database [/Users/Priya/Documents/Python/ncbi-blast-2.2.26+/bin/nr.pal] in search path [/Users/Priya::]'
I am not sure how to change the path to point to the nr database that I
downloaded, but I thought I pointed to it correctly since I can run this code
from the command line without any problems:
Priyas-iMac:~ Priya$ /Users/priya/Documents/Python/ncbi-blast-2.2.26+/bin/blastx -query /Users/priya/Documents/Python/Tutorials/opuntia.txt -db /Users/priya/Documents/Python/ncbi-blast-2.2.26+/bin/nr -out /Users/priya/Documents/Python/Tutorials/opuntia_local.xml -evalue 0.001 -outfmt 5
The above command line code creates an xml file of the blast results as I
would expect.
Any help solving this problem with the Biopython NCBI command line tools would
be greatly appreciated!
Answer: Your `nr` variable ends in `nr.pal`; it should just be `nr` (without the
`.pal` extension). If removing `.pal` doesn't work, you can try setting up an
`.ncbirc` file in your home directory which contains this:
[BLAST]
BLASTDB=/directory/path/to/blast/databases
It basically sets up an environment variable for blast database lookups.
Afterwards, you can simply use `nr` (no path required) in your `nr` variable.
By the way, you can check the command line constructed by
`NcbiblastxCommandline` using `print blastx_cline`. My guess is it's not the
same as the one you typed manually.
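For example, keeping the other variables from your session and assuming the
database files really do live in that `bin` directory, the fix is just dropping
the extension:
nr = "/Users/Priya/Documents/Python/ncbi-blast-2.2.26+/bin/nr"  # no .pal
blastx_cline = NcbiblastxCommandline(blastx, query=infile, db=nr,
                                     evalue=0.001, out=outfile)
print blastx_cline  # inspect the exact command before running it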
EDIT: Check out <http://www.biostars.org/> for bioinformatics-specific
questions similar to StackExchange's format.
|
pythonpath for google app engine in pydev eclipse
Question: I have `google app engine` installed in the /home/mydev folder, such that
dev_appserver.py is in the `/home/mydev/google_appengine` directory.
In Eclipse Helios I have PyDev, and for my project's PYTHONPATH I added the
path `/home/mydev/google_appengine/lib` under external libraries.
Still, PyDev complains that it cannot resolve
import webapp2
Any idea how to solve this?
Answer: I am using Eclipse Indigo 3.7.2, PyDev 2.2.3, and GAE 1.6.5 and I get the same
problem by pointing external libraries to `/home/mydev/google_appengine/lib`
(or the equivalent on my machine), but it resolves when I point the external
libraries to `/home/mydev/google_appengine`, the parent directory.
Hope this helps (this is an empirical solution that worked for me; I have no
specific reason why it works).
|
Python checking if a fork() process is finished
Question: Just wondering if someone could help me out. The problem I'm having is that I
use os.fork() to get several bits of information and send them to a file, but my
check to see whether the forked process has finished is not working.
import sys
import time
import os
import re
ADDRESS = sys.argv[1]
sendBytes = sys.argv[2]
proID2 = os.fork()
if proID2 == 0:
os.system('ping -c 20 ' + ADDRESS + ' > testStuff2.txt')
os._exit(0)
print proID2
finn = True
while finn == True:
time.sleep(1)
finn = os.path.exists("/proc/" + str(proID2))
print os.path.exists("/proc/" + str(proID2))
print 'eeup out of it ' + str(proID2)
I think that the os.path.exists() is maybe not the right thing to use.
Thanks.
Answer: To wait for the child process to terminate, use one of the `os.waitXXX()`
functions, such as
[`os.waitpid()`](http://docs.python.org/library/os.html#os.waitpid). This
method is reliable; as a bonus, it will give you the status information.
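A minimal sketch of how that could look with the ping example from the question
(`ADDRESS` as in the original script):
import os
pid = os.fork()
if pid == 0:
    # child: run the ping and exit
    os.system('ping -c 20 ' + ADDRESS + ' > testStuff2.txt')
    os._exit(0)
# parent: block until the child exits; pass os.WNOHANG as the second
# argument instead if you want to poll without blocking
waited_pid, status = os.waitpid(pid, 0)
print 'child %d exited with status %d' % (waited_pid, os.WEXITSTATUS(status))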
|
Moving from multiprocessing to threading
Question: In my project, I use the `multiprocessing` module in order to run tasks
in parallel. I want to use `threading` instead, as it performs better for my
workload (my tasks are TCP/IP bound, not CPU or disk I/O bound).
`multiprocessing` has wonderful functions, such as `Pool.imap_unordered` and
`Pool.map_async`, that do not exist in the `threading` module.
What is the right way to convert my code to use `threading` instead? The
documentation introduces the `multiprocessing.dummy` module, which is a wrapper
around `threading`. However, it raises lots of errors (at least on
Python 2.7.3):
pool = multiprocessing.Pool(processes)
File "C:\python27\lib\multiprocessing\dummy\__init__.py", line 150, in Pool
return ThreadPool(processes, initializer, initargs)
File "C:\python27\lib\multiprocessing\pool.py", line 685, in __init__
Pool.__init__(self, processes, initializer, initargs)
File "C:\python27\lib\multiprocessing\pool.py", line 136, in __init__
self._repopulate_pool()
File "C:\python27\lib\multiprocessing\pool.py", line 199, in _repopulate_pool
w.start()
File "C:\python27\lib\multiprocessing\dummy\__init__.py", line 73, in start
self._parent._children[self] = None
AttributeError: '_DummyThread' object has no attribute '_children'
**Edit:** What actually happens is that I have a GUI that runs a different
thread (to prevent the GUI from getting stuck). That thread runs the specific
search function that has the `ThreadPool` that fails.
**Edit 2:** The bugfix [was fixed](http://bugs.python.org/issue14881) and will
be included in future releases. Great to see a crasher fixed!
import urllib2, htmllib, formatter
import multiprocessing.dummy as multiprocessing
import xml.dom.minidom
import os
import string, random
from urlparse import parse_qs, urlparse
from useful_util import retry
import config
from logger import log
class LinksExtractor(htmllib.HTMLParser):
def __init__(self, formatter):
htmllib.HTMLParser.__init__(self, formatter)
self.links = []
self.ignoredSites = config.WebParser_ignoredSites
def start_a(self, attrs):
for attr in attrs:
if attr[0] == "href" and attr[1].endswith(".mp3"):
if not filter(lambda x: (x in attr[1]), self.ignoredSites):
self.links.append(attr[1])
def get_links(self):
return self.links
def GetLinks(url, returnMetaUrlObj=False):
'''
Function gather links from a url.
@param url: Url Address.
@param returnMetaUrlObj: If true, returns a MetaUrl Object list.
Else, returns a string list. Default is False.
@return links: Look up.
'''
htmlparser = LinksExtractor(formatter.NullFormatter())
try:
data = urllib2.urlopen(url)
except (urllib2.HTTPError, urllib2.URLError) as e:
log.error(e)
return []
htmlparser.feed(data.read())
htmlparser.close()
links = list(set(htmlparser.get_links()))
if returnMetaUrlObj:
links = map(MetaUrl, links)
return links
def isAscii(s):
"Function checks is the string is ascii."
try:
s.decode('ascii')
except (UnicodeEncodeError, UnicodeDecodeError):
return False
return True
@retry(Exception, logger=log)
def parse(song, source):
'''
Function parses the source search page and returns the .mp3 links in it.
@param song: Search string.
@param source: Search website source. Value can be dilandau, mp3skull, youtube, seekasong.
@return links: .mp3 url links.
'''
source = source.lower()
if source == "dilandau":
return parse_dilandau(song)
elif source == "mp3skull":
return parse_Mp3skull(song)
elif source == "SeekASong":
return parse_SeekASong(song)
elif source == "youtube":
return parse_Youtube(song)
log.error('no source "%s". (from parse function in WebParser)')
return []
def parse_dilandau(song, pages=1):
"Function connects to Dilandau.eu and returns the .mp3 links in it"
if not isAscii(song): # Dilandau doesn't like unicode.
log.warning("Song is not ASCII. Skipping on dilandau")
return []
links = []
song = urllib2.quote(song.encode("utf8"))
for i in range(pages):
url = 'http://en.dilandau.eu/download_music/%s-%d.html' % (song.replace('-','').replace(' ','-').replace('--','-').lower(),i+1)
log.debug("[Dilandau] Parsing %s... " % url)
links.extend(GetLinks(url, returnMetaUrlObj=True))
log.debug("[Dilandau] found %d links" % len(links))
for metaUrl in links:
metaUrl.source = "Dilandau"
return links
def parse_Mp3skull(song, pages=1):
"Function connects to mp3skull.com and returns the .mp3 links in it"
links = []
song = urllib2.quote(song.encode("utf8"))
for i in range(pages):
# http://mp3skull.com/mp3/how_i_met_your_mother.html
url = 'http://mp3skull.com/mp3/%s.html' % (song.replace('-','').replace(' ','_').replace('__','_').lower())
log.debug("[Mp3skull] Parsing %s... " % url)
links.extend(GetLinks(url, returnMetaUrlObj=True))
log.debug("[Mp3skull] found %d links" % len(links))
for metaUrl in links:
metaUrl.source = "Mp3skull"
return links
def parse_SeekASong(song):
"Function connects to seekasong.com and returns the .mp3 links in it"
song = urllib2.quote(song.encode("utf8"))
url = 'http://www.seekasong.com/mp3/%s.html' % (song.replace('-','').replace(' ','_').replace('__','_').lower())
log.debug("[SeekASong] Parsing %s... " % url)
links = GetLinks(url, returnMetaUrlObj=True)
for metaUrl in links:
metaUrl.source = "SeekASong"
log.debug("[SeekASong] found %d links" % len(links))
return links
def parse_Youtube(song, amount=10):
'''
Function searches a song in youtube.com and returns the clips in it using Youtube API.
@param song: The search string.
@param amount: Amount of clips to obtain.
@return links: List of links.
'''
"Function connects to youtube.com and returns the .mp3 links in it"
song = urllib2.quote(song.encode("utf8"))
url = r"http://gdata.youtube.com/feeds/api/videos?q=%s&max-results=%d&v=2" % (song.replace(' ', '+'), amount)
urlObj = urllib2.urlopen(url, timeout=4)
data = urlObj.read()
videos = xml.dom.minidom.parseString(data).getElementsByTagName('feed')[0].getElementsByTagName('entry')
links = []
for video in videos:
youtube_watchurl = video.getElementsByTagName('link')[0].attributes.item(0).value
links.append(get_youtube_hightest_quality_link(youtube_watchurl))
return links
def get_youtube_hightest_quality_link(youtube_watchurl, priority=config.youtube_quality_priority):
'''
Function returns the highest quality link for a specific youtube clip.
@param youtube_watchurl: The Youtube Watch Url.
@param priority: A list represents the qualities priority.
@return MetaUrlObj: MetaUrl Object.
'''
video_id = parse_qs(urlparse(youtube_watchurl).query)['v'][0]
youtube_embedded_watchurl = "http://www.youtube.com/embed/%s?autoplay=1" % video_id
d = get_youtube_dl_links(video_id)
for x in priority:
if x in d.keys():
return MetaUrl(d[x][0], 'youtube', d['VideoName'], x, youtube_embedded_watchurl)
log.error("No Youtube link has been found in get_youtube_hightest_quality_link.")
return ""
@retry(Exception, logger=log)
def get_youtube_dl_links(video_id):
'''
Function gets the download links for a youtube clip.
This function parses the get_video_info format of youtube.
@param video_id: Youtube Video ID.
@return d: A dictonary of qualities as keys and urls as values.
'''
d = {}
url = r"http://www.youtube.com/get_video_info?video_id=%s&el=vevo" % video_id
urlObj = urllib2.urlopen(url, timeout=12)
data = urlObj.read()
data = urllib2.unquote(urllib2.unquote(urllib2.unquote(data)))
data = data.replace(',url', '\nurl')
data = data.split('\n')
for line in data:
if 'timedtext' in line or 'status=fail' in line or '<AdBreaks>' in line:
continue
try:
url = line.split('&quality=')[0].split('url=')[1]
quality = line.split('&quality=')[1].split('&')[0]
except:
continue
if quality in d:
d[quality].append(url)
else:
d[quality] = [url]
try:
videoName = "|".join(data).split('&title=')[1].split('&')[0]
except Exception, e:
log.error("Could not parse VideoName out of get_video_info (%s)" % str(e))
videoName = ""
videoName = unicode(videoName, 'utf-8')
d['VideoName'] = videoName.replace('+',' ').replace('--','-')
return d
class NextList(object):
"A list with a 'next' method."
def __init__(self, l):
self.l = l
self.next_index = 0
def next(self):
if self.next_index < len(self.l):
value = self.l[self.next_index]
self.next_index += 1
return value
else:
return None
def isEOF(self):
" Checks if the list has reached the end "
return (self.next_index >= len(self.l))
class MetaUrl(object):
"a url strecture data with many metadata"
def __init__(self, url, source="", videoName="", quality="", youtube_watchurl=""):
self.url = str(url)
self.source = source
self.videoName = videoName # Youtube Links Only
self.quality = quality # Youtube Links Onlys
self.youtube_watchurl = youtube_watchurl # Youtube Links Onlys
def __repr__(self):
return "<MetaUrl '%s' | %s>" % (self.url, self.source)
def search(song, n, processes=config.search_processes):
'''
Function searches song and returns n valid .mp3 links.
@param song: Search string.
@param n: Number of songs.
@param processes: Number of processes to launch in the subprocessing pool.
'''
linksFromSources = []
pool = multiprocessing.Pool(processes)
args = [(song, source) for source in config.search_sources]
imapObj = pool.imap_unordered(_parse_star, args)
for i in range(len(args)):
linksFromSources.append(NextList(imapObj.next(15)))
pool.terminate()
links = []
next_source = 0
while len(links) < n and not all(map(lambda x: x.isEOF(), linksFromSources)):
nextItem = linksFromSources[next_source].next()
if nextItem:
log.debug("added song %.80s from source ID %d (%s)" % (nextItem.url.split('/')[-1], next_source, nextItem.source))
links.append(nextItem)
if len(linksFromSources) == next_source+1:
next_source = 0
else:
next_source += 1
return links
def _parse_star(args):
return parse(*args)
Answer: I can't reproduce your problem on my machine. What's in your `processes`
variable? Is it an `int`?
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing.dummy as multiprocessing
>>> pool = multiprocessing.Pool(5)
>>> pool
<multiprocessing.pool.ThreadPool object at 0x00C7DF90>
>>>
**Edit:** You probably also want to double-check whether you have messed up your
standard library; try a clean install of Python 2.7.3 in a different folder.
**Edit 2:**
You can quickly patch it like this:
import multiprocessing.dummy
import weakref
import threading
class Worker(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
poll = multiprocessing.dummy.Pool(5)
print str(poll)
w = Worker()
w._children = weakref.WeakKeyDictionary()
w.start()
|
Python Run Two Functions With Different Timers In One Daemon
Question: I'm using the [template python daemon discussed
here](http://stackoverflow.com/questions/473620/how-do-you-create-a-daemon-in-
python) in two different scripts to launch two separate daemons. I would like
to combine them into one daemon script that has one lockfile, etc. However,
each has different loop timer, one at 1 minute and the other at 5 minutes. I'm
starting with this:
import os
import subprocess
import time
from daemon import runner
class App():
def __init__(self):
self.stdin_path = '/dev/null'
self.stdout_path = '/dev/tty'
self.stderr_path = '/dev/tty'
self.pidfile_path = '/tmp/test.pid'
self.pidfile_timeout = 5
def run(self):
try:
while True:
print "hello!"
# set the sleep time for repeated action here:
time.sleep(1)
except Exception, e:
raise
app = App()
daemon_runner = runner.DaemonRunner(app)
daemon_runner.do_action()
The obvious thing to do would be to create another class but I want the basic
stuff like pidfile to remain constant.
Answer: I don't entirely get why you want to do this, but this should work:
Make `run_A` and `run_B` functions to do the two independent things (obviously
not those names), including the sleeps and so on. It doesn't really matter,
but it might make sense to have these outside the `App` class.
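For example, the two job functions could look roughly like this (a sketch; the
`do_*_work` helpers are placeholders for whatever each daemon currently does in
its loop, and you will need `import threading` for the `run` method below):
import time
def run_A():
    # job that repeats every minute
    while True:
        do_one_minute_work()   # hypothetical helper
        time.sleep(60)
def run_B():
    # job that repeats every five minutes
    while True:
        do_five_minute_work()  # hypothetical helper
        time.sleep(300)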
Then make a `run` function that looks like this:
def run(self):
threads = [
threading.Thread(target=run_A),
threading.Thread(target=run_B)
]
for thread in threads:
thread.start()
for thread in threads:
thread.join()
This will spawn a separate thread for each job, which can then go on its merry
way. If you have any shared resources between the jobs, you'll have to worry
about locking.
This assumes that the jobs are quick to execute and exact timings don't really
matter; because of CPython's
[GIL](http://en.wikipedia.org/wiki/Global_Interpreter_Lock), only one thread
will run Python statements at once, so if they pop up at the same time they'll
run a bit slower. You could pretty trivially switch this to use
`multiprocessing` if you want them to be able to run in parallel.
This will result in some "timer drift" because you're using `sleep` (as would
your daemons if run independently). If this matters, you could use interrupts
instead, or calculate the time until the next multiple of five minutes rather
than just sleeping five minutes.
|
Why does the pygame window not close properly?
Question: When I go to close the program window, the program freezes and I am forced
to force-quit it. Why doesn't the program close when the X / Close button is
clicked? I am also using Python 2.7, if that matters.
import pygame
import os, sys
from itertools import *
from oryxsprites import *
from oryxbackground import *
running = True
while running:
backgroundmain()
pygame.display.set_caption('OryxGame')
#pygame.display.set_icon(biggrasstile)
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
Answer: A quote from [an article that sports code very similar to
yours](http://www.petercollingridge.co.uk/pygame-physics-simulation/creating-
pygame-window):
> The window now persists whilst 'running' is equal to True, which it will be
> until you close the window (by clicking the X). Note that if you use an IDE
> for Python programming, then it may interfere with Pygame. This isn’t
> normally a major problem but it can stop the Pygame window from closing
> properly. If so, adding pygame.quit() should solve the problem.
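Applied to the loop from the question, that would look roughly like this (a
sketch; the caption and icon lines are left out):
import pygame
from oryxbackground import *
running = True
while running:
    backgroundmain()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
pygame.quit()  # shut pygame down explicitly so the window can actually close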
|
how to schedule a timed event in python
Question: I'd like to schedule a repeated timed event in python like this: "at time X
launch function Y (in a separate thread) and repeat every hour"
"X" is fixed timestamp
The code should be cross-platform, so i'd like to avoid using an external
program like "cron" to do this.
code extract:
import threading
threading.Timer(10*60, mail.check_mail).start()
#... SET UP TIMED EVENTS HERE
while(1):
print("please enter command")
try:
command = raw_input()
except:
continue
handle_command(command)
Answer: Create a [`dateutil.rrule`](http://labix.org/python-
dateutil#head-470fa22b2db72000d7abe698a5783a46b0731b57), `rr` for your
schedule and then use a loop like this in your thread:
for ts in rr:
    now = datetime.now()
    if ts > now:
        time.sleep((ts - now).total_seconds())
    # do stuff
Or a better solution that will account for clock changes:
ts = next(rr)
while True:
    now = datetime.now()
    if ts > now:
        time.sleep((ts - now).total_seconds() / 2)
        continue
    # do stuff
    ts = next(rr)
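Building the `rr` rule itself might look like this (a sketch; the start time here
stands in for your fixed timestamp X, and the frequency matches the hourly repeat
from the question):
from datetime import datetime
from dateutil.rrule import rrule, HOURLY
rr = rrule(HOURLY, dtstart=datetime(2012, 6, 1, 9, 0, 0))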
|
How can I express this Python for loop in Haskell?
Question: Sometimes when I want to use `wget`, I just end up printing a bunch of lines
with Python like so:
>>> for i in range(25):
... print "http://www.theoi.com/Text/HomerOdyssey", i, ".html"
...
http://www.theoi.com/Text/HomerOdyssey 0 .html
http://www.theoi.com/Text/HomerOdyssey 1 .html
http://www.theoi.com/Text/HomerOdyssey 2 .html
http://www.theoi.com/Text/HomerOdyssey 3 .html
http://www.theoi.com/Text/HomerOdyssey 4 .html
http://www.theoi.com/Text/HomerOdyssey 5 .html
http://www.theoi.com/Text/HomerOdyssey 6 .html
http://www.theoi.com/Text/HomerOdyssey 7 .html
http://www.theoi.com/Text/HomerOdyssey 8 .html
http://www.theoi.com/Text/HomerOdyssey 9 .html
http://www.theoi.com/Text/HomerOdyssey 10 .html
http://www.theoi.com/Text/HomerOdyssey 11 .html
http://www.theoi.com/Text/HomerOdyssey 12 .html
http://www.theoi.com/Text/HomerOdyssey 13 .html
http://www.theoi.com/Text/HomerOdyssey 14 .html
http://www.theoi.com/Text/HomerOdyssey 15 .html
http://www.theoi.com/Text/HomerOdyssey 16 .html
http://www.theoi.com/Text/HomerOdyssey 17 .html
http://www.theoi.com/Text/HomerOdyssey 18 .html
http://www.theoi.com/Text/HomerOdyssey 19 .html
http://www.theoi.com/Text/HomerOdyssey 20 .html
http://www.theoi.com/Text/HomerOdyssey 21 .html
http://www.theoi.com/Text/HomerOdyssey 22 .html
http://www.theoi.com/Text/HomerOdyssey 23 .html
http://www.theoi.com/Text/HomerOdyssey 24 .html
>>>
I can paste that output into a new file, remove the spaces, and use `wget -i`.
But I am sick of Python.
I want to learn Haskell.
Despite spending 10 minutes trying to do the same thing from `ghci`, I am no
further forward.
This is what my attempts looked like:
alec@ROOROO:~/oldio$ ghci
GHCi, version 7.0.4: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> putStrLn
<interactive>:1:1:
No instance for (Show (String -> IO ()))
arising from a use of `print'
Possible fix:
add an instance declaration for (Show (String -> IO ()))
In a stmt of an interactive GHCi command: print it
Prelude> putStrLn "hey"
hey
Prelude> putStrLn "hey" [1..10]
<interactive>:1:1:
The function `putStrLn' is applied to two arguments,
but its type `String -> IO ()' has only one
In the expression: putStrLn "hey" [1 .. 10]
In an equation for `it': it = putStrLn "hey" [1 .. 10]
Prelude> putStrLn "hey" snd [1..10]
<interactive>:1:1:
The function `putStrLn' is applied to three arguments,
but its type `String -> IO ()' has only one
In the expression: putStrLn "hey" snd [1 .. 10]
In an equation for `it': it = putStrLn "hey" snd [1 .. 10]
Prelude> putStrLn "hey" $ snd [1..10]
<interactive>:1:1:
The first argument of ($) takes one argument,
but its type `IO ()' has none
In the expression: putStrLn "hey" $ snd [1 .. 10]
In an equation for `it': it = putStrLn "hey" $ snd [1 .. 10]
Prelude> "hello"
"hello"
Prelude> "hello" ++ "world"
"helloworld"
Prelude> "hello" ++ [1..10] ++ " world"
<interactive>:1:16:
No instance for (Num Char)
arising from the literal `10'
Possible fix: add an instance declaration for (Num Char)
In the expression: 10
In the first argument of `(++)', namely `[1 .. 10]'
In the second argument of `(++)', namely `[1 .. 10] ++ " world"'
Prelude> "hello" ++ print [1..10] ++ " world"
<interactive>:1:12:
Couldn't match expected type `[Char]' with actual type `IO ()'
In the return type of a call of `print'
In the first argument of `(++)', namely `print [1 .. 10]'
In the second argument of `(++)', namely
`print [1 .. 10] ++ " world"'
Prelude> print [1..10]
[1,2,3,4,5,6,7,8,9,10]
Prelude> map ("hello") [1..10]
<interactive>:1:6:
Couldn't match expected type `a0 -> b0' with actual type `[Char]'
In the first argument of `map', namely `("hello")'
In the expression: map ("hello") [1 .. 10]
In an equation for `it': it = map ("hello") [1 .. 10]
Prelude> greeting :: String --> Int -> [String, Int]
<interactive>:1:39: parse error on input `,'
Prelude> greeting :: String --> Int -> [(String), (Int)]
<interactive>:1:41: parse error on input `,'
Prelude> greeting :: String -> Int -> [(String), (Int)]
<interactive>:1:40: parse error on input `,'
Prelude> greeting :: String -> Int -> [(String) (Int)]
<interactive>:1:1: Not in scope: `greeting'
Prelude> foreach [1..24] print
<interactive>:1:1: Not in scope: `foreach'
Prelude> import Data.IORef
Prelude Data.IORef> foreach [1..24] print
<interactive>:1:1: Not in scope: `foreach'
Prelude Data.IORef> foreach = flip mapM_
<interactive>:1:9: parse error on input `='
Answer:
for i in range(25):
... print "http://www.theoi.com/Text/HomerOdyssey", i, ".html"
becomes:
> import Control.Monad
so that we can:
forM_ [1..25] $ \i ->
putStrLn $ "http://www.theoi.com/Text/HomerOdyssey" ++ show i ++ ".html"
|
Set/Get user information from a xmpp server: python
Question: I am new to Python and I am trying to create a testing script to exercise
different actions on my XMPP server. I was already able to test my user's login,
and now I want to get the information that the server is sending (a stanza) and
set new information.
I have read several websites and I am still not very clear on all of this;
the main source has been sleekxmpp.com.
I have my stanza:
<iq type='get' to= 'chat.net' id='id1'>
<aa xmlns='http://myweb.com' />
</iq>
<iq type='result' to= 'chat.net' id='id1'>
<aa xmlns='http://myweb.com' >
<name>My name as included in sent mails</name>
<lang>en</lang>
<mail>My mail as included in sent mails</mail>
</aa>
</iq>
I want to get the information and also set one of the parameters (let's say
name), but I don't know how.
class user_info(sleekxmpp.stanza.Iq):
self.get_query()
I must do it in python. Any help appreciated
Answer: What you want to do is create a custom stanza class for your stanza. Here's
one that will work for the example you have:
from sleekxmpp import Iq
from sleekxmpp.xmlstream import ElementBase, register_stanza_plugin
class AA(ElementBase):
name = 'aa'
namespace = 'http://myweb.com'
plugin_attrib = 'aa'
interfaces = set(['name', 'lang', 'mail'])
sub_interfaces = interfaces
register_stanza_plugin(Iq, AA)
Ok, so what does all of that do? The `name` field specifies that the XML
object's root tag is 'aa', and `namespace` specifies the root tag's namespace;
obvious so far I hope.
The `plugin_attrib` field is the name that can be used to access this stanza
from the parent stanza. For example, you should already be familiar with how
you can use `iq['type']` or `iq['from']` to extract data out of an Iq stanza.
With `plugin_attrib` set to `"aa"`, then you can use `iq['aa']` to get a
reference to the AA content.
The `interfaces` set is the set of key names that this stanza provides for
extracting information, just like working with dictionaries. For example an Iq
stanza has 'to', 'from', 'type', etc in its interfaces set. By default,
accessing and modifying these keys will create or modify attributes of the
stanza's main element. So, at this point, your stanza would behave like this:
aa = AA()
aa['name'] = 'foo'
print aa
"<aa xmlns='http://myweb.com' name='foo' />"
Now, to instead map interface keys to subelements instead of attributes, they
need to be in the `sub_interfaces` set. So by setting `sub_interfaces =
interfaces` the above example would now work like so:
aa = AA()
aa['name'] = 'foo'
print aa
"<aa xmlns='http://myweb.com'><name>foo</name></aa>"
If you needed something more advanced, you could also define methods of the
form get_* / set_* / del_* where * is the interface name which will then be
used to extract or modify data.
So, all together, you will be able to do:
iq = Iq()
# ... set iq parameters
iq.enable('aa') # Add an initial, empty aa element.
try:
resp = iq.send()
print(resp['aa']['name'])
# ..., etc
except XMPPError:
print('There was an error')
Also, don't forget that we have the [email protected] chat room for
SleekXMPP help if you need it.
|
How to make Python 2.7 and Python 3.1 coexist on windows 7?
Question: I have Python 3.1 installed on my desktop, but now I need Python 2.7
to run CQL. I installed both versions of Python on my box; when I type 'python',
the 3.1 version is invoked. But when I tried to use the 2.7 version by specifying
the path to the 2.7 executable, 3.1 was invoked again, with some errors, as
listed below:
C:\Python27>.\python.exe
Traceback (most recent call last):
File "C:\Python31\lib\site.py", line 56, in <module>
import os
File "C:\Python31\lib\os.py", line 380, in <module>
from _abcoll import MutableMapping # Can't use collections (bootstrap)
File "C:\Python31\lib\_abcoll.py", line 54
class Hashable(metaclass=ABCMeta):
^
SyntaxError: invalid syntax
what should/could I do to make both Python version work for me?
Answer: Either use [cygwin](http://www.cygwin.com/) to have a nice bash environment
that makes it easy to launch python scripts with specific interpreters, or,
use [virtualenv](http://pypi.python.org/pypi/virtualenv) to create
environments based on specific interps:
> mkdir project
> cd project
> C:\Python27\Scripts\virtualenv.exe -p C:\Python32\python.exe .
> Scripts\activate
At this point, your env has been set to this project, and typing `python` will
start python3.2
You would type `deactivate` to revert back to normal env. With this approach,
you can either set this up on a per project basis to manage version
dependencies between python projects, or you can make this more global for
each python version in general. Up to you.
On *nix-based systems we have the luxury of the "shebang" line at the top of
the script to tell the OS which interpreter to use. Windows only has
file-extension associations via the GUI, so on the command line you have to be
explicit, in the form `python <script.py>`.
|
Python's append() only allows unique items in a list?
Question: The Python documentation implies that duplicate items can exist within a list,
and this is supported by the assignment: list = ["word1", "word1"]. However,
Python's append() doesn't seem to add an item if it's already in the list. Am
I missing something here, or is this a deliberate attempt at set()-like
behaviour?
>> d = {}
>> d["word1"] = 1
>> d["word2"] = 2
>> d["word2"] = 3
>> vocab = []
>> for word,freq in d.iteritems():
>> ... vocab.append(word)
>> for item in vocab:
>> ... print item
returns:
word1
word2
Where's the second word2?
Answer: There _is_ no second word2.
>>> d = {}
>>> d["word1"] = 1
>>> d["word2"] = 2
>>> d
{'word1': 1, 'word2': 2}
>>> d["word2"] = 3
>>> d
{'word1': 1, 'word2': 3}
Dictionaries map a specific key to a specific value. If you want a single key
to correspond to multiple values, typically a list is used, and a defaultdict
comes in very handy:
>>> from collections import defaultdict
>>> d = defaultdict(list)
>>> d["word1"].append(1)
>>> d["word2"].append(2)
>>> d["word2"].append(3)
>>> d
defaultdict(<type 'list'>, {'word1': [1], 'word2': [2, 3]})
|
Python: Decimal part of a large float
Question: I'm trying to get the decimal part of `(pow(10, i) - 1)/23` for `0 < i < 50`.
I have tried
(pow(10, i) - 1)/23 % 1
in Python 3 but I get `0.0` for all values of `i` greater than 17.
How can I extract the decimal part of a large integer in Python?
Answer: To preserve precision, I'd probably use the
[fractions](http://docs.python.org/library/fractions.html) module:
>>> from fractions import Fraction
>>> Fraction(10)
Fraction(10, 1)
>>> Fraction(10)**50
Fraction(100000000000000000000000000000000000000000000000000, 1)
>>> Fraction(10)**50-1
Fraction(99999999999999999999999999999999999999999999999999, 1)
>>> (Fraction(10)**50-1)/23
Fraction(99999999999999999999999999999999999999999999999999, 23)
>>> ((Fraction(10)**50-1)/23) % 1
Fraction(5, 23)
>>> float(((Fraction(10)**50-1)/23) % 1)
0.21739130434782608
although using the [decimal](http://docs.python.org/library/decimal.html)
module would be another option.
update: wait, on second thought, the answer here is always going to be
`((10^n-1) % 23)/23`, so the above is significant overkill (although it does
scale up better to more complex problems). We can even take advantage of the
three-argument pow call:
>>> pow(10, 50, 23)
6
>>> pow(10, 50, 23) - 1
5
>>> (pow(10, 50, 23) - 1) % 23 # handle possible wraparound
5
>>> ((pow(10, 50, 23) - 1) % 23) / 23.0
0.21739130434782608
|
Surf missing in opencv 2.4 for python
Question: I'm trying to instantiate a SURF object in python using OpenCV as described
[here](http://docs.opencv.org/modules/nonfree/doc/feature_detection.html#surf)
but this happens:
>>> import cv2
>>> cv2.__version__
'2.4.0'
>>> cv2.SURF()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'SURF'
Does anyone know why this happens or if SURF is missing from the Python
version of OpenCV?
Answer: It is a regression which should be fixed in the next library update.
But SURF is not really absent. You still can access it via the generic
wrappers:
surf_detector = cv2.FeatureDetector_create("SURF")
surf_descriptor = cv2.DescriptorExtractor_create("SURF")
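A rough usage sketch with those wrappers (the image filename is a placeholder):
import cv2
img = cv2.imread('scene.jpg', 0)  # load some image as grayscale
detector = cv2.FeatureDetector_create("SURF")
extractor = cv2.DescriptorExtractor_create("SURF")
keypoints = detector.detect(img)
keypoints, descriptors = extractor.compute(img, keypoints)
print len(keypoints)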
* * *
**Update:** `cv2.SURF()` is restored in OpenCV 2.4.1
|
NameError: a global name 'RPyPException' is not defined
Question: 2ND question: Thanks so much Ben! It works! I got an Error 13 message saying I
couldn't make a temporary file in C:\Program Files, so I moved the ARSER folder
and put it under my user name. That took care of the Error 13, but now I get
NameError: a global name 'RPyPException' is not defined. Is this because I
moved the folder out of the Program Files folder where I have saved R, Python,
and rpy? Thanks!
* * *
1ST question: I am trying to analyze biorythm data with a program called ARSER
(http://bioinformatics.cau.edu.cn/ARSER/) and when I try to run it I get the
error:
File "C:\Program Files\ARSER\arser.py", line 9, in from rpy import * Import
Error: no module named rpy
I am running WINDOWS 7 and have downloaded:
1. Python(x,y) running Python version 2.7.2.3
2. windows patch for Python 2.7 (pywin32-217.win32-py2.7.exe)
3. R version 2.8.1
4. rpy version 2.2.3
Under the My Computer Advanced Options I changed the environmental variable
PATH to C:\Program Files\R\R-2.8.1\bin but this did not solve the above error.
The help instructions I was reading were from an older version of R so maybe
that's the problem?
I am new to all these programs and I appreciate any suggestions you have!
Thanks so much!
Answer: I suspect you need to change the `PYTHONPATH` environment variable to include
the directory containing `rpy`. Python knows where to search for modules when
you import something by using the `PYTHONPATH` environment variable, much as
the shell knows where to look for a program that you type the name of by using
the `PATH` environment variable.
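Equivalently, for a quick test you can extend the module search path from inside
the script just before the failing import (a sketch; the path is a placeholder for
wherever the rpy package actually lives on your machine):
import sys
# hypothetical location; point this at the directory containing the rpy package
sys.path.insert(0, r"C:\Python27\Lib\site-packages")
from rpy import *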
|
GNU time(1) reports wrong I/O count
Question: On Fedora 16, running time(1) on a small program that just does 10 writes of
1024 bytes to a file reports "24 outputs". I was expecting the I/O count to
be 10. Note that if I run strace on the program I can see the 10 write()
calls. So what is the I/O count reported by time(1)? Thanks a lot.
#!/usr/bin/python
import os
import pdb
SIZE_IO=1024
IONB=10
def test1(file):
#pdb.set_trace()
buffer= '\x01' * SIZE_IO
fd = os.open(file, os.O_CREAT|os.O_RDWR, 0777)
for ix in range(IONB):
len = os.write(fd, buffer)
print len
os.close(fd)
return 1
if __name__ == "__main__":
test1("ttt.txt")
print 'ok'
Answer: Isn't each print going to cause a write as well?
|
Python db2 install using easy install
Question: I want to install the Python DB2 package but I'm unable to install it.
I have installed easy_install and I'm able to successfully import it.
My easy_install location: c:/python27/lib/site-packages/
My DB2 egg location: c:/python27/ibm_db-1.0.5-py2.7-win32.egg
What would my installation command look like in the shell?
I tried the command below and it gives me an error:
>>> easy_install.py c:\python27\ibm_db-1.0.5-py2.7-win32.egg
Answer: try this:
C:\Python27\Scripts\easy_install -H None -f c:\python27\ibm_db-1.0.5-py2.7-win32.egg ibm_db
|
How does one override the __setitem__ method for (possibly multidimensional) arrays created via ctypes _fields_?
Question: I am using ctypes in Python 3.2.2 to encapsulate some C data structures. The
ultimate goal is to be able to have an object that wraps a C structure notice
when the structure's data contents have been modified.
Representative code:
from ctypes import *
class Comm(Structure):
def __init__(self):
self.attributes_updated = False
def __setattr__(self, name, value):
super(Comm, self).__setattr__('attributes_updated', True)
super(Comm, self).__setattr__(name, value)
class MyCStruct(Comm):
_fields_ = [('number', c_int),
('array', c_int*5)]
def __init__(self):
Comm.__init__(self)
This works great for any simple data attribute like 'number'.
>>> s = MyCStruct()
>>> s.attributes_updated
False
>>> s.value = 123
>>> s.attributes_updated
True
Since `__setattr__` is not invoked for accesses via index notation to the
`array` attribute, I would thus like to override the `__setitem__` attribute
for those members of the C struct that are arrays. Presumably, at that point I
would need to include a reference back to the containing object so that the
containing object's `attributes_updated` variable could be changed, but I've
not gotten to the point where I am able to trap accesses to array attributes
in the convenient way that I can trap accesses to simple attributes. Is there
a way to do this on the indexable objects that ctypes creates via the
`_fields_` variable? Is it possible to override `__setitem__` on `s.array`?
Might there be a better way to go about doing this?
Ideally, this would happen:
>>> s = MyCStruct()
>>> s.attributes_updated
False
>>> s.array[2] = 456
>>> s.attributes_updated
True
Edit for follow-up question:
How about a multidimensional array?
class MyCStruct(Comm):
_fields_ = [('number', c_int),
('array', (c_int*5)*2)]
I mistakenly expected the answer below, which works brilliantly for
single-dimensional arrays, to do the same for arbitrarily nested ones. There
should be a way to recursively spawn proxy objects to handle arrays with
more than one dimension, yes? The syntax escapes me.
Answer: I think a nice solution would be to return a proxy object instead of the
array, which can then handle the access to the array. Could be something like
this:
from ctypes import *
class ArrayProxy(object):
def __init__(self, array, struct):
self.array = array
self.struct = struct
def __setitem__(self, i, val):
self.array[i] = val
self.struct.attributes_updated = True
def __getitem__(self, i):
item = self.array[i]
if issubclass(type(item), Array):
# handle multidimensional arrays
return ArrayProxy(item, self.struct)
return item
class Comm(Structure):
def __init__(self):
self.attributes_updated = False
def __setattr__(self, name, value):
super(Comm, self).__setattr__('attributes_updated', True)
super(Comm, self).__setattr__(name, value)
def __getattribute__(self, name):
attr = super(Comm, self).__getattribute__(name)
if issubclass(type(attr), Array):
return ArrayProxy(attr, self)
return attr
class MyCStruct(Comm):
_fields_ = [('number', c_int),
('array', c_int*5),
('multiarray', c_int*2*1),]
def __init__(self):
Comm.__init__(self)
s = MyCStruct()
print s.array
# <__main__.ArrayProxy object at 0x1b1f3d0>
print s.attributes_updated
# False
s.array[0] = 1
print s.attributes_updated
# True
s2 = MyCStruct()
s2.multiarray[0][0] = 1
print s2.attributes_updated
# True
|
Getting a Blobstore key
Question: I am reading about the Blobstore in Google App Engine. The code below is from
the sample documentation. After the user selects a file to upload and clicks
Submit, how do I get the key into a javascript variable? I can show it on a
page, but I only want to keep it for later use. Obviously, I am new to Web
programming.
#!/usr/bin/env python
#
import os
import urllib
from google.appengine.ext import blobstore
from google.appengine.ext import webapp
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.ext.webapp.util import run_wsgi_app
class MainHandler(webapp.RequestHandler):
def get(self):
upload_url = blobstore.create_upload_url('/upload')
self.response.out.write('<html><body>')
self.response.out.write('<form action="%s" method="POST" enctype="multipart/form-data">' % upload_url)
self.response.out.write("""Upload File: <input type="file" name="file"><br> <input type="submit"
name="submit" value="Submit"> </form></body></html>""")
class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
def post(self):
upload_files = self.get_uploads('file') # 'file' is file upload field in the form
blob_info = upload_files[0]
self.response.out.write('<html><body>')
self.response.out.write(str(blob_info.key()))
self.response.out.write('</body><html>')
def main():
application = webapp.WSGIApplication(
[('/', MainHandler),
('/upload', UploadHandler),
], debug=True)
run_wsgi_app(application)
if __name__ == '__main__':
main()
Answer: You could do something like this:
self.response.out.write("""
<html>
<script>
var blobKey = "%s";
</script>
<body>
...
</body>
</html>""" % (blob_info.key(),))
|
else: syntax is incorrect
Question: I am a little new to Python and I am trying to write this script to cancel
print jobs over 1 MB (the line where it checks the size is set to 1 MB
just to make sure it is working). For some reason my last else statement keeps
saying it has invalid syntax. I checked to see if all parentheses were closed
and I could not find an unmatched pair. Could someone please tell me why it
says it is invalid? Also, can you take a look at my line 24 (fullname =
...grep...) to make sure the syntax on that is correct?
#! /usr/bin/python
import os
infile = open ('test.pl', 'r')
outfile = open('print.reportpython', 'w+')
newfile = infile.readlines()
newfile.pop(0)
count = 0
firstline = newfile[0]
splitline = firstline.split()
currentuser = splitline[1]
currentuser = str(currentuser)
for line in newfile:
newline = line.split()
names = newline[1]
size = int(newline[2])
names = str(names)
print names
if names is currentuser:
if size >= 1:
os.popen ("cancel lab01-10292")
fullname = os.popen("cat /etc/passwd |grep " + newline[1] + "cut -d':' -f5")
count += 1
print count
else:
print outfile.write ("(" + currentuser + ")")
print outfile.write (" ")
count = 0
currentuser = names
Answer: You do:
if foo:
    bar
baz
else:
    bomb
Which is wrong. All the lines between the `if` and its corresponding `else`
must be indented deeper than the `if` and `else`, like this:
if foo:
    bar
    baz
else:
    bomb
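As for the `fullname = ...` line the question also asks about: as written, the
shell string has no pipe between the grep pattern and `cut`, so `cut` never
actually runs as a separate command. It most likely needs to look like this:
fullname = os.popen("cat /etc/passwd | grep " + newline[1] + " | cut -d':' -f5")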
|
Constructing a regular expression for url in start_urls list in scrapy framework python
Question: I am very new to Scrapy and I have not used regular expressions before.
The following is my `spider.py` code:
class ExampleSpider(BaseSpider):
name = "test_code
allowed_domains = ["www.example.com"]
start_urls = [
"http://www.example.com/bookstore/new/1?filter=bookstore",
"http://www.example.com/bookstore/new/2?filter=bookstore",
"http://www.example.com/bookstore/new/3?filter=bookstore",
]
def parse(self, response):
hxs = HtmlXPathSelector(response)
Now if we look at `start_urls`, all three URLs are the same except that they
differ in the integer value `2?`, `3?` and so on, essentially without limit
depending on the URLs present on the site. I know that we can use CrawlSpider
and construct a regular expression for the URL like below:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
import re
class ExampleSpider(CrawlSpider):
name = 'example.com'
allowed_domains = ['example.com']
start_urls = [
"http://www.example.com/bookstore/new/1?filter=bookstore",
"http://www.example.com/bookstore/new/2?filter=bookstore",
"http://www.example.com/bookstore/new/3?filter=bookstore",
]
rules = (
Rule(SgmlLinkExtractor(allow=(........),))),
)
def parse(self, response):
hxs = HtmlXPathSelector(response)
Can you please guide me on how to construct a CrawlSpider Rule for the
above `start_urls` list?
Answer: If I understand you correctly, you want a lot of start URLs with a certain
pattern.
If so, you can override the
[BaseSpider.start_requests](http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spider.BaseSpider.start_requests)
method:
class ExampleSpider(BaseSpider):
name = "test_code"
allowed_domains = ["www.example.com"]
def start_requests(self):
for i in xrange(1000):
yield self.make_requests_from_url("http://www.example.com/bookstore/new/%d?filter=bookstore" % i)
...
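If you would rather keep the CrawlSpider approach and just fill in the allow
pattern, it could look roughly like this (a sketch; the callback name is a
placeholder, and note that a CrawlSpider should not override `parse()` itself):
rules = (
    Rule(SgmlLinkExtractor(allow=(r'/bookstore/new/\d+\?filter=bookstore', )),
         callback='parse_item', follow=True),
)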
|
pymongo installed but import fails
Question: CentOS 5.8 ships with Python 2.4.3. I installed pymongo using command: sudo
pip install pymongo (after installing pip with easy_install after installing
python-pip...typical CentOS, nothing works out of the box).
The install appears to work, I get the messages:
Successfully installed pymongo
Cleaning up...
Then, when I run import pymongo, I get this:
ImportError: No module named pymongo
I followed the standard pymongo install procedure so pymongo definitely
doesn't work on CentOS. Anybody know of a workaround to get this working? It
seems yet again, a hack is required to make CentOS support basic functionality
every other Linux distro supports out of the box....
Answer: Since you didn't specify you need to use the default CentOS Python install, I
would highly recommend using virtualenv to get the support you want. CentOS
requires that old version of Python, and we have to work around it at my
business as well.
Grab the latest Python, and build from source. Make sure any SSL devel
libraries are installed through Yum.
./configure --prefix=/home/user/custompython
make && make install
Grab virtualenv.py from
<https://raw.github.com/pypa/virtualenv/master/virtualenv.py> and run that
script from your custom python install
/home/user/custompython/bin/python virtualenv.py -p /home/user/custompython/bin/python /home/user/pythonENV
export PYTHON_HOME=/home/user/pythonENV/
export PATH=/home/user/pythonENV/bin/:$PATH
pip install pymongo
Of course, adding the path modifications to .bashrc is helpful. Now you can
install anything you want without worrying about the very old 2.4.3 that
CentOS ships with. This is the exact setup I have with our CentOS 5.8 system,
and it is very helpful.
|
Python: Idiomatic properties for structured data?
Question: I've got a bad smell in my code. Perhaps I just need to let it air out for a
bit, but right now it's bugging me.
I need to create three different input files to run three Radiative Transfer
Modeling (RTM) applications, so that I can compare their outputs. This process
will be repeated for thousands of sets of inputs, so I'm automating it with a
python script.
I'd like to store the input parameters as a generic python object that I can
pass to three other functions, who will each translate that general object
into the specific parameters needed to run the RTM software they are
responsible. I think this makes sense, but feel free to criticize my approach.
There are many possible input parameters for each piece of RTM software. Many
of them over-lap. Most of them are kept at sensible defaults, but should be
easily changed.
I started with a simple `dict`
config = {
day_of_year: 138,
time_of_day: 36000, #seconds
solar_azimuth_angle: 73, #degrees
solar_zenith_angle: 17, #degrees
...
}
There are a lot of parameters, and they can be cleanly categorized into
groups, so I thought of using `dict`s within the `dict`:
config = {
day_of_year: 138,
time_of_day: 36000, #seconds
solar: {
azimuth_angle: 73, #degrees
zenith_angle: 17, #degrees
...
},
...
}
I like that. But there are a lot of redundant properties. The solar azimuth
and zenith angles, for example, can be found if the other is known, so why
hard-code both? So I started looking into python's builtin
[`property`](http://docs.python.org/library/functions.html#property). That
lets me do nifty things with the data if I store it as object attributes:
class Configuration(object):
day_of_year = 138,
time_of_day = 36000, #seconds
solar_azimuth_angle = 73, #degrees
@property
def solar_zenith_angle(self):
return 90 - self.solar_azimuth_angle
...
config = Configuration()
But now I've lost the structure I had from the second `dict` example.
Note that some of the properties are less trivial than my `solar_zenith_angle`
example, and might require access to other attributes outside of the group of
attributes it is a part of. For example I can calculate `solar_azimuth_angle`
if I know the day of year, time of day, latitude, and longitude.
**What I'm looking for:**
A simple way to store configuration data whose values can all be accessed in a
uniform way, are nicely structured, and may exist either as attributes (real
values) or properties (calculated from other attributes).
**A possibility that is kind of boring:**
Store everything in the dict of dicts I outlined earlier, and having other
functions run over the object and calculate the calculatable values? This
doesn't sound fun. Or clean. To me it sounds messy and frustrating.
**An ugly one that works:**
After a long time trying different strategies and mostly getting no where, I
came up with one possible solution that seems to work:
**My classes:** (smells a bit func-y, er, funky. def-initely.)
class SubConfig(object):
"""
Store logical groupings of object attributes and properties.
The parent object must be passed to the constructor so that we can still
access the parent object's other attributes and properties. Useful if we
want to use them to compute a property in here.
"""
def __init__(self, parent, *args, **kwargs):
super(SubConfig, self).__init__(*args, **kwargs)
self.parent = parent
class Configuration(object):
"""
Some object which holds many attributes and properties.
Related configurations settings are grouped in SubConfig objects.
"""
def __init__(self, *args, **kwargs):
super(Configuration, self).__init__(*args, **kwargs)
self.root_config = 2
class _AConfigGroup(SubConfig):
sub_config = 3
@property
def sub_property(self):
return self.sub_config * self.parent.root_config
self.group = _AConfigGroup(self) # Stinky?!
**How I can use them:** (works as I would like)
config = Configuration()
# Inspect the state of the attributes and properties.
print("\nInitial configuration state:")
print("config.rootconfig: %s" % config.root_config)
print("config.group.sub_config: %s" % config.group.sub_config)
print("config.group.sub_property: %s (calculated)" % config.group.sub_property)
# Inspect whether the properties compute the correct value after we alter
# some attributes.
config.root_config = 4
config.group.sub_config = 5
print("\nState after modifications:")
print("config.rootconfig: %s" % config.root_config)
print("config.group.sub_config: %s" % config.group.sub_config)
print("config.group.sub_property: %s (calculated)" % config.group.sub_property)
**The behavior:** (output of execution of all of the above code, as expected)
Initial configuration state:
config.rootconfig: 2
config.group.sub_config: 3
config.group.sub_property: 6 (calculated)
State after modifications:
config.rootconfig: 4
config.group.sub_config: 5
config.group.sub_property: 20 (calculated)
**Why I don't like it:**
Storing configuration data in class definitions inside of the main object's
`__init__()` doesn't feel elegant. Especially having to instantiate them
immediately after definition like that. Ugh. I can deal with that for the
parent class, sure, but doing it in a constructor...
Storing the same classes outside the main `Configuration` object doesn't feel
elegant either, since properties in the inner classes may depend on the
attributes of `Configuration` (or their siblings inside it).
I could deal with defining the functions outside of everything, so inside
having things like
@property
def solar_zenith_angle(self):
return calculate_zenith(self.solar_azimuth_angle)
but I can't figure out how to do something like
@property
def solar.zenith_angle(self):
return calculate_zenith(self.solar.azimuth_angle)
(when I try to be clever about it I always run into `<property object at
0xXXXXX>`)
So what is the right way to go about this? Am I missing something basic or
taking a very wrong approach? Does anyone know a clever solution?
Help! My python code isn't beautiful! I must be doing something wrong!
Answer: Phil,
Your hesitation about func-y config is very familiar to me :)
I suggest you store your config not as a Python file but as a structured
data file. I personally prefer YAML because it looks clean, just as you
designed in the very beginning. Of course, you will need to provide formulas
for the auto calculated properties, but it is not too bad unless you put too
much code. Here is my implementation using PyYAML lib.
The config file (config.yml):
day_of_year: 138
time_of_day: 36000 # seconds
solar:
azimuth_angle: 73 # degrees
zenith_angle: !property 90 - self.azimuth_angle
The code:
import yaml
yaml.add_constructor("tag:yaml.org,2002:map", lambda loader, node:
type("Config", (object,), loader.construct_mapping(node))())
yaml.add_constructor("!property", lambda loader, node:
property(eval("lambda self: " + loader.construct_scalar(node))))
config = yaml.load(open("config.yml"))
print "LOADED config.yml"
print "config.day_of_year:", config.day_of_year
print "config.time_of_day:", config.time_of_day
print "config.solar.azimuth_angle:", config.solar.azimuth_angle
print "config.solar.zenith_angle:", config.solar.zenith_angle, "(calculated)"
print
config.solar.azimuth_angle = 65
print "CHANGED config.solar.azimuth_angle = 65"
print "config.solar.zenith_angle:", config.solar.zenith_angle, "(calculated)"
The output:
LOADED config.yml
config.day_of_year: 138
config.time_of_day: 36000
config.solar.azimuth_angle: 73
config.solar.zenith_angle: 17 (calculated)
CHANGED config.solar.azimuth_angle = 65
config.solar.zenith_angle: 25 (calculated)
The config can be of any depth and properties can use any subgroup values. Try
this for example:
a: 1
b:
c: 3
d: some text
e: true
f:
g: 7.01
x: !property self.a + self.b.c + self.b.f.g
Assuming you already loaded this config:
>>> config
<__main__.Config object at 0xbd0d50>
>>> config.a
1
>>> config.b
<__main__.Config object at 0xbd3bd0>
>>> config.b.c
3
>>> config.b.d
'some text'
>>> config.b.e
True
>>> config.b.f
<__main__.Config object at 0xbd3c90>
>>> config.b.f.g
7.01
>>> config.x
11.01
>>> config.b.f.g = 1000
>>> config.x
1004
**UPDATE**
Let us have a property config.b.x which uses both self, parent and subgroup
attributes in its formula:
a: 1
b:
x: !property self.parent.a + self.c + self.d.e
c: 3
d:
e: 5
Then we just need to add a reference to parent in subgroups:
import yaml
def construct_config(loader, node):
attrs = loader.construct_mapping(node)
config = type("Config", (object,), attrs)()
for k, v in attrs.iteritems():
if v.__class__.__name__ == "Config":
setattr(v, "parent", config)
return config
yaml.add_constructor("tag:yaml.org,2002:map", construct_config)
yaml.add_constructor("!property", lambda loader, node:
property(eval("lambda self: " + loader.construct_scalar(node))))
config = yaml.load(open("config.yml"))
And let's see how it works:
>>> config.a
1
>>> config.b.c
3
>>> config.b.d.e
5
>>> config.b.parent == config
True
>>> config.b.d.parent == config.b
True
>>> config.b.x
9
>>> config.a = 1000
>>> config.b.x
1008
|
Compiling WxPython with Py2EXE and IDLE-X
Question: At the moment I am learning Python and I have been experimenting with compiling
Python code.
The problem I am having is that once the script is compiled, it does not respond.
I learnt that an IDLE-X extension can help fix this problem, but the same
error occurs after compilation. I am running Windows 7 Ultimate 64Bit,
wxPython 2.7, python 2.7 and py2exe for 2.7. ->
import wx
class aero(wx.Frame):
def __init__(self,parent,id):
wx.Frame.__init__(self,parent,id,'Aero v1.0b', size=(500,500))
panel=wx.Panel(self)
status=self.CreateStatusBar()
menubar=wx.MenuBar()
first=wx.Menu()
second=wx.Menu()
first.Append(wx.NewId(),"New Window","This is a new window" )
first.Append(wx.NewId(),"Open...","This will open a new window" )
menubar.Append(first,"File")
menubar.Append(second,"Edit")
self.SetMenuBar(menubar)
if __name__=='__main__':
app=wx.PySimpleApp()
frame=aero(parent=None,id=-1)
frame.Show()
app.MainLoop
I got some help from this thread: [pythonw.exe is not
responding](http://stackoverflow.com/questions/9987419/pythonw-exe-is-not-
responding)
and I would like to learn why this happens.
Thank you very much.
Answer: Since py2exe isn't getting updated much any more, I would recommend trying
PyInstaller or cx_freeze instead.
|
python script that captures an image and compares it to another image
Question: Basically, I want to automate something. I would capture a 100x50 picture of a
certain button, and have a script that takes a picture of the same area with
the same size, and then compares the two pictures to see if they are
different. If they are different, it would do nothing, if they are similar +/-
a certain amount, it would then click it.
I have the screen capture kind of worked out, I just don't know how to compare
them with PIL. I have done a lot of googling without success.
Answer: You can use `pylab.imread` to easily read an image into a NumPy array:
In [3]: import scipy, pylab
In [4]: x = pylab.imread('example.jpg')
In [5]: x.shape
Out[5]: (480, 640, 3)
You can view the image, or any subset thereof:
In [6]: pylab.imshow(x)
Out[6]: <matplotlib.image.AxesImage at 0x98e564c>
In [7]: pylab.show()
If you wanted to compare two 8-by-8 blocks (say, the top left block of the red
and blue layers), you could compute the mean squared error:
In [8]: x[:8,:8,0]
Out[8]:
array([[147, 143, 146, 144, 146, 148, 146, 149],
[145, 142, 146, 145, 147, 149, 148, 151],
[143, 141, 146, 145, 147, 147, 148, 150],
[143, 143, 146, 146, 146, 145, 147, 148],
[147, 147, 147, 148, 147, 145, 146, 146],
[146, 147, 145, 147, 148, 145, 147, 146],
[146, 147, 144, 147, 147, 144, 146, 144],
[147, 148, 144, 147, 147, 144, 146, 144]], dtype=uint8)
In [9]: x[:8,:8,1]
Out[9]:
array([[125, 121, 122, 120, 118, 120, 116, 120],
[123, 120, 122, 122, 119, 121, 118, 122],
[122, 120, 123, 122, 120, 120, 118, 121],
[122, 122, 123, 122, 120, 118, 117, 119],
[124, 123, 123, 124, 121, 119, 119, 119],
[122, 123, 120, 122, 121, 119, 119, 119],
[121, 122, 116, 119, 119, 117, 119, 117],
[122, 122, 115, 118, 119, 116, 119, 117]], dtype=uint8)
In [10]: def mse(x, y):
....: return scipy.mean((x.astype(float)-y)**2)
In [11]: mse(x[:8,:8,0], x[:8,:8,1])
Out[11]: 676.0625
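
If you would rather stay with PIL instead of NumPy, here is a minimal sketch of
the same idea (the file names and the threshold of 10 are hypothetical, and it
assumes both captures have identical dimensions):

    from PIL import Image, ImageChops
    import math

    def rms_difference(path_a, path_b):
        # Root-mean-square of the per-pixel differences; 0 means the images are identical
        a = Image.open(path_a).convert('RGB')
        b = Image.open(path_b).convert('RGB')
        diff = ImageChops.difference(a, b)
        hist = diff.histogram()
        # each band contributes 256 bins; the bin index (mod 256) is the difference value
        total = sum(count * ((idx % 256) ** 2) for idx, count in enumerate(hist))
        return math.sqrt(total / float(a.size[0] * a.size[1] * 3))

    if rms_difference('reference_button.png', 'current_capture.png') < 10:
        print "images are similar enough -- go ahead and click"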
|
Moving Django 1.3 to new server
Question: I'm trying to move a website made in Django 1.3.
The new server is set up like the previous one (I think so).
After installing Django, I moved all the files to the new server and swapped in
the settings files from the previous server. I changed the file locations in
settings, so right now they all point to the new server's locations.
Also, some modules were missing, which I installed, and I no longer get errors
about missing Django modules.
When I try to make the site visible on the internet, I get the following errors:
[root@575283 somod]# python manage.py runserver 0.0.0.0:8000
Traceback (most recent call last):
File "manage.py", line 13, in <module>
execute_manager(settings)
File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 209, in execute
translation.activate('en-us')
File "/usr/lib/python2.6/site-packages/django/utils/translation/__init__.py", line 100, in activate
return _trans.activate(language)
File "/usr/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 202, in activate
_active.value = translation(language)
File "/usr/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 185, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/usr/lib/python2.6/site-packages/django/utils/translation/trans_real.py", line 162, in _fetch
app = import_module(appname)
File "/usr/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/__init__.py", line 1, in <module>
from sorl.thumbnail.fields import ImageField
File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/fields.py", line 2, in <module>
from django.db import models
File "/usr/lib/python2.6/site-packages/django/db/__init__.py", line 78, in <module>
connection = connections[DEFAULT_DB_ALIAS]
File "/usr/lib/python2.6/site-packages/django/db/utils.py", line 93, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/lib/python2.6/site-packages/django/db/utils.py", line 33, in load_backend
return import_module('.base', backend_name)
File "/usr/lib/python2.6/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 14, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb
I'm guessing I'm missing the MySQLdb module; any tips?
Also, what about the other output lines?
Old server has Django 1.3
I'm using CentOS 6, Apache 2, Django 1.3.1, mod_wsgi, Python 2.6.6
Answer: You need to install [MySQL-python](http://pypi.python.org/pypi/MySQL-python).
You can install it using [pip](http://www.pip-
installer.org/en/latest/index.html):
`sudo pip install MySQL-python`
If you need help on how to run Django with Apache, see [the official
docs](https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/modwsgi/).
If they are confusing, [keep
Googling](https://www.google.se/webhp?sourceid=chrome-
instant&ie=UTF-8&ion=1#hl=sv&sclient=psy-
ab&q=how%20to%20server%20django%20with%20apache%20wsgi&oq=how%20to%20server%20django%20with%20apache%20wsgi&aq=f&aqi=&aql=&gs_l=serp.3...2231.2809.0.2872.5.5.0.0.0.0.84.258.5.5.0...0.0.G58deIDVKsk&pbx=1&bav=on.2,or.r_gc.r_pw.r_cp.r_qf.,cf.osb&fp=393b2ac78977d9cd&biw=2504&bih=1346&ion=1).
Good luck :)
|
Plot a cube of 3D intensity data
Question: I have k cubes of `(n,n,n)` intensity values and I would like to plot them.
I consider them as diffusion tensors in diffusion MRI and I would like to
visualize them (maybe as ellipsoids) and then try to "align" in some way. At
present I simply plot for each cube its n "slice" `(n,n)`.
Is there any python module for this task?
Answer: You can use mayavi2 for this. Since I don't have a representation of your
data, I gave a minimal working example with some random spheres over a grid
below:
import numpy
import mayavi.mlab as mlab
# Create some random data
N = 20
x, y, z = numpy.mgrid[-5:5:20j, -5:5:20j, -5:5:20j]
val = numpy.random.random(z.shape)
# Plot and show in mayavi2
pts = mlab.points3d(x, y, z, val, scale_factor=.5,transparent=True)
mlab.show()

|
Parsing emails problems
Question: I'm having problems with decoding emails that I'm fetching.
The script should log on to an email account, get the unread messages and then
later on store them in a database. I only want the actual text from the email
but none of the html stuff.
I have found many examples but none of them seems to work. I have tried
[this](http://stackoverflow.com/questions/787739/python-email-get-payload-
decode-fails-when-hitting-equal-sign) and
[this](http://stackoverflow.com/questions/7331351/python-email-header-
decoding-utf-8) and some more I have found.
The Code I have now:
import imaplib, sys, email
import email.parser
myparser = email.parser.Parser()
conn = imaplib.IMAP4_SSL(host='mail.something.com')
retcode, capabilities = conn.login('username', 'XXXXX')
conn.select('Inbox', readonly = 1) # Select inbox as read-only
retcode, messages = conn.search(None, '(UNSEEN)')
if retcode == 'OK':
for message in messages[0].split(' '):
if message == '':
continue
ret, data = conn.fetch(message,'(RFC822)')
msg = email.message_from_string(data[0][1])
# rootMessage = myparser.parse(data[0][1])
# print 'Message %s\n%s\n' % (message, rootMessage)
print msg
print '---------------------------------------------------------------'
conn.close()
As you can see there is no decoding in this because everything that I have
tried has failed.
I am very new to Python, so if someone could steer me in the right direction
I would really appreciate it. A hack would be all right, as it's not a
mission-critical script, but a generic solution would be best.
-G
UPDATE:
There is no error, the problem is that the output is not decoded correctly.
Example input:
This is a test message.
Gísli
Output:
This is a test message.
G=EDsli
Answer: This can help:
import quopri
print quopri.decodestring(msg).decode('utf8')
Or this:
import base64
body = base64.b64decode(msg)
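
A more general sketch is to let the `email` module undo the transfer encoding
for you with `get_payload(decode=True)` and then decode whatever charset each
part declares. This slots into the fetch loop from the question, so `data` is
assumed to come from `conn.fetch`:

    import email

    msg = email.message_from_string(data[0][1])
    body_parts = []
    for part in msg.walk():
        if part.get_content_type() == 'text/plain':
            raw = part.get_payload(decode=True)          # undoes quoted-printable / base64
            charset = part.get_content_charset() or 'utf-8'
            body_parts.append(raw.decode(charset, 'replace'))
    body = u'\n'.join(body_parts)
    print body.encode('utf-8')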
|
Looping over weekday in python time object
Question: I have a dataset of drivers' travel diaries. For each trip there is an
associated start time, end time and day of week in a csv file. There are no
dates associated with the trips.
I have now got the data into python where each start time and end time has the
weekday attached to it like so:
time.struct_time(tm_year=1900, tm_mon=1, tm_mday=1, tm_hour=23,
tm_min=45, tm_sec=0, tm_wday=0, tm_yday=1, tm_isdst=-1)
print journey['BeginTime'][2].tm_wday, journey['BeginTime'][2].tm_hour
Which returns 0 for a Monday and 23 for the hour.
There's 11,000 of these trips and what I want to get is a weekly profile of
the number of cars which are driving based on time of day.
This can be inferred by counting the number of trips that are between their
respective ['BeginTime'] and ['EndTime'] interval over a specified time
interval. A five minute interval is sufficient as the data is to the nearest
five minutes.
Is there an elegant python way to do this? Something like:
    for fiveMinutes in Week:
        count = 0
        for trip in range(len(journey['BeginTime'])):
            if (journey['BeginTime'][trip] == fiveMinutes
                    or (journey['BeginTime'][trip] < fiveMinutes
                        and journey['EndTime'][trip] > fiveMinutes)):
                count = count + 1
        carCount[fiveMinutes] = count
Answer: In case this helps, here is an idea ...
from datetime import datetime, timedelta
# This does not check for crossing from Sunday to Monday
def convert_dt(start_dt, journey):
begin_weekday, begin_hour, begin_minute = journey[0]
end_weekday, end_hour, end_minute = journey[1]
begin_dt = start_dt + timedelta(days=begin_weekday)
begin_dt += timedelta(hours=begin_hour, minutes=begin_minute)
end_dt = start_dt + timedelta(days=end_weekday)
end_dt += timedelta(hours=end_hour,minutes=end_minute)
return (begin_dt, end_dt)
def get_slot_journeys(start_dt, journeys):
next_dt = start_dt
slot_count = 60/5 * 24 * 7
slot_dict = {}
journey_dts = []
#convert journey begin and end to datetimes
for index in range(len(journeys['begin_weekday'])):
next_journey = [(journeys['begin_weekday'][index],
journeys['begin_hour'][index],
journeys['begin_minute'][index],),
(journeys['end_weekday'][index],
journeys['end_hour'][index],
journeys['end_minute'][index],)
]
journey_dts.append(convert_dt(start_dt, next_journey))
for slot in range(slot_count):
slot_dict[next_dt] = 0
for journey_start, journey_end in journey_dts:
if next_dt >= journey_start and next_dt <= journey_end:
slot_dict[next_dt] = slot_dict[next_dt] + 1
next_dt += timedelta(minutes=(5))
return slot_dict
if __name__ == "__main__":
start_dt = datetime(2012, 1, 2, 0, 0)
journeys = {'begin_weekday': [0, 0],
'begin_hour': [14, 18],
'begin_minute': [20, 30],
'end_weekday': [0, 1],
'end_hour': [19, 12],
'end_minute': [15, 55],
}
slot_dict = get_slot_journeys(start_dt, journeys)
slot_keys = slot_dict.keys()
slot_keys.sort()
for key in slot_keys:
if slot_dict[key]:
print key, slot_dict[key]
|
TypeError: 'InMemoryUploadedFile' object is not subscriptable
Question: I have a Google Appengine Project using Python2.7 and Django1.2 on Eclipse,
that allows the user to use a form to upload a picture, resize it, and store
it as a BLOB field.
I added a breakpoint where I indicated below, and saw "file['content']"
showing a value " TypeError: 'InMemoryUploadedFile' object is not
subscriptable" in the Expressions view.
When I step into or over this line, it jumps to the error handler.
Can someone please advise how I can fix this problem? Thanks in advance!
if req.method == 'POST':
try:
u_form = UserInfoForm(req.POST)
if not u_form.is_valid():
return err_page(_('Error'))
u = coffeeuser.CoffeeUser.all().filter('user =', user_info).get()
u.nickname = user_info.nickname()
u.realname = req.POST.get('real_name')
u.phone = req.POST.get('phone')
u.address = req.POST.get('address')
if req.FILES.get('photo_file'):
file = req.FILES.get('photo_file')
img = images.Image(file['content']) <<<Breakpoint...Error occurs here
img.resize(width=50, height=50)
resized_img = img.execute_transforms(output_encoding=images.JPEG)
u.photo_file = db.Blob(resized_img)
u.put()
return HttpResponseRedirect('/user/')
except Exception, x:
return err_page(_('Error'))
And here is the dump of the Console window as this happens. I don't see any
error messages here.
INFO 2012-05-26 07:34:21,114 dev_appserver.py:2891] "GET /favicon.ico
HTTP/1.1" 404 - DEBUG 2012-05-26 07:35:15,960 dev_appserver.py:656] Matched
"/user/" to CGI dispatcher with path main.py DEBUG 2012-05-26 07:35:16,319
dev_appserver_import_hook.py:1246] Enabling PIL: ['_imaging', '_imagingcms',
'_imagingft', '_imagingmath'] DEBUG 2012-05-26 07:35:16,322
dev_appserver_import_hook.py:1246] Enabling django: [] DEBUG 2012-05-26
07:35:16,322 dev_appserver.py:1624] Executing CGI with env: {'HTTP_REFERER':
'http://localhost:8080/user/', 'REQUEST_ID_HASH': 'C1DFD96E',
'SERVER_SOFTWARE': 'Development/1.0', 'SCRIPT_NAME': '', 'REQUEST_METHOD':
'POST', 'PATH_INFO': '/user/', 'HTTP_ORIGIN': 'http://localhost:8080',
'SERVER_PROTOCOL': 'HTTP/1.0', 'QUERY_STRING': '', 'CONTENT_LENGTH': '927730',
'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', 'APPENGINE_RUNTIME':
'python27', 'TZ': 'UTC', 'HTTP_COOKIE': 'RememberMe=YPD/ztDwsHCs3J9cPG5c+g==;
dev_appserver_login="[email protected]:False:185804764220139124118";
sessionid=2b5fc41e4c0332b1161a002ae12e616b;
csrftoken=05e24dcb62093082dc1fafe66c0a6dbb', 'SERVER_NAME': 'localhost',
'REMOTE_ADDR': '127.0.0.1', 'SDK_VERSION': '1.6.5', 'PATH_TRANSLATED':
'C:\\_dev\eclipse-work\gae\pydev5\src\main.py', 'SERVER_PORT': '8080',
'CONTENT_TYPE': 'multipart/form-data;
boundary=----WebKitFormBoundaryC4BfGc98AzYhJTQD', 'CURRENT_VERSION_ID': '1.1',
'USER_ORGANIZATION': '', 'USER_ID': '185804764220139124118',
'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.19
(KHTML, like Gecko) Chrome/18.0.1025.168 Safari/535.19', 'HTTP_HOST':
'localhost:8080', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_CACHE_CONTROL': 'max-
age=0', 'USER_EMAIL': '[email protected]', 'HTTP_ACCEPT':
'text/html,application/xhtml+xml,application/xml;q=0.9,_/_ ;q=0.8',
'APPLICATION_ID': 'dev~quizoncloud', 'GATEWAY_INTERFACE': 'CGI/1.1',
'HTTP_ACCEPT_LANGUAGE': 'en-US,en;q=0.8', 'AUTH_DOMAIN': 'gmail.com',
'_AH_ENCODED_SCRIPT_NAME': '/user/'} DEBUG 2012-05-26 07:36:19,815
datastore_stub_index.py:181] No need to update index.yaml
Answer: `file` is not a dictionary, so you can't do key lookup like that. Perhaps you
mean `file.content`?
Although I don't think the object has a `content` attribute either - see [the
documentation](https://docs.djangoproject.com/en/1.3/topics/http/file-
uploads/#django.core.files.uploadedfile.UploadedFile) for UploadedFile
objects. Maybe you meant `file.read()`?
(Also, don't call your variable `file` \- that hides the built-in `file`
function).
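
For example, that part of the view could look roughly like this (a sketch
reusing the App Engine `images` calls from the question):

    uploaded = req.FILES.get('photo_file')
    if uploaded:
        img = images.Image(uploaded.read())   # pass the raw bytes, not a dict lookup
        img.resize(width=50, height=50)
        resized_img = img.execute_transforms(output_encoding=images.JPEG)
        u.photo_file = db.Blob(resized_img)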
|
wordpress with python on proxy server
Question: This is code for posting on a blog. It is my first try. I don't know what
the error in it is. I am using a proxy server and the error I'm getting is
"connection to server failed".
Can anyone help me out please? :/
import wordpresslib
# dummy data to be on safe side
data = "Post content, just ensuring data is not empty"
url='http://agneesa.wordpress.com/wordpress/xmlrpc.php'
# insert correct username and password
wp=wordpresslib.WordPressClient(url,'agnsa','pan@13579')
wp.selectBlog(0)
post=wordpresslib.WordPressPost()
post.title='try'
post.description=data
idPost=wp.newPost(post,True)
Here is the traceback:
Traceback (most recent call last):
File "C:\Python27\Lib\example.py", line 34, in <module>
post.categories = (wp.getCategoryIdFromName('Python'),)
File "C:\Python27\Lib\wordpresslib.py", line 332, in getCategoryIdFromName
for c in self.getCategoryList():
File "C:\Python27\Lib\wordpresslib.py", line 321, in getCategoryList
self.user, self.password)
File "C:\Python27\Lib\xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "C:\Python27\Lib\xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "C:\Python27\Lib\xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "C:\Python27\Lib\xmlrpclib.py", line 1292, in single_request
self.send_content(h, request_body)
File "C:\Python27\Lib\xmlrpclib.py", line 1439, in send_content
connection.endheaders(request_body)
File "C:\Python27\Lib\httplib.py", line 954, in endheaders
self._send_output(message_body)
File "C:\Python27\Lib\httplib.py", line 814, in _send_output
self.send(msg)
File "C:\Python27\Lib\httplib.py", line 776, in send
self.connect()
File "C:\Python27\Lib\httplib.py", line 757, in connect
self.timeout, self.source_address)
File "socket.py", line 571, in create_connection
raise err
error: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Answer: From the looks of your site, the url you posted returns a 404 (not actually
there). However, this does seem ready to receive POST requests:
<http://agneesa.wordpress.com/xmlrpc.php>
I suggest you try checking that URL for accuracy.
This is what I get when I try your code with your original URL:
xmlrpclib.ProtocolError: <ProtocolError for \
agneesa.wordpress.com/wordpress/xmlrpc.php: 404 Not Found>
This is what I get when I try it with the modified URL:
wordpresslib.WordPressException: \
<WordPressException 403: 'Bad login/pass combination.'>
... obviously because that's not your real account info. In a nutshell, it's
possible your proxy could also be contributing to the problem if it's not set up
to properly forward the request, but without us knowing the specifics of your
proxy config, there is no way to know for sure.
|
When I enter the django shell, why does it 'freeze' the database
Question: Why does the database remain static when a user enters the django shell via
`python manage.py shell`? For example:
>>> from userprofile.models import UserProfile
>>> up=UserProfile.objects.get(id=4)
>>> up.get_jobs_applied_for()
[<JobApplication: david - Editor>, <JobApplication: david - Assistant Director>]
# delete entries in the mysql database
>>> up.get_jobs_applied_for()
[<JobApplication: david - Editor>, <JobApplication: david - Assistant Director>]
# but the results do not reflect that
And the method being called:
# in `UserProfile` class
def get_jobs_applied_for(self):
jobs_applied_for = self.jobapplication_set.order_by('-timestamp')
return jobs_applied_for
Why doesn't it query the db in real-time?
Answer: As Matthew Schinckel asked, where and when does the deletion occur? If it
occurs in another process, before the second `>>> up.get_jobs_applied_for()`,
and you're using MySQL w/ isolation level set to [`REPEATABLE
READ`](http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html), the MySQL
DB would provide you an earlier snapshot before the deletion. (Providing that
the code in your question are running in a whole transaction, for example in
managed transaction or on some MySQL connection pool)
Also, you could use `django.db.connection.queries` between lines to confirm
whether Django actually tries to query the DB.
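
For example, a quick check from the shell (note that `connection.queries` is
only populated when `DEBUG = True` in settings):

    from django.db import connection, reset_queries

    up.get_jobs_applied_for()
    print connection.queries[-1:]   # shows whether a fresh SELECT was actually issued
    reset_queries()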
|
PS3 controller driver -> uinput-> python? somehow?
Question: I'm trying to read from a PS3 controller in python on Ubuntu and I'm not
having much luck. I started with the ps3joy driver from Willow Garage
(http://www.ros.org/wiki/ps3joy) which supposedly publishes all the important
bits of the PS3 controller to something I had never heard of called "uinput".
Apparently it's a linux feature that allows userspace drivers to provide
system events. ...Why the WG driver requires root access given that it's
supposedly a userspace driver is beyond me, but that's not my question.
Anyway, the current state of me trying to get it to work is that I've got the
driver working, and I've verified that it responds to button presses on the
controller, but I don't know how to pull any of that data out so I can use it.
My first guess was to use pygame to (hopefully) read from /dev/uinput (which
I'm pretty sure is where the driver sends the data):
from pygame import joystick
if not joystick.get_init():
joystick.init()
js = joystick.Joystick(0) # there is only one joystick... even if the driver isn't running(!)
js.init()
print js.get_numbuttons() # perhaps coincidentally correctly prints 17 which is the number of buttons on a PS3 controller
for i in range(js.get_numaxes()):
print js.get_axis(i) # always prints 0, no matter what I'm doing with the controller
but it didn't work. The most telling part of the problem is that it does the
same thing if I don't have the WG driver running at all.
I'm sure this is something easy, that I'm just not reading the right
information, but googling has not helped me find what the right information is
and I'm getting tired and desperate.
Answer: You don't need the driver. Assuming the controller exposes itself as a HID,
you can use the [event subsystem](http://www.linuxjournal.com/article/6429) to
read controller events directly from the device.
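
For example, here is a rough sketch of reading raw input events. The device node
is a guess (check `/proc/bus/input/devices` or `ls /dev/input/` for the right
one, and you may need the appropriate permissions), and the struct layout
matches the kernel's `input_event` on a typical Linux box:

    import struct

    # struct input_event { struct timeval time; unsigned short type, code; int value; }
    EVENT_FORMAT = 'llHHi'
    EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

    dev = open('/dev/input/event3', 'rb')   # hypothetical node for the PS3 controller
    while True:
        data = dev.read(EVENT_SIZE)
        tv_sec, tv_usec, ev_type, code, value = struct.unpack(EVENT_FORMAT, data)
        if ev_type != 0:                     # skip EV_SYN synchronisation events
            print ev_type, code, value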
|
Log everything printed into a file
Question: I would like to create a function that keeps a record of every `print`
command, storing each command's string into a new line in a file.
def log(line):
with open('file.txt', "a") as f:
f.write('\n' + line)
This is what I have, but is there any way to do what I said using Python?
Answer: Try replacing `stdout` with a custom class:
import sys
class LoggedStdout():
def __init__(self, filename = None):
self.filename = filename
def write(self, text):
sys.__stdout__.write(text)
if not self.filename is None:
self.log(text)
def log(self, line):
with open(self.filename, "a") as f:
f.write('\n' + line)
sys.stdout = LoggedStdout('file.txt')
print 'Hello world!'
This would affect not only `print`, but also any other function that writes
to `stdout`, which is often exactly what you want.
For production-mode logging it's much better to use something like [`logging`
module](http://docs.python.org/library/logging.html), rather than home-made
hooks over standard IO streams.
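
For example, a minimal sketch with `logging` (the file name is just an assumption):

    import logging

    logging.basicConfig(filename='file.txt', level=logging.INFO,
                        format='%(asctime)s %(message)s')

    logging.info('Hello world!')   # appended to file.txt instead of being printed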
|
Julia's Python performance example in pypy
Question: [Julia](http://julialang.org/) is a new statistical programming language that
claims significantly better performance than competing languages. I'm trying
to verify this. Julia has a performance test written in Python:
<https://github.com/JuliaLang/julia/blob/master/test/perf/perf.py>
I can't get it to work with pypy. Perhaps this is due to numpypy
incompatibilities with numpy, but I'm not getting far enough to determine
that. I followed the ImportError advice `"...or just write 'import numpypy'
first in your program..."` but I get another ImportError: `"No module named
numpy.linalg"`
I have near zero experience with Python and I'm looking for a complete
solution that I can run. The benefit of getting this to work is that we can we
have a apples-to-apples (jit lang-to-jit lang) comparison.
Answer: 
There are 4 tests in pure Python on the Julia git repo (perf.py). Here, I ran, on the
same computer, **perf.py** (only the pure Python tests) and **perf.pl** for an
apples-to-apples comparison. I'm a little worried about the Python/PyPy timings :/
And... why
## fibonacci ##
def fib(n):
if n<2:
return n
return fib(n-1)+fib(n-2)
is slower in PyPy than in Python?
* * *
I posted this question at <https://bugs.pypy.org/issue1344> [PyPy slower in
recursion than Python 2.7, Python 3.2 and Julia] and got the following answer:
> This is a situation where the warmup time is very significant (it tries to
> inline all the recursion), but once you warm it up it's actually very fast.
So, I ran the test with different values of **n** for **fib(n)**. Indeed, PyPy
becomes faster than Python for n > 30, but in recursion it is still slower than Julia:
[ In **bold**, the faster Python implementation ]

* * *
**Because they are implemented with recursion, quicksort and fib are slower in
PyPy.**
**PyPy looks to have the same performance as Julia.**
|
why is xrange able to go back to beginning in Python?
Question: I've encountered this code from [Most pythonic way of counting matching
elements in something
iterable](http://stackoverflow.com/questions/157039/most-pythonic-way-of-
counting-matching-elements-in-something-iterable)
r = xrange(1, 10)
print sum(1 for v in r if v % 2 == 0) # 4
print sum(1 for v in r if v % 3 == 0) # 3
r is iterated once, and then it's iterated again. I thought that once an
iterator is consumed it's over and it should not be iterable again.
Generator expressions can be iterated only once:
r = (7 * i for i in xrange(1, 10))
print sum(1 for v in r if v % 2 == 0) # 4
print sum(1 for v in r if v % 3 == 0) # 0
enumerate(L) too:
r = enumerate(mylist)
and file object too:
f = open(myfilename, 'r')
Why does xrange behave differently?
Answer: Because `xrange` does not return a generator. It returns an [xrange
object](http://docs.python.org/library/functions.html#xrange).
>>> type(xrange(10))
<type 'xrange'>
In addition to repeated iteration, `xrange` objects support other things that
generators don't -- like indexing:
>>> xrange(10)[5]
5
They also have a length:
>>> len(xrange(10))
10
And they can be reversed:
>>> list(reversed(xrange(10)))
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
In short, `xrange` objects implement the full
[sequence](http://docs.python.org/glossary.html#term-sequence)
[interface](http://docs.python.org/library/collections.html#collections-
abstract-base-classes):
>>> import collections
>>> isinstance(xrange(10), collections.Sequence)
True
They just do it without using up a lot of memory.
Note also that in Python 3, the `range` object returned by `range` has all the
same properties.
|
How to execute a file that requires being in the same directory?
Question: I have a python script that needs to execute a `.jar` file that is located in
another directory. What would be the best way to do this? So far I was
thinking -
subprocess.call(["cd","/path/to/file"])
subprocess.call(["./file.jar"])
How should I do this?
**Update:**
Using both of the answers below, this is what I ended up doing:
subprocess.call(shlex.split("./file.jar -rest -of -command"), cwd=COMMAND_FOLDER)
Answer: To run a process in a different current working directory, use
`subprocess.Popen`'s `cwd` parameter:
import subprocess
proc = subprocess.Popen(['file.jar'], cwd = '/path/to/file')
|
Why print operation within signal handler may change deadlock situation?
Question: I have a simple program as below:
import threading
import time
import signal
WITH_DEADLOCK = 0
lock = threading.Lock()
def interruptHandler(signo, frame):
print str(frame), 'received', signo
lock.acquire()
try:
time.sleep(3)
finally:
if WITH_DEADLOCK:
print str(frame), 'release'
lock.release()
signal.signal(signal.SIGINT, interruptHandler)
for x in xrange(60):
print time.strftime("%H:%M:%S"), 'main thread is working'
time.sleep(1)
So, if you start that program, even if Ctrl+C is pressed twice within 3
seconds, there is no deadlock. Each time you press Ctrl+C the proper line is
displayed. If you change WITH_DEADLOCK=1 and press Ctrl+C twice
(within 3 seconds) then the program will hang.
Can anybody explain why the print operation makes such a difference?
(My python version is 2.6.5)
Answer: To be honest I think J.F. Sebastian's comment is the most appropriate answer
here - you need to make your signal handler reentrant, which it currently
isn't, and it is mostly just surprising that it works anyway without the print
statement.
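
One common way to sidestep the issue is to keep the handler trivial (it only
records that the signal arrived) and do the slow, lock-protected work in the
main loop instead. A sketch based on the question's code:

    import signal
    import time

    pending = [0]   # the handler only touches this counter -- no locks, no sleeping

    def interruptHandler(signo, frame):
        pending[0] += 1

    signal.signal(signal.SIGINT, interruptHandler)

    for x in xrange(60):
        while pending[0]:
            pending[0] -= 1
            print 'handling one Ctrl+C outside the handler'
            time.sleep(3)   # the slow work now happens in the main flow, not in the handler
        print time.strftime("%H:%M:%S"), 'main thread is working'
        time.sleep(1)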
|
Paramiko ssh_config parameters
Question: I'm using python with paramiko (wrapped in pysftp) and there appears to be an
issue where it will hang for a long time if it cannot authenticate during ssh.
I can't figure out a way to set a timeout for the connection and I'm cycling
through many machines, so a single machine that is pingable, but not ssh'able
(can't reach via cmdline ssh either) is hanging everything. Using this:
ssh -o ServerAliveInterval=1 -o ServerAliveCountMax=1 <host>
I can at least get it to error out after 1 second without waiting for a long
time for the authentication in paramiko to die out and raise an exception.
However, I can't figure out how to pass these ssh_config options to paramiko
(or better yet to apply a timeout to the connect). I tried using the SSHConfig
module and that reads in a config file, but it doesn't seem to apply the data
anywhere, seems more used for host aliases.
Any help would be appreciated, been searching around for information/help for
many hours.
Answer: Establish the initial connection using `SSHClient.connect()` with a specified
socket timeout, then create a `SFTPClient` using its transport.
**Successful connection**
>>> import paramiko
>>> client = paramiko.SSHClient()
>>> client.load_system_host_keys()
>>> client.connect(hostname='localhost', port=22, username='user', password='****', timeout=5.0)
>>> sftp = paramiko.SFTPClient.from_transport(client.get_transport())
>>> dirlist = sftp.listdir('.')
**Timed out connection**
>>> import paramiko
>>> client = paramiko.SSHClient()
>>> client.load_system_host_keys()
>>> client.connect(hostname='slowhost', username='user', password='****', timeout=1.0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/paramiko/client.py", line 296, in connect
sock.connect(addr)
File "/usr/lib64/python2.7/socket.py", line 222, in meth
return getattr(self._sock,name)(*args)
socket.timeout: timed out
|
Generating Xml using python
Question: Kindly have a look at the code below; I am using it to generate XML
using Python.
from lxml import etree
# Some dummy text
conn_id = 5
conn_name = "Airtelll"
conn_desc = "Largets TRelecome"
ip = "192.168.1.23"
# Building the XML tree
# Note how attributes and text are added, using the Element methods
# and not by concatenating strings as in your question
root = etree.Element("ispinfo")
child = etree.SubElement(root, 'connection',
number = str(conn_id),
name = conn_name,
desc = conn_desc)
subchild_ip = etree.SubElement(child, 'ip_address')
subchild_ip.text = ip
# and pretty-printing it
print etree.tostring(root, pretty_print=True)
This will produce:
<ispinfo>
<connection desc="Largets TRelecome" number="5" name="Airtelll">
<ip_address>192.168.1.23</ip_address>
</connection>
</ispinfo>
But I want it to be like:
<ispinfo>
<connection desc="Largets TRelecome" number='1' name="Airtelll">
<ip_address>192.168.1.23</ip_address>
</connection>
</ispinfo>
I mean the number attribute should come in single quotes. Any idea how I can
achieve this?
Answer: There is no flag in lxml to do this, so you have to resort to manual
manipulation.
import re
re.sub(r'number="([0-9]+)"',r"number='\1'", etree.tostring(root, pretty_print=True))
However, why do you want to do this? There is no difference other than
cosmetics.
|
Python eliminate duplicates of list with unhashable elements in one line
Question: > **Possible Duplicate:**
> [Python: removing duplicates from a list of
> lists](http://stackoverflow.com/questions/2213923/python-removing-
> duplicates-from-a-list-of-lists)
Say i have list
a=[1,2,1,2,1,3]
If all elements in a are hashable (like in that case), this would do the job:
list(set(a))
But, what if
a=[[1,2],[1,2],[1,3]]
?
Answer:
>>> from itertools import groupby
>>> a = [[1,2],[1,2],[1,3]]
>>> [k for k,v in groupby(sorted(a))]
[[1, 2], [1, 3]]
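
If the order of the result doesn't matter and the inner elements are themselves
hashable, another short sketch converts each inner list to a tuple first:

    >>> a = [[1,2],[1,2],[1,3]]
    >>> [list(t) for t in set(tuple(x) for x in a)]
    [[1, 2], [1, 3]]   # a set does not preserve order, so the ordering may differ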
|
Error in Fullcalendar with json and web2py
Question: I'm calling:
events: {
url: '/CondominioVip/evento/evento_json.json',
error: function() {
alert('there was an error while fetching events!');
}
}
I've also tried to add `type: 'POST` but it didn't work either.
My controller:
def evento_json():
events= "[{'title':'event1','start':'2010-01-01'},{'title':'event3','start':'2010-01-09 12:30:00','allDay':False}]"
return events
Test call from browser
(`http://localhost:8000/CondominioVip/evento/evento_json.json`):
>
> [{'title':'event1','start':'2010-01-01'},{'title':'event3','start':'2010-01-09
> 12:30:00','allDay':False}]
Request from web2py ajax function:
> Accept:application/json, text/javascript, _/_ ; q=0.01
> Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
> Accept-Encoding:gzip,deflate,sdch
> Accept-Language:pt-BR,pt;q=0.8,en-US;q=0.6,en;q=0.4
> Cache-Control:max-age=0
> Connection:keep-alive
> Content-Length:31
> Content-Type:application/x-www-form-urlencoded
> Cookie:session_id_admin=127.0.0.1-f9db7d99-4e7e-4bae-a229-d6614e9599f0;
> cvip_language=pt-BR;
> session_id_condominiovip=127.0.0.1-b820cbe3-9a55-40bb-8618-8d3f9a43f7c2
> Host:`localhost:8000`
> Origin:`http://localhost:8000`
> Referer:`http://localhost:8000/CondominioVip/evento/index/2`
> User-Agent:Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.5 (KHTML, like
> Gecko) Chrome/19.0.1084.46 Safari/536.5
> X-Requested-With:XMLHttpRequest
Response headers:
> Cache-Control:no-store, no-cache, must-revalidate, post-check=0, pre-check=0
> Connection:keep-alive
> Content-Length:105
> Content-Type:application/json
> Date:Tue, 29 May 2012 02:54:04 GMT
> Expires:Tue, 29 May 2012 02:54:03 GMT
> Pragma:no-cache
> Server:Rocket 1.2.4 Python/2.7.3
> Set-Cookie:cvip_language=pt-BR; expires=Wed, 30-May-2012 02:54:04 GMT;
> Path=/,
> session_id_condominiovip=127.0.0.1-b820cbe3-9a55-40bb-8618-8d3f9a43f7c2;
> Path=/
> X-Powered-By:web2py
Response:
>
> [{'title':'event1','start':'2010-01-01'},{'title':'event3','start':'2010-01-09
> 12:30:00','allDay':False}]
I'm out of options. Any help would be appreciated.
Update:
I've installed the JsonView extension in Chrome, and returning a plain string is
not considered a JSON response.
I made some changes:
def evento_json():
rows = db(evento.id>0).select(evento.id,evento.titulo,evento.data_hora_inicio,evento.data_hora_fim)
events = []
for row in rows:
event = {'title': row['titulo'],
'start': row['data_hora_inicio'],
'end': row['data_hora_fim'],
'allDay': False,
'url': URL(c='evento', f='index', args=[row['id']], extension=False)}
events.append(event)
return events
But web2py throws an error. I've printed "events" and "json(events)" before
sending them to generic.json, and the format is exactly what fullcalendar expects.
The way I found to stop the error is:
In controller:
def evento_json():
import datetime
start = datetime.datetime.fromtimestamp(int(request.vars['start'])).strftime('%Y-%m-%d %H:%M:%S')
end = datetime.datetime.fromtimestamp(int(request.vars['end'])).strftime('%Y-%m-%d %H:%M:%S')
set = db((evento.id>0) &
(evento.data_hora_inicio >= start) &
(evento.data_hora_fim <= end))
if (not auth.has_membership(auth.id_group(role='site_admin'), auth.user.id)) and \
(not auth.has_membership(auth.id_group(role='cond_admin'), auth.user.id)):
set = set(evento.flag_disp==True)
rows = set.select(evento.id,
evento.titulo,
evento.data_hora_inicio,
evento.data_hora_fim,
evento.flag_disp)
events = []
for row in rows:
event = {'title': row['titulo'],
'start': row['data_hora_inicio'],
'end': row['data_hora_fim'],
'allDay': False,
'url': URL(c='evento', f='index', args=[row['id']], extension=False),
'color': 'blue' if row['flag_disp'] is True else 'red'}
events.append(event)
if events:
from gluon.serializers import json
return XML(json(events))
else:
return '{}'
And create a view evento/evento_json.json with the following:
> {{=response._vars}}
It works! But it seems like a bug to me. I'm not sure if I'm doing something wrong.
Answer: Maybe try:
events= "[{'title':'event1','start':'2010-01-01'},{'title':'event3','start':'2010-01-09T12:30:00Z','allDay':false}]"
(Changed the event3 start date/time to ISO8601 format as per the
[API](http://arshaw.com/fullcalendar/docs/event_data/Event_Object/), and
changed "False" to "false".)
|
Python star unpacking for version 2.7
Question: As mentioned [here](http://stackoverflow.com/a/431959/386279), you can use the
star for unpacking an unknown number of variables (like in functions), but
only in python 3:
>>> a, *b = (1, 2, 3)
>>> b
[2, 3]
>>> a, *b = (1,)
>>> b
[]
In python 2.7, the best I can come up with is (not terrible, but annoying):
c = (1, 2, 3)
a, b = c[0], c[1:] if len(c) > 1 else []
Is there a way to import this from __future__ like division, or will I need my
own function to do unknown-length unpacking in python 2.7?
Answer: In Python 2.x, you can do:
c = (1, 2, 3)
a, b = c[0], c[1:]
as long as `c` has at least one member it will work, because if `c` only has one
thing in it, `c[1:]` is empty.
You should probably make sure there is at least one thing in `c` though, or
else `c[0]` will raise an exception.
You could do something like:
    try:
        c = tuple(c)
        a, b = c[0], c[1:]
    except (TypeError, IndexError):
        # c is not iterable, or c is iterable but doesn't have any items in it.
        # do something else
        pass
|
Multithreading in Python with the threading and queue modules
Question: I have a file with hundreds of thousands of lines, each line of which needs to
undergo the same process (calculating a covariance). I was going to
multithread because it takes pretty long as is. All the examples/tutorials I
have seen have been fairly complicated for what I want to do, however. If
anyone could point me to a good tutorial that explains how to use the two
modules together that would be great.
Answer: Whenever I have to process something in parallel, I use something similar to
this (I just ripped this out of an existing script):
#!/usr/bin/env python2
# This Python file uses the following encoding: utf-8
import os, sys, time
from multiprocessing import Queue, Manager, Process, Value, Event, cpu_count
class ThreadedProcessor(object):
def __init__(self, parser, input_file, output_file, threads=cpu_count()):
self.parser = parser
self.num_processes = threads
self.input_file = input_file
self.output_file = output_file
self.shared_proxy = Manager()
self.input_queue = Queue()
self.output_queue = Queue()
self.input_process = Process(target=self.parse_input)
self.output_process = Process(target=self.write_output)
self.processes = [Process(target=self.process_row) for i in range(self.num_processes)]
self.input_process.start()
self.output_process.start()
for process in self.processes:
process.start()
self.input_process.join()
for process in self.processes:
process.join()
self.output_process.join()
def parse_input(self):
for index, row in enumerate(self.input_file):
self.input_queue.put([index, row])
for i in range(self.num_processes):
self.input_queue.put('STOP')
def process_row(self):
for index, row in iter(self.input_queue.get, 'STOP'):
self.output_queue.put([index, row[0], self.parser.parse(row[1])])
self.output_queue.put('STOP')
def write_output(self):
current = 0
buffer = {}
for works in range(self.num_processes):
for index, id, row in iter(self.output_queue.get, 'STOP'):
if index != current:
buffer[index] = [id] + row
else:
self.output_file.writerow([id] + row)
current += 1
while current in buffer:
self.output_file.writerow(buffer[current])
del buffer[current]
current += 1
Basically, you have two processes managing the reading/writing of the file.
One reads and parses the input, the other reads from the "done" queue and
writes to your output file. The other processes are spawned (in this case the
number is equal to the number of total processor cores your CPU has) and they
all process elements from the input queue.
|
string append in python
Question: I am trying to prepend each line of a file with " (2 spaces/tabs after ") and
append each line with the string "\r\n"+". The lines of the file before this operation
look as follows.
<!--You have a CHOICE of the next 5 items at this level-->
<!--Optional:-->
<urn:Account id=\"?\">
<!--Optional:-->
............
.............
I am using the following code,
inf=open("req.txt","r")
outf=open("out.txt","w")
for line in inf.readlines():
outf.write("\" "+line+"\\r\\n\" +")
inf.close()
outf.close()
Prepending is happening as expected but appending is not happening properly.
The final result was that all lines were prepended with \r\n" +" except the first
line; the first line was prepended with only " .
I want each line prepended with " and appended with "\r\n"+"
Answer: You should probably use python's built in string formatting.
outf.write("%s%s%s" % ('" ', line, '\r\n" +'))
However! You're not removing the newlines from your data before changing it.
As a result you're getting your format, the entire line (including the
newline) and then your second part.
You'll want to run it through python's built in `rstrip` method.
outf.write("%s%s%s" % ('" ', line.rstrip(), '\r\n" +'))
One thing you want to watch out for is that rstrip will remove any white
space, if you want to remove just newlines then you can use:
outf.write("%s%s%s" % ('" ', line.rstrip("\r\n"), '\r\n" +'))
However, once you do this, you'll need to put a new line at the end of your
string again.
outf.write("%s%s%s%s" % ('" ', line.rstrip("\r\n"), '\r\n" +', "\r\n"))
However! Depending on your OS your default line ending may be different, to do
it correctly you'll want to `import os` then:
outf.write("%s%s%s%s" % ('" ', line.rstrip(os.linesep), '\r\n" +', os.linesep))
|
Python - How do I remove the window border? I have imported UI from Qt into Python and applied setWindowFlags
Question: How do I make this window borderless (remove minimize/maximize/close)?

    import sys
    from PyQt4 import QtCore, QtGui
    from qt import Ui_MainWindow

    class StartQT4(QtGui.QMainWindow):
        def __init__(self, parent=None):
            QtGui.QWidget.__init__(self, parent)
            self.ui = Ui_MainWindow()
            self.ui.setupUi(self)

    if __name__ == "__main__":
        app = QtGui.QApplication(sys.argv)
        myapp = StartQT4()
        myapp.show()
        app.setWindowFlags(app.FramelessWindowHint)  # <<< does not work
        sys.exit(app.exec_())
Answer: You need to set the window flag before calling `show` on the main window.
A minimal working example would look like this:
import sys
from PyQt4 import QtCore, QtGui
class StartQT4(QtGui.QMainWindow):
def __init__(self, parent=None):
super(StartQT4, self).__init__(parent)
self.setWindowFlags(QtCore.Qt.FramelessWindowHint)
self.b = QtGui.QPushButton("exit", self, clicked=self.close)
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
myapp = StartQT4()
myapp.show()
sys.exit(app.exec_())
|
bittorrent tracker server for private file transfer - python
Question: We have a client/server application that needs to transfer the same large
files to, sometimes, many different clients.
At first, everything is done the most obvious way: serving the file from the
webserver API that the clients send their requests to, but everything is done
manually.
A great way to dramatically improve error redundancy and transfer speed would
be to use a peer-to-peer protocol such as BitTorrent.
Due to deadline constraints though I can't spend too much time on the
trial/error process.
I can't find any simple tracker implementation that is easily integrated to
our python api.
Does anybody know of any up to date bittorrent tracker that is simple enough
to just work without all whistles and bells?
Answer: Here is an open-source tracker written in Python:
<https://github.com/JosephSalisbury/python-bittorrent>
According to author, all you need to do is:
from bittorrent import Tracker
tracker = Tracker()
tracker.run()
Just for information here is the list of few open-source trackers,
<http://en.wikipedia.org/wiki/Comparison_of_BitTorrent_tracker_software>
The Pirate Bay (World's largest tracker) use Opentracker software,
<http://en.wikipedia.org/wiki/Opentracker>
|
Python threading outperforms simple while loop OR threading Optimization
Question: A few hours ago, I asked a question about Python multithreading. To understand
how it works, I have performed some experiments, and here are my tests:
* * *
Python script which uses threads:
import threading
import Queue
import time
s = 0;
class ThreadClass(threading.Thread):
lck = threading.Lock()
def __init__(self, inQ, outQ):
threading.Thread.__init__(self)
self.inQ = inQ
self.outQ = outQ
def run(self):
while True:
global s
#print self.getName()+" is running..."
self.item = self.inQ.get()
#self.inQ.task_done()
ThreadClass.lck.acquire()
s += self.item
ThreadClass.lck.release()
#self.inQ.task_done()
self.outQ.put(self.item)
self.inQ.task_done()
inQ = Queue.Queue()
outQ = Queue.Queue()
i = 0
n = 1000000
print "putting items to input"
while i<n:
inQ.put(i)
i += 1
start_time = time.time()
print "starting threads..."
for i in xrange(10):
t = ThreadClass(inQ, outQ);
t.setDaemon(True)
t.start()
inQ.join()
end_time = time.time()
print "Elapsed time is: %s"%(end_time - start_time)
print s
* * *
The following has the same functionality with a simple while loop:
import Queue
import time
inQ = Queue.Queue()
outQ = Queue.Queue()
i = 0
n = 1000000
sum = 0
print "putting items to input"
while i<n:
inQ.put(i)
i += 1
print "while loop starts..."
start_time = time.time()
while inQ.qsize() > 0:
item = inQ.get()
sum += item
outQ.put(item)
end_time = time.time()
print "Elapsed time is: %s"%(end_time - start_time)
print sum
* * *
If you run these programs on your machine, you can see that threads are much
slower than a simple while loop. I am a bit confused about threads and want to
know what is wrong with the threaded code. How can I optimize it (in this
situation), and why is it slower than the while loop?
Answer: Threading is always tricky, but threading in Python is special.
To discuss optimization, you have to focus on special cases, otherwise there
is no single answer. The initial thread solution on my computer runs in 37.11 s.
If you use a local variable to sum the elements in each thread and then take the
lock only at the end, the time drops to 32.62 s.
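
A sketch of that local-sum variant (it assumes, as in the question, that the
input queue is completely filled before the threads start, so an empty queue
means there is no more work):

    import threading
    import Queue

    s = 0
    lck = threading.Lock()

    class LocalSumThread(threading.Thread):
        def __init__(self, inQ, outQ):
            threading.Thread.__init__(self)
            self.inQ = inQ
            self.outQ = outQ

        def run(self):
            global s
            local_sum = 0
            while True:
                try:
                    item = self.inQ.get_nowait()
                except Queue.Empty:
                    break
                local_sum += item
                self.outQ.put(item)
                self.inQ.task_done()
            lck.acquire()
            s += local_sum      # one lock acquisition per thread instead of one per item
            lck.release()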
Ok. The no-thread solution runs in 7.47 s. Great. But if you want to sum a ton
of numbers in Python, you just use the built-in function sum. So, if we use a
list with no threads and the built-in sum, the time drops to 0.09 s. Great!
Why?
Threads in Python are subject to the Global Interpreter Lock (GIL). They will
never run Python code in parallel. They are real threads, but internally, they
are only allowed to run X Python instructions before releasing the GIL to
another thread. For very simple calculations, the cost of creating a thread,
locking and context switching is much bigger than the cost of your simple
computation. So in this case, the overhead is 5 times bigger than the
computation itself. Threading in Python is interesting when you can't use
async I/O or when you have blocking functions that should run at the same
time.
But why is the built-in sum faster than the no-thread Python solution? The
built-in sum is implemented in C, and Python loops suck performance-wise. So
it is much faster to iterate over all elements of the list using the built-in sum.
Is it always the case? No, it depends on what you are doing. If you were
writing these numbers to n different files, the threading solution could have
a chance, as the GIL is released during I/O. But even then, we would need to
check if I/O buffering/disk sync time would not be game changers. This kind of
detail makes a final answer very difficult. So, if you want to optimize
something, you must know exactly what you have to optimize. To sum a list of
numbers in Python, just use the built-in sum.
|
Generating Symmetric Matrices in Numpy
Question: I am trying to generate symmetric matrices in numpy. Specifically, these
matrices are to have entries in random places, and the contents of each entry can
be random. Along the main diagonal we are not concerned with what entries are
there, so I have randomized those as well.
The approach I have taken is to first generate an n x n all-zero matrix and
simply loop over the indices of the matrix. However, considering that looping is
relatively expensive in Python, I'm wondering if I can achieve the
same thing without using Python's for loops.
Is there something built into numpy that allows me to achieve my goal more
efficiently?
Here is my current code:
import numpy as np
import random
def empty(x, y):
return x*0
b = np.fromfunction(empty, (n, n), dtype = int)
for i in range(0, n):
for j in range(0, n):
if i == j:
b[i][j] = random.randrange(-2000, 2000)
else:
switch = random.random()
random.seed()
if switch > random.random():
a = random.randrange(-2000, 2000)
b[i][j] = a
b[j][i] = a
else:
b[i][j] = 0
b[j][i] = 0
Answer: You could just do something like:
import numpy as np
N = 100
b = np.random.random_integers(-2000,2000,size=(N,N))
b_symm = (b + b.T)/2
Where you can choose from whatever distribution you want in the `np.random` or
equivalent scipy module.
**Update:** If you are trying to build graph-like structures, definitely check
out the networkx package:
<http://networkx.lanl.gov>
which has a number of built-in routines to build graphs:
<http://networkx.lanl.gov/reference/generators.html>
Also if you want to add some number of randomly placed zeros, you can always
generate a random set of indices and replace the values with zero.
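
For instance, continuing the snippet above, here is a sketch of that zeroing
step that keeps the matrix symmetric (the 50% selection fraction is an
arbitrary choice):

    # build a random boolean mask; intersecting it with its transpose keeps it symmetric
    mask = np.random.random((N, N)) < 0.5
    mask = mask & mask.T              # (i, j) is selected only if (j, i) is too
    b_symm[mask] = 0                  # zero out the selected positions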
|
Controlling PowerPoint with Python's win32com. How to access "Save As" option programmatically
Question: I'm trying to open powerpoint via python and then save the slide presentation
as pdf handouts (three to a page). After a bit of googling, I stumbled upon
[this](http://stackoverflow.com/questions/2170830/vba-save-presentation-as-
pdf-in-handout-2x2-format-office-2007). A pretty similar question. however, I
can't seem to get it running. I think I may be 'translating' the VB into
Python incorrectly.
Looking over the MSDN docs [here](http://msdn.microsoft.com/en-
us/library/bb231096.aspx), I attempted to fill in the two required arguments
as a starting point. While it seems to accept the `path`, it throws an error
with the `FixedFormatType`.
My code is as follows.
import win32com.client, sys, os
app = win32com.client.Dispatch("PowerPoint.Application")
app.Visible = True
names = []
for i in os.listdir(os.getcwd()):
if i[-1] == 't':
tmp = os.getcwd() + '\\' + i
names.append(tmp)
tmp = ''
presentation = app.Presentations.Open(names[0])
presentation.ExportAsFixedFormat(os.getcwd() + '\\test.pdf', 'PpFixedFormatType')
I've tried many variations of the `ppFixedFormatType` argument.
`ppFixedFormatTypePDF`, `win32com.ppFixedFormatTypePDF`,
`win32com.client.constants.ppFixedFormatTypePDF`, etc.. but all raise an error
of one type or another. Most are similar to this: `ValueError: invalid literal
for int() with base 10: 'PpFixedFormatType'`
Answer: You may have encountered this bug in PowerPoint's API: [Type mismatch when
using export function of PowerPoint
2007](http://sourceforge.net/p/pywin32/bugs/339/). The solution is to specify
the PrintRange parameter to the ExportAsFixedFormat() call.
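
Here is a sketch of that workaround. The file names are hypothetical, and it
uses `gencache.EnsureDispatch` so that the makepy-generated wrapper exposes the
`ppFixedFormatTypePDF` constant and accepts keyword arguments:

    import os
    from win32com.client import gencache, constants

    app = gencache.EnsureDispatch("PowerPoint.Application")
    app.Visible = True
    presentation = app.Presentations.Open(os.path.join(os.getcwd(), 'slides.pptx'))

    presentation.ExportAsFixedFormat(
        os.path.join(os.getcwd(), 'test.pdf'),
        constants.ppFixedFormatTypePDF,
        PrintRange=None)   # passing PrintRange explicitly avoids the type-mismatch bug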
|
most efficient way to substring path and file out of a string
Question: I am new to python, just wondering what's the best way for python to do the
following:
file='/var/log/test.txt'
==action==
after ==action==, I want to get the path and the file separated like:
path='/var/log'
file_name='test.txt'
I am not asking how to do this, I am asking the most efficient way to do this
using least lines of code.
=EDIT=
What if my file='test.txt' instead of file='/var/log/test.txt'? I would more
likely expect:
path='.'
file_name='test.txt'
other than
path=''
file_name='test.txt'
what's the tip for this then?
Answer:
file = '/var/log/test.txt'
path, file_name = os.path.split(file)
yields:
path
'/var/log'
file_name
'test.txt'
To use
[os.path.split()](http://docs.python.org/library/os.path.html?highlight=os.path.split#os.path.split)
requires `import os`. I'd have to think that the Python library is as
efficient as it could be.
To respond to the update/edit, if no path is specified and you want the `.`
for path, add:
if not path: path = '.'
I.e.,
file = 'test.txt'
path, file_name = os.path.split(file)
if not path: path = '.'
gives:
path
'.'
file_name
'test.txt'
|
Defining PYTHONPATH automatically in virtualenvs
Question: Is it possible to configure PYTHONPATH for a virtualenv automatically with
mkvirtualenv? I don't define PYTHONPATH in my ~/.bashrc, but in each
virtualenv. Every time I create a new virtualenv, I have to put these lines in
`$VIRTUAL_ENV/bin/activate` manually:
in deactivate function:
unset PYTHONPATH
outside deactivate:
PYTHONPATH="$VIRTUAL_ENV/lib/python2.7/site-packages"
I'd like to put these lines automatically with mkvirtualenv. I'm using
virtualenv 1.7.1.2.
Answer: The path `$VIRTUAL_ENV/lib/python2.7/site-packages` is the default in virtualenv.
I only removed [ipython](http://ipython.org/) from my system and installed it
inside virtualenv:
workon myvirtualenv
pip install ipython
Then I checked the path inside ipython using:
import sys
sys.path
|
Python module not found (directory problems)
Question: I have a Python 2.5 project with following directory structure:
database/__init__.py
database/createDBConnection.py
gui/mainwindow.py
When I try to run
python gui/mainwindow.py
I get the error
C:\PopGen>python gui/mainwindow.py
Traceback (most recent call last):
File "gui/mainwindow.py", line 12, in <module>
from database.createDBConnection import createDBC
ImportError: No module named database.createDBConnection
In mainwindow.py, there is following statement on line 12
from database.createDBConnection import createDBC
The problem occurs because Python can't find the database module.
Question: What can I do in order to fix this error?
Here's the code of the project:
<https://www.dropbox.com/sh/edfutlba960atp9/MwFpaepEpl>
I tried to use
C:\PopGen>python -m gui.mainwindow
but got these errors
Traceback (most recent call last):
File "C:\Python25\lib\runpy.py", line 95, in run_module
filename, loader, alter_sys)
File "C:\Python25\lib\runpy.py", line 52, in _run_module_code
mod_name, mod_fname, mod_loader)
File "C:\Python25\lib\runpy.py", line 32, in _run_code
exec code in run_globals
File "C:\PopGen\gui\mainwindow.py", line 13, in <module>
from file_menu.wizard_window_validate import Wizard
ImportError: No module named file_menu.wizard_window_validate
Answer: There are several ways to fix this, but this is perhaps the easiest one.
Try adding this in mainwindow.py, prior to the import that is failing:
import sys
sys.path.append("C:/path/to/database/module")
|
Django Python: Install multiple versions of the same package within a virtualenv
Question: I'm integrating two Django apps together, but am finding that one requires
django-mptt version 3.x and the other requires 5.x. I can't upgrade the 3.x
app because I don't 'own' that particular app and it might be needed for some
old dependencies.
Is there any way short of forking and namespacing the django-mptt 5.x version
so both can be used? I'd really hate to fork it and am wondering if there are
better options out there.
Just wondering what others have done in similar situations. Thank you for
reading.
Answer: This is not possible with the usual Python tools, since virtualenvs are
what is supposed to be used for that kind of isolation.
One possibility is to put each version of the dependency inside each app's
directory; this way each app may import its own copy first instead of the other one.
But if your goal is not to edit any of the apps' code, you had better hope they
don't play with the Python path, nor share any imports related to the
dependencies.
|
Filtering objects matching a given logical formula
Question: I have an array of Python objects, and I would like to get all objects that
evaluate to true for a given logical formula.
Let's say the objects are tasks in a task tracking system. Each task has a
status, an assignee and a due date. Now I'd like to get all objects that
evaluate to true for this formula:
(assignee=cdecker OR due=tomorrow) AND status=open
For things like the object fields (`assignee` and `status`) it would be a
simple matter of checking equality, whereas `due=tomorrow` would be a function
to be called on the object.
Is there a simple parser that I could use to parse the expression and then
evaluate it for all the objects in my starting set (basically wire all
literals to function calls)?
Answer: Try [pyparsing](http://pyparsing.wikispaces.com/). Here's a parser for a
grammar that supports equality, boolean ops and nested brackets:
from pyparsing import *
expr = Forward()
factor = Forward()
term = Forward()
iand = Literal("AND").suppress()
ior = Literal("OR").suppress()
ieq = Literal("=").suppress()
lpar = Literal("(").suppress()
rpar = Literal(")").suppress()
ident = Word(alphas)
atom = ident | lpar + expr + rpar
factor << Group(atom + ZeroOrMore(ieq + atom))("equal")
term << Group(factor + ZeroOrMore(iand + term))("and")
expr << Group(term + ZeroOrMore(ior + expr))("or")
You can apply this to your formulas to obtain tree-like structures, which you
can evaluate at some point later, for example:
formula = "(assignee=cdecker OR due=tomorrow) AND status=open"
tree = expr.parseString(formula, parseAll=True)
print tree.asXML()
|
SOAP web service behind proxy, access using python-suds
Question: I have this strange case scenario with python suds.
I have a soap service (java) running on a local ip, say
`http://10.0.0.1:8080/services/`
I use suds with HTTP basic auth within the local network and it works fine.
from suds.client import Client
c = Client(url, username="user", password="pass")
But I want to make it accessible from outside, so I asked the system admin:
"Can you set up an external IP with a reverse proxy for this SOAP service?"
"Yes, but the company firewall doesn't allow port 8080, so your rule will be:
`http://10.0.0.1:8080/services/*` <-> `https://example.com/services/*`
The rule was then set up, but I just can't get the client to work. I tried all
kinds of transport:
from suds.transport.https import WindowsHttpAuthenticated
from suds.transport.http import HttpAuthenticated
#from suds.transport.https import HttpAuthenticated
from suds.client import Client
http = HttpAuthenticated(username="jlee", password="jlee")
#https = HttpAuthenticated(username="jlee", password="jlee")
ntlm = WindowsHttpAuthenticated(username="jlee", password="jlee")
url = "https://example.com/services/SiteManager?wsdl"
c = Client(url, transport = http)
it always returns:
suds.transport.TransportError: HTTP Error 403: Forbidden ( The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. )
I tried to access the URL `https://example.com/services/SiteManager?wsdl` from
chrome, it returns 403 too!
But if I sign in first using other routes (my server is running other http
pages on tomcat), and then access the URL again the wsdl desc page shows up!
Can anybody tell me what's wrong with this? Is it to do with the configuration
of the reverse proxy server or the suds transport?
Thanks very much! Jackie
Answer: Found the solution by talking to the system admin (who is in charge of setting up
the reverse proxy): there is a checkbox option in the MS DMZ (reverse
proxy server) to allow HTTP basic auth.
|
PyQt4 - QGIS form error
Question: I have to build a form in QGIS to customize data input for each polygon in the
shapefile. I use Qt Designer to create a form (.ui), with some text boxes and
combo boxes pointing to the fields of my shapefile.
Then I use the Python file from Nathan's QGIS blog to add some logic.
Python code:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
nameField = None
myDialog = None
def formOpen(dialog,layerid,featureid):
global myDialog
myDialog = dialog
global nameField
nameField = dialog.findChild(QTextEdit,"PART")
buttonBox = dialog.findChild(QDialogButtonBox,"buttonBox")
nameField.textChanged.connect(Name_onTextChanged)
# Disconnect the signal that QGIS has wired up for the dialog to the button box.
buttonBox.accepted.disconnect(myDialog.accept)
# Wire up our own signals.
buttonBox.accepted.connect(validate)
buttonBox.rejected.connect(myDialog.reject)
def validate():
# Make sure that the name field isn't empty.
if not nameField.text().length() > 0:
nameField.setStyleSheet("background-color: rgba(255, 107, 107, 150);")
msgBox = QMessageBox()
msgBox.setText("Field PART must not be NULL.")
msgBox.exec_()
else:
# Return the form as accpeted to QGIS.
myDialog.accept()
def Name_onTextChanged(text):
if not nameField.text().length() > 0:
nameField.setStyleSheet("background-color: rgba(255, 107, 107, 150);")
else:
nameField.setStyleSheet("")
So I open an edit session in QGIS and click on a polygon with the Identify tool,
but when I click the OK button on my customized form, regardless of whether the field PART is
NULL or not, the following error occurs:
ERROR CODE LINE >>>> if not nameField.text().length() > 0:
ERROR MESSAGE >>>> AttributeError: 'str' object has no attribute 'text'
I'm running QGIS 1.7.4, Python 2.7.2, Windows 7 64-bit.
I miss something... Please, anybody can help me?
Answer: It looks like you have a Python error more than a problem with QGIS.
You have two instances of if not nameField.text().length() > 0:
def validate():
if not nameField.text().length() > 0:
and
def Name_onTextChanged(text):
if not nameField.text().length() > 0:
Initially, it looks like nameField is not an input for either of these
functions. So I guess these are assigned somewhere else and you've reduced the
code example. Also, you have text as a variable input for 'Name_onTextChanged'
but you also try and use it as a function 'nameField.text().length()'. This
might be a problem.
Generally, Python is complaining because it cannot perform the operation
'text()' on the variable nameField, which it believes is a string. [There is
no text() function available for
strings](http://docs.python.org/2/library/string.html). And it looks like
nameField is actually supposed to be a QTextEdit object.
If nameField is a QTextEdit object, then you can use toPlainText() instead
which should do what you need it to do. So something like
if not nameField.toPlainText().strip():
In this instance, I have included .strip() as well so that you do not get a
positive result if there are white spaces in text field.
Does that help at all?
|
Windows explorer context menus with sub-menus using pywin32
Question: I'm trying to add some shell extensions using Python, with icons and a sub-menu,
but I'm struggling to get much further than the demo in pywin32. I can't seem
to come up with anything by searching Google, either.
I believe I need to register a COM server to be able to change the options in
the submenu depending on where the right-clicked file/folder is, the type of
file, etc.
# A sample context menu handler.
# Adds a 'Hello from Python' menu entry to .py files. When clicked, a
# simple message box is displayed.
#
# To demonstrate:
# * Execute this script to register the context menu.
# * Open Windows Explorer, and browse to a directory with a .py file.
# * Right-Click on a .py file - locate and click on 'Hello from Python' on
# the context menu.
import pythoncom
from win32com.shell import shell, shellcon
import win32gui
import win32con
class ShellExtension:
_reg_progid_ = "Python.ShellExtension.ContextMenu"
_reg_desc_ = "Python Sample Shell Extension (context menu)"
_reg_clsid_ = "{CED0336C-C9EE-4a7f-8D7F-C660393C381F}"
_com_interfaces_ = [shell.IID_IShellExtInit, shell.IID_IContextMenu]
_public_methods_ = shellcon.IContextMenu_Methods + shellcon.IShellExtInit_Methods
def Initialize(self, folder, dataobj, hkey):
print "Init", folder, dataobj, hkey
self.dataobj = dataobj
def QueryContextMenu(self, hMenu, indexMenu, idCmdFirst, idCmdLast, uFlags):
print "QCM", hMenu, indexMenu, idCmdFirst, idCmdLast, uFlags
# Query the items clicked on
format_etc = win32con.CF_HDROP, None, 1, -1, pythoncom.TYMED_HGLOBAL
sm = self.dataobj.GetData(format_etc)
num_files = shell.DragQueryFile(sm.data_handle, -1)
if num_files>1:
msg = "&Hello from Python (with %d files selected)" % num_files
else:
fname = shell.DragQueryFile(sm.data_handle, 0)
msg = "&Hello from Python (with '%s' selected)" % fname
idCmd = idCmdFirst
items = ['First Python content menu item!']
if (uFlags & 0x000F) == shellcon.CMF_NORMAL: # Check == here, since CMF_NORMAL=0
print "CMF_NORMAL..."
items.append(msg)
elif uFlags & shellcon.CMF_VERBSONLY:
print "CMF_VERBSONLY..."
items.append(msg + " - shortcut")
elif uFlags & shellcon.CMF_EXPLORE:
print "CMF_EXPLORE..."
items.append(msg + " - normal file, right-click in Explorer")
elif uFlags & shellcon.CMF_DEFAULTONLY:
print "CMF_DEFAULTONLY...\r\n"
else:
print "** unknown flags", uFlags
win32gui.InsertMenu(hMenu, indexMenu,
win32con.MF_SEPARATOR|win32con.MF_BYPOSITION,
0, None)
indexMenu += 1
for item in items:
win32gui.InsertMenu(hMenu, indexMenu,
win32con.MF_STRING|win32con.MF_BYPOSITION,
idCmd, item)
indexMenu += 1
idCmd += 1
win32gui.InsertMenu(hMenu, indexMenu,
win32con.MF_SEPARATOR|win32con.MF_BYPOSITION,
0, None)
indexMenu += 1
return idCmd-idCmdFirst # Must return number of menu items we added.
def InvokeCommand(self, ci):
mask, hwnd, verb, params, dir, nShow, hotkey, hicon = ci
win32gui.MessageBox(hwnd, "Hello", "Wow", win32con.MB_OK)
def GetCommandString(self, cmd, typ):
# If GetCommandString returns the same string for all items then
# the shell seems to ignore all but one. This is even true in
# Win7 etc where there is no status bar (and hence this string seems
# ignored)
return "Hello from Python (cmd=%d)!!" % (cmd,)
def DllRegisterServer():
import _winreg
folder_key = _winreg.CreateKey(_winreg.HKEY_CLASSES_ROOT,
"Folder\\shellex")
folder_subkey = _winreg.CreateKey(folder_key, "ContextMenuHandlers")
folder_subkey2 = _winreg.CreateKey(folder_subkey, "PythonSample")
_winreg.SetValueEx(folder_subkey2, None, 0, _winreg.REG_SZ,
ShellExtension._reg_clsid_)
file_key = _winreg.CreateKey(_winreg.HKEY_CLASSES_ROOT,
"*\\shellex")
file_subkey = _winreg.CreateKey(file_key, "ContextMenuHandlers")
file_subkey2 = _winreg.CreateKey(file_subkey, "PythonSample")
_winreg.SetValueEx(file_subkey2, None, 0, _winreg.REG_SZ,
ShellExtension._reg_clsid_)
print ShellExtension._reg_desc_, "registration complete."
def DllUnregisterServer():
import _winreg
try:
folder_key = _winreg.DeleteKey(_winreg.HKEY_CLASSES_ROOT,
"Folder\\shellex\\ContextMenuHandlers\\PythonSample")
file_key = _winreg.DeleteKey(_winreg.HKEY_CLASSES_ROOT,
"*\\shellex\\ContextMenuHandlers\\PythonSample ")
except WindowsError, details:
import errno
if details.errno != errno.ENOENT:
raise
print ShellExtension._reg_desc_, "unregistration complete."
if __name__=='__main__':
from win32com.server import register
register.UseCommandLine(ShellExtension,
finalize_register = DllRegisterServer,
finalize_unregister = DllUnregisterServer)
Answer: I found out how to do this after a lot of trial and error and googling.
The example below shows a menu with a submenu and icons.
# A sample context menu handler.
# Adds a menu item with sub menu to all files and folders, different options inside specified folder.
# When clicked a list of selected items is displayed.
#
# To demonstrate:
# * Execute this script to register the context menu. `python context_menu.py --register`
# * Restart explorer.exe - in the task manager, end the explorer.exe process, then File > New Task and type explorer.exe
# * Open Windows Explorer, and browse to a file/directory.
# * Right-Click file/folder - locate and click on an option under 'Menu options'.
import os
import pythoncom
from win32com.shell import shell, shellcon
import win32gui
import win32con
import win32api
class ShellExtension:
_reg_progid_ = "Python.ShellExtension.ContextMenu"
_reg_desc_ = "Python Sample Shell Extension (context menu)"
_reg_clsid_ = "{CED0336C-C9EE-4a7f-8D7F-C660393C381F}"
_com_interfaces_ = [shell.IID_IShellExtInit, shell.IID_IContextMenu]
_public_methods_ = shellcon.IContextMenu_Methods + shellcon.IShellExtInit_Methods
def Initialize(self, folder, dataobj, hkey):
print "Init", folder, dataobj, hkey
win32gui.InitCommonControls()
self.brand= "Menu options"
self.folder= "C:\\Users\\Paul\\"
self.dataobj = dataobj
self.hicon= self.prep_menu_icon(r"C:\path\to\icon.ico")
def QueryContextMenu(self, hMenu, indexMenu, idCmdFirst, idCmdLast, uFlags):
print "QCM", hMenu, indexMenu, idCmdFirst, idCmdLast, uFlags
# Query the items clicked on
files= self.getFilesSelected()
fname = files[0]
idCmd = idCmdFirst
isdir= os.path.isdir(fname)
in_folder= all([f_path.startswith(self.folder) for f_path in files])
win32gui.InsertMenu(hMenu, indexMenu,
win32con.MF_SEPARATOR|win32con.MF_BYPOSITION,
0, None)
indexMenu += 1
menu= win32gui.CreatePopupMenu()
win32gui.InsertMenu(hMenu,indexMenu,win32con.MF_STRING|win32con.MF_BYPOSITION|win32con.MF_POPUP,menu,self.brand)
win32gui.SetMenuItemBitmaps(hMenu,menu,0,self.hicon,self.hicon)
# idCmd+=1
indexMenu+=1
if in_folder:
if len(files) == 1:
if isdir:
win32gui.InsertMenu(menu,0,win32con.MF_STRING,idCmd,"Item 1"); idCmd+=1
else:
win32gui.InsertMenu(menu,0,win32con.MF_STRING,idCmd,"Item 2")
win32gui.SetMenuItemBitmaps(menu,idCmd,0,self.hicon,self.hicon)
idCmd+=1
else:
win32gui.InsertMenu(menu,0,win32con.MF_STRING,idCmd,"Item 3")
win32gui.SetMenuItemBitmaps(menu,idCmd,0,self.hicon,self.hicon)
idCmd+=1
if idCmd > idCmdFirst:
win32gui.InsertMenu(menu,1,win32con.MF_SEPARATOR,0,None)
win32gui.InsertMenu(menu,2,win32con.MF_STRING,idCmd,"Item 4")
win32gui.SetMenuItemBitmaps(menu,idCmd,0,self.hicon,self.hicon)
idCmd+=1
win32gui.InsertMenu(menu,3,win32con.MF_STRING,idCmd,"Item 5")
win32gui.SetMenuItemBitmaps(menu,idCmd,0,self.hicon,self.hicon)
idCmd+=1
win32gui.InsertMenu(menu,4,win32con.MF_SEPARATOR,0,None)
win32gui.InsertMenu(menu,5,win32con.MF_STRING|win32con.MF_DISABLED,idCmd,"Item 6")
win32gui.SetMenuItemBitmaps(menu,idCmd,0,self.hicon,self.hicon)
idCmd+=1
win32gui.InsertMenu(hMenu, indexMenu,
win32con.MF_SEPARATOR|win32con.MF_BYPOSITION,
0, None)
indexMenu += 1
return idCmd-idCmdFirst # Must return number of menu items we added.
def getFilesSelected(self):
format_etc = win32con.CF_HDROP, None, 1, -1, pythoncom.TYMED_HGLOBAL
sm = self.dataobj.GetData(format_etc)
num_files = shell.DragQueryFile(sm.data_handle, -1)
files= []
for i in xrange(num_files):
fpath= shell.DragQueryFile(sm.data_handle,i)
files.append(fpath)
return files
def prep_menu_icon(self, icon): #Couldn't get this to work with pngs, only ico
# First load the icon.
ico_x = win32api.GetSystemMetrics(win32con.SM_CXSMICON)
ico_y = win32api.GetSystemMetrics(win32con.SM_CYSMICON)
hicon = win32gui.LoadImage(0, icon, win32con.IMAGE_ICON, ico_x, ico_y, win32con.LR_LOADFROMFILE)
hdcBitmap = win32gui.CreateCompatibleDC(0)
hdcScreen = win32gui.GetDC(0)
hbm = win32gui.CreateCompatibleBitmap(hdcScreen, ico_x, ico_y)
hbmOld = win32gui.SelectObject(hdcBitmap, hbm)
# Fill the background.
brush = win32gui.GetSysColorBrush(win32con.COLOR_MENU)
win32gui.FillRect(hdcBitmap, (0, 0, 16, 16), brush)
# unclear if brush needs to be freed. Best clue I can find is:
# "GetSysColorBrush returns a cached brush instead of allocating a new
# one." - implies no DeleteObject
# draw the icon
win32gui.DrawIconEx(hdcBitmap, 0, 0, hicon, ico_x, ico_y, 0, 0, win32con.DI_NORMAL)
win32gui.SelectObject(hdcBitmap, hbmOld)
win32gui.DeleteDC(hdcBitmap)
return hbm
def InvokeCommand(self, ci):
mask, hwnd, verb, params, dir, nShow, hotkey, hicon = ci
win32gui.MessageBox(hwnd, str(self.getFilesSelected()), "Wow", win32con.MB_OK)
def GetCommandString(self, cmd, typ):
# If GetCommandString returns the same string for all items then
# the shell seems to ignore all but one. This is even true in
# Win7 etc where there is no status bar (and hence this string seems
# ignored)
return "Hello from Python (cmd=%d)!!" % (cmd,)
def DllRegisterServer():
import _winreg
folder_key = _winreg.CreateKey(_winreg.HKEY_CLASSES_ROOT,
"Folder\\shellex")
folder_subkey = _winreg.CreateKey(folder_key, "ContextMenuHandlers")
folder_subkey2 = _winreg.CreateKey(folder_subkey, "PythonSample")
_winreg.SetValueEx(folder_subkey2, None, 0, _winreg.REG_SZ,
ShellExtension._reg_clsid_)
file_key = _winreg.CreateKey(_winreg.HKEY_CLASSES_ROOT,
"*\\shellex")
file_subkey = _winreg.CreateKey(file_key, "ContextMenuHandlers")
file_subkey2 = _winreg.CreateKey(file_subkey, "PythonSample")
_winreg.SetValueEx(file_subkey2, None, 0, _winreg.REG_SZ,
ShellExtension._reg_clsid_)
print ShellExtension._reg_desc_, "registration complete."
def DllUnregisterServer():
import _winreg
try:
folder_key = _winreg.DeleteKey(_winreg.HKEY_CLASSES_ROOT,
"Folder\\shellex\\ContextMenuHandlers\\PythonSample")
file_key = _winreg.DeleteKey(_winreg.HKEY_CLASSES_ROOT,
"*\\shellex\\ContextMenuHandlers\\PythonSample")
except WindowsError, details:
import errno
if details.errno != errno.ENOENT:
raise
print ShellExtension._reg_desc_, "unregistration complete."
if __name__=='__main__':
from win32com.server import register
register.UseCommandLine(ShellExtension,
finalize_register = DllRegisterServer,
finalize_unregister = DllUnregisterServer)
|
How to do multiple arguments to map function where one remains the same in python?
Question: Let's say we have a function add as follows:
def add(x, y):
return x + y
We want to apply the map function to an array:
map(add, [1, 2, 3], 2)
The semantics are that I want to add 2 to every element of the array. But the
`map` function requires the third argument to be a list as well.
**Note:** I am putting the add example for simplicity. My original function is
much more complicated. And of course option of setting the default value of
`y` in add function is out of question as it will be changed for every call.
Answer: One option is a list comprehension:
[add(x, 2) for x in [1, 2, 3]]
More options:
a = [1, 2, 3]
import functools
map(functools.partial(add, y=2), a)
import itertools
map(add, a, itertools.repeat(2, len(a)))
|
EOFError in python
Question: I got an EOFError at line 87 of the following code:
import random
def printDice(diceList):
upperLine=" _____ _____ _____ _____ _____"
line1="|"
line2="|"
line3="|"
lowerLine=" ----- ----- ----- ----- -----"
for i in range(len(diceList)):
if(diceList[i]==1):
line1+=" "
elif(diceList[i]==2):
line1+="* "
elif(diceList[i]==3):
line1+="* "
elif(diceList[i]==4):
line1+="* *"
elif(diceList[i]==5):
line1+="* *"
else:
line1+="* *"
if(i==4):
line1+="|"
else:
line1+="| |"
for i in range(len(diceList)):
if(diceList[i]==1):
line2+=" * "
elif(diceList[i]==2):
line2+=" "
elif(diceList[i]==3):
line2+=" * "
elif(diceList[i]==4):
line2+=" "
elif(diceList[i]==5):
line2+=" * "
else:
line2+="* *"
if(i==4):
line2+="|"
else:
line2+="| |"
for i in range(len(diceList)):
if(diceList[i]==1):
line3+=" "
elif(diceList[i]==2):
line3+=" *"
elif(diceList[i]==3):
line3+=" *"
elif(diceList[i]==4):
line3+="* *"
elif(diceList[i]==5):
line3+="* *"
else:
line3+="* *"
if(i==4):
line3+="|"
else:
line3+="| |"
print upperLine
print line1
print line2
print line3
print lowerLine
tellMe="N"
print
print "The purpose of the game is to figure out the rule."
print "I can tell you three things:\n1. The name of the game is petals around a rose, the name is important.\n2. I can tell you whether or not your guess is right and the score of the dice.\n3. I can tell you that the score is always even or 0"
print
print "At any time you can quit by typing an odd number"
go="Y"
wrongGuesses=0
while(go=="Y"):
diceList=[]
score=0
rightWrong="N"
for i in range(5):
diceList.append(random.randrange(1,7))
for i in range(5):
if(diceList[i]==3):
score+=2
elif(diceList[i]==5):
score+=4
printDice(diceList)
print
while(rightWrong=="N"):
guess=input("What is your guess? ")
if(guess%2==1):
break
if(guess!=score):
print "Wrong"
wrongGuesses+=1
tellMe=raw_input("Tell you (Y or N)? ")
tellMe=tellMe.upper()
if(tellMe=="Y"):
print "The score was "+str(score)+"."
rightWrong="Y"
else:
rightWrong="Y"
print "Right"
if(wrongGuesses%13==0 and wrongGuesses!=0):
print"The name is very important."
if((wrongGuesses==30) and (wrongGuesses!=0)):
print "The maximum score is 20."
print
I was using the site codepad.org to run this program. I have actually run this
program successfully in an IDE, but when I use this site this error pops up:
What is your guess?
Traceback (most recent call last):
Line 88, in <module>
guess=input("What is your guess? ")
EOFError
Answer: Your program reads user input from standard input. When running it on
codepad.org, there is no user input and attempting to read from standard input
will result in an `EOFError`.
If you use the site ideone.com instead, it allows you to specify the user
input. However, you will have to provide the entire input ahead of time rather
than interactively. If your program tries to read past the end of the provided
input, you will get `EOFError` again.
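If you also want the script itself to cope with running out of input instead of crashing, one small sketch (keeping the original Python 2 style) is to catch the exception around the `input()` call inside the guessing loop:
try:
    guess = input("What is your guess? ")
except EOFError:
    print "No more input available, quitting."
    break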
|
Python: How to get Outer class variables from inner static class?
Question: I want to specify the variable once when creating the instance Outer(variable), and
then use this variable in all the nested static classes. How should I do that? Is there
any solution other than using non-static methods and passing Outer into each inner class?
class Outer():
def __init__(self, variable):
self.variable= variable
class Inner1():
@staticmethod
def work1():
print Outer.variable
class Inner2():
@staticmethod
def work2():
print Outer.variable
Answer: If you really want such a thing, a metaclass may help a little, for example:
from types import ClassType
class OuterMeta(type):
def __new__(mcls, name, base, attr):
ret = type.__new__(mcls, name, base, attr)
for k, v in attr.iteritems():
if isinstance(v, (ClassType, type)):
v.Outer = ret
return ret
class Outer(object):
__metaclass__ = OuterMeta
var = 'abc'
class Inner:
def work(self):
print self.Outer.var
@classmethod
def work2(cls):
print cls.Outer.var
then
>>> Outer.Inner.work2()
abc
|
How to return Python exception info to jQuery.ajax POST call?
Question: I make a jQuery AJAX call in my HTML code:
$.ajax({
type: "POST",
url: "runrep.py",
data:
{
'my_data' : 'test'
},
success: function(html)
{
},
error:function(xhr,err)
{
alert("Failed\nreadyState: "+xhr.readyState+"\nstatus: "+xhr.status + "\nresponseText: "+xhr.responseText);
}
});
In my backend code 'runrep.py' I am deliberately trying to connect to a
MySQL DB which is down.
When I run the program, I see the MySQL unable-to-connect exception in the
Apache error_logs, but the UI page does not render any error.
class SampleClass:
def __init__(self):
self.conn = MySQLdb.connect(host="host",user="user",passwd="passwd",db="db")
self.cursor = self.conn.cursor(MySQLdb.cursors.DictCursor)
form = cgi.FieldStorage()
self.my_data = form["my_data"].value
What should I do so that program faults, be they exceptions or
run-time errors, get reported to the browser / client side? Even if I catch the
exceptions, how can I return the failure to the AJAX call accordingly?
Answer: Basically, you should catch the exception in your code and turn it into an error
response from the server framework. For example, if you are using Werkzeug/Flask (see [the
documentation](http://werkzeug.pocoo.org/docs/exceptions/)), you can do it
this way:
import werkzeug
class SampleClass:
def __init__(self):
try:
self.conn = MySQLdb.connect(host="host",user="user",passwd="passwd",db="db")
except Exception as e:
raise werkzeug.exceptions.HTTPException("Error: %s" % str(e) )
self.cursor = self.conn.cursor(MySQLdb.cursors.DictCursor)
form = cgi.FieldStorage()
self.my_data = form["my_data"].value
The raised exception should be converted by the server framework into a proper
HTTP response with an HTTP status code representing an error, such as 500.
When the server responds with such an error status code, you can then handle it
in the AJAX error callback.
Refer to your HTTP/WSGI framework for more information about exception
handling and responses with error status codes.
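Since the question's backend uses the plain `cgi` module rather than Werkzeug/Flask, here is a rough sketch of the same idea for that setup: catch the exception yourself and emit a `Status: 500` header, so jQuery's `error` callback fires and `xhr.responseText` contains the message (the connection parameters are placeholders):
import MySQLdb

print "Content-Type: text/plain"
try:
    conn = MySQLdb.connect(host="host", user="user", passwd="passwd", db="db")
except Exception as e:
    # An error status makes the browser treat the response as a failure,
    # so the jQuery error handler runs and can display the message.
    print "Status: 500 Internal Server Error"
    print
    print "Database error: %s" % e
else:
    print
    print "OK"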
|
Euro sign issue when reading an RTF file with Python
Question: I need to generate a document in RTF using Python and pyRTF. Everything is OK:
I have no problem with accented letters, and it even accepts the euro sign without
errors, but instead of `€` I get this sign: `¤`. I encode the strings in this
way:
x.encode("iso-8859-15")
I googled a lot, but I was not able to solve this issue. What do I have to do
to get the euro sign?
Answer: The RTF standard uses UTF-16, but shaped to fit the RTF command sequence
format. Documented at
<http://en.wikipedia.org/wiki/Rich_Text_Format#Character_encoding>. pyRTF
doesn't do any encoding for you, unfortunately; handling this has been on the
project's TODO but obviously they never got to that before abandoning the
library.
This is based on code I used in a project recently. I've now released this as
[`rtfunicode` on PyPI](http://pypi.python.org/pypi/rtfunicode), with support
for Python 2 and 3; the python 2 version:
import codecs
import re
_charescape = re.compile(u'([\x00-\x1f\\\\{}\x80-\uffff])')
def _replace(match):
codepoint = ord(match.group(1))
# Convert codepoint into a signed integer, insert into escape sequence
return '\\u%s?' % (codepoint if codepoint < 32768 else codepoint - 65536)
def rtfunicode_encode(text, errors):
# Encode to RTF \uDDDDD? signed 16-bit integers and replacement char
return _charescape.sub(_replace, text).encode('ascii')
class Codec(codecs.Codec):
def encode(self, input, errors='strict'):
return rtfunicode_encode(input, errors), len(input)
class IncrementalEncoder(codecs.IncrementalEncoder):
def encode(self, input, final=False):
return rtfunicode_encode(input, self.errors)
class StreamWriter(Codec, codecs.StreamWriter):
pass
def rtfunicode(name):
if name == 'rtfunicode':
return codecs.CodecInfo(
name='rtfunicode',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
streamwriter=StreamWriter,
)
codecs.register(rtfunicode)
Instead of encoding to "iso-8859-15" you can then encode to 'rtfunicode'
instead:
>>> u'\u20AC'.encode('rtfunicode') # EURO currency symbol
'\\u8364?'
Encode any text you insert into your RTF document this way.
Note that it only supports UCS-2 unicode (`\uxxxx`, 2 bytes), not UCS-4
(`\Uxxxxxxxx`, 4 bytes); `rtfunicode` 1.1 supports these by simply encoding
the UTF-16 surrogate pair to two `\uDDDDD?` signed integers.
|
Internationalization in Django doesn't get activated
Question: I have followed the documentation on how to do i18n, but the words still show
up in English.
**Settings.py:**
USE_I18N = True
LANGUAGES = (
('en', 'English'),
('de', 'German'),
)
LANGUAGE_CODE = 'de'
**Views:**
from django.utils.translation import ugettext as _
...
messages.set_level(request, messages.SUCCESS)
messages.success(request, _(u'An invitation was sent to %s.') % invitation.email)
I am developing on Ubuntu and gettext is installed. On the command line I
run:
django-admin.py makemessages -l de
I get a .po file and edit it accordingly:
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2012-06-01 17:42+0100\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <[email protected]>\n"
"Language: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#: MyBookmarks/templates/friend_invite.html:3
#: MyBookmarks/templates/friend_invite.html:4
msgid "Invite A Friend"
msgstr "hhhhh"
#: MyBookmarks/templates/friend_invite.html:11
msgid "send invite"
msgstr "gggggggggg"
#: MyBookmarksApp/forms.py:8
msgid "Friend's Name"
msgstr "ffffffffff"
#: MyBookmarksApp/forms.py:9
msgid "Friend's Email"
msgstr "ddddddddd"
#: MyBookmarksApp/views.py:123
#, python-format
msgid "An invitation was sent to %s."
msgstr "ssssssssssss %s"
#: MyBookmarksApp/views.py:127
msgid "An error happened when sending the invitation."
msgstr "aaaaaaa"
Then I run this command:
django-admin.py compilemessages
and the .mo file is created. I run the app and everything is still in plain
English. What am I missing?
**Update 2:**
I have now tried a new approach:
I have added
'django.middleware.locale.LocaleMiddleware',
to the MIDDLEWARE_CLASSES in settings.py
Then I have added the following to urls.py:
# i18n
(r'^i18n/', include('django.conf.urls.i18n')),
And in my base.html template I have added this:
{% load i18n %}
...
<div id="footer">
<form action="/i18n/setlang/" method="post">
<select name="language">
{% for lang in LANGUAGES %}
<option value="{{ lang.0 }}">{{ lang.1 }}</option>
{% endfor %}
</select>
<input type="submit" value="Go" />
</form>
</div>
Now I can see a dropdown on my page to change the language. Even after
changing the language to German, nothing changes. Something seems broken in
1.4...
**update 3:**
I have created a new simple test project to demonstrate the problem.
It is a very simple project and you can switch between German & English at
main page. You see the selected Language code actually changes, which is a
good sign but the translation simply doesn't happen. I wonder if this is a bug
that needs reported. Your cooperation is highly appreciated.
Please download from here: <http://www.chasebot.com/TestProject.zip>
(Once extracted in settings.py you need to change the absolute path to the
database)
Please let me know if you can reproduce it. Thank you
Answer: Do you use the set_language() redirect view?
<https://docs.djangoproject.com/en/1.4/topics/i18n/translation/#set-language-
redirect-view>
How do you change the current language? Using a form? Are you sure the
language actually changes? Print the current language in the HTML template. There
might already be a cookie stored in the browser with a non-"de" value,
preventing the switch to the desired language.
**UPDATE**: OK, it seems Django cannot find the locale directory. The
directory structure seems a bit messy. Read here:
<https://docs.djangoproject.com/en/dev/topics/i18n/translation/#how-django-
discovers-translations>
The directories listed in LOCALE_PATHS have the highest precedence, with the ones appearing first having higher precedence than the ones appearing later.
Then, it looks for and uses if it exists a locale directory in each of the installed apps listed in INSTALLED_APPS. The ones appearing first have higher precedence than the ones appearing later.
So, you don't have LOCALE_PATHS in settings.py, and locale/ is not a subdirectory of
an app listed in INSTALLED_APPS.
Just add
LOCALE_PATHS = ('/path/to/locale/', )
This should do the trick (I've tested with your test project).
|
How do you get checkbox selections from a CustomTreeCtrl
Question: I'm working with a CustomTreeCtrl with checkboxes and I can't figure out how
to determine which checkboxes are selected. I looked at
<http://xoomer.virgilio.it/infinity77/wxPython/Widgets/wx.TreeCtrl.html#GetSelection>
and put this together:
import string
import os
import sys
import wx
import wx.lib.agw.customtreectrl as CT
class MyFrame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent, -1, "CustomTreeCtrl Demo")
custom_tree = CT.CustomTreeCtrl(self, agwStyle=wx.TR_DEFAULT_STYLE)
root = custom_tree.AddRoot("The Root Item")
for y in range(5):
last = custom_tree.AppendItem(root, "item %d" % y)
for z in range(5):
item = custom_tree.AppendItem(last, "item %d" % z, ct_type=1)
self.Bind(CT.EVT_TREE_ITEM_CHECKED, self.ItemChecked)
def ItemChecked(self, event):
print("Somebody checked something")
print(event.GetSelections())
app = wx.PySimpleApp()
frame = MyFrame(None)
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
When I check a box, I get the Traceback: "AttributeError: 'TreeEvent' object
has no attribute 'GetSelections'" Any suggestions on how to read which boxes
are selected would be great!
Answer: The `event` object in question doesn't have a `GetSelections` method. It does
have a `GetSelection`, which will tell you which item was selected at that
event. If you want to get all of the selected items inside `ItemChecked`,
rename `custom_tree` to `self.custom_tree` and then you're allowed to call
`self.custom_tree.GetSelections()` inside `ItemChecked`.
If in the future you want to know what kind of methods are available for some
event object, you can put `print(dir(event))` in your handler.
The custom tree control doesn't have a method to get the checked items. One
thing that you could do is create a `self.checked_items` list in your frame,
and maintain it in your `ItemChecked` method. This list could hold either the
string values for the items or the items themselves. For instance,
class MyFrame(wx.Frame):
def __init__(self, parent):
# ....
self.checked_items = []
# ....
def ItemChecked(self, event):
if event.IsChecked():
self.checked_items.append(event.GetItem())
# or to store the item's text instead, you could do ...
# self.checked_items.append(self.custom_tree.GetItemText(event.GetItem()))
else:
self.checked_items.remove(event.GetItem())
# or ...
# self.checked_items.remove(self.custom_tree.GetItemText(event.GetItem()))
|
Python name space issues with ipython parallel
Question: I'm starting to experiment with the IPython parallel tools and have an issue.
I start up my python engines with:
ipcluster start -n 3
Then the following code runs fine:
from IPython.parallel import Client
def dop(x):
rc = Client()
dview = rc[:]
dview.block=True
dview.execute('a = 5')
dview['b'] = 10
ack = dview.apply(lambda x: a+b+x, x)
return ack
ack = dop(27)
print ack
returns [42, 42, 42] as it should. But if I break the code into different
files: dop.py:
from IPython.parallel import Client
def dop(x):
rc = Client()
dview = rc[:]
dview.block=True
dview.execute('a = 5')
dview['b'] = 10
print dview['a']
ack = dview.apply(lambda x: a+b+x, x)
return ack
and try the following:
from dop import dop
ack = dop(27)
print ack
I get errors from each engine:
[0:apply]: NameError: global name 'a' is not defined
[1:apply]: NameError: global name 'a' is not defined
[2:apply]: NameError: global name 'a' is not defined
I don't get it...why can't I put the function in a different file and import
it?
Answer: Quick answer: decorate your function with `@interactive` from
`IPython.parallel.util`[1] if you want it to have access to the engine's
global namespace:
from IPython.parallel.util import interactive
f = interactive(lambda x: a+b+x)
ack = dview.apply(f, x)
The actual explanation:
the IPython user namespace is essentially the module `__main__`. This is where
code is run when you do `execute('a = 5')`.
If you define a function interactively, its module is also `__main__`:
lam = lambda x: a+b+x
lam.__module__
'__main__'
When the Engine unserializes a function, it does so in the appropriate global
namespace for the function's module, so functions defined in `__main__` in
your client are also defined in `__main__` on the Engine, and thus have access
to `a`.
Once you put it in a file and import it, then the functions are no longer
attached to `__main__`, but the module `dop`:
from dop import dop
dop.__module__
'dop'
All functions conventionally defined in that module (lambdas included) will
have this value, so when they are unpacked on the Engine their global
namespace will be that of the `dop` module, _not_ `__main__`, so your 'a' is
not accessible.
For this reason, IPython provides a simple `@interactive` decorator that
results in any function being unpacked as if it were defined in `__main__`,
regardless of where the function is actually defined.
For an example of the difference, take this `dop.py`:
from IPython.parallel import Client
from IPython.parallel.util import interactive
a = 1
def dop(x):
rc = Client()
dview = rc[:]
dview['a'] = 5
f = lambda x: a+x
return dview.apply_sync(f, x)
def idop(x):
rc = Client()
dview = rc[:]
dview['a'] = 5
f = interactive(lambda x: a+x)
return dview.apply_sync(f, x)
Now, `dop` will use 'a' from the dop module, and `idop` will use 'a' from your
engine namespaces. The only difference between the two is that the function
passed to apply is wrapped in
[`@interactive`](https://github.com/ipython/ipython/blob/master/IPython/parallel/util.py#L227):
from dop import dop, idop
print dop(5) # 6
print idop(5) # 10
[1]: In IPython >= 0.13 (upcoming release), `@interactive` is also available
as `from IPython.parallel import interactive`, where it always should have
been.
|
Python Changing the format of a text file to a new format
Question: I am given a text file with the format below:
3 Bham Hoover - Vestiva
123 234 1 456 876 1 876 745 1
0
4 Bham Vestiva - Greensprings
235 876 1 647 987 1 098 765 1 234 546 1
0
This goes on for several more lines, but I am trying to convert this format to
the following:
Event
Disconnect branch from bus 123 to 234 circuit 1
Disconnect branch from bus 456 to 876 circuit 1
Disconnect branch from bus 876 to 745 circuit 1
end
Event
Disconnect branch from bus 235 to 876 circuit 1
Disconnect branch from bus 647 to 987 circuit 1
Disconnect branch from bus 098 to 765 circuit 1
Disconnect branch from bus 234 to 546 circuit 1
end
Answer:
from itertools import islice
with open('file.txt', 'r') as f:
# iterate over every 3rd line, starting with the 2nd
for line in islice(f, 1, None, 3):
parts = line.split()
print 'Event'
# iterate over 3-element chunks
for x in zip(*(iter(parts),) * 3):
print 'Disconnect branch from bus %s to %s circuit %s' % x
print 'end'
Output:
Event
Disconnect branch from bus 123 to 234 circuit 1
Disconnect branch from bus 456 to 876 circuit 1
Disconnect branch from bus 876 to 745 circuit 1
end
Event
Disconnect branch from bus 235 to 876 circuit 1
Disconnect branch from bus 647 to 987 circuit 1
Disconnect branch from bus 098 to 765 circuit 1
Disconnect branch from bus 234 to 546 circuit 1
end
Credits to the chunking code go to [Iteration over list
slices](http://stackoverflow.com/questions/1335392/iteration-over-list-
slices/1335618#1335618)
|
python file in-out adding last three characters
Question: OK, so a basic Python question. I have a simple script to replace text in
HTML (txt) files. Some of my HTML had XHTML-style line-break tags
that I wanted to replace, so I wrote a Python script to replace the
XHTML-style versions with regular HTML line breaks. Sometimes when I run the
code, it works fine, but sometimes it rewrites the file, and at the end the
last two characters are repeated after a newline. I'm running 2.7 on OS X
Lion, if that matters. Here's the code:
import sys
import re
def replace_text(filename):
with open(filename, 'r+') as f:
p = re.compile( '(</br>|<br/>|<br />)')
f_data = open(filename, 'r+').read()
f.write(p.sub('<br>', f_data))
f.close()
def main():
args = sys.argv[1:]
if not args:
print 'usage: [--summaryfile] file [file ...]'
sys.exit(1)
summary = False
if args[0] == '--summaryfile':
summary = True
del args[0]
for filename in args:
replace_text(filename)
if __name__ == '__main__':
main()
So when I run it with a file (say 'foo.txt') that has the content:
</br> <br/> <br /> <br> poop
sometimes it outputs:
<br> <br> <br> <br> poop
and sometimes it outputs:
<br> <br> <br> <br> poop
op
Huh? Am I missing something simple? I'm also using
Gedit 3.2.6 as an editor. Any help?
Answer: You're shortening the content of the file, but you're not shortening its
length. Don't forget to do so via `.truncate()` before closing.
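Putting that together, the function could look roughly like this (a sketch of the same logic, with the second `open()` removed and the truncation added):
import re

def replace_text(filename):
    p = re.compile('(</br>|<br/>|<br />)')
    with open(filename, 'r+') as f:
        f_data = f.read()
        f.seek(0)       # rewind before overwriting
        f.write(p.sub('<br>', f_data))
        f.truncate()    # drop leftover bytes from the longer original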
|
Python: Remove Duplicates from Text File
Question: I am new to Python. I want to remove duplicate words,
and apart from purely alphabetic English words I want to delete everything else, including blank lines.
I only want to extract purely alphabetic words.
I have a text file containing something like the following:
aaa
bbb
aaa223
aaa
ccc
ddd
kei60:
sj@6999
jack02
jparkj
After processing out the duplicates and non-alphabetic entries, I want to get the following result:
aaa
bbb
ccc
ddd
jparkj
The following is the script source I tried.
If anyone can help me, it is much appreciated! Thanks!
# read a text file, replace multiple words specified in a dictionary
# write the modified text back to a file
import re
def replace_words(text, word_dic):
"""
take a text and replace words that match a key in a dictionary with
the associated value, return the changed text
"""
rc = re.compile('|'.join(map(re.escape, word_dic)))
def translate(match):
return word_dic[match.group(0)]
return rc.sub(translate, text)
def main():
test_file = "prxtest.txt"
# read the file
fin = open(test_file, "r")
str2 = fin.read()
fin.close()
# the dictionary has target_word:replacement_word pairs
word_dic = {
'.': '\n',
'"': '\n',
'<': '\n',
'>': '\n',
'!': '\n',
"'": '\n',
'(': '\n',
')': '\n',
'[': '\n',
']': '\n',
'@': '\n',
'#': '\n',
'$': '\n',
'%': '\n',
'^': '\n',
"&": '\n',
'*': '\n',
'_': '\n',
'+': '\n',
'-': '\n',
'=': '\n',
'}': '\n',
'{': '\n',
'"': '\n',
";": '\n',
':': '\n',
'?': '\n',
',': '\n',
'`': '\n',
'~': '\n',
'1': '\n',
'2': '\n',
'3': '\n',
'4': '\n',
"5": '\n',
'6': '\n',
'7': '\n',
'8': '\n',
'9': '\n',
'0': '\n',
' ': '\n'}
# call the function and get the changed text
str3 = replace_words(str2, word_dic)
# write changed text back out
fout = open("clean.txt", "w")
fout.write(str3)
fout.close()
if __name__ == "__main__":
main()
Answer: This keeps only the lines that contain nothing but letters, writing each such line out once:
fin = open(test_file, 'r')
fout = open('clean.txt', 'w')
s = set()
for line in fin:
if line.rstrip().isalpha():
if not line in s:
s.add(line)
fout.write(line)
fin.close()
fout.close()
|
Capture output from subprocess.call that I have no control over
Question: I'm testing a piece of Python code that uses subprocess.call(), so I have no
control over that function call. I need to capture the output from that system
call to do assertions. I tried to set sys.stdout to a StringIO object, but that
doesn't capture the system call's output. How do I solve this problem? Here's
my code so far:
Code to test (I have no control over this):
def runFile(filename):
check_call("./" + filename)
My attempt to capture system call output:
import StringIO
oldout = sys.stdout
try:
sys.stdout = StringIO.StringIO()
runFile("somefile")
output = sys.stdout.getvalue()
assert output == "expected-output"
finally:
sys.stdout = oldout
Answer: You could redirect your own stdout by
rd, wr = os.pipe()
oldstdout = os.dup(1)
os.dup2(wr, 1)
os.close(wr)
and then read from `rd` while the external process is running.
This might not work, as the external process might write more than fits in the
pipe buffer. In this case, you would have to spawn a reading thread.
Afterwards, you restore the old state with
os.dup2(oldstdout, 1)
os.close(oldstdout)
os.close(rd)
and continue normally.
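If the pipe-buffer limit is a concern, here is a rough sketch of the reading-thread variant, using only the standard library (`capture_fd1` is just a name made up for this example):
import os
import sys
import threading

def capture_fd1(func, *args, **kwargs):
    # Redirect file descriptor 1 into a pipe so output written by
    # child processes (not just Python's sys.stdout) is captured.
    rd, wr = os.pipe()
    sys.stdout.flush()
    oldstdout = os.dup(1)
    os.dup2(wr, 1)
    os.close(wr)

    chunks = []
    def drain():
        while True:
            data = os.read(rd, 4096)
            if not data:
                break
            chunks.append(data)
    t = threading.Thread(target=drain)
    t.start()
    try:
        func(*args, **kwargs)
    finally:
        sys.stdout.flush()
        # Restoring fd 1 closes the pipe's last write end,
        # so the drain thread sees EOF and exits.
        os.dup2(oldstdout, 1)
        os.close(oldstdout)
    t.join()
    os.close(rd)
    return ''.join(chunks)

output = capture_fd1(runFile, "somefile")
assert output == "expected-output"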
|
os.system doesn't work in Python
Question: I'm working on Windows Vista, and I'm running Python from the DOS command
prompt. I have this simple Python program (it's actually one .py file named test.py):
import os
os.system('cd ..')
When I execute "python test.py" from the DOS prompt, it doesn't work. For
example, if the DOS prompt before execution was this:
C:\Directory>
after execution it should be this:
C:\>
Help please.
Answer: First, you generally don't want to use `os.system` - take a look at the
[subprocess module](http://docs.python.org/library/subprocess.html) instead.
But that won't solve your immediate problem (just some you might have down
the track). The actual reason `cd` won't work is that it changes the
working directory of the _subprocess_, and doesn't affect the process Python
is running in - to do that, use
[`os.chdir`](http://docs.python.org/library/os.html#os.chdir).
|
Python Tkinter - add external function as command in menu
Question: I'm having a problem with Tkinter menu. Here is the code for my gui.py file:
from tkinter import *
from SS2 import file
class AppUI(Frame):
def __init__(self, master=None):
Frame.__init__(self, master, relief=SUNKEN, bd=2)
self.menubar = Menu(self)
menu = Menu(self.menubar, tearoff=0)
self.menubar.add_cascade(label="File", menu=menu)
menu.add_command(label="Open", command=file.open())
menu.add_command(label="Save")
menu.add_command(label="Save as...")
menu.add_command(label="Exit",
command=root.quit)
menu = Menu(self.menubar, tearoff=0)
self.menubar.add_cascade(label="Image", menu=menu)
menu.add_command(label="Size")
menu.add_command(label="Rotate")
menu.add_command(label="Crop")
menu = Menu(self.menubar, tearoff=0)
self.menubar.add_cascade(label="Color", menu=menu)
menu = Menu(self.menubar, tearoff=0)
self.menubar.add_cascade(label="Filter", menu=menu)
menu.add_command(label="Blur")
menu.add_command(label="Contour")
menu.add_command(label="Emboss")
menu.add_command(label="Smooth")
menu.add_command(label="Sharpen")
menu = Menu(self.menubar, tearoff=0)
self.menubar.add_cascade(label="Help", menu=menu)
menu.add_command(label="About")
try:
self.master.config(menu=self.menubar)
except AttributeError:
# master is a toplevel window (Python 1.4/Tkinter 1.63)
self.master.tk.call(master, "config", "-menu", self.menubar)
self.canvas = Canvas(self, bg="white", width=400, height=400,
bd=0, highlightthickness=0)
self.canvas.pack()
root = Tk()
app = AppUI(root)
app.pack()
root.mainloop()
And here is the code for my file.py:
from tkinter import *
from tkinter.filedialog import askopenfilename
def open():
filename = askopenfilename(filetypes=[("allfiles","*"),("imagesfiles","*.png")])
The problem is, when I run the gui.py file, the file dialog always appears
before the menu, and when I close it and try to access it through the File >
Open menu item, nothing happens. What did I do wrong here? Please help and thanks in
advance.
Answer: Commands should give the name of the method. What you're doing is _calling_
the method. Tkinter will then use the return value of that method as the
button command. Solution: leave out the brackets.
..., command=file.open
instead of
..., command=file.open()
You did it right for the exit button though!
|
GeocoderDotUS... Syntax of Python check for 'None'
Question:
import csv
from geopy import geocoders
import time
g = geocoders.GeocoderDotUS()
spamReader = csv.reader(open('locations.csv', 'rb'), delimiter='\t', quotechar='|')
f = open("output.txt",'w')
for row in spamReader:
a = ', '.join(row)
#exactly_one = False
time.sleep(1)
place, (lat, lng) = g.geocode(a)
if None in (lat, lng):
f.write("none")
else:
b = str(place) + "," + "[" + str(lat) + "," + str(lng) + "]" + "\n"
print b
f.write(b)
So I have established that if GeocoderDotUS does not find an address it will
return None. I have written some script to attempt to check for None; however, I
still seem to be getting this traceback. I am a bit perplexed.
Traceback (most recent call last):
File "C:\Users\Penguin\workspace\geocode-nojansdatabase\src\GeocoderDotUS.py", line 17, in <module>
place, (lat, lng) = g.geocode(a)
TypeError: 'NoneType' object is not iterable
Is there some error in my None-check syntax? Thanks in advance for any
help.
Answer: As you can see in the error message, the problem is in the `place, (lat, lng)
= ...` line.
It is **the g.geocode call** which returns **None** (and you try to assign
this to the `place` variable and the `lat, lng` tuple immediately, which
obviously must fail).
So try something along these lines:
result = g.geocode(a)
if result:
place, (lat, lng) = result
else:
# ...
|
How to securely store LWPCookieJar objects in python?
Question: I'm using a `cookielib.LWPCookieJar` object in Python 2.6 to save cookies and
re-load them on future invocations of my script. The `save()` method produces
files with the default permissions - that is, other users on my system can
read (and presumably then use) cookies I save this way.
It seems to me that persistent cookies should usually be saved in a user-only
readable file (umask 077), for security. Is there a way to do this without re-
implementing the `save()` method in my own subclass?
Answer: I agree that this is important -- session IDs are often saved as cookies.
Would it suffice to save the cookie to a file in a directory which only the
user can access?
os.mkdir( myTmpDir, 0700 )
# Now save the CookieJar in there...
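If you would rather protect the cookie file itself than a whole directory, a small sketch (the filename is just an example) is to create the file with owner-only permissions first; `save()` simply reopens the existing file, so its restrictive mode is kept:
import os
import cookielib

jar = cookielib.LWPCookieJar()
cookie_file = os.path.expanduser('~/.myscript_cookies')

fd = os.open(cookie_file, os.O_WRONLY | os.O_CREAT, 0600)
os.close(fd)
os.chmod(cookie_file, 0600)  # in case the file already existed with looser permissions
jar.save(cookie_file)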
|
How to use dj-database-url while connecting with postgresql in heroku using python
Question: I'm here because I'm really new to the Heroku/Python/Django/PostgreSQL
stack. I have googled for how to use dj-database-url and I don't understand
why I have to use it when developing a Python application that needs to
connect to PostgreSQL. I have added PostgreSQL (dev version) as an add-on to my
application, but I don't know how to tell the app that I want it to use my
db.
So, the short question is: how do I indicate to dj-database-url that I want to
use my database?
Thanks for your time and answers; I'll appreciate your help because this is
very urgent!
Answer: dj-database-url is a utility that helps you load your database settings into
your DATABASES dictionary from the DATABASE_URL environment variable. Heroku uses environment
variables for your database and other add-ons. To begin using your database
you'd simply use the code below to set up your DATABASES dictionary:
import dj_database_url
DATABASES['default'] = dj_database_url.config()
And maybe stash DATABASE_URL in your virtualenv activate script.
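For local development, where the DATABASE_URL environment variable usually isn't set, `config()` also accepts a fallback URL (the URL below is just a placeholder for your own local database):
import dj_database_url

DATABASES = {
    'default': dj_database_url.config(default='postgres://localhost/mylocaldb')
}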
|
Understanding global variable in Python
Question: I came across a strange issue in Python when using global variables.
I have two modules(files):`mod1.py` and `mod2.py`
`mod1` tries to modify the global variable `var` defined in `mod2`. But the
`var` in mod2 and `var` in `mod` seems to be two different things. Thus, the
result shows that such modification does not work.
Here is the code:
#code for mod2.py
global var
var = 1
def fun_of_mod2():
print var
#code for mod1.py
from mod2 import var,fun_of_mod2
global var #commenting out this line yields the same result
var = 2 #I want to modify the value of var defined in mod2
fun_of_mod2() #but it prints: 1 instead of 2. Modification failed :-(
Any hint on why this happens? And how can I modify, from `mod1`, the value of `var` defined
in `mod2`?
Thanks
Answer: When you import `var` into `mod1`:
from mod2 import var,fun_of_mod2
You are giving it the name `var` _in mod1's namespace._ It is as if you did
this:
import mod2
var = mod2.var
fun_of_mod2 = mod2.fun_of_mod2
del mod2
In other words, there are now two names for the value, `mod1.var` and
`mod2.var`. They are the same at first, but when you reassign `mod1.var`,
`mod2.var` still points to the same thing.
What you want to do is just:
import mod2
Then access and assign the variable as `mod2.var`.
It's important to note that global variables in Python are not truly global.
They are global only to the module they're declared in. To access global
variables inside another module, you use the `module.variable` syntax. The
`global` statement can be used inside a function to allow a module-global name
to be assigned to (without it, assigning to a variable makes it a local
variable in that function). It has no other effect.
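Applied to the original files, `mod1.py` would then look roughly like this:
#code for mod1.py
import mod2

mod2.var = 2        # rebinds the name inside mod2's own namespace
mod2.fun_of_mod2()  # prints: 2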
|
Python - Multithreaded Word / Line Count
Question: I'm trying to get a handle on multithreading in Python. I have working code
that calculates the number of words, the number of lines with text, and
creates a dict with the count of each word. It runs fast on small files like
the one noted in the code comments. However, I usually use glob to pull in
multiple files, and when I do the run times increase significantly. Meanwhile,
since my script is single-threaded, I see three other cores sitting
idle while one maxes out.
I thought I would give Python's threading module a shot; here's what I
have done so far (non-working):
#!/bin/python
#
# test file: http://www.gutenberg.org/ebooks/2852.txt.utf-8
import fileinput
from collections import defaultdict
import threading
import time
inputfilename = 'pg2852.txt'
exitFlag = 0
line = []
line_counter = 0
tot_words = 0
word_dict = defaultdict(int)
def myCounters( threadName, delay):
for line in fileinput.input([inputfilename]):
line = line.strip();
if not line: continue
words = line.split()
tot_words += len(words)
line_counter += 1
for word in words:
word_dict[word] += 1
print "%s: %s:" %( threadName, time.ctime(time.time()) )
print word_dict
print "Total Words: ", tot_words
print "Total Lines: ", line_counter
try:
thread.start_new_thread( myCounters, ("Thread-1", 2, ) )
thread.start_new_thread( myCounters, ("Thread-2", 4, ) )
except:
print "Error: Thread Not Started"
while 1:
pass
For those of you who try this code: it doesn't work. I assume that I need to
break the input file into chunks and merge the output somehow (map/reduce?), or
perhaps there's a simpler solution?
Edit:
Maybe something like:
1. open the file,
2. break it into chunks
3. feed each chunk to a different thread
4. get counts and build dict on each chunk
5. merge counts / dict
6. return results
Answer: First of all, you are correct that you need to use parallel processes rather
than parallel threads. Doing _this kind of task_ [see ETA below] will not
scale well to multiple threads under python, due to the Global Interpreter
Lock (GIL).
If you wanted to process a single file in parallel, the obvious way would be
to first check the file size, then assign equal-sized chunks to multiple
processes. That would just involve telling each process from what position in
the file to start, and what position to end. (Of course, you would have to be
careful not to count any words or lines twice. A simple approach would be to
have each process ignore the initial bytes until it gets to the start of a
line, and then start counting).
**However** , you state in your question that you will be using a glob to
process multiple files. So instead of taking the complex route of chunking
files and assigning the chunks to different processes, an easier option is
simply assigning different _files_ to different processes.
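A rough sketch of that per-file approach with `multiprocessing.Pool` (this assumes Python 2.7, for `collections.Counter`; the glob pattern is only a placeholder):
import glob
from collections import Counter
from multiprocessing import Pool

def count_file(path):
    words, lines = 0, 0
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            parts = line.split()
            words += len(parts)
            lines += 1
            counts.update(parts)
    return words, lines, counts

if __name__ == '__main__':
    pool = Pool()  # one worker process per core by default
    results = pool.map(count_file, glob.glob('*.txt'))
    word_dict = Counter()
    for w, l, c in results:
        word_dict.update(c)
    print "Total Words: ", sum(r[0] for r in results)
    print "Total Lines: ", sum(r[1] for r in results)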
* * *
**ETA:**
Using threads in Python is suitable for certain use cases, such as using I/O
functions that block for a long time. @uselpa is right that if processing is
I/O bound then threads may perform well, but that is not the case here because
the bottleneck is actually the parsing, not the file I/O. This is due to the
performance characteristics of Python as an interpreted language; in a
compiled language, the I/O is more likely to be the bottleneck.
I make these claims because I have just done some measuring based on the
original code (using a test file containing 100 concatenated copies of
pg2852.txt):
* Running as a single thread took about 2.6s to read and parse the file, but only 0.2s when I commented out the parsing code.
* Running two threads in parallel (reading from the same file) took 7.2s, but two single-threaded processes launched in parallel took only 3.3s to _both_ complete.
|
Hadoop streaming crashes in the middle of map/reduce operation
Question: I'm using hadoop 1.0.1 on a single node and I'm trying to stream a tab
delimited file using python 2.7. I can get Michael Noll's word count scripts
to run using hadoop/python, but can't get this extremely simple mapper and
reducer to work that just duplicates the file. Here's the mapper:
import sys
for line in sys.stdin:
line = line.strip()
print '%s' % line
Here's the reducer:
import sys
for line in sys.stdin:
line = line.strip()
print line
Here's part of the input file:
1 857774.000000
2 859164.000000
3 859350.000000
...
The mapper and reducer work fine within linux:
cat input.txt | python mapper.py | sort | python reducer.py > a.out
but after I chmod the mapper and reducer, move the input file to hdfs and
check that it's there and run:
bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar -file mapperSimple.py -mapper mapperSimple.py -file reducerSimple.py -reducer reducerSimple.py -input inputDir/* -output outputDir
I get the following error:
12/06/03 10:19:11 INFO streaming.StreamJob: map 0% reduce 0%
12/06/03 10:20:15 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201206030550_0003_m_000001
12/06/03 10:20:15 INFO streaming.StreamJob: killJob...
Streaming Job Failed!
Any ideas? Thanks.
Answer: Do your Python files have [shebang /
hashbang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) headers? I suspect
the problem is that when Java comes to execute the mapper Python file, it asks
the OS to run the file, and without the shebang / hashbang line the OS
doesn't know how to execute it. I would also ensure your files are
marked with executable permissions (`chmod a+x mapperSimple.py`):
#!/usr/bin/python
import sys
for line in sys.stdin:
line = line.strip()
print '%s' % line
Try this from the command line to ensure the shell knows to execute the files
with the python interpreter:
cat input.txt | ./mapper.py | sort | ./reducer.py > a.out
|
Python / CGI - Upload file attempt returns an empty page
Question: I really searched about 50 related pages but never saw a problem similar to
mine. When I press the submit button, it calls the script but the script
returns an empty page and I see no file was uploaded. There is no typing error
in my codes, I checked it several times and I really need this code running
for my project. What might be the problem? I am running apache under ubuntu
and my codes are:
html code:
<html><body>
<form enctype="multipart/form-data" action="save_file.py" method="post">
<p>File: <input type="file" name="file"></p>
<p><input type="submit" value="Upload"></p>
</form>
</body></html>
python code:
#!/usr/bin/env python
import cgi, os
import cgitb; cgitb.enable()
try: #windows needs stdio set for binary mode
import msvcrt
msvcrt.setmode (0, os.O_BINARY)
msvcrt.setmode (1, os.O_BINARY)
except ImportError:
pass
form = cgi.FieldStorage()
#nested FieldStorage instance holds the file
fileitem = form['file']
#if file is uploaded
if fileitem.filename:
#strip leading path from filename to avoid directory based attacks
fn = os.path.basename(fileitem.filename)
open('/files' + fn, 'wb').write(fileitem.file.read())
message = 'The file "' + fn + '" was uploaded successfully'
else:
message = 'No file was uploaded'
print """\
Content-Type: text/html\n
<html><body>
<p>%s</p>
</body></html>
""" % (message,)
Answer: I just tested your script, with a few small corrections to the paths to make
it work for me locally. With the paths set correctly, and permissions set
properly, this code does work fine.
Here are the things to make sure of:
1. In your html file's form properties, make sure you are pointing to the python script that lives in a cgi-bin: `action="/cgi-bin/save_file.py"`. For me, I have a cgi-bin at the root of my web server, and I placed the python script there. It will not work if you are running the script from a standard document location on the web server
2. Make sure your save_file.py has executable permissions: `chmod 755 save_file.py`
3. In your save_file.py, ensure that you are building a valid path to open the file for saving. I made mine absolute just for testing purposes, but something like this: `open(os.path.join('/path/to/upload/files', fn), 'wb')`
With those points set correctly, you should not have any problems.
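As a rough sketch of point 3 (the upload directory below is an assumed example,
and `fileitem` is the object from the question's script):

    import os

    upload_dir = '/var/www/uploads'            # assumed example directory
    fn = os.path.basename(fileitem.filename)
    with open(os.path.join(upload_dir, fn), 'wb') as out:
        out.write(fileitem.file.read())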
|
Is it a good practice to use decorator to convert all Unicode strings to binary strings?
Question: Recently I wrote a little script to connect to servers via telnet (for some
reason all the specific programs refused to connect). The program was this:
import telnetlib
tn = telnetlib.Telnet('www.google.com',80)
tn.write('GET / HTTP/1.0\n')
tn.write('Host: www.google.com\n')
tn.write('User-agent: Chrome\n\n')
out = tn.read_all()
print(out)
It worked perfectly in Python 2.x. But then I started using it in Python 3.x
and encounter the problem, which was that all the strings as of Python 3.x are
Unicode objects. So I started thinking how I can remedy it and came out with a
solution using a decorator.
Actually I could have put `b` before every string (there were only a few of
them). But I wanted to do it as if it were a huge code snippet (just for
training and to practice good coding style).
The solution with a decorator is the following:
def to_binary(f):
def wrapper(self, *args):
s,*args = args
s = s.encode() #this line performs conversion to binary string
return f(self, s, *args)
return wrapper
telnetlib.Telnet.write = to_binary(telnetlib.Telnet.write)
My question is:
Is this a good solution to be used in the projects in the future by me? Or I
had better prepend every string in `tn.write` with `b`, or maybe even use
something else?
Answer: It's a terrible solution, if for no other reason than it duplicates what one
of the classes in the
[`codecs`](http://docs.python.org/py3k/library/codecs.html#streamwriter-
objects) module does.
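That said, for the short snippet in the question, the simplest route (which the
question already hints at) is to pass bytes explicitly instead of patching
`Telnet.write` — a sketch:

    import telnetlib

    tn = telnetlib.Telnet('www.google.com', 80)
    tn.write(b'GET / HTTP/1.0\n')                       # bytes literals work on 2.x and 3.x
    tn.write('Host: www.google.com\n'.encode('ascii'))  # or encode explicitly
    tn.write(b'User-agent: Chrome\n\n')
    print(tn.read_all())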
|
Python code, extracting extensions
Question: > **Possible Duplicate:**
> [In python, how can I check if a filename ends in '.html' or
> '_files'?](http://stackoverflow.com/questions/10873777/in-python-how-can-i-
> check-if-a-filename-ends-in-html-or-files)
import os
path = '/Users/Marjan/Documents/Nothing/Costco'
print path
names = os.listdir(path)
print len(names)
for name in names:
print name
Here is the code I've been using, it lists all the names in this category in
terminal. There are a few filenames in this file (Costco) that don't have
.html and _files. I need to pick them out, the only issue is that it has over
2,500 filenames. Need help on a code that will search through this path and
pick out all the filenames that don't end with .html or _files. Thanks guys
Answer:
    for name in names:
        if name.endswith('.html') or name.endswith('_files'):
            continue
        # do stuff with the remaining file names
Usually
[`os.path.splitext()`](http://docs.python.org/library/os.path.html#os.path.splitext)
would be more appropriate if you needed the extension of a file, but in this
case `endswith()` is perfectly fine.
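For completeness, a quick illustration of `os.path.splitext()` (the file name is
just an example):

    import os.path

    root, ext = os.path.splitext('report.html')
    # root == 'report', ext == '.html'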
|
Rerunning the Django Project
Question: Initially, when I set everything up, I didn't have any error when I typed python manage.py
runserver. However, when I installed MySQL and changed the admins and databases in
my settings.py, I can't seem to run the server again.
Setting.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': '', # Or path to database file if using sqlite3.
'USER': '', # Not used with sqlite3.
'PASSWORD': '', # Not used with sqlite3.
'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
'PORT': '', # Set to empty string for default. Not used with sqlite3.
}
}
When I run python manage.py runserver:
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
Answer: You have to make sure to activate your `env`, as you noticed. The
`django` package only exists in your env; if the env is not active you cannot
access any part of the django package (`django.core.management`). There are
many tutorials explaining how virtualenv works:
<http://www.arthurkoziel.com/2008/10/22/working-virtualenv/>
<http://iamzed.com/2009/05/07/a-primer-on-virtualenv/>
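A minimal sketch of the workflow (the env path is an assumption — adjust it to
wherever your virtualenv lives):

    source /path/to/env/bin/activate   # activate the env that has Django installed
    python manage.py runserver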
|
Google App Engine: "We can not locate data file'
Question: I am a beginner with Google App Engine. My goal is to port an existing
webpage to GAE. The difficulty I am having centers around the location of the
.js files. To get it to run on my local machine, I placed .js files in a
static directory with references in the html like
<link href="/static/tblDz_Qs_clinical.js"....
Everything works fine on my local machine, but when I deploy, I get a message
that
'we cannot locate the datafile
http://dermdudes.appspot.com/static/tblDz_Qs_clinical.js'
Here is the main.py:
import os
import webapp2
import jinja2
template_dir = os.path.join(os.path.dirname(__file__), 'templates')
jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir), autoescape=False)
class Handler(webapp2.RequestHandler):
def write(self, *a, **kw):
self.response.out.write(*a, **kw)
def render_str(self, template, **params):
t = jinja_env.get_template(template)
return t.render(params)
def render(self, template, **kw):
self.write(self.render_str(template, **kw))
class MainHandler(Handler):
def get(self):
self.render('DD_querydriven2.html')
app = webapp2.WSGIApplication([('/', MainHandler)],
debug=True)
here is the app.yaml:
application: dermdudes
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /static
static_dir: static
- url: .*
script: main.app
libraries:
- name: webapp2
version: "2.5.1"
- name: jinja2
version: latest
here is the reference in the html that is triggering the message:
<link href="/static/tblDz_Qs_clinical.js" type="application/json" rel="exhibit/data" />
If I can only get GAE to find the .js files then everything will be perfect.
Thanks for your help.
Answer: Psychic debugging: case-sensitive file names? Make the filename lowercase and
change the reference in the HTML to lowercase as well.
|
alignment of stacked subplots
Question: EDIT:
I found myself an answer (see below) how to align the images within their
subplots:
for ax in axes:
ax.set_anchor('W')
EDIT END
I have some data I plot with imshow. It's long in x direction, so I break it
into multiple lines by plotting slices of the data in vertically stacked
subplots. I am happy with the result but for the last subplot (not as wide as
the others) which I want left aligned with the others.
The code below is tested with Python 2.7.1 and matplotlib 1.2.x.
#! /usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
x_slice = [0,3]
y_slices = [[0,10],[10,20],[20,30],[30,35]]
d = np.arange(35*3).reshape((35,3)).T
vmin = d.min()
vmax = d.max()
fig, axes = plt.subplots(len(y_slices), 1)
for i, s in enumerate(y_slices):
axes[i].imshow(
d[ x_slice[0]:x_slice[1], s[0]:s[1] ],
vmin=vmin, vmax=vmax,
aspect='equal',
interpolation='none'
)
plt.show()
results in

Given the tip by Zhenya I played around with axes.get/set_position. I tried to
halve the width but I don't understand the effect it has:
for ax in axes:
print ax.get_position()
p3 = axes[3].get_position().get_points()
x0, y0 = p3[0]
x1, y1 = p3[1]
# [left, bottom, width, height]
axes[3].set_position([x0, y0, (x1-x0)/2, y1-y0])

`get_position` gives me the bbox of each subplot:
for ax in axes:
print ax.get_position()
Bbox(array([[ 0.125 , 0.72608696],
[ 0.9 , 0.9 ]]))
Bbox(array([[ 0.125 , 0.5173913 ],
[ 0.9 , 0.69130435]]))
Bbox(array([[ 0.125 , 0.30869565],
[ 0.9 , 0.4826087 ]]))
Bbox(array([[ 0.125 , 0.1 ],
[ 0.9 , 0.27391304]]))
so all the subplots have the exact same horizontal extent (0.125 to 0.9).
Judging from the narrower 4th subplot the image inside the subplot is somehow
centered.
Let's look at the AxesImage objects:
for ax in axes:
print ax.images[0]
AxesImage(80,348.522;496x83.4783)
AxesImage(80,248.348;496x83.4783)
AxesImage(80,148.174;496x83.4783)
AxesImage(80,48;496x83.4783)
again, the same horizontal extent for the 4th image too.
Next try AxesImage.get_extent():
for ax in axes:
print ax.images[0].get_extent()
# [left, right, bottom, top]
(-0.5, 9.5, 2.5, -0.5)
(-0.5, 9.5, 2.5, -0.5)
(-0.5, 9.5, 2.5, -0.5)
(-0.5, 4.5, 2.5, -0.5)
there is a difference (right) but the left value is the same for all so why is
the 4th one centered then?
EDIT: They are all centered...
Answer: You can control the position of the subplot manually, like so:
for ax in axes:
print ax.get_position()
and
ax[3].set_position([0.1,0.2,0.3,0.4])
Alternatively, you may want to have a look at
[GridSpec](http://matplotlib.sourceforge.net/users/gridspec.html)
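A minimal GridSpec sketch for the same 4-row layout (the exact column split here
is just an illustration):

    import matplotlib.pyplot as plt
    import matplotlib.gridspec as gridspec

    fig = plt.figure()
    gs = gridspec.GridSpec(4, 2)
    axes = [fig.add_subplot(gs[i, :]) for i in range(3)]  # full-width rows
    axes.append(fig.add_subplot(gs[3, 0]))                # last row uses only the left half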
|
How to enable OpenSSL support in an alternate install of Python 2.5.1?
Question: Some background info:
I'm trying to run a server program in `python 2.5.1` (the version the server
was written for and tested on). The program needs the OpenSSL library for some
of its functions. I installed python 2.5.1 from source as the yum repository
for the Amazon Linux instance I'm running on does not have the version of
python I need.
When I try to run the server with python 2.5.1 I get the following import
error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named OpenSSL
I know that the OpenSSL libraries are installed as I can import them into
Python 2.6 (the version of python installed by yum). It's just that my python
2.5.1 installation can't see them.
I have also installed pyOpenSSL via yum with no luck.
Answer: Installed Python libraries are specific to a particular Python version, so the
pyOpenSSL you installed from yum will be for the system Python. You need to
install a separate instance of pyOpenSSL for the alt-installed 2.5 Python.

If you use python2.5 to install setuptools, you'll find that you have an
`easy_install-2.5` that you can use: `easy_install-2.5 pyopenssl` (or
similar). But note that this may also install a new version of `easy_install`,
overwriting the existing one for the system Python (if you have one). To use
setuptools with the existing package, use `easy_install-2.7` (if it's Python
2.7).

Does that make sense? Basically, each Python is distinct and needs its own set
of libraries. In contrast, easy_install is installed globally, but there is a
version-specific copy of easy_install for each Python.

If you want to avoid the mess with easy_install, you can use virtualenv:
create a new environment for 2.5, enable it, and install pyOpenSSL
in there (using the easy_install from that environment). That may
sound more complicated if you've never used virtualenv, but if you give it a
little time to understand, it will likely work out better in the long term.
|
How to convert a webpage (from an intranet wiki) to an Office document?
Question: I have a set of Wiki pages (MediaWiki style) on my company's intranet that I
would like to convert to Microsoft Office Word documents (or something that I
can import in it). I am looking for something that has:
## Requirements
* Keep the formatting as much as it can
* Does not require to change anything on the server that hosts the Wiki (no plugin can be added nor configuration files can be modified from my side)
* The solution can be programmatically (as I am a developer too), in flavor of Python/C#/C++ and the like
## Exclusions
* Does not look like a solution as _"Wiki to Acrobat PDF Pro to Microsof Office Word"_ (as we do not have Acrobat PDF Pro). Actually, even the non-Pro version (that allows a "Save as Microsoft Word online" option) is not available in my company (very old version of Adobe suite). However, I can still export the page as a pdf, but from the Wiki we have, it does not look good (because some element are too big, for an A4 format, and the extra parts are scraped out of the produced pdf. I would like them to be included anyway and be able to play with "bad" formatting within Word eventually
* As it is an intranet wiki, online solutions are out of the scope
* Solutions that implies I could copy the db of the Wiki and do the operation elsewhere (at home for example) are also out of the scope
## Options
* The solution can be either on Windows or Linux-like (CentOS)
* If it can do it in batch, it is better, but not required
## Question
Would you have any hint of a solution that could fit my needs?
Answer: A very simple solution is to open the URL of the Wiki in Word's _Open
Document_ dialog, e.g. by pasting the URL
[http://en.wikipedia.org/w/index.php?title=Microsoft_Word&printable=yes](http://en.wikipedia.org/w/index.php?title=Microsoft_Word&printable=yes)
into the _File Name_ text box. This does not require any programming, still
gives a satisfying result.
If you need a batch solution, you can write a simple script in VBA that
creates and saves the documents for you:
Sub OpenFromWiki()
Documents.Open FileName:= _
"http://en.wikipedia.org/w/index.php?title=Microsoft_Word&printable=yes", _
ConfirmConversions:=False, ReadOnly:=True, AddToRecentFiles:=False, _
PasswordDocument:="", PasswordTemplate:="", Revert:=False, _
WritePasswordDocument:=""
End Sub
|
libjpeg.so.62: cannot open shared object file: No such file or directory
Question: I am trying to do some image processing with python and the PIL. I was having
a problem that I wouldn't correctly import the _imaging folder so I did a
reinstall and now I am getting this problem:
libjpeg.so.62: cannot open shared object file: No such file or directory
I did apt-get remove python-imaging in the command line and then apt-get
install python-imaging and now it won't work in Eclipse. Any tips?
Answer: Try installing the libjpeg62 module in Ubuntu, the libjpeg.so.62 file is a
part of it.
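For example (package name as shipped by Ubuntu at the time):

    sudo apt-get install libjpeg62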
|
What's the pythonic way to run a lottery?
Question: I need to pick several random items from a weighted set. Items with a higher
weight are more likely to get picked. I decided to model this after a lottery.
I feel that my solution makes good C++, but I don't think it makes for good
python.
What's the pythonic way of doing this?
def _lottery_winners_by_participants_and_ticket_counts(participants_and_ticket_counts, number_of_winners):
"""
Returns a list of winning participants in a lottery. In this lottery,
participant can have multiple tickets, and participants can only win
once.
participants_and_ticket_counts is a list of (participant, ticket_count)
number_of_winners is the maximum number of lottery winners
"""
if len(participants_and_ticket_counts) <= number_of_winners:
return [p for (p, _) in participants_and_ticket_counts]
winners = []
for _ in range(number_of_winners):
total_tickets = sum(tc for (_, tc) in participants_and_ticket_counts)
winner = random.randrange(0, total_tickets)
ticket_count_offset = 0
for participant_ticket_count in participants_and_ticket_counts:
(participant, ticket_count) = participant_ticket_count
if winner < ticket_count + ticket_count_offset:
winners.append(participant)
participants_and_ticket_counts.remove(participant_ticket_count)
break
ticket_count_offset += ticket_count
return winners
* * *
Edit: Sorry I forgot this earlier, but weight is an integer which could be in
the thousands.
* * *
**Edit:** I think I have my final solution based on the comment of @Flo
Notes
* I'm working in Python 2.7, so I created my own accumulate(). It works differently (and I think better) than accumulate() in Python 3. My version can accumulate from an iterable of tuples based on an add function.
* I also have special knowledge that participants_and_ticket_counts is a mutable list and will not be used after _lottery_winners_by_participants_and_ticket_counts() is called. That's why I can pop() it.
Here's my solution:
def _lottery_winners_by_participants_and_ticket_counts(participants_and_ticket_counts, number_of_winners):
"""
Returns a list of winning participants in a lottery. In this lottery,
participant can have multiple tickets, and participants can only win once.
participants_and_ticket_counts is a list of (participant, ticket_count)
number_of_winners is the maximum number of lottery winners
"""
def _accumulate(iterable, func):
total = 0
for element in iterable:
total = func(total, element)
yield total
if len(participants_and_ticket_counts) <= number_of_winners:
return list(winner for (winner, _) in participants_and_ticket_counts)
winners = list()
for _ in range(number_of_winners):
accumulation = list(_accumulate(participants_and_ticket_counts, lambda total, ptc: total + ptc[1]))
winning_number = random.randrange(0, accumulation[-1])
index_of_winner = bisect.bisect(accumulation, winning_number)
(winner, _) = participants_and_ticket_counts.pop(index_of_winner)
winners.append(winner)
return winners
Thanks to everyone for their help!
Answer: [numpy.random.choice](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html)
has a nice solution to this. Here's how you can use it:
>>> import numpy as np
>>> from numpy.random import choice
>>> names = ['Harry', 'Sally', 'Joe', 'Bob', 'Angela', 'Jack', 'Jill', 'Jeff']
>>> weights = [1,4,6,3,5,7,10,14]
>>> p = np.array(weights, dtype=float) / sum(weights)
>>> p
array([ 0.02, 0.08, 0.12, 0.06, 0.1 , 0.14, 0.2 , 0.28])
>>> choice(names, size=5, p=p)
array(['Jill', 'Jack', 'Jeff', 'Jeff', 'Angela'],
dtype='|S6')
>>> choice(names, size=5, p=p)
array(['Jill', 'Jack', 'Joe', 'Jill', 'Sally'],
dtype='|S6')
>>> choice(names, size=5, p=p)
array(['Jack', 'Angela', 'Joe', 'Sally', 'Jill'],
dtype='|S6')
However, this function was added in numpy 1.7. If you have an older version,
you can just copy the function: <http://pastebin.com/F5gti0qJ>
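One detail not shown above: since participants may only win once, you would
presumably also pass `replace=False` so the same name cannot be drawn twice,
e.g.:

    >>> choice(names, size=5, replace=False, p=p)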
|
Why does my 'instance' turn into an 'ndarray' when I use Scipy optimize?
Question: I have written a function using a quantum simulation class
[QuTIP](http://qutip.googlecode.com) that returns a float. Next, I called
scipy.optimize.fmin_cg on the function. I keep getting the error:
AttributeError: 'numpy.ndarray' object has no attribute 'expm'
on the line:
U_sq = H_sq.expm
But H_sq is an instance of Qobj, not an ndarray. If I run the function outside
of scipy.optimize.fmin_cg, it returns the type as 'instance'; when it runs
inside of fmin_cg it returns the type as 'ndarray'.
Why does it do this? Is there a optimization function in python that will
respect using instances like this?
Here is the code:
from qutip import *
from numpy import *
import scipy.optimize
def sq_fidelity(eps,N=7):
H_sq = squeez(N,eps);
print type(H_sq);
one_ph = basis(N,1);
U_sq = H_sq.expm();
squ = U_sq*one_ph;
fidelity = expect(fock_dm(N,1),squ);
return float(fidelity)
if __name__=='__main__':
print sq_fidelity(0.2);
eps = scipy.optimize.fmin_cg(sq_fidelity, x0=0.2, args=(7,));
Answer: The issue here is that `fmin_cg` is passing an `ndarray` (of length 1) to your
objective function. You can extract the scalar value by just changing the
first line of `sq_fidelity` to:
H_sq = squeez(N, float(eps))
|
What is a better way to deal with unknown list structures in python?
Question: My python program receives MIDI data from a C library. Sometimes the data will
look like this:
[[[[240,0,1,116]]],[[[3,100,8,1]]],[[[107,247,0,0]]]]
and sometimes it will include timestamps like this:
[[[[240,0,1,116],26738]],[[[3,100,8,1],26738]],[[[107,247,0,0],26738]]]
I need the data in an array of bytes, with the timestamp values discarded. The
code I wrote to do this is:
def convertMidiSysex(data):
while isinstance(data[0][0], list):
out = []
for index, value in enumerate(data):
out = out+value
data = out
out = array.array('B')
for i in range(len(data)):
if isinstance(data[i], list):
for j in range(len(data[i])):
out.append(data[i][j])
if out[-1] == 247: # 0xF7 is marker to end sysex message
return out
I can't help feeling that I'm doing this the hard way. Is there a better
approach to this?
Answer: Assuming the depth is always consistent:
from itertools import takewhile
def convertMidiSysex(data):
values = takewhile(lambda v: v != 247,
(a for x in data for a in x[0][0]))
out = array.array('B', values)
out.append(247)
return out
Examples:
>>> convertMidiSysex([[[[240,0,1,116]]],[[[3,100,8,1]]],[[[107,247,0,0]]]])
array('B', [240, 0, 1, 116, 3, 100, 8, 1, 107, 247])
>>> convertMidiSysex([[[[240,0,1,116],26738]],[[[3,100,8,1],26738]],[[[107,247,0,0],26738]]])
array('B', [240, 0, 1, 116, 3, 100, 8, 1, 107, 247])
|
Python Countdown w/o Sleep
Question: I am working on a Twisted socket, but what I've heard is that if you use
time.sleep while using a socket, the system hangs and the socket goes on halt.
Is there any way of doing a countdown without time.sleep?
Thanks.
Answer: Twisted has a couple of options. You could use a simple function to check a
condition.
    from twisted.internet import reactor

    expire = 10

    def tick():
        global expire                # rebind the module-level counter
        expire -= 1
        if expire == 0:
            reactor.stop()
            return
        reactor.callLater(1, tick)   # schedule the next tick one second from now

    reactor.callLater(0, tick)
    reactor.run()
There is also twisted.internet.task.LoopingCall
<http://twistedmatrix.com/documents/current/api/twisted.internet.task.LoopingCall.html>
    from twisted.internet import reactor
    from twisted.internet.task import LoopingCall

    expire = 10

    def tick():
        global expire
        expire -= 1
        if expire == 0:
            reactor.stop()
            return
        do_something()

    task = LoopingCall(tick)
    task.start(1.0)   # LoopingCall.start() takes the interval in seconds
    reactor.run()
If you are using a twisted application (twistd) or some other multiservice,
there is also twisted.application.internet.TimerService
<http://twistedmatrix.com/documents/current/api/twisted.application.internet.TimerService.html>
|
Python: Submodules Not Found
Question: My Python couldn't figure out the submodules when I was trying to import
`reportlab.graphics.shapes` like this:
>>> from reportlab.graphics.shapes import Drawing
Traceback (most recent call last):
File "<pyshell#14>", line 1, in <module>
from reportlab.graphics.shapes import Drawing
ImportError: No module named shapes
I have copied the `reportlab` package to **/site-packages** and I can import
**module reportlab.graphics** successfully.
My Python version is 2.7.3.
Could anyone help me to fix this problem?
Answer: As @dan-boa pointed out, you can add paths to the module search path, but
since you can find the parent module, I doubt that this is your root problem.
Do you have some left-over installation of the module at another path? You can
check the path where it is finding the parent package (reportlab) by
executing:
import reportlab
print reportlab.__file__
If this is indeed the path you were expecting, then try this recursively with
the sub-modules, until you can see where the problem is. Perhaps your
package is corrupted? Try manually checking in the path returned whether you can
find the files/modules in question.
If this is not the path you were expecting, clean-up the installation from
this 2nd path and try again.
Finally, in case you do find that it is a path problem, instead of adding the
path each time using sys.path.append, you can add it to
[PYTHONPATH](http://docs.python.org/using/cmdline.html#envvar-PYTHONPATH)
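If you do need to patch the search path from inside Python, a minimal sketch
(the path below is a placeholder):

    import sys

    sys.path.append('/path/to/dir/containing/reportlab')  # placeholder path
    from reportlab.graphics.shapes import Drawing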
|
local variable referenced before assignment for decorator
Question: I'm using decorator with the functional syntax as it's described
[here](http://www.python.org/dev/peps/pep-0318/#current-syntax).
I loop over a list of dicts. In this loop I wrap a generic function with a
decorator taking a parameter. Then I call the wrapped function with the
current dict as parameters.

My problem is that I get a `local variable 'generic_evaluator' referenced
before assignment` error.
Here is my code:
**The concerned decorator:**
def log(logfile_name):
def inner_log(func):
def wrapped(*args, **kwargs):
import os
ret = func()
# business code...
return wraps(func)(wrapped)
return inner_log
And here the place where I wrap `generic_evaluator` function with the `log`
decorator.
for evaluation in generic_evaluations:
generic_evaluator = log(evaluation['suffix'])(generic_evaluator)
generic_evaluator(evaluation['suffix'], evaluation['id'])
**EDIT**
It'll be probably more clear with this piece of code in addition:
@tictoc
def generic_evaluator(suffix_url, id):
xml = etree.parse(get_resource(base_url + "/" + suffix_url + "/" + str(id)))
links = fetch_urls_from_xml(xml)
return links
Answer: If `generic_evaluator` is a global function, and you actually want to rebind
the global name `generic_evaluator`, declare this name as global:
global generic_evaluator
If you don't want to rebind the global name, use a different local name and
initialise it:
local_generic_evaluator = generic_evaluator
[This answer](http://stackoverflow.com/questions/9264763/unboundlocalerror-in-
python/9264845#9264845) gives an explanation of what's going on.
|
Google App Engine (python): filter users based on custom fields
Question: I am using the `webapp2_extras.appengine.auth.models.User` service, which
basically extends the `google.appengine.api.users` model. Now I have custom
users registered with my application and they have a lot of custom fields. The
problem is that I want to filter (multi-filter) all the users using various
custom fields. For example, the user model has two fields, `is_active` and
`activation_key`, and I want to filter them using these fields:
from google.appengine.api import users
act_key = 'hw-2j38he63u83hd6hak3FshSqj3TGemn9'
user = users.all().filter('is_active =', False).filter('activation_key =', act_key).get()
if user:
return True
else:
return False
what are the best possible ways to filter on the user model using custom
fields?
Edit:
Also tried the following:
from webapp2_extras.appengine.auth.models import User
query = User.query().filter('is_active =', False)
print query
but this raises an error as follows:
Traceback (most recent call last):
File "/opt/google_appengine_1.6.4/google/appengine/ext/admin/__init__.py", line 320, in post
exec(compiled_code, globals())
File "<string>", line 6, in <module>
File "lib/ndb/query.py", line 968, in filter
raise TypeError('Cannot filter a non-Node argument; received %r' % arg)
TypeError: Cannot filter a non-Node argument; received 'is_active ='
Answer: The syntax for querying on Expando properties in NDB is a little different to
how it's done in the old DB API, and is documented
[here](https://developers.google.com/appengine/docs/python/ndb/queries#properties_by_string).
Your query needs to look something like this:
    user = users.all().filter(ndb.GenericProperty('is_active') == False, ndb.GenericProperty('activation_key') == act_key).get()
|
Python Memory error solutions if permanent access is required
Question: First, I am aware of the number of Python memory error questions on SO, but so
far none has matched my use case.
I am currently trying to parse a bunch of textfiles (~6k files with ~30 GB)
and store each unique word. Yes, I am building a wordlist, no I am not
planning on doing evil things with it, it is for the university.
I implemented the list of found words as a set (created with `words =
set([])`, used with `words.add(word)`) and I am just adding every found word
to it, considering that the set mechanics should remove all duplicates.
This means that I need permanent access to the whole set for this to work (Or
at least I see no alternative, since the whole list has to be checked for
duplicates on every insert).
Right now, I am running into `MemoryError` about 25% through, when it uses
about 3.4 GB of my RAM. I am on a Linux 32bit, so I know where that limitation
comes from, and my PC only has 4 Gigs of RAM, so even 64 bit would not help
here.
I know that the complexity is probably terrible (Probably O(n) on each insert,
although I don't know how Python sets are implemented (trees?)), but it is
still (probably) faster and (definitely) more memory efficient than adding each
word to a primitive list and removing duplicates afterwards.
Is there any way to get this to run? I expect about 6-10 GB of unique words,
so using my current RAM is out of the question, and upgrading my RAM is
currently not possible (and does not scale too well once I start letting this
script loose on larger amounts of files).
My only idea at the moment is caching on disk (which will slow the process
down even more), or writing temporary sets to disk and merging them
afterwards, which will take even more time and the complexity would be
horrible indeed. Is there even a solution that will not result in horrible
runtimes?
For the record, this is my full source. As it was written for personal use
only, it is pretty horrible, but you get the idea.
import os
import sys
words=set([])
lastperc = 0
current = 1
argl = 0
print "Searching for .txt-Files..."
for _,_,f in os.walk("."):
for file in f:
if file.endswith(".txt"):
argl=argl+1
print "Found " + str(argl) + " Files. Beginning parsing process..."
print "0% 50% 100%"
for r,_,f in os.walk("."):
for file in f:
if file.endswith(".txt"):
fobj = open(os.path.join(r,file),"r")
for line in fobj:
line = line.strip()
word, sep, remains = line.partition(" ")
if word != "":
words.add(word)
word, sep, remains = remains.partition(" ")
while sep != "":
words.add(word)
word, sep, remains2 = remains.partition(" ")
remains = remains2
if remains != "":
words.add(remains)
newperc = int(float(current)/argl*100)
if newperc-lastperc > 0:
for i in range(newperc-lastperc):
sys.stdout.write("=")
sys.stdout.flush()
lastperc = newperc
current = current+1
print ""
print "Done. Set contains " + str(len(words)) + " different words. Sorting..."
sorteddic = sorted(words, key=str.lower)
print "Sorted. Writing to File"
print "0% 50% 100%"
lastperc = 0
current = 1
sdicl = len(sorteddic)-1
fobj = open(sys.argv[1],"w")
for element in sorteddic:
fobj.write(element+"\n")
newperc = int(float(current)/sdicl*100)
if newperc-lastperc > 0:
for i in range(newperc-lastperc):
sys.stdout.write("=")
sys.stdout.flush()
lastperc = newperc
current = current+1
print ""
print "Done. Enjoy your wordlist."
Thanks for your help and Ideas.
Answer: You're probably going to need to store the keys on disk. A key-value store
like [Redis](http://redis.io/) might fit the bill.
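A minimal sketch of that idea with the redis-py client (assumes a Redis server
running locally; the key name is arbitrary):

    import redis

    r = redis.Redis(host='localhost', port=6379, db=0)
    for word in ("some", "words", "some"):
        r.sadd("wordlist", word)   # SADD ignores duplicates; the data lives in Redis, not RAM
    print r.scard("wordlist")      # -> 2 unique words so far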
|
Python SciPy call from terminal failing
Question: I am trying to call the following Python script from the Ubuntu terminal using
the standard
`python rosen.py`
but it fails. I can hit `F5` in idle and it works fine but it fails when
called from the terminal. The code for `rosen.py` is as follows:
from scipy.optimize import fmin
def rosen(x):
b=sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)
print b
return b
x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
xopt = fmin(rosen, x0, xtol=1e-8)
print xopt
again, when run in idle it works fine, but when called from the terminal it
says that scipy doesn't exist...
I can run the following numpy code from the terminal or idle and it works
fine:
import numpy as np
a=np.sin(1)
print a
It will print in either the terminal window or idle window depending where it
was called.
Basically, how can I get the rosen.py to import SciPy and run when called from
the Ubuntu terminal??
Thank you very much for the help.
Answer: Do you have
#!/usr/bin/python
at the top of your file to identify the location of the python interpreter?
And made your script executable with
chmod +x rosen.py
Then either command works for me under Ubuntu:
./rosen.py
or
python rosen.py
(The `chmod` is optional if you want to be able to run the script w/o typing
`python` first on the command line. `python rosen.py` will work w/o the
`chmod`)
And as you are already importing scipy in your script, so I am not sure I
understand that part of the question.
|
Fresh installation of sphinx-quickstart fails
Question: Trying to get it going with Sphinx for the first time, with a clean Sphinx
1.1.3 installation, and sphinx-quickstart fails. Should there be any
dependencies installed? I tried to `pip --force-reinstall sphinx` but the
result is the same.
myhost:doc anton$ sphinx-quickstart
Traceback (most recent call last):
File "/usr/local/bin/sphinx-quickstart", line 8, in <module>
load_entry_point('Sphinx==1.1.3', 'console_scripts', 'sphinx-quickstart')()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 318, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 2221, in load_entry_point
return ep.load()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1954, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/Library/Python/2.7/site-packages/Sphinx-1.1.3-py2.7.egg/sphinx/quickstart.py", line 19, in <module>
from sphinx.util.osutil import make_filename
File "/Library/Python/2.7/site-packages/Sphinx-1.1.3-py2.7.egg/sphinx/util/__init__.py", line 25, in <module>
from docutils.utils import relative_path
File "/Library/Python/2.7/site-packages/docutils-0.9-py2.7.egg/docutils/utils/__init__.py", line 19, in <module>
from docutils.io import FileOutput
File "/Library/Python/2.7/site-packages/docutils-0.9-py2.7.egg/docutils/io.py", line 18, in <module>
from docutils.error_reporting import locale_encoding, ErrorString, ErrorOutput
File "/Library/Python/2.7/site-packages/docutils-0.9-py2.7.egg/docutils/error_reporting.py", line 47, in <module>
locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1]
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py", line 496, in getdefaultlocale
return _parse_localename(localename)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py", line 428, in _parse_localename
raise ValueError, 'unknown locale: %s' % localename
ValueError: unknown locale: UTF-8
Answer: I was getting the same issue in Mac OS X Snow Leopard. It seems to be an issue
with Terminal.app.
Please add the following to your $HOME/.bash_profile
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
Do
source $HOME/.bash_profile
and try. This will solve the issue.
|
How to port a Python application to Linux that works fine in Windows
Question: I am having trouble porting a working, Windows Python application to Linux. I
am having some problems, because I did not write the code and am just learning
Python. I am having trouble fixing the issues that it keeps throwing up. So
here is a kind of error that right now I am stuck with
Traceback (most recent call last):
File "alpha_beta", line 237, in <module>
main()
File "alpha_beta", line 185, in main
ABCCmd()
File "alpha_beta.py", line 74, in ABCCmd
File "C:\softs\Python\Lib\shutil.py", line 80, in copy
File "C:\softs\Python\Lib\shutil.py", line 47, in copyfile
IOError: [Errno 13] Permission denied: '/myPath/XFiles.bin.addr_patched
Any pointers on how to fix it will be much appreciated
Edit:
1) What I mean by "stuck" is that the traceback of the error goes to
C:\softs\Python\Lib, but I am actually executing this code on Ubuntu.
Why would the traceback reference the Windows library?

2) Another thing that bothers me is that it says there is an IOError, but when I try
to add permission for the denied file it gives me `chmod: changing permissions
of '/myPath/xFiles.bin.addr_patched': Operation not permitted`.

Edit 2:
I had commented out a module because I thought it wasn't very useful. Since
Now I am anyway discussing the porting issues, I thought I can bring up this
additional problem as well since I think the issue is the same and the fix
should be similar. On including #pdb module in the python code, I get the
following error
traceback (most recent call last):
File "alpha_beta", line 6, in <module>
import pdb
File "C:\softs\Python\Lib\pdb.py", line 14, in <module>
File "C:\softs\Python\Lib\pprint.py", line 39, in <module>
ImportError: No module named cStringIO
I looked at the importer_Cache and looks like this
'': None, '/usr/lib/python2.6/plat-linux2': None, '/usr/lib/python2.6/': None
'/usr/lib/pymodules/python2.6/gtk-2.0': None, '/usr/lib/python2.6/lib-tk': None,
'/usr/lib/python2.6/lib-old': <imp.NullImporter object at 0x7f1269048070>, '/usr/
/python2.6/dist-packages/gtk-2.0': None, '/usr/lib/python2.6/dist-packages/PIL': None,
'/usr/local/lib/python2.6/dist-packages': None, '/usr/lib/python2.6/dist-packages':
None
* **SOLVED** : _There was a weird wrapper in the script which was causing the python script to execute from a different location. Still don't understand how though. After modifying my script to just directly execute as python myScript.py it runs fine. Thank you all anyway for the help_
Answer: The mixture of Windows and Unix style paths in your error messages makes me
think that you may have some filenames that are hard coded or manually created
using os specific path separators ('\' or '/'). If you can figure out where
the `gOptions.inputTf` and `gWorkingTfFile` values are assigned you should
look into using
[`os.path.join`](http://docs.python.org/library/os.path.html#os.path.join) to
help you use the correct paths.
from os.path import join
file = join('A', 'B')
# 'A/B' on unix systems
# 'A\B' on windows systems
|