Encoding string in UTF8 from list
Question: I'm having trouble formatting the strings to UTF-8. In this script I'm
getting data from an Excel file and then printing it out in a loop; the problem
is that strings with special characters show up wrong.
As a result I keep getting 'PatrÄ«cija' instead of 'Patrīcija'. I can't seem to
find the solution for this problem.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import xlrd
import datetime

def todaysnames():
    todaysdate = datetime.datetime.strftime(datetime.date.today(), "%d.%m")
    book = xlrd.open_workbook("vardadienas.xls")
    sheet = book.sheet_by_name('Calendar')
    for rownr in range(sheet.nrows):
        if sheet.cell(rownr, 0).value == todaysdate:
            string = (sheet.cell(rownr, 1).value)
            string = string.encode(encoding="UTF-8", errors="strict")
            names = string.split(', ')
            return names

names = todaysnames()
for name in names:
    print name
Answer: Changed the encoding to `iso8859_13` (Baltic languages) and that fixed it.
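For the record, the mojibake in the question is what you get when UTF-8 bytes are displayed by a terminal that expects a Baltic single-byte encoding. A small sketch (the name is taken from the question; the exact mojibake characters depend on the terminal's codec):

```python
# -*- coding: utf-8 -*-
name = u'Patr\u012bcija'  # the unicode string xlrd hands back

utf8_bytes = name.encode('utf-8')
baltic_bytes = name.encode('iso8859_13')

# UTF-8 bytes shown in iso8859_13 reproduce garbage like 'PatrÄ«cija':
print(utf8_bytes.decode('iso8859_13'))
# Bytes in the terminal's own encoding display correctly:
print(baltic_bytes.decode('iso8859_13'))
```

So the "fix" works because the encoded bytes now match what the console expects, not because UTF-8 itself was wrong.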
|
Python unicode trouble
Question: I am having trouble with unicode in a script I am writing. I have scoured the
internet, including this site and I have tried many things, and I still have
no idea what is wrong.
My code is very long, but I will show an excerpt from it:
raw_results = get_raw(args)
write_raw(raw_results)
parsed_results = parse_raw(raw_results)
write_parsed(parsed_results)
Basically, I get raw results, which are in XML, encoded in UTF-8. Writing the
RAW data causes no problems, but writing the parsed data does. So I am pretty
sure the problem is inside the function that parses the data.
I tried everything and I do not understand what the problem is. Even this
simple line gives me an error:
def parse_raw(raw_results):
    content = raw_results.replace(u'<?xml version="1.0" encoding="UTF-8" standalone="yes"?>', u'')
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 570:
> ordinal not in range(128)
Ideally I would love to be able to work with unicode and have no problems, but
I also have no issue with replacing/ignoring any unicode and using only
regular text. I know I have not provided my full code, but understand that
it's a problem since it's work-related. But I hope this is enough to get me
some help.
**Edit:** the top part of my parse_raw function:
from xml.etree.ElementTree import XML, fromstring, tostring
def parse_raw(raw_results):
    raw_results = raw_results.decode("utf-8")
    content = raw_results.replace('<?xml version="1.0" encoding="UTF-8" standalone="yes"?>', '')
    content = "<root>\n%s\n</root>" % content
    mxml = fromstring(content)
**Edit2:** I think it would be a good idea to point out that the code works
fine UNLESS there are special characters. When it's 100% English, no problem;
whenever any foreign letters or accented letters are involved is when the
issues arise.
Answer: `raw_results` is probably a `str` object, not a `unicode` object.
`raw_results.replace(u'...', ...)` causes Python to first decode the `str`
`raw_results` into a `unicode`. Python2 uses the `ascii` codec by default.
`raw_results` contains the byte `'\xd7'` at position 570, which is not
decodable by the `ascii` codec (i.e., it is not an ASCII character).
Here is a demonstration of how this error might occur:
In [27]: '\xd7'.replace(u'a',u'b')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 0: ordinal not in range(128)
Whereas if `raw_results` were unicode, there would be no silent decoding with
`ascii`, and therefore no error would occur:
In [28]: u'\xd7'.replace(u'a',u'b')
Out[28]: u'\xd7'
You can fix this problem by decoding `raw_results` explicitly, provided you
know the appropriate codec:
raw_results = raw_results.decode('latin-1')
`latin-1` is just a guess. It might be correct if the character at position
570 is a multiplication symbol:
In [26]: print('\xd7'.decode('latin-1'))
×
|
Python expression evaluator and Interpreter pattern
Question: I am trying to implement a simple expression evaluator in Python, but
I am stuck in the parser method. Below is my code snippet.
class Number:
    def __init__(self, value):
        self.value = value
    def execute(self):
        return self.value

class Plus:
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def execute(self):
        return self.left + self.right

class Minus:
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def execute(self):
        return self.left - self.right

class Multiply:
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def execute(self):
        return self.left * self.right

import re

def parser(input):
    stack = []
    token_pat = re.compile(r"\s*(?:(\d+)|(.))")
    for number, operator in token_pat.findall(input):
        if number:
            stack.append(Number(int(number)))
        else:
            first, second = stack.pop(), stack.pop()
            if operator == "+":
                stack.append(Plus(first, second))
            elif operator == "-":
                stack.append(Minus(first, second))
            elif operator == "*":
                stack.append(Multiply(first, second))
            else:
                raise SyntaxError("unknown operator")
    print stack[0].execute()

if __name__ == "__main__":
    parser('1 2 +')
When I run the above code, I get the following error. Can anybody review my
code?
Traceback (most recent call last):
  File "Interpreter.py", line 52, in <module>
    parser('1 2 +')
  File "Interpreter.py", line 48, in parser
    print stack[0].execute()
  File "Interpreter.py", line 12, in execute
    return self.left+self.right
TypeError: unsupported operand type(s) for +: 'instance' and 'instance'
Answer: The error message is that confusing because you're using [classic
classes](http://docs.python.org/reference/datamodel.html#new-style-and-
classic-classes). With new-style classes (i.e. inheriting from `object`), you
get a much more reasonable:
TypeError: unsupported operand type(s) for +: 'Number' and 'Number'
Either use `int`s instead of Number objects, or implement adding logic by
implementing [`__add__` and
friends](http://docs.python.org/reference/datamodel.html#object.__add__), or
add evaluating logic in `Plus.execute`.
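A minimal sketch of that third option: `execute` recurses into the child nodes, so the tree built by `parser` evaluates itself (only `Number` and `Plus` are shown; `Minus` and `Multiply` need the same change):

```python
class Number(object):
    def __init__(self, value):
        self.value = value
    def execute(self):
        return self.value

class Plus(object):
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def execute(self):
        # Evaluate the sub-expressions first, then add the resulting numbers.
        return self.left.execute() + self.right.execute()

print(Plus(Number(1), Number(2)).execute())  # 3
```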
Also note that you're reimplementing Python's built-ins. Additionally, having
an `execute` method is pretty much an anti-pattern. A much shorter
implementation would be
import functools, operator, re

def parser(inp):
    stack = []
    token_pat = re.compile(r"\s*(?:(\d+)|(.))")
    for number, op in token_pat.findall(inp):
        if number:
            stack.append(functools.partial(lambda i: i, int(number)))
        else:
            first, second = stack.pop(), stack.pop()
            try:
                op = {
                    '+': operator.add,
                    '-': operator.sub,
                    '*': operator.mul
                }[op]
            except KeyError:
                raise SyntaxError("unknown operator")
            stack.append(functools.partial(lambda op, first, second:
                op(first(), second()), op, first, second))
    print(stack[0]())

if __name__ == "__main__":
    parser('1 2 + 3 *')
|
Calling Python code from Gnome Shell extension
Question: I was looking for some time, but still can't find any documented way to call
python functions from GnomeShell extension code. Is there any possibility to
do that?
Answer: You can do it like this :)
const Util = imports.misc.util;
let python_script = '/path/to/python/script';
Util.spawnCommandLine("python " + python_script);
|
Granger Test (Python) Error message - TypeError: unsupported operand type(s) for -: 'str' and 'int'
Question: I am trying to run a granger causality test on two currency pairs but I seem
to get this error message in Shell whenever I try and test it. Can anyone
please advise?
I am very new to programming and need this to run an analysis for my project.
In the shell, I am typing:
import ats15
ats15.grangertest('EURUSD', 'EURGBP', 8)
What is going wrong? I have copied the script below.
Thanks in advance.
def grangertest(Y, X, maxlag):
    """
    Performs a Granger causality test on variables (vectors) Y and X.
    The null hypothesis is: Does X cause Y ?
    Returned value: pvalue, F, df1, df2
    """
    # Create linear model involving Y lags only.
    n = len(Y)
    if n != len(X):
        raise ValueError, "grangertest: incompatible Y,X vectors"
    M = [[0] * maxlag for i in range(n - maxlag)]
    for i in range(maxlag, n):
        for j in range(1, maxlag + 1):
            M[i - maxlag][j - 1] = Y[i - j]
    fit = ols(M, Y[maxlag:])
    RSSr = fit.RSS
    # Create linear model including X lags.
    for i in range(maxlag, n):
        xlagged = [X[i - j] for j in range(1, maxlag + 1)]
        M[i - maxlag].extend(xlagged)
    fit = ols(M, Y[maxlag:])
    RSSu = fit.RSS
    df1 = maxlag
    df2 = n - 2 * maxlag - 1
    F = ((RSSr - RSSu) / df1) / (RSSu / df2)
    pvalue = 1.0 - stats.f.cdf(F, df1, df2)
    return pvalue, F, df1, df2, RSSr, RSSu
Answer: You didn't post the full traceback, but this error message:
TypeError: unsupported operand type(s) for -: 'str' and 'int'
means what it says. There's an operator `-` -- the subtraction operator -- and
it doesn't know how to handle subtracting an integer from a string. Why would
strings be involved? Well, you're calling the function with:
ats15.grangertest('EURUSD', 'EURGBP', 8)
and so you're giving `grangertest` two strings and an integer. But it seems
like `grangertest` expects
def grangertest(Y,X,maxlag):
two sequences (lists, arrays, whatever) of _numbers_ to use as Y and X, not
strings. If `EURUSD` and `EURGBP` are names you've given to lists beforehand,
then you don't need the quotes:
ats15.grangertest(EURUSD, EURGBP, 8)
but if not, then you should pass `grangertest` the lists under whatever name
you've called them.
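To see the difference concretely (the price values below are made up purely for illustration):

```python
# Hypothetical numeric series; in reality these would be loaded from data.
EURUSD = [1.25, 1.26, 1.24, 1.27, 1.25, 1.26, 1.28, 1.27]
EURGBP = [0.79, 0.80, 0.79, 0.81, 0.80, 0.80, 0.82, 0.81]

# Indexing the *string* 'EURUSD' yields characters, and arithmetic on a
# character raises exactly the reported error:
try:
    lagged = 'EURUSD'[3] - 1
except TypeError as err:
    print(err)  # unsupported operand type(s) for -: 'str' and 'int'

# Indexing the numeric list works the way grangertest expects:
print(EURUSD[3] - EURUSD[2])
```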
|
How do I query for data in Rhythmbox
Question: I'm using ubuntu 12.04 and I'm trying to write a python plugin to query the
Rhythmbox database. The Rhythmbox version is v2.96 but this issue also occurs
with v2.97 as well. When I do a python query, Ubuntu crashes with a
segmentation fault.
I need to confirm the following is correct and if I've found a bug specific to
Ubuntu or if I've misunderstood how to correctly query. If anyone else using
another distro can confirm - this would be most welcome.
I've filed a [bug report](https://bugzilla.gnome.org/show_bug.cgi?id=682294)
on bugzilla with regards to the segmentation fault. However, my question is
not strictly about this - it's specifically trying to confirm the correct
python code to query for data.
Thus my question: is the code snippet below correct to query the Rhythmbox
database or do I need to install an additional package to enable querying.
Steps:
1. Enable the python console plugin
2. type (or copy and paste line by line the following)
from gi.repository import RB, GLib
db = shell.props.db
query_model = RB.RhythmDBQueryModel.new_empty(db)
query = GLib.PtrArray()
db.query_append_params( query, RB.RhythmDBQueryType.EQUALS, RB.RhythmDBPropType.ARTIST, 'some artist name' )
db.query_append_params( query, RB.RhythmDBQueryType.EQUALS, RB.RhythmDBPropType.TITLE, 'some song name' )
db.do_full_query_parsed(query_model, query)
for row in query_model:
    print row[0].get_string( RB.RhythmDBPropType.ARTIST )
    print row[0].get_string( RB.RhythmDBPropType.TITLE )
On Ubuntu 12.04, when I type this line, Ubuntu crashes with a segmentation
fault:
db.query_append_params( query, RB.RhythmDBQueryType.EQUALS, RB.RhythmDBPropType.ARTIST, 'some artist name' )
Thus, have I actually used the first parameter in the call correctly - the
Query Pointer Array (PtrArray) - or are my query function parameters incorrect?
Answer: ## and the answer is...
Well, this issue is indeed a bug - but to answer my own question,
**yes** the syntax to query for data in Rhythmbox is as correctly stated in
the question.
_and there is a however..._
Querying for data only works on 64-bit Linux.
Yes, really - I have been testing 32-bit live CDs of Fedora 17 as well as LMDE.
Both exhibit the same segmentation fault issue as Ubuntu 12.04.
The common factor is that I was testing Ubuntu 12.04, Fedora 17 and LMDE in
their 32-bit incarnations.
Testing all three in their 64bit variants works as expected.
The 32bit issue is a bug - and has been reported on bugzilla - but the
question as posed has been answered.
Thanks.
|
Tab-delimited file using csv.reader not delimiting where I expect it to
Question: I am trying to loop through a tab-delimited file of election results using
Python. The following code does not work, but when I use a local file with the
same results (the commented out line), it does work as expected.
The only thing I can think of is some headers or content type I need to pass
the url, but I cannot figure it out.
Why is this happening?
import csv
import requests

r = requests.get('http://vote.wa.gov/results/current/export/MediaResults.txt')
data = r.text
#data = open('data/MediaResults.txt', 'r')
reader = csv.reader(data, delimiter='\t')
for row in reader:
    print row
Results in:
...
['', '']
['', '']
['2']
['3']
['1']
['1']
['8']
['', '']
['D']
['a']
['v']
['i']
['d']
[' ']
['F']
['r']
['a']
['z']
['i']
['e']
['', '']
...
Answer: So what's happening? Well, a call to `help` may shed some light.
>>> help(csv.reader)
reader(...)
    csv_reader = reader(iterable [, dialect='excel']
                        [optional keyword args])
    for row in csv_reader:
        process(row)

    The "iterable" argument can be any object that returns a line
    of input for each iteration, such as a file object or a list. The
    optional "dialect" parameter is discussed below. The function
    also accepts optional keyword arguments which override settings
    provided by the dialect.
So it appears that `csv.reader` expects an iterable of some kind that will
return one line per iteration, but we are passing a string, which iterates on a
per-character basis - which is why it's parsing character by character. One way
to fix this would be to write a temp file, but we don't need to; we just need
to pass _any_ suitable iterable object.
Note the following, which simply splits the string into a list of lines before
it's fed to the reader:
import csv
import requests

r = requests.get('http://vote.wa.gov/results/current/export/MediaResults.txt')
data = r.text
reader = csv.reader(data.splitlines(), delimiter='\t')
for row in reader:
    print row
This seems to work.
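The character-by-character failure mode is easy to reproduce in isolation, without the election data:

```python
import csv

# A string iterates per character, so each "row" is one character:
print(list(csv.reader("a\tb", delimiter='\t')))
# [['a'], ['', ''], ['b']]

# splitlines() hands the reader whole lines instead:
print(list(csv.reader("a\tb".splitlines(), delimiter='\t')))
# [['a', 'b']]
```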
I also recommend using `csv.DictReader`; it's quite useful:
>>> reader = csv.DictReader(data.splitlines(), delimiter='\t')
>>> for row in reader:
... print row
{'Votes': '417141', 'BallotName': 'Michael Baumgartner', 'RaceID': '2', 'RaceName': 'U.S. Senator', 'PartyName': '(Prefers Republican Party)', 'TotalBallotsCastByRace': '1387059', 'RaceJurisdictionTypeName': 'Federal', 'BallotID': '23036'}
{'Votes': '15005', 'BallotName': 'Will Baker', 'RaceID': '2', 'RaceName': 'U.S. Senator', 'PartyName': '(Prefers Reform Party)', 'TotalBallotsCastByRace': '1387059', 'RaceJurisdictionTypeName': 'Federal', 'BallotID': '27435'}
Basically it returns a dictionary for every row, using the header as the key.
This way we don't need to keep track of the column order, just the name, which
makes things a bit easier for us; i.e. `row['Votes']` seems more readable than
`row[4]`...
|
appending values to a list in python
Question: I am doing this:
def GetDistinctValues(theFile, theColumn):
    lines = theFile.split('\n')
    allValues = []
    for line in lines:
        allValues.append(line[theColumn-1])
    return list(set(allValues))
I am getting `string index out of range` on this line:
    allValues.append(line[theColumn-1])
Does anyone know what I am doing wrong?
Here's the complete code if needed:
import hashlib

def doStuff():
    createFiles('together.csv')

def readFile(fileName):
    a = open(fileName)
    fileContents = a.read()
    a.close()
    return fileContents

def GetDistinctValues(theFile, theColumn):
    lines = theFile.split('\n')
    allValues = []
    for line in lines:
        allValues.append(line[theColumn-1])
    return list(set(allValues))

def createFiles(inputFile):
    inputFileText = readFile(inputFile)
    b = inputFileText.split('\n')
    r = readFile('header.txt')
    DISTINCTCOLUMN = 12
    dValues = GetDistinctValues(inputFileText, DISTINCTCOLUMN)
    for uniqueValue in dValues:
        theHash = hashlib.sha224(uniqueValue).hexdigest()
        for x in b:
            if x[DISTINCTCOLUMN] == uniqueValue:
                x = x.replace(', ', ',').decode('latin-1', 'ignore')
                y = x.split(',')
                if len(y) < 3:
                    break
                elif len(y) > 3:
                    desc = ' '.join(y[3:])
                else:
                    desc = 'No description'
                # Replacing non-XML-allowed characters here (add more if needed)
                y[2] = y[2].replace('&', '&amp;')
                desc = desc.replace('&', '&amp;')
                r += '\n<Placemark><name>' + y[2].encode('utf-8', 'xmlcharrefreplace') + '</name>' \
                     '\n<description>' + desc.encode('utf-8', 'xmlcharrefreplace') + '</description>\n' \
                     '<Point><coordinates>' + y[0] + ',' + y[1] + '</coordinates></Point>\n</Placemark>'
        r += readFile('footer.txt')
        f = open(theHash, 'w')
        f.write(r)
        f.close()
Answer: That is happening because `line` doesn't have as many characters as the
code assumes. Try the following:
    for line in lines:
        if len(line) < theColumn:
            print "This line doesn't have enough characters:\n" + line
        else:
            allValues.append(line[theColumn-1])
    return list(set(allValues))
That will give you a hint. This is the type of error you get when trying to
access an element outside the range of a sequence, i.e. a non-existent element.
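A common reason for a too-short line with this pattern is the file's trailing newline: `split('\n')` produces a final empty string, and indexing an empty string raises exactly this error. A small sketch:

```python
the_file = "abcdefghij\nklmnopqrst\n"   # note the trailing newline
lines = the_file.split('\n')
print(lines)   # ['abcdefghij', 'klmnopqrst', ''] - the last "line" is empty

the_column = 5
# Guarding as in the answer skips the short/empty lines:
values = [line[the_column - 1] for line in lines if len(line) >= the_column]
print(values)  # ['e', 'o']
```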
|
Get output of Netcat in Pythonscript
Question: I need to get the output from netcat in python script. My current code is
response = subprocess.Popen(["netcat","smtp.myserver.net", "25"], stdout=subprocess.PIPE).stdout.read()
x = response.find("220 ***")
I want it to check for the `220 ***` status and work with that, but it seems to
wait for my input every time it runs in a loop. How can I open netcat, get its
result, close it, and move on?
Answer: # Don't do that.
Python has, built-in, an [smtp client
library](http://docs.python.org/library/smtplib.html) that can do this
trivially. Really. There's _no_ reason not to use it.
from smtplib import SMTP
SMTP('smtp.myserver.net')
It's that simple. Catch the exceptions you care about.
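If the point of the netcat call was to check for the `220` greeting specifically, `smtplib` exposes that too: `connect()` returns the status code and banner. The host below is the placeholder from the question, so this sketch simply reports whatever happens:

```python
import smtplib
import socket

try:
    server = smtplib.SMTP(timeout=10)
    code, banner = server.connect('smtp.myserver.net', 25)
    if code == 220:
        print('greeting received: %s' % banner)
    server.quit()
except (smtplib.SMTPException, socket.error) as err:
    print('could not talk to the server: %s' % err)
```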
### If you _really_ want to know what you were doing wrong,
Your program was waiting for you to type `quit`.
Solution? Close the writing-end first:
child = subprocess.Popen(["netcat", "smtp.myserver.net", "25"],
                         stdout=subprocess.PIPE,
                         stdin=subprocess.PIPE)
child.stdin.close()
response = child.stdout.read()
x = response.find("220 ***")
The reason you need to do this is because _by default_ standard input is still
connecting `netcat` to your terminal.
|
Does Mongodb have a special value that's ignored in queries?
Question: My web application runs on MongoDB, using python and pyMongo. I get this
scenario a lot - code that reads something like:
from pymongo import Connection

users = Connection().db.users

def findUsers(firstName=None, lastName=None, age=None):
    criteria = {}
    if firstName:
        criteria['firstName'] = firstName
    if lastName:
        criteria['lastName'] = lastName
    if age:
        criteria['age'] = age
    query = users.find(criteria)
    return query
I find it kind of messy that I need an `if` statement for every optional value
to figure out whether it needs to go into the search criteria. If only there
were a special query value that Mongo ignored in queries. Then my code could
look like this:
def findUsers(firstName=<ignored by mongo>, lastName=<ignored by mongo>, age=<ignored by mongo>):
    query = users.find({'firstName': firstName, 'lastName': lastName, 'age': age})
    return query
Now isn't that so much cleaner than before, especially if you have many more
optional parameters. Any parameters that aren't specified default to something
mongo just ignores. Is there any way to do this? Or at-least something more
concise than what I currently have?
Answer: You're probably better off filtering your empty values in Python. You don't
need a separate if-statement for each of your values. The local variables can
be accessed by `locals()`, so you can create a dictionary by filtering out all
keys with `None` value.
def findUsers(firstName=None, lastName=None, age=None):
    loc = locals()
    criteria = {k: loc[k] for k in loc if loc[k] != None}
    query = users.find(criteria)
Note that this syntax uses dictionary comprehensions, introduced in Python
2.7. If you're running an earlier version of Python, you need to replace that
one line with
criteria = dict((k, loc[k]) for k in loc if loc[k] != None)
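A variant of the same idea that avoids `locals()` altogether is to take the filters as keyword arguments and prune the `None` entries. Sketched here without a live MongoDB connection, so it returns the criteria dict that would be passed to `users.find()`:

```python
def find_users(**filters):
    # Drop entries whose value is None so absent filters never reach MongoDB.
    criteria = dict((k, v) for k, v in filters.items() if v is not None)
    return criteria  # real code would do: return users.find(criteria)

print(find_users(firstName='Ann', lastName=None, age=None))
# {'firstName': 'Ann'}
```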
|
How to run a shell in a separate process and get auto completions? (Python)
Question: I have a linux application that gets an input stream from some device. That
input should be directed to a shell process so that it emulates to the user a
standard shell. So far, I've done it by creating a process that runs '/bin/sh'
and I redirected its input, output, and stderr as follows:
import subprocess
p = subprocess.Popen(shell=False, args=['/bin/sh'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
_make_nonblocking(p.stdout) # sets O_NONBLOCK
_make_nonblocking(p.stderr)
When I just pass a command, everything works:
p.stdin.write('pwd\n')
p.stdout.read()
'/home/dave\n'
For auto completion, I tried writing:
p.stdin.write('ls s\t')
p.stdout.read()
IOError: [Errno 11] Resource temporarily unavailable
I expect to get a list of possible completions, but nothing happens until I put
'\n' in stdin. (Also, there is nothing waiting in stderr.)
I've looked through the telnetd code and saw the use of pty. I tried to use
pty.openpty() and set slave as stdin, but that didn't work either. How should
it be done?
UPDATE: I used the -i parameter as suggested. Now I have a problem that once I
use Popen and press ENTER, the Python shell moves to the background as follows:
>>> p = subprocess.Popen(shell=False, args=['/bin/sh', '-i'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> <ENTER>
[1]+ Stopped ipython
$
Answer: With `bash`, autocompletion only works in interactive mode:
    -i        If the -i option is present, the shell is interactive.
This will do proper emulation, including displaying the prompt and all the
usual stuff.
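To get completion behaviour without your own terminal being hijacked, the shell also needs to see a pseudo-terminal, as `telnetd` does. A rough sketch using the `pty` module (the fixed polling loop and `/bin/sh` path are simplifying assumptions, not production code):

```python
import os
import pty
import select
import subprocess

# Give the shell a pty so readline (and therefore tab completion) is active.
master, slave = pty.openpty()
proc = subprocess.Popen(['/bin/sh', '-i'],
                        stdin=slave, stdout=slave, stderr=slave,
                        start_new_session=True)  # its own session and ctty
os.close(slave)  # the child keeps its own copy of the slave end

os.write(master, b'echo hello\n')

# Poll for output instead of blocking forever on os.read().
output = ''
for _ in range(20):  # up to ~2 seconds
    ready, _, _ = select.select([master], [], [], 0.1)
    if ready:
        output += os.read(master, 4096).decode('utf-8', 'replace')
    if 'hello' in output:
        break
proc.terminate()
os.close(master)
print(output)
```

On Python 2, `start_new_session=True` is not available; `preexec_fn=os.setsid` achieves the same detachment from the parent's controlling terminal.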
|
How do I write global variables correctly in python?
Question: I am using sqlalchemy for connection pooling, and I want to make an engine
object available to the other modules. I created a module for utilities that
other modules need, and it looks like this:
from sqlalchemy import [...]

_engine = create_engine(url)
_meta = MetaData()
_meta.bind = _engine

def get_meta():
    return _meta

def get_engine():
    return _engine
I tried doing this before without the leading underscore, and it didn't work.
I was under the impression that the leading underscore was only a conventional
style for private variables in Python, but apparently it can affect the way
code is interpreted? Anyway, I'm just trying to have one particular live
engine object (which controls access to the database connection pool)
available to other modules and would like to know the best practice for doing
so, thank you.
Answer: From [PEP008](http://www.python.org/dev/peps/pep-0008/):
> `_single_leading_underscore`: weak "internal use" indicator. E.g. `from M
> import *` does not import objects whose name starts with an underscore.
So the interpreter does treat leading-underscore identifiers differently, but
only for `from M import *`; in every other context the underscore is purely a
naming convention.
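The one place the underscore changes behaviour can be shown in a few lines; here a throwaway module is written to a temp directory purely for the demonstration:

```python
import os
import sys
import tempfile

# Create a module with one "public" and one "internal" name.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'demo_mod.py'), 'w') as f:
    f.write('_engine = "internal"\nengine = "public"\n')
sys.path.insert(0, tmpdir)

ns = {}
exec('from demo_mod import *', ns)
print('engine' in ns)    # True  - picked up by the star import
print('_engine' in ns)   # False - skipped because of the underscore
```

A plain `import mymodule` followed by `mymodule._engine` still works fine, which is why sharing one live engine this way is a reasonable pattern.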
|
Scala mass URL download
Question: I've recently taken an interest in learning Scala (mostly working through
Project Euler at the moment), and I've decided on a simple program I want to
write: A simple concurrent downloader. I'm having some difficulty wrapping my
head around a good way to do it.
I know Python well, and I like the way the gevent library works. What I'd like
to do is solved with gevent [here](http://www.youtube.com/watch?v=UcKqyn-gUbY
"gevent example"). Is there a simple way to do something like this in Scala,
or am I totally heading down the wrong path? I've looked at the Dispatch
library, but it seems incomprehensible. Should I be using Actors?
Any help or guidance would be appreciated. Thanks.
Answer: Ok, I do agree that [Dispatch](http://dispatch.databinder.net/Dispatch.html)
documentation is a bit rough and small at the moment, but that'll probably
change in the future (and it is the situation with many great Scala libs).
But the result of applying Dispatch for your need is spectacular:
import dispatch._

(1 to 100).map{ i =>
  Http(url("http://bash.org/?" + i) OK as.String)
}.map{ f =>
  try {Some(f.apply)} catch {case e => println(e.getMessage); None}
}.seq.flatten
This would get you files for first 100 quotes from `bash.org`, in parallel.
|
Email datetime parsing with python
Question: I am trying to parse the date and time of an email using a Python
script. In the mail, the date value looks like this when I open the mail
details...
from: [email protected]
to: [email protected]
date: Tue, Aug 28, 2012 at 1:19 PM
subject: Subject of that mail
I am using code like
mail = email.message_from_string(str1)
#to = re.sub('</br>','',mail["To"])
to = parseaddr(mail.get('To'))[1]
sender = parseaddr(mail.get('From'))[1]
cc_is = parseaddr(mail.get('Cc'))[1]
date = mail["Date"]
print date
Whereas the output of the same mail's datetime using Python parsing is like
below, with a time offset:
Tue, 28 Aug 2012 02:49:13 -0500
Where I was actually hoping for:
Tue, Aug 28, 2012 at 1:19 PM
I am so confused about the relationship between these two values. Can anybody
help me figure it out? I need to get the same time as in the mail details.
Answer: When looking at an email in GMail, your local timezone is used when displaying
the date and time an email was sent. The "Tue, 28 Aug 2012 02:49:13 -0500" is
parsed, then updated to your local timezone, and formatted in a GMail-specific
manner.
## Parsing and formatting the stdlib way
The `email.utils` module includes a [`parsedate_tz()`
function](http://docs.python.org/library/email.util.html#email.utils.parsedate_tz)
that specifically deals with email headers with timezone offsets.
It returns a tuple compatible with
[`time.struct_time`](http://docs.python.org/library/time.html#time.struct_time),
but with a timezone offset added. An additional [`mktime_tz()`
function](https://docs.python.org/2/library/email.util.html#email.utils.mktime_tz)
converts that tuple to an offset value (time in seconds since the UNIX epoch).
This value then can be converted to a `datetime.datetime()` type object
easily.
The same module also has a [`formatdate()`
function](http://docs.python.org/library/email.util.html#email.utils.formatdate)
to convert the UNIX epoch timestamp to a email-compatible date string:
>>> from email.utils import parsedate_tz, mktime_tz, formatdate
>>> import time
>>> date = 'Tue, 28 Aug 2012 02:49:13 -0500'
>>> tt = parsedate_tz(date)
>>> timestamp = mktime_tz(tt)
>>> print formatdate(timestamp)
Tue, 28 Aug 2012 07:49:13 -0000
Now we have a formatted date in UTC suitable for outgoing emails. To have this
printed in my _local_ timezone (as determined by my computer) you need to set
the `localtime` flag to `True`:
>>> print formatdate(timestamp, True)
Tue, 28 Aug 2012 08:49:13 +0100
## Parsing and formatting using better tools
Note that things are getting hairy as we try and deal with timezones, and the
`formatdate()` function doesn't give you any options to format things a little
differently (like GMail does), nor does it let you choose a different timezone
to work with.
Enter the external [`python-dateutil` module](http://labix.org/python-
dateutil); it has a parse function that can handle just about anything, and
supports timezones properly:
>>> import dateutil.parser
>>> dt = dateutil.parser.parse(date)
>>> dt
datetime.datetime(2012, 8, 28, 2, 49, 13, tzinfo=tzoffset(None, -18000))
The `parse()` function returns a [`datetime.datetime()`
instance](http://docs.python.org/library/datetime.html#datetime-objects),
which makes formatting a lot easier. Now we can use the [`.strftime()`
function](http://docs.python.org/library/datetime.html#datetime.datetime.strftime)
to output this as your email client does:
>>> print dt.strftime('%a, %b %d, %Y at %I:%M %p')
Tue, Aug 28, 2012 at 02:49 AM
That's still in the email's original timezone, of course; to cast this to
another timezone instead, use the [`.astimezone()`
method](http://docs.python.org/library/datetime.html#datetime.datetime.astimezone),
with a new `tzinfo` object. The `python-dateutil` package has some handy ones
for us.
Here is how you print it in the _local_ timezone (to your machine):
>>> import dateutil.tz
>>> print dt.astimezone(dateutil.tz.tzlocal()).strftime('%a, %b %d, %Y at %I:%M %p')
Tue, Aug 28, 2012 at 09:49 AM
or use a specific timezone instead:
>>> print dt.astimezone(dateutil.tz.gettz('Asia/Kolkata')).strftime('%a, %b %d, %Y at %I:%M %p')
Tue, Aug 28, 2012 at 01:19 PM
|
Importing PyVISA via Py2exe
Question: My script file test.py is:
try:
    from visa import *
except Exception as ex:
    print ex
raw_input()
My setup.py file is:
from distutils.core import setup
import py2exe
setup(console=['test.py'])
I then run:
> python setup.py py2exe
If I then launch test.exe on my PC I see no issues, but if I copy it to a new
PC with no prior Python installs I get:
> [Error 126] The specified module could not be found
I don't know how to get the full traceback to print. Has anyone else seen this
issue? Running Python 2.7, PyVISA 1.4. Both machines are Win7.
Answer: It turns out the problem was that the VISA libraries that PyVISA relies
on weren't installed on the test PC. These libraries don't come included with
PyVISA.
|
Python Unit Test - compare content of iterables in Python < 2.7
Question: How can I check if an array of a custom class is sorted in the order
that I want? I know that there is `assertSequenceEqual` in Python 2.7 and up,
but I'm using Python 2.6, and while I might be able to do this using a
for-loop... is there something better in Python < 2.7 that I can use?
actual_sorted = list.sort()
expected_sorted = [MyClass("wed", 20),
                   MyClass("wed", 19),
                   MyClass("thu", 18),
                   MyClass("wed", 17)]
self.assertEqual(actual_sorted, expected_sorted) # Well, this doesn't work.
Thank you in advance!
Answer: As BrenBarn said, you have to add an `__eq__` method. I was writing a full
example in parallel before I saw his answer:
import unittest

class MyClass(object):
    def __init__(self, day_name, day_num):
        self.day_name = day_name
        self.day_num = day_num

    def __eq__(self, other):
        return other.day_name == self.day_name and other.day_num == self.day_num

class ATestCase(unittest.TestCase):
    def test_foo(self):
        actual_sorted = [MyClass("wed", 20),
                         MyClass("wed", 19),
                         MyClass("thu", 18),
                         MyClass("wed", 17)]
        expected_sorted = [MyClass("wed", 20),
                           MyClass("wed", 19),
                           MyClass("thu", 18),
                           MyClass("wed", 17)]
        self.assertEqual(actual_sorted, expected_sorted)

if __name__ == '__main__':
    unittest.main()
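One caveat for Python 2 (the asker is on 2.6): defining `__eq__` does not automatically give you `__ne__`, so `!=` would still fall back to identity comparison. Adding the mirror method keeps both operators consistent:

```python
class MyClass(object):
    def __init__(self, day_name, day_num):
        self.day_name = day_name
        self.day_num = day_num

    def __eq__(self, other):
        return (self.day_name, self.day_num) == (other.day_name, other.day_num)

    def __ne__(self, other):
        # On Python 2 this is NOT derived from __eq__ automatically.
        return not self.__eq__(other)

print(MyClass("wed", 20) == MyClass("wed", 20))  # True
print(MyClass("wed", 20) != MyClass("thu", 18))  # True
```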
|
In python using Flask, how can I write out an object for download?
Question: I'm using Flask and running foreman. I have data that I've
constructed in memory and I want the user to be able to download this data as a
text file. I don't want to write the data out to a file on the local disk and
make that available for download.
I'm new to Python. I thought I'd create some file object in memory and then set
a response header, maybe?
Answer: Streaming files to the client without saving them to disk is covered in the
"pattern" section of Flask's docs - specifically, [in the section on
streaming](http://flask.pocoo.org/docs/patterns/streaming/). Basically, what
you do is return a fully-fledged
[`Response`](http://flask.pocoo.org/docs/api/#flask.Response) object wrapping
your iterator:
from flask import Response

# construct your app

@app.route("/get-file")
def get_file():
    results = generate_file_data()
    generator = (cell for row in results
                      for cell in row)
    return Response(generator,
                    mimetype="text/plain",
                    headers={"Content-Disposition":
                             "attachment;filename=test.txt"})
|
Setting folder permissions in Windows using Python
Question: I'm using Python to create a new personal folder when a users AD account is
created. The folder is being created but the permissions are not correct. Can
Python add the user to the newly created folder and change their permissions?
I'm not sure where to begin coding this.
Answer: You want the `win32security` module, which is a part of
[pywin32](http://sourceforge.net/projects/pywin32/). Here's [an
example](http://timgolden.me.uk/python/win32_how_do_i/add-security-to-a-
file.html) of doing the sort of thing you want to do.
The example creates a new DACL for the file and replaces the old one, but it's
easy to modify the existing one; all you need to do is get the existing DACL
from the security descriptor instead of creating an empty one, like so:
import win32security
import ntsecuritycon as con
FILENAME = "whatever"
userx, domain, type = win32security.LookupAccountName ("", "User X")
usery, domain, type = win32security.LookupAccountName ("", "User Y")
sd = win32security.GetFileSecurity(FILENAME, win32security.DACL_SECURITY_INFORMATION)
dacl = sd.GetSecurityDescriptorDacl() # instead of dacl = win32security.ACL()
dacl.AddAccessAllowedAce(win32security.ACL_REVISION, con.FILE_GENERIC_READ | con.FILE_GENERIC_WRITE, userx)
dacl.AddAccessAllowedAce(win32security.ACL_REVISION, con.FILE_ALL_ACCESS, usery)
sd.SetSecurityDescriptorDacl(1, dacl, 0) # may not be necessary
win32security.SetFileSecurity(FILENAME, win32security.DACL_SECURITY_INFORMATION, sd)
|
my python modules (matplotlib, numpy) don't work
Question: I'm using Mac Mountain Lion (10.8.1) and Python (2.7.3), and I want to use the
matplotlib and numpy modules to draw a graph. But there doesn't seem to be a
Mountain Lion build of matplotlib or numpy. I've tested with matplotlib (1.1.1)
and numpy (1.7.0b1); they installed, but importing fails as shown below. I just
need a graph. Can a Mountain Lion user draw a graph with Python?
>>>import numpy
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import numpy
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/__init__.py", line 137, in <module>
import add_newdocs
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/add_newdocs.py", line 9, in <module>
    from numpy.lib import add_newdoc
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/__init__.py", line 4, in <module>
from type_check import *
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/type_check.py", line 8, in <module>
import numpy.core.numeric as _nx
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/__init__.py", line 5, in <module>
import multiarray
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so, 2): no suitable image found. Did find:
    /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/core/multiarray.so: no matching architecture in universal wrapper
Answer: There isn't a specific "for Mountain Lion" numpy or matplotlib. The only
compatibility consideration would be the python version, which in your case is
fine. My guess is that you just need to force it to build in 64bit only:
ARCHFLAGS="-arch x86_64" pip install numpy
|
OpenCV Save Image Dramatically Increases File Size of .tif
Question: I'm using [OpenCV](http://opencv.willowgarage.com/wiki/) in Python to find
images within other images using the
[MatchTemplate](http://opencv.willowgarage.com/documentation/cpp/object_detection.html)
method. I've noticed that when I load a .tif file and then re-save it, even if I
have not altered the image in OpenCV, the file size increases
dramatically, e.g., from 68 KB to 25 MB.
I wrote a very basic script to verify that this happens without any other code
modifying the image.
import cv, sys
filename = sys.argv[1]
image = cv.LoadImage(filename)
cv.SaveImage('test.tif', image)
I didn't see any options for optimizing the image. Any ideas on how I might be
able to maintain the original file size within OpenCV or is this just a matter
of compressing after the fact?
Answer: It could be that the source is compressed tif while the rewritten one is not
compressed.
See <https://code.ros.org/trac/opencv/ticket/1080> for a potential solution
|
Open and reading multiple pcap files
Question: I'm trying to find out how I can use the dpkt module to open **multiple**
.pcap files and read them at the **same** time. After much googling and many
long hours, the examples which I manage to find only shows how you can open
and read 1 .pcap file.
I've tried using more than one for loop, and zipping the files in a list,
but to no avail. I get an error: ValueError: need more than 1 value to
unpack. Any suggestions? Here is my current Python script:
import dpkt, socket, glob, pcap, os
files = [open(f) for f in glob.glob('*.pcap')]
abc = dpkt.pcap.Reader(file("abc.pcap", "rb"))
fgh = dpkt.pcap.Reader(file("fgh.pcap", "rb"))
print files
print "\r\n"
List = [abc, fgh]
for ts, data in zip(List):
eth = dpkt.ethernet.Ethernet(data)
ip = eth.data
tcp = ip.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
if tcp.dport == 80 and len(tcp.data) > 0:
http = dpkt.http.Request(tcp.data)
print "-------------------"
print "HTTP Request /", http.version
print "-------------------"
print "Type: ", http.method
print "URI: ", http.uri
print "User-Agent: ", http.headers ['user-agent']
print "Source: ", src
print "Destination: ", dst
print "\r\n"
**EDIT://**
Hey, thanks for all the suggestions. In order to simplify the process, I've
modified my code to open .txt files for now. My code is found below as
indicated. There is no error, as shown in the output, but how do I get rid of
the newline symbols '\n', the brackets, and the single quotes when I print the
output?
**Code:**
import glob
fileList = [glob.glob('*.txt')]
for files in fileList:
print "Files present:",files
print ""
a = open("1.txt", 'r')
b = open("2.txt", 'r')
List = [a,b]
for line in zip(*List):
print line
**Output:**
>Files present: ['2.txt', '1.txt']
>
>('This is content from the FIRST .txt file\n', 'This is content from the SECOND .txt file\n')
>('\n', '\n')
>('Protocol: Testing\n', 'Protocol: PCAP\n')
>('Version: 1.0\n', 'Version: 2.0\n')
Answer: `zip()` takes each thing to iterate over as a separate argument. Note that each
reader already yields (ts, data) pairs, so each zipped item is a pair of pairs:
    for (ts_abc, data_abc), (ts_fgh, data_fgh) in zip(abc, fgh):
        # ... each reader contributes its own (timestamp, data) pair
By making the list first, you are only giving `zip()` one thing to iterate
over; that thing just happens to contain things that can be iterated over.
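To answer the edit: each `line` produced by `zip(*List)` is a tuple of strings that still carry their trailing newlines, and printing a tuple shows the brackets and quotes of its `repr`. Stripping each string and joining them avoids both; a small sketch, with plain strings standing in for the file lines:

```python
# Each zipped item is a tuple of raw lines; strip the newlines and
# join the pieces so print shows plain text instead of a tuple repr.
line = ('Protocol: Testing\n', 'Protocol: PCAP\n')
cleaned = ' | '.join(part.strip() for part in line)
print(cleaned)
```

The separator string (here `' | '`) is just a placeholder; use whatever delimiter suits your output.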
|
Reverse IP Lookup with Python
Question: How can I look up all hosts hosted on an IP address? I have checked Bing's API,
but I don't think they provide a free API key anymore to make a query with an IP
address. Google would probably block me after searching the first 2-3 pages. I was
also looking at the [shodanhq
api](http://docs.shodanhq.com/python/reference.html), but I think Shodan
doesn't support a reverse lookup!
I am using python 2.7 on Windows.
Answer: Maybe this script is for you:
import urllib2
import socket
import sys
import re
class reverseip(object):
def __init__(self, server='http://www.ip-adress.com/reverse_ip/'):
t= """ Tool made by: LeXeL lexelEZ[at]gmail[dot]com """
print t
try:
self.site = raw_input("Enter site to start scan: ")
self.fileout = raw_input("Enter logfile name: ")
except KeyboardInterrupt:
print "\n\nError: Aborted"
sys.exit(1)
self.server = server
self.ua = "Mozilla/5.0 (compatible; Konqueror/3.5.8; Linux)"
self.h = {"User-Agent": self.ua}
self.write = True
try:
outp = open(self.fileout, "w+")
outp.write(t)
outp.close()
except IOError:
print '\n Failed to write to %s' % (self.fileout)
print '\n Continuing without writing'
self.write = False
def url(self):
r = urllib2.Request('%s%s' % (self.server, self.site), headers=self.h)
f = urllib2.urlopen(r)
self.source = f.read()
def getip(self):
try:
ip = socket.gethostbyname(self.site)
except IOError, e:
print "Error: %s " %(e)
else:
print "\t\nScanning ip %s \n\n" %(ip)
def whoami(self):
found = re.findall("href=\"/whois/\S+\">Whois</a>]",self.source)
for i in found:
i = i.replace("href=\"/whois/","")
i = i.replace("\">Whois</a>]","")
print "\t%s " % (i)
if self.write:
try:
outp = open(self.fileout, "a")
outp.write('%s\n' % (i))
outp.close()
except IOError:
print '\n Failed to write'
sys.exit(1)
if __name__ == '__main__':
p = reverseip()
p.url()
p.getip()
p.whoami()
With tiny modifications you can get what you want. Tell me what you
think, and let me know if I can help more. Thanks!
|
Python Paramiko: verifying SSH host key fingerprints manually
Question: I am using python Paramiko to connect using ssh to a remote ubuntu box hosted
on a vps provider. Using a windows 7 based client machine, I am able to
connect as follows:
import paramiko
import binascii
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='HOSTNAME', username='USERNAME', password='PASSWORD')
This is all good, but now I want to verify the host server's identity, and
because I'm on Windows, Paramiko won't be able to fetch the known_hosts file
or anything like it. I tried the following code:
#... after connection is successful
keys = ssh.get_host_keys()
key = keys['HOSTNAME']['ssh-rsa']
print binascii.hexlify(key.get_fingerprint())
# key.get_fingerprint() returns the md5 hash of
# the public part of the key (whatever that means)
which is giving an output similar to the following:
a42273f83e62d65cc87231a2ba33eff3
The thing is, on my VPS provider's cpanel, I have the RSA and DSA host key
fingerprints listed as something like:
RSA 1b:c2:f4:8f:f2:86:fc:f2:96:ba:cc:24:41:e9:d7:86
DSA 36:b9:1f:ad:53:b5:c4:38:78:bf:cb:9d:38:fa:44:ce
and as can be seen none of the fingerprints match the generated one. How can I
compare my manually generated fingerprint to the fingerprint values on the
remote host's cpanel? Is what I'm doing correct?
Answer: Solved: it turned out that the RSA key fingerprint displayed in the
control panel isn't for the VPS host; it's for an out-of-band special console.
The actual key of the VPS host is located at
/etc/ssh/ssh_host_rsa_key.pub, and the fingerprint value (its MD5 hash) is
exactly what I was getting from the client-side verification code. So there
wasn't any problem to begin with.
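For completeness, the colon-separated form shown in such control panels is just the MD5 digest printed in hex pairs; a small sketch of the conversion (the key blob here is a stand-in, not a real key):

```python
import hashlib

def colon_fingerprint(key_bytes):
    # MD5 of the public key blob, formatted aa:bb:cc:... like ssh tools print it
    digest = hashlib.md5(key_bytes).hexdigest()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

print(colon_fingerprint(b'abc'))
```

With Paramiko, you would feed `key.get_fingerprint()` (which already returns the raw MD5 bytes) through `binascii.hexlify` and the same colon-joining step instead of hashing again.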
|
How to retrieve the value using a regular expression in Python?
Question: I wrote code like this:
print re.findall(r'(<td width="[0-9]+[%]?" align="(.+)">|<td align="(.+)"> width="[0-9]+[%]?")([ \n\t\r]*)([0-9,]+\.[0-9]+)([ \n\t\r]*)([&]?[a-zA-Z]+[;]?)([ \n\t\r]*)<span class="(.+)">',r.text,re.MULTILINE)
to get this line:
<td width="47%" align="left">556.348 <span class="uccResCde">
I want the value 556.348. How I can get it using [regular
expressions](http://en.wikipedia.org/wiki/Regular_expression)?
Answer: A straight cut and paste from [the HTMLParser
documentation](http://docs.python.org/library/htmlparser.html) will get you
the data out of the tags, but not using regular expressions.
from HTMLParser import HTMLParser
# Create a subclass and override the handler methods
class MyHTMLParser(HTMLParser):
def handle_starttag(self, tag, attrs):
print "Encountered a start tag:", tag
def handle_endtag(self, tag):
print "Encountered an end tag :", tag
def handle_data(self, data):
print "Encountered some data :", data
    # Instantiate the parser and feed it some HTML
parser = MyHTMLParser()
parser.feed('<td width="47%" align="left">556.348 <span class="uccResCde">')
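That said, if the input really is a single known line, a much smaller regular expression than the one in the question will pull the number out. This is only a sketch and is brittle against markup changes, which is why the parser approach above is safer:

```python
import re

# One <td ...> tag followed by the numeric value we want to capture.
html = '<td width="47%" align="left">556.348 <span class="uccResCde">'
m = re.search(r'<td[^>]*>\s*([\d.,]+)', html)
print(m.group(1))  # 556.348
```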
|
Python: List all the file names in a directory and its subdirectories and then print the results in a txt file
Question: My problem is as follows. I want to list all the file names in my directory
and its subdirectories and have that output printed in a txt file. Now this is
the code I have so far:
import os
for path, subdirs, files in os.walk('\Users\user\Desktop\Test_Py'):
for filename in files:
f = os.path.join(path, filename)
a = open("output.txt", "w")
a.write(str(f))
This lists the names of the files in the folders (there are 6) but each new
file overwrites the old so there is only one file name in the output.txt file
at any given time. How do I change this code so that it writes all of the file
names in the output.txt file?
Answer: Don't open the file inside your `for` loop; open it once before the loop,
like this:
import os
a = open("output.txt", "w")
for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
for filename in files:
f = os.path.join(path, filename)
a.write(str(f) + os.linesep)
Or using a context manager (which is better practice):
import os
with open("output.txt", "w") as a:
for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
for filename in files:
f = os.path.join(path, filename)
a.write(str(f) + os.linesep)
|
authGSSServerInit looks for wrong entry from keytab
Question: I am attempting to initialize a context for GSSAPI server-side authentication,
using python-kerberos (1.0.90-3.el6). **My problem is that
myserver.localdomain gets converted to myserver - a part of my given principal
gets chopped off somewhere. Why does this happen?**
Example failure:
>>> import kerberos
>>> kerberos.authGSSServerInit("[email protected]")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
kerberos.GSSError: (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Unknown error', 0))
>>>
With the help of KRB5_TRACE I get the reason:
[1257] 1346344556.406343: Retrieving HTTP/myserver@LOCALDOMAIN from WRFILE:/etc/krb5.keytab (vno 0, enctype 0) with result: -1765328203/No key table entry found for HTTP/myserver@LOCALDOMAIN
I can not generate the keytab for plain HTTP/myserver@LOCALDOMAIN because it
would force also the users to access the server with such address. I need to
get the function to work with the proper FQDN name. As far as I can see
authGSSServerInit is supposed to work with the FQDN without mutilating it.
I think the python-kerberos method calls the following krb5-libs (1.9-33.el6)
provided functions, the problem might be also in those:
maj_stat = gss_import_name(&min_stat, &name_token, GSS_C_NT_HOSTBASED_SERVICE, &state->server_name);
maj_stat = gss_acquire_cred(&min_stat, state->server_name,GSS_C_INDEFINITE,GSS_C_NO_OID_SET, GSS_C_ACCEPT, &state->server_creds, NULL, NULL);
Kerberos is properly configured on this host, and confirmed to work. I can,
for instance, kinit as a user and authenticate with the tickets. It is just
authGSSServerInit that fails to function properly.
Answer: Some of the documentation is misleading:
def authGSSServerInit(service):
"""
Initializes a context for GSSAPI server-side authentication with the given service principal.
authGSSServerClean must be called after this function returns an OK result to dispose of
the context once all GSSAPI operations are complete.
@param service: a string containing the service principal in the form 'type@fqdn'
(e.g. '[email protected]').
@return: a tuple of (result, context) where result is the result code (see above) and
context is an opaque value that will need to be passed to subsequent functions.
"""
In fact the API expects only the service type, for instance "HTTP". The rest of the
principal gets generated with the help of resolver(3). Although the rest of
the Kerberos stack is happy using short names, the resolver generates the FQDN, but
only if dnsdomainname is properly set.
|
Using ZeroMQ in a PHP script inside Apache
Question: I want to use the ZeroMQ publisher/subscriber to dispatch data from my web
application to multiple servers.
I use Apache and PHP for the web app; my PHP script works as follows:
//Initialization
$context = new ZMQContext();
$publisher = $context->getSocket(ZMQ::SOCKET_PUB);
$publisher->bind("tcp://*:5556");
//Then publishing for testing:
$publisher->send("test");
$publisher->send("test");
$publisher->send("test");
$publisher->send("test");
$publisher->send("test");
For testing I adapted a subscriber from the documentation in python:
import sys
import zmq
# Socket to talk to server
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect ("tcp://localhost:5556")
# Subscribe to zipcode, default is NYC, 10001
socket.setsockopt(zmq.SUBSCRIBE, "")
print "Waiting..."
# Process 5 updates
for update_nbr in range (5):
string = socket.recv()
print string
The whole thing works when I run the PHP script from the command line, but does not
work through Apache (when the script is run through a web browser).
Is there anything I should do to my Apache configuration to make it work?
Thanks
Alexandre
Answer: It seems that the only problem was that the connection didn't have time to be
established.
Adding a sleep on the publisher after the binding and before the send did
resolve the issue, even though not quite elegantly.
The issue is explained here:
<http://zguide.zeromq.org/page:all#Getting-the-Message-Out>
|
u'Too' u'much' u'unicode' u'returned'
Question: I have an api which I'm putting things into and out of in a natural language
processing context, using json.
_Everything_ is coming out as unicode. For example, if I retrieve a list of
words from my API, every single word is u''. This is what the JSON output
looks like after printing to a file:
{u'words': [u'every', u'single', u'word']}
I must clarify that in the terminal everything looks good, just not when I
print the output to a file.
I haven't figured out yet if this is preferable default behavior or if I need
to do something along the way to make this plain, or what. The outputs are
going to used with languages other than python, other contexts where they need
to be readable and/or parseable.
So clearly I don't have a grasp on Python & unicode, or on how and where this
conversion is happening.
1. Is this preferable when dealing with json? Should I not worry about it?
2. How do I turn this off, or what extra step do I take (I've already tried but can't figure out exactly where this happens) to make this less of a nuisance?
I have much to learn, so any input is appreciated.
EDIT: all the input has been useful, thank you.
I was under the mistaken notion that jsonify did more than it actually does I
guess. If I do json.dumps earlier in my task chain, I get actual json on the
other end.
Answer: There is nothing wrong with this, and you don't need to do anything about it.
In Python 2, a `str` is similar to a C string - it's just a sequence of bytes,
sometimes incorrectly assumed to be ASCII text. It can contain _encoded_ text,
e.g. as UTF-8 or ASCII.
The `unicode` type represents an actual string of text, similar to a Java
`String`. It is text in the abstract sense, not tied to a particular encoding.
You can decode a `str` into `unicode`, or encode a `unicode` into a `str`.
JSON keys and values are strings - they are not byte arrays, but text - so
they are represented by `unicode` objects in Python.
If you need JSON output for use in another language, use the `json` module to
produce it from your dictionary:
>>> import json
>>> print json.dumps({u'words': [u'every', u'single', u'word']})
{"words": ["every", "single", "word"]}
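To see the difference concretely: `json.dumps` always emits plain, double-quoted JSON text, regardless of the `u''` prefixes that Python's `repr` shows for the dictionary. A minimal round-trip sketch:

```python
import json

data = {u'words': [u'every', u'single', u'word']}
encoded = json.dumps(data)     # a plain string of valid JSON
decoded = json.loads(encoded)  # back to Python objects
print(encoded)
```

So the `u''` output in the question simply means the dictionary was printed with `repr` (or `str`) instead of being serialized with `json.dumps` first.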
|
How to install Django 1.4 with Python 3.2.3 in Debian?
Question: I installed Python 3.2.3 in Debian /usr/local/bin/python3 and I installed
Django 1.4 in the same directory. But when I try to import django from the Python
3 shell interpreter I get a syntax error! What am I doing wrong?
Answer: Django does not support Python 3. You will need to install a version of Python
2.x.
|
In Python, how do I inspect and then re-raise an exception while maintaining the original call stack?
Question: I've got a situation where I'm catching a specific exception type, inspecting
the exception's message to check if it's actually an exception I want to
catch, and then re-raising the exception if not:
try:
# do something exception-prone
except FooException as e:
if e.message == 'Something I want to handle':
# handle the exception
else:
raise e
This works fine, with one problem. In the case I re-raise the exception, that
exception now occurs at the line I re-raised it (i.e. at `raise e`), rather
than at the location the exception originally occurred. This isn't ideal for
debugging, where you want to know where the original exception happened.
Thus my question: is there any way to re-raise or otherwise "pass on" an
exception after catching it while maintaining the original exception location?
NOTE: In case you are wondering what the actual situation is: I'm dynamically
importing some modules using `__import__`. I'm catching `ImportError` to
gracefully deal with the case that any of these modules do not exist. However,
in the case that any of these modules themselves contain an import statement
that raises `ImportError`, I want those "real" (from the point of view of my
application) exceptions to be raised -- and at the original location as far as
debugging tools are concerned.
Answer: Just do:
raise
instead of `raise e`. See the tutorial section on [raising
exceptions](http://docs.python.org/tutorial/errors.html#raising-exceptions),
and also the [language reference on `raise`
statements](http://docs.python.org/reference/simple_stmts.html#raise):
> If no expressions are present, raise re-raises the last exception that was
> active in the current scope. If no exception is active in the current scope,
> a TypeError exception is raised indicating that this is an error (if running
> under IDLE, a Queue.Empty exception is raised instead).
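A small sketch showing that a bare `raise` preserves the original frame in the traceback. (Under Python 2 semantics `raise e` loses it; in Python 3 both keep the traceback, but bare `raise` is still the idiomatic form.)

```python
import traceback

def inner():
    raise ValueError('something else')

def outer():
    try:
        inner()
    except ValueError as e:
        if str(e) == 'something I want to handle':
            pass  # handle it here
        else:
            raise  # bare raise: re-raise with the original traceback

try:
    outer()
except ValueError:
    tb = traceback.format_exc()

# The traceback still points into inner(), where the error really happened.
print('inner' in tb)
```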
|
Main Window resize in Python with Qt
Question: I have a gui in `ui_main.py` file (designed in Qt Designer). I load it in my
main `*.py` file in this way:
from PyQt4 import QtGui, QtCore
from ui_main import Ui_MainWindow
class MyForm(QtGui.QMainWindow):
def __init__(self, parent=None):
QtGui.QMainWindow.__init__(self)
self.ui = Ui_MainWindow()
self.ui.setupUi(self)
...
ui_main.py file:
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName(_fromUtf8("MainWindow"))
MainWindow.resize(667, 559)
I need to resize my main window during application start. My main problem is
that I can't get access to the main window object.
self.ui.MainWindow.resize(300,300)
AttributeError: 'Ui_MainWindow' object has no attribute 'MainWindow'
If I add to `ui_main.py` file:
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
self.uiui = MainWindow
I can resize my main window with:
self.ui.uiui.resize(300,300)
But I know that it's very bad to edit a generated ui file. How can I resize my
main window without editing the gui file?
Answer: The ui module created by `pyuic` (via Qt Designer) is just a simple helper
class with a couple of initialization methods.
Both methods take an instance of the top-level ui class from Qt Designer
(usually a `QMainWindow`), and then _add_ the all the ui elements _to_ that
instance. This ensures that everything created in Qt Designer is accessible
from the top-level widget.
So your `MyForm` subclass passes `self` (an instance of `QMainWindow`) to
`setupUi`, which then becomes the `MainWindow` variable you see in your ui
file.
This means you can resize your main window by simply doing:
self.resize(300, 300)
|
Matrix creation in python
Question: In a block of code I found the following thing
M = [3,4,5]
from math import *
class matrix:
# implements basic operations of a matrix class
def __init__(self, value):
self.value = value
self.dimx = len(value)
self.dimy = len(value[0])
if value == [[]]:
self.dimx = 0
def zero(self, dimx, dimy):
# check if valid dimensions
if dimx < 1 or dimy < 1:
raise ValueError, "Invalid size of matrix"
else:
self.dimx = dimx
self.dimy = dimy
self.value = [[0 for row in range(dimy)] for col in range(dimx)]
def identity(self, dim):
# check if valid dimension
if dim < 1:
raise ValueError, "Invalid size of matrix"
else:
self.dimx = dim
self.dimy = dim
self.value = [[0 for row in range(dim)] for col in range(dim)]
for i in range(dim):
self.value[i][i] = 1
def show(self):
for i in range(self.dimx):
print self.value[i]
print ' '
def __add__(self, other):
# check if correct dimensions
if self.dimx != other.dimx or self.dimy != other.dimy:
raise ValueError, "Matrices must be of equal dimensions to add"
else:
# add if correct dimensions
res = matrix([[]])
res.zero(self.dimx, self.dimy)
for i in range(self.dimx):
for j in range(self.dimy):
res.value[i][j] = self.value[i][j] + other.value[i][j]
return res
def __sub__(self, other):
# check if correct dimensions
if self.dimx != other.dimx or self.dimy != other.dimy:
raise ValueError, "Matrices must be of equal dimensions to subtract"
else:
# subtract if correct dimensions
res = matrix([[]])
res.zero(self.dimx, self.dimy)
for i in range(self.dimx):
for j in range(self.dimy):
res.value[i][j] = self.value[i][j] - other.value[i][j]
return res
def __mul__(self, other):
# check if correct dimensions
if self.dimy != other.dimx:
raise ValueError, "Matrices must be m*n and n*p to multiply"
else:
# subtract if correct dimensions
res = matrix([[]])
res.zero(self.dimx, other.dimy)
for i in range(self.dimx):
for j in range(other.dimy):
for k in range(self.dimy):
res.value[i][j] += self.value[i][k] * other.value[k][j]
return res
def transpose(self):
# compute transpose
res = matrix([[]])
res.zero(self.dimy, self.dimx)
for i in range(self.dimx):
for j in range(self.dimy):
res.value[j][i] = self.value[i][j]
return res
# Thanks to Ernesto P. Adorio for use of Cholesky and CholeskyInverse functions
def Cholesky(self, ztol=1.0e-5):
# Computes the upper triangular Cholesky factorization of
# a positive definite matrix.
res = matrix([[]])
res.zero(self.dimx, self.dimx)
for i in range(self.dimx):
S = sum([(res.value[k][i])**2 for k in range(i)])
d = self.value[i][i] - S
if abs(d) < ztol:
res.value[i][i] = 0.0
else:
if d < 0.0:
raise ValueError, "Matrix not positive-definite"
res.value[i][i] = sqrt(d)
for j in range(i+1, self.dimx):
S = sum([res.value[k][i] * res.value[k][j] for k in range(self.dimx)])
if abs(S) < ztol:
S = 0.0
res.value[i][j] = (self.value[i][j] - S)/res.value[i][i]
return res
def CholeskyInverse(self):
# Computes inverse of matrix given its Cholesky upper Triangular
# decomposition of matrix.
res = matrix([[]])
res.zero(self.dimx, self.dimx)
# Backward step for inverse.
for j in reversed(range(self.dimx)):
tjj = self.value[j][j]
S = sum([self.value[j][k]*res.value[j][k] for k in range(j+1, self.dimx)])
res.value[j][j] = 1.0/tjj**2 - S/tjj
for i in reversed(range(j)):
res.value[j][i] = res.value[i][j] = -sum([self.value[i][k]*res.value[k][j] for k in range(i+1, self.dimx)])/self.value[i][i]
return res
def inverse(self):
aux = self.Cholesky()
res = aux.CholeskyInverse()
return res
def __repr__(self):
return repr(self.value)
for n in range(len(M)):
Z = matrix([[M[n]]])
Z.show()
Now If I run the code I got the following output:
[3]
[4]
[5]
Now I don't understand what the output means or how to interpret it.
Specifically, I could not understand the following line of code:
Z = matrix([[M[n]]])
Can anyone please explain the output of the code, and the single line above?
Answer: The code creates three 1x1 matrices, i.e. three matrices where each matrix
contains a single element, and prints them. That's the `[3]`, `[4]`, `[5]`
you're seeing: three 1x1 matrices.
For the `[[M[n]]]`: The matrix constructor expects a value for the matrix,
which is a two-dimensional array. This explains the `[[ .. ]]`. You could
construct a 2x2 unit matrix by calling
data = [ [1,0], [0,1] ]
matrix(data)
(That is a list that contains two other lists, each of which have two
elements.)
In this case, the matrices are initialised with a single element each, which
happens to be `M[n]`.
The code could be simplified to:
for n in M:
Z = matrix([[n]])
Z.show()
which makes it easier to read.
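The nesting can also be checked directly, without the class; a tiny sketch of what `[[M[n]]]` builds:

```python
# [[M[n]]] wraps a single number in a row, and that row in a value
# list -- the 1x1 backing structure the matrix constructor expects.
M = [3, 4, 5]
value = [[M[0]]]
print(value)                       # one row containing one element
print(len(value), len(value[0]))   # dimx and dimy are both 1
```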
|
Python - problems with shared variable
Question: I'd like to use a shared queue from multiple threads and modules. I have the
following Python code:
# moda.py
import queue
import modb
q = queue.Queue()
def myPut(x):
q.put(x)
def main():
print('moda:', str(id(q)))
modb.go()
q.get()
if __name__ == '__main__':
main()
and
# modb.py
import moda
import threading
def something():
print('modb:', str(id(moda.q)))
moda.myPut('hi')
def go():
threading.Thread(target = something).start()
`main` prints the id of `q` on the main thread, and `something` prints it on a
second thread. The ids of `q` are different in these two methods - which is
why the call to `get` never returns. How can I avoid this? Is it because of the
cyclic import or because of multithreading?
Answer: [The link](http://stackoverflow.com/a/744403/222914) posted by Austin Phillips
in the comments has the answer:
> Finally, the executing script runs in a module named __main__, importing the
> script under its own name will create a new module unrelated to __main__.
So, `__main__.q` and `moda.q` (as imported into `modb`) are two different
objects.
One way to make it work is to create a separate main module like this and run
it instead of `moda`:
# modmain.py
import moda
if __name__ == '__main__':
moda.main()
However, you should still consider putting `q` and other shared stuff into a
new module that you import into both `moda` and `modb` to avoid some other
pitfalls.
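As a sketch of the "separate shared module" fix: keep the queue in one place and let both sides refer to that single object - then a put on one thread unblocks a get on another. Simulated here in one file, with the queue passed explicitly instead of imported:

```python
import queue
import threading

# One shared Queue object, referenced by both threads -- the moral
# equivalent of putting q in its own module that moda and modb import.
q = queue.Queue()

def producer(shared_q):
    shared_q.put('hi')

t = threading.Thread(target=producer, args=(q,))
t.start()
item = q.get()   # returns, because both sides use the same object
t.join()
print(item)
```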
|
Are there any benefits to using Python for GUI development?
Question: I was looking into Tkinter, but I was wondering if there are any other
packages that help with GUI. Even though Java is more robust than python,
python is faster to write and would be very helpful if I am just trying to
create a quick application. I usually use .Swing and .awt in java to make all
of my applications.
Also, is there something similar to JWS for Python? I am looking to add a link
on a website that will fire up a Python-coded application.
Thanks
(P.S. I'm using Python 3.2.2)
Answer: Yes, python and tkinter are excellent for developing GUIs. While this is true
of most scripting languages, I think these two make a particularly good
combination.
For most GUIs, functionality is more important than form (ie: eye candy is not
top priority). Even then, it's possible to make very nice looking GUIs with
Tkinter. Tk has been my go-to toolkit for close to twenty years. It is most
easily used with Tcl since that is its native language, but there's no denying
the advantage python has over Tcl as far as built-in functionality goes.
That being said, all of the toolkits available to a python programmer are
good. Tkinter's main advantages are that it is extremely easy to use and you
probably already have it installed, so the barrier for entry is very low.
|
GTK Label wrapping in a dialog
Question: I'm trying to create a **non-resizable dialog** with a label in it. This label
has a lot of text, so I want it to **wrap without making the dialog
ridiculously wide**.
For some reason, I can't find out what it takes to get GTK to allow this to
happen. I can't even find a way of setting a max-width on the dialog, which
would be great.
Here's an running example of what I mean:
#!/usr/bin/env python
#-*- coding:utf-8 -*-
from gi.repository import Gtk
class DialogExample(Gtk.Dialog):
def __init__(self, parent):
Gtk.Dialog.__init__(self, "My Dialog", parent, 0,
(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL,
Gtk.STOCK_OK, Gtk.ResponseType.OK))
self.set_default_size(150, 100)
self.set_resizable(False)
label = Gtk.Label("This is a dialog to display additional information, with a bunch of text in it just to make sure it will wrap enough for demonstration purposes")
label.set_line_wrap(True)
box = self.get_content_area()
box.add(label)
self.show_all()
class DialogWindow(Gtk.Window):
def __init__(self):
Gtk.Window.__init__(self, title="Dialog Example")
self.set_default_size(250, 200)
button = Gtk.Button("Open dialog")
button.connect("clicked", self.on_button_clicked)
self.add(button)
def on_button_clicked(self, widget):
dialog = DialogExample(self)
response = dialog.run()
if response == Gtk.ResponseType.OK:
print "The OK button was clicked"
elif response == Gtk.ResponseType.CANCEL:
print "The Cancel button was clicked"
dialog.destroy()
win = DialogWindow()
win.connect("delete-event", Gtk.main_quit)
win.show_all()
Gtk.main()
Answer: I solved this (besides setting the line wrap to True) by putting the Gtk.Label
inside a Gtk.Table, using the FILL and SHRINK flags and setting a fixed width
for the label. Something like this:
label = Gtk.Label("This is a dialog to display additional information, with a bunch of text in it just to make sure it will wrap enough for demonstration purposes")
label.set_line_wrap(True)
label.set_size_request(250, -1) # 250 or whatever width you want. -1 to keep height automatic
table = Gtk.Table(1, 1, False)
table.attach(label, 0, 1, 0, 1, Gtk.AttachOptions.SHRINK | Gtk.AttachOptions.FILL)
That should do the trick
|
Adding Colorbar and Contour Map to PyPlot surface imshow in Python
Question: I am using code written by a former employee. In this code, the person
performs a plot in Python using Qt. I'm trying to add a colorbar and contour
maps, but to no avail. Here's the part of the code that's relevant.
#IMPORTS
from matplotlib.figure import Figure
from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigureCanvas
#SETUP
fig = Figure()
axes = fig.add_subplot(111)
canvas = FigureCanvas(fig)
#DRAWING
axes.clear()
axes.imshow(data_and_extents, interpolation='bilinear')
axes.autoscale(False)
canvas.draw()
#TODO: Add colorbar and contour maps with some values
I have tried axes.colorbar() to no avail. How do I add colorbars and contour
maps to surface plots set up like this?
Thanks SOCommunity!
Answer: The import in the first line is actually fine:
from matplotlib.figure import Figure
Next: the colorbar is a figure method, so instead of `axes.colorbar` it's
`fig.colorbar`, and it wants the mappable returned by `imshow`. With your
variable names you should be able to create a colorbar with:
im = axes.imshow(data_and_extents, interpolation='bilinear')
cbar = fig.colorbar(im)
See [this
example](http://matplotlib.sourceforge.net/examples/pylab_examples/colorbar_tick_labelling_demo.html)
for more.
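Putting the pieces together, here is a minimal, self-contained sketch with made-up data; the Qt canvas in your setup renders exactly the same way, and the `Agg` backend is used here only so the sketch runs headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen backend, just for this standalone sketch
from matplotlib.figure import Figure

fig = Figure()
axes = fig.add_subplot(111)

data = np.random.rand(20, 20)                      # stand-in for data_and_extents
im = axes.imshow(data, interpolation='bilinear')   # keep the mappable around
cs = axes.contour(data, colors='k')                # contour lines drawn on top
axes.clabel(cs, inline=True, fontsize=8)           # label the contour levels
cbar = fig.colorbar(im)                            # colorbar needs the mappable
```

The key point is keeping the object returned by `imshow`: `fig.colorbar` operates on that mappable, not on the axes.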
|
Installing Swampy for python 3 in Windows
Question: I am a beginner Python learner using the book 'Think Python', where I have to
install a module named Swampy. The link provided for instructions and download has
a [tar.gz](http://pypi.python.org/pypi/swampy/2.1.1) file. I found the Python
3 version of swampy with a Google search
[here](http://code.google.com/p/swampy/downloads/detail?name=swampy.1.4.python3.zip&can=2&q=).
All [setup tools](http://pypi.python.org/pypi/setuptools#downloads) for
modules are under Python 3. I am pretty lost; how do I install/use the module?
Thanks
Answer: You don't have to install Python modules. Just `import` them. (In fact,
`swampy` is a package, which is basically a collection of modules.)
import swampy
Of course, Python has to know where to import them _from_. In this case, the
simplest thing for you to do is to invoke Python from the directory containing
the folder `swampy`, since the interpreter will first search for modules in
the current directory. You can equivalently `os.chdir` to the directory after
invoking Python from anywhere.
Don't worry about `setuptools` yet.
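If you would rather invoke Python from anywhere, you can extend the search path yourself. A throwaway demonstration (the package name `demopkg` and the temporary path are made up; substitute the directory that contains your unpacked `swampy` folder):

```python
import os
import sys
import tempfile

# Build a fake package on disk, standing in for the unpacked swampy folder.
parent = tempfile.mkdtemp()
pkg_dir = os.path.join(parent, "demopkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("version = '1.0'\n")

# Same effect as invoking Python from inside `parent`:
sys.path.insert(0, parent)
import demopkg

print(demopkg.version)  # 1.0
```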
|
Using sparse matrices/online learning in Naive Bayes (Python, scikit)
Question: I'm trying to do Naive Bayes on a dataset that has over 6,000,000 entries and
each entry has 150k features. I've tried to implement the code from the following
link: [Implementing Bag-of-Words Naive-Bayes classifier in
NLTK](http://stackoverflow.com/questions/10098533/implementing-bag-of-words-
naive-bayes-classifier-in-nltk)
The problem is (as I understand it) that when I try to run the train method with
a dok_matrix as its parameter, it cannot find iterkeys (I've paired the rows
with OrderedDict as labels):
Traceback (most recent call last):
File "skitest.py", line 96, in <module>
classif.train(add_label(matr, labels))
File "/usr/lib/pymodules/python2.6/nltk/classify/scikitlearn.py", line 92, in train
for f in fs.iterkeys():
File "/usr/lib/python2.6/dist-packages/scipy/sparse/csr.py", line 88, in __getattr__
return _cs_matrix.__getattr__(self, attr)
File "/usr/lib/python2.6/dist-packages/scipy/sparse/base.py", line 429, in __getattr__
raise AttributeError, attr + " not found"
AttributeError: iterkeys not found
My question is, is there a way to either avoid using a sparse matrix by
teaching the classifier entry by entry (online), or is there a sparse matrix
format I could use in this case efficiently instead of dok_matrix? Or am I
missing something obvious?
Thanks for anyone's time. :)
EDIT, 6th sep:
Found the iterkeys, so at least the code runs. It's still too slow, as it has
taken several hours with a dataset of size 32k, and still hasn't
finished. Here's what I've got at the moment:
matr = dok_matrix((6000000, 150000), dtype=float32)
labels = OrderedDict()
#collect the data into the matrix
pipeline = Pipeline([('nb', MultinomialNB())])
classif = SklearnClassifier(pipeline)
add_label = lambda lst, lab: [(lst.getrow(x).todok(), lab[x])
for x in xrange(lentweets-foldsize)]
classif.train(add_label(matr[:(lentweets-foldsize),0], labels))
readrow = [matr.getrow(x + foldsize).todok() for x in xrange(lentweets-foldsize)]
data = np.array(classif.batch_classify(readrow))
The problem might be that each row that is taken doesn't utilize the
sparseness of the vector, but goes through each of the 150k entries. As a
follow-up to the issue, does anyone know how to use this Naive Bayes
with sparse matrices, or is there any other way to optimize the above code?
Answer: Check out the [document classification example](http://scikit-
learn.org/stable/auto_examples/document_classification_20newsgroups.html#example-
document-classification-20newsgroups-py) in scikit-learn. The trick is to let
the library handle the feature extraction for you. Skip the NLTK wrapper, as
it's not intended for such large datasets.(*)
If you have the documents in text files, then you can just hand those text
files to the `TfidfVectorizer`, which creates a sparse matrix from them:
from sklearn.feature_extraction.text import TfidfVectorizer
vect = TfidfVectorizer(input='filename')
X = vect.fit_transform(list_of_filenames)
You now have a training set `X` in the CSR sparse matrix format, that you can
feed to a Naive Bayes classifier if you also have a list of labels `y`
(perhaps derived from the filenames, if you encoded the class in them):
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(X, y)
If it turns out this doesn't work because the set of documents is too large
(unlikely since the `TfidfVectorizer` was optimized for just this number of
documents), look at the [out-of-core document classification](http://scikit-
learn.org/stable/auto_examples/applications/plot_out_of_core_classification.html)
example, which demonstrates the `HashingVectorizer` and the `partial_fit` API
for minibatch learning. You'll need scikit-learn 0.14 for this to work.
(*) I know, because I wrote that wrapper. Like the rest of NLTK, it's intended
for educational purposes. I also worked on performance improvements in scikit-
learn, and some of the code I'm advertising is my own.
|
How to decrypt unsalted openssl compatible blowfish CBC/PKCS5Padding password in python?
Question: I've been looking for a python library to help decrypt an openssl blowfish
encrypted password.
I have managed to achieve this in Java but the python libraries to support
this appeared more of a learning curve, and required rolling your own.
In terms of what we need to achieve, the password is unsalted and uses a
passphrase, for the purposes of this question I've set this to "AAAAAAAA". The
Cipher is "Blowfish/CBC/PKCS5Padding". The encrypted text will be read in as a
string, same as the key and iv.
In openssl, this is 'simply':
~$ # This is encrypting
~$ echo -n 'password' | openssl enc -bf -nosalt -a -K AAAAAAAA -iv AAAAAAAA
eAIUXziwB8QbBexkiIDR3A==
~$ # This is reversing the encryption
~$ echo 'eAIUXziwB8QbBexkiIDR3A==' | openssl enc -d -bf -nosalt -a -K AAAAAAAA -iv AAAAAAAA
password
In Java, the decryption is along the lines of:
private static final String KEY = "AAAAAAAA000000000000000000000000";
private static final String IV = "AAAAAAAA00000000";
private static final String FCN = "Blowfish/CBC/PKCS5Padding";
private static final String CN = "Blowfish";
final byte[] encoded = Base64.decode("eAIUXziwB8QbBexkiIDR3A==");
final SecretKeySpec key =
new SecretKeySpec(Hex.decodeHex(KEY.toCharArray()), CN);
final Cipher cipher = Cipher.getInstance(FCN, JCE_PROVIDER);
cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(Hex.decodeHex(IV.toCharArray())));
final byte[] decrypted = cipher.doFinal(encoded);
return new String(decrypted);
Can someone provide some guidance for python?
Answer: Decoding hexadecimal and base64 encoded strings is built-in:
In [1]: "AAAAAAAA000000000000000000000000".decode('hex')
Out[1]: '\xaa\xaa\xaa\xaa\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
In [2]: "eAIUXziwB8QbBexkiIDR3A==".decode('base64')
Out[2]: 'x\x02\x14_8\xb0\x07\xc4\x1b\x05\xecd\x88\x80\xd1\xdc'
The [PyCrypto](https://www.dlitz.net/software/pycrypto/) library handles
[BlowFish](https://www.dlitz.net/software/pycrypto/api/current/Crypto.Cipher.Blowfish-
module.html) (among others).
In [1]: from Crypto.Cipher import Blowfish
In [2]: KEY = "AAAAAAAA000000000000000000000000".decode('hex')
In [3]: IV = "AAAAAAAA00000000".decode('hex')
In [4]: cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, IV)
In [5]: ciphertext = "eAIUXziwB8QbBexkiIDR3A==".decode('base64')
In [6]: cipher.decrypt(ciphertext)
Out[6]: 'password\x08\x08\x08\x08\x08\x08\x08\x08'
If you want to strip off the padding from the plaintext in one go:
In [14]: cipher.decrypt(ciphertext).replace('\x08', '')
Out[14]: 'password'
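A caveat on that last step: `replace('\x08', '')` happens to work here because the padding is eight `\x08` bytes, but it would also delete genuine backspace bytes anywhere in the plaintext. Since PKCS#5 stores the pad length in the final byte, a safer strip is a small hand-rolled helper (not part of PyCrypto):

```python
def pkcs5_unpad(data):
    """Strip PKCS#5/PKCS#7 padding: the last byte encodes the pad length."""
    pad_len = data[-1] if isinstance(data[-1], int) else ord(data[-1])
    if not 1 <= pad_len <= 8:  # Blowfish has an 8-byte block size
        raise ValueError("invalid padding")
    return data[:-pad_len]

print(pkcs5_unpad(b'password\x08\x08\x08\x08\x08\x08\x08\x08'))  # b'password'
```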
|
matplotlib: (re-)plot ping standard-input data. Doable?
Question: I recently started tinkering with Python and the matplotlib module (1.1.0)
shipped with the Enthought Python Distribution (the free version). I thought of
an interesting project I could do and came up with something like this:
* get ping of an internet address
* pipe that via sys.stdin into a Python script
now in the python script:
* regex the answer time, if there is no answer time: use NaN or just 0 as number
* plot data via matplotlib
* continuously add data
I managed to get the ping via `ping stackoverflow.com | python script.py`, and getting the answer time wasn't particularly hard. But when it comes to plotting the data, I am stuck. I know there is an animation module inside matplotlib, but I think a timer-based plot would be harder to program than this, and I don't know how events are used anyway. What I want:
* Wait for sys.stdin to get a new string and thus the ping time
* Add it to the data array
* Plot the data array
But it doesn't seem to be that easy. Besides this, error handling is not yet done. Unfortunately I couldn't find any comparable code, although I did a lot of googling about it :/ Maybe this design is not meant to be like this.
Does anybody have an idea how to accomplish this replotting? It doesn't need
to be efficient, as the ping only comes in every second or so. I thought
about buffering the incoming stdin and doing a regular timer-based plot, but I
have no idea how to do that.
Thank you in advance,
Jonas
# Update1:
I can get rid of this error by using:
l1.set_xdata(range(len(data)))
before the l1.set_ydata(..), but still it doesn't plot anything and the window
doesn't respond either. At least it shows the plot axis.
# Code
import sys
import re
import numpy as np
import matplotlib.pyplot as plt
def main():
if sys.stdin.isatty():
print "Please use a pipe as stdin\nExample: ping stackoverflow.com | python script.py"
return 0
regex = re.compile(r'time=(\d+\.\d+)')
data = []
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_yscale('linear') # or log
ax.grid(True)
l1, = ax.plot(data)
fig.show()
while True:
#get ping, if no string: stream has ended
line = sys.stdin.readline()
if line == '':
break
#conversion: 64 bytes from 127.0.0.1: icmp_seq=0 ttl=45 time=100.873 ms --> 100.873
match = regex.findall(line)
number = 0.
if len(match) > 1:
raise ValueError()
if len(match) == 1:
try:
number = float(match[0])
except ValueError as e:
print e
#add number to array, plot the data
data.append(number)
l1.set_xdata(range(len(data)))
l1.set_ydata(data)
ax.relim()
ax.autoscale()
plt.draw()
fig.canvas.flush_events()
return 0
if __name__ == '__main__':
sys.exit(main())
Answer: Where you first have `plt.show()`, do not use that. Instead use `fig.show()`
there. And then just `plt.draw()` in your loop. The rest seems good. The thing
is that `plt.show()` starts a mainloop, locking further execution (outside the
events for the figure). `fig.show()` seems not to, however the buttons
probably will not work like this (I didn't try).
You will have to add autoscale though:
ax.relim()
ax.autoscale()
plt.draw()
# And this allows you to at least close the window (and crash the program by that ;))
fig.canvas.flush_events()
But really it should be cleaner not use pyplot for this. For that look into
how to [embed
matplotlib](http://matplotlib.sourceforge.net/examples/user_interfaces/index.html)
into a gui, it is really easy and can be done for a couple of different gui
toolkits.
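Independent of the plotting, the parsing half of the script is easy to sanity-check on its own (the sample line below assumes typical `ping` output; note the escaped dot in the pattern):

```python
import re

# Escape the dot, otherwise `.` matches any character.
regex = re.compile(r'time=(\d+\.\d+)')

line = "64 bytes from 127.0.0.1: icmp_seq=0 ttl=45 time=100.873 ms"
match = regex.findall(line)
number = float(match[0]) if match else float('nan')  # NaN when no reply
print(number)  # 100.873
```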
|
Python: importing a sub‑package or sub‑module
Question: Having already use flat packages, I was not expecting the issue I encountered
with nested packages. Here is…
# Directory layout
dir
|
+-- test.py
|
+-- package
|
+-- __init__.py
|
+-- subpackage
|
+-- __init__.py
|
+-- module.py
# Content of **init**.py
Both `package/__init__.py` and `package/subpackage/__init__.py` are empty.
# Content of `module.py`
# file `package/subpackage/module.py`
attribute1 = "value 1"
attribute2 = "value 2"
attribute3 = "value 3"
# and as many more as you want...
# Content of `test.py` (3 versions)
## Version 1
# file test.py
from package.subpackage.module import *
print attribute1 # OK
That's the bad and unsafe way of importing things (importing everything in bulk), but
it works.
## Version 2
# file test.py
import package.subpackage.module
from package.subpackage import module # Alternative
from module import attribute1
A safer way to import, item by item, but it fails; Python doesn't want this and
fails with the message: "No module named module". However …
# file test.py
import package.subpackage.module
from package.subpackage import module # Alternative
print module # Surprise here
… says `<module 'package.subpackage.module' from '...'>`. So that's a module,
but that's not a module /-P 8-O ... uh
## Version 3
# file test.py v3
from package.subpackage.module import attribute1
print attribute1 # OK
This one works. So you are either forced to use the overkill prefix all the
time, or to use the unsafe way as in version #1, while Python disallows
the safe, handy way? The better way, which is safe and avoids unnecessarily long
prefixes, is the only one which Python rejects? Is this because it loves `import
*` or because it loves overlong prefixes (which does not help to enforce this
practice)?
Sorry for the hard words, but I've spent two days trying to work around this
stupid-looking behavior. Unless I am totally wrong somewhere, this leaves me
with the feeling that something is really broken in Python's model of packages
and sub-packages.
Notes
* I don't want to rely on `sys.path`, to avoid global side effects, nor on `*.pth` files, which are just another way to play with `sys.path` with the same global effects. For the solution to be clean, it has to be local only. Either Python is able to handle subpackages, or it's not, but it should not require playing with global configuration to be able to handle local stuff.
* I also tried using imports in `package/subpackage/__init__.py`, but it solved nothing; it does the same and complains that `subpackage` is not a known module, while `print subpackage` says it's a module (weird behavior, again).
Maybe I'm entirely wrong, though (the option I would prefer), but this makes me
feel quite disappointed about Python.
Any other known way beside of the three I tried? Something I don't know about?
(sigh)
\----- %< \----- edit ----- >% -----
# Conclusion so far (after people's comments)
There is nothing like a real sub-package in Python, as all package references
go to a global dictionary only, which means there is no local dictionary,
which implies there is no way to manage local package references.
You have to use either a full prefix, a short prefix, or an alias. As in:
## Full prefix version
from package.subpackage.module import attribute1
# And repeat it again and again
# But after that, you can simply:
use_of (attribute1)
## Short prefix version (but repeated prefix)
from package.subpackage import module
# Short but then you have to do:
use_of (module.attribute1)
# and repeat the prefix at every use place
Or else, a variation of the above.
from package.subpackage import module as m
use_of (m.attribute1)
# `m` is a shorter prefix, but you could as well
# define a more meaningful name after the context
# Factorized version
If you don't mind importing multiple entities all at once in a batch, you
can:
from package.subpackage.module import attribute1, attribute2
# and etc.
Not my first favorite taste (I prefer to have one import statement per
imported entity), but it may be the one I will personally favor.
_Update (2012-09-14):_
This finally appears to be OK in practice, except for a comment about the layout.
Instead of the above, I used:
from package.subpackage.module import (
attribute1,
attribute2,
attribute3,
...) # and etc.
Answer: You seem to be misunderstanding how `import` searches for modules. When you
use an import statement it _always_ searches the actual module path (and/or
`sys.modules`); it doesn't make use of module _objects_ in the local namespace
that exist because of previous imports. When you do:
import package.subpackage.module
from package.subpackage import module
from module import attribute1
The second line looks for a package called `package.subpackage` and imports
`module` from that package. This line has no effect on the third line. The
third line just looks for a module called `module` and doesn't find one. It
doesn't "re-use" the object called `module` that you got from the line above.
In other words `from someModule import ...` doesn't mean "from the module
called someModule that I imported earlier..." it means "from the module named
someModule that you find on sys.path...". There is no way to "incrementally"
build up a module's path by importing the packages that lead to it. You always
have to refer to the entire module name when importing.
It's not clear what you're trying to achieve. If you only want to import the
particular object attribute1, just do `from package.subpackage.module import
attribute1` and be done with it. You need never worry about the long
`package.subpackage.module` once you've imported the name you want from it.
If you _do_ want to have access to the module to access other names later,
then you can do `from package.subpackage import module` and, as you've seen
you can then do `module.attribute1` and so on as much as you like.
If you want _both_, that is, if you want `attribute1` directly accessible
_and_ you want `module` accessible, just do both of the above:
from package.subpackage import module
from package.subpackage.module import attribute1
attribute1 # works
module.someOtherAttribute # also works
If you don't like typing `package.subpackage` even twice, you can just
manually create a local reference to attribute1:
from package.subpackage import module
attribute1 = module.attribute1
attribute1 # works
module.someOtherAttribute #also works
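All of this is easy to verify with a throwaway copy of the question's layout, built under a temporary directory (the paths and attribute values are the question's own):

```python
import os
import sys
import tempfile

# Recreate the question's layout: package/subpackage/module.py
root = tempfile.mkdtemp()
sub = os.path.join(root, "package", "subpackage")
os.makedirs(sub)
for d in (os.path.join(root, "package"), sub):
    open(os.path.join(d, "__init__.py"), "w").close()  # empty __init__.py
with open(os.path.join(sub, "module.py"), "w") as f:
    f.write("attribute1 = 'value 1'\n")

sys.path.insert(0, root)
from package.subpackage import module             # binds the module object
from package.subpackage.module import attribute1  # binds the attribute itself

print(module.attribute1 == attribute1)  # True
```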
|
"AttributeError: 'module' object has no attribute 'argv'" when using Python.h
Question: When messing around with Python.h I got this error:
AttributeError: 'module' object has no attribute 'argv'
C++ code:
#include "stdafx.h"
#include "C:/Python27/include/Python.h"
#include <iostream>
using namespace std;
int main()
{
Py_Initialize();
PyRun_SimpleString("import sys\nprint sys.argv[0]");
}
Which in Python is:
import sys
print sys.argv[0]
What am I missing?
Answer: Conceptually, `sys.argv` should contain the arguments that Python was called
with (and what it was called under). What should it have if it were called
like this, though?
You can load the calling program's `argv` into `sys`, if you want:
int main(int argc, char **argv)
{
Py_Initialize();
PySys_SetArgv(argc, argv);
PyRun_SimpleString("import sys\nprint sys.argv");
}
gives
localhost-2:argv $ ./a.out
['./a.out']
localhost-2:argv $ ./a.out arg0 17
['./a.out', 'arg0', '17']
See also `Py_SetProgramName`.
|
How to use python and windows com in pyramid (threads)?
Question: I'm connecting to MS Word via the win32com.client.Dispatch function, and on the
second request I get the error: 'CoInitialize has not been called.' I've found
this topic:
<http://www.velocityreviews.com/forums/t328097-coinitialize-error-when-using-
adodbapi.html>
and learned that I have to call CoInitialize in the new thread. So the question is
where to call the CoInitialize function in Pyramid.
Answer: So I added a call to CoInitialize in this function:
@subscriber(NewRequest)
def new_request_subscriber(event):
import pythoncom
pythoncom.CoInitialize()
and it works without exceptions.
|
Getting started with speech recognition and python
Question: I would like to know where one could get started with speech recognition: not
with a library or anything that is fairly "black-boxed", but instead, I want
to know where I can actually make a simple speech recognition script. I have
done some searching and found not much, but what I have seen is that there
are dictionaries of 'sounds' or syllables that can be pieced together to form
text. So basically my question is: where can I get started with this?
Also, since this is a little optimistic, I would also be fine with a library
(for now) to use in my program. I saw that some speech-to-text libraries and
APIs spit out only one result. This is OK, but it would be unreliable. My
current program already checks the grammar and everything of any text entered,
so if I were to have, say, the top ten results from the speech-to-text
software, then it could check each and rule out any that don't make sense.
Answer: ## UPDATE: this is not working anymore, because Google closed the platform.
you can use <https://pypi.python.org/pypi/pygsr>
$> pip install pygsr
example usage:
from pygsr import Pygsr
speech = Pygsr()
# duration in seconds
speech.record(3)
# select the language
phrase, complete_response = speech.speech_to_text('en_US')
print phrase
|
What is the meaning of the letters in the output from struct.pack?
Question: When I pack a number using Python's struct module,
>>> import struct
>>> struct.pack("i",89)
'Y\x00\x00\x00'
>>> struct.pack("i",890)
'z\x03\x00\x00'
>>> struct.pack("i",1890)
'b\x07\x00\x00'
what is the meaning of the "Y", "z", and "b" in the output?
Answer: You're not converting to hex. You're packing the integer as binary data... in
this case, little-endian binary data. The first characters are just the
corresponding ASCII characters to the raw bytes; e.g. `89` is `Y`, `122` is
`z`, and `98` is `b`.
* The first `pack` produces `'\x59\x00\x00\x00'` for `0x00000059`; `'\x59'` is `'Y'`.
* The second produces `'\x7a\x03\x00\x00'` for `0x0000037a`; `'\x7a'` is `'z'`.
* The third produces `'\x62\x07\x00\x00'` for `0x00000762`; `'\x62'` is `'b'`.
See the below ASCII table.

|
Run Python script from C# with command line params
Question: I need to execute Python script from C# using IronPython and pass into it some
command line parameters. Is there any way to do it?
My sample code:
var e = Python.CreateEngine();
var source = e.CreateScriptSourceFromFile(@"...");
source.Execute();
Answer: Sure. When you create your engine, call the overload that takes in additional
options and set the `"Arguments"` to your array of strings.
var options = new Dictionary<string, object>
{
{ "Arguments", new[] { "foo", "bar", "biz baz" } },
};
var engine = Python.CreateEngine(options);
var script = @"
import sys
print sys.argv # ['foo', 'bar', 'biz baz']
";
engine.Execute(script);
|
joining two numpy matrices
Question: If you have two numpy matrices, how can you join them together into one? They
should be joined horizontally, so that
[[0] [1] [[0][1]
[1] + [0] = [1][0]
[4] [1] [4][1]
[0]] [1]] [0][1]]
For example, with these matrices:
>>type(X)
>>type(Y)
>>X.shape
>>Y.shape
<class 'numpy.matrixlib.defmatrix.matrix'>
<class 'numpy.matrixlib.defmatrix.matrix'>
(53, 1)
(53, 1)
I have tried hstack but get an error:
>>Z = hstack([X,Y])
Traceback (most recent call last):
File "labels.py", line 85, in <module>
Z = hstack([X, Y])
File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 263, in h
stack
return bmat([blocks], format=format, dtype=dtype)
File "C:\Python27\lib\site-packages\scipy\sparse\construct.py", line 329, in b
mat
raise ValueError('blocks must have rank 2')
ValueError: blocks must have rank 2
Answer: Judging from the traceback, it seems like you've done `from scipy.sparse
import *` or something similar, so that `numpy.hstack` is shadowed by
`scipy.sparse.hstack`. `numpy.hstack` works fine:
>>> X = np.matrix([[0, 1, 4, 0]]).T
>>> Y = np.matrix([[1, 0, 1, 1]]).T
>>> np.hstack([X, Y])
matrix([[0, 1],
[1, 0],
[4, 1],
[0, 1]])
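A fully qualified call cannot be shadowed by a later star-import, which is one more reason to prefer `import numpy as np` over `from numpy import *` (or the scipy equivalent):

```python
import numpy as np

X = np.matrix([[0, 1, 4, 0]]).T
Y = np.matrix([[1, 0, 1, 1]]).T
Z = np.hstack([X, Y])  # the numpy version, addressed explicitly
print(Z.shape)         # (4, 2)
```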
|
Python - Sending Packets issuing error - Minecraft Packets
Question: **I'm using the following script:**
import socket
import struct
username = "username_value"
verification_key = "verification_key"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # boilerplate
s.connect(("example.com", 1234)) # adjust accordingly
# now for the packet
# note that the String type is specified as having a length of 64, we'll pad that
packet = ""
packet += struct.pack("B", 1) # packet type
packet += struct.pack("B", 7) # protocol version
packet += "%-64s" % username # magic!
packet += "%-64s" % verification_key
packet += struct.pack("B", 0) # that unused byte, assuming a NULL byte here
# send what we've crafted
s.send(packet)
**and getting the following error:**
packet += struct.pack("B", 1) # packet type
TypeError: Can't convert 'bytes' object to str implicitly
**I am almost brand-new to Python, and just started, but I understand the
language. I read up and found something about Python 3 changing the way you
work with strings and bytes. I feel kind of hopeless. Help? Thank you**
Answer: In Python 3 you have to explicitly define your `packet` variable as a bytes object
packet = b""
instead of `packet = ""`
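A sketch of the whole construction ported to Python 3: every piece appended to `packet` must itself be `bytes`, so the padded strings are encoded first (the field widths and protocol values are taken from the question's script):

```python
import struct

username = "username_value"
verification_key = "verification_key"

packet = b""
packet += struct.pack("B", 1)                            # packet type
packet += struct.pack("B", 7)                            # protocol version
packet += ("%-64s" % username).encode("ascii")           # pad to 64 chars, then encode
packet += ("%-64s" % verification_key).encode("ascii")
packet += struct.pack("B", 0)                            # unused trailing byte

print(len(packet))  # 131 = 1 + 1 + 64 + 64 + 1
```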
|
Parse birth and death dates from Wikipedia?
Question: I'm trying to write a python program that can search wikipedia for the birth
and death dates for people.
For example, Albert Einstein was born: 14 March 1879; died: 18 April 1955.
I started with [Fetch a Wikipedia article with
Python](http://stackoverflow.com/questions/120061/fetch-a-wikipedia-article-
with-python)
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
infile = opener.open('http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&rvsection=0&titles=Albert_Einstein&format=xml')
page2 = infile.read()
This works as far as it goes. `page2` is the XML representation of the section
from Albert Einstein's Wikipedia page.
And I looked at this tutorial, now that I have the page in xml format...
<http://www.travisglines.com/web-coding/python-xml-parser-tutorial>, but I
don't understand how to get the information I want (birth and death dates) out
of the xml. I feel like I must be close, and yet, I have no idea how to
proceed from here.
**EDIT**
After a few responses, I've installed BeautifulSoup. I'm now at the stage
where I can print:
import BeautifulSoup as BS
soup = BS.BeautifulSoup(page2)
print soup.getText()
{{Infobox scientist
| name = Albert Einstein
| image = Einstein 1921 portrait2.jpg
| caption = Albert Einstein in 1921
| birth_date = {{Birth date|df=yes|1879|3|14}}
| birth_place = [[Ulm]], [[Kingdom of Württemberg]], [[German Empire]]
| death_date = {{Death date and age|df=yes|1955|4|18|1879|3|14}}
| death_place = [[Princeton, New Jersey|Princeton]], New Jersey, United States
| spouse = [[Mileva Marić]]&nbsp;(1903–1919)<br>{{nowrap|[[Elsa Löwenthal]]&nbsp;(1919–1936)}}
| residence = Germany, Italy, Switzerland, Austria, Belgium, United Kingdom, United States
| citizenship = {{Plainlist|
* [[Kingdom of Württemberg|Württemberg/Germany]] (1879–1896)
* [[Statelessness|Stateless]] (1896–1901)
* [[Switzerland]] (1901–1955)
* [[Austria–Hungary|Austria]] (1911–1912)
* [[German Empire|Germany]] (1914–1933)
* United States (1940–1955)
}}
So, much closer, but I still don't know how to return the death_date in this
format. Unless I start parsing things with `re`? I can do that, but I feel
like I'd be using the wrong tool for this job.
Answer: You can consider using a library such as
[BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) or
[lxml](http://lxml.de) to parse the response html/xml.
You may also want to take a look at [`Requests`](http://docs.python-
requests.org/en/latest/index.html), which has a much cleaner API for making
requests.
* * *
Here is the working code using `Requests`, `BeautifulSoup` and `re`, arguably
not the best solution here, but it is quite flexible and can be extended for
similar problems:
import re
import requests
from bs4 import BeautifulSoup
url = 'http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&rvsection=0&titles=Albert_Einstein&format=xml'
res = requests.get(url)
soup = BeautifulSoup(res.text, "xml")
birth_re = re.search(r'(Birth date(.*?)}})', soup.revisions.getText())
birth_data = birth_re.group(0).split('|')
birth_year = birth_data[2]
birth_month = birth_data[3]
birth_day = birth_data[4]
death_re = re.search(r'(Death date(.*?)}})', soup.revisions.getText())
death_data = death_re.group(0).split('|')
death_year = death_data[2]
death_month = death_data[3]
death_day = death_data[4]
* * *
Per @JBernardo's suggestion using JSON data and `mwparserfromhell`, a better
answer for this particular use case:
import requests
import mwparserfromhell
url = 'http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&rvsection=0&titles=Albert_Einstein&format=json'
res = requests.get(url)
text = res.json["query"]["pages"].values()[0]["revisions"][0]["*"]
wiki = mwparserfromhell.parse(text)
birth_data = wiki.filter_templates(matches="Birth date")[0]
birth_year = birth_data.get(1).value
birth_month = birth_data.get(2).value
birth_day = birth_data.get(3).value
death_data = wiki.filter_templates(matches="Death date")[0]
death_year = death_data.get(1).value
death_month = death_data.get(2).value
death_day = death_data.get(3).value
|
What's the pythonic way to wrap several functions in the same with statements
Question: I am using the Python library,
[Fabric](http://docs.fabfile.org/en/1.4.3/index.html), to do some remote
server maintenance. Fabric automatically outputs all of the responses to
remote and local commands unless you wrap the command in a couple with
statements. Like so, on a local machine,
with settings(warn_only='true'):
with hide('running', 'stdout', 'stderr', 'warnings'):
output = local("uname -a", True)
or like this on a remote machine:
with settings(warn_only='true'):
with hide('running', 'stdout', 'stderr', 'warnings'):
output = run("uname -a")
I am writing a long and complex task and find myself repeating those two `with`
statements over and over again. I want to write a function called _mute() to
prevent that repetition. It would let me do something like this:
def _mute(fabric_cmd, args):
with settings(warn_only='true'):
with hide('running', 'stdout', 'stderr', 'warnings'):
output = fabric_cmd(args)
return output
def some_remote_task():
# Run a remote task silently
_mute(remote, 'uname -a')
def some_local_task():
# Run a local task silently
_mute(local, 'uname -a', True)
I've looked into some solutions and know that "eval" could do this for me. But
every page I read about eval suggests that it's almost always a bad idea
because of security issues. I looked into partials, but I couldn't figure out
how to make an argument in my _mute function callable. I'm guessing there's a
higher level Python concept I'm missing here. What's the pythonic way to go
about doing this? Thanks for any direction you might be able to provide.
Answer: The better solution would be for you to build your own context manager; by far
the easiest way would be to use the [`contextlib.contextmanager`
decorator](http://docs.python.org/library/contextlib.html#contextlib.contextmanager):
from contextlib import contextmanager
@contextmanager
def _mute():
with settings(warn_only='true'):
with hide('running', 'stdout', 'stderr', 'warnings'):
yield
Then use `_mute` as a context manager:
def some_remote_task():
# Run a remote task silently
with _mute():
output = remote("uname -a")
This is a lot more compact and readable than having to retype the two larger
context manager lines and has the _added_ advantage that now you can run
multiple commands in that same context.
As for your question; you can easily apply arbitrary arguments to a given
function using the `*args` syntax:
def _mute(fabric_cmd, *args):
with settings(warn_only='true'):
with hide('running', 'stdout', 'stderr', 'warnings'):
return fabric_cmd(*args)
def some_remote_task():
# Run a remote task silently
output = _mute(remote, 'uname -a')
See [*args and **kwargs?](http://stackoverflow.com/questions/3394835/args-and-
kwargs) for more information on the `*args` arbitrary argument lists tricks.
|
django import local settings from the server
Question: In my home directory, I have a folder called `local`. In it, there are files
called `__init__.py` and `local_settings.py`. My django app is in a completely
different directory. When the app is **NOT** running in DEBUG mode, I want it
to load the `local_settings.py` file. How can this be achieved? I read the
below:
[python: import a module from a
folder](http://stackoverflow.com/questions/279237/python-import-a-module-from-
a-folder)
[Importing files from different folder in
Python](http://stackoverflow.com/questions/4383571/importing-files-from-
different-folder-in-python)
<http://docs.python.org/tutorial/modules.html>
Basically, those tutorials show how to import from another directory, but
what about a completely different working tree? I don't want to keep doing ..,
.., .. etc. Is there a way to go to the home directory?
I tried the following code:
import os, sys
os.chdir(os.path.join(os.getenv("HOME"), 'local'))
from local_settings import *
But i keep seeing errors in my apache error.log for it...
Answer: `os.chdir` just affects the current working directory, which has nothing
whatsoever to do with where Python imports modules from.
What you need to do is to add the `local` directory to the Python path. You
can either do this from the shell by modifying `PYTHONPATH`, or from inside
Python by modifying `sys.path`:
    import os
    import sys
    sys.path.append(os.path.expanduser("~/local"))
    import local_settings
|
Correct usage of multithreading and Queue module in data collection application written in Python
Question: I am working on collecting data from several devices, and since the tests are
long duration, I want to use Python's `threading` and `Queue` modules. I've
written a short script to figure out how to use these, and it is very evident
I don't understand the nuances of getting this to work.
Here is my script:
import ue9
import LJ_Util
import DAQ_Util
import threading
import Queue
from datetime import datetime
from time import sleep
queue = Queue.Queue()
now = datetime.now().isoformat()
def DAQThread(ue9ipAddr):
print '\nExecuting in DAQThread at IP Address: %s' % ue9ipAddr
a = ue9.UE9(ethernet=True, ipAddress=ue9ipAddr)
SN = (a.commConfig()).get('SerialNumber')
count = 5
while count > 0:
reading = a.feedback()
dataReturn = (SN, now, reading)
queue.put(dataReturn)
count -= 1
print count
sleep(5)
def listenThread(counter):
while queue.empty() != True:
try:
outcome = queue.get()
print outcome
counter -=1
print counter
except:
return 'queue.get() command loop failing.'
print "\nOpening device..."
ipAdd = '192.168.1.127'
feedbackThread = threading.Thread(target=DAQThread, args=(ipAdd,))
feedbackThread.start()
print "\nListening for data..."
queryThread = threading.Thread(target=listenThread, args = (10,))
queryThread.start()
print queue.get()
print(threading.activeCount())
print "\nDone"
Here is the output from executing this script:
$ python threading-and-queue-test.py
Opening device...
Executing in DAQThread at IP Address: 192.168.1.127
Listening for data...
4
(278956853, '2012-09-03T20:02:47.656024', {'AIN4': -0.012, 'AIN5': -0.012, 'CIODir': 0, 'AIN7': -0.012, 'EIODir': 0, 'AIN1': -0.012, 'AIN2': -0.012, 'AIN3': -0.012, 'MIOState': 7, 'AIN8': -0.012, 'AIN6': -0.012, 'AIN9': -0.012, 'CIOState': 15, 'AIN0': -0.012, 'Counter0': 0, 'Counter1': 0, 'EIOState': 255, 'TimerC': 0, 'TimerB': 0, 'TimerA': 0, 'MIODir': 0, 'FIODir': 32, 'AIN14': -0.012, 'AIN15': -0.012, 'AIN12': -0.012, 'AIN13': -0.012, 'AIN10': -0.012, 'AIN11': -0.012, 'FIOState': 255})
2
Done
3
2
1
0
$
It's clear the timing of the thread activities is 'off', but I'm not sure how
to fix it, as I've never programmed with these modules before, nor used
threads in general. Any comments or suggestions would be welcome. Thanks in
advance!
Answer: As mentioned in the comment, one problem is in your listening thread. Once you
"grab" an entry from the queue there are no longer any entries in it, and since
you are only adding an entry every 5 seconds, your listening thread will empty
the queue. At that point
`while queue.empty() != True` will evaluate False, exiting the loop
For example:
>>> import Queue
>>> q=Queue.Queue()
>>> q.put(1)
>>> q.empty()
False
>>> q.get()
1
>>> q.empty()
True
>>> q.empty()!=True
False
One way to get around this is to use another queue as a stop or cancel queue;
modifying your listening thread, you could do something like this:
stopQue=Queue.Queue()
def listenThread(counter):
while True:
if queue.empty()!=True:
outcome = queue.get()
print outcome
counter -=1
print counter
if stopQue.empty()!=True:
break
print 'Exiting Listening Thread'
That way if you put anything in the stopQue, i.e. `stopQue.put(1)`, it should
exit.
Complete example based on your initial code. I cut out the code that does not
have to do with the Queues and threading:
import threading
import Queue
from time import sleep
dataQue = Queue.Queue()
stopQue = Queue.Queue()
def DAQThread(ue9ipAddr):
print 'Executing in DAQThread\n'
count = 5
while count > 0:
dataQue.put('data: %s' % count)
count -= 1
sleep(5)
stopQue.put(1)
print 'Exiting DAQThread\n'
def listenThread(counter):
while True:
if dataQue.empty() != True:
outcome = dataQue.get()
print outcome
counter -=1
if stopQue.empty() != True:
break
print 'Exiting Listening Thread'
print "Opening device..."
ipAdd = '192.168.1.127'
feedbackThread = threading.Thread(target=DAQThread, args=(ipAdd,))
feedbackThread.setDaemon(True)
feedbackThread.start()
print "Listening for data..."
queryThread = threading.Thread(target=listenThread, args = (10,))
queryThread.setDaemon(True)
queryThread.start()
print "Done"
Produces the output:
>>>
Opening device...
Executing in DAQThread
Listening for data...
Donedata: 5 #Notice how the script is done however the threads are still running
>>> data: 4
data: 3
data: 2
data: 1
Exiting DAQThread
Exiting Listening Thread
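An alternative to the separate stop queue used above is to push a sentinel value through the data queue itself; the consumer then blocks on `get()` instead of polling `empty()`. A runnable sketch (using the Python 3 module name `queue`; it is spelled `Queue` on Python 2):

```python
import queue      # the module is named Queue on Python 2
import threading

SENTINEL = object()          # unique marker meaning "no more data"

def producer(q):
    for i in range(5):
        q.put('data: %s' % i)
    q.put(SENTINEL)          # tell the consumer to stop

def consumer(q, out):
    while True:
        item = q.get()       # blocks until data arrives; no busy-waiting
        if item is SENTINEL:
            break
        out.append(item)

q = queue.Queue()
out = []
t = threading.Thread(target=producer, args=(q,))
t.start()
consumer(q, out)
t.join()
print(out)
```

Because `get()` blocks, the consumer never sees a transiently empty queue and exits only when the sentinel arrives.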
|
How to assure applying order of function decorators in Python?
Question: Some decorators should only be used in the outermost layer.
A decorator that augments the original function and add a configure parameter
is one example.
from functools import wraps
def special_case(f):
@wraps(f)
def _(a, b, config_x=False):
if config_x:
print "Special case here"
return
            return f(a, b)
        return _
How can I avoid decorators like this getting decorated by another decorator?
**EDIT**
It is really disgusting to let everyone trying to apply a new decorator worry
about the application order.
So, is it possible to avoid this kind of situation? Is it possible to add a
config option without introducing a new parameter?
Answer: There isn't any way to stop it from being decorated. You just have to document
that it needs to be applied last and tell people not to use it inside another
decorator.
Edit responding to your edit: In Python 3 you can give your function a
[keyword-only argument](http://www.python.org/dev/peps/pep-3102/). This
drastically reduces the impact that the change will have on existing uses of
the function. Unfortunately this only works in Python 3.
Ultimately, applying a decorator to a function just means passing the
decorated function as an argument to another function. There's no way for a
function (or any object) to even know _that_ it's being passed as an argument,
let alone what it's being passed to. The reason you can't know about later
decorators is the same reason that in an ordinary function call like
`f(g(x))`, the function `g` can't know that it will later be called by `f`.
This is one reason writing decorators is tricky. Code that relies on heavy use
of decorators that pass explicit arguments to their wrapped functions (as
yours passes `a` and `b`) is inherently going to be fragile. Fortunately, a
lot of the time you can write a decorator that uses `*args` and `**kwargs` so
it can pass all the arguments it doesn't use along to the decorated function.
If someone takes the code you provide, and writes another decorator that
explicitly accepts only `a` and `b` as arguments, and then calls the decorated
function as `f(a, b, True)`, it's their own fault if it fails. They should
have known that other decorators they used might have changed the function
signature.
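A minimal sketch of the Python 3 keyword-only approach mentioned above (the `add` function and its arguments are illustrative, not from the original code):

```python
from functools import wraps

def special_case(f):
    @wraps(f)
    def wrapper(*args, config_x=False, **kwargs):
        # config_x is keyword-only here, so positional arguments always
        # pass through to f untouched, whatever f's signature is.
        if config_x:
            print("Special case here")
            return None
        return f(*args, **kwargs)
    return wrapper

@special_case
def add(a, b):
    return a + b

print(add(2, 3))                 # the normal path
print(add(2, 3, config_x=True))  # triggers the special case
```

Since `config_x` can never be filled positionally, existing positional call sites keep working even if another decorator is layered on top.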
|
Program returns an "IndexError: string index out of range."
Question:
from random import shuffle
alphabet="abcdefghijklmnopqrstuvwxyz"
def substitution(alphabet,plaintext):
# Create array to use to randomise alphabet position
randarray=range(0,len(alphabet))
shuffle(randarray)
key="Zebra"
#Create our substitution dictionary
dic={}
for i in range(0,len(alphabet)):
key+=alphabet[randarray[i]]
dic[alphabet[i]]=alphabet[randarray[i]]
#Convert each letter of plaintext to the corrsponding
#encrypted letter in our dictionary creating the cryptext
ciphertext=""
for l in plaintext:
if l in dic:
l=dic[l]
ciphertext+=l
for i in alphabet:
print i,
print
for i in key:
print i,
print
return ciphertext,key
# This function decodes the ciphertext using the key and creating
# the reverse of the dictionary created in substitution to retrieve
# the plaintext again
def decode(alphabet,ciphertext,key):
dic={}
for i in range(0,len(key)):
dic[key[i]]=alphabet[i]
plaintext=""
for l in ciphertext:
if l in dic:
l=dic[l]
plaintext+=l
return plaintext
# Example useage
plaintext="the cat sat on the mat"
ciphertext,key=substitution(alphabet,plaintext)
print "Key: ", key
print "Plaintext:", plaintext
print "Cipertext:", ciphertext
print "Decoded :", decode(alphabet,ciphertext,key)
When I run this code, it returns a `"IndexError: String index out of range"`
error. Could someone give me hand troubleshooting it, I can't see the problem.
    Traceback (most recent call last):
      File "/Users/Devlin/Desktop/Dev/Python/Substitution Cypher.py", line 57, in <module>
        print "Decoded :", decode(alphabet,ciphertext,key)
      File "/Users/Devlin/Desktop/Dev/Python/Substitution Cypher.py", line 41, in decode
        dic[key[i]]=alphabet[i]
    IndexError: string index out of range
Answer: The problem lies here:
def decode(alphabet,ciphertext,key):
dic={}
for i in range(0,len(key)):
dic[key[i]]=alphabet[i] # this line fails
Your `key` at this point is always 31 characters, which is `len('Zebra') +
len(alphabet)`. As `len(alphabet)` is always 26, `alphabet[i]` fails when i >
25.
I believe you've misunderstood what is represented by `key` here.
`substitution` should _produce_ a randomised key, it's not a password or a
salt of any kind. In fact, if you look at [the original article you got this
code from](http://www.stealthcopter.com/blog/2009/12/python-cryptography-
substitution-cipher-improving-on-the-caesar-cipher/), you'll see that `key=""`
in `substitution` and not some-random-value.
|
Python import: from django import template: module object 'template' has no attribute 'loader'
Question: I'm developing a Django project and deploying it to OpenShift PaaS. At first
everything worked, but after some changes unrelated to the template system (I
added django-hosts) something broke and a `"module object 'template' has no
attribute 'loader'"` error started to appear. Even stranger, it appeared only
twice after each wsgi app restart, and on the 3rd request everything started to
work. I went back to the last commit before the breakage, but the problem
persisted. I recreated the project from scratch and reinstalled my Django app,
but that didn't help either; the error started appearing every time, not just
with the first 2 requests. `from django import template` really does import the
template module object, but this object lacks about 5 attributes, including
`loader`, compared to what I expected.
Then I noticed that the same thing happens if I run the same code from the
Django shell locally. But it still works in my app's views.py with the local
Django development server. And it used to work on OpenShift initially. I tried
replacing `from django import template` with `from django.template import
loader` and calling `loader` directly - and EVERYTHING WORKED
I think I don't understand something about Python import. What's the
difference between
import a
a.b
and
from a import b
b
?
Why can a.b in first example miss attributes b has in second one?
Answer: That happens because `template` is a package inside django and `loader` is a
module inside that package, not a plain attribute, while you expected
`template` to be a module and `loader` one of its attributes. Importing a
package only runs its `__init__.py`; it does not automatically import the
package's submodules, so `template.loader` only exists once something (Django's
own code, or yours) has actually imported `django.template.loader`. `from
django.template import loader` imports the submodule explicitly, which is why
it always works.
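You can see the same effect with any stdlib package whose `__init__.py` does not import its submodules; a sketch using `xml` (not Django-specific):

```python
import sys

# Start from a clean slate in case these modules were imported earlier.
sys.modules.pop('xml', None)
sys.modules.pop('xml.dom', None)

import xml  # runs only xml/__init__.py; submodules are not loaded

try:
    xml.dom
    print("xml.dom was already bound")
except AttributeError:
    print("xml has no attribute 'dom' yet")

import xml.dom  # loads the submodule and binds it as an attribute of xml
print(xml.dom.__name__)
```

This also explains why the error can come and go between requests: whichever code path runs first may or may not have imported the submodule for you.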
|
Measuring time using pycuda.driver.Event gives wrong results
Question: I ran
[SimpleSpeedTest.py](http://wiki.tiker.net/PyCuda/Examples/SimpleSpeedTest)
from the PyCuda examples, producing the following output:
Using nbr_values == 8192
Calculating 100000 iterations
SourceModule time and first three results:
0.058294s, [ 0.005477 0.005477 0.005477]
Elementwise time and first three results:
0.102527s, [ 0.005477 0.005477 0.005477]
Elementwise Python looping time and first three results:
2.398071s, [ 0.005477 0.005477 0.005477]
GPUArray time and first three results:
8.207257s, [ 0.005477 0.005477 0.005477]
CPU time measured using :
0.000002s, [ 0.005477 0.005477 0.005477]
**The first four time measurements are reasonable, the last one (0.000002s)
however is way off**. The CPU result should be the slowest one but it is
orders of magnitude faster than the fastest GPU method. So obviously the
measured time must be wrong. This is strange since the same timing method
seems to work fine for the first four results.
So I took some code from SimpleSpeedTest.py and made a small **test file**
[2], which produced:
time measured using option 1:
0.000002s
time measured using option 2:
5.989620s
**Option 1** measures the duration using `pycuda.driver.Event.record()` (as in
SimpleSpeedTest.py), **option 2** uses `time.clock()`. Again, option 1 is off
while option 2 gives a reasonable result (the time it takes to run the test
file is around 6s).
Does anyone have an idea as to why this is happening?
Since using option 1 is endorsed in SimpleSpeedTest.py, could it be my setup
that is causing the problem? I am running a GTX 470, Display Driver 301.42,
CUDA 4.2, Python 2.7 64, PyCuda 2012.1, X5650 Xeon
[2] **Test file:**
import numpy
import time
import pycuda.driver as drv
import pycuda.autoinit
n_iter = 100000
nbr_values = 8192 # = 64 * 128 (values as used in SimpleSpeedTest.py)
start = drv.Event() # option 1 uses pycuda.driver.Event
end = drv.Event()
a = numpy.ones(nbr_values).astype(numpy.float32) # test data
start.record() # start option 1 (inserting recording points into GPU stream)
tic = time.clock() # start option 2 (using CPU time)
for i in range(n_iter):
a = numpy.sin(a) # do some work
end.record() # end option 1
toc = time.clock() # end option 2
end.synchronize()
events_secs = start.time_till(end)*1e-3
time_secs = toc - tic
print "time measured using option 1:"
print "%fs " % events_secs
print "time measured using option 2:"
print "%fs " % time_secs
Answer: I contacted [Andreas Klöckner](http://mathema.tician.de/software/pycuda) and
he suggested to synchronize on the start event, too.
...
start.record()
start.synchronize()
...
And this seems to solve the issue!
time measured using option 1:
5.944461s
time measured using option 2:
5.944314s
Apparently CUDA's behaviour changed in the last two years. I updated
[SimpleSpeedTest.py](http://wiki.tiker.net/PyCuda/Examples/SimpleSpeedTest).
|
Retrieving field formats of numpy record array as list
Question: I am trying to regularize the formats of a pytable and recarray for the purposes
of appending the recarray to the pytable. To do this I need to get field
information from the recarray (i.e. names and field formats). I can easily get
a list of the recarray names using:
    namelist = Myrecarray.dtype.names
but have not found a corresponding property for the formats. The recarray's
`dtype` property returns a list of name and format tuples, but it is not
possible to iterate over this list to retrieve the formats. This seems
non-standard, as I am able to iterate over other lists of tuples to obtain
specific tuple items. For example, the following code shows iteration over an
array and a recarray, and attempts to iterate over the list of tuples of the
dtype of a recarray:
import numpy as np
my_list = [(1, 2), (3, 4), (5, 6)]
print([x[0] for x in my_list])
print('-----------------------')
my_array = np.array([(1.0, 2, 5), (3.0, 4, 77)], dtype=[('x', float), ('y', int), ('z', float)])
print([x[2] for x in my_array])
print('-----------------------')
img = np.array([[(0,0,0), (1,0,0)], [(0,1,0), (0,0,1)]], {'names': ('r','g','b'), 'formats': ('f4', 'i2', 'f4')})
nta = np.array(img.dtype)
print(img.dtype)
print(img.dtype.names)
print(img.dtype[0])
print(nta)
print([x[0] for x in nta])
print([x[0] for x in img.dtype])
The printouts from this code (shown below) suggest that I can:
1) iterate over a list of tuples;
2) iterate over a recarray;
3) retrieve the dtype of a record array as an apparent list of tuples;
4) retrieve the names from the recarray;
5) retrieve the format of one field of the recarray;
but that I can not:
6) iterate over the recarray dtype to retrieve the format of each field;
7) iterate over the recarray dtype cast as an array to retrieve the format of each field.
the output is:
[1, 3, 5]
-----------------------
[5.0, 77.0]
-----------------------
[('r', '<f4'), ('g', '<i2'), ('b', '<f4')]
('r', 'g', 'b')
float32
[('r', '<f4'), ('g', '<i2'), ('b', '<f4')]
'import sitecustomize' failed; use -v for traceback
Traceback (most recent call last):
File "....\Python\test0.py", line 23, in <module>
print([x[0] for x in nta])
TypeError: iteration over a 0-d array
Does anyone know how to retrieve the field formats from a record array as a
list?
Answer: Hmmmm, newbie found an answer to his own question. I needed the `descr`
property of the dtype, which exposes the fields as an iterable list of
(name, format) tuples:
    print([x[1] for x in img.dtype.descr])
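Putting it together, a short sketch that pulls both names and formats out of a record array's dtype via `descr`:

```python
import numpy as np

rec = np.array([(1.0, 2, 5.0), (3.0, 4, 77.0)],
               dtype=[('x', 'f4'), ('y', 'i2'), ('z', 'f4')])

# dtype.descr is a plain list of (name, format) tuples, so it iterates
# like any other list of tuples.
names = [field[0] for field in rec.dtype.descr]
formats = [field[1] for field in rec.dtype.descr]

print(names)
print(formats)  # formats carry a byte-order prefix, e.g. '<f4'
```

Note the format strings include the byte-order character, which may matter when comparing them against pytables column descriptions.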
|
Customize the Django settings.py file depending on the OS
Question: I'm working on a Django project using two different machines, one running
Windows and the other Mac OS X. I have everything synchronized over Dropbox.
The problem is that some settings in settings.py are single strings (e.g.
MEDIA_ROOT or STATIC_ROOT) and not tuples. That means that I can set a proper
path for, let's say, STATIC_ROOT for only one of the two operating systems; on
the other one, of course, it won't work.
I was wondering if there exists a way to recognize the OS Python is running on
and choose the proper setting through a condition according to it.
Answer: The settings.py file is just python, so you can easily switch out statements
based on the platform. Use the [`platform`
module](http://docs.python.org/library/platform.html):
import platform
if platform.system() == 'Darwin':
MEDIA_ROOT = 'something'
else:
MEDIA_ROOT = 'somethingelse'
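If more than one setting varies, a small mapping keeps the branching in one place. A sketch with illustrative paths (these are not real project paths; substitute your own):

```python
import platform

# Hypothetical per-OS values; replace with your real paths.
_PER_OS = {
    'Darwin':  {'MEDIA_ROOT': '/Users/me/media', 'STATIC_ROOT': '/Users/me/static'},
    'Windows': {'MEDIA_ROOT': r'C:\media',       'STATIC_ROOT': r'C:\static'},
}
_DEFAULTS = {'MEDIA_ROOT': '/srv/media', 'STATIC_ROOT': '/srv/static'}

# Pick the block for the current OS, falling back to the defaults.
_cfg = _PER_OS.get(platform.system(), _DEFAULTS)
MEDIA_ROOT = _cfg['MEDIA_ROOT']
STATIC_ROOT = _cfg['STATIC_ROOT']
```

`platform.system()` returns `'Darwin'` on Mac OS X, `'Windows'` on Windows, and `'Linux'` on Linux, so the defaults cover any other deployment target.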
|
locating pywin32 library components python
Question: Hello, I am using pywin32 to track several actions on a server; currently I am
looking to track the files open per user on the server. I found the
`FILE_INFO_3` structure, here <http://msdn.microsoft.com/en-
us/library/windows/desktop/bb525375%28v=vs.85%29.aspx>, but I cannot find it
in any of the pywin32 libraries; I checked in `win32net` and in `win32file`,
but it is not there. Does anyone know how I can import and use it? Thanks!
What I am getting:
{'num_locks': 0, 'path_name': u'd:\\database\\agdata\\inx\\', 'user_name': u'finance', 'id': -1342162944, 'permissions': 1}
{'num_locks': 0, 'path_name': u'd:\\database\\dealdata\\', 'user_name': u'ntmount', 'id': 1879102464, 'permissions': 1}
{'num_locks': 0, 'path_name': u'd:\\database\\dealdata\\', 'user_name': u'ntmount', 'id': 536973312, 'permissions': 1}
{'num_locks': 0, 'path_name': u'd:\\database\\agdata\\inx\\', 'user_name': u'ntmount', 'id': -469590016, 'permissions': 1}
..........
What I need:
{'num_locks': 0, 'path_name': u'd:\\database\\agdata\\inx\\', '10.2.2.3': u'finance', 'id': -1342162944, 'permissions': 1}
{'num_locks': 0, 'path_name': u'd:\\database\\dealdata\\', '10.5.3.23': u'ntmount', 'id': 1879102464, 'permissions': 1}
..........
Answer: The FILE_INFO_* structs were not translated into python equivalents. They will
be returned or used, e.g. by
[win32net.NetFileEnum](http://timgolden.me.uk/pywin32-docs/win32net__NetFileEnum_meth.html)
as a dictionary.
|
Why am I not getting a live response to Comet server stream events?
Question: I have a CGI script that streams events:
#!/usr/bin/python
...
print 'Content-Type: text/event-stream\n'
while (True):
delay()
print 'event: item'
print 'data: ' + get_random_event()
print ''
From the command line, it sends random events at random intervals (every few
seconds).
However, I have the following JavaScript:
var source = new EventSource('feed.cgi');
source.addEventListener('message', function(current_event)
{
alert(current_event.data);
}, false);
This has never got me an alert. There are no errors in the Chrome JavaScript
console, but so far I have never gotten a discernible recognition that the
code listening to the server script has received an event.
Do you have suggestions for how I can get these?
\--EDIT--
I should probably give the full CGI script for completeness. Firefox says
"Firefox can't establish a connection to the server at
http://[deleted]/feed.cgi." But I can execute a CGI script normally that's in
a sibling directory, and Apache's finest-grained Directory setting is still on
a directory that is parent to both and meant to allow scripts with a .cgi
extension to run as CGI. And the script is set to mode 0755.
Adding a file flush after each chunk of output doesn't seem to alter the
script's reception.
#!/usr/bin/python
import random
import sys
import time
timescale = 5
names = [
'Adam',
'Bob',
'Carol',
'David',
'Eddie',
'Frank',
'George',
'Harry',
'Ike',
'Jim',
'Kenny',
'Larry',
'Mary',
'Nancy',
'Oliver',
'Peter',
'Quincy',
'Roger',
'Sam',
'Thomas',
'Uncle',
'Vincent',
'William',
'Xavier',
'Yogi',
'Zachary',
]
notifications = [
'applied for a position at ACME.',
'went to an interview at ACME.',
'accepted an offer from ACME.',
]
def delay():
time.sleep((random.random() + random.random()) * timescale)
def get_random_event():
result = '<div class="activity"><span class="name">'
result += random.choice(names)
result += '</span> '
result += random.choice(notifications)
result += '</div>'
return result
print 'Content-Type: text/event-stream\n'
while (True):
print 'event: item'
print 'data: ' + get_random_event()
print ''
sys.stderr.flush()
delay()
Answer: Starting from your code and the [W3Schools Server-Sent Events
documentation](http://www.w3schools.com/html/html5_serversentevents.asp) I
built an example which runs successfully in Chromium and Firefox with these
modifications (served with lighttpd):
1. add a '\n' at the end of each print
2. flush the standard output instead of the standard error
So the while should be:
while (True):
print 'event: item' + '\n'
print 'data: ' + get_random_event() + '\n'
print ''
sys.stdout.flush()
delay()
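The wire format itself is easy to get wrong: each message is a set of `field: value` lines terminated by a blank line. A sketch of a helper that formats one message (the function name is illustrative):

```python
def sse_message(data, event=None):
    """Format one Server-Sent Events message; a blank line terminates it."""
    lines = []
    if event is not None:
        lines.append('event: %s' % event)
    lines.append('data: %s' % data)
    lines.append('')   # blank line ends the message
    lines.append('')   # join() then yields the trailing newline
    return '\n'.join(lines)

print(repr(sse_message('hello', event='item')))  # 'event: item\ndata: hello\n\n'
```

Writing the message with a single `sys.stdout.write(sse_message(...))` followed by a flush avoids surprises from `print`'s own newline handling.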
|
Retrieving the list of all the Python APIs given by an application
Question: I would like to retrieve the list of all the APIs that an application
exposes to its users.
The application is written in C/C++ for the biggest part.
I was hoping that Python has a standard function for that. I was also trying
to approach this in terms of namespaces, since I'm not interested in all the
keywords but only in the ones provided with the APIs, but I simply do not know
where to start; I do not know of any functions that do something related to
what I'm trying to achieve.
The application uses Python 3.x to provide the APIs.
Answer: Python doesn't have a notion of an API (or _interface_) as a language
primitive. A module or package will expose some of its members (functions and
variables) and hide others, so if you know _which_ modules you are interested
in, "exposing" in this sense is AFAIK the most meaningful concept.
The exposed members are the same ones that will be imported if you run `from
<module> import *`. As you probably know, member names that begin with a
single underscore, or [begin with two
underscores](http://docs.python.org/reference/expressions.html#atom-
identifiers) and do _not_ end with two, are not meant to be part of the API
and will not be exported; by default everything else will be exposed, but a
module can customize its API by listing what should be exported in the
`__all__` variable-- see [Importing * from a
package](http://docs.python.org/tutorial/modules.html#importing-from-a-
package).
So, to find the APIs you are looking for you must first know which top-level
modules you are interested in. If the application in question is available to
python as a single package, start with it. If it has a `__all__` variable, its
contents are the API for the package. If it does not, look through the
contents of `dir(<package>)` and exclude anything that starts with only a
single underscore, or starts with two underscores but does not end with two.
If you're looking at a large package, some of what you'll find are themselves
modules or packages. Examine them the same way, recursively.
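A small helper along these lines; the fallback branch mirrors the default `import *` rule described above (a sketch, not a standard function):

```python
def public_api(module):
    """Names the module exposes, mirroring `from module import *`."""
    if hasattr(module, '__all__'):
        return sorted(module.__all__)
    # Default rule: anything not starting with an underscore is exported.
    return sorted(name for name in dir(module)
                  if not name.startswith('_'))

import json
print(public_api(json))  # json declares __all__, so this is its stated API
```

Calling this recursively on any members that are themselves modules gives you the full API tree of a package.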
|
create JSON with multiple dictionaries, Python
Question: I have this code:
>>> import simplejson as json
>>> keys = dict([(x, x**3) for x in xrange(1, 3)])
>>> nums = json.dumps(keys, indent=4)
>>> print nums
{
"1": 1,
"2": 8
}
But I want to create a loop to make my output looks like this:
[
{
"1": 1,
"2": 8
},
{
"1": 1,
"2": 8
},
{
"1": 1,
"2": 8
}
]
Answer: Your desired output is not valid JSON. I think what you probably meant to do
was to append multiple dictionaries to a list, like this:
>>> import json
>>> multikeys = []
>>> for i in range(3):
... multikeys.append(dict([(x, x**3) for x in xrange(1, 3)]))
...
>>> print json.dumps(multikeys, indent=4)
[
{
"1": 1,
"2": 8
},
{
"1": 1,
"2": 8
},
{
"1": 1,
"2": 8
}
]
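On Python 2.7+ the same structure can also be built with dict and list comprehensions, without `dict()` around a list of pairs:

```python
import json

# three copies of the {1: 1, 2: 8} cube table
multikeys = [{x: x ** 3 for x in range(1, 3)} for _ in range(3)]
print(json.dumps(multikeys, indent=4))
```

Note that `json.dumps` converts the integer keys to strings, which is why the output shows `"1"` and `"2"`.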
|
How to zip up files of same prefix in python?
Question: If I have a directory "C:\Files" that contains a bunch of files:
A_File1 A_File2 B_File1 B_File2
What is the best way to iterate through the files to zip up any file with the
same prefix into a single zip file? For instance, output would be "A.zip" and
"B.zip" and their associated files.
I know how to iterate through the files:
for item in os.listdir("C:\FILES"):
But I do not know what the best way to zip up the files is or if there is some
python library that helps with that.
Answer: Use the glob module from the standard library, instead of os.listdir:
from glob import glob
for file in glob("C:\\FILES\\A_File*"):
...
(If you are using backslashes to separate dirs, use two, because a single one
is an escape character in Python strings)
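For the zipping itself, the standard-library `zipfile` module is enough. A sketch that groups on the text before the first underscore (the `PREFIX_Name` convention is an assumption taken from the question's examples):

```python
import os
import zipfile
from collections import defaultdict

def zip_by_prefix(directory):
    """Write one <prefix>.zip per group of files named <prefix>_<anything>."""
    groups = defaultdict(list)
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and '_' in name:
            prefix = name.split('_', 1)[0]
            groups[prefix].append(path)
    for prefix, paths in groups.items():
        archive = os.path.join(directory, prefix + '.zip')
        with zipfile.ZipFile(archive, 'w') as zf:
            for path in paths:
                # store just the file name, not the full directory path
                zf.write(path, arcname=os.path.basename(path))
```

For the example directory this would produce `A.zip` containing `A_File1` and `A_File2`, and `B.zip` containing the B files.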
|
Freeze when using tkinter + pyhook. Two event loops and multithreading
Question: I am writing a tool in python 2.7 registering the amount of times the user
pressed a keyboard or mouse button. The amount of clicks will be displayed in
a small black box in the top left of the screen. The program registers clicks
even when another application is the active one.
It works fine except when I move the mouse over the box. The mouse then
freezes for a few seconds after which the program works again. If I then move
the mouse over the box a second time, the mouse freezes again, but this time
the program crashes.
I have tried commenting out pumpMessages() and then the program works. The
problem looks a lot like this question
[pyhook+tkinter=crash](http://stackoverflow.com/questions/6765362/pyhook-
tkinter-crash), but no solution was given there.
Other answers has shown that there is a bug with the dll files when using wx
and pyhook together in python 2.6. I don't know if that is relevant here.
My own thoughts is that it might have something to do with the two event loops
running parallel. I have read that tkinter isn't thread safe, but I can't see
how I can make this program run in a single thread since I need to have both
pumpmessages() and mainlooop() running.
To sum it up: Why does my program freeze on mouse over?
import pythoncom, pyHook, time, ctypes, sys
from Tkinter import *
from threading import Thread
print 'Welcome to APMtool. To exit the program press delete'
## Creating input hooks
#the function called when a MouseAllButtonsUp event is called
def OnMouseUpEvent(event):
global clicks
clicks+=1
updateCounter()
return True
#the function called when a KeyUp event is called
def OnKeyUpEvent(event):
global clicks
clicks+=1
updateCounter()
if (event.KeyID == 46):
killProgram()
return True
hm = pyHook.HookManager()# create a hook manager
# watch for mouseUp and keyUp events
hm.SubscribeMouseAllButtonsUp(OnMouseUpEvent)
hm.SubscribeKeyUp(OnKeyUpEvent)
clicks = 0
hm.HookMouse()# set the hook
hm.HookKeyboard()
## Creating the window
root = Tk()
label = Label(root,text='something',background='black',foreground='grey')
label.pack(pady=0) #no space around the label
root.wm_attributes("-topmost", 1) #alway the top window
root.overrideredirect(1) #removes the 'Windows 7' box around the label
    ## starting a new thread to run pumpMessages() and mainloop() simultaneously
def startRootThread():
root.mainloop()
def updateCounter():
label.configure(text=clicks)
def killProgram():
ctypes.windll.user32.PostQuitMessage(0) # stops pumpMessages
root.destroy() #stops the root widget
rootThread.join()
print 'rootThread stopped'
rootThread = Thread(target=startRootThread)
rootThread.start()
pythoncom.PumpMessages() #pump messages is a infinite loop waiting for events
print 'PumpMessages stopped'
Answer: Given that Tkinter needs to run in the main thread and must not be called from
other threads, I found a solution:
My problem was that both `PumpMessages` and `mainloop` needed to run in the
main thread. In order to both receive inputs and show a Tkinter label with the
number of clicks, I need to switch between running `PumpMessages` and briefly
running `mainloop` to update the display.
To make `mainloop()` quit itself I used:
    root.after(100, root.quit)  # root is the name of the Tk() instance
    root.mainloop()
so after 100 milliseconds `root` calls its `quit` method and breaks out of
its own main loop
To break out of pumpMessages I first found the pointer to the main thread:
mainThreadId = win32api.GetCurrentThreadId()
I then used a new thread that sends the `WM_QUIT` to the main thread (note
`PostQuitMessage(0)` only works if it is called in the main thread):
win32api.PostThreadMessage(mainThreadId, win32con.WM_QUIT, 0, 0)
It was then possible to create a while loop which alternates between
`PumpMessages` and `mainloop`, updating the label text in between. Now that the
two event loops aren't running simultaneously, I have had no problems:
def startTimerThread():
while True:
win32api.PostThreadMessage(mainThreadId, win32con.WM_QUIT, 0, 0)
time.sleep(1)
mainThreadId = win32api.GetCurrentThreadId()
timerThread = Thread(target=startTimerThread)
timerThread.start()
while programRunning:
label.configure(text=clicks)
root.after(100,root.quit)
root.mainloop()
pythoncom.PumpMessages()
Thank you to Bryan Oakley for information about Tkinter and Boaz Yaniv for
providing the information needed to [stop pumpMessages() from a
subthread](http://stackoverflow.com/questions/6023172/ending-a-program-mid-
run/6023566#6023566)
|
Splitting input into two with for-loop
Question: This is where I got stuck trying to learn Python using a web course.
> > Write a program that takes a single input line of the form
> «number1»+«number2», where both of these represent positive integers, and
> outputs the sum of the two numbers. For example on input 5+12 the output
> should be 17.
We shouldn't use split() or any imports.
I think the solution is to use the '+' character (ASCII value 43) to find where
to take the substrings, but I am not sure. The only thing I am sure of is that
the solution is simple: no fancy functions or imports needed.
Here is what I think might be close to a solution:
S = input()
    for position in range(0, len(S)):
#code for substrings 5 to number1, and 12 to number2
result = int(number1)+int(number2)
print(result)
Answer: Use
[`str.partition`](http://docs.python.org/dev/library/stdtypes.html#str.partition):
line = raw_input()
num1, _, num2 = line.partition('+')
print(int(num1) + int(num2))
If you can't use `str.partition` and want to use a `for` loop, then
`enumerate` should help:
for i, c in enumerate(line):
if c == '+':
# what goes here?
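For completeness, a sketch of one way to fill in that loop: slice the string
around the `+` once it is found (the helper name `add_from_line` is mine, not
part of the exercise):

```python
def add_from_line(line):
    # Walk the string; at the '+', everything before it is the first
    # number and everything after it is the second.
    for i, c in enumerate(line):
        if c == '+':
            return int(line[:i]) + int(line[i + 1:])

print(add_from_line("5+12"))  # prints 17
```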
|
Cannot import Skype4Py in Python
Question: I'm using Skype4Py with Python 3.2.3 on windows. There was an error while
trying to import Skype4Py package and I did the following to figure out what
it was:
import sys
try:
import Skype4Py
except:
print (sys.exc_info()[0])
print (sys.exc_info()[1])
The output is as follows:
`<class 'Import error'>`
`No module named skype`
I installed Skype4Py with the windows installer. I can see the Skype4Py in
`Python32\Lib\site-packages`. How do I get this to work?
Answer: It is clearly stated in the [Skype Dev
Page](https://developer.skype.com/skypekit/reference/python/html/help.html)
that:
> _The Python wrapper is compatible and is tested with Python version 2.6.5 -
> Python versions 3.x are not supported at this time._
Guess it's bad luck then;
I reckon the Skype devs have given up on the _SkypeKit Python Wrapper Reference_
due to lack of ...
**but**
you can find an independently maintained version of Skype4Py on
[Github](https://github.com/awahlig/skype4py). Though it also only works on
Python 2.x, it is updated regularly, has a far bigger community than the
Skype-maintained project (which is nearly dead), and supports the latest 2.x
versions rather than only 2.6.5. Here you can find the documentation for the
Github-maintained version of Skype4Py: [Skype4py
usage](https://github.com/awahlig/skype4py#id3)
|
Compile python code then place it somewhere else
Question: I have a python file, for example named blah.py, that I would like to have
compiled and then placed into another folder using cmake. Right now, I am
capable of doing this with the following code in my cmake file:
ADD_CUSTOM_TARGET(output ALL /usr/bin/python -m py_compile src/blah.py
COMMAND /bin/mv src/blah.pyc build VERBATIM)
This is on ubuntu 12.04. This code works as intended; the only problem is that
the python file is being compiled in the source directory, then being put in
the build directory.
However, I can't assume that this src directory will have read AND write
privileges, meaning what I need to do is combine these two commands into one
(compile the python file and place the compiled python file into my build
directory, instead of compiling it in the src directory then moving it)
I'm sure there must be some way to specify where I would like this compiled
code to be placed, but I can't find one. Help would be greatly appreciated! :)
EDIT: This link may have a solution..not sure:
[Can compiled bytecode files (.pyc) get generated in different
directory?](http://stackoverflow.com/questions/611967/can-compiled-bytecode-
files-pyc-get-generated-in-different-directory)
Answer: I was typing out this answer, and then looked at your edited link. This same
answer is given in one of the unaccepted answers:
<http://stackoverflow.com/a/611995/496445>
import py_compile
py_compile.compile('/path/to/source/code.py', cfile='/path/to/build/code.pyc')
To call this via a basic shell command you can format it like this:
python -c "import py_compile; py_compile.compile('/path/to/source/code.py', cfile='/path/to/build/code.pyc')"
|
pypy memory usage grows forever?
Question: I have a complicated python server app, that runs constantly all the time.
Below is a very simplified version of it.
When I run the below app using python; "python Main.py". It uses 8mb of ram
straight away, and stays at 8mb of ram, as it should.
When I run it using pypy ("pypy Main.py"), it begins by using 22mb of ram and
over time the ram usage grows. After 30 seconds it's at 50mb; after an hour
it's at 60mb.
If I change the "b.something()" to be "pass" it doesn't gobble up memory like
that.
I'm using pypy 1.9 on OSX 10.7.4. I'm okay with pypy using more ram than
python.
**Is there a way to stop pypy from eating up memory over long periods of
time?**
import sys
import time
import traceback
class Box(object):
def __init__(self):
self.counter = 0
def something(self):
self.counter += 1
if self.counter > 100:
self.counter = 0
try:
print 'starting...'
boxes = []
for i in range(10000):
boxes.append(Box())
print 'running!'
while True:
for b in boxes:
b.something()
time.sleep(0.02)
except KeyboardInterrupt:
print ''
print '####################################'
print 'KeyboardInterrupt Exception'
sys.exit(1)
except Exception as e:
print ''
print '####################################'
print 'Main Level Exception: %s' % e
print traceback.format_exc()
sys.exit(1)
Below is a list of times and the ram usage at each time (I left it running
overnight).
Wed Sep 5 22:57:54 2012, 22mb ram
Wed Sep 5 22:57:54 2012, 23mb ram
Wed Sep 5 22:57:56 2012, 24mb ram
Wed Sep 5 22:57:56 2012, 25mb ram
Wed Sep 5 22:57:58 2012, 26mb ram
Wed Sep 5 22:57:58 2012, 27mb ram
Wed Sep 5 22:57:59 2012, 29mb ram
Wed Sep 5 22:57:59 2012, 30mb ram
Wed Sep 5 22:58:00 2012, 31mb ram
Wed Sep 5 22:58:02 2012, 32mb ram
Wed Sep 5 22:58:03 2012, 33mb ram
Wed Sep 5 22:58:05 2012, 34mb ram
Wed Sep 5 22:58:08 2012, 35mb ram
Wed Sep 5 22:58:10 2012, 36mb ram
Wed Sep 5 22:58:12 2012, 38mb ram
Wed Sep 5 22:58:13 2012, 39mb ram
Wed Sep 5 22:58:16 2012, 40mb ram
Wed Sep 5 22:58:19 2012, 41mb ram
Wed Sep 5 22:58:21 2012, 42mb ram
Wed Sep 5 22:58:23 2012, 43mb ram
Wed Sep 5 22:58:26 2012, 44mb ram
Wed Sep 5 22:58:28 2012, 45mb ram
Wed Sep 5 22:58:31 2012, 46mb ram
Wed Sep 5 22:58:33 2012, 47mb ram
Wed Sep 5 22:58:35 2012, 49mb ram
Wed Sep 5 22:58:35 2012, 50mb ram
Wed Sep 5 22:58:36 2012, 51mb ram
Wed Sep 5 22:58:36 2012, 52mb ram
Wed Sep 5 22:58:37 2012, 54mb ram
Wed Sep 5 22:59:41 2012, 55mb ram
Wed Sep 5 22:59:45 2012, 56mb ram
Wed Sep 5 22:59:45 2012, 57mb ram
Wed Sep 5 23:00:58 2012, 58mb ram
Wed Sep 5 23:02:20 2012, 59mb ram
Wed Sep 5 23:02:20 2012, 60mb ram
Wed Sep 5 23:02:27 2012, 61mb ram
Thu Sep 6 00:18:00 2012, 62mb ram
Answer: <http://doc.pypy.org/en/latest/gc_info.html#minimark-environment-variables>
shows how to tweak the GC through environment variables such as `PYPY_GC_MAX`,
which caps the total heap size.
|
subprocess.check_output not accepting long arguments
Question: Sorry for this basic question but it is driving me crazy.
From the same machine, if I do:
$ wkhtmltopdf --title "Transaction 773 (printed version)" --margin-left 5 --margin-right 2 --encoding utf8 www.google.com - > salida.pdf
I get a beautiful pdf as expected but -from a common python shell- I do:
>>> import subprocess
>>> f = open("salida.pdf", "wb")
>>> f.write(subprocess.check_output(["wkhtmltopdf", '--title "Transaction 773 (printed version)"', "--margin-left 5", "--margin-right 2", "--encoding utf8", "www.google.com", "-"]))
and all I get is the common error:
/usr/lib/python2.7/subprocess.pyc in check_output(*popenargs, **kwargs)
542 if cmd is None:
543 cmd = popenargs[0]
--> 544 raise CalledProcessError(retcode, cmd, output=output)
545 return output
546
CalledProcessError: Command '['wkhtmltopdf', '--title "Transaction 773 (printed version)"', '--margin-left 5', '--margin-right 2', '--encoding utf8', 'www.google.com', '-']' returned non-zero exit status 1
looking at the complete error message, it tells me:
Unknown long argument --title "Transaction 773 (printed version)"
why is it not accepting the arguments? (Worth mentioning that if I remove all
the arguments and just leave the input file and the "-", it works like a charm.)
TIA
Answer: You need to specify each argument in a separate item, and long arguments are
actually _2_ values:
f.write(subprocess.check_output([
"wkhtmltopdf",
'--title', "Transaction 773 (printed version)",
"--margin-left", "5",
"--margin-right", "2",
"--encoding", "utf8",
"www.google.com", "-"]))
Normally, the shell parses such a command line, splits it on whitespace
(except where text has been enclosed in quotes), and passes that on to the new
process as the `argv` value.
Now, you are creating that `argv` list yourself, and you have to do the
splitting yourself. Note that the `--title` argument thus no longer needs to
have the shell-level quotes either.
A short argument (`-ml5` or similar) doesn't have that whitespace, so you
didn't have a problem with those.
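If you would rather keep writing the command as one shell-style string, the
standard library's `shlex` module can do the same splitting the shell would
(a sketch):

```python
import shlex

# The exact command line from the question, as one string
cmd = ('wkhtmltopdf --title "Transaction 773 (printed version)" '
       '--margin-left 5 --margin-right 2 --encoding utf8 www.google.com -')

# shlex honors the quotes, just like the shell does
args = shlex.split(cmd)
```

The resulting `args` list can then be passed straight to
`subprocess.check_output(args)`.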
|
import cv2 works but import cv2.cv as cv not working
Question: I think the sys path is correct; cv.pyd and cv2.pyd reside in
c:\OpenCV2.3\build\Python\2.7\Lib\site-packages.
>>> import sys
>>> sys.path
['', 'C:\\Python27\\Lib\\idlelib', 'C:\\Python27\\lib\\site-packages\\pil-1.1.7-py2.7-win32.egg', 'C:\\Python27\\lib\\site-packages\\cython-0.17-py2.7-win32.egg', 'C:\\Python27\\lib\\site-packages\\pip-1.2-py2.7.egg', 'c:\\OpenCV2.3\\build\\Python\\2.7\\Lib\\site-packages', 'C:\\Python27\\python27.zip', 'C:\\Python27\\DLLs', 'C:\\Python27\\lib', 'C:\\Python27\\lib\\plat-win', 'C:\\Python27\\lib\\lib-tk', 'C:\\Python27', 'C:\\Python27\\lib\\site-packages', 'C:\\Python27\\lib\\site-packages\\IPython\\extensions']
And `import cv` or `import cv2` seems to be OK, but `import cv2.cv` is not:
>>> import cv
>>> import cv2.cv as cv
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
import cv2.cv as cv
ImportError: No module named cv
>>> import cv2
>>> cv.NamedWindow("camera", 1)
...
What could be the reason of the ImportError?
Answer: I had the same issue. It was an issue with OpenCV Engine. Download OpenCV
Engine from <https://github.com/thumbor/opencv-engine/releases/tag/1.0.1> and
save it as engine.py in \Python27\Lib\site-packages. Use `cv2.cv` instead of
`cv2.cv as cv`.
|
Debugging Django/Gunicorn behind Nginx
Question: Fresh install of Nginx, Gunicorn, Supervisor, New Relic, Django, Postgres,
etc. Hitting the URL gives a big fat "Internal Server Error."
Turning debug on in the Nginx configuration gives a whole lot of detail, but
nothing that points to what is causing the 500 error (just that it is
happening.)
Next, I shut down Gunicorn via supervisorctl and started the application up
via `python manage.py runserver`, hit the URL, and everything is running fine.
Step back, shut off `runserver` and started Gunicorn manually using
`bin/gunicorn_django` and this is the closest to a usable trace log that I've
been able to get to:
2012-09-05 21:39:25 [5927] [ERROR] Error handling request
Traceback (most recent call last):
File "/somewhere/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 102, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/somewhere/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 241, in __call__
response = self.get_response(request)
File "/somewhere/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 82, in get_response
urlconf = settings.ROOT_URLCONF
File "/somewhere/local/lib/python2.7/site-packages/django/utils/functional.py", line 185, in inner
return func(self._wrapped, *args)
AttributeError: 'Settings' object has no attribute 'ROOT_URLCONF'
I'm not asking for an answer to what is going on here, because I know this is
not enough detail to figure out much. The `ROOT_URLCONF` issue is generic and
could be caused by a lot of things. My question is what are some next steps I
can use to continue tracking down what is going on?
Thanks
Answer: The mention of `Settings` not `settings` (notice the capital) is suspicious.
I've noticed personally from using Aptana Studio, that one of the autocomplete
choices that appears after typing `from django.conf import` is `Settings`, but
that's obviously not what you really want. Perhaps something similar happened
and you accepted the incorrect autocomplete choice? I'd search your project
for `Settings` (with case-sensitivity on, of course), and change it to
lowercase as appropriate.
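To do that search programmatically, a small case-sensitive scan could look
like this (a sketch; `find_bad_imports` is a name made up here):

```python
import os

def find_bad_imports(root):
    # Walk the project and report case-sensitive hits on the incorrect
    # capitalized "Settings" import.
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.py'):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                for lineno, line in enumerate(f, 1):
                    if 'import Settings' in line:
                        hits.append((path, lineno))
    return hits
```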
|
from django.utils.functional import wraps and from functools import wraps
Question: What is the difference in functionality between `from django.utils.functional
import wraps` and `from functools import wraps`?
I'm using Django 1.3 and Python 2.4. I want to remove sensitive information
from POST data, but `django.views.decorators.debug` is only available in
Django 1.4, so I copied the source code into my project from
<https://bitbucket.org/orzel/django-1.4-production/src/507b10c2c0e3/django/views/decorators/debug.py>.
This decorator uses `functools`, which is not available in Python 2.4, so I
use `from django.utils.functional import wraps` instead of `from functools
import wraps`. But the sensitive info still appears in the error mail. Any
suggestions?
Answer: I don't think there is a difference. It is probably a backport for older
Python versions which don't support `functools.wraps`.
**Edit**
Actually, since the latest Django no longer supports any Python version that
lacks `functools`, I think it's just left there for possible dependency
issues (in Django or in Django projects). It actually even imports
`functools.wraps` directly now:
<http://code.djangoproject.com/svn/django/trunk/django/utils/functional.py>
|
python module for nslookup
Question: Is there a python-module that's doing the same stuff as nslookup does? I am
planning to use nslookup on digging some information regarding the domain of a
URL to be scrapped. I know I can use os.sys to call nslookup but I am just
wondering if there is a python-module for this already. Thanks in advance!
Answer: You need to use [DNSPython](http://www.dnspython.org/)
import dns.resolver
answers = dns.resolver.query('dnspython.org', 'MX')
for rdata in answers:
print 'Host', rdata.exchange, 'has preference', rdata.preference
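If you only need plain A-record lookups and would rather avoid an extra
dependency, the standard library's `socket` module covers the basic case (a
sketch; the `lookup` name is mine, and this goes through the system resolver
rather than speaking DNS directly, so record types like MX are not available):

```python
import socket

def lookup(hostname):
    # Resolve hostname to its IPv4 addresses via the system resolver,
    # roughly what a plain "nslookup hostname" would show.
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})
```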
|
how to run python project after remove .py file?
Question: I want to remove my source code .py files so that rivals cannot see my code,
so I use
    python -c "import compileall; compileall.compile_dir('D:/acc')"
to compile all .py files to .pyc. The next step is to remove the .py files and
run the project using only the .pyc files. How do I do that?
Answer: You can use `shutil`, which is in the standard library, but please consider
that removing `*.py` files is not a very strong protection, as it is very easy
to rebuild a `*.py` file from a `*.pyc`.
Using py2exe with a DRM packer would be much more secure.
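That said, a minimal sketch of the compile-then-remove step with the standard
library (`compile_and_strip` is a name made up here; the path is whatever your
project root is, and note that under Python 3 the `.pyc` files land in
`__pycache__`):

```python
import compileall
import os

def compile_and_strip(root):
    # Compile everything under root to bytecode, then delete the sources.
    compileall.compile_dir(root, quiet=True)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.py'):
                os.remove(os.path.join(dirpath, name))
```

The warning above still applies: the remaining `.pyc` files decompile easily.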
|
The Requests streaming example does not work in my environment
Question: I've been trying to consume the Twitter Streaming API using Python Requests.
There's a [simple example](http://docs.python-
requests.org/en/latest/user/advanced/#streaming-requests) in the
documentation:
import requests
import json
r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
data={'track': 'requests'}, auth=('username', 'password'))
for line in r.iter_lines():
if line: # filter out keep-alive new lines
print json.loads(line)
When I execute this, the call to `requests.post()` never returns. I've
experimented and proved that it is definitely connecting to Twitter and
receiving data from the API. However, instead of returning a response object,
it just sits there consuming as much data as Twitter sends. Judging by the
code above, I would expect `requests.post()` to return a response object with
an open connection to Twitter down which I could continue to receive realtime
results.
(To prove it was receiving data, I connected to Twitter using the same
credentials in another shell, whereupon Twitter closed the first connection,
and the call returned the response object. The `r.content` attribute contained
all the backed up data received while the connection was open.)
The documentation makes no mention of any other steps required to cause
`requests.post` to return before consuming all the supplied data. Other people
seem to be using similar code without encountering this problem, e.g.
[here](http://kracekumar.com/post/19556427690/n00bs-epic-research-on-twitters-
streaming-api-on).
I'm using:
* Python 2.7
* Ubuntu 11.04
* Requests 0.14.0
Answer: You need to switch off prefetching, which I think is a parameter that changed
defaults:
r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
data={'track': 'requests'}, auth=('username', 'password'),
prefetch=False)
for line in r.iter_lines():
if line: # filter out keep-alive new lines
print json.loads(line)
**UPDATE** : In the latest [`requests`](http://docs.python-
requests.org/en/latest/api/?highlight=prefetch) framework, use `stream`
instead of `prefetch`:
r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
data={'track': 'requests'}, auth=('username', 'password'),
stream=True)
for line in r.iter_lines():
if line: # filter out keep-alive new lines
print json.loads(line)
|
Application design for exchanging values between C/C++ and Python
Question: I'm writing a C/C++ application that is used directly by the user to give
inputs, basically numerical values such as a precise value, a range, and so
on. Think of a UI with pieces like:
  * a numpad
  * a slider
  * an input field for numbers
This is the C/C++ part; I expect to get just a bunch of values from this
piece of code. The problem is that these values:
  * must be sent to Python in some way
  * can change in real time, since my C/C++ part is like a UI where things can change
I don't know of any design that fits my request, mainly because:
  * I know just the basics of Python; I can script something, but I know nothing about Python internals
  * I have no idea how to make C++ and Python work together, or how to do this with a signal/slot logic (supposing that a signal/slot design is the right one for this app)
  * I have no idea how to provide my APIs to users who would like to integrate my values into their own scripts for third-party applications. Imagine a random app that provides Python APIs too: I want to give the user the option to code against both APIs in one script, and maybe use that script in the third-party application like any other set of APIs (is importing external APIs possible when scripting in a third-party application's Python environment?)
That's how I would describe my problem and what I'm aiming to do; if something
is not clear, please comment.
**recap**: how do I provide a Python API from a C++ application, using a
signal/slot design, in real time?
Thanks.
Answer: Check out
[Boost.Python](http://www.boost.org/doc/libs/1_42_0/libs/python/doc/index.html).
It's a library which allows to use Python as a scripting language for a C++
program.
|
Problems using python dragonfly on linux bases machine
Question: I am playing with the Dragonfly lib in Python. I am working on Mac OSX, and
this will be my target platform. However, when I try to run my program I
receive the following error:
Traceback (most recent call last):
File "clock_challenge.py", line 2, in <module>
from dragonfly.all import Grammar,CompoundRule
File "/Users/vikash/.virtualenv/clock_challenge/lib/python2.7/site-packages/dragonfly-0.6.5-py2.7.egg/dragonfly/__init__.py", line 22, in <module>
from .log import get_log
File "/Users/vikash/.virtualenv/clock_challenge/lib/python2.7/site-packages/dragonfly-0.6.5-py2.7.egg/dragonfly/log.py", line 30, in <module>
import win32gui
ImportError: No module named win32gui
How can I get around using the win32gui library, since my target platform is
Linux based?
Answer: That package is only available for Windows. Without lots of code changes you
won't be able to use it on another OS.
|
Django Tutorial: Generic Views. Attribute Error
Question: I'm at the last part of
[this](https://docs.djangoproject.com/en/1.4/intro/tutorial04/) tutorial.
from django.conf.urls import patterns, include, url
from django.views.generic import DetailView, ListView
from polls.models import Poll
urlpatterns = patterns('',
url(r'^$',
ListView.as_view(
queryset=Poll.objects.order_by('-pub_date')[:5],
context_object_name='latest_poll_list',
template_name='polls/index.html')),
url(r'^(?P<pk>\d+)/$',
DetailView.as_view(
model=Poll,
template_name='polls/detail.html')),
url(r'^(?P<pk>\d+)/results/$',
DetailView.as_view(
model=Poll,
template_name='polls/results.html'),
name='poll_results'),
url(r'^(?P<poll_id>\d+)/vote/$', 'polls.views.vote'),
)
The ListView works, but when I visit a url with DetailView, I get.
AttributeError at /polls/2/
Generic detail view DetailView must be called with either an object pk or a slug.
Request Method: GET
Request URL: http://127.0.0.1:8000/polls/2/
Django Version: 1.4.1
Exception Type: AttributeError
Exception Value:
Generic detail view DetailView must be called with either an object pk or a slug.
Exception Location: /home/yasith/coding/django/django-tutorial/lib/python2.7/site-packages/django/views/generic/detail.py in get_object, line 46
Python Executable: /home/yasith/coding/django/django-tutorial/bin/python2
Python Version: 2.7.3
I'm not sure what I'm doing wrong. Any help would be appreciated.
EDIT: Add the main urls.py
from django.conf.urls import patterns, include, url
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
url(r'^polls/', include('polls.urls')),
url(r'^admin/', include(admin.site.urls)),
)
Answer: I think the code you posted above, is not the one you have on your disk.
I had the same problem, but then I looked carefully at both, my code and the
tutorial. The regex I had in my code was different from the tutorial.
This was my code:
    url(r'^(?P<poll_id>\d+)/$',
    url(r'^(?P<poll_id>\d+)/results/$',
This is the correct code:
    url(r'^(?P<pk>\d+)/$',
    url(r'^(?P<pk>\d+)/results/$',
Note that *poll_id* was in the previous sections of the tutorial, but generic
views require _pk_. Also note that the tutorial is correct, and you posted the
correct code (from the tutorial.)
|
Google App Engine: Random error "ImportError: cannot import name nodes"
Question: I recently went to my App Engine site which has been running just fine and no
errors and got the big ugly error page. In the admin tool App Engine was
logging this error:
2012-09-06 10:53:43.938
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 189, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 227, in _LoadHandler
handler = __import__(path[0])
File "/base/data/home/apps/s~myapp/1.361555922666090832/main.py", line 3, in <module>
from controllers.routes import api_routes, web_routes, admin_routes
File "/base/data/home/apps/s~myapp/1.361555922666090832/controllers/routes/api_routes.py", line 3, in <module>
from ..api import api_obj_controller, api_app_controller, api_path_controller, api_user_controller
File "/base/data/home/apps/s~myapp/1.361555922666090832/controllers/api/api_obj_controller.py", line 2, in <module>
from ..handlers.api_handler import ApiRequestHandler
File "/base/data/home/apps/s~myapp/1.361555922666090832/controllers/handlers/api_handler.py", line 2, in <module>
from ..handlers.content_handler import BaseRequestHandler
File "/base/data/home/apps/s~myapp/1.361555922666090832/controllers/handlers/content_handler.py", line 3, in <module>
from webapp2_extras import jinja2
File "/base/data/home/apps/s~myapp/1.361555922666090832/webapp2_extras/jinja2.py", line 15, in <module>
import jinja2
File "/base/python27_runtime/python27_lib/versions/third_party/jinja2-2.6/jinja2/__init__.py", line 33, in <module>
from jinja2.environment import Environment, Template
File "/base/python27_runtime/python27_lib/versions/third_party/jinja2-2.6/jinja2/environment.py", line 13, in <module>
from jinja2 import nodes
ImportError: cannot import name nodes
W 2012-09-06 10:53:43.967
After handling this request, the process that handled this request was found to have handled too many sequential errors, and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you are likely returning errors continously from your application.
So it continued to get many errors, but then just started working again
without any new code being published. My concern, obviously, is: how can I
prevent this in the future? Why did this happen? And how is it possible that
it corrected itself without me deploying any code fixes? This error makes me
nervous that it will randomly happen for my customers.
**Edit:**
Also, the very first error I received was a DeadlineExceededError error, which
includes the message "This request caused a new process to be started for your
application, and thus caused your application code to be loaded for the first
time. This request may thus take longer and use more CPU than a typical
request for your application."
This message, together with what Tim commented, makes complete sense, but how
do I fix it so it doesn't happen again? What can I do so the
DeadlineExceededError doesn't essentially error out the entire site until a
new instance is spun up? (That is, if I'm understanding it correctly.)
Answer: If I understand correctly, you saw DeadlineExceededErrors in some loading
requests, but the instances which handled that request were not killed.
It is probably because you have some naked except blocks like:
try:
do something
except:
do other things
I think what happened to your instances is that one such except clause caught
the DeadlineExceededError and the code continued to run instead of propagating
it. If the DeadlineExceededError is thrown while importing jinja2, the jinja2
module can remain in a half-loaded state, so the instance keeps living in a
corrupt state.
If DeadlineExceededError is thrown correctly, the instance should be killed,
so that there won't be a corrupted instance like you have.
It's just my assumption, let me know if that's likely the case.
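If that is the case, the fix is to let the deadline error propagate. A sketch
of the pattern, using a stand-in exception class (in real code you would catch
`google.appengine.runtime.DeadlineExceededError`; the `guarded` helper is a
name made up here):

```python
class DeadlineExceededError(Exception):
    """Stand-in for google.appengine.runtime.DeadlineExceededError."""

def guarded(task):
    # Swallow ordinary errors, but re-raise deadline errors so the
    # runtime can recycle the instance instead of leaving it half-loaded.
    try:
        return task()
    except DeadlineExceededError:
        raise
    except Exception:
        return None
```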
|
Difference between command-line tar and shutil.make_archive?
Question: When I generate a tar.gz file using `tar cfz cl.tar.gz Sources` as compared to
the python `shutil.make_archive ( "py", "gztar", "Sources" )`, I get different
results:
$ tar -tf cl.tar.gz | head -3
algorithm/
aligned_storage.hpp
algorithm/cxx11/
$ tar -tf py.tar.gz | head -3
./algorithm/
./aligned_storage.hpp
./algorithm/cxx11/
Most of the time this makes no difference; but if you pass `--strip-
components` when un-tarring then the two archives behave differently.
I'd really like to get the same structure from the python code that I did from
the shell script, but I'm not quite sure how to do that. I'm using Python
2.7.3 on Mac OS 10.8.1, if that makes a difference.
Answer: The
[make_archive](http://docs.python.org/library/shutil.html#shutil.make_archive)
function takes two arguments, `root_dir` and `base_dir`. If you set base_dir
to something else, it gets rid of the `./` prefix. For example:
$ tree
.
├── blah
│ ├── file1
│ └── file2
$ python
>>> import shutil
>>> shutil.make_archive("test", "gztar", ".", "blah")
$ tar -tf test.tar.gz
blah/
blah/file1
blah/file2
This is limited to a single directory. If you want to add more than a single
directory or file like this, you will have to use the
[tarfile](http://docs.python.org/library/tarfile.html) module directly.
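A sketch of that multi-entry case with `tarfile`: the `arcname` argument
controls the stored name, so no `./` prefix appears (`make_tar` is a name made
up here):

```python
import tarfile

def make_tar(output, paths):
    # Add each path under its own name, mirroring "tar cfz out.tar.gz a b"
    with tarfile.open(output, "w:gz") as tar:
        for path in paths:
            tar.add(path, arcname=path)
```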
|
Python Binary File CTime 4 Bytes
Question: I am trying to parse a binary file for which I have the file format. I have
three 4-byte variables that are listed as "CTime". I read them in using
struct.unpack, and I want to convert each number to an actual day/time/year
value. The value I have seems to be the number of seconds passed since an
absolute beginning. I remember that in R there is some way to get the date
back from the seconds elapsed. Is there a way of doing this in Python? And
should I even be reading the CTime in as an integer?
I read: [Convert ctime to unicode and unicode to ctime
python](http://stackoverflow.com/questions/4750522/convert-ctime-to-unicode-
and-unicode-to-ctime-python) but its not quite what I'm trying to do.
Thanks in advance
Answer:
import datetime
import time
seconds_since_epoch = time.time()
print datetime.datetime.fromtimestamp(seconds_since_epoch)
#prints "2012-09-06 16:29:48.709000"
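Tying that back to the question's 4-byte fields, a sketch (the `'<I'` format
assumes a little-endian unsigned 32-bit count of seconds since the Unix epoch;
check your file spec for the actual byte order and epoch, and `read_ctime` is
a name made up here):

```python
import datetime
import struct

def read_ctime(raw4):
    # Unpack 4 bytes as a little-endian unsigned int, then interpret it
    # as seconds since 1970-01-01 UTC.
    (seconds,) = struct.unpack('<I', raw4)
    return datetime.datetime.utcfromtimestamp(seconds)
```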
|
chilkat decryptStringENC TypeError: in method 'CkRsa_decryptStringENC'
Question: I downloaded a trial version for 64-bit Python 2.7:
chilkat-9.3.2-python-2.7-x86_64-linux.tar.gz. I found a strange problem: when
I wrote one method (decrypRSA() as follow) which will decode given RSA
encrypted string, it works only if I call it directly in command line in
linux. It will difinitely throw exception when it was called in other method
to response an http request. I haven't found any trouble shoot for this issue
on website.
Here is the exception stack track:
File "/data/api_test.xxx.com/0/v0/share/auth/utils.py", line 301, in decrypRSA
return rsa.decryptStringENC(encodedText,False)
File "/usr/local/lib/python2.7/site-packages/chilkat.py", line 1319, in decryptStringENC
def decryptStringENC(self, *args): return _chilkat.CkRsa_decryptStringENC(self, *args)
TypeError: in method 'CkRsa_decryptStringENC', argument 2 of type 'char const *'
And here is the definition for decrypRSA() method:
@staticmethod
def decrypRSA(encodedText, publicKey):
print ('Utils.decrypRSA()-parameters: encodeText=%s, public key=%s' % (encodedText, publicKey,))
rsa = CkRsa()
success = rsa.UnlockComponent("30-day trial")
if (success != True):
logging.info("Utils.decrypRSA(): RSA component unlock failed")
return ''
# Import the public key into the RSA object:
success = rsa.ImportPublicKey(publicKey)
if (success != True):
logging.info("Utils.decrypRSA(): RSA failed to import public key: %s" % rsa.lastErrorText())
return ''
rsa.put_EncodingMode("base64")
rsa.put_LittleEndian(True)
return rsa.decryptStringENC(encodedText,False)
Answer: I don't know if it's something internal, or if the encodedText is something
you're passing in. It could be an issue with a unicode string being cast to
something other than 'char const *'. In this case, you could either encode the
string, or use a regular ascii string instead of a unicode string.
Saw this: <https://github.com/chokkan/simstring/issues/6>
|
YAML file url and script in GAE python
Question: I'm using Python 2.7 in Google App Engine and can't seem to get my app.yaml
file set up right.
My goal is that when I go to `http://localhost/carlos/`, carlos.py is
executed.
Here is my directory structure:
app\
\app.yaml
\main.py
\carlos.py
Here is my current app.yaml file:
application: myapp
version: 1
runtime: python27
api_version: 1
threadsafe: yes
handlers:
- url: /carlos/.*
script: carlos.app
- url: .*
script: main.app
and my carlos.py file is:
import webapp2
class MainHandler(webapp2.RequestHandler):
def get(self):
self.response.out.write("Hello, Carlos!")
app = webapp2.WSGIApplication([('/carlos', MainHandler)],
debug=True)
However all I'm getting now is a 404 Not Found error. Any thoughts?
Answer: I was able to determine the solution and figured I'd post it for anyone out
there.
In my carlos.py file I needed to replace:
app = webapp2.WSGIApplication([('/', MainHandler)],
debug=True)
with
app = webapp2.WSGIApplication([('/carlos/', MainHandler)],
debug=True)
It appears that the first argument to the WSGIApplication route refers to the
TOTAL path from your root web address, as opposed to the INCREMENTAL path
remaining after the app.yaml handler match.
I'm selecting this answer over what was provided by Littm because I'd like to
keep using WSGI
|
Get Python 2.7's 'json' to not throw an exception when it encounters random byte strings
Question: Trying to serialize a dict object into a json string using Python 2.7's `json`
(ie: `import json`).
Example:
json.dumps({
'property1': 'A normal string',
        'pickled_property': '\u0002]q\u0000U\u0012'
})
The object has some byte strings in it that are "pickled" data using
`cPickle`, so for json's purposes, they are basically random byte strings. I
was using django.utils's `simplejson` and this worked fine. But I recently
switched to Python 2.7 on google app engine and they don't seem to have
simplejson available anymore.
Now that I am using `json`, it throws an exception when it encounters bytes
that aren't part of UTF-8. The error that I'm getting is:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0: invalid start byte
It would be nice if it printed out a string of the character codes like the
debugging might do, ie: `\u0002]q\u0000U\u001201`. But I really don't much
care how it handles this data just as long as it doesn't throw an exception
and continues serializing the information that it does recognize.
How can I make this happen?
Thanks!
Answer: The [JSON spec](http://www.json.org/) defines strings in terms of unicode
characters. For this reason, the `json` module assumes that any `str` instance
it receives holds encoded unicode text. It will try UTF-8 as its default
encoding, which causes trouble when you have a string like the output of
`pickle.dumps` which may not be a valid UTF-8 sequence.
Fortunately, fixing the issue is easy. You simply need to tell the
`json.dumps` function what encoding to use instead of UTF-8. The following
will work, even though `my_bytestring` is not valid UTF-8 text:
import json, cPickle as pickle
my_data = ["some data", 1, 2, 3, 4]
my_bytestring = pickle.dumps(my_data, pickle.HIGHEST_PROTOCOL)
json_data = json.dumps(my_bytestring, encoding="latin-1")
I believe that any 8-bit encoding will work in place of the `latin-1` used
here (just be sure to use the same one for decoding later).
When you want to unpickle the JSON encoded data, you'll need to make a call to
`unicode.decode`, since `json.loads` always returns encoded strings as
`unicode` instances. So, to get the `my_data` list back out of `json_data`
above, you'd need this code:
my_unicode_data = json.loads(json_data)
my_new_bytestring = my_unicode_data.encode("latin-1") # equal to my_bytestring
my_new_data = pickle.loads(my_new_bytestring) # equal to my_data
|
Python Error: name 'admin' is not defined
Question: I am creating a Python application in Django for the first time. I know that I
must uncomment the admin tools in the urls.py, I have done that. I have also
added `autodiscover`. Everytime I try to add a new feature to the
administration panel, I get this error:
"NameError: name 'admin' is not defined"
Here is the code I am using in my model to add to the admin panel:
class ChoiceInline(admin.StackedInline):
model = Choice
extra = 3
class PollAdmin(admin.ModelAdmin):
fieldsets = [
(None, {'fields': ['question']}),
('Date information', {'fields': ['pub_date'], 'classes': ['collapse']}),
]
inlines = [ChoiceInline]
here is the code in the python terminal I am using
admin.site.register(Poll, PollAdmin)
and here is the code from my urls.py:
from django.conf.urls import patterns, include, url
# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
# Examples:
# url(r'^$', 'iFriends.views.home', name='home'),
# url(r'^iFriends/', include('iFriends.foo.urls')),
# Uncomment the admin/doc line below to enable admin documentation:
# url(r'^admin/doc/', include('django.contrib.admindocs.urls')),
# Uncomment the next line to enable the admin:
url(r'^admin/', include(admin.site.urls)),
)
Anyone have any idea why it cannot find the admin name?
**EDIT**
Here is my entire model file:
import datetime

from django.db import models
from django.utils import timezone
class Poll(models.Model):
question = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
def __unicode__(self):
return self.question
def was_published_recently(self):
return self.pub_date >= timezone.now() - datetime.timedelta(days=1)
class Choice(models.Model):
poll = models.ForeignKey(Poll)
choice_text = models.CharField(max_length=200)
votes = models.IntegerField()
def __unicode__(self):
return self.choice_text
#COMMENTED OUT UNTIL I FIX THE ADMIN NAME
from django.config import admin
class ChoiceInline(admin.StackedInline):
model = Choice
extra = 3
class PollAdmin(admin.ModelAdmin):
fieldsets = [
(None, {'fields': ['question']}),
('Date information', {'fields': ['pub_date'], 'classes': ['collapse']}),
]
inlines = [ChoiceInline]
#ADD THIS TO THE MAIN PYTHON FUNCTION
admin.site.register(Poll, PollAdmin)
Answer: `from django.config import admin` should be `from django.contrib import admin`
|
Issues when writing an xml file using xml.dom.minidom python
Question: I have an XML file, and a Python script adds a new node to it. I used the
xml.dom.minidom module to process the file. My XML file after processing with
the Python script is given below.
<?xml version="1.0" ?><Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PostBuildEvent>
    <Command>xcopy &quot;SourceLoc&quot; &quot;DestLoc&quot;</Command>
</PostBuildEvent>
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
<Import Project="project.targets"/></Project>
What I actually need is given below. The changes are a newline character
after the first line and before the last line, and also `&quot;` converted
back to `"`
<?xml version="1.0" ?>
<Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PostBuildEvent>
<Command>xcopy "SourceLoc" "DestLoc"</Command>
</PostBuildEvent>
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
<Import Project="project.targets"/>
</Project>
The python code i used is given below
    xmltree = xml.dom.minidom.parse(xmlFile)
    project = xmltree.documentElement
    newImport = xmltree.createElement("Import")
    newImport.setAttribute("Project", "project.targets")
    project.appendChild(newImport)
    xmltree.writexml(open(VcxProjFile, 'w'))
What should I update in my code to get the XML in the correct format?
Thanks,
Answer: From docs of minidom:
    Node.toprettyxml(indent="\t", newl="\n", encoding=None)
        Return a pretty-printed version of the document. indent specifies the indentation string and defaults to a tabulator; newl specifies the string emitted at the end of each line and defaults to \n.
That's all customisation you get from minidom.
Tried inserting a Text node as a root sibling for newline. Hope dies last. I
recommend using regular expressions from re module and inserting it manually.
As for removing SGML entities, there's apparently an undocumented function for
that in python standard library:
import HTMLParser
h = HTMLParser.HTMLParser()
unicode_string = h.unescape(string_with_entities)
Alternatively, you can do this manually, again using re, as all named entity
names and corresponding codepoints are inside the `htmlentitydefs` module.
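Putting the two pieces together, here is a minimal stdlib-only sketch. It uses
`xml.sax.saxutils.unescape` instead of the undocumented `HTMLParser` method
(only `&quot;` is mapped back, since that is the entity the question cares
about), and prepends the declaration on its own line:

```python
import xml.dom.minidom
from xml.sax.saxutils import unescape

doc = xml.dom.minidom.parseString(
    '<Project><PostBuildEvent>'
    '<Command>xcopy "SourceLoc" "DestLoc"</Command>'
    '</PostBuildEvent></Project>')

# toprettyxml supplies the newlines; minidom escapes quotes in text
# nodes as &quot;, so map them back afterwards
pretty = doc.documentElement.toprettyxml(indent='  ')
xml_text = '<?xml version="1.0" ?>\n' + unescape(pretty, {'&quot;': '"'})
```

Note that unescaping the serialized text is a cosmetic post-processing step;
the `&quot;` form was already well-formed XML.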
|
Python threading: can I sleep on two threading.Event()s simultaneously?
Question: If I have two `threading.Event()` objects, and wish to sleep until either one
of them is set, is there an efficient way to do that in python? Clearly I
could do something with polling/timeouts, but I would like to really have the
thread sleep until one is set, akin to how `select` is used for file
descriptors.
So in the following implementation, what would an efficient non-polling
implementation of `wait_for_either` look like?
a = threading.Event()
b = threading.Event()
wait_for_either(a, b)
Answer: Here is a non-polling non-excessive thread solution: modify the existing
`Event`s to fire a callback whenever they change, and handle setting a new
event in that callback:
import threading
def or_set(self):
self._set()
self.changed()
def or_clear(self):
self._clear()
self.changed()
def orify(e, changed_callback):
e._set = e.set
e._clear = e.clear
e.changed = changed_callback
e.set = lambda: or_set(e)
e.clear = lambda: or_clear(e)
def OrEvent(*events):
or_event = threading.Event()
def changed():
bools = [e.is_set() for e in events]
if any(bools):
or_event.set()
else:
or_event.clear()
for e in events:
orify(e, changed)
changed()
return or_event
Sample usage:
def wait_on(name, e):
print "Waiting on %s..." % (name,)
e.wait()
print "%s fired!" % (name,)
def test():
import time
e1 = threading.Event()
e2 = threading.Event()
or_e = OrEvent(e1, e2)
threading.Thread(target=wait_on, args=('e1', e1)).start()
time.sleep(0.05)
threading.Thread(target=wait_on, args=('e2', e2)).start()
time.sleep(0.05)
threading.Thread(target=wait_on, args=('or_e', or_e)).start()
time.sleep(0.05)
print "Firing e1 in 2 seconds..."
time.sleep(2)
e1.set()
time.sleep(0.05)
print "Firing e2 in 2 seconds..."
time.sleep(2)
e2.set()
time.sleep(0.05)
The result of which was:
Waiting on e1...
Waiting on e2...
Waiting on or_e...
Firing e1 in 2 seconds...
e1 fired!or_e fired!
Firing e2 in 2 seconds...
e2 fired!
This should be thread-safe. Any comments are welcome.
EDIT: Oh and here is your `wait_for_either` function, though the way I wrote
the code, it's best to make and pass around an `or_event`. Note that the
`or_event` shouldn't be set or cleared manually.
def wait_for_either(e1, e2):
OrEvent(e1, e2).wait()
|
Having issues running Django with Postgres
Question: I have installed Postgres and Django on my machine(Mac OS), but when I run the
Django server I get this error
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/django/core/management/commands/runserver.py", line 91, in inner_run
self.validate(display_num_errors=True)
File "/Library/Python/2.7/site-packages/django/core/management/base.py", line 266, in validate
num_errors = get_validation_errors(s, app)
File "/Library/Python/2.7/site-packages/django/core/management/validation.py", line 23, in get_validation_errors
from django.db import models, connection
File "/Library/Python/2.7/site-packages/django/db/__init__.py", line 40, in <module>
backend = load_backend(connection.settings_dict['ENGINE'])
File "/Library/Python/2.7/site-packages/django/db/__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/Library/Python/2.7/site-packages/django/db/utils.py", line 92, in __getitem__
backend = load_backend(db['ENGINE'])
File "/Library/Python/2.7/site-packages/django/db/utils.py", line 24, in load_backend
return import_module('.base', backend_name)
File "/Library/Python/2.7/site-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/Library/Python/2.7/site-packages/django/db/backends/postgresql_psycopg2/base.py", line 13, in <module>
from django.db.backends.postgresql_psycopg2.creation import DatabaseCreation
File "/Library/Python/2.7/site-packages/django/db/backends/postgresql_psycopg2/creation.py", line 1, in <module>
import psycopg2.extensions
File "/Library/Python/2.7/site-packages/psycopg2/__init__.py", line 67, in <module>
from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: dlopen(/Library/Python/2.7/site-packages/psycopg2/_psycopg.so, 2): Library not loaded:
@loader_path/../lib/libssl.dylib
Referenced from: /usr/lib/libpq.5.dylib
Reason: Incompatible library version: libpq.5.dylib requires version 1.0.0 or later, but libssl.0.9.8.dylib provides version
0.9.8
What am I missing here please.
Answer: There's a very similar question (with answers) there: [python pip install
psycopg2 install error](http://stackoverflow.com/questions/11538249/python-
pip-install-psycopg2-install-error).
Have you seen it ?
|
Various errors while parsing JSON in Python
Question: Attempting to parse json from a url requiring login. Including all my code
here as I'm not sure where the error is.
try: import simplejson as json
except ImportError: import json
import urllib2
username = 'user'
password = '1234'
url = "https://www.blah.com/someplace"
# set up the username/password/url request
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "https://www.blah.com", username, password)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
request = urllib2.Request(url)
response = opener.open(request)
# option 1
json_object = json.loads(str(response))
#option 2
json_object = json.loads(response)
If I run the code with option 1 (commenting out option 2), I get this error:
Traceback (most recent call last):
File "jsontest.py", line 22, in <module>
json_object = json.loads(str(request))
File "/usr/lib/python2.7/dist-packages/simplejson/__init__.py", line 413, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 402, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 420, in raw_decode
raise JSONDecodeError("No JSON object could be decoded", s, idx)
simplejson.decoder.JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)
If I run option 2:
Traceback (most recent call last):
File "jsontest.py", line 23, in <module>
json_object = json.loads(request)
File "/usr/lib/python2.7/dist-packages/simplejson/__init__.py", line 413, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 402, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
My sample JSON is valid as far as I can tell:
>
> {"set1":[{"data1":"411","data2":"2033","data3":"1","data4":"43968077","data5":"217","data6":"106828","data7":[]}],
> "set2":{"data8":"411","data9":"2033","data10":"43968077","data11":"217223360","data12":"106828"}}
simplejson version = 2.3.2, Python 2.7.3
Very new to all this so any pointers would be very helpful.
Answer: You want to decode the _response_ , not the request:
json_object = json.load(response)
The response is a file-like object, so you can use `.load()` to have the json
library read it directly.
Alternatively (at the cost of some temporary memory use), use the `.loads()`
function with the fully read response:
json_object = json.loads(response.read())
Note that python 2.7 already includes the simplejson library, renamed to
[`json`](http://docs.python.org/library/json.html):
import json
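The distinction between the two calls, illustrated with an in-memory stream
standing in for the HTTP response (the JSON text is trimmed from the
question's sample):

```python
import io
import json

raw = '{"set1": [{"data1": "411", "data7": []}], "set2": {"data8": "411"}}'

# json.load() reads a file-like object; json.loads() takes a string
from_stream = json.load(io.StringIO(raw))
from_string = json.loads(raw)
```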
|
Use simplejson with Python 2.7 on GAE
Question: Having trouble using `simplejson` with google app engine running Python 2.7.
Just switched from Python 2.5 on the Master/Slave datastore to Python 2.7 on
the High Replication Datastore. This used to work:
from django.utils import simplejson
Now in order to use json, I can do this:
import json
However, I have a need to use `simplejson`. This works on the localhost
debugger, but not on the server:
import simplejson
How can I use this library when running Python 2.7 on GAE?
Thanks!
Answer: I think json and simplejson are now compatible. If you've got code using
simplejson, you could try
import json as simplejson
|
Hide Python rocket dock icon when using ScriptingBridge
Question: I'm retrieving the current track playing in iTunes, Mac OS X, with
ScriptingBridge.
from ScriptingBridge import SBApplication
iTunes = SBApplication.applicationWithBundleIdentifier_("com.apple.iTunes")
print iTunes.currentTrack().name()
But when I run that last line, actually getting the track name, an application
appears in the dock, and doesn't leave until I close my Python program,
whether I'm running it in the REPL or as a script. The icon is this one, at
least on my machine:
/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/Resources/PythonInterpreter.icns
The script works great, and I can get all the info I need from iTunes via SB.
I would just like to keep the icon from popping up. Why does that particular
method call invoke a dock icon?
Answer: A hacky way to get it off the dock is to prevent `Python.app` from ever
showing up on the dock:
Edit
`/System/Library/Frameworks/Python.framework/Versions/Current/Resources/Python.app/Contents/Info.plist`
and add this key-value pair to the main `<dict>` element:
<key>LSUIElement</key><string>1</string>
I wish there were another way to do this, because this change is global — no
Python script (using the system Python) will ever show up on the dock with
this setting. Since posting this question I have set my LSUIElement back to 0,
because there's no other way to grab, for example, a matplotlib-produced
window, unless it has an icon in the dock.
|
Implementing Home Grown Plugins for a Python Service
Question: I am writing a simple scheduling service. I don't want to hard-code all of the
tasks it can schedule and instead would like to support plugins that can be
dropped in a folder and loaded dynamically at runtime.
My plan is to have a JSON file (or any configuration file) that maps a task
name to the location of a Python file (a module) which will have a class
called `Plugin`. Pretty simple I thought. When someone schedules a task to
run, they pass the task name and the time to run it. When the time elapses,
the plugin is loaded (or reloaded) and is ran with any additional arguments
passed to the scheduler.
I have been looking at the `imp` module to see how to load modules at runtime.
I am not sure whether I want to list plugins using their physical location
(file system path) or to use their module names like you'd see in a `import`
statement. It seems `imp` wants to use physical location.
I got two different versions of this code "working". Here is one that uses
`importlib`:
pluginName = self.__pluginLookup[pluginName]
module = import_module(pluginName)
module = reload(module) # force reload
plugin = module.Plugin()
return plugin
This is one I wrote using `imp`:
path = self.__pluginLookup[pluginName]
path, moduleName = split(path)
moduleName, extension = splitext(moduleName)
file, path, description = find_module(moduleName, [path])
with file:
module = load_module(moduleName, file, path, description)
plugin = module.Plugin()
return plugin
The problem I am running into is handling dependencies. If I have a
`plugin.py` file that depends on a `dependency.py` file in the same folder,
saying `import dependency` doesn't seem to work. Instead, it looks up the
dependency from the `PYTHONPATH`.
How can I make the imports relative to the plugins themselves?
Answer: You could append `path` to `sys.path`:
import sys
sys.path.append(path)
where `path` is the directory containing the dependency.py.
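A minimal sketch of that idea. The file layout and the `Plugin` class name
are the ones described in the question; `__import__` is used so the snippet
stays version-agnostic:

```python
import os
import sys

def load_plugin(path):
    # Put the plugin's own folder on sys.path so that a plain
    # `import dependency` inside plugin.py resolves to a sibling file
    plugin_dir = os.path.dirname(os.path.abspath(path))
    if plugin_dir not in sys.path:
        sys.path.insert(0, plugin_dir)
    module_name = os.path.splitext(os.path.basename(path))[0]
    module = __import__(module_name)
    return module.Plugin()
```

One caveat: if two plugin folders both contain a `dependency.py`, whichever
folder sits earlier on `sys.path` wins, so unique dependency names (or
per-plugin packages) are safer.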
|
Vim code completion for webapp2,flask,etc
Question: I'm using [john andersons](http://sontek.net/blog/detail/turning-vim-into-a-
modern-python-ide#tab-completion-and-documentation) wonderful vim set-up. It
has code completion for python enabled. I'm curious if such a solution exists
for say webapp2,flask
It's possible i'm not describing this very well so here is an example
when i type:
import math
math.a
a host of options appears (acos, acosh, etc...)
I would like that same functionally with
import webapp2
webapp2.
or
import flask
flask.
Does this exist? I searched and couldn't find. Is it build-able?
Answer: I use [python-mode plugin](https://github.com/klen/python-mode), it provides
this kind of smarter python completion.
|
building Python from source with zlib support
Question: When building Python 3.2.3 from source on Ubuntu 12.04, the zlib module is not
available.
I downloaded the official source distribution from python.org, and attempted
to build and install it with the following commands.
tar xfa Python3.2.3.tar.bz2
cd Python-3.2.3
./configure --prefix=/opt/python3.2
make
sudo make install
The make command output includes the following.
Python build finished, but the necessary bits to build these modules were not found:
_curses _curses_panel _dbm
_gdbm _sqlite3 _ssl
_tkinter bz2 readline
zlib
After running `make install` and starting the interpreter, the `zlib` module
cannot be imported.
I confirmed that the `zlib1g-dev` package is installed on my system.
I also found [this similar
question](http://stackoverflow.com/questions/6169522/no-module-named-zlib),
which suggests adding the `--with-zlib` flag to the `./configure` command.
However, that returns an error that it is an unrecognized option and has no
effect.
Answer: I had a similar problem on CentOS 6.3 and python 3.2.3
I solved it by:
Edit `/Modules/Setup` and uncomment the line:
zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
change to directory /Modules/zlib:
./configure
make
sudo make install
then compiled my python3.2 source.
and was then able to test import zlib and it all worked fine :)
|
Regex in python not taking the specified data in td element
Question: I'm using regex in python to grab the following data from HTML in this line:
<td xyz="123"><a href="blah.html">This is a line</a></td>
The problem is that in the above td line, the `xyz="123"` and `<a href>` are
optional, so it does not appear in all the table cells. So I can have tds like
this:
<tr><td>New line</td></tr>
<tr><td xyz="123"><a href="blah.html">CaptureThis</a></td></tr>
I wrote regex like this:
<tr><td x?y?z?=?"?(\d\d\d)?"?>?<?a?.*?>?(.*?)?<?/?a?>?</td></tr>
I basically want to capture the "123" data (if present) and the "CaptureThis"
data from all tds in each tr.
This regex is not working, and is skipping the the lines without "xyz" data.
I know using regex is not the apt solution here, but was wondering if it could
be done with regex alone.
Answer: You are using a regular expression, and matching XML with such expressions get
too complicated, too fast.
Use a HTML parser instead, Python has several to choose from:
* [ElementTree](http://docs.python.org/library/xml.etree.elementtree.html) is part of the standard library
* [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) is a popular 3rd party library
* [lxml](http://lxml.de/) is a fast and feature-rich C-based library.
ElementTree example:
from xml.etree import ElementTree
tree = ElementTree.parse('filename.html')
for elem in tree.findall('tr'):
print ElementTree.tostring(elem)
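Applied to the question's rows, the parser yields both the optional attribute
and the cell text directly (parsing from a string here for brevity):

```python
from xml.etree import ElementTree

rows = ElementTree.fromstring(
    '<table>'
    '<tr><td>New line</td></tr>'
    '<tr><td xyz="123"><a href="blah.html">CaptureThis</a></td></tr>'
    '</table>')

results = []
for td in rows.iter('td'):
    link = td.find('a')
    text = link.text if link is not None else td.text
    results.append((td.get('xyz'), text))  # xyz is None when absent
```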
|
OpenID login no longer working
Question: We have 2 AppEngine Python apps which do federated login via the open id api
(create_login_url).
We had the login working for some time, but currently receive HTTP 204 on the
/_ah/login_redir.
Had there been any recent changes to the flow or API?
Answer: After some analysis, we found out, that some of the OpenID providers were
still working and that the remaining problems were caused by different issues
(like changed endpoint urls, or not having a correct registration with the
endpoint).
We still have a problem with myspace.com (which is not so important to us for
the moment).
Thanks for your help.
|
MySQL Query error when run within Python but not when run directly
Question: I am passing `'2012-09-10 00:00:00-05:00'` to a MySQL query. This value is
retrieved using the [pytz module](http://pytz.sourceforge.net/) for Python.
import pytz
class MyClass():
    def __init__(self):
        self.tz = pytz.timezone(settings.TIME_ZONE)
        todaystart = self.tz.localize(datetime.now(self.tz).replace(hour=0, minute=0, second=0, microsecond=0, tzinfo=None), is_dst=None).astimezone(self.tz)
The MySQL query is this, after `todaystart` has been substituted in:
SELECT * FROM mytable WHERE created > UNIX_TIMESTAMP(STR_TO_DATE('2012-09-10 00:00:00-05:00','%Y-%m-%d %k:%i:%s') - INTERVAL 1 DAY);
If I execute this query directly, it returns data as expected. If I put this
query into code it gives this error: `Warning: Truncated incorrect datetime
value: '2012-09-09 00:00:00-05:00'`
The code I am using is this (within Django):
query = """SELECT * FROM mytable WHERE created > UNIX_TIMESTAMP(STR_TO_DATE('2012-09-10 00:00:00-05:00','%Y-%m-%d %k:%i:%s') - INTERVAL 1 DAY);"""
myCursor = connections[system_db].cursor()
myCursor.execute(query) # Dies on this statement
results = myCursor.fetchall()
I didn't see an offset [format](http://dev.mysql.com/doc/refman/5.1/en/date-
and-time-functions.html#function_date-format) string in the MySQL docs for
str_to_date. I would prefer to keep that offset, as the data is returned for a
3rd party system and by keeping it in place I don't have to do any logic
between that return and executing the query with the returned date. However, I
don't think it has to do with the offset since it works if I run it directly.
What have I done incorrectly that would cause this `Warning` to appear when
Python runs the query against MySQL?
Answer: Marc B had it right.
When executed directly, you are getting the warnings, but probably not
noticing:
mysql> SELECT * FROM mytable WHERE created > UNIX_TIMESTAMP(STR_TO_DATE('2012-09-10 00:00:00-05:00','%Y-%m-%d %k:%i:%s') - INTERVAL 1 DAY);
Empty set, 3 warnings (0.00 sec)
mysql> show warnings;
+---------+------+----------------------------------------------------------------------+
| Level | Code | Message |
+---------+------+----------------------------------------------------------------------+
| Warning | 1292 | Truncated incorrect datetime value: '2012-09-10 00:00:00-05:00' |
| Warning | 1292 | Truncated incorrect datetime value: '2012-09-10 00:00:00-05:00' |
| Warning | 1292 | Incorrect datetime value: '1347148800' for column 'created' at row 1 |
+---------+------+----------------------------------------------------------------------+
3 rows in set (0.00 sec)
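If the extra strictness on the Python side is unwanted, the server-side
warnings can be filtered with the standard `warnings` machinery. This is a
generic sketch — MySQLdb surfaces server warnings through this mechanism, and
the plain `Warning` base category is used here only to keep the example
self-contained (in real code you would filter `MySQLdb.Warning`):

```python
import warnings

def execute_quietly(run_query):
    # Suppress Warning-category messages (e.g. "Truncated incorrect
    # datetime value") for the duration of the query only
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", Warning)
        return run_query()
```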
|
Edgelines vanish in mplot3d surf when facecolors are specified
Question: I have produced the following surface plot in matlab:

and I need to create this in .NET instead. I'm hoping to use IronPython to do
this. But first I am just trying to create the plot in Python (PyLab). This is
what I have so far:

Please have a look at my code and tell me how I can get python to show the
black edge lines. It appear that these disappear when I add the
`facecolors=UWR(heatmap)` property to the `surf(...)`. Is this a bug in
mplot3d or is it by design? Either way how do I get the lines back?
Here is my code (Apologies for the huge data matrices):
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
import matplotlib
from pylab import *
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
#Sample colour data
heatmap = np.array([(0.304, 0.288, 0.284, 0.26, 0.248, 0.224, 0.204, 0.184, 0.18, 0.18, 0.156, 0.148, 0.144, 0.136, 0.136, 0.128, 0.124, 0.124, 0.128, 0.124, 0.124),
(0.356, 0.348, 0.332, 0.328, 0.308, 0.292, 0.288, 0.272, 0.252, 0.232, 0.216, 0.204, 0.16, 0.148, 0.152, 0.148, 0.132, 0.124, 0.124, 0.132, 0.144),
(0.396, 0.384, 0.372, 0.36, 0.34, 0.316, 0.312, 0.312, 0.3, 0.272, 0.244, 0.236, 0.216, 0.192, 0.176, 0.168, 0.148, 0.148, 0.156, 0.156, 0.16),
(0.452, 0.444, 0.428, 0.408, 0.388, 0.376, 0.364, 0.348, 0.336, 0.336, 0.3, 0.284, 0.264, 0.256, 0.24, 0.244, 0.212, 0.2, 0.22, 0.224, 0.224),
(0.488, 0.476, 0.464, 0.444, 0.424, 0.4, 0.4, 0.384, 0.38, 0.372, 0.356, 0.324, 0.312, 0.312, 0.312, 0.312, 0.308, 0.292, 0.304, 0.332, 0.344),
(0.492, 0.492, 0.48, 0.468, 0.452, 0.432, 0.424, 0.412, 0.404, 0.396, 0.396, 0.392, 0.376, 0.356, 0.356, 0.36, 0.368, 0.372, 0.392, 0.404, 0.42),
(0.5, 0.5, 0.5, 0.484, 0.46, 0.452, 0.444, 0.436, 0.44, 0.44, 0.44, 0.452, 0.44, 0.436, 0.424, 0.42, 0.404, 0.44, 0.452, 0.468, 0.5),
(0.484, 0.48, 0.46, 0.444, 0.44, 0.44, 0.44, 0.44, 0.444, 0.44, 0.456, 0.456, 0.46, 0.448, 0.448, 0.448, 0.436, 0.456, 0.468, 0.492, 0.492),
(0.405737704918033, 0.401639344262295, 0.409836065573771, 0.418032786885246, 0.434426229508197, 0.438524590163934, 0.438524590163934, 0.44672131147541, 0.454918032786885, 0.471311475409836, 0.467213114754098, 0.479508196721311, 0.487704918032787, 0.487704918032787, 0.479508196721311, 0.483606557377049, 0.495901639344262, 0.516393442622951, 0.520491803278689, 0.532786885245902, 0.536885245901639),
(0.320987654320988, 0.329218106995885, 0.349794238683128, 0.362139917695473, 0.374485596707819, 0.395061728395062, 0.42798353909465, 0.440329218106996, 0.465020576131687, 0.477366255144033, 0.48559670781893, 0.493827160493827, 0.506172839506173, 0.518518518518519, 0.51440329218107, 0.518518518518519, 0.547325102880658, 0.555555555555556, 0.555555555555556, 0.584362139917696, 0.580246913580247),
(0.282700421940928, 0.29535864978903, 0.30379746835443, 0.320675105485232, 0.337552742616034, 0.354430379746835, 0.383966244725738, 0.434599156118144, 0.464135021097046, 0.485232067510549, 0.493670886075949, 0.514767932489452, 0.527426160337553, 0.535864978902954, 0.544303797468354, 0.561181434599156, 0.594936708860759, 0.59915611814346, 0.590717299578059, 0.60337552742616, 0.607594936708861),
(0.230434782608696, 0.256521739130435, 0.273913043478261, 0.304347826086957, 0.334782608695652, 0.360869565217391, 0.373913043478261, 0.408695652173913, 0.469565217391304, 0.504347826086957, 0.521739130434783, 0.539130434782609, 0.552173913043478, 0.560869565217391, 0.578260869565217, 0.6, 0.617391304347826, 0.61304347826087, 0.61304347826087, 0.617391304347826, 0.643478260869565),
(0.161137440758294, 0.175355450236967, 0.218009478672986, 0.28436018957346, 0.327014218009479, 0.341232227488152, 0.388625592417062, 0.436018957345972, 0.488151658767773, 0.516587677725119, 0.549763033175356, 0.573459715639811, 0.578199052132701, 0.592417061611374, 0.611374407582938, 0.649289099526066, 0.658767772511848, 0.658767772511848, 0.677725118483412, 0.66824644549763, 0.691943127962085),
(0.224719101123596, 0.269662921348315, 0.303370786516854, 0.365168539325843, 0.382022471910112, 0.404494382022472, 0.443820224719101, 0.48876404494382, 0.5, 0.556179775280899, 0.567415730337079, 0.612359550561798, 0.612359550561798, 0.629213483146067, 0.634831460674157, 0.646067415730337, 0.662921348314607, 0.685393258426966, 0.707865168539326, 0.707865168539326, 0.724719101123596),
(0.333333333333333, 0.363636363636364, 0.401515151515152, 0.431818181818182, 0.446969696969697, 0.46969696969697, 0.515151515151515, 0.53030303030303, 0.553030303030303, 0.583333333333333, 0.613636363636364, 0.621212121212121, 0.636363636363636, 0.643939393939394, 0.651515151515152, 0.651515151515152, 0.666666666666667, 0.666666666666667, 0.674242424242424, 0.681818181818182, 0.696969696969697),
(0.373626373626374, 0.406593406593407, 0.483516483516484, 0.505494505494506, 0.527472527472528, 0.54945054945055, 0.571428571428571, 0.582417582417583, 0.593406593406593, 0.637362637362637, 0.659340659340659, 0.681318681318681, 0.692307692307692, 0.692307692307692, 0.703296703296703, 0.692307692307692, 0.703296703296703, 0.736263736263736, 0.736263736263736, 0.703296703296703, 0.67032967032967),
(0.484375, 0.5625, 0.578125, 0.578125, 0.578125, 0.625, 0.625, 0.640625, 0.65625, 0.671875, 0.703125, 0.734375, 0.75, 0.734375, 0.734375, 0.75, 0.734375, 0.640625, 0.65625, 0.625, 0.609375),
(0.617647058823529, 0.617647058823529, 0.617647058823529, 0.617647058823529, 0.617647058823529, 0.588235294117647, 0.588235294117647, 0.588235294117647, 0.617647058823529, 0.647058823529412, 0.676470588235294, 0.705882352941177, 0.676470588235294, 0.705882352941177, 0.705882352941177, 0.735294117647059, 0.705882352941177, 0.705882352941177, 0.735294117647059, 0.705882352941177, 0.647058823529412),
(0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.6, 0.5, 0.5, 0.5, 0.5, 0.5, 0.4, 0.4, 0.4, 0.4, 0.3, 0.3, 0.3, 0.4, 0.4)
]);
#Sample Z data
volatility = np.array([(0.2964396, 0.28628612, 0.27630128, 0.26648508, 0.25683752, 0.2473586, 0.23804832, 0.22890668, 0.21993368, 0.21112932, 0.2024936, 0.19402652, 0.18572808, 0.17759828, 0.16963712, 0.1618446, 0.15422072, 0.14676548, 0.13947888, 0.13236092, 0.1254116),
(0.2979793, 0.287974509333333, 0.278154444, 0.268519104, 0.259068489333333, 0.2498026, 0.240721436, 0.231824997333333, 0.223113284, 0.214586296, 0.206244033333333, 0.198086496, 0.190113684, 0.182325597333333, 0.174722236, 0.1673036, 0.160069689333333, 0.153020504, 0.146156044, 0.139476309333333, 0.1329813),
(0.299519, 0.289662898666667, 0.280007608, 0.270553128, 0.261299458666667, 0.2522466, 0.243394552, 0.234743314666667, 0.226292888, 0.218043272, 0.209994466666667, 0.202146472, 0.194499288, 0.187052914666667, 0.179807352, 0.1727626, 0.165918658666667, 0.159275528, 0.152833208, 0.146591698666667, 0.140551),
(0.3010587, 0.291351288, 0.281860772, 0.272587152, 0.263530428, 0.2546906, 0.246067668, 0.237661632, 0.229472492, 0.221500248, 0.2137449, 0.206206448, 0.198884892, 0.191780232, 0.184892468, 0.1782216, 0.171767628, 0.165530552, 0.159510372, 0.153707088, 0.1481207),
(0.3025984, 0.293039677333333, 0.283713936, 0.274621176, 0.265761397333333, 0.2571346, 0.248740784, 0.240579949333333, 0.232652096, 0.224957224, 0.217495333333333, 0.210266424, 0.203270496, 0.196507549333333, 0.189977584, 0.1836806, 0.177616597333333, 0.171785576, 0.166187536, 0.160822477333333, 0.1556904),
(0.3041381, 0.294728066666667, 0.2855671, 0.2766552, 0.267992366666667, 0.2595786, 0.2514139, 0.243498266666667, 0.2358317, 0.2284142, 0.221245766666667, 0.2143264, 0.2076561, 0.201234866666667, 0.1950627, 0.1891396, 0.183465566666667, 0.1780406, 0.1728647, 0.167937866666667, 0.1632601),
(0.3056778, 0.296416456, 0.287420264, 0.278689224, 0.270223336, 0.2620226, 0.254087016, 0.246416584, 0.239011304, 0.231871176, 0.2249962, 0.218386376, 0.212041704, 0.205962184, 0.200147816, 0.1945986, 0.189314536, 0.184295624, 0.179541864, 0.175053256, 0.1708298),
(0.3008828768, 0.292424567021333, 0.284187283338667, 0.276171025752, 0.268375794261333, 0.260801588866667, 0.253448409568, 0.246316256365333, 0.239405129258667, 0.232715028248, 0.226245953333333, 0.219997904514667, 0.213970881792, 0.208164885165333, 0.202579914634667, 0.1972159702, 0.192073051861333, 0.187151159618667, 0.182450293472, 0.177970453421333, 0.173711639466667),
(0.2960879536, 0.288432678042667, 0.280954302677333, 0.273652827504, 0.266528252522667, 0.259580577733333, 0.252809803136, 0.246215928730667, 0.239798954517333, 0.233558880496, 0.227495706666667, 0.221609433029333, 0.215900059584, 0.210367586330667, 0.205012013269333, 0.1998333404, 0.194831567722667, 0.190006695237333, 0.185358722944, 0.180887650842667, 0.176593478933333),
(0.2912930304, 0.284440789064, 0.277721322016, 0.271134629256, 0.264680710784, 0.2583595666, 0.252171196704, 0.246115601096, 0.240192779776, 0.234402732744, 0.22874546, 0.223220961544, 0.217829237376, 0.212570287496, 0.207444111904, 0.2024507106, 0.197590083584, 0.192862230856, 0.188267152416, 0.183804848264, 0.1794753184),
(0.2864981072, 0.280448900085333, 0.274488341354667, 0.268616431008, 0.262833169045333, 0.257138555466667, 0.251532590272, 0.246015273461333, 0.240586605034667, 0.235246584992, 0.229995213333333, 0.224832490058667, 0.219758415168, 0.214772988661333, 0.209876210538667, 0.2050680808, 0.200348599445333, 0.195717766474667, 0.191175581888, 0.186722045685333, 0.182357157866667),
(0.281703184, 0.276457011106667, 0.271255360693333, 0.26609823276, 0.260985627306667, 0.255917544333333, 0.25089398384, 0.245914945826667, 0.240980430293333, 0.23609043724, 0.231244966666667, 0.226444018573333, 0.22168759296, 0.216975689826667, 0.212308309173333, 0.207685451, 0.203107115306667, 0.198573302093333, 0.19408401136, 0.189639243106667, 0.185238997333333),
(0.2769082608, 0.272465122128, 0.268022380032, 0.263580034512, 0.259138085568, 0.2546965332, 0.250255377408, 0.245814618192, 0.241374255552, 0.236934289488, 0.23249472, 0.228055547088, 0.223616770752, 0.219178390992, 0.214740407808, 0.2103028212, 0.205865631168, 0.201428837712, 0.196992440832, 0.192556440528, 0.1881208368),
(0.279132175333333, 0.27446485122, 0.26979833968, 0.265132640713333, 0.26046775432, 0.2558036805, 0.251140419253333, 0.24647797058, 0.24181633448, 0.237155510953333, 0.2324955, 0.22783630162, 0.223177915813333, 0.21852034258, 0.21386358192, 0.209207633833333, 0.20455249832, 0.19989817538, 0.195244665013333, 0.19059196722, 0.185940082),
(0.281356089866667, 0.276464580312, 0.271574299328, 0.266685246914667, 0.261797423072, 0.2569108278, 0.252025461098667, 0.247141322968, 0.242258413408, 0.237376732418667, 0.23249628, 0.227617056152, 0.222739060874667, 0.217862294168, 0.212986756032, 0.208112446466667, 0.203239365472, 0.198367513048, 0.193496889194667, 0.188627493912, 0.1837593272),
(0.2835800044, 0.278464309404, 0.273350258976, 0.268237853116, 0.263127091824, 0.2580179751, 0.252910502944, 0.247804675356, 0.242700492336, 0.237597953884, 0.23249706, 0.227397810684, 0.222300205936, 0.217204245756, 0.212109930144, 0.2070172591, 0.201926232624, 0.196836850716, 0.191749113376, 0.186663020604, 0.1815785724),
(0.285803918933333, 0.280464038496, 0.275126218624, 0.269790459317333, 0.264456760576, 0.2591251224, 0.253795544789333, 0.248468027744, 0.243142571264, 0.237819175349333, 0.23249784, 0.227178565216, 0.221861350997333, 0.216546197344, 0.211233104256, 0.205922071733333, 0.200613099776, 0.195306188384, 0.190001337557333, 0.184698547296, 0.1793978176),
(0.288027833466667, 0.282463767588, 0.276902178272, 0.271343065518667, 0.265786429328, 0.2602322697, 0.254680586634667, 0.249131380132, 0.243584650192, 0.238040396814667, 0.23249862, 0.226959319748, 0.221422496058667, 0.215888148932, 0.210356278368, 0.204826884366667, 0.199299966928, 0.193775526052, 0.188253561738667, 0.182734073988, 0.1772170628),
(0.290251748, 0.28446349668, 0.27867813792, 0.27289567172, 0.26711609808, 0.261339417, 0.25556562848, 0.24979473252, 0.24402672912, 0.23826161828, 0.2324994, 0.22674007428, 0.22098364112, 0.21523010052, 0.20947945248, 0.203731697, 0.19798683408, 0.19224486372, 0.18650578592, 0.18076960068, 0.175036308)
]);
#Create X and Y data
x = np.arange(80, 121, 2)
y = np.arange(3, 12.01, 0.5)
X, Y = np.meshgrid(x, y)
#Create a color map that goes from blue to white to red
cdict = {'red': ((0, 0, 0), #i.e. at value 0, red component is 0. First parameter is the value, second is the color component. Ignore the third parameter, it is for discontinuities.
(0.5, 1, 1), # at value 0.5, red component is 1.
(1, 1, 1)), # at value 1, red component is 1
'green': ((0, 0, 0),
(0.5, 1, 1),
(1, 0, 0)),
'blue': ((0, 1, 1),
(0.5, 1, 1),
(1, 0, 0))}
#Make the color map and register it
cmap1 = matplotlib.colors.LinearSegmentedColormap('UWR',cdict,256)
cm.register_cmap(name='UWR', cmap=cmap1)
UWR = cm.get_cmap('UWR')
#Create a variable for the colorbar
m = cm.ScalarMappable(cmap=UWR)
m.set_array(heatmap)
#Create the surface, multiply vol by 100 so axis label can be in units of %.
surf = ax.plot_surface(X, Y, volatility*100, rstride=1, cstride=1, facecolors=UWR(heatmap), linewidth=1, shade=False, edgecolors='#000000', antialiased=True)
#Axis limits
ax.set_xlim3d(80, 120)
ax.set_ylim3d(0, 12)
#Tick locations. 7 ticks for Y axis, 5 ticks for X. For Z axis maximum 6 ticks, only allow integers and only in steps of either 2, 5 or 10.
ax.yaxis.set_major_locator(LinearLocator(7))
ax.xaxis.set_major_locator(LinearLocator(5))
ax.zaxis.set_major_locator(MaxNLocator(6, integer=True, steps=[2, 5, 10]))
#Format X and Z axis tick labels as percentages and as integers
ax.xaxis.set_major_formatter(FormatStrFormatter('%d%%'))
ax.zaxis.set_major_formatter(FormatStrFormatter('%d%%'))
#Create a color bar with 11 ticks
cbar = plt.colorbar(m, ticks=LinearLocator(11), shrink=0.85)
#Make the tick label go from 0 to 1 in steps of 0.1
cbar.ax.set_yticklabels(np.arange(0, 1.01, 0.1))
ax.xaxis.set_label_text("Moneyness (Strike / Future)")
ax.yaxis.set_label_text("Term (Months)")
ax.zaxis.set_label_text("Implied Volatility")
cbar.ax.yaxis.set_label_text("Percentile of current volatility compared with historical levels")
#Set view angle
ax.view_init(20, -40)
#Show the plot
plt.show()
Answer: You can draw the grid lines on the surface plot by adding the keyword argument
`edgecolors`:
# Add black lines in the edges
surf = ax.plot_surface(X, Y, volatility, rstride=1, cstride=1, facecolors= UWR(heatmap), linewidth=1, shade=False, edgecolors='#000000')
Directions on how to format the axis tick labels and locations are here:
<http://matplotlib.sourceforge.net/api/ticker_api.html#matplotlib.ticker.FormatStrFormatter>
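For instance, a formatter from `matplotlib.ticker` can be exercised on its own, without a figure (a minimal sketch using the standard API):

```python
from matplotlib.ticker import FormatStrFormatter

# '%d%%' renders each tick value as an integer followed by a percent sign,
# as used for the X and Z axes in the question's code
fmt = FormatStrFormatter('%d%%')
print(fmt(80))   # 80%
print(fmt(120))  # 120%
```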
* * *
**Solved - Moved up from comments:**
Adding `surf.set_edgecolor('k')` after `plot_surface` overrides the edge color.
I think that may be related to the fact that `facecolors` is an option of
`plot_surface`, while `edgecolors` is an option of `Poly3DCollection`. More
details
[here](http://matplotlib.sourceforge.net/mpl_toolkits/mplot3d/tutorial.html#mpl_toolkits.mplot3d.Axes3D.plot_surface).
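A minimal, self-contained sketch of that override (dummy data and the headless Agg backend, not the question's actual heatmap):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y = np.meshgrid(np.arange(5), np.arange(5))
Z = X * Y  # dummy surface data

# facecolors would normally come from the heatmap; omitted here for brevity
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=1, shade=False)
surf.set_edgecolor('k')  # override the edge color after plot_surface
```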
|
Django Cache cache.set Not storing data
Question: When I run `python manage.py shell` and then:
from django.core.cache import cache
cache.set("stack","overflow",3000)
print cache.get("stack")
(output: ) None
I tried restarting memcache, and here is what's in my settings:
CACHES = {
'default' : {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211',
}
}
Answer: Make sure it's using the correct cache. Try `from django.core.cache import
caches`, and then see the contents of `caches.all()`. It should just have one
instance of `django.core.cache.backends.memcached.MemcachedCache`.
If it is, try accessing that directly, e.g.
from django.core.cache import caches
m_cache = caches.all()[0]
m_cache.set("stack","overflow",3000)
m_cache.get("stack")
This might not solve your problem, but will at least get you closer to
debugging Memcached instead of Django's cache proxy or your configuration.
|
For Python 3.2: Why does adding bytes to a bytes object take longer as more bytes are added?
Question: I recently started learning Python (my first programming language since using
GW BASIC as a kid). I’ve noticed that when adding bytes to a bytes object,
each byte takes more time to add than the last; and by contrast, when adding
integers to a list object, each integer takes the same amount of time to add
as the last. The following program illustrates.
import time
import struct
time.clock() # for Windows
def time_list():
print("adding 9,999,999 0s to one list 9 times:")
a = []
for i in range(9):
start_time = time.clock()
for j in range(9999999):
a += [0]
end_time = time.clock()
print("loop %d took %f seconds" %(i, end_time - start_time))
print()
def time_bytes_object():
print("adding 99,999 pad bytes to a bytes object 9 times:")
a = bytes()
for i in range(9):
start_time = time.clock()
for j in range(99999):
a += struct.pack('<B', 0)
end_time = time.clock()
print("loop %d took %f seconds" %(i, end_time - start_time))
print()
time_list()
time_bytes_object()
What is it about the bytes object (or the struct.pack function) that makes
adding bytes take increasing amounts of time? Or is there a faster way to
collect a bunch of bytes than the way used in my example?
Thanks for any help,
Victor
Answer: Byte strings (and Unicode strings) in Python are immutable, whereas lists are
mutable.
What this means is that every append (`+=`) done on a byte string must make a
copy of that string; the original is not modified (though it will be garbage-collected later).
In contrast, the `append` method of `list` (also used by
`+=`) will actually modify the list.
What you want is the `bytearray` type, which is a mutable type functioning
much like a list of bytes. Appending to a `bytearray` takes (amortized)
constant time, and it is easily converted to and from a byte string.
|
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.