Weird output running a fibonacci sequence
Question: Brand new to using python, need help figuring out why my command line is
spitting out huge strings of numbers and not the fib sequence up to the var I
pass in. Here is what I have so far:
import sys
def fib(n):
a, b = 0, 1
while a < n:
print a
a, b = b, a+b
if __name__ == "__main__":
fib(sys.argv[1])
Before I used sys.argv[1] I was able to pass n in directly and get the sequence up to the number I wanted, i.e. if I entered n as 12 I would get 0, 1, 1, 2, 3, 5, 8, which is correct. However I cannot get this to work now. I put a print n statement right after the def fib(n): line, and it prints whatever I pass in via sys.argv.
Where am I going wrong? Thanks for your time.
Answer: Don't forget to convert the input argument (a string) into an integer type:
fib(int(sys.argv[1]))
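The underlying cause of the strange output: in Python 2 an int compares as less than a str in a cross-type comparison, so with the unconverted string argument `a < n` stays true forever and the loop never stops. A minimal corrected version of the script from the question:
    import sys

    def fib(n):
        a, b = 0, 1
        while a < n:
            print a
            a, b = b, a + b

    if __name__ == "__main__":
        # sys.argv entries are strings; convert before comparing to ints
        fib(int(sys.argv[1]))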
|
loadmat python memory error
Question: I'm new to Python and I want to import a matlab struct of size 850M to it. I
use "loadmat" but I get a memory error:
return self._matrix_reader.array_from_header(header, process)
File "mio5_utils.pyx", line 624, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:5401)
File "mio5_utils.pyx", line 653, in scipy.io.matlab.mio5_utils.VarReader5.array_from_header (scipy\io\matlab\mio5_utils.c:4849)
File "mio5_utils.pyx", line 706, in scipy.io.matlab.mio5_utils.VarReader5.read_real_complex (scipy\io\matlab\mio5_utils.c:5578)
File "mio5_utils.pyx", line 424, in scipy.io.matlab.mio5_utils.VarReader5.read_numeric (scipy\io\matlab\mio5_utils.c:3439)
File "mio5_utils.pyx", line 360, in scipy.io.matlab.mio5_utils.VarReader5.read_element (scipy\io\matlab\mio5_utils.c:3164)
File "streams.pyx", line 76, in scipy.io.matlab.streams.GenericStream.read_string (scipy\io\matlab\streams.c:1408)
MemoryError
I'm running python 3.2 on a Windows XP with 3.5G of RAM. Here is my code:
from scipy.io import matlab as mio
mat = mio.loadmat(DIR + '/input.mat')
Could you please help me and tell me what I should do to fix this?
Answer: You are probably using 32-bit Python. The maximum address space for any 32-bit program (this issue in fact has nothing to do with Python or Scipy) is 2GB --- how much memory you have installed on the machine does not matter. In practice, the allocation of large objects starts to fail even earlier, [due to virtual memory fragmentation](http://blogs.msdn.com/b/johan/archive/2007/04/19/why-adding-more-memory-won-t-resolve-outofmemoryexceptions.aspx) (and ~800 MB seems to be expected as a rule of thumb).
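If in doubt about which build you are running, a quick check (this prints 32 on a 32-bit interpreter and 64 on a 64-bit one):
    import struct
    # the size in bits of a C pointer identifies the interpreter build
    print(struct.calcsize("P") * 8)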
The solution would be to use 64-bit Python instead of the 32-bit one. For
this, you need a 64-bit operating system such as Windows 7 or Linux.
However, if you are stuck with the 32-bit version of Windows XP, there are
[some tricks](http://msdn.microsoft.com/en-
us/library/windows/desktop/bb613473%28v=vs.85%29.aspx) for bumping the 32-bit
memory limit up to 3GB on 32-bit systems, which may help here.
|
Python NLTK - counting occurrence of word in brown corpora based on returning top results by tag
Question: I'm trying to return the top occurring values from a corpus for specific tags. I can get the tag and the word themselves to return fine, however I can't get the count to return within the output.
import itertools
import collections
import nltk
from nltk.corpus import brown
words = brown.words()
def findtags(tag_prefix, tagged_text):
cfd = nltk.ConditionalFreqDist((tag, word) for (word, tag) in tagged_text
if tag.startswith(tag_prefix))
return dict((tag, cfd[tag].keys()[:5]) for tag in cfd.conditions())
tagdictNNS = findtags('NNS', nltk.corpus.brown.tagged_words())
This returns the following fine
for tag in sorted(tagdictNNS):
print tag, tagdictNNS[tag]
I have managed to return the count of every NN based word using this:
pluralLists = tagdictNNS.values()
pluralList = list(itertools.chain(*pluralLists))
for s in pluralList:
sincident = words.count(s)
print s
print sincident
That returns everything.
Is there a better way of inserting the occurrence count into the dict `tagdictNNS[tag]`?
edit 1:
pluralLists = tagdictNNS.values()[:5]
pluralList = list(itertools.chain(*pluralLists))
returns them in size order from the for s loop. still not the right way to do
it though.
edit 2: updated dictionaries so they actually search for NNS plurals.
Answer: I might not understand, but given your tagdictNNS:
>>> new = {}
>>> for k,v in tagdictNNS.items():
new[k] = len(tagdictNNS[k])
>>> new
{'NNS$-TL-HL': 1, 'NNS-HL': 5, 'NNS$-HL': 4, 'NNS-TL': 5, 'NNS-TL-HL': 5, 'NNS+MD': 2, 'NNS$-NC': 1, 'NNS-TL-NC': 1, 'NNS$-TL': 5, 'NNS': 5, 'NNS$': 5, 'NNS-NC': 5}
Then you can do something like:
>>> from operator import itemgetter
>>> sorted(new.items(), key=itemgetter(1), reverse=True)[:2]
[('NNS-HL', 5), ('NNS-TL', 5)]
|
Python Timer Callback Method
Question:
from threading import Timer
class test_timer():
def __init__(self):
self.awesum="hh"
self.timer = Timer(1,self.say_hello,args=["WOW"])
def say_hello(self,message):
        self.awesum = message
print 'HIHIHIIHIH'
print message
raise Exception("hi")
if __name__ == '__main__':
print 'Got to main'
x=test_timer()
When I run the code above, my callback method is never triggered. I have been
trying to solve this for hours but cannot figure it out >.<
To test the timer, I run this code and check to see if x.awesum is 'WOW'.
Answer: You never start the timer. You need to:
self.timer.start()
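A minimal corrected sketch of the class from the question (the `raise` is dropped here; an exception raised inside a timer callback only prints a traceback in that thread and never reaches the main thread):
    from threading import Timer

    class test_timer():
        def __init__(self):
            self.awesum = "hh"
            self.timer = Timer(1, self.say_hello, args=["WOW"])
            self.timer.start()  # without this the callback never fires

        def say_hello(self, message):
            self.awesum = message
            print message

    if __name__ == '__main__':
        x = test_timer()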
|
What is the most pythonic way to support unittest2 features across a range of Python versions?
Question: I can think of two ways to ensure that I can use modern features from the
unittest library across a wide range of Python versions:
try:
from unittest2 import TestCase
except ImportError:
from unittest import TestCase
or
import sys
if sys.version_info.major >= 2 and sys.version_info.minor >= 7:
from unittest import TestCase
else:
from unittest2 import TestCase
Which one of these is more Pythonic?
Answer: I don't like the fact that the second version has to import another module (`sys`), and its version check is also fragile (the `minor>=7` test misfires for Python 3.x releases with a minor version below 7), so my preference is for the first version:
try:
from unittest2 import TestCase
except ImportError:
from unittest import TestCase
EDIT:
It turns out that `pyflakes` and `flake8` are not happy with the version above
and will report a "redefinition of unused 'import' from line ... " error or
"W402 'TestCase' imported but unused" error. They seem to prefer it to be
written as follows:
try:
import unittest2
TestCase = unittest2.TestCase
except ImportError:
import unittest
TestCase = unittest.TestCase
|
Python: Selenium getting empty results
Question: I am following [this video](http://www.youtube.com/watch?v=DL7gyuqkzzU) to get
myself familiar with selenium. My code is
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from pyvirtualdisplay import Display
import os
chromedriver = "/usr/bin/chromedriver"
os.environ['webdriver.chrome.driver'] = chromedriver
display = Display(visible=0, size=(800,600))
display.start()
br = webdriver.Chrome(chromedriver)
br.get("http://www.google.com")
Now to print the results
q = br.find_element_by_name('q')
q.send_keys('python')
q.send_keys(Keys.RETURN)
print br.title
results = br.find_elements_by_class_name('g')
print results
for result in results:
print result.text
print "-"*140
The output I am getting is just `python` and when I try to print `results` it
is `[]`.
When I try the below code in chrome's javascript console it works fine.
res = document.getElementsByClassName('g')[0]
<li class="g">…</li>
res.textContent
" Python Programming Language β Official Websitewww.python.org/Cached - SimilarShareShared on Google+. View the post.You +1'd this publicly. UndoHome page for Python, an interpreted, interactive, object-oriented, extensible programming language. It provides an extraordinary combination of clarity and ...CPython - Documentation - IDEs - GuiProgramming"
So, any idea why am I not getting any results with selenium+python.
Answer: Adding `time.sleep(3)` after `q.send_keys(Keys.RETURN)` seems to solve the problem. That's because pressing Keys.RETURN kicks off an AJAX request, and when you try to collect the results they aren't on the page yet. Selenium, AFAIK, has no straightforward way to determine whether scripts like this have finished executing.
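A minimal sketch of that workaround, reusing the objects from the question:
    import time

    q = br.find_element_by_name('q')
    q.send_keys('python')
    q.send_keys(Keys.RETURN)
    time.sleep(3)  # crude wait: give the AJAX results time to render
    results = br.find_elements_by_class_name('g')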
That said, I think it would be more reliable to do
br.get("http://www.google.com/search?q=python")
results = br.find_elements_by_class_name('g')
|
argparse optional argument before positional argument
Question: I was wondering if it is possible to have a positional argument follow an
argument with an optional parameter. Ideally the last argument entered into
the command line would always apply toward 'testname'.
import argparse
parser = argparse.ArgumentParser(description='TAF')
parser.add_argument('-r','--release',nargs='?',dest='release',default='trunk')
parser.add_argument('testname',nargs='+')
args = parser.parse_args()
I would like both of these calls to have smoketest apply to testname, but the
second one results in an error.
>> python TAF.py -r 1.0 smoketest
>> python TAF.py -r smoketest
TAF.py: error: too few arguments
I realize that moving the positional argument to the front would result in the
correct behavior of the optional parameter, however this is not quite the
format I am looking for. The choices flag looks like an attractive
alternative, however it throws an error instead of ignoring the unmatched
item.
EDIT: I've found a hacky way around this. If anyone has a nicer solution I
would appreciate it.
import argparse
parser = argparse.ArgumentParser(description='TAF')
parser.add_argument('-r','--release',nargs='?',dest='release',default='trunk')
parser.add_argument('testname',nargs=argparse.REMAINDER)
args = parser.parse_args()
if not args.testname:
args.testname = args.release
args.release = ''
Answer: As stated in the
[documentation](http://docs.python.org/dev/library/argparse.html#nargs):
> `'?'`. One argument will be consumed from the command line if possible, and
> produced as a single item. If no command-line argument is present, the value
> from default will be produced. Note that for optional arguments, there is an
> additional case - the option string is present but not followed by a
> command-line argument. In this case the value from const will be produced.
So, the behaviour you want is not obtainable using `'?'`. Probably you could
write some hack using `argparse.Action` and meddling with the previous
results.(1)
I think the better solution is to split the functionality of that option. Make it an option that requires an argument (but the option itself is optional) and add a second option, without an argument, that sets the release to `'trunk'`. In this way you can obtain the same results without any hack, and I think the interface is simpler.
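A minimal sketch of that interface (the `--trunk` flag name is just an illustration of mine, not from the original post):
    import argparse

    parser = argparse.ArgumentParser(description='TAF')
    # -r now always consumes a value; omitting it falls back to the default
    parser.add_argument('-r', '--release', dest='release', default='trunk')
    # a separate no-argument flag that explicitly requests the trunk release
    parser.add_argument('--trunk', dest='release', action='store_const',
                        const='trunk')
    parser.add_argument('testname', nargs='+')
    args = parser.parse_args()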
In your example:
python TAF.py -r smoketest
It's quite clear that `smoketest` will be interpreted as an argument to `-r`.
At least following unix conventions. If you want to keep `nargs='?'` then the
user _must_ use `--`:
$ python TAF.py -r -- sometest
Namespace(release=None, testname=['sometest']) #parsed result
(1) An idea on how to do this: check if the option has an argument. If it has one, check whether it is a valid test name. If so, put it by hand into `testname` and set `release` to the default value. You'll also have to set a "flag" that tells you that this happened.
Now, before parsing `sys.argv` you must redirect `sys.stderr`. When doing the
parsing you must catch `SystemExit`, check the `stderr` and see if the error
was "too few arguments", check if the flag was set, if so ignore the error and
continue running, otherwise you should reprint to the original `stderr` the
error message and exit.
This approach does not look robust, and it's probably buggy.
|
Boxplot dictionaries instead of lists?
Question: Let us say I want to create a boxplot of a list which contains the numbers 1-5
about a million times each.
Such a list would be of about size 5 000 000, however represented as a dict it
takes no space at all:
s = {1: 1000000, 2: 1000000, 3: 1000000, 4: 1000000, 5:1000000}
The problem is, if I try to create a boxplot of that dict I get the error
Traceback (most recent call last):
File "<pyshell#17>", line 1, in <module>
ax.boxplot(s)
File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site-packages/matplotlib/axes.py", line 5462, in boxplot
if not hasattr(x[0], '__len__'):
KeyError: 0
Is there a clever way of boxplotting the dictionary `s`, without having to put
all the elements in a list?
* * *
A comment suggested I try
boxplot(n for n, count in s.iteritems() for _ in xrange(count))
but this resulted in
Traceback (most recent call last):
File "<pyshell#7>", line 1, in <module>
boxplot(n for n, count in s.iteritems() for _ in xrange(count))
File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site-packages/matplotlib/pyplot.py", line 2134, in boxplot
ret = ax.boxplot(x, notch, sym, vert, whis, positions, widths, patch_artist, bootstrap)
File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site-packages/matplotlib/axes.py", line 5462, in boxplot
if not hasattr(x[0], '__len__'):
TypeError: 'generator' object has no attribute '__getitem__'
Answer: The whole point of using pictures to describe data is to get a feeling for the
data as a whole, not to be terribly exact. So there will not be much harm in
condensing your data by generating one representative data point for every
1000 actual data points:
x = [val for val, num in s.items() for i in range(num//1000)]
This should be good enough for the naked eye:
import matplotlib.pyplot as plt
import numpy as np
s = {1: 1000000, 2: 1000000, 3: 1000000, 4: 1000000, 5:1000000}
x = [val for val, num in s.items() for i in range(num//1000)]
dct = plt.boxplot(x)
plt.show()
|
How to undo a string and calculate
Question: > **Possible Duplicate:**
> [parsing math expression in python and solving to find an
> answer](http://stackoverflow.com/questions/13055884/parsing-math-expression-
> in-python-and-solving-to-find-an-answer)
How can I "undo" a string with plus and addition signs in order to calculate
them?
I have a string for example:
> '6*1+7*1+1*7'
I tried int() but I got an error. How can I turn this whole string into a pure integer calculation?
Answer: You have to actually implement the operations you want to support by parsing
the string and calculating the result. A trivial parser would look like:
>>> import functools,operator
>>> sum(functools.reduce(operator.mul, map(int, summand.split('*')), 1)
... for summand in '6*1+7*1+1*7'.split('+'))
20
Note that the built-in
[`eval`](http://docs.python.org/3/library/functions.html#eval) may work in a
one-off script or an interactive console, but it interprets the string as
Python source and therefore allows anyone who controls the string (i.e. the
user) to [execute arbitrary Python
commands](http://en.wikipedia.org/wiki/Arbitrary_code_execution).
|
Why does Mechanize(-Python) seem to overlook some hidden form fields but not others?
Question: I'm working with a form that has several fields, some text, and several
hidden. The problem is that when I look at the list of fields that my
mechanize.Browser object "sees", some important hidden fields are missing, but
not all. According to the most popular answer for [this similar
question](http://stackoverflow.com/questions/3338214/mechanize-does-not-see-
some-hidden-form-inputs), this is happening because the web page is querying
the user-agent string. That is not the case for me, and I know this for two
reasons:
1. When I save the "scraped" form to a file, I can see the missing fields, and
2. I've altered my browser object's user-agent string, as that solution suggests, but it does not help me.
What does help me is the [second most popular
solution](http://stackoverflow.com/a/11394457/1335290) to that issue, but I
don't understand why this is. Why would Mechanize "see" some hidden form
fields but not others, requiring manual input of the missing fields?
Answer: Granted I don't know what you're actually trying to do - but as someone who's been scraping webpages for years I have to give you some unsolicited advice. I apologize in advance.
I would strongly urge you to transition over to something that can handle
javascript. Mechanize is a great module, it was amazingly useful back in the
day, but the web is all blinking lights, CSS and dancing babies you have to
click.
The reason I say this, is that the 'hidden' fields could be something fancy,
or they could be javascript modified forms that you'll waste hours trying to
reverse engineer how it works just to hammer the square peg into the round
hole.
The modern but unfortunately titanically heavy-weight replacements for
Mechanize that I would suggest are:
* [phantomjs](http://phantomjs.org/) which provides a WebKit based javascript-centric way to interact with webpages (headlessly, which is a bonus) It's Qt based, but has solid release binaries and if you build from source it actually contains everything it needs to run without having to sync up with some specific version of Qt.
* [PySide](http://www.pyside.org/) bindings for QtWebKit which is nifty although there can be a bit of a learning curve but IMHO my favorite just because it's nice to be able to reach inside the browser and get my hands dirty to see whats going on.
* WebKit also provides a nice (although, poorly supported by Python) interface where you can enable a websocket server in the browser and drive it over websockets using an API as defined [in Inspector.json](http://trac.webkit.org/browser/trunk/Source/WebCore/inspector/Inspector.json). Stock Chrome supports this out of the box. You can find more details [on the Chrome developer website.](https://developers.google.com/chrome-developer-tools/docs/remote-debugging)
So, pretty much WebKit heavy, has nothing to do with what you're asking about
- but in the long run this is where you're going to end up to be able to
really automatically navigate and scrape the web.
|
Python: How to refactor circular imports
Question: I've got a thing that you can do `engine.setState(<state class>)` and it will
instantiate the class type you give it and start running on the new state.
In `SelectFileState` there is a button to go to `NewFileState`, and on
`NewFileState`, there is a button to go back to `SelectFileState`.
Now, at the beginning of `SelectFileState`, I'm importing `NewFileState` (So I
can later in the class do `engine.setState(NewFileState)`. At the beginning of
`NewFileState`, I'm also importing `SelectFileState` (So I can later go back
to `SelectFileState`).
However, this creates a circular import, as described in some other posts. Some say that circular imports are an indicator of bad design, and should be refactored.
I know that I can just fix this problem by importing `SelectFileState` right
before I need to use it, but I'd rather do things the right way and refactor
it.
Now I'm wondering though.. How would you refactor that out?
Thanks.
_Edit_ : _Pydsigner suggests that I merge the two files into one, as they are
both very related to each other. However, I cannot put EVERY state that has a
circular dependency into one file, so there's got to be a better method for
that. Any ideas?_
_2Edit_ : _I'm circumventing this problem for now by not using the`from x
import y` syntax, and instead just doing `import x`. This is not a preferable
solution, and I'd like to know the "Pythonic" way to fix this kind of thing.
Just merging files together can't be the fix forever._
The code:
**SelectFileState**
from states.state import State
from states.newfilestate import NewFileState
from elements.poster import Poster
from elements.label import Label
from elements.button import Button
from elements.trifader import TriFader
import glob
import os
class SelectFileState(State):
def __init__(self, engine):
super().__init__(engine)
def create(self):
self.engine.createElement((0, 0), Poster(self.engine.getImage('gui_loadsave')), 1)
self.engine.createElement((168, 30), Label("Load a game", 40), 2)
self.engine.createElement((400, 470), Button("New save", code=self.engine.createElement, args=((0, 0), TriFader(NewFileState, False), -240)), 3)
ycounter = 150
globs = glob.glob("save\\*.mcw")
for file in globs:
self.engine.createElement((200, ycounter), Button(os.path.basename(file)[:-4]), 2)
ycounter += 50
**NewFileState**
from states.state import State
from states.selectfilestate import SelectFileState
from elements.poster import Poster
from elements.label import Label
from elements.button import Button
from elements.inputbox import InputBox
from elements.trifader import TriFader
class NewFileState(State):
def __init__(self, engine):
super().__init__(engine)
def create(self):
self.engine.createElement((0, 0), Poster(self.engine.getImage('gui_loadsave')), 1)
self.engine.createElement((135, 30), Label("Make a new save", 40), 2)
self.lvlname = self.engine.createElement((180, 212), InputBox(length=25, text="World name"), 2)
self.engine.createElement((200, 240), Button(text="Ok", code=self.createSave, args=()), 2)
def createSave(self):
open("save\\" + self.lvlname.getText() + ".mcw", 'w')
self.engine.createElement((0, 0), TriFader(SelectFileState), -240)
Answer: Without seeing code, what would make the most sense is to merge the two files.
If they are that closely intertwined, you could probably put them together
without anything really oddly out of place.
|
pythonic way to maximize the number of items that fit in a list of available spots
Question: Here is the problem. Each item has an index value, and the slots it could fit
into.
items = ( #(index, [list of possible slots])
(1, ['U', '3']),
(2, ['U', 'L', 'O']),
(3, ['U', '1', 'C']),
(4, ['U', '3', 'C', '1']),
(5, ['U', '3', 'C']),
(6, ['U', '1', 'L']),
)
What is the largest set of these items that can be fitted into the slots? No slot can be used more than once.
My solution seems hard to follow, and very non-pythonic [and fails on the last item]. I didn't want to ask a "what's better" question before solving the problem myself [so now here I am, beggar's hat in hand]. Here's my code:
def find_available_spot(item, spot_list):
spots_taken = [spot for (i,spot) in spot_list]
i, l = item
for spot in l:
if spot not in spots_taken: return (i, spot)
return None
def make_room(item, spot_list, items, tried=[]):
ORDER = ['U','C','M','O','1','3','2','L']
i, l = item
p_list = sorted(l, key=ORDER.index)
spots_taken = [spot for (i, spot) in spot_list]
for p in p_list:
tried.append(p)
spot_found = find_available_spot((i,[p]),spot_list)
if spot_found: return spot_found
else:
spot_item = items[spots_taken.index(p)]
i, l = spot_item
for s in tried:
if s in l: l.remove(s)
if len(l) == 0: return None
spot_found = find_available_spot((i,l),spot_list)
if spot_found: return spot_found
spot_found = make_room((i,l), spot_list, items, tried)
if spot_found: return spot_found
return None
items = ( #(index, [list of possible slots])
(1, ['U', '3']),
(2, ['U', 'L', 'O']),
(3, ['U', '1', 'C']),
(4, ['U', '3', 'C', '1']),
(5, ['U', '3', 'C']),
(6, ['U', '1', 'L']),
)
spot_list = []
spots_taken = []
for item in items:
spot_found = find_available_spot(item, spot_list)
if spot_found:
spot_list.append(spot_found)
else:
spot_found = make_room(item,spot_list,items)
if spot_found: spot_list.append(spot_found)
Answer: Simply trying every possibility has a certain brutal elegance:
>>> items = (
... (1, ['U', '3']),
... (2, ['U', 'L', 'O']),
... (3, ['U', '1', 'C']),
... (4, ['U', '3', 'C', '1']),
... (5, ['U', '3', 'C']),
... (6, ['U', '1', 'L']),
... )
>>> import itertools
>>> locs = zip(*items)[1]
>>> max((len(p), p) for p in itertools.product(*locs) if len(p) == len(set(p)))
(6, ('U', 'O', 'C', '1', '3', 'L'))
Admittedly it doesn't scale very well, though.
[edit]
.. and, as noted in the comments, it only finds a solution when every item can be placed. A slightly more efficient (but still brute-force) solution works even if there isn't one:
def find_biggest(items):
for w in reversed(range(len(items)+1)):
for c in itertools.combinations(items, w):
indices, slots = zip(*c)
for p in itertools.product(*slots):
if len(set(p)) == len(p):
return dict(zip(indices, p))
>>> items = ( (1, ['U', '3']), (2, ['U', 'L', 'O']), (3, ['U', '1', 'C']), (4, ['U', '3', 'C', '1']), (5, ['U', '3', 'C']), (6, ['U', '1']), (7, ['U', '1', 'L']), )
>>> find_biggest(items)
{1: 'U', 2: 'O', 3: '1', 4: '3', 5: 'C', 7: 'L'}
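For larger inputs, note that this is a bipartite matching problem (items on one side, slots on the other), which can be solved in polynomial time. A minimal sketch using networkx (a suggestion of mine, not part of the original answer; assumes networkx is installed):
    import networkx as nx
    from networkx.algorithms import bipartite

    items = ( (1, ['U', '3']), (2, ['U', 'L', 'O']), (3, ['U', '1', 'C']),
              (4, ['U', '3', 'C', '1']), (5, ['U', '3', 'C']), (6, ['U', '1', 'L']), )

    G = nx.Graph()
    for index, slots in items:
        for slot in slots:
            # tag the nodes so item indices and slot names cannot collide
            G.add_edge(('item', index), ('slot', slot))

    # Hopcroft-Karp maximum matching; the result maps nodes in both directions
    matching = bipartite.maximum_matching(G)
    assignment = {node[1]: mate[1] for node, mate in matching.items()
                  if node[0] == 'item'}
    print assignment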
|
what is the difference between "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/" and "/Library/Python/2.7/"
Question: I am working on a mac. A quick question: could someone tell me the difference between these two directories?
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
/Library/Python/2.7/site-packages/
Answer: ### python.org
The installer from python.org installs to
`/Library/Frameworks/Python.framework/`, and only that python executable looks
in the contained site-package dir for packages.
### /Library/Python
In contrast, the dir `/Library/Python/2.7/site-packages/` is a global place where you can put python packages; every python 2.7 interpreter will look there (for example the python 2.7 that comes with OS X).
### ~/Library/Python
The dir `~/Library/Python/2.7/site-packages`, if it exists, is also used but
for your user only.
### sys.path
From within python, you can check, which directories are currently used by
`import sys; print(sys.path)`
### homebrew
Note: a python installed via homebrew will put its site-packages in `$(brew --prefix)/lib/python2.7/site-packages`, but will also be able to import packages from `/Library/Python/2.7/site-packages` and `~/Library/Python/2.7/site-packages`.
|
Python Convert to date and compare
Question: I have two strings like 1352789792.757637 and 1352789919.235815. How do I convert them back to times and compare them?
Thanks for the help.
Answer: This is assuming that those seconds are seconds since the epoch. If so, this
should work for converting to a `struct_time`:
>>> import time
>>> time.gmtime(1352789792.757637)
time.struct_time(tm_year=2012, tm_mon=11, tm_mday=13, tm_hour=6, tm_min=56, tm_sec=32, tm_wday=1, tm_yday=318, tm_isdst=0)
However note that the format you got back from the file actually represents
seconds, so manipulations like differences, etc. can all be done on those
numbers. Depending on what kind of analysis you want to do, that may make it a
bit easier (i.e. treat them as `floats` instead of converting to time). Not
sure of your use case though, so this may be misguided :)
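A minimal sketch of both approaches, using the values from the question:
    import time

    a = float('1352789792.757637')
    b = float('1352789919.235815')

    # compare directly as seconds since the epoch
    print b > a          # True
    print b - a          # difference in seconds
    # or convert for display / calendar-style inspection
    print time.gmtime(b)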
|
Python regular expression substitute whitespace for hyphen
Question: I get a LOB object that may have one or many dates. Thinking of the dates as a table: if the first date is empty I get a chr(20). E.g. if the 3rd element of the array has a date and the first two are empty, it would look similar to " ", " ", "01/01/01 01:01:01". I would like to change the " " inside the date-time string to something different ("-") so I can easily differentiate between a merely empty element and an actual date-time value with a space in the middle. I'm brand new to python, and regular expressions would not be considered my strong suit.
>>> x = '01/01/12 17:11:11'
>>> y = re.sub( r"\d{2} \d{2}", r"\d{2}-\d{2}",x)
>>> print (x,y)
01/01/12 17:11:11 01/01/\d{2}-\d{2}:11:11
Answer: Use lookahead and lookbehind assertions:
>>> import re
... x = '01/01/12 17:11:11 random junk 01/08/10 12:01:33'
... y = re.sub(r'(?<=\d{2}) (?=\d{2})', '-', x)
... y
'01/01/12-17:11:11 random junk 01/08/10-12:01:33'
|
Are python Exceptions as class attributes a bad thing?
Question: I find myself often wanting to structure my exception classes like this:
# legends.py
class Error(Exception): pass
class Rick(object):
class Error(Error): pass
class GaveYouUp(Error): pass
class LetYouDown(Error): pass
class Michael(object):
class Error(Error): pass
class BlamedItOnTheSunshine(Error): pass
class BlamedItOnTheMoonlight(Error): pass
I have only seen this pattern used in Django (`DoesNotExist`) and it makes so much sense. Is there anything I'm missing? Why do most people seem to favor top-level Exceptions?
**edit** I would use these classes for versatile granularity, e.g:
import legends
try:
do_stuff()
except legends.Michael.Error:
blame_it_on_the_boogie()
except legends.Rick.GaveYouUp:
let_you_down()
except legends.Error:
pass
except Exception as e:
raise Hell()
Answer: This is the exact pattern used by Django for certain ORM-related exceptions.
The advantage is that you can have an except clause which checks against a
type accessed through an instance:
rick = Rick()
try:
rick.roll()
except rick.GaveYouUp:
never()
except rick.LetYouDown:
never_ever()
This doesn't look that useful here, but if `rick` were a function parameter,
then it would potentially be rather useful.
This is also extremely useful in writing generic code which raises the
exceptions:
class GoddamStar(object):
def sing(self,tune):
raise self.Error()
class Rick(GoddamStar):
class Error(Error): pass
class GaveYouUp(Error): pass
class LetYouDown(Error): pass
class Michael(GoddamStar):
class Error(Error): pass
class BlamedItOnTheSunshine(Error): pass
class BlamedItOnTheMoonlight(Error): pass
rick = Rick()
try:
    rick.sing('some tune')
except Rick.GaveYouUp:
never()
except Michael.Error:
never_ever()
Django's exceptions generally all derive from global base classes, so that you
can also have a catch-all clause which still switches on a type of exception,
in case your `rick` is of an unknown (or otherwise unprovided for) class.
The reason why this isn't much more common is that (a) it doesn't work in
early-bound languages, which attract most of the book writers (b) it's
moderately rare that this is useful to the user, and so application writers
likely figure they aren't going to need it.
|
Elegant grid search in python/numpy
Question: I have a function that has a bunch of parameters. Rather than setting all of
the parameters manually, I want to perform a grid search. I have a list of
possible values for each parameter. For every possible combination of
parameters, I want to run my function which reports the performance of my
algorithm on those parameters. I want to store the results of this in a many-
dimensional matrix, so that afterwards I can just find the index of the
maximum performance, which would in turn give me the best parameters. Here is
how the code is written now:
param1_list = [p11, p12, p13,...]
param2_list = [p21, p22, p23,...] # not necessarily the same number of values
...
results_size = (len(param1_list), len(param2_list),...)
results = np.zeros(results_size, dtype = np.float)
for param1_idx in range(len(param1_list)):
for param2_idx in range(len(param2_list)):
...
param1 = param1_list[param1_idx]
param2 = param2_list[param2_idx]
...
results[param1_idx, param2_idx, ...] = my_func(param1, param2, ...)
max_index = np.argmax(results) # indices of best parameters!
I want to keep the first part, where I define the lists as-is, since I want to
easily be able to manipulate the values over which I search.
I also want to end up with the results matrix as is, since I will be
visualizing how changing different parameters affects the performance of the
algorithm.
The bit in the middle, though, is quite repetitive and bulky (especially
because I have lots of parameters, and I might want to add or remove
parameters), and I feel like there should be a more succinct/elegant way to
initialize the results matrix, iterate over all of the indices, and set the
appropriate parameters.
So, is there?
Answer: I think
[`scipy.optimize.brute`](http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brute.html#scipy.optimize.brute)
is what you're after.
>>> from scipy.optimize import brute
>>> a,f,g,j = brute(my_func,[param1_list,param2_list,...],full_output = True)
Note that if the `full_output` argument is `True`, the evaluation grid will be
returned.
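Two caveats, both notes of mine rather than part of the original answer: `brute` minimizes its objective, so to maximize performance you would pass a negated wrapper such as `lambda p: -my_func(*p)`; and if you want to keep the explicit value lists and the results matrix exactly as in the question, the nested loops can be collapsed directly with `itertools.product`. A minimal sketch:
    import itertools
    import numpy as np

    param_lists = [param1_list, param2_list]  # extend with more lists as needed

    results = np.zeros([len(p) for p in param_lists])
    for idx in itertools.product(*[range(len(p)) for p in param_lists]):
        # idx is a tuple of indices, one per parameter list
        params = [p[i] for p, i in zip(param_lists, idx)]
        results[idx] = my_func(*params)

    best_idx = np.unravel_index(np.argmax(results), results.shape)
    best_params = [p[i] for p, i in zip(param_lists, best_idx)]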
|
for huge arrays is numpy slower than list?
Question: check my following code; it is part of sigma_2 function (using crude sieving)
implemented in python which is one of divisor functions
<http://mathworld.wolfram.com/DivisorFunction.html>
from time import time
from itertools import count
import numpy
def sig2(N, nump=False):
init = time()
#initialize array with value=1 since every positive integer is divisible by 1
if nump:
print 'using numpy'
nums = numpy.ones((N,), dtype=numpy.int64)
else:
nums = [1 for i in xrange(1, N)]
#for each number n < N, add n*n to n's multiples
for n in xrange(2, N):
nn = n*n
for i in count(1):
if n*i >= N: break
nums[n*i-1] += nn
print 'sig2(n) done - {} ms'.format((time()-init)*1000)
I tried it with varying values and with numpy it is quite disappointing.
for 2000:
sig2(n) done - 4.85897064209 ms
took : 33.7610244751 ms
using numpy
sig2(n) done - 31.5930843353 ms
took : 55.6900501251 ms
for 200000:
sig2(n) done - 1113.80600929 ms
took : 1272.8869915 ms
using numpy
sig2(n) done - 4469.48194504 ms
took : 4705.97100258 ms
It goes on like this, and my code isn't really scalable (it is not O(n)); but at these sizes and beyond, using numpy causes performance problems. Shouldn't numpy be faster than python lists and dicts? That was my impression of numpy.
Answer: As @unutbu said numpy really shines when you use vectorised operations. Here
is an optimised implementation using numpy (it is consistent with the
definition of divisor function from Mathworld):
import numpy as np
def sig2_numpy(N):
x = np.arange(1,N+1)
x[(N % x) != 0] = 0
return np.sum(x**2)
When you call it, it is much faster:
>> import time
>> init = time.time()
>> print sig2_numpy(20000)
>> print "It took", (time.time()-init)*1000., 'ms'
It took 0.916957855225 ms
|
Why is this loop returning twice?
Question: I have the following code:
import re
from bs4 import BeautifulSoup
f = open('AIDNIndustrySearchAll.txt', 'r')
g = open('AIDNurl.txt', 'w')
t = f.read()
soup = BeautifulSoup(t)
list = []
counter = 0
for link in soup.find_all("a"):
a = link.get('href')
if re.search("V", a) != None:
list.append(a)
counter = counter + 1
new_list = ['http://www.aidn.org.au/{0}'.format(i) for i in list]
output = "\n".join(i for i in new_list)
g.write(output)
print output
print counter
f.close()
g.close()
It is basically going through a saved HTML page and pulling the links I am
interested in. I am new to Python, so I am sure the code is terrible but it is
(almost) working ;)
The current issue is that it is returning two copies of each link, not one. I
am sure it has something to do with the way the loop is set up but am a bit
stuck.
I welcome any help on this question (I can provide more details if required -
such as HTML and more information on links I am looking for) as well as any
general code improvements so I can learn as much as possible.
Answer: Since you've asked for code optimizations too, I will post my suggestions as an answer. Feel free!
from bs4 import BeautifulSoup
f = open('AIDNIndustrySearchAll.txt', 'r')
t = f.read()
f.close()
soup = BeautifulSoup(t)
results = [] ## 'list' is a built-in type and shouldn't be used as variable name
for link in soup.find_all('a'):
    a = link.get('href')
    if 'V' in a:  # keep links whose href contains "V", matching the original re.search
        results.append(a)
formatted_results = ['http://www.aidn.org.au/{0}'.format(i) for i in results]
output = "\n".join(formatted_results)
g = open('AIDNurl.txt', 'w')
g.write(output)
g.close()
print output
print len(results)
This still doesn't fix your original problem; see my comments and other people's comments on the question.
|
How to import wxPython module in Blender 2.64?
Question: I'm trying to import wxPython in my Blender game engine but I'm getting an error.
Python code (in Blender):
import bge
import wx
app = wx.App()
frame = wx.Frame(None, -1, 'frame in blender')
frame.Show()
app.MainLoop()
Error :-
Traceback (most recent call last):
File "Text", line 2, in <module>
ImportError: No module named wx
I googled this error but found nothing useful. I also copied the wx directory from Python's lib installation directory to Blender's lib directory. After that, I got this error:
Traceback (most recent call last):
File "Text", line 2, in <module>
File "C:\Program Files\Blender Foundation\Blender\2.64\python\lib\wx\__init__.py", line
14, in <module>
import __version__
ImportError: No module named __version__
please, help me!!
system details :-
os: windows 7
blender version : 2.64a
wxPython version : 2.8
Answer: That's because Blender ships with its own built-in version of the python interpreter. You can probably solve this by working out [another way of importing the module](http://www.google.com/#q=python+import+module+another+directory).
I should also point out that the Blender Game Engine is for the most part extremely old: many of its OpenGL calls are old and deprecated (it is based on fixed-pipeline behaviour), so I wouldn't consider the game engine for any "serious" application of any kind. Blender itself is a good application; the game engine is not.
|
Import class dynamically in Python
Question: I want to dynamically load a class from a given string. However, I do not know
which file the class will be in, so I will have to search all files. I've
tried this, but I get `AttributeError: 'module' object has no attribute
'MyClass'` even though I'm 100% sure that that module (in the current
iteration) has that class:
target = 'MyClass'
module_names = [mf[0:-3] for mf in os.listdir('application/models') if mf.endswith(".py")]
modules = [imp.new_module(x) for x in module_names]
for module in modules:
try:
target_class = getattr(module, target)
    except (ImportError, AttributeError):
continue
if target_class:
print 'found class'
It seems I'm getting really close. What I want is not to limit the search to
just one folder, but perhaps multiple folders. What's wrong with my code?
Edit: Ok now I'm trying something like this, but still getting the same error:
for m in module_names:
try:
x = reload(__import__(m))
target_class = getattr(x, target)
    except (ImportError, AttributeError):
continue
else:
break
if target_class:
print 'found class'
Answer: From the documentation on
[`imp.new_module`](http://docs.python.org/2/library/imp.html#imp.new_module),
the returned module is **empty**. Meaning that it will never contain your
class.
Perhaps what you want to do is add your target directory to `sys.path` and use
`__import__` to dynamically import those modules, then check for your class?
* * *
The following code works for me:
modules = ['foo','bar']
for mod in modules:
try:
x = reload(__import__(mod))
except ImportError:
print "bargh! import error!"
continue
try:
cls = getattr(x,'qux')
except AttributeError:
continue
a = cls()
print a.__class__.__name__
Where `foo.py` and `bar.py` are in the same directory:
#foo.py
class foo(object):
pass
and:
#bar.py
class qux(object):
pass
|
udisks FilesystemUnmount appears to not exist when calling from python
Question: I'm trying to unmount a filesystem that I mounted using FilesystemMount, but I
keep getting UnknownMethod exceptions. I've verified that I can call the
method on the Device interface via D-Feet, but trying to do it via dbus
directly doesn't appear to work at all. I've tried using the following
arguments:
* ''
* None
* []
* ['']
The following code demonstrates the problem:
import dbus
bus = dbus.SystemBus()
proxy = bus.get_object('org.freedesktop.UDisks', '/dev/fd0')
dev = dbus.Interface(proxy, 'org.freedesktop.UDisks.Device')
dev.FilesystemUnmount(['force'])
Exception:
`dbus.exceptions.DBusException: org.freedesktop.DBus.Error.UnknownMethod:
Method "FilesystemUmount" with signature "as" on interface
"org.freedesktop.UDisks.Device" doesn't exist`
Answer: Turns out that the problem is that FilesystemUnmount will only take an
ObjectPath that udisks handed out. So by adding a check for that and then
looking it up I got it to work. See the code below.
import dbus
path = '/dev/fd0'
bus = dbus.SystemBus()
if not isinstance(path, dbus.ObjectPath):
manager_obj = bus.get_object('org.freedesktop.UDisks',
'/org/freedesktop/UDisks')
manager = dbus.Interface(manager_obj, 'org.freedesktop.UDisks')
path = manager.FindDeviceByDeviceFile(path)
proxy = bus.get_object('org.freedesktop.UDisks', path)
dev = dbus.Interface(proxy, 'org.freedesktop.UDisks.Device')
dev.FilesystemUnmount('')
|
Python - create object of class from one package in different package
Question: I started using Python a few days back and I think I have a very basic question where I am stuck. Maybe I am not doing it correctly in Python, so I wanted some advice from the experts:
I have a config.cfg and a class test in one package lib as follows:
**myProj/lib/pkg1/config.cfg**
[api_config]
url = https://someapi.com/v1/
username=sumitk
**myProj/lib/pkg1/test.py**
class test(object):
def __init__(self, **kwargs):
config = ConfigParser.ConfigParser()
config.read('config.cfg')
print config.get('api_config', 'username')
#just printing here but will be using this as a class variable
    # ... other methods follow ...
Now I want to create an object of test in some other module in a different
package
**myProj/example/useTest.py**
from lib.pkg1.test import test
def temp(a, b, c):
var = test()
def main():
temp("","","")
if __name__ == '__main__':
main()
Running useTest.py is giving me error:
...
print config.get('api_config', 'username')
File "C:\Python27\lib\ConfigParser.py", line 607, in get
raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'api_config'
Now if I place thie useTest.py in the same package it runs perfectly fine:
myProj/lib/pkg1/useTest.py
myProj/lib/pkg1/test.py
myProj/lib/pkg1/config.cfg
I guess there is some very basic package access concept in Python that I am
not aware of or is there something I am doing wrong here?
Answer: The issue here is that you have a different working directory depending on
which module is your main script. You can check the working directory by
adding the following lines to the top of each script:
import os
print os.getcwd()
Because you just provide `'config.cfg'` as your file name, it will attempt to
find that file inside of the working directory.
To fix this, give an absolute path to your config file.
You should be able to figure out the absolute path with the following method
since you know that config.cfg and test.py are in the same directory:
# inside of test.py
import os
config_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
'config.cfg')
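Then hand that absolute path to the parser instead of the bare filename, e.g. inside `test.__init__`:
    config = ConfigParser.ConfigParser()
    config.read(config_path)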
|
Generate Markdown tables?
Question: Is there any way to generate tables from objects (Python/Ruby/Java/C#)?
I'd like to create a simple table programatically. I have some objects and I'd
like to map some properties to headers and the collection to rows.
Why Markdown? Because I'd like to edit that document manually later. Right
now, the whole process looks like this:
* reporting engine is in C#
* there are objects from which DOCX are generated (there is intermediate XML or something like that)
* almost always I have to do minor fixes and I have to open that docx documents in MS Word
* it's troublesome to ask the developer team to fix every single bug, because they simply have no time to do it instantly and I have to wait for next release.
I've figured out that if I could get a Markdown document, I could edit it easily, insert some variables, and use pandoc to replace those variables with the given data. But to get Markdown, I have to know how the devs could generate tables in Markdown.
Answer: I needed to do just about the same thing to generate Doxygen Markdown tables,
so I thought I'd share. I've run the example code successfully in both Python
2.7 and 3.3, although I can't claim I've tested it rigorously.
# Generates tables for Doxygen flavored Markdown. See the Doxygen
# documentation for details:
# http://www.stack.nl/~dimitri/doxygen/manual/markdown.html#md_tables.
# Translation dictionaries for table alignment
left_rule = {'<': ':', '^': ':', '>': '-'}
right_rule = {'<': '-', '^': ':', '>': ':'}
def evaluate_field(record, field_spec):
    """
    Evaluate a field of a record using the type of the field_spec as a guide.
"""
if type(field_spec) is int:
return str(record[field_spec])
elif type(field_spec) is str:
return str(getattr(record, field_spec))
else:
return str(field_spec(record))
def table(file, records, fields, headings, alignment = None):
"""
Generate a Doxygen-flavor Markdown table from records.
file -- Any object with a 'write' method that takes a single string
parameter.
records -- Iterable. Rows will be generated from this.
fields -- List of fields for each row. Each entry may be an integer,
string or a function. If the entry is an integer, it is assumed to be
an index of each record. If the entry is a string, it is assumed to be
a field of each record. If the entry is a function, it is called with
the record and its return value is taken as the value of the field.
headings -- List of column headings.
alignment - List of pairs alignment characters. The first of the pair
specifies the alignment of the header, (Doxygen won't respect this, but
it might look good, the second specifies the alignment of the cells in
the column.
Possible alignment characters are:
'<' = Left align (default for cells)
'>' = Right align
'^' = Center (default for column headings)
"""
num_columns = len(fields)
assert len(headings) == num_columns
# Compute the table cell data
columns = [[] for i in range(num_columns)]
for record in records:
for i, field in enumerate(fields):
            columns[i].append(evaluate_field(record, field))
# Fill out any missing alignment characters.
extended_align = alignment if alignment != None else []
if len(extended_align) > num_columns:
extended_align = extended_align[0:num_columns]
elif len(extended_align) < num_columns:
        extended_align += [('^', '<')
                           for i in range(num_columns - len(extended_align))]
heading_align, cell_align = [x for x in zip(*extended_align)]
field_widths = [len(max(column, key=len)) if len(column) > 0 else 0
for column in columns]
heading_widths = [max(len(head), 2) for head in headings]
column_widths = [max(x) for x in zip(field_widths, heading_widths)]
_ = ' | '.join(['{:' + a + str(w) + '}'
for a, w in zip(heading_align, column_widths)])
heading_template = '| ' + _ + ' |'
_ = ' | '.join(['{:' + a + str(w) + '}'
for a, w in zip(cell_align, column_widths)])
row_template = '| ' + _ + ' |'
_ = ' | '.join([left_rule[a] + '-'*(w-2) + right_rule[a]
for a, w in zip(cell_align, column_widths)])
ruling = '| ' + _ + ' |'
file.write(heading_template.format(*headings).rstrip() + '\n')
file.write(ruling.rstrip() + '\n')
for row in zip(*columns):
file.write(row_template.format(*row).rstrip() + '\n')
Here's a simple test case:
import sys
sys.stdout.write('State Capitals (source: Wikipedia)\n\n')
headings = ['State', 'Abrev.', 'Capital', 'Capital since', 'Population',
'Largest Population?']
data = [('Alabama', 'AL', '1819', 'Montgomery', '1846', 155.4, False,
205764),
('Alaska', 'AK', '1959', 'Juneau', '1906', 2716.7, False, 31275),
('Arizona', 'AZ', '1912', 'Phoenix', '1889',474.9, True, 1445632),
('Arkansas', 'AR', '1836', 'Little Rock', '1821', 116.2, True,
193524)]
fields = [0, 1, 3, 4, 7, lambda rec: 'Yes' if rec[6] else 'No']
align = [('^', '<'), ('^', '^'), ('^', '<'), ('^', '^'), ('^', '>'),
('^','^')]
table(sys.stdout, data, fields, headings, align)
Gives this output:
State Capitals (source: Wikipedia)
| State | Abrev. | Capital | Capital since | Population | Largest Population? |
| :------- | :----: | :---------- | :-----------: | ---------: | :-----------------: |
| Alabama | AL | Montgomery | 1846 | 205764 | No |
| Alaska | AK | Juneau | 1906 | 31275 | No |
| Arizona | AZ | Phoenix | 1889 | 1445632 | Yes |
| Arkansas | AR | Little Rock | 1821 | 193524 | Yes |
Doxygen renders this as a proper formatted table (screenshot omitted).
|
pyopengl framebuffer
Question: I'm trying to work with framebuffer objects in PyOpenGL and have found some
tutorials to teach myself. I'm working on a WinXP machine with Python 2.7.3
and I just installed the binary distributions of PyOpenGL 3.0.2 and PyOpenGL-
accelerate 3.0.2. However, directly at the beginning I encounter a problem, in
the sense that I get the error message that the fbo functions don't seem to
exist. These are the steps to recreate my problem:
Importing the modules:
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GL.framebufferobjects import *
I now should have the framebuffer objects/functions available to me.
print glGenFramebuffers
print glBindFramebuffer
shows
<OpenGL.extensions.glGenFramebuffers object at 0x03172260>
<OpenGL.extensions.glBindFramebuffer object at 0x03172120>
However, if I try to call (make an instance of) this object, as specified in the tutorial, with:
fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo )
I get the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "latebind.pyx", line 31, in OpenGL_accelerate.latebind.LateBind.__call__ (src\latebind.c:645)
File "C:\Python27\lib\site-packages\OpenGL\extensions.py", line 189, in finalise
self.__name__,
OpenGL.error.NullFunctionError: Attempt to call an undefined alternate function (glGenFramebuffers, glGenFramebuffersEXT), check for bool(glGenFramebuffers) before calling
using
bool(glGenFramebuffers)
indeed returns False.
What am I doing wrong? Shouldn't all the required framebuffer libraries be
installed with the binaries of PyOpenGL(-accelerate)?
Thanks in advance to anyone who can help me.
EDIT: I just found [Problems with Frame Buffer Objects (fbos) in
PyOpenGL](http://stackoverflow.com/questions/12953134/problems-with-frame-
buffer-objects-fbos-in-pyopengl), with a similar problem, but no solution
Answer: Apparently the above code doesn't show this behavior and works fine when run from a file. When I tried the same commands in an interactive console,
    bool(glGenFramebuffers)
returned False, presumably because the extension entry points can only be resolved once an OpenGL context has been created. When run from a file it returns True and everything functions normally.
Additionally, you don't seem to need to include
    from OpenGL.GL.framebufferobjects import *
in the newer versions of PyOpenGL (>= 3.0.2), as you also have access to framebuffer objects without it.
|
Error from urlopen "code for hash not found" on linux
Question: I've tried a couple of searches and I don't think this has been asked, but if
this is a duplicate please forgive me. I'm trying to use urllib on python-2.7
to read from a web page. Very simple application, all I want to do is get some
text from a page. Unfortunately the following code:
import urllib
address = "http://google.co.uk"
page = urllib.urlopen(address)
returns an error talking about the "hash code" not being found:
ERROR:root:code for hash sha224 was not found.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module>
globals()[__func_name] = __get_hash(__func_name)
File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha224
ERROR:root:code for hash sha256 was not found.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module>
globals()[__func_name] = __get_hash(__func_name)
File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha256
ERROR:root:code for hash sha384 was not found.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module>
globals()[__func_name] = __get_hash(__func_name)
File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha384
ERROR:root:code for hash sha512 was not found.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module>
globals()[__func_name] = __get_hash(__func_name)
File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor
raise ValueError('unsupported hash type %s' % name)
ValueError: unsupported hash type sha512
I've tried a lot of Google searching but nothing that's turned up so far has
been very useful. Any ideas?
Answer: The following page explains the package you need to install: <http://new2python.blogspot.com/2012/07/errorrootcode-for-hash-sha224-was-not.html>
In summary: you need to install the hashlib library (after downloading its tarball) with the following commands:
tar xvfz hashlib-20081119.tar.gz
cd hashlib-20081119
sudo python setup.py install
|
Unpack binary data with python
Question: I would like to unpack an array of binary data to `uint16` data with Python. The Internet is full of examples using `struct.unpack`, but only examples dealing with a binary array of size 4.
Most of these examples are as follow (`B` is a binary array from a file)
U = struct.unpack("HH",B[0:4]);
So i tried to unpack an array of size 6:
U = struct.unpack("HHH",B[0:6]);
It works.
But what do I do if I want to unpack an array of size `L` (where `L` is even)? I tried this:
U = struct.unpack("H"*(L/2),B[0:L]);
but it doesn't work, prompter gives me an error (for `L=512` for example):
struct.error: unpack requires a string argument of length 512
This message is strange because, to unpack a binary array to `uint16`, I need a format string `"HHH...HHH"` of half the size of this array...
I would be very grateful if someone could provide me with some help.
* * *
I progress a little bit... In fact, i tried:
U = struct.unpack("H"*8,B[0:8]);
It works.
U = struct.unpack("H"*10,B[0:10]);
It works.
U = struct.unpack("H"*222,B[0:444]);
It still works
U = struct.unpack("H"*223,B[0:446]);
It doesn't work! and it never works for size bigger than 446
Hope it will help anyone to answer me.
* * *
@MarkRansom I checked len(B) and in fact the length is 444. I was so sure that B was an array of size 512 because B comes from `B = f.read(512)`, where f is an 8000-byte file. So the problem is with read()... Thanks for this answer! But if someone can help me unpack a binary array of size L, I would be grateful.
Answer: Use array.fromstring or array.fromfile (see
<http://docs.python.org/2/library/array.html> ):
import array
U = array.array("H")
U.fromstring(B)
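Two further notes, as assumptions of mine rather than part of the original answer. First, on Windows open the file in binary mode (`'rb'`); Python 2's text mode stops reading at a stray Ctrl-Z byte and translates `\r\n` pairs, which would explain a `read(512)` returning only 444 bytes. Second, for an arbitrary even length `L` the format string can be built with struct's repeat-count syntax:
    import struct

    with open('data.bin', 'rb') as f:  # 'data.bin' is a placeholder name; 'rb' matters on Windows
        B = f.read(512)

    L = len(B)  # may be smaller than 512 near the end of the file
    U = struct.unpack('%dH' % (L // 2), B[:2 * (L // 2)])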
|
What do empty braces mean in Python?
Question: Please have a look at this snippet:
import xlrd,spss
from xlrd import open_workbook
wb=open_workbook('C:/temp/testbook.xls')
sheetnames=[]
for s in wb.sheets():
sheetnames.append(s.name)
Why should I write "`wb.sheets()`" instead of "`wb.sheets`"? And why is it
"`s.name`" instead of "`s.name()`"?
I often use empty braces when I'm not supposed to and the other way around.
Could anybody tell me what they mean and when I should (not) use them?
Answer: The `()` are necessary when the attribute (`sheets`) is a function you wish to call. The `()` should not be used when the attribute is a value you want to use directly rather than call.
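A minimal illustration, assuming `wb` is the open workbook from the question:
    method = wb.sheets     # the bound method object itself; nothing is called
    sheets = wb.sheets()   # calling it returns the list of Sheet objects
    print sheets[0].name   # .name is a plain attribute, so no parentheses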
|
How can I get a full medial-axis line with its perpendicular lines crossing it?
Question: I have an image and I want to get the pixels that cross through its medial
axis. I tried to use _skeletonize_ and _medial axis_ methods in order to get
them, but both methods return a one-dimensional line which is shorter than the corresponding object.
Here's the code with a sample image:-
>>> import skimage.filter
>>> import skimage.morphology
>>> import numpy as np
>>> import scipy.misc
>>> im=scipy.misc.imread('img.jpg')
>>> thr=skimage.filter.threshold_otsu(im)
>>> im=im > thr # Threshold the image
>>> im_sk=skimage.morphology.skeletonize(im)
>>> mask=np.where(im_sk==1) # pixels of skeleton
>>> im[mask]= 0 # this will color the skeleton with black
(Figures omitted: the original image, and the result with the skeleton drawn in black.)
As you can see the black line isn't connected to the tip of the shape.
(1) How can I get a fully connected one dimensional medial axis line that
represents the length of the shapes in the image.
(2) How can I get the pixels that are perpendicular to the medial axis (as I
want to draw perpendicular lines from one side to another crossing the medial
axis of the shape)
* I need any python library that can do this stuff.
Thanks
Answer: I think your question is slightly ill-posed. You have two questions here, but
I'll answer the second one.
You want to draw lines perpendicular to the medial axis. The problem with that
is the medial axis isn't necessarily a line, it is usually curved.
Your best bet is to take 2 sample points from the medial axis that are close
to one another. These two points define a line. You can then compute the
perpendicular bisector of these two points.
|
Python reverse integer using recursion
Question: I am working on a problem that needs to reverse an integer input without using a list or string. But my program only returns the first and last digits.
if number<10:
return number
else:
return reverseDisplay(number//10)
def main():
number=int(input("Enter a number:"))
print(number%10,end='')
print(reverseDisplay(number))
main()
It seems like the reverse function only worked once.
Answer: Your recursion discards `number % 10` at every step, so only the leading digit survives; each digit has to be scaled back into position instead. This should work:
from math import log10
def rev(num):
if num < 10:
return num
else:
ones = num % 10
rest = num // 10
#print ones, rest, int(log10(rest) + 1), ones * 10 ** int(log10(rest) + 1)
return ones * 10 ** int(log10(rest) + 1) + rev(rest)
print rev(9000), rev(1234), rev(1234567890123456789)
You could also reduce the number of times you call `log10` and number of math
operations by using a nested recursive function:
def rev(num):
def rec(num, tens):
if num < 10:
return num
else:
return num % 10 * tens + rec(num // 10, tens // 10)
return rec(num, 10 ** int(log10(num)))
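If recursion is not a hard requirement, an iterative accumulator avoids `log10`
entirely (a sketch; it still uses no lists or strings):

    def rev_iter(num):
        result = 0
        while num > 0:
            result = result * 10 + num % 10   # shift result left, append last digit
            num //= 10
        return result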
|
How to read the contents of active directory using python-ldap?
Question: My script is like this:
import ldap, sys
server = 'ldap://my_server'
l = ldap.initialize(server)
dn="myname@mydomain"
pw = "password"
l.simple_bind_s(dn,pw)
ldap.set_option(ldap.OPT_REFERRALS,0)
print "valid"
I am using Python 2.7 on windows.
Is there any method to read or get the contents of active directory?
Answer: You can do quite a lot also using `win32com.client` (which I had trouble
finding documentation for). For example I've needed to resolve user email
knowing his [`ADS_NAME_TYPE_NT4`](http://msdn.microsoft.com/en-
us/library/aa772267%28v=vs.85%29.aspx) formatted name (`doman\jonjoe`).
First of all you need to convert it to `ADS_NAME_TYPE_1779` format (`CN=Jeff
Smith,CN=users,DC=Fabrikam,DC=com`):
name_resolver = win32com.client.Dispatch(dispatch='NameTranslate')
name_resolver.Set(3, 'domain\\jonjoe')
ldap_query = 'LDAP://{}'.format(name_resolver.Get(1))
Once you have that you can simply call `GetObject()`:
ldap = win32com.client.GetObject(ldap_query)
print(ldap.Get('mail'))
Tested with Python 3.2.5
|
Need to try and count repeated lists within a list
Question: I'm trying to count how many repeated lists there are inside a list. But it
doesn't work the same way I could count repeated elements in just a list. I'm
fairly new to python, so apologies if it sounds too easy.
This is what I did:
    x= [["coffee", "cola", "juice", "tea"],["coffee", "cola", "juice", "tea"],
        ["cola", "coffee", "juice", "tea"]]
dictt= {}
for item in x:
dictt[item]= dictt.get(item, 0) +1
return(dictt)
Answer: Your code **_almost_** works. As others have mentioned, lists cannot be used
as dictionary keys but tuples can. The solution is to turn each list into a
tuple.
>>> x= [["coffee", "cola", "juice", "tea"], ### <-- this list appears twice
... ["coffee", "cola", "juice", "tea"],
... ["cola", "coffee", "juice", "tea"]] ### <-- this list appears once
>>>
>>> dictt= {}
>>>
>>> for item in x:
... # turn the list into a tuple
... key = tuple(item)
...
... # use the tuple as the dictionary key
... # get the current count for this key or 0 if the key does not yet exist
... # then increment the count
... dictt[key]= dictt.get(key, 0) + 1
...
>>> dictt
{('cola', 'coffee', 'juice', 'tea'): 1, ('coffee', 'cola', 'juice', 'tea'): 2}
>>>
You can turn the tuples back into lists if you need to.
>>> for key in dictt:
... print list(key), 'appears ', dictt[key], 'times'
...
['cola', 'coffee', 'juice', 'tea'] appears 1 times
['coffee', 'cola', 'juice', 'tea'] appears 2 times
>>>
In addition, Python has a collections.Counter() class which is designed
specifically for counting things. (NOTE: You will still need to turn the lists
into tuples.)
>>> from collections import Counter
>>> counter = Counter()
>>> for item in x:
... counter[tuple(item)] += 1
...
>>> counter
Counter({('coffee', 'cola', 'juice', 'tea'): 2, ('cola', 'coffee', 'juice', 'tea'): 1})
>>>
Counter() is a subclass of dict(), so all the dictionary methods still work.
>>> counter.keys()
[('coffee', 'cola', 'juice', 'tea'), ('cola', 'coffee', 'juice', 'tea')]
>>> k = counter.keys()[0]
>>> k
('coffee', 'cola', 'juice', 'tea')
>>> counter[k]
2
>>>
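The whole count can also be done in one step by feeding the tuples straight
into Counter():

    >>> from collections import Counter
    >>> Counter(tuple(item) for item in x)
    Counter({('coffee', 'cola', 'juice', 'tea'): 2, ('cola', 'coffee', 'juice', 'tea'): 1})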
|
how to check/uncheck the checkboxes using jquery in python web.py
Question: I am using web.py framework to develop a small webpage that displays all the
records from a database.
Below is my code
**list_page.html**
$def with ( select_query )
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>List Page</title>
</head>
<body>
<form method="POST" action="/retrieve">
<table border="1">
<tr>
<td>Select</td><td>Column_two</td><td>Column_three</td><td>Column_four</td><td>Column_five</td>
</tr>
$for r in select_query:
<tr>
<td><p align = "center"><input type="checkbox" id="$r.id" value="" name="$r.id"/></p></td>
<td>$r.Listing_Name</td><td>$r.Address</td><td>$r.Pincode</td><td>$r.Phone</td>
</tr>
</table>
<br/>
<p><button id="submit" name="select_unselect">Select All/Unselect All</button></p>
<p><button id="submit" name="submit">Retrieve</button></p>
</form>
So from the above html page the results from the database will appear in the
form of a table.
Actually, I am trying to implement checking/unchecking of the checkboxes with
jQuery in this python app.
Below is my **index.py** code
import web
render = web.template.render('templates/')
db = web.database(dbn='mysql', db='Browser_Date', user='root', pw='redhat')
urls = ('/', 'Listpage',)
app = web.application(urls, globals())
class Listpage:
def GET(self):
select_query = db.select('File_upload')
return render.list_page(select_query)
def POST(self):
i = web.input(groups = {})
ids = i.keys()
........
........
web.header('Content-Type','text/csv')
web.header('Content-disposition', 'attachment; filename=csv_file.csv')
return csv_file.getvalue()
if __name__ == "__main__":
web.internalerror = web.debugerror
app.run()
The concept is the web page(html displayed above) will display the records
from the database, and based on selection of the records by
checking/unchecking and after clicking a `retrieve` button, a csv file will be
generated with the selected records on the page
So now I am trying to check/uncheck all the checkboxes at once using jQuery,
but I don't know how to start or where to put the jQuery code in the HTML
above. I googled jQuery but it's really confusing, so I approached SO.
Basically I am a newbie in web development with no idea how to implement/use
jQuery. Can anyone please show me how to implement the check/uncheck-all
functionality in the above code/HTML file, so that I can extend the code
easily later?
Answer: Python, web.py, html, javascript and jQuery ARE confusing.
It is powerful, but also quite confusing when you first try to get your head
around it. You have to think in terms of a hybrid, multi-platform application,
since you're really merging at least three distinct technologies into one user
experience.
The web.py server and templates do nothing more than deliver STATIC content to
a web browser. That's what a web server does. The templating code allows you
to make decisions, in python on the server side, based on querystring
arguments and such, but ultimately all you're really doing is delivering
_code_ (html+javascript) to another interpreter (the browser). PHP, Ruby,
ASP... they're all the same in this regard - building a page for the browser.
Once the request is delivered, the browser takes over. If a user clicks
something, it'll be the javascript interpreter that handles that click.
Javascript (or jquery) can either a) redirect the user to a different page, b)
manipulate things locally or c) submit/receive information via ajax.
(jQuery IS javascript... it's just an awesome way to simplify some of the
eccentricities of pure javascript.)
So, a typical workflow:
* user requests `http://localhost/index`
* web.py delivers the home page via render()
* browser draws the home page html, and processes any onload() javascript (page is now idle, waiting for user interaction)
* user clicks something, javascript function evaluates what was clicked and decides to request more data
* javascript creates and issues an XmlHttpRequest (or in jQuery... $.ajax() )
* web.py handles the request, let's say it was to /home_tab_2
* javascript updates the page (with element.innerHTML() or in jquery `$(element).append()` or whatever)
I hope this helps, as writing this down really helped me with my first ajax
application.
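As a concrete starting point, here is a minimal sketch of a select-all/unselect-
all handler (it uses `.attr()` because your page loads jQuery 1.3.2; the toggle
variable name is hypothetical):

    var allChecked = false;
    $(document).ready(function () {
        $('button[name="select_unselect"]').click(function () {
            allChecked = !allChecked;
            $('input[type="checkbox"]').attr('checked', allChecked);
            return false;  // keep the button from submitting the form
        });
    });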
|
Qt formlayout not expanding qplaintextedit vertically
Question: I'm confused why a **QPlainTextEdit** widget will not resize vertically when
added to a **QFormLayout**. In the code below the text field correctly scales
up horizontally, but does not scale up vertically.
Can anyone explain this behavior and offer a solution? I've tried all the
tricks I know to no avail.
from PyQt4 import QtGui
class Diag(QtGui.QDialog):
def __init__(self, parent, *args, **kwargs):
QtGui.QDialog.__init__(self, parent)
layout = QtGui.QFormLayout(self)
widg = QtGui.QPlainTextEdit(self)
layout.addRow('Entry', widg)
if __name__ == '__main__': #pragma: no cover
app = QtGui.QApplication([])
window = Diag(None)
window.show()
app.exec_()
Here is an example of the QPlainTextEdit widget not resizing vertically:

This is on Windows 7 using PyQt 4.5.2 and Python 32-bit 2.6.
Thanks.
Answer: It seems that, by default, a `QFormLayout` will only resize the height of its
fields according to their `sizeHint`.
To change this behaviour, adjust the [vertical
stretch](http://doc.qt.nokia.com/4.8-snapshot/qsizepolicy.html#setVerticalStretch)
as appropriate:
policy = widg.sizePolicy()
policy.setVerticalStretch(1)
widg.setSizePolicy(policy)
|
Argparse subparser: hide metavar in command listing
Question: I'm using the Python argparse module for command line subcommands in my
program. My code basically looks like this:
import argparse
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(title="subcommands", metavar="<command>")
subparser = subparsers.add_parser("this", help="do this")
subparser = subparsers.add_parser("that", help="do that")
parser.parse_args()
When running "python test.py --help" I would like to list the available
subcommands. Currently I get this output:
usage: test.py [-h] <command> ...
optional arguments:
-h, --help show this help message and exit
subcommands:
<command>
this do this
that do that
Can I somehow remove the `<command>` line in the subcommands listing and still
keep it in the usage line? I have tried to give help=argparse.SUPPRESS as
argument to add_subparsers, but that just hides all the subcommands in the
help output.
Answer: I solved it by adding a new HelpFormatter that just removes the line if
formatting a PARSER action:
class SubcommandHelpFormatter(argparse.RawDescriptionHelpFormatter):
def _format_action(self, action):
parts = super(argparse.RawDescriptionHelpFormatter, self)._format_action(action)
if action.nargs == argparse.PARSER:
parts = "\n".join(parts.split("\n")[1:])
return parts
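To activate it, pass the formatter class when constructing the parser:

    parser = argparse.ArgumentParser(formatter_class=SubcommandHelpFormatter)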
|
match string pattern in python
Question: I have a string that can contain links:
<a href="http://site1.com/">Hello</a> <a href="http://site2.com/">Hello2</a>
<a href="http://site3.com">Hello3</a> ...
How can I extract the text (not the link) of all html tags "Hello", "Hello2",
"Hello3" ... ? I'm thinking of a list that should contain all texts.
Answer: Using [lxml](http://codespeak.net/lxml/):
import lxml.html as LH
content = '''
<a href="http://site1.com/">Hello</a> <a href="http://site2.com/">Hello2</a>
<a href="http://site3.com">Hello3</a>
<a href="/">go <b>home</b>, dude!</a>
'''
doc = LH.fromstring(content)
texts = [elt.text_content() for elt in doc.xpath('//a')]
print(texts)
yields
['Hello', 'Hello2', 'Hello3', 'go home, dude!']
|
Python import Column Data from MySQL as Array
Question: I have this code which retrieves data from a mysql table. I am using Python's
MySQLdb module. I want EACH column's data based on the SELECT WHERE condition
to be retrieved under an array. For instance, in the code below, I want all
the data where location field is 'NY, US' to be retrieved under different
arrays - with each array representing different columns values.
import numpy
import MySQLdb
db = MySQLdb.connect("localhost", "root", "", "test")
cursor = db.cursor()
sql = "SELECT * FROM usa_new WHERE location = 'NY, US'"
try:
cursor.execute(sql)
results = cursor.fetchall()
discresults = {}
for row in results:
id = row[0]
location = row[1]
temp_f = row[2]
pressure_mb = row[3]
wind_dir = row[4]
wind_mph = row[5]
relative_humidity = row[6]
timestamp = row[7]
except:
print "Error: unable to fecth data"
db.close()
Is there something going wrong?
Answer: There is a data structure called 'list' in python which you can use as an array.
If what I understood from your question is "get the results in arrays
categorized by columns, to be stored in local lists", then here is a simple
implementation. Note that I fetch the rows one by one, matching the given
criteria, as that is good practice:
    import MySQLdb
    db = MySQLdb.connect("localhost", "root", "", "test")
    cursor = db.cursor()
    # one empty list per column you want to collect
    id, location, temp_f, pressure_mb, wind_dir, wind_mph, relative_humidity, timestamp = ([] for _ in range(8))
    sql = "SELECT * FROM usa_new WHERE location = 'NY, US'"
    try:
        cursor.execute(sql)
        rcount = int(cursor.rowcount)
        for r in range(rcount):
            row = cursor.fetchone()
            id.append(row[0])
            location.append(row[1])
            temp_f.append(row[2])
            pressure_mb.append(row[3])
            wind_dir.append(row[4])
            wind_mph.append(row[5])
            relative_humidity.append(row[6])
            timestamp.append(row[7])
    except:
        print "Error: unable to fetch data"
    db.close()
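If you prefer `fetchall()`, the rows can also be transposed into per-column
lists in one step (a sketch, assuming the same 8-column layout and at least one
matching row):

    results = cursor.fetchall()
    (ids, locations, temps_f, pressures_mb,
     wind_dirs, wind_mphs, humidities, timestamps) = map(list, zip(*results))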
|
Python-Django: ifchanged template tag
Question: Here I am getting employees with durations from the database. The same
employee can have 2 or 3 records, so I sum that employee's durations and display
the total; when the employee ID changes, I calculate and display the next
employee's total. I need to find how many records each employee ID has, via
`{{ forloop.counter }}` individually.
Here are my problems:
1. When my loop reaches `{% ifchanged eachSc.laEmpNum %}` it should print `{{ result.0 }}`; the next time, `{{ result.1 }}` and `{{ result.2 }}`, and so on...
2. When my loop reaches `{% ifchanged eachSc.laEmpNum %}`, my `{{ forloop.counter }}` should start from 0 (zero) again.
`result = [5.5, 4.5, 1.3]`, which I am getting from `view.py`
{% for eachSc in DBShots1 %}
<tr>
{% ifchanged eachSc.laEmpNum %}
<td bgcolor="#FFFACD" width="1%">Tot={{ result }}</td>
{% endifchanged %}
</tr>
<td bgcolor="#FFFACD" width="1%">{{ forloop.counter }} </td>
<td bgcolor="#CCFACD" width="1%">{{ eachSc.sName }}</td>
<td bgcolor="#CCF0F5" width="1%">{{ eachSc.duration }}</td>
<td bgcolor="#CCFACD" width="1%">{{ eachSc.frames }}</td>
<td bgcolor="#CCFACD" width="5%">{{ GetEmpDept }} - {{ getEmpName.emp_name }} - {{ eachSc.laEmpNum }}</td>
{% endfor %}
Answer: I believe there are a couple of issues to deal with here. First of all I see
you want to print an item of the 'result' variable, depending on the position
of the forloop counter. This is not directly possible in django templates (for
various fair reasons). To quickly solve this without reorganizing your data in
the view, you can define a custom filter that simply returns a list item on
the specified index. You could put this in your templatetags/myfilters.py:
from django import template
register = template.Library()
@register.filter
def getitem(mylist, index):
return mylist[index]
Then, to reset the forloop counter when 'laEmpNum' changes you should use the
'regroup' django template tag like this:
{% load myfilters %}
{% regroup DBShots1 by laEmpNum as eachScList %}
{% for eachScGrp in eachScList %}
{% for eachSc in eachScGrp.list %}
<tr>
<td bgcolor="#FFFACD" width="1%">{{ forloop.counter }} </td>
<td bgcolor="#CCFACD" width="1%">{{ eachSc.sName }}</td>
<td bgcolor="#CCF0F5" width="1%">{{ eachSc.duration }}</td>
<td bgcolor="#CCFACD" width="1%">{{ eachSc.frames }}</td>
<td bgcolor="#CCFACD" width="5%">{{ GetEmpDept }} - {{ getEmpName.emp_name }} - {{ eachSc.laEmpNum }}</td>
</tr>
{% endfor %}
<tr>
<td> </td><td> </td>
<td bgcolor="#FFFACD" width="1%">Tot={{ result|getitem:forloop.counter0 }}</td>
</tr>
{% endfor %}
The first line loads our custom filter library. The last part uses the custom
filter to retrieve a result item based on the iteration over the 'laEmpNum'
grouper.
|
python html parsing
Question: I have the following problem:
I would like to parse html files and get links from the html file. I can get
links with the following code:
    from HTMLParser import HTMLParser

    class MyHTMLParser(HTMLParser):
links=[]
def __init__(self,url):
HTMLParser.__init__(self)
self.url = url
def handle_starttag(self, tag, attrs):
try:
if tag == 'a':
for name, value in attrs:
if name == 'href':
if value[:5]=="http:":
self.links.append(value)
except:
pass
But I don't want to get audio files, video files, etc. I only want to get html
links. How can I do that?
Answer: > I can check link ending and if it is particular format I can avoid appending
> that link to the list. Is there other way?
You could look at the `'Content-Type'` header:
import urllib2
url = 'http://stackoverflow.com/questions/13431060/python-html-parsing'
req = urllib2.Request(url)
req.get_method = lambda : 'HEAD'
response = urllib2.urlopen(req)
content_type = response.headers.getheader('Content-Type')
print(content_type)
yields
text/html; charset=utf-8
* * *
Many thanks to @JonClements for `req.get_method = lambda : 'HEAD'`. More info
on this and alternate methods for sending a HEAD request can be found
[here](http://stackoverflow.com/q/107405/190597).
|
Python configuration library
Question: I am looking for a python configuration library that merges multiple text
configuration files into a single object, just like json.
Does anybody know a good one?
Answer: I wrote [pymlconf](http://pypi.python.org/pypi/pymlconf) for this purpose. The
configuration syntax is [YAML](http://www.yaml.org/).
**For example:**
_Config files:_
#app/conf/users/sites.mysite.conf:
name: mysite.com
owner:
name: My Name
phone: My Phone Number
address: My Address
#app/conf/admin/root.conf:
server:
version: 0.3a
sites:
admin:
name: admin.site.com
owner:
name: Admin Name
phone: Admin Phone Number
address: Admin Address
#app/conf/admin/server.conf:
host: 0.0.0.0
port: 80
#../other_path/../special.conf:
licence_file: /path/to/file
log_file: /path/to/file
#app/src/builtin_config.py:
_builtin_config={
'server':{'name':'Power Server'}
}
OR:
_builtin_config="""
server:
name: Power Server
"""
_Then look at single line usage:_
from pymlconf import ConfigManager
from app.builtin_config import _builtin_config
config_root = ConfigManager(
_builtin_config,
['app/conf/admin','app/conf/users'],
'../other_path/../special.conf')
_Fetching config entries:_
# All from app/conf/users/sites.mysite.conf
print config_root.sites.mysite.name
print config_root.sites.mysite.owner.name
print config_root.sites.mysite.owner.address
print config_root.sites.mysite.owner.phone
# All from app/conf/admin/root.conf
print config_root.sites.admin.name
print config_root.sites.admin.owner.name
print config_root.sites.admin.owner.address
print config_root.sites.admin.owner.phone
print config_root.server.name # from _builtin_config
print config_root.server.version # from app/conf/admin/root.conf
print config_root.server.host # from app/conf/admin/server.conf
print config_root.server.port # from app/conf/admin/server.conf
print config_root.licence_file # from ../other_path/../special.conf
print config_root.log_file # from ../other_path/../special.conf
It seems this covers your problem, but you can also fork it on
[github](https://github.com/pylover/pymlconf)
Links:
1. [Python package index](http://pypi.python.org/pypi/pymlconf)
2. [Source on github](https://github.com/pylover/pymlconf)
3. [Documentation](http://packages.python.org/pymlconf/)
|
How to use the Python GTK3 (gi.repository) gdk_event_get_scroll_deltas() method
Question: I'm attempting to port a Python program from GTK2 to GTK3.
I understand the use of GDK_SMOOTH_SCROLL_MASK but can find no reference to an
implementation of
[gdk_event_get_scroll_deltas()](http://developer.gnome.org/gdk3/3.4/gdk3-Events.html#gdk-
event-get-scroll-deltas).
Answer: It should be in Gdk.Event.get_scroll_deltas. If not, you should check your
library version.
eric@hum:():~$ python3
Python 3.2.3 (default, Jun 25 2012, 23:10:56)
[GCC 4.7.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from gi.repository import Gdk
>>> "get_scroll_deltas" in dir(Gdk.Event)
True
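For reference, a minimal handler sketch (assuming the widget has the smooth-
scroll event mask set; in PyGObject the method returns a (handled, delta_x,
delta_y) tuple):

    def on_scroll(widget, event):
        has_deltas, dx, dy = event.get_scroll_deltas()
        if has_deltas:
            print("scrolled by", dx, dy)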
|
How to support multiple versions of python for urllib2?
Question: I want my code to support python >= 2.5. The code uses urllib2 like this:
handle = urllib2.urlopen(req, timeout)
This worked fine for python2.6 & python2.7 but python2.5 does not support
explicit timeout so we have to set default timeout for sockets like this:
import socket
socket.setdefaulttimeout(timeout)
handle = urllib2.urlopen(req)
Although this works fine for higher versions of python (2.6 & 2.7) but I want
to set timeout explicitly where it is supported. My question is how to do this
elegantly? Should I do a version check?
Answer: There's always `try`, which is pretty much perfect for this sort of thing:
import socket
import urllib2
try:
handle = urllib2.urlopen(req, timeout = timeout)
except TypeError:
socket.setdefaulttimeout(timeout)
handle = urllib2.urlopen(req)
(Make sure to use the keyword argument for this in the first call, otherwise
python 2.5 will think you are passing it the 'data' argument.)
|
Issue getting response from AJAX call in GAE Python
Question: I'm just trying to make a simple AJAX call on the click of a button, to pass
some data from a textbox and retrieve the same after the AJAX call. But
something is messy in here, causing an alert without any data.
Here is my ajaxTest.html
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.js"
type="text/javascript">
</script>
<script>
function getData() {
var vdata = $('#txt').val();
$.ajax({
type: "GET",
url: "/processAjax",
data: vdata,
success: function (responseText) {
alert(responseText);
}
});
}
</script>
</head>
<body>
<input id="txt" type="textbox" runat="server" />
<input type="button" runat="server" onclick="getData();" />
</body>
</html>
Here is my main.py
    import httplib2
    import os
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp import template
    from google.appengine.ext.webapp.util import run_wsgi_app
class AjaxCall(webapp.RequestHandler):
def get(self):
template_data = {}
template_path = 'ajaxTest.html'
self.response.out.write(template.render(template_path,template_data))
class ProcessAjax(webapp.RequestHandler):
def get(self):
inputdata = self.request.get("inputData")
self.response.out.write(inputdata)
application = webapp.WSGIApplication(
[('/processAjax',ProcessAjax),
('/ajaxPage',AjaxCall)
],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
Answer: Your AJAX call doesn't include an `inputData` query field at all. Update your
jQuery `$.ajax()` `data` parameter:
data: { inputData: vdata },
|
How to redirect JVM output without tear up output from the application?
Question: Recently I have been writing some micro-benchmark code, so I have to print out
the JVM's behavior along with my benchmark information. I use
-XX:+PrintCompilation
-XX:+PrintGCDetails
and other options to get the JVM status. For benchmark information, I simply
use `System.out.print()` method. Because I need to know the order of the
message I printed and the JVM output.
I can get good result when I just print them out in the console, although the
JVM output sometimes tear my messages up, but since they are in different
threads, it is understandable and acceptable.
When I need to do some batch benchmarks, I'd like to _**`redirect the output
into a file`**_ using `shell redirection (> on a Linux system)`, and use python
to get the results from the file and analyse them.
Here is the problem:
**`The JVM output always overlaps with the messages I print in the Java
application.`** It ruins the completeness of the messages.
Any idea how to deal with this situation? I need **`both the JVM output and the
application output in the same place, in order to preserve the sequence (which
is important), and without them overlapping each other, so I don't lose
anything.`**
Answer: I would suggest taking a slight detour and looking at using Java
Instrumentation APIs - use (write) a simple **Java Agent** to do this. From
your benchmarking perspective, this will give you far more power as well. You
could use your Java Agent to log everything (and hence there would be no
contention between different logger threads).
You can read more at <http://www.javabeat.net/2012/06/introduction-to-java-
agents/> or <http://today.java.net/pub/a/today/2008/04/24/add-logging-at-
class-load-time-with-instrumentation.html>
|
Type error in Python: need a single Unicode character as parameter
Question: When I try to convert a unicode variable to float using
`unicodedata.numeric(variable_name)`, I get this error "need a single Unicode
character as parameter". Does anyone know how to resolve this?
Thanks!
Here is the code snippet I'm using:
f = urllib.urlopen("http://compling.org/cgi-bin/DAL_sentence_xml.cgi?sentence=good")
s = f.read()
f.close()
doc = libxml2dom.parseString(s)
measure = doc.getElementsByTagName("measure")
valence = unicodedata.numeric(measure[0].getAttribute("valence"))
activation = unicodedata.numeric(measure[0].getAttribute("activation"))
This is the error I'm getting when I run the code above
Traceback (most recent call last):
File "sentiment.py", line 61, in <module>
valence = unicodedata.numeric(measure[0].getAttribute("valence"))
TypeError: need a single Unicode character as parameter
Answer: **Summary:** Use `float()` instead.
The `numeric` function takes a **single** character. It does not do general
conversions:
>>> import unicodedata
>>> unicodedata.numeric('Β½')
0.5
>>> unicodedata.numeric('12')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: need a single Unicode character as parameter
If you want to convert a number to a `float`, use the `float()` function.
>>> float('12')
12.0
It won't do that Unicode magic, however:
>>> float('Β½')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: could not convert string to float: 'Β½'
|
imdbpy2sql must supply URI for the database connection
Question: I'm providing the following command. Please tell me where I'm going wrong.
**$ imdbpy2sql.py -d /home/santoshvm/Documents/IMDB DataBase/DataFiles -u URI sqlite:////home/santoshvm/Documents/IMDB DataBase/SQLite Database File/IMDB.sqlite --sqlite-transactions**
2012-11-17 20:11:34,585 WARNING [imdbpy.parser.sql.aux] /usr/local/lib/python2.7/dist-packages/IMDbPY-4.9-py2.7.egg/imdb/parser/sql/__init__.py:125: Unable to import the cutils.ratcliff function. Searching names and titles using the "sql" data access system will be slower.
2012-11-17 20:11:34,586 WARNING [imdbpy.parser.sql.aux] /usr/local/lib/python2.7/dist-packages/IMDbPY-4.9-py2.7.egg/imdb/parser/sql/__init__.py:332: Unable to import the cutils.soundex function. Searches of movie titles and person names will be a bit slower.
You must supply the URI for the database connection
imdbpy2sql.py usage:
/usr/local/bin/imdbpy2sql.py -d /directory/with/PlainTextDataFiles/ -u URI [-c /directory/for/CSV_files] [-o sqlobject,sqlalchemy] [-i table,dbm] [--CSV-OPTIONS] [--COMPATIBILITY-OPTIONS]
**What is the right way of giving the URI?**
Answer: The problem is that you have whitespace in your path:
    /home/santoshvm/Documents/IMDB DataBase/SQLite Database File/IMDB.sqlite
You should either rename your directories, quote the whole path on the command
line, or escape each whitespace with a "\":
    /home/santoshvm/Documents/IMDB\ DataBase/SQLite\ Database\ File/IMDB.sqlite
|
Understanding imports in views.py - Django
Question: I have a very big python list ( ~ 1M strings) defined in a .py file. I import
it in my views.py to access the list in my views. My question is does the list
gets loaded in RAM for every user coming to the web app, or does it loads just
one single time and is used for all users ?
Answer: A Django process is loaded once and remains active to handle incoming
requests. So if you define the list as a global variable, it stays in RAM and
all is fine. It is discouraged to manipulate the list though.
|
Iterating over rows in a column with XLRD
Question: I have been able to get the column to output its values in a separated
list. However, I need to retain these values and use them one by one to perform
an Amazon lookup. The Amazon lookup is not the problem; getting xlrd to give
one value at a time has been the problem. Is there also an efficient method of
setting a timer in Python? The only answer I have found to the timer issue is
recording the time the process started and counting from there. I would prefer
just a timer. This question is somewhat two parts; here is what I have done so
far.
I load the spreadsheet with xlrd using argv[1] and copy it to a new spreadsheet
named by argv[2]; argv[3] needs to be the timer value, but I am not that far
yet.
I have tried:
import sys
import datetime
import os
import xlrd
from xlrd.book import colname
from xlrd.book import row
import xlwt
import xlutils
import shutil
import bottlenose
AMAZON_ACCESS_KEY_ID = "######"
AMAZON_SECRET_KEY = "####"
print "Executing ISBN Amazon Lookup Script -- Please be sure to execute it python amazon.py input.xls output.xls 60(seconds between database queries)"
print "Copying original XLS spreadsheet to new spreadsheet file specified as the second arguement on the command line."
print "Loading Amazon Account information . . "
amazon = bottlenose.Amazon(AMAZON_ACCESS_KEY_ID, AMAZON_SECRET_KEY)
response = amazon.ItemLookup(ItemId="row", ResponseGroup="Offer Summaries", SearchIndex="Books", IdType="ISBN")
shutil.copy2(sys.argv[1], sys.argv[2])
print "Opening copied spreadsheet and beginning ISBN extraction. . ."
wb = xlrd.open_workbook(sys.argv[2])
print "Beginning Amazon lookup for the first ISBN number."
for row in colname(colx=2):
print amazon.ItemLookup(ItemId="row", ResponseGroup="Offer Summaries", SearchIndex="Books", IdType="ISBN")
I know this is a little vague. Should I perhaps try doing something like
`column = colname(colx=2)` so that I could then do `for row in column:`? Any
help or direction is greatly appreciated.
Answer: The use of `colname()` in your code is simply going to return the **name** of
the column (e.g. 'C' by default in your case unless you've overridden the
name). Also, the use of colname is outside the context of the contents of your
workbook. I would think you would want to work with a specific _sheet_ from
the workbook you are loading, and from within that sheet you would want to
reference the values of a column (2 in the case of your example), does this
sound somewhat correct?
wb = xlrd.open_workbook(sys.argv[2])
sheet = wb.sheet_by_index(0)
for row in sheet.col(2):
print amazon.ItemLookup(ItemId="row", ResponseGroup="Offer Summaries", SearchIndex="Books", IdType="ISBN")
Although I think looking at the call to `amazon.ItemLookup()` you probably
want to refer to `row` and **not** to `"row"` as the latter is simply a string
and the former is the actual contents of the variable named `row` from your
for loop.
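One more detail: `sheet.col(2)` yields `Cell` objects, so you would likely pass
the cell's value to the lookup rather than the cell itself:

    for cell in sheet.col(2):
        isbn = cell.value   # the cell contents, not the Cell object
        print amazon.ItemLookup(ItemId=isbn, ResponseGroup="Offer Summaries", SearchIndex="Books", IdType="ISBN")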
|
Read database entries from mysql in the form of dictionary in python
Question: > **Possible Duplicate:**
> [Python: use mysqldb to import a MySQL table as a
> dictionary?](http://stackoverflow.com/questions/2180226/python-use-mysqldb-
> to-import-a-mysql-table-as-a-dictionary)
I currently get the db entry in the form of array using the commands
db_conn = mdb.connect(host = "localhost", user = "username", passwd = "password", db = "db_name", charset='utf8')
cursor = db_conn.cursor()
cursor.execute(sql_query)
rows = cursor.fetchall()
I get the row values in the form of array. Can I get in the form of dictionary
with column names as keys. I couldn't find any such command.
Answer: Set the cursor factory when you connect:
db_conn = mdb.connect(
host="localhost", user="username", passwd="password", db="db_name",
charset='utf8', cursorclass=mdb.cursors.DictCursor)
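After that, each fetched row is a dict keyed by column name (a quick sketch,
assuming a hypothetical `users` table):

    cursor = db_conn.cursor()
    cursor.execute("SELECT id, name FROM users")
    for row in cursor.fetchall():
        print row['id'], row['name']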
|
HttpError 403 when requesting https://www.googleapis.com/bigquery/v2/projects/publicdata/queries?alt=json returned "Access Denied: Job publicdata:
Question: I'm getting the following error while trying to run a query on BigQuery using
GAE python.
HttpError 403 when requesting https://www.googleapis.com/bigquery/v2/projects/publicdata/queries?alt=json returned "Access Denied: Job publicdata:job_c08d8f254c0449c2b3e26202e62ca5fa: RUN_QUERY_JOB">
Here is main.py code
import httplib2
import os
from apiclient.discovery import build
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app
from oauth2client.appengine import AppAssertionCredentials
# BigQuery API Settings
SCOPE = 'https://www.googleapis.com/auth/bigquery'
PROJECT_NUMBER = 'publicdata' # REPLACE WITH YOUR Project ID
# Create a new API service for interacting with BigQuery
credentials = AppAssertionCredentials(scope=SCOPE)
httpss = credentials.authorize(httplib2.Http())
bigquery_service = build('bigquery', 'v2', http=httpss)
class GetTableData(webapp.RequestHandler):
def get(self):
queryData = {'query':'SELECT word,count(word) AS count FROM publicdata:samples.shakespeare GROUP BY word;',
'timeoutMs':10000}
            jobCollection = bigquery_service.jobs()
            queryReply = jobCollection.query(projectId=PROJECT_NUMBER,body=queryData).execute()
self.response.out.write(queryReply)
application = webapp.WSGIApplication(
[('/queryTableData',GetTableData)
],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
Here is App.yaml
application: bigquerymashup
version: 1
runtime: python
api_version: 1
handlers:
- url: /favicon\.ico
static_files: favicon.ico
upload: favicon\.ico
- url: /css
static_dir: css
- url: /js
static_dir: js
- url: /img
static_dir: img
- url: .*
script: main.py
I'm using the App Engine Service Account for authentication.
Answer: It looks like you are trying to run a query under the 'publicdata' project
(which is internal to BigQuery, and thus you don't have permission to access
it). Queries must be run using the Project Number of the Google Developer
project you created. See the [BigQuery REST API Quick
Start](https://developers.google.com/bigquery/docs/hello_bigquery_api) page
for more information.
One thing: your Class name in the example is "GetTableData" - not sure if you
are trying to [List
Tabledata](https://developers.google.com/bigquery/docs/reference/v2/tabledata/list),
or retrieve a [Table
resource](https://developers.google.com/bigquery/docs/reference/v2/tables/get)?
In any case, here are some Python snippets that demonstrate how you might make
these API calls using the [Google Python API
client](http://code.google.com/p/google-api-python-client/).
def get_table(service, project_number, dataset_id, table_id):
"""Get Table information.
Args:
service: Authorized BigQuery API client.
project_number: The current Project number.
dataset_id: The name of the dataset.
table_id: Id of the relevant table.
"""
tables = service.tables()
try:
table_info = tables.get(projectId=project_number,
datasetId=dataset_id,
tableId=table_id).execute()
print 'Table information:\n'
print 'Table name: %s' % table_info['id']
print 'Table creation time: %s' % table_info['creationTime']
except errors.HttpError, error:
print 'Could not get Table information: %s' % error
def list_table_data(service, project_number, dataset_id, table_id):
"""Returns table data from a specific set of rows.
Args:
service: Authorized BigQuery API client.
project_number: The current Project number.
dataset_id: The name of the dataset.
table_id: The name of the table.
"""
try:
table = service.tabledata()
        table_data = table.list(projectId=project_number,
                                datasetId=dataset_id,
                                tableId=table_id,
                                maxResults=10).execute()
print 'Total Rows: %s' % table_data['totalRows']
for row in table_data['rows']:
data = []
for values in row['f']:
value = values['v'] if values['v'] is not None else ''
data.append(value)
print ' '.join(data)
      except errors.HttpError, error:
print 'Could not list Table data. %s' % error
|
A more pythonic approach to the following algorithm
Question: I changed the final structure to a more logical one:
{'state1': {'city1': ['dict1', 'dict2']}, 'state2': {'City2': ['dict3']}}
and the code:
dir_dict = {}
for one in objects:
state = one.dir.city.state.name
        city = one.dir.city.name
address = one.dir.address
if state not in dir_dict:
dir_dict[state] = {}
if city not in dir_dict[state]:
dir_dict[state][city] = []
dir_dict[state][city].append(address)
I will still implement @Eric's code.
Answer:
from collections import defaultdict
dir_dict = defaultdict(lambda: defaultdict(set))
for x in objects:
dir_dict[x.state][x.city].add(x.address)
dir_list = dir_dict.keys()
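Adapted to the attribute paths from the question, and using lists to match the
desired final structure (a sketch):

    from collections import defaultdict

    dir_dict = defaultdict(lambda: defaultdict(list))
    for one in objects:
        dir_dict[one.dir.city.state.name][one.dir.city.name].append(one.dir.address)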
|
Go to in Python 3
Question: Python 3 has no GOTO or anything like it, but I have an algorithm that needs
GOTO-type functionality. Maybe someone can suggest a way out?
Main menu
1-New Game 2-Options 3-Exit
User actions: enter the main menu, enter the options menu, enter the main menu
AGAIN, and so on. So in the code I don't know how to go back and jump to the
main-menu code above.
Answer: You could use a dictionary: 'user choice' -> 'corresponding action' e.g.:
import sys
def foo():
print('foo')
actions = {'1': foo, '2': sys.exit}
def read_choice(choices, prompt):
c = None
while c not in choices:
c = input(prompt)
return c
while True:
# get user input
x = read_choice(actions, 'Input 1 to do foo or 2 to exit')
actions[x]() # act on it
See [complete example](http://ideone.com/I3T3wq) that also shows how to create
menu dynamically from a configuration file.
|
Python Mapper on Amazon EMR
Question: I'm trying to run a Python script as a mapper on Amazon EMR.
The initial portion of my script resembles:
import sys
import decimal
def some_function(sensor_record):
return 1
That results in the following error output:
[...]/mapper_script.py: line 1: import: command not found
[...]/mapper_script.py: line 2: import: command not found
[...]/mapper_script.py: line 5: syntax error near unexpected token `('
[...]/mapper_script.py: line 5: `def some_function(sensor_record):'
Any thoughts on this? I've used EMR successfully before but not with Python. I
get the same result running the job through the web interface and using the
boto library.
Answer: You just need to add
#!/usr/bin/env python
at the beginning of the script to ensure that EMR picks the right language.
|
Python Pandas: remove entries based on the number of occurrences
Question: I'm trying to remove entries from a data frame which occur less than 100
times. The data frame `data` looks like this:
pid tag
1 23
1 45
1 62
2 24
2 45
3 34
3 25
3 62
Now I count the number of tag occurrences like this:
bytag = data.groupby('tag').aggregate(np.count_nonzero)
But then I can't figure out how to remove those entries which have low
count...
Answer: Edit: Thanks to @WesMcKinney for showing this much more direct way:
data[data.groupby('tag').pid.transform(len) > 1]
* * *
import pandas
import numpy as np
data = pandas.DataFrame(
{'pid' : [1,1,1,2,2,3,3,3],
'tag' : [23,45,62,24,45,34,25,62],
})
bytag = data.groupby('tag').aggregate(np.count_nonzero)
tags = bytag[bytag.pid >= 2].index
print(data[data['tag'].isin(tags)])
yields
pid tag
1 1 45
2 1 62
4 2 45
7 3 62
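For the original requirement (dropping tags that occur fewer than 100 times),
the same pattern applies with the larger threshold:

    data[data.groupby('tag').pid.transform(len) >= 100]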
|
Fetching cookie enabled page in python
Question: I want to download a webpage using python for some web scraping task. The
problem is that the website requires cookies to be enabled, otherwise it
serves a different version of the page. **I did implement a solution that solves
the problem, but it is inefficient in my opinion. Need your help to improve
it!**
This is how I go over it now:
import requests
import cookielib
cj = cookielib.CookieJar()
user_agent = {'User-agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'}
#first request to get the cookies
requests.get('https://ccirecruit.cox.com/psc/RECRUIT/EMPLOYEE/HRMS/c/HRS_HRAM.HRS_CE.GBL?JobOpeningId=42845&SiteId=1&Page=HRS_CE_JOB_DTL&PostingSeq=1&',headers=user_agent, timeout=2, cookies = cj)
# second request reusing cookies served first time
r = requests.get('https://ccirecruit.cox.com/psc/RECRUIT/EMPLOYEE/HRMS/c/HRS_HRAM.HRS_CE.GBL?JobOpeningId=42845&SiteId=1&Page=HRS_CE_JOB_DTL&PostingSeq=1&',headers=user_agent, timeout=2, cookies = cj)
html_text = r.text
Basically, I create a `CookieJar` object and then send two consecutive
requests for the same URL. **First time it serves me the bad page but as
compensation gives cookies. Second request reuses this cookie and I get the
right page.**
The question is: **Is it possible to just use one request and still get the
right cookie enabled version of a page?**
I tried to send `HEAD` request first time instead of `GET` to minimize
traffic, in this case cookies aren't served. Googling for it didn't give me
the answer either. So, it is interesting to understand how to make it
efficiently! Any ideas?!
Answer: You need to make the request to get the cookie, so no, you cannot obtain the
cookie and reuse it without making two separate requests. If by "cookie-
enabled" you mean the version that recognizes your script as having cookies,
then it all depends on the server and you could try:
* hardcoding the cookies before making the first request,
* requesting the smallest possible page (with the smallest possible response yet still containing cookies) to obtain the first cookie,
* trying to find some workaround (maybe adding some GET argument will fool the site into believing you have cookies - but you would need to find it for this specific site).
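As a side note, a requests `Session` object persists cookies between calls
automatically, which at least removes the manual `CookieJar` bookkeeping (a
sketch, assuming a requests version that exposes `requests.Session`; it still
needs the two round-trips):

    import requests

    session = requests.Session()
    # url and user_agent as in the question;
    # cookies set by the first response are reused automatically
    session.get(url, headers=user_agent, timeout=2)
    html_text = session.get(url, headers=user_agent, timeout=2).text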
|
Groovy expand tuple/map to arguments
Question: Is it possible to expand a map to a list of method arguments
In Python it is possible, eg. [Expanding tuples into
arguments](http://stackoverflow.com/questions/1993727/expanding-tuples-into-
arguments)
I have a `def map = ['a':1, 'b':2]` and a method `def m(a,b)`
I want to write smt like `m(*map)`
Answer: The [spread operator](http://mrhaki.blogspot.com.es/2009/09/groovy-goodness-
spread-operator.html) (*) is used to tear a list apart into single elements.
This can be used to invoke a method with multiple parameters and then spread a
list into the values for the parameters.
List (lists in Groovy are closest related with tuples in
Python[1](http://groovy.329449.n5.nabble.com/Why-has-groovy-no-built-in-tuple-
support-like-map-and-lists-
td360532.html),[2](http://stackoverflow.com/questions/626759/whats-the-
difference-between-list-and-tuples-in-python)):
list = [1, 2]
m(*list)
Map:
map = [a: 1, b: 2]
paramsList = map.values().toList()
m(*paramsList)
An important point is that you pass the arguments by position.
|
saving data to txt file using python
Question: I am new to python, and I really need some help. I am doing this memory game
where I need to save the user, game score and time into a text file using
python. I have tried several ways to do it, but nothing seems to work. I need
to get the text that is shown after the game on the html page (statsParagraph)
into a *.txt file using python. In my html code I have this:
<form action="http://....python file addres" method="GET">
<p id="statsParagraph" name="user" value=""></p>
</form>
in my javascript code I have this:
function showstatistic(){
var user = prompt ("What is your name?","");
alert (user + ". game is over!")
var s="";
for(var i=0;i<statistic.length;i++){
var t=statistic[i].time;
var timeString= (t-(t%60))/60+":"+(t%60);
s += "User: " + user
+" <br/>"
+ " Game #"+ (i+1) +" "
+" <br/>"
+" Guessed: "+ statistic[i].guessed
+" <br/>"
+" Minus points: "+ statistic[i].minuses
+" <br/>"
+" Guessed right: "+ statistic[i].plusPoints
+" <br/>"
+" Time: "+ timeString
+" <br/>"
+" <br/>";
}
$("#statsParagraph").html(s);
}
Answer: You would open the file with
from sys import argv
script, filename = argv
target = open(filename, 'w')
then assign each piece of info to a variable with an appropriate name
    time = "0:42"    # placeholder example values; use the real stats here
    score = "7"
    user = "player1"
then type the following
target.write(time)
target.write("\n")
target.write(score)
target.write("\n")
target.write(user)
target.write("\n")
target.close()
when you run the file you would run it with an argument, that argument being
the file where you would like to write the info.
>python mymemorygame.py savegamefile.txt
You may also want to consider truncating the file before writing to it,
depending on how you are saving the info (multiple files or one file).
I hope this helps; apologies if it doesn't, as I may have misunderstood what it
is you are looking to do.
Alternatively (it may be harder to write but easier to manage/implement), you
could put the info into a list that you store in a single file, rather than
writing new .txt files all the time.
|
Python: efficient way to ensure attribute types within an object?
Question: What's the most efficient way (where "efficient" doesn't necessarily mean
_fast_ , but _"elegant"_ , or _"maintainable"_) to do type check when setting
attributes in an object?
I can use `__slots__` to define the allowed attributes, but how should I
constrain the types?
Surely I can write _"setter"_ methods for each attribute, but I find it a bit
cumbersome to maintain since my type checks are usually simple.
So I'm doing something like this:
import datetime
# ------------------------------------------------------------------------------
# MyCustomObject
# ------------------------------------------------------------------------------
class MyCustomObject(object):
pass
# ------------------------------------------------------------------------------
# MyTypedObject
# ------------------------------------------------------------------------------
class MyTypedObject(object):
attr_types = {'id' : int,
'start_time' : datetime.time,
'duration' : float,
'reference' : MyCustomObject,
'result' : bool,
'details' : str}
__slots__ = attr_types.keys()
# --------------------------------------------------------------------------
# __setattr__
# --------------------------------------------------------------------------
def __setattr__(self, name, value):
if name not in self.__slots__:
raise AttributeError(
"'%s' object has no attribute '%s'"
% (self.__class__.__name__, name))
if type(value) is not self.attr_types[name]:
raise TypeError(
"'%s' object attribute '%s' must be of type '%s'"
% (self.__class__.__name__, name,
self.attr_types[name].__name__))
# call __setattr__ on parent class
super(MyTypedObject, self).__setattr__(name, value)
Which works fine for my purpose:
>>> my_typed_object = MyTypedObject()
>>> my_typed_object.id = "XYZ"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 28, in __setattr__
TypeError: 'MyTypedObject' object attribute 'id' must be of type 'int'
>>> my_typed_object.id = 123
>>> my_typed_object.reference = []
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 28, in __setattr__
TypeError: 'MyTypedObject' object attribute 'reference' must be of type 'MyCustomObject'
>>> my_typed_object.reference = MyCustomObject()
>>> my_typed_object.start_time = "13:45"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 28, in __setattr__
TypeError: 'MyTypedObject' object attribute 'start_time' must be of type 'time'
>>> my_typed_object.start_time = datetime.time(13, 45)
Is there a better way to do this? Having worked with Python for a while now, I
feel like I'm reinventing the wheel.
Answer: You should ask yourself why you feel the need to do this. It's certainly not
very Pythonic. Normally in Python we don't demand that attributes have
specific types: instead, we document the expected types, and assume that any
actual parameters conform. Note that this can mean a completely unrelated type
that implements the same method: for example, we might expect that a parameter
is iterable, without specifically demanding that it inherits from list or
tuple.
|
Build variable into python zip script
Question: Before I start: I am trying to create a python zip script which will take a
snapshot of the target_dir, zip it, save it in the temp folder and give it the
filename from the "now" variable. This is the code I have:
#!/usr/bin/env python
import os
import sys
    import datetime
    import zipfile
now = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M")
target_dir = '/var/lib/data'
temp_dir='/tmp'
zip = zipfile.ZipFile('/tmp/example.zip', 'w', zipfile.ZIP_DEFLATED)
rootlen = len(target_dir) + 1
for base, dirs, files in os.walk(target_dir):
for file in files:
fn = os.path.join(base, file)
zip.write(fn, fn[rootlen:])
I can't figure out how to get this line to instead use the variable "now" and
"temp_dir" instead of hardcoding the destination:
zip = zipfile.ZipFile('/tmp/example.zip', 'w', zipfile.ZIP_DEFLATED)
I guess I want something like this (pseudo code):
zip = zipfile.ZipFile('<temp_dir>/<now>.zip', 'w', zipfile.ZIP_DEFLATED)
Can anyone show me how this should be done?
Answer: In short, `os.path.join(temp_dir, now + '.zip')`, but I'm not sure if that's
really what you're asking. Because I'm not sure how you could have gotten as
far as you have and gotten stuck here.
|
No template named index error
Question: I am new to Python, and I am trying to run a web.py app with Python Anywhere,
but I keep getting the `No template named index` error. I've modified
`wsgi.py` to use the following:
import web
import MySQLdb
urls = (
'/', 'index'
)
render = web.template.render('/home/user/templates/')
Any help would be greatly appreciated.
Answer: You've used the literal path `'/home/user/templates/'`. Unless your username
is actually `user`, there is no such directory, and therefore attempting to
read the `index` template out of that directory is going to fail.
If your username is, say, `rhpt`, you'd change that to
`'/home/rhpt/templates/'`.
Even better, you might want to use `os.path.expanduser('~/templates/')`
instead of hardcoding your username. (Then you can give your code to a friend,
or a client, and they can host it without having to edit the code.)
|
Using Java API in Scala to query views in Couchbase throws timeout exception
Question: EDIT: Note that this works perfectly in java 1.6 but fails in java 1.7.
I've been struggling to get the Couchbase 2.0 java API to work with views. It
works perfectly for getting and putting keys into a bucket.
When I run the scala code below using Java 1.7, I get the following exception:
scala> ERROR com.couchbase.client.ViewNode$EventLogger: Connection timed out: [localhost/127.0.0.1:8092(closed)]
I've also tried setting the timeout in the connection builder to no avail.
import java.net.URI
import com.couchbase.client.CouchbaseClient
import scala.collection.JavaConversions._
val uris = List(URI.create("http://127.0.0.1:8091/pools"))
val client = new CouchbaseClient(uris, "test", "")
val view = client.asyncGetView("date", "dates")
However, the python code below works perfectly, connects to the view, and has
the right output:
from couchbase.client import Couchbase
client = Couchbase("localhost:8091", "username", "password")
bucket = client["test"]
view = bucket.view("_design/date/_view/dates")
count = 0
for row in view:
count = count + 1
print(count)
Any ideas how to properly connect? I've tried to copy their examples exactly
in my code. Unfortunately using python is not an option for this project.
Answer: We are aware of this issue (http://www.couchbase.com/issues/browse/JCBC-151).
It's not your fault or scala's; it's just that our client currently has some
problems connecting with Java 7. Once this is fixed, I'm sure your code will
work as expected.
|
pyserial and wxpython matplotlib the read method failed in thread
Question: I am trying to build a GUI that receives data from the serial port and displays
it in a plot (using matplotlib). But when I open the port, the read() fails. I
just can't figure out why. Can anybody give me some advice please? That would
be appreciated! Here is part of my code:
class PlotFigure(wx.Frame):
"""Matplotlib wxFrame with animation effect"""
def __init__(self):
wx.Frame.__init__(self, None, wx.ID_ANY, title="Figure for figures", size=(1200, 1200))
# Matplotlib Figure
self.fig = Figure((6, 4), 100)
# bind the Figure to the backend specific canvas
self.canvas = FigureCanvas(self, wx.ID_ANY, self.fig)
# add a subplot
self.ax = self.fig.add_subplot(111)
# limit the X and Y axes dimensions
self.ax.set_ylim([-180, 180])
self.ax.set_xlim([0, POINTS])
self.ax.set_autoscale_on(False)
self.ax.set_xticks([])
# we want a tick every 10 point on Y (101 is to have 10
self.ax.set_yticks(range(-180, 180, 50))
# disable autoscale, since we don't want the Axes to ad
# draw a grid (it will be only for Y)
self.ax.grid(True)
# generates first "empty" plots
self.user1=self.user2=self.user3 = [None] * POINTS
self.l_user1,=self.ax.plot(range(POINTS),self.user1,label='data1')
self.l_user2,=self.ax.plot(range(POINTS),self.user2,label='data2')
self.l_user3,=self.ax.plot(range(POINTS),self.user3,label='data3')
# add the legend
self.ax.legend(loc='upper center',
ncol=4,
prop=font_manager.FontProperties(size=10))
# force a draw on the canvas()
# trick to show the grid and the legend
self.canvas.draw()
# save the clean background - everything but the line
# is drawn and saved in the pixel buffer background
self.bg = self.canvas.copy_from_bbox(self.ax.bbox)
# bind events coming from timer with id = TIMER_ID
# to the onTimer callback function
wx.EVT_TIMER(self, TIMER_ID, self.onTimer)
#for serial
self.ser=serial.Serial('COM1', 115200)
#print self.ser
def onTimer(self, evt):
print 'onTimer....'
"""callback function for timer events"""
# restore the clean background, saved at the beginning
self.canvas.restore_region(self.bg)
# update the data
time.sleep(1)
self.ser.flushInput()
print 'before the read method....'
data=self.ser.read(12)
print 'after the read method... just cant reach here...'
t = struct.unpack('3f', data)
temp1 = t[0]
temp2 = t[1]
temp3 = t[2]
self.user1 = self.user1[1:] + [temp1]
print temp2
self.user2 = self.user2[1:] + [temp2]
self.user3 = self.user3[1:] + [temp3]
# update the plots
self.l_user1.set_ydata(self.user1)
self.l_user2.set_ydata(self.user2)
self.l_user3.set_ydata(self.user3)
# just draw the "animated" objects
self.ax.draw_artist(self.l_user1)
self.ax.draw_artist(self.l_user2)
self.ax.draw_artist(self.l_user3)# It is used to efficiently update Axes data (axis ticks, labels, etc are not updated)
self.canvas.blit(self.ax.bbox)
print 'onTimer ends'
def __del__(self):
self.ser.close()
t.Stop()
if __name__ == '__main__':
app = wx.PySimpleApp()
frame = PlotFigure()
t = wx.Timer(frame, TIMER_ID)
t.Start(1)
print 'new test'
frame.Show()
print 'after frame show '
app.MainLoop()
I wonder if it is a thread problem, so I did another test. Here is part
of my code:
ser =serial.Serial(port='COM1',baudrate=115200,xonxoff=0)
def reading(ser):
flag = 1
print 'threading'
while True:
print ser.readable() #true
print 'before the read method.'
data = ser.read(12)
print 'after the read method... cant reach here.'
time.sleep(1)
if __name__ == '__main__':
threading.Thread(target=reading,args=(ser,)).start()
print 'main process1'
**The read method failed!**
Then I ran another simple test: I got rid of the thread and it works OK! Here
is part of my code:
ser = serial.Serial('COM1', 115200)
num=0
try:
while True:
data = ser.read(12)
print num
time.sleep(1)
ser.flushInput()
finally:
ser.close()
Answer:
#
# The following is an example of an application I have running using the
# serial port. Although the baud is lower, it should not matter. This application
# will capture each byte and append them into a buffer before processing. All bytes
# are captured including non-printable characters. The terminator in my application
# is the RETURN character (0x0A) or the (0x0D) denoting end of line.
# I think your port is hung waiting on the full twelve (12) bytes that you are
# expecting on the read. The hang condition is because only a portion of
# the 12 bytes have occurred. I suggest you add a timeout value of 1 second
# to the serial read, then test for the length of what was read to validate
# what you think you just read. If zero, you have a serial cable or problem
# at the other end of the cable. If greater than zero, then the device is
# not sending what you think it should.
import serial
import string
import binascii
import threading
import thread
import fileinput
import datetime
import time
import sys
from configobj import ConfigObj
# Insert following code at top in the import area of your application
#
hdrFile = 'AppName.ini'
config = ConfigObj(hdrFile)
hdrVal = 'COM'
comport = int(config[hdrVal])
hdrBaud = 'Baudrate'
baudrate = config[hdrBaud]
print "Com Config = ",config
print "Com Port = ",comport
boolSerOnline = False
bool_IsALIVE = False # thread has active serial port
class myThread( threading.Thread ):
def __init__(self, intThreadID, strName, intCountVal ):
# --------------------------------------- Constructor
threading.Thread.__init__(self) # <<-- MUST be first in Init
self.threadID = intThreadID
self.threadName = strName
self.intStartCounter = intCountVal
def run(self):
# --------------------------------------- Tooltalk
# Thread to Listen to the RS-232 RX Port
# ------------------------------- TOOLTALK ()
# ------------------------------- THREAD
global bool_IsALIVE, boolThreadRun
#
#
inbuf = [] # Clear Buffer
buffer = "" # Clear Buffer
strFirstByte = ""
boolWriteCRLF = 0
#
print "\n*** Starting %s: %s" % ( self.threadName, time.ctime(time.time()))
print "\n"
#
bool_IsALIVE = False
#
# ----------------------------------- Start of Thread Loop
while ( boolThreadRun ):
# ----------------------------- THREAD is Running
try:
# ------------ Read 1 byte at a time in thread
# Note: reads all characters, Even non-printable characters
strByte = str(ser.read( 1 ))
#
intLength = len( strByte )
#
if ( intLength > 0 ):
bool_IsALIVE = True # Indicate connected
# Process byte
#
if (strFirstByte == ""):
boolWriteCRLF = 0
#
if ( strByte == '\r' ):
strByte = '' # Skip until we have first byte
elif ( strByte == '\n' ):
strByte = '' # Skip until we have first byte
elif ( strByte == '0' ):
inbuf = [] # Clear buffer first
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
strFirstByte = strByte
boolWriteCRLF = 1
elif ( strByte == 'S' ):
inbuf = [] # Clear buffer first
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
strFirstByte = strByte
boolWriteCRLF = 1
elif ( strByte == 'C' ):
inbuf = [] # Clear buffer first
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
strFirstByte = strByte
boolWriteCRLF = 1
elif ( strByte == '*' ):
inbuf = [] # Clear buffer first
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
strFirstByte = strByte # save first byte of string
boolWriteCRLF = 1
elif ( strByte == '>' ):
inbuf = [] # Clear buffer first
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
strFirstByte = strByte # save first byte of string
boolWriteCRLF = 1
elif ( strByte == 'M' ):
inbuf = [] # Clear buffer first
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
strFirstByte = strByte # save first byte of string
boolWriteCRLF = 1
else:
pass
else:
inbuf.append( strByte ) # Add to inbuf
buffer = ''.join(inbuf) # join the two buffers
else:
# serial driver timed out producing No byte
# print "SKIP - NO BYTE (NO POWER?)"
pass
except:
bool_IsALIVE = False
print "[Tooltalk] Error reading Serial Port"
if __name__ == "__main__":
try:
# *** +++++++++++++ Notice the 1 second timeout ...
ser = serial.Serial(comport,baudrate,8,'N',1,timeout=1)
boolSerOnline = True
except:
print "[Main] Serial Port NOT Connected\n"
boolSerOnline = False
boolThreadRun = True # Flag indicating "Tooltalk" to Run
#
thread1 = myThread(1, "Tooltalk-1", 1) # Init Thread
thread1.start() # Start Tooltalk Thread
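# Boiled down to the 12-byte read from the question, the key change is the
# read timeout plus a length check - a minimal sketch (port name and struct
# format taken from the question):
import struct
import serial
ser = serial.Serial('COM1', 115200, timeout=1)  # 1 second read timeout
data = ser.read(12)            # returns early with fewer bytes on timeout
if len(data) == 12:
    temps = struct.unpack('3f', data)
elif len(data) == 0:
    print "no data at all - check the cable or the device"
else:
    print "short read: got %d of 12 bytes" % len(data)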
|
How to export a cookie in a file to use with python scrapy
Question: I want to log in automatically to SO with my cookie using scrapy and Python, but I don't know:
1. How to export cookie to what format so that i can use it with scrapy
2. There are many cookies listed in chrome for stackoverflow like this
Now which one to use for login `__qca_ _utma_ _utmb_ _utmc __utmz gauthedsgt
usrLocal storage`
3. What will be in that file and how to use it in Scrapy code
Answer: You should pass your cookie when you create first request to the server, as
said on [this page](http://doc.scrapy.org/en/latest/topics/request-
response.html):
request_with_cookies = Request(url="http://www.example.com",
cookies={'currency': 'USD', 'country': 'UY'})
If you don't know which cookie is important for auth, use all of them or try
them one at a time. Also check that
[CookiesMiddleware](http://doc.scrapy.org/en/latest/topics/downloader-
middleware.html#cookies-mw) is enabled to get your cookies preserved in
further requests.
If you want to use cookies stored in a file, I suggest you write a simple
parser for it and convert the file contents to a Python dict. As long as your
file is in a simple plain-text format, this will be easy.
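For example, if you export the cookies from your browser in the Netscape
cookies.txt format (tab-separated, one cookie per line), a minimal parser
might look like this (the file name is an assumption):
    from scrapy.http import Request
    def parse_cookies_txt(path):
        """Turn a Netscape-format cookies.txt into a {name: value} dict."""
        cookies = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue  # skip comments and blank lines
                fields = line.split('\t')
                if len(fields) == 7:
                    # domain, flag, path, secure, expiry, name, value
                    cookies[fields[5]] = fields[6]
        return cookies
    request = Request(url="http://stackoverflow.com",
                      cookies=parse_cookies_txt("cookies.txt"))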
|
Redirect an output command to a variable or file?
Question: I'm trying to write a python script that will allow me to take the output from
a command and to put that into a file or variable (Preferability a variable).
In my code, I have redirected the output to a `StringIO()` object. From that,
I want take the output a command and to put it into that `StringIO()` object.
Here is a sample of my code:
import os
import sys
from StringIO import StringIO
old_stdout = sys.stdout
result = StringIO()
sys.stdout = result
# This will output to the screen, and not to the variable
# I want this to output to the 'result' variable
os.system('ls -l')
Also, how do I take result and put it into a string?
Thanks in advance!!
Answer:
import subprocess
sp = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
output, _ = sp.communicate()
print "Status:", sp.wait()
print "Output:"
print output
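If you do want to capture the output of Python-level `print` statements rather
than an external command, the `StringIO` redirect from the question works -
just restore stdout afterwards and call `getvalue()` to get the contents as a
string:
    import sys
    from StringIO import StringIO
    old_stdout = sys.stdout
    sys.stdout = result = StringIO()
    print 'captured'            # goes into result, not to the screen
    sys.stdout = old_stdout
    output = result.getvalue()  # 'captured\n' as an ordinary string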
|
Run a Python script from Python prompt such that variables are loaded into the interactive environment
Question: Say I have a (somewhat pointless) Python script
#!/usr/bin/python
a = 5
Is there a way to run this script from the interactive prompt such that after
running if I type `a` I get
>>> a
5
and not
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
More generally, if my script calculates something through a series of steps,
is there a way to access that something after the script has finished (in the
same kind of manner).
Answer: Import it:
from yourscriptname import a
Each and every `.py` file in python is a module, and you can simply import it.
If the file is called `foo.py`, import `foo`.
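If you want every top-level name from the script rather than just `a`, you can
also start the interpreter with the `-i` flag, which runs the script and then
drops into an interactive prompt with all of its names defined:
    $ python -i yourscriptname.py
    >>> a
    5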
|
Python subprocesses don't output properly?
Question: I don't think I'm understanding python subprocess properly at all but here's a
simple example to illustrate a point I'm confused about:
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
print lookup_client.poll()
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
The output comes back as
None
None
None
and never completes. Why is this? Also, if I change the first argument of
Popen to an array of all of those arguments, none of the nc calls execute
properly and the script runs through without ever waiting. Why does that
happen?
Ultimately, I'm running into a problem in a much larger program that does
something similar using netcat and another program running locally instead of
two versions of nc. Either way, I haven't been able to write to or read from
them properly. However, when I run them in the python console everything runs
as I would expect. All this has me very frustrated. Let me know if you have
any insights!
EDIT: I'm running this on Ubuntu Linux 12.04, when I `man nc`, I get the BSD
General Commands manual so I'm assuming this is BSD netcat.
Answer: The problem here is that you're sending `SIGINT` to the process. If you just
`close` the `stdin`, `nc` will close its socket and quit, which is what you
want.
It sounds like you're actually using `nc` for the client (although not the
server) in your real program, which means you have two easy fixes:
Instead of `lookup_client.send_signal(subprocess.signal.SIGINT)`, just do
`lookup_client.stdin.close()`. `nc` will see this as an EOF on its input, and
exit normally, at which point your server will also exit.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.poll()
lookup_client.stdin.write("magic\n")
lookup_client.stdin.close()
print lookup_client.poll()
lookup_server.wait()
print "Lookup server terminated properly"
When I run this, the most common output is:
None
None
magic
Lookup server terminated properly
Occasionally the second `None` is a `0` instead, and/or it comes after `magic`
instead of before, but otherwise, it's always all four lines. (I'm running on
OS X.)
For this simple case (although maybe not your real case), just use
[`communicate`](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate)
instead of trying to do it manually.
#!/usr/bin/env python
import subprocess
lookup_server = subprocess.Popen("nc -l 5050", shell=True)
lookup_client = subprocess.Popen("nc localhost 5050", shell=True, stdin=subprocess.PIPE)
print lookup_client.communicate("magic\n")
lookup_server.wait()
print "Lookup server terminated properly"
Meanwhile:
> Also, if I change the first argument of Popen to an array of all of those
> arguments, none of the nc calls execute properly and the script runs through
> without ever waiting. Why does that happen?
As [the
docs](http://docs.python.org/2/library/subprocess.html#subprocess.Popen) say:
> On Unix with `shell=True`… If args is a sequence, the first item specifies
> the command string, and any additional items will be treated as additional
> arguments to the shell itself.
So, `subprocess.Popen(["nc", "-l", "5050"], shell=True)` does `/bin/sh -c 'nc'
-l 5050`, and `sh` doesn't know what to do with those arguments.
You probably _do_ want to use an array of args, but then you have to get rid
of `shell=True`βwhich is a good idea anyway, because the shell isn't helping
you here.
One more thing:
lookup_client.send_signal(subprocess.signal.SIGINT)
print lookup_client.poll()
This may print either -2 or None, depending on whether the client has finished
responding to the `SIGINT` and been killed before you `poll` it. If you want
to actually get that -2, you have to call `wait` rather than `poll` (or do
something else, like loop until `poll` returns non-None).
Finally, why didn't your original code work? Well, sending `SIGINT` is
asynchronous; there's no guarantee as to when it might take effect. For one
example of what could go wrong, it could take effect before the client even
opens the socket, in which case the server is still sitting around waiting for
a client that never shows up.
You can throw in a `time.sleep(5)` before the `signal` call to test thisβbut
obviously that's not a real fix, or even an acceptable hack; it's only useful
for testing the problem. What you need to do is not kill the client until it's
done everything you want it to do. For complex cases, you'll need to build
some mechanism to do that (e.g., reading its stdout), while for simple cases,
`communicate` is already everything you need (and there's no reason to kill
the child in the first place).
|
Twisted reactor is stopped, but program doesn't end?
Question: So I'm writing a small script to use with Deluge. Deluge uses Twisted, and I
really don't have a firm grasp on how it works. Normally I'd just look up more
info on it, but getting started with Twisted would take a _long_ time and is
beyond the scope of this little project. So I figured I would just ask here.
Now, I have this code. I'll try to explain the specific parts I need help with:
import base64
import processargs
from deluge.ui.client import client
from twisted.internet import reactor
from deluge.log import setupLogger
setupLogger()
options = processargs.readConfig(os.path.expanduser("~/.deluge-automator"))
d = client.connect(
host=options['host'],
port=int(options['port']),
username=options['username'],
password=options['password']
)
def start():
#other code
t = client.core.add_torrent_file(tfile,
base64.encodestring(data), None)
t.addCallback(on_torrent_added_success, tfile)
t.addErrback(on_torrent_added_fail)
def handle_stop_signal(SIGNAL, stack):
client.disconnect()
reactor.stop()
def on_torrent_added_success(result, tfile):
#other code
start()
def on_torrent_added_fail(result):
print "Add torrent failed!"
print "result: ", result
def on_connect_success(result):
#other code
start()
d.addCallback(on_connect_success)
def on_connect_fail(result):
print "Connection failed!"
print "result: ", result
d.addErrback(on_connect_fail)
signal.signal(signal.SIGTERM, handle_stop_signal)
signal.signal(signal.SIGINT, handle_stop_signal)
reactor.run()
When a torrent is successfully added, it should go back to start(), and it
does, but I think it loses the reactor or something. Because now whenever it
receives a SIGTERM or SIGINT, the reactor closes, but doesn't quit the
program:
± % python2 main.py
Connection was successful!
result: 10
^C^CConnection failed!
result: [Failure instance: Traceback: <class 'twisted.internet.error.ReactorNotRunning'>: Can't stop reactor that isn't running.
/usr/lib/python2.7/site-packages/twisted/internet/defer.py:551:_runCallbacks
/usr/lib/python2.7/site-packages/deluge/ui/client.py:412:__on_login
/usr/lib/python2.7/site-packages/twisted/internet/defer.py:368:callback
/usr/lib/python2.7/site-packages/twisted/internet/defer.py:464:_startRunCallbacks
--- <exception caught here> ---
/usr/lib/python2.7/site-packages/twisted/internet/defer.py:551:_runCallbacks
main.py:70:on_connect_success
main.py:32:start
main.py:49:handle_stop_signal
/usr/lib/python2.7/site-packages/twisted/internet/base.py:577:stop
]
So the reactor gets stopped, but it doesn't quit the program. I have to
keyboard interrupt twice. Once to stop the reactor, and a second time to throw
the error. Is there a certain way to set up a loop like this?
Answer: The reactor handles SIGINT and SIGTERM itself (the
`installSignalHandlers` parameter of `reactor.run()` disables that). Install
`reactor.addSystemEventTrigger('before', 'shutdown', client.disconnect)`
instead.
See [twisted: catch keyboardinterrupt and shutdown
properly](http://stackoverflow.com/q/3453451/4279).
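Applied to the code in the question, that would look something like this -
drop the two `signal.signal()` lines and let the reactor drive the shutdown:
    # disconnect cleanly whenever the reactor shuts down (Ctrl-C included)
    reactor.addSystemEventTrigger('before', 'shutdown', client.disconnect)
    reactor.run()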
|
Generate SQL string using schema.CreateTable fails with postgresql ARRAY
Question: I'd like to generate the verbatim CREATE TABLE .sql string from a sqlalchemy
class containing a postgresql ARRAY.
The following works fine without the ARRAY column:
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy import *
from geoalchemy import *
from sqlalchemy.ext.declarative import declarative_base
metadata=MetaData(schema='refineries')
Base=declarative_base(metadata)
class woodUsers (Base):
__tablename__='gquery_wood'
id=Column('id', Integer, primary_key=True)
name=Column('name', String)
addr=Column('address', String)
jsn=Column('json', String)
geom=GeometryColumn('geom', Point(2))
this woks just as i'd like it to:
In [1]: from sqlalchemy.schema import CreateTable
In [3]: tab=woodUsers()
In [4]: str(CreateTable(tab.metadata.tables['gquery_wood']))
Out[4]: '\nCREATE TABLE gquery_wood (\n\tid INTEGER NOT NULL, \n\tname VARCHAR, \n\taddress VARCHAR, \n\tjson VARCHAR, \n\tgeom POINT, \n\tPRIMARY KEY (id)\n)\n\n'
however when I add a postgresql ARRAY column in it fails:
class woodUsers (Base):
__tablename__='gquery_wood'
id=Column('id', Integer, primary_key=True)
name=Column('name', String)
addr=Column('address', String)
types=Column('type', ARRAY(String))
jsn=Column('json', String)
geom=GeometryColumn('geom', Point(2))
the same commands as above result in a long traceback string ending in:
/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/visitors.pyc in _compiler_dispatch(self, visitor, **kw)
70 getter = operator.attrgetter("visit_%s" % visit_name)
71 def _compiler_dispatch(self, visitor, **kw):
---> 72 return getter(visitor)(self, **kw)
73 else:
74 # The optimization opportunity is lost for this case because the
AttributeError: 'GenericTypeCompiler' object has no attribute 'visit_ARRAY'
If the full traceback is useful, let me know and I will post.
I think this has to do with specifying a dialect for the compiler (?) but im
not sure. I'd really like to be able to generate the sql without having to
create an engine. I'm not sure if this is possible though, thanks in avance.
Answer: There's probably a complicated solution that involves digging in
sqlalchemy.dialects. You should first try it with an engine though. Fill in a
bogus connection url and just don't call connect().
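For example (a sketch - the engine below is never connected, it only supplies
the postgresql dialect to the DDL compiler; woodUsers is the declarative class
from the question):
    from sqlalchemy import create_engine
    from sqlalchemy.schema import CreateTable
    engine = create_engine('postgresql://user:pass@localhost/bogus')
    print str(CreateTable(woodUsers.__table__).compile(engine))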
|
Blender Python scripting, trying to prevent UI lock up while doing large calculations
Question: I am working in blender doing a script for N number of objects. When running
my script, it locks up the user interface while it is doing its work. I want
to write something that prevents this from happening so i can see what is
happening on the screen as well as use my custom UI to show a progress bar.
Any ideas on how this is doable in either python or blender? Most of the
calculations only take a few minutes and i am aware that this request might
make them take longer than normal. Any help would be appreciated.
The function that is doing most of the work is a **for a in b** loop.
Answer: If you want to do large calculations in Blender, and still have a responsive
UI you might want to check out model operators with python timers.
It would be something like this:
class YourOperator(bpy.types.Operator):
bl_idname = "youroperatorname"
bl_label = "Your Operator"
_updating = False
_calcs_done = False
_timer = None
def do_calcs(self):
# would be good if you can break up your calcs
# so when looping over a list, you could do batches
# of 10 or so by slicing through it.
# do your calcs here and when finally done
        self._calcs_done = True
def modal(self, context, event):
if event.type == 'TIMER' and not self._updating:
self._updating = True
self.do_calcs()
self._updating = False
        if self._calcs_done:
self.cancel(context)
return {'PASS_THROUGH'}
def execute(self, context):
context.window_manager.modal_handler_add(self)
self._updating = False
self._timer = context.window_manager.event_timer_add(0.5, context.window)
return {'RUNNING_MODAL'}
def cancel(self, context):
context.window_manager.event_timer_remove(self._timer)
self._timer = None
return {'CANCELLED'}
You'll have to take care of proper module imports and operator registration
yourself.
I have a Conways Game Of Life modal operator implementation to show how this
can be used: <https://dl.dropboxusercontent.com/u/1769373/gol.blend>
|
Python subprocess spools too many processes
Question: Hopefully someone can help; I have a challenging situation that I cannot
seem to script for. My aim is to automate loading SQL files into PostgreSQL.
I wont know how many folders of SQL files I have so intially I check a folder
exists and then loop through each file and load it into PostgreSQL using
psql.exe
My current code looks like this
if os.path.exists("sql1"):
for files in os.listdir("sql1"):
load1 = subprocess.Popen("psql -d data -U postgres -f sql1\%s" %files)
if os.path.exists("sql2"):
for files in os.listdir("sql2"):
load2 = subprocess.Popen("psql -d data -U postgres -f sql2\%s" %files)
However this spools so many subprocesses as it creates a subprocess for each
SQL file in the folder as well as more subprocesses for each folder.
If I change it to a subprocess.call it will of course seriliase the loading
and block loading the files from the next folder, rather than running a single
process for each folder.
Does anyone know how I could create a single process for each folder that
exists?
In addition to this I will then run the indexes but only once all processes
have finished.
I could use load.wait() but that would only work for one process.
thanks for advice and help in advance
EDIT ADDED:
Taking Steve's advice I introduced some threads but it still causes the
indexing to start before the subprocesses have finished
def threads(self):
processors = multiprocessing.cpu_count()
n = 1
name = "sql%i" %n
for i in range(processors):
if os.path.exists(name):
thread = Thread(target=self.loadData, args=(name,))
thread.start()
n += 1
name = "sql%i" %n
def loadData(self, name):
for files in os.listdir(name):
load = subprocess.Popen("psql -d osdata -U postgres -f %s\%s" %(name, files))
load.wait()
But the indexing starts before the processes have finished.
Any ideas how to prevent that?
Answer: I would suggest creating a thread for each folder. Then use `subprocess.call`
to serialise calls within each thread.
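A sketch of that idea using the folder names from the question - the `join()`
calls are what keep the indexing from starting early:
    import os
    import subprocess
    import threading
    def load_folder(name):
        for f in sorted(os.listdir(name)):
            # call() blocks, so the files within one folder load serially
            subprocess.call(['psql', '-d', 'data', '-U', 'postgres',
                             '-f', os.path.join(name, f)])
    threads = [threading.Thread(target=load_folder, args=('sql%i' % i,))
               for i in (1, 2) if os.path.exists('sql%i' % i)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every folder to finish before indexing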
If you want to throttle the number of threads executing concurrently, you
should look at Python's futures module.
<http://docs.python.org/dev/library/concurrent.futures.html>
from concurrent.futures import ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=2) as executor:
    for n in range(processors):
        name = "sql%i" % (n + 1)
        if os.path.exists(name):
            future = executor.submit(loadData, name)
# leaving the with-block waits for every submitted job to finish,
# so it is safe to start building the indexes right after it
|
Matplotlib Updating slider widget range
Question: I am trying to write a small bit of code that interactively deletes selected
slices in an image series using matplotlib. I have created a button 'delete'
which stores a number of indices to be deleted when the button 'update' is
selected. However, I am currently unable to reset the range of my slider
widget, i.e. removing the number of deleted slices from valmax. What is the
pythonic solution to this problem?
Here is my code:
import dicom
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider, Button
frame = 0
#store indices of slices to be deleted
delete_list = []
def main():
data = np.random.rand(16,256,256)
nframes = data.shape[0]
raw_dicom_stack = []
for x in range (nframes):
raw_dicom_stack.append(data[x,:,:])
#yframe = 0
# Visualize it
viewer = VolumeViewer(raw_dicom_stack, nframes)
viewer.show()
class VolumeViewer(object):
def __init__(self, raw_dicom_stack, nframes):
global delete_list
self.raw_dicom_stack = raw_dicom_stack
self.nframes = nframes
self.delete_list = delete_list
# Setup the axes.
self.fig, self.ax = plt.subplots()
self.slider_ax = self.fig.add_axes([0.2, 0.03, 0.65, 0.03])
self.delete_ax = self.fig.add_axes([0.85,0.84,0.1,0.04])
self.update_ax = self.fig.add_axes([0.85,0.78,0.1,0.04])
self.register_ax = self.fig.add_axes([0.85,0.72,0.1,0.04])
self.add_ax = self.fig.add_axes([0.85,0.66,0.1,0.04])
# Make the slider
self.slider = Slider(self.slider_ax, 'Frame', 1, self.nframes,
valinit=1, valfmt='%1d/{}'.format(self.nframes))
self.slider.on_changed(self.update)
#Make the buttons
self.del_button = Button(self.delete_ax, 'Delete')
self.del_button.on_clicked(self.delete)
self.upd_button = Button(self.update_ax, 'Update')
self.upd_button.on_clicked(self.img_update)
self.reg_button = Button(self.register_ax, 'Register')
self.add_button = Button(self.add_ax, "Add")
# Plot the first slice of the image
self.im = self.ax.imshow(np.array(raw_dicom_stack[0]))
def update(self, value):
global frame
frame = int(np.round(value - 1))
# Update the image data
dat = np.array(self.raw_dicom_stack[frame])
self.im.set_data(dat)
# Reset the image scaling bounds (this may not be necessary for you)
self.im.set_clim([dat.min(), dat.max()])
# Redraw the plot
self.fig.canvas.draw()
def delete(self,event):
global frame
global delete_list
delete_list.append(frame)
print 'Frame %s has been added to list of slices to be deleted' %str(frame+1)
print 'Please click update to delete these slices and show updated image series \n'
#Remove duplicates from delete list
def img_update(self,event):
#function deletes image stacks and updates viewer
global delete_list
#Remove duplicates from list and sort into numerical order
delete_list = list(set(delete_list))
delete_list.sort()
#Make sure delete_list is not empty
if not delete_list:
print "Delete list is empty, no slices to delete"
#Loop through delete list in reverse numerical order and remove slices from series
else:
for i in reversed(delete_list):
self.raw_dicom_stack.pop(i)
print 'Slice %i removed from dicom series \n' %(i+1)
#Can now remove contents from delete_list
del delete_list[:]
#Update slider range
self.nframes = len(self.raw_dicom_stack)
def show(self):
plt.show()
if __name__ == '__main__':
main()
Answer: It looks like the slider does not have a way to update the range
([api](http://matplotlib.org/api/widgets_api.html#matplotlib.widgets.Slider)).
I would suggest setting the range of the slider to be [0,1] and doing
frame = int(self.nframes * value)
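A sketch of that change applied to the class in the question:
    # slider runs over a fixed [0, 1] range, so it never needs a new valmax
    self.slider = Slider(self.slider_ax, 'Frame', 0.0, 1.0, valinit=0.0)
    def update(self, value):
        # map the [0, 1] slider value onto however many frames currently exist
        frame = min(int(self.nframes * value), self.nframes - 1)
        dat = np.array(self.raw_dicom_stack[frame])
        self.im.set_data(dat)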
On a somewhat related note, I would have made `frame` ~~an instance variable~~
a data attribute instead of a global variable
([tutorial](http://docs.python.org/2/tutorial/classes.html#instance-objects)).
|
Create a file in python
Question: Now I know how to build a dictionary from a txt file. So I have created
example.txt (a generic file):
aaa.12
bbb.14
ccc.10
and to make a dictionary:
with open('example.txt') as f:
hash = {}
for line in f:
key, value = line.strip().split('.', 1)
hash[key] = int(value)
So now I want to order my elements by value, so I try:
with open('example.txt') as f:
hash = {}
for line in f:
key, value = line.strip().split('.', 1)
hash[key] = int(value)
print hash #this print my dict
value_sort=sorted(hash.values())
print value_sort #to check what it returns; in this case value_sort=[10, 12, 14]
Perfect. So now how can I write my items to example.txt ordered by value:
ccc.10
aaa.12
bbb.14
Answer: You'll need to loop through the `hash` dict separately, where you ask it to be
sorted on value:
from operator import itemgetter
hash = {}
with open('example.txt') as f:
for line in f:
key, value = line.strip().split('.', 1)
hash[key] = int(value)
for key, value in sorted(hash.items(), key=itemgetter(1)):
print '{0}.{1}'.format(key, value)
The `sorted()` call is given a key to sort by, namely the second element of
each `.items()` tuple (a key and value pair).
If you wanted to _write_ the sorted items to a file, you'd need to open that
file in writing mode:
with open('example.txt', 'w') as f:
for key, value in sorted(hash.items(), key=itemgetter(1)):
f.write('{0}.{1}\n'.format(key, value))
Note that we write newlines (`\n`) after every entry; `print` includes a
newline for us but when writing to a file you need to include it manually.
|
Proper way to restart HTTP server in Python
Question: I'm writing a HTTP server in Python using the code snippet below. The server
works well until some IOError happens causing it to restart. Something is
wrong with my restart handling since the server starts up fine but does not
accept any requests after that.
Is there anything wrong in this code?
#!/bin/python
from BaseHTTPServer import HTTPServer
import json
import config
import time
import socket
from SimpleHTTPServer import SimpleHTTPRequestHandler
from SocketServer import BaseServer
class MyHandler(SimpleHTTPRequestHandler):
def parseArgs(self):
args = {}
try:
config.logging.info(self.path)
args_list = self.path.split("?")[1].split("&")
for entry in args_list:
args[entry.split("=")[0]] = entry.split("=")[1]
except IndexError:
pass
return args
def do_GET(self):
try:
response = {}
args = self.parseArgs()
config.logging.debug("Handle the request")
self._handle_request(args, response)
config.logging.debug("Write the header back to the client.")
self.send_response(200)
self.send_header('Access-Control-Allow-Origin', '*')
self.send_header('Content-Type:', 'application/json; charset=UTF-8')
self.end_headers()
config.logging.debug("Finish writing the header and start writing JSON response.")
json_encoded = json.dumps(response)
config.logging.debug(json_encoded)
self.wfile.write(json_encoded)
config.logging.debug("JSON Response written successfully.")
except Exception as e:
config.logging.exception('Exception occurs when writing back the response.')
return
def _handle_request(self, args, response):
try:
response["sysTime"] = long(round(time.time() * 1000))
if not "cmd" in args:
response["message"] = "Error: No command provided."
else:
response["message"] = args['cmd']
except Exception as e:
response["message"] = "Error: Exception occurs (check logs)."
config.logging.exception('Exception occurs when handling request.')
def do_POST(self):
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
self.wfile.write("Nothing here :( ")
def main():
while True:
try:
httpd = HTTPServer(('', config.listening_port), MyHandler)
sa = httpd.socket.getsockname()
msg = "Serving HTTP on " + sa[0] + " port " + str(sa[1]) + "..."
print msg
config.logging.info(msg)
httpd.serve_forever()
except KeyboardInterrupt:
print '^C received, shutting down server'
config.logging.info('^C received, shutting down server')
httpd.socket.close()
break
except Exception as e:
httpd.shutdown()
httpd.socket.close()
print "Error occurs. Retry in 5 seconds"
config.logging.exception('Exception occurs. Retry in 5 seconds.')
time.sleep(5)
if __name__ == '__main__':
main()
Sorry for missing the module config.
import logging
# global variables
listening_port = 9001
logging.basicConfig(filename='error.log', level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
One url for running this is `http://localhost:9001/?cmd=some_command`. Just
paste that into any browser after the server is started.
**Edit:**
I guess this has something to do with the KVM or the way it is set up on my
company machine. As my server is running on a KVM which can only be accessed
through ssh, the error happens every time I exit the ssh session, invoke the
url command above (in Chrome), wait until Chrome reports it cannot connect,
get back into the ssh session, and invoke the url command again.
I put debug messages around the response writing in do_GET(); the exception
happens at `self.send_response(200)`. This is the trace I got. I guess it has
something to do with `sys.stderr.write` inside log_message.
11/23/2012 10:58:29 AM - INFO - Handle the request
11/23/2012 10:58:29 AM - INFO - /?cmd=get_info&uid=pp
11/23/2012 10:58:29 AM - INFO - Write the header back to the client.
11/23/2012 10:58:29 AM - ERROR - Exception occurs when writing back the response.
Traceback (most recent call last):
File "service.py", line 57, in do_GET
self.send_response(200)
File "/usr/lib/python2.7/BaseHTTPServer.py", line 385, in send_response
self.log_request(code)
File "/usr/lib/python2.7/BaseHTTPServer.py", line 422, in log_request
self.requestline, str(code), str(size))
File "/usr/lib/python2.7/BaseHTTPServer.py", line 458, in log_message
format%args))
IOError: [Errno 5] Input/output error
**EDIT:**
The issue went away when I overrode `log_message()` to do nothing. I know
this only hides the issue, but at least it works temporarily for me.
Answer: sys.stderr writes messages to the controlling terminal, usually the shell
console where you started the program. So if that shell console gets closed,
writes to sys.stderr will raise an IOError.
So the root cause of the issue is that the sys.stderr.write in the
log_message() function tries to write a message to the ssh shell console where
you originally started your Python program. If you keep the ssh session alive,
the issue won't happen. But if you exit the original ssh session, the shell
console gets closed, sys.stderr.write can no longer deliver its message to the
original console, and the IO error happens. The log_message usually writes
something like this:
127.0.0.1 - - [01/Dec/2015 20:16:32] "GET / HTTP/1.1" 200 -
So the solution is to run your python script with redirecting the standard
output to /dev/null as follows:
python your_script.py > /dev/null 2>&1 &
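Alternatively, instead of silencing `log_message()` entirely, you can point it
at the file logger from the question, so request lines end up in error.log
instead of on the (possibly dead) terminal:
    class MyHandler(SimpleHTTPRequestHandler):
        def log_message(self, format, *args):
            # route request logging to the logging module instead of sys.stderr
            config.logging.info("%s - %s" % (self.address_string(), format % args))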
Hope it helps.
|
Google Provisioning API no longer allowing restore(unsuspend) of user
Question: Anybody else seeing this? There appears to have been some changes to the
Google provisioning api for multi-domains. I have long running code that could
restore a suspended user that has stopped working. I use Python and 2.0.17 of
the Python GData libraries and the UpdateUser method to do this. I have also
noted that RetrieveUser in the same library is no longer returning the first
and last names of suspended users. I have filed an issue at Google apps-api-
issues, please star if you are seeing this.
<http://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=3281>
Answer: This is a simple example that will walk through the problem. Note that
user_entry object returned from a RetrieveUser() on a suspended user will not
have a property value for either first name or last name. The modified
user_entry object is passed to UpdateUser() which does not allow the missing
values for first and last names.
#!/usr/bin/python
import sys
import gdata.apps.multidomain.client
if len(sys.argv) < 4:
print "\nUsage:"
print sys.argv[0], "admin_email admin_password user_email\n"
sys.exit(0)
admin = sys.argv[1]
password = sys.argv[2]
email = sys.argv[3]
domain = ""
if '@' in admin:
admin_name,domain = admin.split('@', 1)
else:
print "Specify full email address of administrator.\n"
print "\nUsage:"
print sys.argv[0], "admin_email admin_password user_email\n"
sys.exit(0)
if '@' not in email:
print "Specify full email address of user.\n"
print "\nUsage:"
print sys.argv[0], "admin_email admin_password user_email\n"
sys.exit(0)
md_client = gdata.apps.multidomain.client.MultiDomainProvisioningClient(
domain=domain)
md_client.ClientLogin(email=admin, password=password, source='MDPROVISIONING')
print "Retrieve user: %s\n" %(email)
user_entry = md_client.RetrieveUser(email)
print user_entry
print ('\nRetrieve results: email: %s, suspended: %s,'
' first name: %s, last name: %s\n'
%(user_entry.email,user_entry.suspended,
user_entry.first_name,user_entry.last_name))
print "Update user (suspend): %s\n" %(email)
user_entry.suspended = 'true'
updated_user_entry = md_client.UpdateUser(email, user_entry)
print updated_user_entry
print ('\nSuspend results: email: %s, suspended: %s,'
' first name: %s, last name: %s\n'
%(updated_user_entry.email,updated_user_entry.suspended,
updated_user_entry.first_name,updated_user_entry.last_name))
print "Retrieve user: %s\n" %(email)
user_entry = md_client.RetrieveUser(email)
print user_entry
print ('\nRetrieve results: email: %s, suspended: %s,'
' first name: %s, last name: %s\n'
%(user_entry.email,user_entry.suspended,
user_entry.first_name,user_entry.last_name))
print "Update user (restore): %s\n" %(email)
user_entry.suspended = 'false'
updated_user_entry = md_client.UpdateUser(email, user_entry)
print updated_user_entry
print ('\nRestore results: email: %s, suspended: %s,'
' first name: %s, last name: %s\n'
%(updated_user_entry.email,updated_user_entry.suspended,
updated_user_entry.first_name,updated_user_entry.last_name))
|
Python NLTK NGrams Error
Question: I'm running code to get the perplexity and number of ngrams from a text
corpus. While doing it, I got a weird error saying:
C:\Users\Rosenkrantz\Documents\NetBeansProjects\JavaApplication2>python ai7.py
47510
203044
308837
Traceback (most recent call last):
File "ai7.py", line 95, in <module>
tt=NgramModel(1, tText, estimator)
File "C:\Python27\lib\site-packages\nltk\model\ngram.py", line 81, in __init__
assert(isinstance(pad_left, bool))
AssertionError
The Code I am Running to get this is:
f_in = open("science.txt", 'r');
ln = f_in.read()
words = nltk.word_tokenize(ln)
tText = Text(words)
tt=NgramModel(1, tText, estimator)
tt1=NgramModel(2, tText1, estimator)
tt2=NgramModel(3, tText2, estimator)
All the imports seem to be proper.
Answer: Are you sure you're calling `NgramModel` with the right arguments? Looking at
the [source for the current version of
NLTK](https://github.com/nltk/nltk/blob/master/nltk/model/ngram.py),
`NgramModel` looks like this:
def __init__(self, n, train, pad_left=True, pad_right=False,
estimator=None, *estimator_args, **estimator_kwargs):
Which doesn't seem to match up to how you're calling the function. What is
`estimator` in your code? Because you're currently passing `estimator` as the
`pad_left` argument.
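If `estimator` really is meant to be the smoothing estimator, passing it by
keyword sidesteps the positional mix-up - a sketch:
    tt = NgramModel(1, tText, estimator=estimator)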
|
Python list reordering, remember original order?
Question: I'm working on a Bayesian probability project, in which I need to adjust
probabilities based on new information. I have yet to find an efficient way to
do this. What I'm trying to do is start with an equal probability list for
distinct scenarios. Ex. There are 6 people: E, T, M, Q, L, and Z, and their
initial respective probabilities of being chosen are represented in
myList=[.1667, .1667, .1667, .1667, .1667, .1667]
New information surfaces that people in the first third alphabetically have a
collective 70% chance of being chosen. A new list is made, sorted
alphabetically by name (E, L, M, Q, T, Z), that just includes the new
information. (.7/.333=2.33, .3/.667=.45)
newList=[2.33, 2.33, .45, .45, .45, .45]
I need a way to order the newList the same as myList so I can multiply the
right values in list comprehension, and reach the adjust probabilities. Having
a single consistent order is important because the process will be repeated
several times, each with different criteria (vowels, closest to P, etc), and
in a list with about 1000 items. Each newList could instead be a
newDictionary, and then once the adjustment criteria are created they could be
ordered into a list, but transforming multiple dictionaries seems inefficient.
Is it? Is there a simple way to do this I'm entirely missing?
Thanks!
Answer: For what it's worth, the best thing you can do for the speed of your methods
in Python is to use numpy instead of the standard types (you'll thus be using
pre-compiled C code to perform arithmetic operations). This will lead to a
dramatic speed increase. Numpy arrays have fixed orderings anyway, and syntax
is more directly applicable to mathematical operations. You just need to
consider how to express the operations as matrix operations. E.g. your
example:
import numpy as np
myList = np.ones(6) / 6.
newInfo = np.array( [.7/2, .7/2, .3/4, .3/4, .3/4, .3/4] )
result = myList * newInfo
Since both vectors have unit sum there's no need to normalise (I'm not sure
what you were doing in your example, I confess, so if there's a subtlety I've
missed let me know), but if you do need to it's trivial:
result /= np.sum(result)
|
Python os.exec(): Termination on running 'notify-send'
Question: I'm writing a small Python script under Linux that pops up a number of
`libnotify` pop-ups, currently by using the following syntax:
import os
os.execv('/usr/bin/notify-send', ['App Title', 'Message'])
Unfortunately, and for some strange reason, it kills the interpreter right out
to the command-prompt.
It doesn't do this with any other command the script executes, just `notify-
send`.
There's no error given, no known exception thrown, no indication of anything
wrong, it just dies out to the command prompt.
Does anyone have suggestions or alternatives that equally easy to do?
Answer: You should use `subprocess.call` which starts the program named by its
arguments in a new process and waits for the child process to exit rather than
`os.execv` which replaces what is running in the current process with the
program specified by its arguments.
The usage is `subprocess.call(['/usr/bin/notify-send', 'App Title',
'Message'])`
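`subprocess.call` also returns the command's exit status, so you can check
whether the notification was actually posted:
    import subprocess
    rc = subprocess.call(['/usr/bin/notify-send', 'App Title', 'Message'])
    if rc != 0:
        print 'notify-send failed with exit code %d' % rc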
|
Writing a big file with np.save in Python in a while True Loop
Question: I am scraping a website with a while True loop, and then saving all of the
data to a file with np.savez. I want to process the npz file, but the file
updates faster than I can copy it. Here's my code:
while True:
time.sleep(1.5)
for post in new:
all_posts.append(post)
np.savez('records.npz', posts)
new = other_site.get_next()
Initially to process the data I was scraping I just would copy the file, but
now the file is too big and it gets corrupted every time. I could restart this
process from the beginning and save less often so I would have more time to
copy, but I'd like to know if there's a way I can recover the data I've
written. Another idea I had was to truncate the end of the file such that it
still looks like an npz file and python can read it, but I don't know if
that's possible.
Answer: To avoid your file getting trampled or over-written, why not write some Python
code to avoid this? For example, you could save to a new file for each site,
and collect these files in a directory:
import os
os.mkdir('scraped_sites')
while True:
time.sleep(1.5)
for post in new:
all_posts.append(post)
# create a unique file path
save_file = os.path.join('scraped_sites', 'records_%s.npz' % other_site)
np.savez(save_file, all_posts)
new = other_site.get_next()
This way your file will never be trampled and so you needn't worry about
processing it before it gets written again. If you don't like the idea of
naming your files, check out
[`tempfile`](http://docs.python.org/2/library/tempfile.html)
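Building on that, if you'd rather keep a single records.npz, another option is
to write each snapshot to a temporary file and rename it over the old one; the
rename is atomic on POSIX, so a reader copying the file never sees a
half-written archive. A sketch, reusing `all_posts` from the question:
    import os
    import tempfile
    import numpy as np
    fd, tmp_path = tempfile.mkstemp(suffix='.npz', dir='.')
    os.close(fd)                        # np.savez reopens the path itself
    np.savez(tmp_path, all_posts)
    os.rename(tmp_path, 'records.npz')  # atomic replace on POSIX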
Also, `while True` can be dangerous as your loop will never exit - I assume
that you've just written this for brevity, but it would be good to have a
`break` or a `while <condition>` loop so that you don't accidentally force the loop
to exit when it's mid file write.
|
Python XML Pull Parser
Question: I am trying to parse an XML file using Python. Due to the size of the XML, I
want to use a Pull Parser. I found [this](http://wiki.python.org/moin/PullDom)
one.
My code starts with
doc = pulldom.parse("myfile.xml")
for event, node in doc:
# code here...
I am using
if (node.localName == "b"):
to get the XML tag name, and it works fine.
What I can't find how to do is get the text from between the tags. Using
`node.nodeValue` returns `None`.
I can use `node.toxml()` to get the full XML for the node, but I only want the
text between the tags. Is there a way to do this other than using a regex
replace to take the tags out of `node.toxml()`?
Answer: You have two nodes with local name "b" for every tag with text - a
`START_ELEMENT` and an `END_ELEMENT`. Normally you should receive something
like this:
START_ELEMENT
CHARACTERS
END_ELEMENT
So you are looking for the characters after a matching start-element. You may
want to try something like this:
from xml.dom.pulldom import CHARACTERS, START_ELEMENT, parse
doc = parse("myfile.xml")
text_expected = False
for event, node in doc:
print event, node
if text_expected:
text_expected = False
if event != CHARACTERS:
# strange .. there should be some
continue
print node.data
else:
text_expected = (event == START_ELEMENT) and (node.localName == "b")
With this `myfile.xml`
<a>
<b>c1</b>
<b>c2</b>
</a>
I get the output
c1
c2
Note that you might need to `strip()` each string and you must ignore every
other `CHARACTERS`-event. Every linebreak and whitespace between two elements
generate a `CHARACTERS`-event.
|
iterating through nested lists in python
Question: I'm trying to iterate through a list and, depending on several conditions,
rearrange its items into sublists, all inside the original list. With the code
below in Python, while list1 prints correctly by grouping 0s, 1s and 2s:
new list 1 = [['A0', 'B0', 'C0'], ['A1', 'B1', 'C1'], ['A2', 'B2', 'C2']]
everything gets mixed up on the 2nd list when trying to re-group into further
sublists of As, Bs and Cs, giving:
new list 2 = [[['A0', 'A1', 'A2'], ['B0', 'B1', 'B2'], ['C0', 'C1', 'C2']]]
while expecting to get:
new list 2 = [[['A0'], ['B0'], ['C0']], [['A1'], ['B1'], ['C1']], [['A2'], ['B2'], ['C2']]]
Below is the code I used (sorry for the length; I'm sure there is a much
shorter version).
def Main():
array = ['A0', 'A1', 'A2', 'B0', 'B1', 'B2', 'C1', 'C0', 'C2']
list1 = NewList1(array)
print 'new list 1 = ' + str(list1)
list2 = NewList2(list1)
print 'new list 2 = ' + str(list2)
def NewList1(thelist):
sublist1=[]
sublist2=[]
sublist3=[]
sublist4=[]
newlist=[]
for item in thelist:
if hasattr(item,"__iter__"):
for subitem in item:
sublist1.append(subitem)
elif "0" in item:
sublist2.append(item)
elif "1" in item:
sublist3.append(item)
elif "2" in item:
sublist4.append(item)
else:
newlist.append(item)
if sublist2: newlist.append(sublist2)
if sublist3: newlist.append(sublist3)
if sublist4: newlist.append(sublist4)
if sublist1: newlist.append(NewList1(sublist1))
return newlist
def NewList2(thelist):
sublist1=[]
sublist2=[]
sublist3=[]
sublist4=[]
newlist=[]
for item in thelist:
if hasattr(item,"__iter__"):
for subitem in item:
sublist1.append(subitem)
elif "A" in item:
sublist2.append(item)
elif "B" in item:
sublist3.append(item)
elif "C" in item:
sublist4.append(item)
else:
newlist.append(item)
if sublist2: newlist.append(sublist2)
if sublist3: newlist.append(sublist3)
if sublist4: newlist.append(sublist4)
if sublist1: newlist.append(NewList2(sublist1))
return newlist
Main();
Answer: Sort on your grouping criteria, then use
[`itertools.groupby()`](http://docs.python.org/2/library/itertools.html#itertools.groupby):
from itertools import groupby
key = lambda el: el[1]
list2 = [[[el] for el in grouped] for group, grouped in groupby(sorted(input, key=key), key)]
Demonstration:
>>> from itertools import groupby
>>> input = ['A0', 'A1', 'A2', 'B0', 'B1', 'B2', 'C1', 'C0', 'C2']
>>> key = lambda el: el[1]
>>> [[[el] for el in grouped] for group, grouped in groupby(sorted(input, key=key), key)]
[[['A0'], ['B0'], ['C0']], [['A1'], ['B1'], ['C1']], [['A2'], ['B2'], ['C2']]]
Your output requirement is a little convoluted; if returning lists of the
grouped elements is enough, calling `list()` on `grouped` is enough:
>>> [list(grouped) for group, grouped in groupby(sorted(input, key=key), key)]
[['A0', 'B0', 'C0'], ['A1', 'B1', 'C1'], ['A2', 'B2', 'C2']]
|
import local python module in HTCondor
Question: This concerns the importing of my own python modules in a HTCondor job.
Suppose 'mymodule.py' is the module I want to import, and is saved in
directory called a XDIR. In another directory called YDIR, I have written a
file called xImport.py:
#!/usr/bin/env python
import os
import sys
print sys.path
import numpy
import mymodule
and a condor submit file:
executable = xImport.py
getenv = True
universe = Vanilla
output = xImport.out
error = xImport.error
log = xImport.log
queue 1
The result of submitting this is that, in xImport.out, the sys.path is printed
out, showing XDIR. But in xImport.error, there is an ImportError saying 'No
module named mymodule'. So it seems that the path to mymodule is in sys.path,
but python does not find it. I'd also like to mention that error message says
that the ImportError originates from the file
/mnt/novowhatsit/YDIR/xImport.py
and not `YDIR/xImport.py`.
How can I edit the above files to import mymodule.py?
Answer: When condor runs your process, it creates a directory on that machine (usually
on a local hard drive). It sets that as the working directory. That's probably
the issue you are seeing. If XDIR is local to the machine where you are
running condor_submit, then it's contents don't exist on the remote machine
where the xImport.py is running.
Try using the .submit feature transfer_input_files mechanism (see
<http://research.cs.wisc.edu/htcondor/manual/v7.6/2_5Submitting_Job.html>) to
copy the mymodule.py to the remote machines.
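For example, the submit file from the question might gain lines like these
(the path to mymodule.py is an assumption):
    executable = xImport.py
    transfer_input_files = /path/to/XDIR/mymodule.py
    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT
    # ... rest of the submit file unchanged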
|
Python reload error
Question: I'm using Python in IDLE, and I have a line `reload(sim_map_training)`.
However, when I run the file, it says `NameError: name 'sim_map_training' is
not defined`, even though I'm sure I have a file `sim_map_training.py` in the
same directory as the file. I'm really confused.. What could be the problem?
Answer: The name `sim_map_training` in the context where you're reloading must refer
to the module. Just writing
import sim_map_training  # binds sim_map_training = sys.modules['sim_map_training']
reload(sim_map_training)
will do.
|
How to raise this exception or error message?
Question: I have been implementing rsync in Python/Django to transfer data between the
files. Here's my views.py:
def upload_file(request):
'''This function produces the form which allows user to input session_name, their remote host name, username
and password of the server. User can either save, load or cancel the form. Load will execute couple Linux commands
that will list the files in their remote host and server.'''
if request.method == 'POST':
# session_name = request.POST['session']
url = request.POST['hostname']
username = request.POST['username']
global password
password = request.POST['password']
global source
source = str(username) + "@" + str(url)
command = subprocess.Popen(['sshpass', '-p', password, 'rsync', '--list-only', source],
stdout=subprocess.PIPE,
env={'RSYNC_PASSWORD': password}).communicate()[0]
command = command.split(' ')[-1]
result = subprocess.Popen(['ls', '/home/nfs/django/genelaytics/user'], stdout=subprocess.PIPE).communicate()[0].splitlines()
return render_to_response('thanks.html', {'res':result, 'res1':command}, context_instance=RequestContext(request))
else:
pass
return render_to_response('form.html', {'form': 'form'}, context_instance=RequestContext(request))
I take the remotehost, username and password input from the form. But those
passwords, username or servername may not be correct. Even they are not
correct this code transforms me to thanks.html but files on those server are
not listed of course as username, password, hostname were not correct. How do
I validate it? How do I raise exception or wrong username, password or
hostname error?
Answer: In python if you want to work with ssh or sftp(copying files over an ssh
connection) then the [paramiko](http://www.lag.net/paramiko/) library is the
way to go. If you just want to check whether the provided host, username,
password combination is valid this function will do the job:
import paramiko
def test_ssh(host, username, password):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys
    ssh.connect(host, username=username, password=password)
    ssh.close()  # clean up once the credentials have been verified
an example call of the function would be:
test_ssh('10.0.0.10', 'myuser', 'mypassword')
if it was able to connect to the host correctly it will return successfully.
Otherwise it will through an exception with the details of exactly what
failed. For example when one puts an invalid host the following exception gets
raised:
socket.error: [Errno 113] No route to host
invalid username, password will raise:
paramiko.AuthenticationException: Authentication failed.
You can catch these exceptions as you would normally do in Python and display
whatever type of message you wish to your user. I would recommend that instead
of using sshpass and subprocess to use paramiko.
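Tying it back to the Django view in the question, catching those exceptions
might look like this (a sketch - the error-message wording and the template
context key are assumptions):
    try:
        test_ssh(url, username, password)
    except paramiko.AuthenticationException:
        return render_to_response('form.html',
            {'form': 'form', 'error': 'Invalid username or password.'},
            context_instance=RequestContext(request))
    except Exception as e:
        return render_to_response('form.html',
            {'form': 'form', 'error': 'Could not reach %s: %s' % (url, e)},
            context_instance=RequestContext(request))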
|
Python complex dictionary keys
Question: My question pertains to dictionary keys. I want to set up a dictionary that
has 3 keys for any single object. The keys must be in order and can have a
wide range of values. For instance,
dictionary = {(key1,key2,key3) : object}
key1 can be any value between 1 and 10 key2 can be any value between 11 and 20
key3 can be any value between 21 and 30
The order in which the keys are placed does matter.
More specifically, my keys correspond to a range of x,y,z cartesian
coordinates in which many objects are floating around. I want to be able to
sort the relative positions of the objects based on their x,y,z positions.
Is there any way to set this up or will I have to take a different approach?
Thanks for any help!
Answer: Sure you can. You could also create a single string key instead - just join
the string forms of your keys, like ','.join([k1, k2, k3])
[Read more about
dictionaries.](http://docs.python.org/2/tutorial/datastructures.html#dictionaries)
> dictionaries are indexed by keys, which can be any immutable type; strings
> and numbers can always be keys. Tuples can be used as keys if they contain
> only strings, numbers, or tuples; if a tuple contains any mutable object
> either directly or indirectly, it cannot be used as a key. You can't use
> lists as keys, since lists can be modified in place using index assignments,
> slice assignments, or methods like append() and extend().
So you trying to use tuples as a keys and this is OK.
Note that dictionaries in Python are not ordered; you can use
collections.OrderedDict if insertion order matters. Also, to sort correctly,
use the [sort/sorted
functions](http://docs.python.org/2/library/functions.html#sorted) with the
key parameter specified to sort the way you want.
EDITED sample:
from itertools import product
myDict = {}
for x,y,z in product(range(10), range(10,20), range(20,30)):
myDict[(x,y,z)] = sum([x,y,z])
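And since tuples compare lexicographically (first by x, then by y, then by z),
sorting the keys gives a consistent ordering for free:
    for key in sorted(myDict):
        print key, myDict[key]  # ordered by x, then y, then z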
|
Python: calling a function as a method of a class
Question: Let's start with some code:
def func(*x):
print('func:', x)
class ABC:
def __init__(self, f):
self.f1 = f
def f2(*x):
print('f2:', x)
Now we do some tests:
>>> a = ABC(func)
>>> a.f1(10)
func: (10,)
>>> a.f2(10)
f2: (<__main__.ABC object at 0xb75381cc>, 10)
>>> a.f3 = func
>>> a.f3(10)
func: (10,)
>>> a.f1
<function func at 0xb74911ec>
>>> a.f2
<bound method ABC.f2 of <__main__.ABC object at 0xb75381cc>>
>>> a.f3
<function func at 0xb74911ec>
Note that `func` is a normal function and we are making it a method `f1` of
the class.
We can see that `f2` is getting the class instance as the first argument, but
`f1` and `f3` are not, even though all functions are called as class methods.
We can also see that if we call a normal function as a method of a class,
Python does not make a bound method from it.
So why is `f1` or `f3` NOT getting a class instance passed to it even when we
are calling it as a method of a class? And also, how does Python know that we
are calling an outer function as a method so that it should not pass an
instance to it.
**\-- EDIT --**
OK, so basically what I am doing wrong is that I am attaching the functions on
the instance and NOT on the class object itself. These functions therefore
simply become instance attributes. We can check this with:
>>> ABC.__dict__
... contents...
>>> a.__dict__
{'f1': <function func at 0xb74911ec>, 'f3': <function func at 0xb74911ec>}
Also note that this dict can not be assigned to:
>>> ABC.__dict__['f4'] = func
TypeError: 'dict_proxy' object does not support item assignment
Answer: You kind of partially answered your own question by inspecting the object. In
Python, objects behave like namespaces, so the first attribute points to a
function and the second points to a method.
This is how you can add a method dynamically:
from types import MethodType
def func(*x):
print('func:', x)
class ABC:
def __init__(self, f):
self.f1 = MethodType(f, self, self.__class__)
def f2(*x):
print('f2:', x)
if __name__ == '__main__':
a = ABC(func)
print a.f1(10)
print a.f2(10)
a.f3 = MethodType(func, a, ABC)
print a.f3(10)
Note that it will bind the method to your instance, not to the base class. In
order to [monkeypatch](http://en.wikipedia.org/wiki/Monkey_patch) the ABC
class:
>>> ABC.f4 = MethodType(func, None, ABC)
>>> a.f4(1)
('func:', (<__main__.ABC instance at 0x02AA8AD0>, 1))
Monkeypatching is usually frowned upon in the Python circles, despite being
popular in other dynamic languages (notably in Ruby when the language was
younger).
If you ever resort to this powerful yet dangerous technique, my advice is:
* never, ever override an existing class method. just don't.
|
Python : Sort file by arbitrary column, where column contains time values
Question: I have a .txt file for a person, and next to each person they have two times.
This is the .txt file
Xantippe 09:00 11:00
Erica 10:00 12:06
Marcia 09:30 11:45
Elizabeth 10:15 12:10
Angela 11:30 13:45
Freda 12:00 14:20
Maria 12:30 14:10
I am required to read the file, then get each line, read it, and sort the
whole list by the second time. Remember in the file the numbers are string
objects. So basically the earliest second time, i.e. 11:00, should be at the
top of the list along with that person's name and first time, e.g. `Xantippe
09:00 11:00`, and then the next one on another line, etc.
So far I have done:
from Practise1 import timeCalc
with open('LadiesRace.txt', 'r') as f:
readf = f.read();
timeX = timeCalc()
lis = readf.split('\n')
with open('sortByFinishTime.txt','w') as w:
def compare(x,y):
if x[1] > y[1]:
return 1
if x[1] < y[1]:
return -1
return 0
#lis.sort()
for l in lis:
#line = l.strip()
slist = l.split(' ')
print slist[2]
The problem is that I cannot use a dictionary, only a list. I have managed to
sort the list by name in ascending order, but how do I sort with the last
time?
Answer: Firstly, you need to get your data into a usable format... So let's load it
into a list in memory - it's important to note that `dict`s do not inherently
have order, so we want to use a list.
with open('myfile.txt') as fin:
lines = [line.split() for line in fin]
This will remove any trailing newlines and break it up by the space
character... so we end up with:
[['Xantippe', '09:00', '11:00'], ['Erica', '10:00', '12:06'], ['Marcia', '09:30', '11:45'], ['Elizabeth', '10:15', '12:10'], ['Angela', '11:30', '13:45'], ['Freda', '12:00', '14:20'], ['Maria', '12:30', '14:10']]
Then, we can use the `.sort` method of a `list` \- `itemgetter` is a handy
method for getting the nth element of a sequence, so we have name, start, end,
where end is the 2nd index (based on zero being the first, which will be the
name)
from operator import itemgetter
lines.sort(key=itemgetter(2))
And we end up with:
[['Xantippe', '09:00', '11:00'], ['Marcia', '09:30', '11:45'], ['Erica', '10:00', '12:06'], ['Elizabeth', '10:15', '12:10'], ['Angela', '11:30', '13:45'], ['Maria', '12:30', '14:10'], ['Freda', '12:00', '14:20']]
Then write it back out:
with open('output.txt', 'w') as fout:
for el in lines:
fout.write('{0}\n'.format(' '.join(el)))
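Putting the pieces together, a complete sketch using the question's file names:

    from operator import itemgetter

    with open('LadiesRace.txt') as fin:
        lines = [line.split() for line in fin if line.strip()]

    lines.sort(key=itemgetter(2))  # sort by finish time (third column)

    with open('sortByFinishTime.txt', 'w') as fout:
        for row in lines:
            fout.write(' '.join(row) + '\n')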
|
Python: read and execute lines from other script (or copy them in)?
Question: Consider a python script:
####
#Do some stuff#
####
#Do stuff from separate file
####
#Do other stuff
What is the best way to implement the middle bit (do stuff that is defined in
another file)? I've been told that in C++ there's a command that can achieve
what I'm looking to do. Is there an equivalent in Python?
e.g. if A.py is:
print 'a'
### import contents of other file here
print 'm'
and B.py is:
print 'b'
print 'c'
print 'd'
then the desired output of A.py is:
a
b
c
d
m
B.py won't actually contain print statements, rather variable assignments and
simple flow control that won't conflict with anything declared or defined in
A.py. I understand that I could put the contents of B.py into a function,
import this into A.py and call this function at the desired place. That would
be fine if I wanted the contents of B.py to return some single value. But the
contents of B.py may contain for example twenty variable assignments.
I guess what I am really looking to do is not so much execute the contents of
B.py within A.py, but rather dynamically modify A.py to contain, at some
desired line, the contents of B.py, and then execute the updated A.py.
Thoughts and tips much appreciated.
Answer: I don't think this is a good programming practice but you can use
[execfile](http://docs.python.org/2/library/functions.html#execfile) to
execute all the code in another file.
#a.py
a = a + 5
#b.py
a = 10
execfile("a.py")
print a
If you run `b.py` it will print '15'.
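Note that `execfile` was removed in Python 3; a rough equivalent there, assuming the same two files, is:

    #b.py (Python 3)
    a = 10
    with open("a.py") as f:
        exec(f.read())
    print(a)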
|
Strange behavior of eval() breaks unittest
Question: I have been prototyping and not minding the low-quality code that assigned a
variable which took its value from calling eval() on argv, which in turn
picked up its value from an external file containing API keys. To my surprise
it badly crashed unit testing (none of the tests even ran).
Here is the code snippet which I believe to be the culprit:
from sys import argv
from apikeys import *
def setKey(the_key=DCK):
global CK
CK = the_key # Currently used key
if len(argv) == 1:
print('---Executing script. Enter optional arguments if you wish to use special API keys.---')
setKey()
elif len(argv) > 1:
setKey(eval(argv[1]))
TOKEN = rget(DOMAIN+'signin', params={'key':CK}).json['response']['token']
PARAMS = {'signature':TESTSIG, 'token':TOKEN}
# Rest of the code uses unittests which rely on PARAMS.
So I pass the name of one of the variables containing a key as the argument to
test my script, and it produces the following traceback:
[gp@imdev1 dv1/tests]# python 2test_api2.py ANDROID_FILMS_KEY
Traceback (most recent call last):
File "2test_api2.py", line 604, in <module>
unittest.main()
File "/usr/lib/python2.6/site-packages/unittest2/main.py", line 97, in __init__
self.parseArgs(argv)
File "/usr/lib/python2.6/site-packages/unittest2/main.py", line 152, in parseArgs
self.createTests()
File "/usr/lib/python2.6/site-packages/unittest2/main.py", line 161, in createTests
self.module)
File "/usr/lib/python2.6/site-packages/unittest2/loader.py", line 148, in loadTestsFromNames
suites = [self.loadTestsFromName(name, module) for name in names]
File "/usr/lib/python2.6/site-packages/unittest2/loader.py", line 142, in loadTestsFromName
raise TypeError("don't know how to make test from: %s" % obj)
TypeError: don't know how to make test from: 9b269ac759211de6b3c8b238bd758ccf
9b269ac759211de6b3c8b238bd758ccf is basically the result of running
eval('ANDROID_FILMS_KEY'), and running the setKey function in a separate
script correctly assigns the API key to CK as the string
'9b269ac759211de6b3c8b238bd758ccf'.
The kicker is as follows: when CK and PARAMS are used in classes containing
methods that should be unit-tested, Python surprisingly raises this bizarre
exception, which seems to say that unittest doesn't consider
9b269ac759211de6b3c8b238bd758ccf a string?
Answer: The `unittest2` loader _also_ inspects `sys.argv`, to let you limit the
modules loaded for testing [from the command
line](http://www.voidspace.org.uk/python/articles/unittest2.shtml#command-
line-behaviour).
What happens here is that the loader is looking for a test module named
`9b269ac759211de6b3c8b238bd758ccf`.
You'll have to manipulate `sys.argv` from your unittest instead; it's a
standard python list that you can alter. Alternatively, create a `main(args)`
function that, by default, you call with `sys.argv[1:]`:
def main(args):
if not args:
print('---Executing script. Enter optional arguments if you wish to use special API keys.---')
setKey()
else:
setKey(eval(args[0]))
if __name__ == '__main__':
import sys
main(sys.argv[1:])
and now you can test `main()` with different arguments without having to rely
on passing arguments to your test script.
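Alternatively, if you want to keep the module-level argument handling, a sketch that consumes your own argument before the test runner inspects `sys.argv` (both `unittest.main()` and `unittest2.main()` accept an explicit `argv`):

    import sys
    import unittest

    if __name__ == '__main__':
        if len(sys.argv) > 1:
            setKey(eval(sys.argv.pop(1)))  # consume the API key argument
        unittest.main(argv=sys.argv)       # the loader now sees only what is left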
|
tips on Parsing a custom file format python
Question: I developed a custom system which simulates web activity, for example
downloading files. I also have a custom file format to feed into this
system. I am looking to port this old system, which is written in Perl, to a
newer system in Python. But first I have to somehow parse the file.
There are certain fields in the file that I would like to parse, such as
`[settings]`, where I have any arguments for the system. I also have a
`[macro]` section, which is the beginning of the important stuff (the steps,
etc.).
What I have trouble with is parsing these sections and having my system write
them out in a different, much simpler format (I have thousands of these files
and just want to write a generator that takes an old file and writes it out in
the new format to a new file).
Old format:
[settings]
email_to=people
special_websurf_processing=1
period_0_1_only=1
crc_recheck=0
[macro]
%::WebSurfRules =
(
'Step1' =>
{
action => 'NAVIGATE',
inputstring => 'http://www.tda-sgft.com/TdaWeb/jsp/fondos/Fondos.tda',
},
'Step2' =>
{
action => 'CLICK_REFERENCE',
matchtype => 'OUTER',
matchstring => 'phHttpDest->\{\'FirstClick\'\}',
pass => 'phHttpDest->\{\'Step2Pass\'\}',
},
'Step3' =>
{
action => 'CLICK_REFERENCE',
matchtype => 'OUTER',
matchstring => 'phHttpDest->\{\'SecondClick\'\}',
},
'Step4' =>
{
action => 'CLICK_REFERENCE',
matchtype => 'OUTER',
matchstring => 'phHttpDest->\{\'DealClick\'\}',
accept_multi_match => 'ANY_TOP_FIRST',
},
'Step5' =>
{
action => 'CLICK_REFERENCE',
matchtype => 'INNER',
matchstring => 'phHttpDest->\{\'LinkClick2\'\}',
fail => 'Step6',
# accept_multi_match => 'ANY_TOP_LAST',
},
'Step6' =>
{
action => 'CLICK_REFERENCE',
matchtype => 'INNER',
matchstring => 'phHttpDest->\{\'DocClick\'\}',
},
'Step7' =>
{
action => 'CLICK_DOWNLOAD_OK',
},
);
[data]
Print WebAddress______________ Destination_________________________________________________ FirstClick_________________ SecondClick________________ DealClick_________________________ LinkClick2________________________ DocClick___________________________________ PayInterval DueDay Step2Pass__________ QaRule_________________________________________________________________________________________________________________
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_apl.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL FundΒ΄s Allocation q1 Step3 qa_regexp=Report D?d?ate\\s+\\d\\d\/$MM{$n}\/$YYYY{$n}
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_bond.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL Investors information on Payment Date q1 Step3 qa_regexp=PAYMENT DATE:\\s+$aCAPMONTHNAMES[$MM{$n}-1].+$YYYY{$n}
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_bond.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL Investors information on Payment Date q1 Step3 qa_regexp=PAYMENT DATE:\\s+$aCAPSHORTMONTHNAMES[$MM{$n}-1] \\d\\d.+? ?.? $YYYY{$n}
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_bond.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL Investors information on Payment Date q1 Step3 qa_regexp=PAYMENT DATE:\\s+$aCAPMONTHNAMESSPANISH[$MM{$n}-1] \\d\\d.+? ?.? $YYYY{$n}
And what i want it to spit out:
[settings]
email_to=people
special_websurf_processing=1
period_0_1_only=1
crc_recheck=0
[macro]
%::WebSurfRules =
(
'1' => 'NAVIGATE,phHttpDest->\{\'WebAddress\'\}',
'2' => 'CLICK_REFERENCE,phHttpDest->\{\'FirstClick\'\}',
'3' => 'CLICK_REFERENCE,phHttpDest->\{\'SecondClick\'\}',
'4' => 'CLICK_REFERENCE,phHttpDest->\{\'DealClick\'\}',
'5' => 'CLICK_REFERENCE,phHttpDest->\{\'LinkClick2\'\}',
'6' => 'CLICK_REFERENCE,phHttpDest->\{\'DocClick\'\}',
);
[data]
Print WebAddress______________ Destination_________________________________________________ FirstClick_________________ SecondClick________________ DealClick_________________________ LinkClick2________________________ DocClick___________________________________ PayInterval DueDay Step2Pass__________ QaRule_________________________________________________________________________________________________________________
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_apl.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL FundΒ΄s Allocation q1 Step3 qa_regexp=Report D?d?ate\\s+\\d\\d\/$MM{$n}\/$YYYY{$n}
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_bond.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL Investors information on Payment Date q1 Step3 qa_regexp=PAYMENT DATE:\\s+$aCAPMONTHNAMES[$MM{$n}-1].+$YYYY{$n}
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_bond.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL Investors information on Payment Date q1 Step3 qa_regexp=PAYMENT DATE:\\s+$aCAPSHORTMONTHNAMES[$MM{$n}-1] \\d\\d.+? ?.? $YYYY{$n}
0 http://www.tda-sgft.com/ d:\\$YYYYMM{$n}\\raw\\remit\\wl\\CXPEN1_bond.pdf Mortgage Loan ABS Caixa Penedes 1 TDA MAINPAGE - FAIL Investors information on Payment Date q1 Step3 qa_regexp=PAYMENT DATE:\\s+$aCAPMONTHNAMESSPANISH[$MM{$n}-1] \\d\\d.+? ?.? $YYYY{$n}
Here, for each of the clicks, the phHttpDest and the action correlate to the
headings of the `[data]` section.
Answer: So one way of doing it is using a set of regular expression replacements to
create the files in the new format. I didn't completely understand the rules
of your format so I generally implemented the whole thing, but there are some
differences. You'll have to go in and make some adjustments to fine tune it.
The output.txt file is what gets produced when one uses your example as
input.txt
**code**
import re
data = open('input.txt').read()
data = re.sub(r" 'Step([0-9]+)' =>\s+{\s+action\s+=> ", r" '\1' => ", data)
data = re.sub(r"',\s+pass\s+[^,]+,", "", data)
data = re.sub(r"',\s+accept_multi_match\s+[^,]+,", "", data)
data = re.sub(r"\n +#.*\n", "\n", data)
data = re.sub(r"',\s+fail\s+[^,]+,", "", data)
data = re.sub(r"',\s+matchtype\s+[^,]+,", "", data)
data = re.sub(r"',\s+inputstring\s+=> '", ",", data)
data = re.sub(r"\s+matchstring\s+=> '", ",", data)
data = re.sub(r"\n },", "',", data)
open('output.txt', 'w').write(data)
**output.txt**
[settings]
email_to=people
special_websurf_processing=1
period_0_1_only=1
crc_recheck=0
[macro]
%::WebSurfRules =
(
'1' => 'NAVIGATE,http://www.tda-sgft.com/TdaWeb/jsp/fondos/Fondos.tda',',
'2' => 'CLICK_REFERENCE,phHttpDest->\{\'FirstClick\'\}',
'3' => 'CLICK_REFERENCE,phHttpDest->\{\'SecondClick\'\}',',
'4' => 'CLICK_REFERENCE,phHttpDest->\{\'DealClick\'\}',
'5' => 'CLICK_REFERENCE,phHttpDest->\{\'LinkClick2\'\}',
'6' => 'CLICK_REFERENCE,phHttpDest->\{\'DocClick\'\}',',
'7' => 'CLICK_DOWNLOAD_OK',',
);
...
|
python debug tools for multiprocessing
Question: I have a Python script that works with threads, processes, and connections to
a database. When I run my script, Python crashes.
I cannot pin down the exact case in which this happens.
Now I am looking for tools that give more information when Python crashes, or a
viewer that shows all my created processes/connections.
Answer: I created a module
[RemoteException.py](https://gist.github.com/niccokunzmann/5763860) that shows
the full traceback of an exception in a process (Python 2). [Download
it](https://gist.github.com/niccokunzmann/5763860) and add this to your code:
import RemoteException
@RemoteException.showError
def go():
raise Exception('Error!')
if __name__ == '__main__':
import multiprocessing
p = multiprocessing.Pool(processes = 1)
r = p.apply(go) # full traceback is shown here
**OLD ANSWER**
I had the problem, too.
This is what I did... a RemoteException to debug multiprocessing calls
[RemoteException.py](https://github.com/niccokunzmann/pynet/blob/master/process/RemoteException.py)
Copy the source and remove line 19 (`file.write('\nin %s ' %
(Process.thisProcess,))`) and the line `import Process`.
The problem is that multiprocessing transfers only the exception but loses the
traceback. The code below creates an exception object that saves the
traceback and prints it in the calling process.
In your script you can do something like this:
import RemoteException
def f():
try:
# here is code that fails but you do know not where
pass
except:
ty, err, tb = RemoteException.exc_info() # like sys.exc_info but with better message
raise ty, err, tb
# here follows your multiprocessing call to f
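Independently of the lost-traceback problem, multiprocessing also ships a debug logger that reports process and connection lifecycle events, which helps when you want to see what is being created:

    import logging
    import multiprocessing

    # print multiprocessing's internal debug messages (forks, joins, connections) to stderr
    multiprocessing.log_to_stderr(logging.DEBUG)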
|
Python setup.py points to . as opposed to the directory specified in setup.py?
Question: This is my current project setup:
.
├── README.md
├── build
│   ├── bdist.macosx-10.8-intel
│   └── lib
├── dist
│   └── giordano-0.1-py2.7.egg
├── giordano.egg-info
│   ├── PKG-INFO
│   ├── SOURCES.txt
│   ├── dependency_links.txt
│   ├── not-zip-safe
│   └── top_level.txt
├── requirements.txt
├── setup.py
├── src
│   ├── giordano
│   └── spider
├── test.txt
└── venv
    ├── bin
    ├── include
    ├── lib
    └── share
And this is my setup file:
from setuptools import setup
setup(name='giordano',
version='0.1',
packages=['giordano'],
package_dir={'giordano': 'src/giordano'},
zip_safe=False)
When I do `python setup.py install`, I am able to `import giordano` in my code
without problems.
However, when I am doing `python setup.py develop`, this is the console
output:
[venv] fixSetup$ python setup.py develop
running develop
running egg_info
writing giordano.egg-info/PKG-INFO
writing top-level names to giordano.egg-info/top_level.txt
writing dependency_links to giordano.egg-info/dependency_links.txt
reading manifest file 'giordano.egg-info/SOURCES.txt'
writing manifest file 'giordano.egg-info/SOURCES.txt'
running build_ext
Creating /Users/blah/Dropbox/projects/Giordano/venv/lib/python2.7/site-packages/giordano.egg-link (link to .)
Removing giordano 0.1 from easy-install.pth file
Adding giordano 0.1 to easy-install.pth file
Installed /Users/blah/Dropbox/projects/Giordano
Processing dependencies for giordano==0.1
Finished processing dependencies for giordano==0.1
I noticed that the egg is linked to `.` as opposed to `src/giordano`. I can no
longer `import giordano` in my code.
Any ideas why develop is not respecting `package_dir`?
Answer: Try with 'giordano': 'src'. distutils/distribute looks for the module or
package name in the directory you specify; in the code you pasted, the value
is one directory too deep.
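For example, a sketch of the whole file using the equivalent, more common `src`-layout idiom (assuming `src/giordano/__init__.py` exists):

    from setuptools import setup

    setup(name='giordano',
          version='0.1',
          packages=['giordano'],
          package_dir={'': 'src'},  # look up the 'giordano' package under src/
          zip_safe=False)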
|
Why does running CherryPy with sudo sometimes hang when terminating by Ctrl-C?
Question: I've discovered that when I start a CherryPy server with sudo, then try to
terminate it by pressing Ctrl-C, it sometimes (~1/3 of the time) hangs. I can
reproduce this using the CherryPy hello world:
import cherrypy
class HelloWorld(object):
def index(self):
return "Hello World!"
index.exposed = True
cherrypy.quickstart(HelloWorld())
The output I see is:
$ sudo python hello.py
[23/Nov/2012:21:23:03] ENGINE Listening for SIGHUP.
[23/Nov/2012:21:23:03] ENGINE Listening for SIGTERM.
[23/Nov/2012:21:23:03] ENGINE Listening for SIGUSR1.
[23/Nov/2012:21:23:03] ENGINE Bus STARTING
CherryPy Checker:
The Application mounted at '' has an empty config.
[23/Nov/2012:21:23:03] ENGINE Started monitor thread 'Autoreloader'.
[23/Nov/2012:21:23:03] ENGINE Started monitor thread '_TimeoutMonitor'.
[23/Nov/2012:21:23:04] ENGINE Serving on 127.0.0.1:8080
[23/Nov/2012:21:23:04] ENGINE Bus STARTED
^CTraceback (most recent call last):
File "hello.py", line 7, in <module>
cherrypy.quickstart(HelloWorld())
File "/usr/local/lib/python2.7/site-packages/CherryPy-3.2.2-py2.7.egg/cherrypy/__init__.py", line 161, in quickstart
engine.block()
File "/usr/local/lib/python2.7/site-packages/CherryPy-3.2.2-py2.7.egg/cherrypy/process/wspbus.py", line 303, in block
except (KeyboardInterrupt, IOError):
KeyboardInterrupt
<No subsequent command prompt>
This is an issue for me because I would like to run my server on port 80, and
I have some stuff going on in other threads that should be cleaned up
properly. I guess it's mostly just a nuisance for debugging, but I am curious
nonetheless.
Answer: I opened an issue regarding this with CherryPy. Turns out that the issue was
the version of sudo that I was using (1.7.4):
<https://bitbucket.org/cherrypy/cherrypy/issue/1186/sometimes-hangs-when-
running-with-sudo-and>
I retested with the latest (1.8.6p3) and do not have this issue anymore.
Thanks to Chris Beelby!
|
Get around a 404 with mechanize
Question: I'm creating a Python script that would read a file of URLs, but I know not
all of them will work. I'm trying to figure out how to get around this and
make it read the next line of the file, instead of raising the error that I
have posted below. I know I need some kind of if statement but I can't quite
figure it out.
from mechanize import Browser
from BeautifulSoup import BeautifulSoup
import csv
me = open('C:\Python27\myfile.csv')
reader = csv.reader(me)
mech = Browser()
for url in me:
response = mech.open(url)
html = response.read()
soup = BeautifulSoup(html)
table = soup.find("table", border=3)
for row in table.findAll('tr')[2:]:
col = row.findAll('td')
BusinessName = col[0].string
Phone = col[1].string
Address = col[2].string
City = col[3].string
State = col[4].string
Zip = col[5].string
Restaurantinfo = (BusinessName, Phone, Address, City, State)
print "|".join(Restaurantinfo)
When I run that block of code it raises this error:
> httperror_seek_wrapper: HTTP Error 404: Not Found
Basically what I am asking for is how to make Python ignore that and try the
next URL.
Answer: If your file only contains URLs, it may be simpler to keep one URL per
line and use code like this:
from mechanize import Browser
from BeautifulSoup import BeautifulSoup
me = open('C:\Python27\myfile.csv')
mech = Browser()
for url in me.readlines():
...
If you want to keep your code, you have to use:
for url in reader:
...
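To actually skip the URLs that fail, a sketch wrapping the open call in try/except (assuming mechanize re-exports urllib2's `HTTPError`/`URLError`, which current versions do):

    from mechanize import Browser, HTTPError, URLError
    from BeautifulSoup import BeautifulSoup

    mech = Browser()
    with open('C:\Python27\myfile.csv') as me:
        for url in me:
            try:
                response = mech.open(url.strip())
            except (HTTPError, URLError):
                continue  # 404s and connection problems: move on to the next URL
            soup = BeautifulSoup(response.read())
            # ... find the table and extract the rows as before ...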
|
Python Idle and Terminal Import Differences
Question: I just started using Python and I have a question about IDLE vs. the terminal.
In IDLE, I made a file called Robot.py
I have a class called Robot
class Robot(object):
    def __init__(self, x, y):
        # some code here etc...
        pass

def HelloWorld():
    print "Hello World!"
I have another file called testrobot.py, which looks like so:
import Robot
r = Robot(1,4)
In IDLE, I am able to successfully create a Robot object when I run
testrobot.py. However, in the terminal it gives an error message: `NameError:
Robot is not defined`.
I'm not sure how to run my program in the terminal.
Also: how can I call my `HelloWorld()` function, which is in Robot.py but not
part of the class Robot, from an external file (such as testrobot.py)?
Thanks in advance!
Answer: When you load and run scripts in IDLE, they are automatically loaded for the
interpreter. That means that as soon as you run the script in IDLE, the Python
shell already has those types defined.
When you want to run it from outside of IDLE, i.e. without running the module
first, you need to import your Robot from that module. To do that, you import
the module, not the type:
import Robot
myRobot = Robot.Robot(...)
Or, if you want to use `Robot` directly, you need to use the `from ... import`
syntax:
from Robot import Robot
myRobot = Robot(...)
Similarily, you can call your function by using `Robot.HelloWorld` in the
first case, or directly if you add `HelloWorld` to the import list in the
second case:
from Robot import Robot, HelloWorld
myRobot = Robot(...)
HelloWorld()
As you can see, it is generally a good idea to name your files in lower case,
as those are the module names (or "namespaces" in other languages).
|
Hangman Python Game Index Error in For Loop with the Lists
Question: Okay, I am working on a homework assignment to build a hangman game in Python.
So far, it was going well until I got this annoying error:
Traceback (most recent call last):
File "/Users/Toly/Downloads/ps2 6/ps2_hangman.py", line 82, in <module>
if (remLetters[i] == userGuess):
IndexError: string index out of range
Here is my code:
# 6.00 Problem Set 3
#
# Hangman
#
# -----------------------------------
# Helper code
# (you don't need to understand this helper code)
import random
import string
import time
WORDLIST_FILENAME = "words.txt"
def load_words():
print "Loading word list from file..."
# inFile: file
inFile = open(WORDLIST_FILENAME, 'r', 0)
# line: string
line = inFile.readline()
# wordlist: list of strings
wordlist = string.split(line)
print " ", len(wordlist), "words loaded."
return wordlist
def choose_word(wordlist):
"""
wordlist (list): list of words (strings)
Returns a word from wordlist at random
"""
return random.choice(wordlist)
wordlist = load_words()
blankword="_ "
word=random.choice (wordlist)
remLetters = string.lowercase
remGuesses = 8 #starting number of guesses
remWord=len(word)
#makes a blank with the length of the word
print "Welcome to Hangman"
time.sleep(1)
print
print "Your word is", remWord,"letters long."
print
time.sleep (1)
while (remGuesses != 0 or blankword != word):
remBlankword=len(blankword)
remWordDoubled=remWord*2
while (remWordDoubled!=len(blankword)):
blankword=blankword + "_ "
print blankword
print
print "You have",remGuesses," guesses left."
print
time.sleep(1)
userGuess= str(raw_input ("Guess a letter:"))
print
if (userGuess in word):
print "Excellent guess!"
else:
print "Bad Guess"
remGuesses=remGuesses-1
for i in range (1, len(remLetters)):
if (remLetters[i] == userGuess):
remLetters = remLetters[0:i] + remLetters[i+1:len(remLetters)]
print remLetters
if (remGuesses == 0):
print
print "Sorry, you died! Ha, sucks!"
print
print
time.sleep (1)
print "End of Game"
if (blankword == word):
print
print "Congradulations! You won!"
print
time.sleep(1)
print
print
print
print "End of Game"
Answer: You first get the range of indices in `remLetters`, but if the letter ==
`userGuess`, then you remove one letter out of `remLetters`. Meaning that now
the largest index in `remLetters` is 1 less than it was before. When you try
to index the highest number returned by your range, you are now out of bounds
and so you get an `IndexError`.
Something like this is probably what you're looking for:
remLetters = ''.join(x for x in remLetters if x != userGuess)
Or:
try:
idx = remLetters.index(userGuess)
remLetters = remLetters[:idx] + remLetters[idx+1:]
except ValueError:
pass
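For example, the one-liner in action:

    >>> remLetters = 'abcdefg'
    >>> userGuess = 'c'
    >>> ''.join(x for x in remLetters if x != userGuess)
    'abdefg'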
|
Can't write text sent by javascript to a .txt file using python cgi
Question: I'm having an error while trying to write my string variable, sent from
JavaScript, to a .txt file in CGI. This is the Python CGI code with the error:
1 #!/usr/bin/python
2
3 import cgi, cgitb
4 cgitb.enable()
5
6 print "Content-type: text/html"
7 print
8
9 print "<html><head><title>Nimed</title></head><br/>"
10
12 formdata = cgi.FieldStorage()
13 tulemused = formdata.getvalue('tulemused')
14 f = open('/home/t103692/public_html/prax3/tulemused.txt', 'a')
=> 15 f.write(tulemused)
16 f.close()
f = <open file '/home/t103692/public_html/prax3/tulemused.txt', mode 'a'>, f.write = <built-in method write of file object>, tulemused = None
<type 'exceptions.TypeError'>: expected a character buffer object
args = ('expected a character buffer object',)
message = 'expected a character buffer object'
And here's the javascript code, where I send the variable:
function saadaTulemused() {
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("POST","http://dijkstra.cs.ttu.ee/~t103692/cgi-bin/save.py", true);
xmlhttp.setRequestHeader("content-type","application/x-www-form-urlencoded");
xmlhttp.send(tulemused);
}
"tulemused" is the string variable, which already contains some text without
spaces.
How can I solve the error?
Answer: `tulemused` is `None` and you cannot write that to a file.
You need to actually send `x-www-form-urlencoded` data from your JavaScript
handler; currently you only send the value and nothing else.
Prefixing the string with `'tulemused='` (and URL-encoding the value with
`encodeURIComponent`, so special characters survive) should be enough:
    xmlhttp.send("tulemused=" + encodeURIComponent(tulemused));
|
python "ImportError: cannot import name urandom"
Question: Somehow my Python installation is broken and emits the error:
jseidel@EDP15:/etc/default$ python -c 'import random'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.6/random.py", line 47, in <module>
from os import urandom as _urandom
ImportError: cannot import name urandom
This is NOT the virtualenv error that is so commonly documented here and
elsewhere: I don't use python directly, I have never setup a virtualenv
explicitly, and there is no virtualenv directory or python script that I can
find anywhere on my system.
I'm running Kubuntu 10.04 and until just recently my KPackageKit worked just
fine and handled updates with no problem. Now it shows nothing... maybe
because of this python error, maybe because of something else.
How do I go about finding the error and fixing python?
Answer: As suggested by @Armin Rigo, this worked for me:
1) Add a `print 42` at the end of the /usr/lib/python2.6/os.py file.
2) If you see "42", then that is the correct os.py file and it is simply
missing the urandom function. Add the code that provides urandom (you can copy
it from another os.py file). This is what worked for me.
3) If you don't see "42", then that's not the os.py file that you're using.
Find the random.py file that is crashing and insert `import os; print
os.__file__` to get more information about the failure.
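A quick check that usually pinpoints the cause (an interpreter picking up the stdlib of a different Python version, e.g. after a partial upgrade) is to print which files are actually loaded:

    python -c "import sys, os; print sys.version; print os.__file__"

If the reported version and the os.py path disagree (say, a 2.7 interpreter loading /usr/lib/python2.6/os.py), the installation is mixed up.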
|
Python pandas insert long integer
Question: I'm trying to insert long integers in a pandas DataFrame
import numpy as np
from pandas import DataFrame
data_scores = [(6311132704823138710, 273), (2685045978526272070, 23), (8921811264899370420, 45), (17019687244989530680L, 270), (9930107427299601010L, 273)]
dtype = [('uid', 'u8'), ('score', 'u8')]
data = np.zeros((len(data_scores),),dtype=dtype)
data[:] = data_scores
df_crawls = DataFrame(data)
print df_crawls.head()
But when I look in the DataFrame, the last values, which are longs, are now
negative:
uid score
0 6311132704823138710 273
1 2685045978526272070 23
2 8921811264899370420 45
3 -1427056828720020936 270
4 -8516636646409950606 273
uids are 64-bit unsigned ints, so 'u8' should be the correct dtype? Any ideas?
Answer: Yes-- it's a present limitation of pandas-- we do plan to add support for
unsigned integer dtypes in the future. An error message would be much better:
<http://github.com/pydata/pandas/issues/2355>
For now you can make the column `dtype=object` as a workaround.
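A sketch of that workaround, building the uid column with object dtype so the Python longs are stored exactly (illustrative, not the only way):

    import numpy as np
    from pandas import DataFrame

    uids = np.empty(len(data_scores), dtype=object)  # object dtype keeps Python longs intact
    uids[:] = [uid for uid, _ in data_scores]
    scores = [score for _, score in data_scores]
    df_crawls = DataFrame({'uid': uids, 'score': scores})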
EDIT 2012-11-27
Overflows are now detected, though the column will become dtype=object for now
until DataFrame has better support for unsigned data types.
In [3]: df_crawls
Out[3]:
uid score
0 6311132704823138710 273
1 2685045978526272070 23
2 8921811264899370420 45
3 17019687244989530680 270
4 9930107427299601010 273
In [4]: df_crawls.dtypes
Out[4]:
uid object
score int64
|
Use IP list in CSV in python
Question: So, I have this list of IPs in a CSV file. Only one column; if I cat the file
they all appear on different lines, and the file command tells me it is ASCII
text.
However, when I try to loop through the file and resolve the addresses for the
different IPs, I get the error "socket.herror: [Errno 1] Unknown host".
For some reason the value of the cell isn't properly a string, and I have
trouble converting it to one.
from string import rstrip
from socket import gethostbyaddr
csv_file = open('csv_list.csv', "r")
for line in csv_file:
dns_name = gethostbyaddr(str(line.rstrip('\n')))
print "IP: " + line.rstrip('\n') + "DNS Name:" + dns_name[0]
Is there any way around this? I have been thinking about converting the file
to a plain text file, or adding all the values from the file to a list, but I
am not sure what the best solution would be.
Anyone have any ideas?
Thanks in advance!
Answer: Your problem is not the reading of the file (this can be optimized using `with
...`, too) but that some IPs cannot be reverse-resolved. The `gethostbyaddr`
function throws an exception in that case.
I've cleaned up your sample a bit; now it reports lookup failures as a printed
message instead of crashing with an exception.
from socket import gethostbyaddr
with open('csv_list.csv', 'r') as csv_file:
for line in csv_file:
ip = line.strip()
try:
dns_name = gethostbyaddr(ip)
print "IP: %s, DNS Name: %s" % (ip, dns_name[0])
except Exception, e:
print "IP: %s, DNS lookup error: %s" % (ip, e)
For example for this list of IPs:
1.2.3.4
8.8.8.8
4.4.4.4
bad IP
1.2.3
it prints
IP: 1.2.3.4, DNS lookup error: [Errno 1] Unknown host
IP: 8.8.8.8, DNS Name: google-public-dns-a.google.com
IP: 4.4.4.4, DNS lookup error: [Errno 1] Unknown host
IP: bad IP, DNS lookup error: [Errno 8] nodename nor servname provided, or not known
IP: 1.2.3, DNS lookup error: [Errno 1] Unknown host
|
python, get encrypted user password from shadow
Question: I'm trying to obtain the encrypted system user password in order to compare it
with another sha512-encrypted one. I tried pwd, but it seems that this module
does not deal with user passwords, or the system in use is "too modern" for it
(a Debian Squeeze). Here's what I obtain:
import pwd
username = 'root' #or another user
pwd_struct = pwd.getpwnam(username)
print pwd_struct
>>>pwd.struct_passwd(pw_name='root', pw_passwd='x', pw_uid=0, pw_gid=0, pw_gecos='root', pw_dir='/root', pw_shell='/bin/bash')
where pw_passwd='x' and not a sha512 string.
When I tried to use this with the Python crypt module ([example
here](http://docs.python.org/2/library/crypt.html)), I got the exception
"Sorry, currently no support for shadow passwords", which is normal, as my
pw_passwd = 'x'.
Is there another proper method to obtain hashed passwords, or should I write
my own parser for /etc/shadow?
Answer: Try the [spwd module](http://docs.python.org/2/library/spwd.html)
Platforms: Unix
New in version 2.5.
This module provides access to the Unix shadow password database. It is
available on various Unix versions.
You must have enough privileges to access the shadow password database (this
usually means you have to be root).
Shadow password database entries are reported as a tuple-like object, whose
attributes correspond to the members of the spwd structure (Attribute field
below, see <shadow.h>):
>>> import spwd
>>> spwd.getspnam('root')
spwd.struct_spwd(sp_nam='root', sp_pwd='!', sp_lstchg=15238, sp_min=0, sp_max=99999, sp_warn=7, sp_inact=-1, sp_expire=-1, sp_flag=-1)
Remember, you need to have read permission on `/etc/shadow` for this to work.
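Since the end goal is comparing against a sha512 hash, a minimal sketch combining `spwd` with the `crypt` module (must run as root; assumes the account has a real `$6$...` sha512 hash rather than `!` or `*`, and a glibc crypt that supports it):

    import crypt
    import spwd

    def check_password(username, cleartext):
        hashed = spwd.getspnam(username).sp_pwd
        # crypt() reuses the algorithm prefix ($6$ = sha512) and the salt
        # embedded in the stored hash
        return crypt.crypt(cleartext, hashed) == hashed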
|