FindWindow fails
Question: I'm having trouble with `FindWindow` using the `pywin32` extension. This simple C code:
int main()
{
    HWND h = FindWindow(NULL, TEXT("SomeApp"));
    if (h != INVALID_HANDLE_VALUE)
        SetForegroundWindow(h);
    return 0;
}
works well. The same in Python:
import win32gui
h = win32gui.FindWindow(None, "SomeApp")
if h:
    win32gui.SetForegroundWindow(h)
else:
    print "SomeApp not found"
fails with "SomeApp not found". I suspect text encoding may be causing the trouble here, but I haven't found any information in the docs on how to specify the text.
**Update:** I've tested the code on another machine and see no problems, so the configuration on my first machine must be incorrect. I'll update with my investigation results if I find the problem.
Answer: In the C code, you are checking `h != INVALID_HANDLE_VALUE`, in Python `h !=
None`. `INVALID_HANDLE_VALUE` is not `0`/`null`/`None`.
Python defines `win32file.INVALID_HANDLE_VALUE` via the `win32file` import.
Also, instead of printing "SomeApp not found", you can do something like:
gle = win32api.GetLastError()
err = win32api.FormatMessage(gle)[:-2]
print 'SomeApp not found: LastError=%d - %s' % (gle, err)
This should give you more details on the failure if `FindWindow` has
legitimately failed for some reason (or "Success" if it worked).
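Putting the question's Python code and this error reporting together, a minimal sketch (assuming Python 2 and the same "SomeApp" window title) might look like:

    import win32api
    import win32gui

    h = win32gui.FindWindow(None, "SomeApp")
    if h:
        win32gui.SetForegroundWindow(h)
    else:
        # report why FindWindow came back empty-handed (may also be "Success" if no window matched)
        gle = win32api.GetLastError()
        err = win32api.FormatMessage(gle)[:-2]
        print 'SomeApp not found: LastError=%d - %s' % (gle, err)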
|
QtGui.QFileDialog.getExistingDirectory() window won't close after directory has been chosen (PyQt)
Question: I am trying to get a path with the `QtGui.QFileDialog.getExistingDirectory()`
dialog window in a `python` program to ease things up for users while the rest
of the program is in console output. I have this piece of code for this
purpose:
import sys, os
from PyQt4 import QtGui

def getpath(filename,
            noPathFileMsg='',
            wrongFolderMsg='',
            selectFolderMsg=''):
    try:
        f = open('./'+filename,'r')
    except IOError:
        folder = get_new_path(filename,
                              noPathFileMsg,
                              selectFolderMsg)
    else:
        folder = f.readline()
        f.close()
    currentDir = os.getcwd()
    try:
        os.chdir(folder)
    except:
        folder = get_new_path(filename,
                              wrongFolderMsg,
                              selectFolderMsg)
    else:
        os.chdir(currentDir)
    finally:
        return folder

def get_new_path(filename,
                 infoMsg,
                 selectFolderMsg):
    app = QtGui.QApplication(sys.argv)
    QtGui.QMessageBox.about(None, 'No folder', infoMsg)
    folder = QtGui.QFileDialog.getExistingDirectory(None, selectFolderMsg)
    app.exit()
    if os.name == 'posix':
        folder += '/'
    elif os.name == 'nt':
        folder += '\\'
    g = open('./'+filename,'w')
    g.write(folder)
    g.close()
    return folder

if __name__ == '__main__':
    folderPath = getpath('pathtofolder.txt',
                         noPathFileMsg='The path to the folder has not been set',
                         wrongFolderMsg='The path folder saved cannot be reached',
                         selectFolderMsg='Please select a folder')
    print folderPath
    var = input('The program stopped at the input instruction, the dialog window should now be closed!')
If I call the getpath function the dialog window stays open until the script
that called the function ends instead of closing just after this instruction:
folder = QtGui.QFileDialog.getExistingDirectory(None, selectFolderMsg)
If you run this code, it will create a file (in the folder where the script is run) that stores the directory selected with the dialog window.
What did I do wrong?
By the way, I am on Ubuntu 12.04. Thank you! Cheers
Answer: After setting up Ubuntu 12.04 in a VM, I can confirm that the dialog doesn't
close properly after clicking "Open".
The problem seems to be caused by attempting to quit the `QApplication` inside
the `get_new_path` function.
Instead, you should create a single, global `QApplication` object, and only
quit it when the script completes:
def get_new_path(filename, infoMsg, selectFolderMsg):
    QtGui.QMessageBox.about(None, 'No folder', infoMsg)
    folder = QtGui.QFileDialog.getExistingDirectory(None, selectFolderMsg)
    ...

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    folderPath = getpath(...)
    app.exit()
|
Python pass instance of itself as an argument to another function
Question: I have a UserModel class that will essentially do everything like login and
update things.
I'm trying to pass the instance of itself (the full class) as an argument to
another function of another class.
For example: (obviously not the code, but you get the idea)
from Car import CarFactory

class UserModel:
    def __init__(self,username):
        self.username = username
    def settings(self,colour,age,height):
        return {'colour':colour,'age':age,'height':height}
    def updateCar(self,car_id):
        c = CarFactory(car_id, <<this UserModel instance>>)
So, as you can see from the last line above, I would like to pass an instance of UserModel to the CarData class, so that from within the CarData class I can access UserModel.settings(). However, I am unsure of the syntax. I could of course just do:
c = CarFactory(car_id,self.settings)
Any help would be grateful appreciated.
Thanks
Answer:
c = CarFactory(car_id, self)
doesn't work?
On a side note, it would be `self.settings()`, not `self.settings`... unless you define settings to be a property.
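For illustration, here is a minimal sketch of what a `CarFactory` accepting the instance might look like; the class body below is an assumption, not code from the real `Car` module:

    class CarFactory:
        def __init__(self, car_id, user):
            # 'user' is the UserModel instance passed in as 'self' from updateCar()
            self.car_id = car_id
            self.user = user

        def describe(self):
            # call back into the UserModel instance to read its settings
            return self.user.settings('red', 30, 180)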
|
ImportError: No module named pyobjc
Question: I am new to Python. I am running Mac OS X 10.8.2, Python 2.7.3, Xcode 4.5.1. I am not able to `import pyobjc` in Python. I used `easy_install pyobjc` and also tried manually downloading it from <http://pypi.python.org/pypi/pyobjc/2.3> and running `python setup.py install`. Here is a screenshot of my `site-packages` folder.
How do I solve this?
Here is a screenshot of sys.path; PyObjC is present in sys.path.

Answer: I don't see a `.pth` file for `pyobjc` in your `site-packages` directory
there.
`.pth` files, placed inside a directory already on Python's search path,
contain directories to add to that search path. They're simple text files; you
can review the ones already there to get a feel for how they work.
As to _why_ you didn't get `.pth` files for `pyobjc`, I'm not sure. But you
could create some to fix the problem up.
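For example, a minimal `pyobjc.pth` dropped into `site-packages` could look like the following; the exact directory names are assumptions, so use whatever locations the pyobjc packages were actually installed to:

    # pyobjc.pth -- one directory per line, appended to sys.path at startup
    /Library/Python/2.7/site-packages/pyobjc_core-2.3-py2.7-macosx-10.8-intel.egg
    /Library/Python/2.7/site-packages/pyobjc-2.3-py2.7.egg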
Further reading: [Modifying Python’s Search
Path](http://docs.python.org/install/index.html#inst-search-path)
|
Complex query with Django (posts from all friends)
Question: I'm new to Python and Django, so please be patient with me.
I have the following models:
class User(models.Model):
    name = models.CharField(max_length = 50)
    ...

class Post(models.Model):
    userBy = models.ForeignKey(User, related_name='post_user')
    userWall = models.ForeignKey(User, related_name='receive_user')
    timestamp = models.DateTimeField()
    post = models.TextField()

class Friend(models.Model):
    user1 = models.ForeignKey(User, related_name='request_user')
    user2 = models.ForeignKey(User, related_name='accept_user')
    isApproved = models.BooleanField()
    class Meta:
        unique_together = (('user1', 'user2'), )
I know that this may not be the best/easiest way to handle it with Django, but
I learned it this way and I want to keep it like this.
Now, all I want to do is **get all the posts from one person and their friends**. The question now is how to do it with the Django filters?
I think in SQL it would look something like this:
SELECT p.* FROM Post p, Friend f
WHERE p.userBy=THEUSER OR (
    (f.user1=THEUSER AND f.user2=p.userBy) OR
    (f.user2=THEUSER AND f.user1=p.userBy)
)
With no guarantee of correctness, just to give an idea of the result I'm
looking for.
Answer:
from django.db.models import Q

Post.objects.filter(
    Q(userBy=some_user) |
    Q(userBy__accept_user__user1=some_user) |
    Q(userBy__request_user__user2=some_user)).distinct()
**UPDATE**
Sorry, that was my fault. I didn't pay attention to your `related_name`
values. See updated code above. Using `userBy__accept_user` or
`userBy__request_user` alone won't work because that'll be a reference to
`Friend`, which you can't compare to `User`. What we're doing here is
following the reverse relationship to `Friend` and then once we're there,
seeing if the _other_ user on the friend request is the user in question.
This also illustrates the importance of describing reverse relationships
appropriately. Many people make the same mistake you've made here and name the
`related_name` after the model they're creating the FK to (`User`), when
actually, when we're talking about reversing the FK, we're now talking about
`Friend`. Simply, your related names would make more sense as something like:
`friend_requests` and `accepted_friends`.
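For instance, the renaming the answer suggests would look something like this (a sketch; the rest of the model is unchanged):

    class Friend(models.Model):
        user1 = models.ForeignKey(User, related_name='friend_requests')
        user2 = models.ForeignKey(User, related_name='accepted_friends')
        isApproved = models.BooleanField()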
|
How to recover python dateutil.rrule object from a dictionary string?
Question: I want to store dateutil.rrule objects to a database and recreate them after
reading from the database.
Given the following issue, I think I need to use a workaround. [Python
dateutils print recurrence rule according to iCalendar format (see RFC
5545)](http://stackoverflow.com/questions/12331652/python-dateutils-print-
recurrence-rule-according-to-icalendar-format-see-rfc-55)
I am thinking of storing the output of `myrrule.__dict__` to the database as a string and then recreating the rrule object when required.
This is what the dict looks like:
{'_cache_complete': False, '_byhour': (0,), '_wkst': 0,
'_timeset': (datetime.time(0, 10),), '_bysecond': (0,),
'_bymonthday': (), '_byweekno': None, '_bysetpos': None,
'_cache': None, '_bymonth': None, '_bynweekday': ((4, 3),),
'_tzinfo': None, '_byyearday': None, '_byweekday': None,
'_byminute': (10,), '_len': 10, '_until': None,
'_bynmonthday': (), '_dtstart': datetime.datetime(2012, 10, 13, 0, 10),
'_count': 10, '_freq': 1, '_interval': 1, '_byeaster': None}
Is it a good idea? Any other suggestions?
How do I recover the Python object from the dictionary? Is Python's setattr() my best bet, or is there something easier?
Should I consider using something like this instead?
<http://stackoverflow.com/a/1305663/161628>
Answer: What you need is python's `pickle` module. The `pickle` module is a simple
serialization module which can convert any python object into a string. The
string can later be de-serialized back to the original object.
import pickle
serial_str = pickle.dumps(your_rrule_object)
db.store(serial_str)
...
serial_str = db.retrieve()
new_rrule_object = pickle.loads(serial_str)
Check the documentation for the pickle module for more details.
|
What code does Python's timeit(...) method actually time in this bit of code?
Question:
timeit.timeit("A0([randint(1,256) * (-1) ** randint(1,2) for j in range("+str(n)+")])", setup="from HW2 import A0", number=1000000)
I want to measure the time that the A0 algorithm takes to complete its job on
a list of size n, but I can't find a way to generate the list and then pass it
in as a variable to the timeit.timeit(...) method. Will the timer only measure
how long A0 takes to complete, or will the list generation also be included in
the measurement?
Answer: It will measure the execution time of everything in the statement (the first arg), so if you only want the measurement of the call to A0, then the list creation will skew the results.
Try creating the list in the setup:
timeit.timeit("A0(aList)", setup="from HW2 import A0; aList = [randint(1,256) * (-1) ** randint(1,2) for j in range("+str(n)+")] ", number=1000000)
The list will only be created once, at the start of the timer, and not be
included in the timing.
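Note that `randint` also has to be importable in the `setup` namespace; if `HW2` doesn't already pull it in, a sketch along these lines (with a made-up value of `n`) would work:

    import timeit

    n = 1000
    setup = ("from HW2 import A0; from random import randint; "
             "aList = [randint(1,256) * (-1) ** randint(1,2) for j in range(%d)]" % n)
    print timeit.timeit("A0(aList)", setup=setup, number=1000000)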
|
can I use FUSE with Cython bindings
Question: I know FUSE has bindings for C, C++, Python etc., which effectively means that I can develop FUSE filesystems using those languages. I wish to use Cython, as it offers much faster speeds compared to pure Python, and that matters in a filesystem. Is it possible to produce a FUSE filesystem by coding in Cython?
As far as I understand, the Python documentation is all that I require to write Cython code for FUSE. But (if it is indeed possible) should I be using Cython with the Python FUSE bindings or the C ones?
Answer: You should use it as a Python FUSE system; then you can write your Cython stuff in its own module and just import that module in your Python code.
Usually filesystem-related operations are IO bound and not CPU bound, so I'm not sure how much of a speedup you would get with Cython, but try it out and compare the results.
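As a sketch of that structure (the module and function names below are made up):

    # fastops.pyx -- hypothetical Cython module; build it with cythonize("fastops.pyx")
    def block_offsets(long size, long block_size):
        cdef long i
        result = []
        for i in range(0, size, block_size):
            result.append(i)
        return result

The Python FUSE handler code then simply does `import fastops` and calls `fastops.block_offsets(...)` like any other module.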
|
UnicodeEncodeError when using os.listdir
Question: * OS: Windows 7, 64-bit
* Python 3.1.3
When I try to do this
os.listdir("F:\\music")
I get this
UnicodeEncodeError: 'gbk' codec can't encode character '\xe3' in position 643: illegal multibyte sequence
`os.listdir` works with other directories so the cause of the problem is
obviously some strangely-encoded file or folder within `F:\music` itself. How
do I find the source of this error?
Answer: It's a Windows console Unicode problem; you can fix it by installing the [win-unicode-console](https://github.com/Drekin/win-unicode-console) library:
$ pip install win-unicode-console
$ edit a.py
import win_unicode_console
win_unicode_console.enable()
print('non-gbk-character Résumé or 欧•亨利 works')
I've tested this in Python 3.4 on Chinese Windows 8.
|
Tornado websocket logging
Question: I'm trying to implement websockets using Tornado webserver.
My setup looks as follows:
from tornado.options import options, define, parse_command_line
import django.core.handlers.wsgi
import logging
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.wsgi
from pogows.tornado_sockets import GetSocketHandler, UpdateSocketHandler
from mobile.cleaner import start_cleaning

define('port', type=int, default=8080)
tornado.options.options['log_file_prefix'].set('/var/www/pogo_django/logs/tornado_server.log')
tornado.options.parse_command_line()

<snip>

def main():
    logger = logging.getLogger(__name__)
    wsgi_app = tornado.wsgi.WSGIContainer(
        django.core.handlers.wsgi.WSGIHandler())
    tornado_app = tornado.web.Application(
        [
            ('/hello-tornado', HelloHandler),
            ('/socket/get', GetSocketHandler),
            ('/socket/update', UpdateSocketHandler),
            ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
        ], debug=True)
    logger.info("Tornado POGO server starting...")
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    start_cleaning()
    tornado.ioloop.IOLoop.instance().start()
So far everything looks fine: Tornado logs, and I see the info message. Now, I'm trying to log some stuff from the websocket handler classes.
class GetSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print "opening"

    def on_closed(self):
        print "closing"

    def on_message(self, message):
        last_update=datetime.datetime.utcnow().replace(tzinfo=utc)
        try:
            print "getting_user"
            ...
Tornado is governed by [supervisord](http://supervisord.org/), with the
following configuration:
> [program:pogo_tornado] command=/var/www/pogo_django/tornado_server.py
> user=www-data stdout_logfile=/var/www/pogo_django/logs/pogo_stdout.log
> stderr_logfile=/var/www/pogo_django/logs/pogo_stderr.log
> environment=PYTHONPATH="/var/www/pogo_django/",DJANGO_SETTINGS_MODULE="pogo.settings"
I tried a few things.
1. Just use `print` statements, as you see from the above snippet, hoping for supervisord to catch it and send to stdout/stderr logs.
2. Create a separate `logging.getLogger()` instance inside the websocket class and use that.
Neither produces the desired results.
When I run Tornado from the command line by hand, I do see the `print` version printed to the console, but `logging` doesn't work either.
Where am I going wrong?
Answer: Bah, I got it. I was using `getLogger()` without setting the logging level and just blindly logging to `DEBUG`.
Explicitly calling `logger.setLevel(logging.DEBUG)` showed me my messages in the logs.
Apparently Tornado sets some other level by default. Stupid me.
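A minimal sketch of the fix applied to one of the handlers (the logger name is arbitrary):

    import logging
    import tornado.websocket

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)   # without this, the default level hid the messages

    class GetSocketHandler(tornado.websocket.WebSocketHandler):
        def open(self):
            logger.debug("opening")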
|
flask url_for TypeError
Question: I get an error when trying to use the url_for method in Flask. I'm not sure what the cause of it is, because I'm only following the Flask quickstart. I'm a Java guy with a bit of Python experience and want to learn Flask.
Here's the trace:
Traceback (most recent call last):
File "hello.py", line 36, in <module>
print url_for(login)
File "/home/cobi/Dev/env/flask/latest/flask/helpers.py", line 259, in url_for
if endpoint[:1] == '.':
TypeError: 'function' object has no attribute '__getitem__
My code is like this:
from flask import Flask, url_for

app = Flask(__name__)
app.debug = True

@app.route('/login/<username>')
def login(): pass

with app.test_request_context():
    print url_for(login)
I have tried both the stable and development versions of Flask and the error still occurs. Any help will be much appreciated! Thank you, and sorry if my English is not very good.
Answer: The [docs](http://flask.pocoo.org/docs/api/#flask.url_for) say that `url_for`
takes a string, not a function. You also need to provide a username since the
route you created requires one.
Do this instead:
with app.test_request_context():
    print url_for('login', username='testuser')
You are receiving this error because strings have a `__getitem__` method but
functions do not.
>>> def myfunc():
... pass
...
>>> myfunc.__getitem__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'function' object has no attribute '__getitem__'
>>> 'myfunc'.__getitem__
<method-wrapper '__getitem__' of str object at 0x10049fde0>
>>>
|
Python script that gets my latest tweet stopped working on my server
Question: Consider the following code:
import twitter
api = twitter.Api()
most_recent_status = api.GetUserTimeline('nemesisdesign')[0].text
On my server (nemesisdesign.net) it stopped working a few days ago. If I try the same code from my own machine it works fine.
This is the stack trace:
>>> most_recent_status = api.GetUserTimeline('nemesisdesign')[0].text
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "build/bdist.linux-i686/egg/twitter.py", line 1414, in GetUserTimeline
json = self._FetchUrl(url, parameters=parameters)
File "build/bdist.linux-i686/egg/twitter.py", line 2032, in _FetchUrl
url_data = opener.open(url, encoded_post_data).read()
File "/usr/local/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
File "/usr/local/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/local/lib/python2.6/urllib2.py", line 435, in error
return self._call_chain(*args)
File "/usr/local/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/local/lib/python2.6/urllib2.py", line 518, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: Not Found
Any hint? I have no clue ... :-(
Answer: First 3 general points:
1. Twitter deprecated many portions of their API as Paul noted. You should check to see if you're using a currently supported mechanism.
2. The official twitter python docs are beyond bad. Last I checked, they did not reflect any current libraries or connection protocols. Use them as a last resort.
3. The twitter recommended python libraries are all grossly out of date. Many don't work at all, most don't work against the current API.
I'd suggest you switch libraries to Twython. It's actively maintained and
supports the current API. There are a handful of other actively maintained
twitter libraries too. If something has been patched in 2 months though - find
another library.
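For reference, a minimal sketch with Twython (the keys and tokens are placeholders you would get from dev.twitter.com; this is my illustration, not the original poster's library):

    from twython import Twython

    twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
    timeline = twitter.get_user_timeline(screen_name='nemesisdesign')
    most_recent_status = timeline[0]['text']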
Looking at your exact code:
1. Your API object isn't passing in any credentials. IIRC, This was allowed in the 1.0 API , but the 1.1 API disallows this - you must authenticate with API credentials now :
* <https://dev.twitter.com/docs/api/1.1/get/statuses/user_timeline> (current)
* <https://dev.twitter.com/docs/api/1/get/statuses/user_timeline> (past)
2. You should try to inspect that object and find out the URL that created the 404. I'm guessing it was a 1.0 endpoint. You might be using a newer version of the "twitter" package (whatever that is, there are multiple ones in that namespace) locally, but a different one on the server. You should check what versions are on each machine.
3. Your code should trap the HTTP errors. Twitter's API is nice enough to use HTTP status codes as part of the error messaging ( <https://dev.twitter.com/docs/error-codes-responses> ) and will also send info in the headers. This can often tell you exactly what is going on. Libraries like Twython do most of this for you.
Your code as presented couldn't have caused that error though -- so I'm at a loss as to what is going on.
Not passing in credentials should generate a 400; bad credentials a 401. Rate limiting is 420 or 429. Blocking could be a 404 or 420.
If I had to guess though - locally you have a package that is going after the 1.1 API endpoint, but your server has a version of that package that is going after 1.0 - which no longer exists.
|
Python 3: creating a Letter Pyramid
Question: I'm trying to create a text pyramid with a height of 291 lines. By this I
mean:
Here is an example of a pyramid of height 6:
-----a-----
----bcd----
---efghi---
--jklmnop--
-qrstuvwxy-
zabcdefghij
Notice:
-- each line has the same number of characters
-- the letters a-z form the pyramid, and are re-used.
So far all I have is this:
letters="abcdefghijklmnopqrstuvwxyz"
#(291*2)-1
for i in range (581):
I really want to learn how to do this, so any help or pushes in the right
direction would be greatly appreciated, rather than the answer itself :)
Answer:
from itertools import cycle, islice

letters = "abcdefghijklmnopqrstuvwxyz"
height = 6
width = height*2-1

it = cycle(letters)
for count in range(1, width+1, 2):
    print(''.join(islice(it, count)).center(width, '-'))
This solution uses the
[**_itertools_**](http://docs.python.org/dev/library/itertools.html) module.
The
[**`cycle`**](http://docs.python.org/dev/library/itertools.html#itertools.cycle)
function makes an iterator that repeats our sequence of letters indefinitely,
and
[**`islice`**](http://docs.python.org/dev/library/itertools.html#itertools.islice)
is used to take the next `count` letters from it each time (as they're spat out character by character, we make them into one string with
[`''.join(...)`](http://docs.python.org/dev/library/stdtypes.html#str.join)).
So we have the next line, but without the dashes. The rest is easy: just
[**`center`**](http://docs.python.org/dev/library/stdtypes.html#str.center)
it.
* * *
Another very similar solution, just without _itertools_ :
letters = "abcdefghijklmnopqrstuvwxyz"
height = 6
width = height*2-1
buf = ""
def next_letters(n):
global buf, letters
while len(buf)<n:
buf += letters
ret, buf = buf[:n], buf[n:]
return ret
for count in range(1, width+1, 2):
print(next_letters(count).center(width, '-'))
The `buf` (buffer) variable will hold the next letters from the sequence. The
`next_letters` function will check if there are enough letters in it and
extend it as necessary, then return its first `n` letters and "remove" them.
Let's see how it works with a "debug version":
>
> letters = "abcdefghijklmnopqrstuvwxyz"
> height = 6
> width = height*2-1
>
> buf = ""
> def next_letters(n):
>     global buf, letters
>     print("Requested {} letters. Buffer is '{}'".format(n, buf))
>     while len(buf)<n:
>         buf += letters
>         print("Buffer was extended to '{}'".format(buf))
>     ret, buf = buf[:n], buf[n:]
>     print("Returned '{}'; buffer is now '{}'".format(ret, buf))
>     return ret
>
> for count in range(1, width+1, 2):
>     print(next_letters(count).center(width, '-'))
>
>
> Requested 1 letters. Buffer is ''
> Buffer was extended to 'abcdefghijklmnopqrstuvwxyz'
> Returned 'a'; buffer is now 'bcdefghijklmnopqrstuvwxyz'
> -----a-----
> Requested 3 letters. Buffer is 'bcdefghijklmnopqrstuvwxyz'
> Returned 'bcd'; buffer is now 'efghijklmnopqrstuvwxyz'
> ----bcd----
> Requested 5 letters. Buffer is 'efghijklmnopqrstuvwxyz'
> Returned 'efghi'; buffer is now 'jklmnopqrstuvwxyz'
> ---efghi---
> Requested 7 letters. Buffer is 'jklmnopqrstuvwxyz'
> Returned 'jklmnop'; buffer is now 'qrstuvwxyz'
> --jklmnop--
> Requested 9 letters. Buffer is 'qrstuvwxyz'
> Returned 'qrstuvwxy'; buffer is now 'z'
> -qrstuvwxy-
> Requested 11 letters. Buffer is 'z'
> Buffer was extended to 'zabcdefghijklmnopqrstuvwxyz'
> Returned 'zabcdefghij'; buffer is now 'klmnopqrstuvwxyz'
> zabcdefghij
|
Python log time window
Question: I was asked to edit a script so that it can print out a time window of information from a log file. I am having a hard time figuring out what would be the best way of going about this. The logs have the time at the start of each entry, as follows:
[11-Oct-2012 07:41:22.205608
I would need to be able to narrow it down by hour, minute, or second. I don't care about the day, month, or year.
Answer:
import datetime
datetime.datetime.strptime("11-Oct-2012 07:41:22.205608", "%d-%b-%Y %H:%M:%S.%f")
would be the way to do it.
> <http://docs.python.org/library/datetime.html#datetime.datetime.strptime> \-
> return a datetime corresponding to date_string, parsed according to
> format...
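Building on that, a minimal sketch of filtering lines by time window (the format string is taken from the question; the log file name is made up):

    import datetime

    start = datetime.time(7, 40, 0)
    end = datetime.time(7, 45, 0)

    with open('app.log') as f:
        for line in f:
            # e.g. "[11-Oct-2012 07:41:22.205608 ..." -> take the 27 chars after '['
            when = datetime.datetime.strptime(line[1:28], "%d-%b-%Y %H:%M:%S.%f")
            if start <= when.time() <= end:
                print line,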
|
Read from a gzip file in python
Question: I've just been doing exercises with gzip in Python.
import gzip
f=gzip.open('Onlyfinnaly.log.gz','rb')
file_content=f.read()
print file_content
And I get no output on the screen. As a Python beginner, I'm wondering what I should do if I want to read the content of the file inside the gzip file. Thank you.
Answer: Try gzipping some data through the gzip library like this...
import gzip
content = "Lots of content here"
f = gzip.open('Onlyfinnaly.log.gz', 'wb')
f.write(content)
f.close()
... then run your code as posted ...
import gzip
f=gzip.open('Onlyfinnaly.log.gz','rb')
file_content=f.read()
print file_content
This method worked for me as for some reason the gzip library fails to read
some files.
|
How to match a string against a set of wildcard strings efficiently?
Question: I am looking for a solution to match a single string against a set of wildcard
strings. For example
>>> match("ab", ["a*", "b*", "*", "c", "*b"])
["a*", "*", "*b"]
The order of the output is of no importance.
I will have in the order of 10^4 wildcard strings to match against and I will
do around ~10^9 match calls. This means I will probably have to rewrite my
code like so:
>>> matcher = prepare(["a*", "b*", "*", "c", "*b"]
>>> for line in lines: yield matcher.match("ab")
["a*", "*", "*b"]
I've started writing a trie implementation in Python that handles wildcards
and I just need to get those corner cases right. Despite this I am curious to
hear; **How would you solve this? Are there any Python libraries out there
that make me solve this faster?**
Some insights so far:
* Named (Python, re) regular expressions will not help me here since they'll only return one match.
* [pyparsing](http://pyparsing.wikispaces.com/) seems like an awesome library, but is sparsely documented and does not, as I see it, support matching multiple patterns.
Answer: Seems like the [Aho-Corasick algorithm](http://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_string_matching_algorithm) would work. [esmre](http://code.google.com/p/esmre/) seems to do what I'm looking for. I got this information from [this question](http://stackoverflow.com/questions/7049894/how-to-efficiently-match-an-input-string-against-several-regular-expressions-at).
|
Country name from ISO short code in dictionary, how to deal with non-ascii chars
Question: I'm making a webapp that takes a country short code (Google App Engine gets it from the request header) and I want to get the full country name, not just the 2-letter code.
I tried making a Python dictionary but it breaks because the names have non-ASCII chars (accent marks, etc.). I used the Python library "pycountry" but I'm not sure how to include that in my Google App Engine project. Unfortunately pycountry's output also has accent marks, so I can't just copy its text values and make a dictionary...
Besides, I just want the country code to name lookup table, no other details...
Here's a copy of the dictionary I've been trying to make, but it has these annoying accent marks...
Thanks for the help in advance
short2long = {"AF":"Afghanistan",
"AX":"Aland Islands",
"AL":"Albania",
"DZ":"Algeria",
"AS":"American Samoa",
"AD":"Andorra",
"AO":"Angola",
"AI":"Anguilla",
"AQ":"Antarctica",
"AG":"Antigua and Barbuda",
"AR":"Argentina",
"AM":"Armenia",
"AW":"Aruba",
"AU":"Australia",
"AT":"Austria",
"AZ":"Azerbaijan",
"BS":"Bahamas",
"BH":"Bahrain",
"BD":"Bangladesh",
"BB":"Barbados",
"BY":"Belarus",
"BE":"Belgium",
"BZ":"Belize",
"BJ":"Benin",
"BM":"Bermuda",
"BT":"Bhutan",
"BO":"Bolivia, Plurinational State of",
"BQ":"Bonaire, Sint Eustatius and Saba",
"BA":"Bosnia and Herzegovina",
"BW":"Botswana",
"BV":"Bouvet Island",
"BR":"Brazil",
"IO":"British Indian Ocean Territory",
"BN":"Brunei Darussalam",
"BG":"Bulgaria",
"BF":"Burkina Faso",
"BI":"Burundi",
"KH":"Cambodia",
"CM":"Cameroon",
"CA":"Canada",
"CV":"Cape Verde",
"KY":"Cayman Islands",
"CF":"Central African Republic",
"TD":"Chad",
"CL":"Chile",
"CN":"China",
"CX":"Christmas Island",
"CC":"Cocos (Keeling) Islands",
"CO":"Colombia",
"KM":"Comoros",
"CG":"Congo",
"CD":"Congo, The Democratic Republic of the",
"CK":"Cook Islands",
"CR":"Costa Rica",
"CI":"Côte d'Ivoire",
"HR":"Croatia",
"CU":"Cuba",
"CW":"Curaçao",
"CY":"Cyprus",
"CZ":"Czech Republic",
"DK":"Denmark",
"DJ":"Djibouti",
"DM":"Dominica",
"DO":"Dominican Republic",
"EC":"Ecuador",
"EG":"Egypt",
"SV":"El Salvador",
"GQ":"Equatorial Guinea",
"ER":"Eritrea",
"EE":"Estonia",
"ET":"Ethiopia",
"FK":"Falkland Islands (Malvinas)",
"FO":"Faroe Islands",
"FJ":"Fiji",
"FI":"Finland",
"FR":"France",
"GF":"French Guiana",
"PF":"French Polynesia",
"TF":"French Southern Territories",
"GA":"Gabon",
"GM":"Gambia",
"GE":"Georgia",
"DE":"Germany",
"GH":"Ghana",
"GI":"Gibraltar",
"GR":"Greece",
"GL":"Greenland",
"GD":"Grenada",
"GP":"Guadeloupe",
"GU":"Guam",
"GT":"Guatemala",
"GG":"Guernsey",
"GN":"Guinea",
"GW":"Guinea-Bissau",
"GY":"Guyana",
"HT":"Haiti",
"HM":"Heard Island and McDonald Islands",
"VA":"Holy See (Vatican City State)",
"HN":"Honduras",
"HK":"Hong Kong",
"HU":"Hungary",
"IS":"Iceland",
"IN":"India",
"ID":"Indonesia",
"IR":"Iran, Islamic Republic of",
"IQ":"Iraq",
"IE":"Ireland",
"IM":"Isle of Man",
"IL":"Israel",
"IT":"Italy",
"JM":"Jamaica",
"JP":"Japan",
"JE":"Jersey",
"JO":"Jordan",
"KZ":"Kazakhstan",
"KE":"Kenya",
"KI":"Kiribati",
"KP":"Korea, Democratic People's Republic of",
"KR":"Korea, Republic of",
"KW":"Kuwait",
"KG":"Kyrgyzstan",
"LA":"Lao People's Democratic Republic",
"LV":"Latvia",
"LB":"Lebanon",
"LS":"Lesotho",
"LR":"Liberia",
"LY":"Libya",
"LI":"Liechtenstein",
"LT":"Lithuania",
"LU":"Luxembourg",
"MO":"Macao",
"MK":"Macedonia, Republic of",
"MG":"Madagascar",
"MW":"Malawi",
"MY":"Malaysia",
"MV":"Maldives",
"ML":"Mali",
"MT":"Malta",
"MH":"Marshall Islands",
"MQ":"Martinique",
"MR":"Mauritania",
"MU":"Mauritius",
"YT":"Mayotte",
"MX":"Mexico",
"FM":"Micronesia, Federated States of",
"MD":"Moldova, Republic of",
"MC":"Monaco",
"MN":"Mongolia",
"ME":"Montenegro",
"MS":"Montserrat",
"MA":"Morocco",
"MZ":"Mozambique",
"MM":"Myanmar",
"NA":"Namibia",
"NR":"Nauru",
"NP":"Nepal",
"NL":"Netherlands",
"NC":"New Caledonia",
"NZ":"New Zealand",
"NI":"Nicaragua",
"NE":"Niger",
"NG":"Nigeria",
"NU":"Niue",
"NF":"Norfolk Island",
"MP":"Northern Mariana Islands",
"NO":"Norway",
"OM":"Oman",
"PK":"Pakistan",
"PW":"Palau",
"PS":"Palestinian Territory, Occupied",
"PA":"Panama",
"PG":"Papua New Guinea",
"PY":"Paraguay",
"PE":"Peru",
"PH":"Philippines",
"PN":"Pitcairn",
"PL":"Poland",
"PT":"Portugal",
"PR":"Puerto Rico",
"QA":"Qatar",
"RE":"Réunion",
"RO":"Romania",
"RU":"Russian Federation",
"RW":"Rwanda",
"BL":"Saint Barthélemy",
"SH":"Saint Helena, Ascension and Tristan da Cunha",
"KN":"Saint Kitts and Nevis",
"LC":"Saint Lucia",
"MF":"Saint Martin (French part)",
"PM":"Saint Pierre and Miquelon",
"VC":"Saint Vincent and the Grenadines",
"WS":"Samoa",
"SM":"San Marino",
"ST":"Sao Tome and Principe",
"SA":"Saudi Arabia",
"SN":"Senegal",
"RS":"Serbia",
"SC":"Seychelles",
"SL":"Sierra Leone",
"SG":"Singapore",
"SX":"Sint Maarten (Dutch part)",
"SK":"Slovakia",
"SI":"Slovenia",
"SB":"Solomon Islands",
"SO":"Somalia",
"ZA":"South Africa",
"GS":"South Georgia and the South Sandwich Islands",
"ES":"Spain",
"LK":"Sri Lanka",
"SD":"Sudan",
"SR":"Suriname",
"SS":"South Sudan",
"SJ":"Svalbard and Jan Mayen",
"SZ":"Swaziland",
"SE":"Sweden",
"CH":"Switzerland",
"SY":"Syrian Arab Republic",
"TW":"Taiwan, Province of China",
"TJ":"Tajikistan",
"TZ":"Tanzania, United Republic of",
"TH":"Thailand",
"TL":"Timor-Leste",
"TG":"Togo",
"TK":"Tokelau",
"TO":"Tonga",
"TT":"Trinidad and Tobago",
"TN":"Tunisia",
"TR":"Turkey",
"TM":"Turkmenistan",
"TC":"Turks and Caicos Islands",
"TV":"Tuvalu",
"UG":"Uganda",
"UA":"Ukraine",
"AE":"United Arab Emirates",
"GB":"United Kingdom",
"US":"United States",
"UM":"United States Minor Outlying Islands",
"UY":"Uruguay",
"UZ":"Uzbekistan",
"VU":"Vanuatu",
"VE":"Venezuela, Bolivarian Republic of",
"VN":"Viet Nam",
"VG":"Virgin Islands, British",
"VI":"Virgin Islands, U.S.",
"WF":"Wallis and Futuna",
"EH":"Western Sahara",
"YE":"Yemen",
"ZM":"Zambia",
"ZW":"Zimbabwe"}
I tried to use this code to build the dictionary
import pycountry
t = list(pycountry.countries)
for country in t:
    print '"' + country.alpha2 + '":"' + country.name + '",'
Answer: Python source files default to the ASCII character encoding. If you want to
include characters outside of this range in your source code, then you will
need to declare the file's character encoding as described in [PEP
0263](http://www.python.org/dev/peps/pep-0263/). For example, adding the
following line to the top of the file might do what you want (assuming the
file is encoded in UTF-8):
# -*- coding: utf-8 -*-
This should cause the string objects to contain UTF-8 encoded versions of the
country names. If you were using unicode string literals instead, then the
non-ASCII characters would be decoded correctly too.
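For example, with the encoding declaration in place, entries taken from the question's own table work fine (a small excerpt; the `u` prefix makes them unicode literals):

    # -*- coding: utf-8 -*-
    short2long = {
        "CI": u"Côte d'Ivoire",
        "RE": u"Réunion",
    }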
|
Transformed Data Takes up 4x more space for a doubling of variables in pandas for python
Question: I've performed some simple z-transforms on some variables I have in a pandas
DataFrame. There was a total of 216 columns in the dataframe, I transformed
196 of them and then concatenated the 197 onto the original 216 for a total of
412 total columns. I then used the `to_csv` function to write the new
dataframe to a csv. The original data is about 300mb, while the new dataset is
1.2gb. It seems odd that adding less than double the columns leads to around a
4x increase in size for the final csv. The code I used is below. Am I missing
something? Or is there a more efficient way to write DataFrames to .csv files?
Everything looks fine when I take a look at the first row of the data. Also,
the number of rows are the same between all three of the DataFrames created in
the code below.
import pandas as pd

full_data = pd.read_csv('data.csv')
names = full_data.columns.tolist()
names = names[16:-2]
len(names) #197 as expected

transform = (full_data[names] - full_data[names].mean())/full_data[names].std() #Transform has 197 columns as expected.

column_names = transform.columns.tolist()
new_names = {}
for name in column_names:
    new_names[name] = name + '_standardized'
transform = transform.rename(columns=new_names)

to_concat = [full_data, transform]
final_data = pd.concat(to_concat, axis=1)
final_data.to_csv('transformed_data.csv', index = False)
Answer: The CSV stores string representations of data, so it's not necessarily going
to scale in an obvious way with the number of columns unless all columns have
roughly the same size in string representation. It's quite plausible that your
CSV could increase a lot in size if your original data had only a few decimal
places. If you read in numbers like 0.1, 0.2, 3, 1.7, whatever, and then
z-scale them, you're likely to get results with many decimal places. As a
simple example, I did this:
>>> df = pandas.DataFrame([[2, 3, 5]], columns=["A", "B", "C"])
>>> df
A B C
0 2 3 5
>>> df.to_csv('someCSV.csv')
>>> df**0.5
A B C
0 1.414214 1.732051 2.236068
>>> (df**0.5).to_csv('someCSV2.csv')
I didn't add any rows or columns to the data at all, just took the square
root, but the second CSV is 4 times the size of the first, because the second
one has lots of decimal places that take more bytes to write out in string
form. You're likely to get similarly long decimals when you divide by the
standard deviation.
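One optional way to keep the file size down (not part of the answer above) is to limit how many digits get written out, using `to_csv`'s `float_format` argument:

    # write floats with at most 6 significant digits
    final_data.to_csv('transformed_data.csv', index=False, float_format='%.6g')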
|
Unit test python udev interaction
Question: I've inherited some python code that writes a new '/etc/udev/rules.d' mapping
file and then makes a subprocess call to udev to have it refresh its devices
list:
call(['/sbin/udevadm', 'trigger', '--action=change'])
The trigger call is necessary as we need the mapping to update without wanting
to unplug and plug back in the device being mapped. My problem is the 'call'
line was removed at one point, causing non-obvious side-effects in other parts
of the program and so was not caught.
My usual way of fixing something like this is to throw a unit-test on this
method (which writes the mapping file and calls trigger) to enforce the
expected behaviour, but this behaviour seems outside of the realm of unit-
testing. It's a system call, not to mention udevadm trigger requires sudo
access. I can't figure out how/what to mock out in this instance.
I considered using the pyudev library as I saw that it can mock certain
behaviour of udev, but it doesn't look like it can mock the trigger behaviour
(or even access it for that matter).
Short of throwing a big "#DO NOT DELETE THIS LINE EVER!" above the "call"
line, is there anything I can do here to prevent this from being removed in
the future? "DO NOT DELETE" lines are easily ignored say a year from now when
no one has a clue why it's there.
Answer: Here's what I decided to do in this instance, if anyone disagrees with my
answer please chime in!
It's a two parter.
First I wrapped the call I was concerned about and switched the straight
'call' to this in my code.
class UdevWrapper:
    def udevadm_trigger(self):
        call(['/sbin/udevadm', 'trigger', '--action=change'])
(The class contains more than this, just simplified here for clarity)
Next I mocked out the wrapper method and tested to ensure that it was called
@patch.object(utils.UdevWrapper,'udevadm_trigger')
def test_trigger_called(self,mock_udevadm_trigger):
    mock_udevadm_trigger.return_value = True
    # name changed for clarity
    ClassWhereTriggerCalled.func()
    assert mock_udevadm_trigger.called
Doing it this way was inspired by reading about [Behaviour Driven
Development](http://dannorth.net/introducing-bdd/). The whole idea of BDD is
brand new to me so I'm not sure stylistically/functionally how BDD proponents
will feel about my solution but it does what I wanted - something obviously
breaks (my test) if this line of code is removed in the future.
I plan on switching it to use _monkeypatch_ in the future so that I can create
a stub function that can check state (the order in which trigger is called is
also important). However the principle remains the same:
1. Wrap in a method that can be mocked
2. Write test to ensure correct functional behaviour
3. Mock wrapped method
|
JDBC driver not found error in monkeyrunner/jython
Question: I need to insert something into the DB. I'm using JDBC as the connector, Jython for the script, MySQL for the DB, and the script is running on CentOS.
My code looks something like this:
> from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice, MonkeyImage
>
> from com.ziclix.python.sql import zxJDBC
>
> db = zxJDBC.connect("jdbc:mysql://XXX.XXX.XXX.XXX:3306/dbname","USER","PASSWORD","org.gjt.mm.mysql.Driver")
>
> c = db.cursor()
>
> c.execute("INSERT INTO tablename values ('X','X','X')")
Before that, I downloaded and decompressed the file from [here](http://dev.mysql.com/downloads/connector/j/) (on the desktop). I added the path to the classpath by doing this:
export PATH=/home/XX/Desktop/mysql-connector-java-5.1.22
and when I ran the script, it gave me this error
> `zxJDBC.DatabaseError.driver [org.gjt.mm.mysql.Driver]` not found
What have I done wrong? Is the name of the driver correct? I just copied it from one of the tutorials I've seen. Or did I perhaps not install the driver correctly?
Thanks.
Answer: this is how I managed to solve the error:
1. Download the JDBC driver [here](http://dev.mysql.com/downloads/connector/j/)
2. Extract the tar.gz file anywhere you want.
3. You will find mysql-connector-java-5.1.22-bin.jar inside that folder. Copy that and paste to (in my case) /%android-sdk%/tools/lib
4. Add the new location of mysql-connector-java-5.1.22-bin.jar to classpath
5. do the script like this
> from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice, MonkeyImage
>
> from com.ziclix.python.sql import zxJDBC
>
>
> db=zxJDBC.connect("jdbc:mysql://XXX.XXX.XXX.XXX:3306/dbname","USER","PASSWORD","com.mysql.jdbc.Driver")
>
> c=db.cursor()
>
> c.execute("INSERT INTO tablename values ('X','X','X')")
>
> db.commit()
Hope this helps to those who will need it in the future. :)
|
Issues with translating the strings
Question: I am using **multiple translations** in my project. For that I have updated my **settings file** as:
LANGUAGE_CODE = 'en-us'

gettext = lambda s: s
LANGUAGES = (
    ('es', gettext('Spanish')),
    ('en', gettext('English')),
)

LOCALE_PATHS = (
    '/mnt/aviesta/pythondev/django/locale',
)

USE_I18N = True

MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.locale.LocaleMiddleware',
)

TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.core.context_processors.static",
    "django.core.context_processors.request",
)
And my **template file** as:
{% load i18n %}
{% trans "Hello" %}
<p>Already a user <a href="/login/"><b>{% trans "login here" %}</b></a></p>
After that I created a locale folder parallel to my app and then created the specific language folders in it with **django-admin.py makemessages -l es**, which creates the .po file. Then I updated this .po file as:
#: customer_reg/customer_register.html:14
msgid "Hello"
msgstr "¡Hola"
#: customer_reg/customer_register.html:17
msgid "login here"
msgstr "ingresa aquí"
And finally I _compiled_ my messages with **django-admin.py compilemessages**.
But my strings "Hello" and "login here" remain in **English**; they are not translated. I don't know why this happens.
Answer: Everything looks great with your code. The only thing I've never seen is the `LOCALE_PATHS` in `settings.py`.
Maybe this answer I gave some time ago could help you: [Issues with multiple languages](http://stackoverflow.com/questions/12852805/issues-with-multiple-languages/12853503#12853503)
**Edit // Answer to comment**
from django.utils import translation
translation.activate('es')
|
wxPython: Problems with GridBagSizer
Question: I'm new to wxPython and it seems to be a very powerful tool for building up a GUI, but I have a question about GridBagSizer. Could you please tell me how to adjust the size of the items that are placed inside a GridBagSizer to the size of the frame they are supposed to be placed in? I attached a sample at the end: the buttons from 8 to 13 go beyond the frame, but what I want is to make the size of the buttons automatically resize to fit the frame. Any advice and suggestions are appreciated.
Best regards, Nikita
import wx
class Frame ( wx.Frame ):
def __init__( self ):
wx.Frame.__init__ ( self, None, id = wx.ID_ANY, title = wx.EmptyString, pos = wx.DefaultPosition, size = wx.Size( 500,300 ), style = wx.DEFAULT_FRAME_STYLE & ~ (wx.MAXIMIZE_BOX|wx.MINIMIZE_BOX|wx.RESIZE_BORDER) )
nums = (('1',(0,3)), ('2',(2,3)), ('14', (3,3)), ('15',(4,3)), ('16',(5,3)), ('17',(6,3)), ('18',(7,3)),
('3',(1,0)), ('4',(1,1)), ('5',(1,2)), ('6',(1,3)), ('7',(1,4)), ('8',(1,5)), ('9',(1,6)), ('10',(1,7)),
('11',(1,8)), ('12',(1,9)), ('13',(1,10)))
panel = wx.Panel(self,-1)
panel.SetBackgroundColour("green")
grid = wx.GridBagSizer(2,2)
for num, pos in nums:
button = wx.Button(panel, wx.NewId(), label = num)
grid.Add(button, pos, flag=wx.EXPAND)
bs2 = wx.BoxSizer(wx.VERTICAL)
bs2.Add(grid, 0, wx.EXPAND,0)
panel.SetSizer(bs2)
panel.Layout()
self.mainSizer = wx.BoxSizer(wx.VERTICAL)
self.mainSizer.Add(panel, 0, wx.EXPAND, 0)
self.SetSizer(self.mainSizer)
self.Layout()
if __name__=='__main__':
app=wx.PySimpleApp()
app.frame = Frame()
app.frame.Center()
app.frame.Show(True)
app.MainLoop()
Answer: You must allow the columns to grow by calling `AddGrowableCol`:
for i in range(11):
    grid.AddGrowableCol(i)
Now, they can grow, but they won't shrink beyond their initial size! So you
should also reduce their starting size:
for num, pos in nums:
    button = wx.Button(panel, wx.NewId(), size=(10,-1), label = num)
    #original code was: button = wx.Button(panel, wx.NewId(), label = num)
    grid.Add(button, pos, flag=wx.EXPAND)
Here is the complete listing:
import wx
class Frame ( wx.Frame ):
def __init__( self ):
wx.Frame.__init__ ( self, None, id = wx.ID_ANY, title = wx.EmptyString, pos =
wx.DefaultPosition, size = wx.Size( 500,300 ),
style = wx.DEFAULT_FRAME_STYLE & ~ (wx.MAXIMIZE_BOX|wx.MINIMIZE_BOX|wx.RESIZE_BORDER) )
nums = (('1',(0,3)), ('2',(2,3)), ('14', (3,3)), ('15',(4,3)), ('16',(5,3)), ('17',(6,3)), ('18',(7,3)),
('3',(1,0)), ('4',(1,1)), ('5',(1,2)), ('6',(1,3)), ('7',(1,4)), ('8',(1,5)), ('9',(1,6)), ('10',(1,7)),
('11',(1,8)), ('12',(1,9)), ('13',(1,10)))
panel = wx.Panel(self,-1)
panel.SetBackgroundColour("green")
grid = wx.GridBagSizer(2,2)
for num, pos in nums:
button = wx.Button(panel, wx.NewId(), size=(10,-1), label = num)
grid.Add(button, pos, flag=wx.EXPAND)
for i in range(11):
grid.AddGrowableCol(i)
bs2 = wx.BoxSizer(wx.VERTICAL)
bs2.Add(grid, 0, wx.EXPAND,0)
panel.SetSizer(bs2)
panel.Layout()
self.mainSizer = wx.BoxSizer(wx.VERTICAL)
self.mainSizer.Add(panel, 0, wx.EXPAND, 0)
self.SetSizer(self.mainSizer)
self.Layout()
if __name__=='__main__':
app=wx.PySimpleApp()
app.frame = Frame()
app.frame.Center()
app.frame.Show(True)
app.MainLoop()
|
Python: is the system function too slow?
Question: I have a small network with 3 computers. I have a C++ program on the 2nd
computer that reads packets from a network interface while the first computer
sends it data. I need to run that from the third computer. I wrote a small
python script using `flask`
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    os.system("./a.out &")

if __name__ == "__main__":
    app.run(host='0.0.0.0')
The problem is, there seems to be a small delay while starting the executable
using `os.system()` and the second computer misses some packets. Is there a
way to speed this up?
Answer: I would highly recommend the use of a queue of some sort. The performance of a
high level Web server calling a system function can be unpredictable (based on
how busy the machine is etc).
If you had a process reading in the background, doing the communication and queuing the packets, and the Web endpoint read from the queue, that would be the most reliable solution.
|
Spynner doesn't load html from URL
Question: I use spynner for scraping data from a site. My code is this:
import spynner
br = spynner.Browser()
br.load("http://www.venere.com/it/hotel/roma/hotel-ferrari/#reviews")
text = br._get_html()
This code fails to load the entire html page. This is the html that I
received:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head>
<script type="text/javascript">(function(){var d=document,m=d.cookie.match(/_abs=(([or])[a-z]*)/i)
v_abs=m?m[1].toUpperCase():'N'
if(m){d.cookie='_abs='+v_abs+'; path=/; domain=.venere.com';if(m[2]=='r')location.reload(true)}
v_abp='--OO--OOO-OO-O'
v_abu=[,,1,1,,,1,1,1,,1,1,,1]})()
My question is: how do I load the complete html?
More information:
I tried with:
import spynner
br = spynner.Browser()
respond = br.load("http://www.venere.com/it/hotel/roma/hotel-ferrari/#reviews")
if respond == None:
    br.wait_load()
but loading the html is never complete or reliable. What is the problem? I'm going crazy.
Again: I'm working in Django 1.3. If I use the same code in plain Python (2.7) it sometimes loads all the html.
Answer: With the code below, after you check the contents of test.html you will find the p elements with id="feedback-...somenumber...":
import spynner

def content_ready(browser):
    if 'id="feedback-' in browser.html:
        return True

br = spynner.Browser()
br.load("http://www.venere.com/it/hotel/roma/hotel-ferrari/#reviews", wait_callback=content_ready)
with open("test.html", "w") as hf:
    hf.write(br.html.encode("utf-8"))
|
How to share secondary y-axis between subplots in matplotlib
Question: If you have multiple subplots containing a secondary y-axis (created using
_twinx_), how can you share these secondary y-axis between the subplots? I
want them to scale equally in an automatic way (so not setting the y-limits
afterwards by hand). For the primary y-axis, this is possible by using the
keyword _sharey_ in the call of _subplot_.
The example below shows my attempt, but it fails to share the secondary y-axis of both subplots. I'm using Matplotlib/Pylab:
ax = []
#create upper subplot
ax.append(subplot(211))
plot(rand(1) * rand(10),'r')
#create plot on secondary y-axis of upper subplot
ax.append(ax[0].twinx())
plot(10*rand(1) * rand(10),'b')
#create lower subplot and share y-axis with primary y-axis of upper subplot
ax.append(subplot(212, sharey = ax[0]))
plot(3*rand(1) * rand(10),'g')
#create plot on secondary y-axis of lower subplot
ax.append(ax[2].twinx())
#set twinxed axes as the current axes again,
#but now attempt to share the secondary y-axis
axes(ax[3], sharey = ax[1])
plot(10*rand(1) * rand(10),'y')
This gets me something like:

The reason I used the _axes()_ function to set the shared y-axis is that
_twinx_ doesn't accept the _sharey_ keyword.
I'm using Python 3.2 on Win7 x64. The Matplotlib version is 1.2.0rc2.
Answer: You can use `Axes.get_shared_y_axes()` like so:
from numpy.random import rand
import matplotlib
matplotlib.use('gtkagg')
import matplotlib.pyplot as plt
# create all axes we need
ax0 = plt.subplot(211)
ax1 = ax0.twinx()
ax2 = plt.subplot(212)
ax3 = ax2.twinx()
# share the secondary axes
ax1.get_shared_y_axes().join(ax1, ax3)
ax0.plot(rand(1) * rand(10),'r')
ax1.plot(10*rand(1) * rand(10),'b')
ax2.plot(3*rand(1) * rand(10),'g')
ax3.plot(10*rand(1) * rand(10),'y')
plt.show()
Here we're just joining the secondary axes together.
Hope that helps.
|
looping over large csv python
Question: I have a large csv file (several hundred lines) with the following structure:
_filename, sitename, servername_
This csv file contains several duplicates, since the servernames are those of a cluster (always the same couples) and there are language aliases for the sitenames (e.g. mijnhuis, myhouse, mamaison).
The result I'd like to have is the following (notice the ; and the ,):
_filename; sitename1, sitename2, sitename3; servername1, servername2, servername3;_
How could I do this in the most optimized way?
PS: actual code would be nice, but if you can give me directions, I'd be
equally happy.
Answer: Use at your own risk:
import collections as c

f = open('input', 'r')
sites, servers = c.defaultdict(set), c.defaultdict(set)
files = set()
for line in f:
    parts = line.split(',')
    fi,site,server = [p.strip() for p in parts]
    files.add(fi)
    sites[fi].add(site)
    servers[fi].add(server)

for f in files:
    print "%s; %s; %s" % (f, ", ".join(sites[f]), ", ".join(servers[f]))
|
Sorting: Return an array with new positions of each element
Question: I need to sort an array whilst also returning an array which contains the
sorted positions of the original elements. (N.B. not an argsort, the indexes
to sort the array)
At present this requires two steps:
1. An argsort
2. A scatter operation on a new array i.e. pos[argsort[i]] = i
I feel like I am missing a trick here. Is this a well known algorithm that I
have overlooked that can be achieved in one step?
Step 2 can also be implemented with a search, but I think the scatter is more
efficient.
I have included some example python code to illustrate the problem.
import numpy as np

l = [0,-8,1,10,13,2]
a = np.argsort(l)
# returns [1 0 2 5 3 4], the order required to sort l

# init new list to zero
pos = [0 for x in range(0,len(l))]

# scatter http://en.wikipedia.org/wiki/Gather-scatter_(vector_addressing)
for i in range(0,len(l)):
    pos[a[i]] = i

print pos
# prints [1, 0, 2, 4, 5, 3], i.e. each original index's new position in the sorted array
Searching for references to this problem has left me frustrated; maybe I am missing the correct terminology for this type of operation.
Any help or guidance would be much appreciated.
Answer: Here's a simple implementation, although it's not "in-place" in any meaningful
sense. I'm not sure what you mean by "in-place", since the output is an
np.array of type int and the input could contain doubles.
import numpy as np

l = np.array([0,-8,1,10,13,2])

def myargsort(numbers):
    tuples = enumerate(numbers) # returns iterable of index,value
    return np.array([idx for idx,val in sorted(tuples,key = lambda pair: pair[1])])

print(l)
print(myargsort(l))
print(np.argsort(l))
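As an aside (not part of the answer above), the scatter step can also be expressed as an argsort of the argsort, since the positions array is just the inverse permutation:

    import numpy as np

    l = [0, -8, 1, 10, 13, 2]
    pos = np.argsort(np.argsort(l))
    print(pos)  # [1 0 2 4 5 3], matching the question's expected output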
|
Importing a class to another class in python
Question: I am trying to learn Python. I tried to import a class into another class, but it is not working.
`Application.py`:
class Application:
    def example(self):
        return "i am from Application class"
`Main.py`
class Main:
    def main():
        application = Application()
        application.example()

if __name__ == "__main__":
    Main.main()
This gives me :
File "Main.py", line 11, in <module>
Main.main()
TypeError: unbound method main() must be called with Main instance as first argument (got nothing instead)
Answer: You should instantiate your Main class first.
if __name__ == '__main__':
    myMain = Main()
    myMain.main()
But this will give you another error:
> TypeError: main() takes no arguments (1 given)
There are two ways to fix this. Either make Main.main take one argument:
class Main:
    def main(self):
        application = Application()
        application.example()
or make Main.main a static method. In which case you don't have to instantiate
your Main class:
class Main:
    @staticmethod
    def main():
        application = Application()
        application.example()

if __name__ == "__main__":
    Main.main()
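Note that Main.py as posted also never imports Application, so putting the pieces together (a sketch assuming the two-file layout from the question):

    # Main.py
    from Application import Application

    class Main:
        @staticmethod
        def main():
            application = Application()
            print application.example()

    if __name__ == "__main__":
        Main.main()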
|
PyQt4 QPixmap Rotating Jpg according to EXIF
Question: Consider [simple `PyQt4` app, loading picture with
`QPixmap`](http://zetcode.com/tutorials/pyqt4/widgets2/) and [scaling with
ratio](http://stackoverflow.com/a/9351984/544721).
Crucial part of code:
from PyQt4 import QtGui, QtCore
(...)
pixmap = QtGui.QPixmap("example.jpg")
pixmap = pixmap.scaled(1100, 1800, QtCore.Qt.KeepAspectRatio)
I was surprised to see my photos in the wrong rotation.
I suppose, the reason is that photos contain `EXIF` information about camera
position, which should be considered and rotation applied:
$ exiftool example.jpg | grep -i rot
Orientation : Rotate 270 CW
Auto Rotate : Rotate 270 CW
Rotation : 270
How can I do this with `PyQt4`, staying close to the original short form of the program... preferably in a _short & sweet_ and _pythonic_ way?
Answer: You need a way to read and parse the EXIF information. (Py)Qt doesn't have
that functionality built-in. But there are nice Python tools out there, like
[`pyexiv2`](http://tilloy.net/dev/pyexiv2/overview.html),
[`EXIF.py`](https://github.com/ianare/exif-py),
[`PyExifTool`](http://smarnach.github.com/pyexiftool/).
Once you get the rotation angle, then it's easy to apply it to your `QPixmap`:
pixmap = pixmap.scaled(1100, 1800, QtCore.Qt.KeepAspectRatio)
pixmap = pixmap.transformed(QtGui.QTransform().rotate(angle))
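For example, a rough sketch using pyexiv2 (one of the libraries mentioned above) to read the tag and pick an angle; orientation values 2/4/5/7 also involve mirroring, which this sketch ignores:

    import pyexiv2
    from PyQt4 import QtGui, QtCore

    # map the EXIF Orientation value to a clockwise rotation angle
    ANGLES = {3: 180, 6: 90, 8: 270}

    meta = pyexiv2.ImageMetadata("example.jpg")
    meta.read()
    key = "Exif.Image.Orientation"
    angle = ANGLES.get(meta[key].value, 0) if key in meta.exif_keys else 0

    pixmap = QtGui.QPixmap("example.jpg")
    pixmap = pixmap.scaled(1100, 1800, QtCore.Qt.KeepAspectRatio)
    if angle:
        pixmap = pixmap.transformed(QtGui.QTransform().rotate(angle))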
|
Python - Twisted and PyAudio + Chat
Question: I've been playing around with the Twisted extension and have fooled around
with a chat room sort of system. However, I want to expand upon it. As of now it only supports multi-client chats with usernames etc. But I want to try and use the pyAudio extension to build a sort of VoIP application. I have a clientFactory and a simple echo protocol in the client and a serverFactory and protocol on the server, but I'm not entirely sure how to build off of this.
What would be the best way to go about doing something like this?
Code to help if you need it; client.py:
import wx
from twisted.internet import wxreactor
wxreactor.install()
# import twisted reactor
import sys
from twisted.internet import reactor, protocol, task
from twisted.protocols import basic
class ChatFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, parent=None, title="WhiteNOISE")
self.protocol = None # twisted Protocol
sizer = wx.BoxSizer(wx.VERTICAL)
self.text = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)
self.ctrl = wx.TextCtrl(self, style=wx.TE_PROCESS_ENTER, size=(300, 25))
sizer.Add(self.text, 5, wx.EXPAND)
sizer.Add(self.ctrl, 0, wx.EXPAND)
self.SetSizer(sizer)
self.ctrl.Bind(wx.EVT_TEXT_ENTER, self.send)
def send(self, evt):
self.protocol.sendLine(str(self.ctrl.GetValue()))
self.ctrl.SetValue("")
class DataForwardingProtocol(basic.LineReceiver):
def __init__(self):
self.output = None
def dataReceived(self, data):
gui = self.factory.gui
gui.protocol = self
if gui:
val = gui.text.GetValue()
gui.text.SetValue(val + data)
gui.text.SetInsertionPointEnd()
def connectionMade(self):
self.output = self.factory.gui.text # redirect Twisted's output
class ChatFactory(protocol.ClientFactory):
def __init__(self, gui):
self.gui = gui
self.protocol = DataForwardingProtocol
def clientConnectionLost(self, transport, reason):
reactor.stop()
def clientConnectionFailed(self, transport, reason):
reactor.stop()
if __name__ == '__main__':
app = wx.App(False)
frame = ChatFrame()
frame.Show()
reactor.registerWxApp(app)
reactor.connectTCP("192.168.1.115", 5001, ChatFactory(frame))
reactor.run()
server.py:
from twisted.internet import reactor, protocol
from twisted.protocols import basic
import time
def t():
return "["+ time.strftime("%H:%M:%S") +"] "
class EchoProtocol(basic.LineReceiver):
name = "Unnamed"
def connectionMade(self):
#on client connection made
self.sendLine("WhiteNOISE")
self.sendLine("Enter A Username Below...")
self.sendLine("")
self.count = 0
self.factory.clients.append(self)
print t() + "+ Connection from: "+ self.transport.getPeer().host
def connectionLost(self, reason):
#on client connection lost
self.sendMsg("- %s left." % self.name)
print t() + "- Connection lost: "+ self.name
self.factory.clients.remove(self)
def lineReceived(self, line):
#actions to do on message recive
if line == '/quit':
#close client connection
self.sendLine("Goodbye.")
self.transport.loseConnection()
return
elif line == "/userlist":
#send user list to single client, the one who requested it
self.chatters()
return
elif line.startswith("/me"):
#send an action formatted message
self.sendLine("**" + self.name + ": " + line.replace("/me",""))
return
elif line == "/?":
self.sendLine("Commands: /? /me /userlist /quit")
return
if not self.count:
self.username(line)
else:
self.sendMsg(self.name +": " + line)
def username(self, line):
#check if username already in use
for x in self.factory.clients:
if x.name == line:
self.sendLine("This username is taken; please choose another")
return
self.name = line
self.chatters()
self.sendLine("You have been connected!")
self.sendLine("")
self.count += 1
self.sendMsg("+ %s joined." % self.name)
print '%s~ %s connected as: %s' % (t(), self.transport.getPeer().host, self.name)
def chatters(self):
x = len(self.factory.clients) - 1
s = 'is' if x == 1 else 'are'
p = 'person' if x == 1 else 'people'
self.sendLine("There %s %i other %s connected:" % (s, x, p) )
for client in self.factory.clients:
if client is not self:
self.sendLine(client.name)
self.sendLine("")
def sendMsg(self, message):
#send message to all clients
for client in self.factory.clients:
client.sendLine(t() + message)
class EchoServerFactory(protocol.ServerFactory):
protocol = EchoProtocol
clients = []
if __name__ == "__main__":
reactor.listenTCP(5001, EchoServerFactory())
reactor.run()
Answer: Take a look at [Divmod Sine](http://bazaar.launchpad.net/~divmod-
dev/divmod.org/trunk/files/head:/Sine/), a SIP application server. The basic
idea is that you need an additional network server in your application that
will support the VoIP parts of the application.
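To give a rough feel for the audio side, here is an untested sketch (raw PCM frames over a second TCP connection, not SIP) of how PyAudio capture could be pushed through a dedicated Twisted protocol while your existing line-based chat stays as it is; the chunk size, sample rate and polling approach are placeholder choices only:

    import pyaudio
    from twisted.internet import protocol, reactor

    CHUNK = 1024  # frames per read; placeholder value

    class AudioSender(protocol.Protocol):
        def connectionMade(self):
            pa = pyaudio.PyAudio()
            self.stream = pa.open(format=pyaudio.paInt16, channels=1,
                                  rate=44100, input=True,
                                  frames_per_buffer=CHUNK)
            reactor.callLater(0, self.pump)

        def pump(self):
            # stream.read() blocks for roughly 23 ms per chunk; a real client
            # would move the capture into a thread or a push producer instead.
            self.transport.write(self.stream.read(CHUNK))
            reactor.callLater(0, self.pump)

On the receiving side you would write the incoming bytes into a PyAudio output stream opened with output=True.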
|
Weird Error in scrapy (CENTOS 6.2)
Question:
>>> import scrapy
>>> from scrapy.selector import HtmlXPathSelector
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/Scrapy-0.14.4-py2.7.egg/scrapy/selector /__init__.py", line 28, in <module>
from scrapy.selector.lxmlsel import *
File "/usr/local/lib/python2.7/site-packages/Scrapy-0.14.4-py2.7.egg/scrapy/selector /lxmlsel.py", line 7, in <module>
from scrapy.utils.misc import extract_regex
File "/usr/local/lib/python2.7/site-packages/Scrapy-0.14.4-py2.7.egg/scrapy/utils/misc.py", line 7, in <module>
from w3lib.html import remove_entities
File "/usr/local/lib/python2.7/site-packages/w3lib-1.2-py2.7.egg/w3lib/html.py", line 10, in <module>
from w3lib.url import safe_url_string
File "/usr/local/lib/python2.7/site-packages/w3lib-1.2-py2.7.egg/w3lib/url.py", line 11, in <module>
import cgi
File "/usr/local/lib/python2.7/cgi.py", line 51, in <module>
import mimetools
File "/usr/local/lib/python2.7/mimetools.py", line 6, in <module>
import tempfile
File "/usr/local/lib/python2.7/tempfile.py", line 34, in <module>
from random import Random as _Random
File "/usr/local/lib/python2.7/random.py", line 45, in <module>
from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil
File "math.py", line 3, in <module>
from scrapy.selector import HtmlXPathSelector
ImportError: cannot import name HtmlXPathSelector
I am using Python 2.7. I used to work on Ubuntu and never faced this kind of
problem, but on CentOS I am getting this error. By the way, I did install Scrapy;
that's why you can see the first command working perfectly.
>>> import scrapy
>>> from scrapy.selector import HtmlXPathSelector
One more thing: what does "from scrapy.selector import HtmlXPathSelector" have to do
with math.py, which is in the same folder?
Answer: Well... this is not a problem with you, but rather with CentOS vs. Ubuntu.
How did you install it? Using `pip` or `yum`?
If you installed it from `yum`, most likely the package is broken (just out
of curiosity, it would be good to know where you downloaded the package from).
As for your last question: the traceback ends in a local file called math.py that itself
does `from scrapy.selector import HtmlXPathSelector`. A file with that name in your working
directory shadows Python's standard math module (random.py tries to import math and picks up
your file instead), so renaming that file should also make the second error go away.
|
Python 3.2.3 running error
Question: I am using this code to get the organization for an IP address
import urllib
import lxml.html as lh
req= urllib.Request("http://www.ip-address.com/ip_tracer/157.123.22.11", headers={'User-Agent' : "Magic Browser"})
html = urllib.urlopen(req).read()
doc = lh.fromstring(html)
print (''.join(doc.xpath('.//*[@class="odd"]')[-1].text_content().split()))
My Python version is 3.2.3, and it shows
Traceback (most recent call last):
File "ext.py", line 3, in <module>
req= urllib.Request("http:// www.ip-address.com/ip_tracer/157.123.22.11", headers={'User-Agent' : "Magic Browser"})
AttributeError: 'module' object has no attribute 'Request'
How do I solve it? Thanks.
Answer: If you want to use
[`urllib.request.Request`](http://docs.python.org/py3k/library/urllib.request.html#urllib.request.Request),
import it like this:
import urllib.request
r = urllib.request.Request(...)
html = urllib.request.urlopen(r).read()
If you need nothing else from `urllib.request`, you can import only `Request`:
from urllib.request import Request
r = Request(...)
|
Change/Add payload of existing TCP packets?
Question: I want to change the payload of all outgoing packets, so that every packet
containing "wordA" has it changed to "wordB"; this will be done with a regex match.
I tried Python's scapy, but I don't know how to get it working.
PS: There won't be any WiFi involved here. Options that require port
forwarding are not preferred.
Answer:
    import re
    text = re.sub(r"\bwordA\b", "wordB", text)
|
Get a list of object properties from list of objects in python
Question: I will give you my specific example but this is a general python question.
I have a list of apscheduler job objects
[Link](http://packages.python.org/APScheduler/modules/job.html) I am trying to
figure out what is the most efficient way to get a list of the apscheduler job
property, 'kwargs' from my list of apscheduler jobs.
I know I can just iterate through the whole list and make a new list of
kwargs, but I was wondering if there is a more efficient/ cleaner way to do
this in python, since I am new to it. Thanks!
from apscheduler.scheduler import Scheduler
schedule = Scheduler()
jobs = schedule.get_jobs() #jobs is a list of apscheduler jobs
jobs_kwargs = ???
Answer: Well, you pretty much said it:
jobs = schedule.get_jobs()
jobs_kwargs = [j.kwargs for j in jobs]
As far as I know, there's really no "cleaner" way than that.
|
input with delimiters in python
Question: I have a problem with the following specification:
**Input:**
The first line contains an integer N, the number of elements in the given sequence.
Then follow N integers A1, A2, ..., An, where Ai is the ith element of the given
sequence. These numbers may be either space-separated or newline-separated.
How can I handle input like that? I tried the following, but it only works
for space-separated elements. When used with newline-separated elements, the
grader throws "time exceeded".
import sys
counter=0
A=[]
for line in sys.stdin:
if counter!=0:
A+=[int(Ai) for Ai in line.split()]
else:
N=int(line)
counter+=1
Answer: Assuming that you're reading from a file (`sys.stdin` is also a file):
f = open('path/to/file')
N = int(f.readline().strip())
A = []
while len(A) < N:
A.extend(int(i) for i in f.readline().strip().split())
Hope this helps
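If the grader still times out with line-by-line reads, another option (same idea, just slurping everything at once) is to read all of stdin and split on any whitespace, which treats spaces and newlines identically:

    import sys

    data = sys.stdin.read().split()   # splits on spaces and newlines alike
    N = int(data[0])
    A = [int(x) for x in data[1:N + 1]]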
|
OpenCV Python single (rather than multiple) blob tracking?
Question: I've been trying to get single-color blob tracking working through OpenCV in Python.
The code below works, but it finds the centroid of all the tracked pixels,
not just the centroid of the biggest blob. This is because I'm taking the
moments of all the pixels, but I'm not sure how else to track by color. I'm kind
of stuck on what exactly I need to do to make this a single-blob tracker
instead of a multi-blob averager.
Here's the code:
#! /usr/bin/env python
#if using newer versions of opencv, just "import cv"
import cv2.cv as cv
color_tracker_window = "Color Tracker"
class ColorTracker:
def __init__(self):
cv.NamedWindow( color_tracker_window, 1 )
self.capture = cv.CaptureFromCAM(0)
def run(self):
while True:
img = cv.QueryFrame( self.capture )
#blur the source image to reduce color noise
cv.Smooth(img, img, cv.CV_BLUR, 3);
#convert the image to hsv(Hue, Saturation, Value) so its
#easier to determine the color to track(hue)
hsv_img = cv.CreateImage(cv.GetSize(img), 8, 3)
cv.CvtColor(img, hsv_img, cv.CV_BGR2HSV)
#limit all pixels that don't match our criteria, in this case we are
#looking for purple but if you want you can adjust the first value in
#both turples which is the hue range(120,140). OpenCV uses 0-180 as
#a hue range for the HSV color model
thresholded_img = cv.CreateImage(cv.GetSize(hsv_img), 8, 1)
cv.InRangeS(hsv_img, (120, 80, 80), (140, 255, 255), thresholded_img)
#determine the objects moments and check that the area is large
#enough to be our object
moments = cv.Moments(thresholded_img, 0)
area = cv.GetCentralMoment(moments, 0, 0)
#there can be noise in the video so ignore objects with small areas
if(area > 100000):
#determine the x and y coordinates of the center of the object
#we are tracking by dividing the 1, 0 and 0, 1 moments by the area
x = cv.GetSpatialMoment(moments, 1, 0)/area
y = cv.GetSpatialMoment(moments, 0, 1)/area
#print 'x: ' + str(x) + ' y: ' + str(y) + ' area: ' + str(area)
#create an overlay to mark the center of the tracked object
overlay = cv.CreateImage(cv.GetSize(img), 8, 3)
cv.Circle(overlay, (x, y), 2, (255, 255, 255), 20)
cv.Add(img, overlay, img)
#add the thresholded image back to the img so we can see what was
#left after it was applied
cv.Merge(thresholded_img, None, None, None, img)
#display the image
cv.ShowImage(color_tracker_window, img)
if cv.WaitKey(10) == 27:
break
if __name__=="__main__":
color_tracker = ColorTracker()
color_tracker.run()
Answer: You need to do it like this:
1) Get the thresholded image using the inRange function; you can apply some
erosion and dilation to remove small noisy particles, which also helps to improve
the processing speed.
2) Find contours using the 'findContours' function.
3) Find the areas of the contours using the 'contourArea' function and select the one with
the maximum area.
4) Now find its center as you did, and track it.
Below is sample code for this in the new cv2 module:
import cv2
import numpy as np
# create video capture
cap = cv2.VideoCapture(0)
while(1):
# read the frames
_,frame = cap.read()
# smooth it
frame = cv2.blur(frame,(3,3))
# convert to hsv and find range of colors
hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
thresh = cv2.inRange(hsv,np.array((0, 80, 80)), np.array((20, 255, 255)))
thresh2 = thresh.copy()
# find contours in the threshold image
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
# finding contour with maximum area and store it as best_cnt
max_area = 0
for cnt in contours:
area = cv2.contourArea(cnt)
if area > max_area:
max_area = area
best_cnt = cnt
# finding centroids of best_cnt and draw a circle there
M = cv2.moments(best_cnt)
cx,cy = int(M['m10']/M['m00']), int(M['m01']/M['m00'])
cv2.circle(frame,(cx,cy),5,255,-1)
# Show it, if key pressed is 'Esc', exit the loop
cv2.imshow('frame',frame)
cv2.imshow('thresh',thresh2)
if cv2.waitKey(33)== 27:
break
# Clean up everything before leaving
cv2.destroyAllWindows()
cap.release()
You can find some samples on tracking colored objects here :
<https://github.com/abidrahmank/OpenCV-Python/tree/master/Other_Examples>
Also, try to use new cv2 interface. It is a lot simpler and faster than old
cv. For more details, checkout this : [What is different between all these
OpenCV Python interfaces?](http://stackoverflow.com/questions/10417108/what-
is-different-between-all-these-opencv-python-interfaces/10425504#10425504)
|
Handling Null return in rpy2
Question: I am using rpy2 to call an R function from Python.
from rpy2.robjects import *
result=r.someFunctioninR() #someFunctioninR is a R function written by myself
if result==r('as.null()'):
do something
else:
do something else
The "result" returned by someFunctioninR() can be either NULL or other format.
result==r('as.null()') does not work. Is there any other way I can get a True
when result is NULL?
Answer: Try `rpy2.rinterface.NULL` or `robj.r("NULL")`:
In [1]: from rpy2 import robjects as robj
Create an R function that returns NULL:
In [2]: robj.r("nullfn = function(){NULL}")
Out[2]: <SignatureTranslatedFunction - Python:0x104ea5ef0 / R:0x1058be4a8>
In [3]: result = robj.r.nullfn()
In [4]: print result
NULL
In [5]: print result == robj.rinterface.NULL
True
In [6]: print result == None
False
**edit** : I just realized that your code above should work as is. I think
your error is probably due to the return type of r.someFunctionInR() -- is it
possible it's returning a vector of NULL values? Have you tried printing the
value of `result`? If this doesn't work, you might share what
`someFunctionInR()` is.
|
Using python to parse XML file
Question: I have a Python module that was already written for me to download and parse
data from Google's patent listings. The code works great until I try anything
before 2005. I have no knowledge of Python except how to run the module. How
do I fix it?
The traceback I receive is:
Traceback (most recent call last):
File "C:\Users\John\Desktop\FINAL BART ALL INFO-Magic Bullet.py", line 46, in <module>
assert xml_file is not None
AssertionError
And this is the code I'm using:
#Ignore all this information
import urllib2, os, zipfile
from lxml import etree
#-------------------------------------------------------------------------------
#Ignore all this information
def xmlSplitter(data,separator=lambda x: x.startswith('<?xml')):
buff = []
for line in data:
if separator(line):
if buff:
yield ''.join(buff)
buff[:] = []
buff.append(line)
yield ''.join(buff)
def first(seq,default=None):
"""Return the first item from sequence, seq or the default(None) value"""
for item in seq:
return item
return default
#-------------------------------------------------------------------------------
#This is where you change the internet source file- Use the file extensions from the sheet provided.
datasrc = "http://storage.googleapis.com/patents/grant_full_text/2003/pg030107.zip"
#http://commondatastorage.googleapis.com/patents/grant_full_text/2012/ipg120117.zip
filename = datasrc.split('/')[-1]
#-------------------------------------------------------------------------------
#Ignore all this information
if not os.path.exists(filename):
with open(filename,'wb') as file_write:
r = urllib2.urlopen(datasrc)
file_write.write(r.read())
zf = zipfile.ZipFile(filename)
xml_file = first([ x for x in zf.namelist() if x.endswith('.xml')])
assert xml_file is not None
#-------------------------------------------------------------------------------
#output set your folder location here, keep double \\ between
outFolder = "C:\\PatentFiles\\"
outFilename = os.path.splitext(filename)[0]
#-------------------------------------------------------------------------------
#These outputs are the names of the files-Ignore all this information
output = outFolder + outFilename + "_general.txt"
output2 = outFolder + outFilename + "_USCL.txt"
output3 = outFolder + outFilename + "_citation.txt"
output4 = outFolder + outFilename + "_inventor.txt"
#Open files
outFile = open(output, "w")
outFile2 = open(output2, "w")
outFile3 = open(output3, "w")
outFile4 = open(output4, "w")
#write the headers
outFile.write("Patent No.|GrantDate|Application Date|Number of Claims|Examiners|US Primary Main Classification|Assignee|Assignee Address City_State_Country|First Inventor|First Inventor Address City_State_Country| \n")
outFile2.write("Patent No.|Primary|U.S Classification| \n")
outFile3.write ("Patent No.|Citation|Citation Date|Who Cited This| \n")
outFile4.write ("Patent No.|Inventor Last Name|First Name|City|State|Country|Nationality Country|Residence Country|\n")
#-------------------------------------------------------------------------------
#Here is the count- adjust this each time you run the program for the first time.
#Run at 10 for the 1st run then 5500 afterward.
count = 0
for item in xmlSplitter(zf.open(xml_file)):
count += 1
#5500
if count > 10: break
doc = etree.XML(item)
#-------------------------------------------------------------------------------
#This is where the python starts parsing the infomation.
#This is the Start of the General Infomation file.
docID = "~".join(doc.xpath('//publication-reference/document-id/country/text()|//publication-reference/document-id/doc-number/text()'))
docID = docID.replace("D0","D")
docID = docID.replace("H000","H")
docID = docID.replace("PP0","PP")
docID = docID.replace("PP0","PP")
docID = docID.replace("RE0","RE")
docID = docID.replace("~0","~")
docID = docID.replace("US~","")
grantdate = first(doc.xpath('//publication-reference/document-id/date/text()'))
applicationdate = first(doc.xpath('//application-reference/document-id/date/text()'))
claimsNum = first(doc.xpath('//number-of-claims/text()'))
assignee1 = "-".join(doc.xpath('//assignees/assignee/addressbook/orgname/text()|//assignees/assignee/addressbook/last-name/text()|//assignees/assignee/addressbook/first-name/text()'))
assignee1 = assignee1.replace('-',', ')
assignee2 = "_".join(doc.xpath('//assignee/addressbook/address/*/text()'))
assignees = str(assignee1.encode("UTF-8")) + "|" + str(assignee2.encode("UTF-8"))
inventors1 = first(doc.xpath('//applicants/applicant/addressbook/last-name/text()'))
inventor2 = first(doc.xpath('//applicants/applicant/addressbook/first-name/text()'))
inventor3 = first(doc.xpath('//applicants/applicant/addressbook/address/city/text()'))
inventor4 = first(doc.xpath('//applicants/applicant/addressbook/address/state/text()'))
inventor5 = first(doc.xpath('//applicants/applicant/addressbook/address/country/text()'))
inventor = str(inventor2.encode("UTF-8") if inventor2 else inventor2) + " " + str(inventors1.encode("UTF-8") if inventors1 else inventors1)
inventors2 = str(inventor3.encode("UTF-8") if inventor3 else inventor3) + "_" + str(inventor4) + "_" + str(inventor5)
inventors = str(inventor) + "|" + str(inventors2)
examiners = "~".join(doc.xpath('//examiners/primary-examiner/first-name/text()|//examiners/primary-examiner/last-name/text()'))
examiners = examiners.replace("~",", ")
uscl1 = first(doc.xpath('//classification-national/main-classification/text()'))
#END FIRST TEXT FILE #-------------------------------------------------------------------------------
#This begings the USCL file
notprimary = first(doc.xpath('//publication-reference/document-id/country/text()'))
notprimary = notprimary.replace("US","0")
primary1 = first(doc.xpath('//publication-reference/document-id/country/text()'))
primary1 = primary1.replace("US","1")
uscl2 = "~".join(doc.xpath('//us-bibliographic-data-grant/classification-national/*/text()|//sequence-cwu/publication-reference/document-id/country/text()'))
#-------------------------NOTE--------------------------------------------------
#--------------------------NOTE-------------------------------------------------
#-----------------------NOTE----------------------------------------------------
#NOTE- RUN through count 10 then remove pound signs from two below
uscl2 = uscl2.replace("US~", str(primary1) + "|")
uscl2 = uscl2.replace("~", "|" + "\n" + str(docID) + "|" + str(notprimary) + "|")
uscl2 = uscl2.replace("US", "|")
#END SECOND TEXT FILE #-------------------------------------------------------------------------------
#Begin the Citation file
citation = '~'.join(doc.xpath('//publication-reference/document-id/country/text()|//references-cited/citation/patcit/document-id/country/text()|//references-cited/citation/patcit/document-id/doc-number/text()|//references-cited/citation/patcit/document-id/kind/text()|//references-cited/citation/patcit/document-id/date/text()|//references-cited/citation/category/text()'))
#Here is the start of the patent connectors- in the patents they exist at the end. They are replaced in this code to make pipes | for the final output
citation = citation.replace("~A~", "$@")
citation = citation.replace("~S~", "$@")
citation = citation.replace("~S1~", "$@")
citation = citation.replace("~B1~", "$@")
citation = citation.replace("~B2~", "$@")
citation = citation.replace("~A1~", "$@")
citation = citation.replace("~H~", "$@")
citation = citation.replace("~E~", "$@")
#citation = citation.replace("~QQ~", "$@")
#make unique citation changes here-for example when "US" or "DE" in imbeded in citation see below
citation = citation.replace("05225US~", "05225U$|" )
citation = citation.replace("063106 DE", "063106D!" )
citation = citation.replace("US~US~", "US~" )
citation = citation.replace("PCT/US", "PCT/U$")
citation = citation.replace("PCTUS", "PCTU$")
citation = citation.replace("WO US", "WO U$")
citation = citation.replace("WO~US", "WO~ U$")
#fixes for cites without pipes-see below -DONT TOUCH THESE
citation = citation.replace("US~cited by examiner", "||cited by examiner" )
citation = citation.replace("US~cited by other", "||cited by other" )
#Here are the changes to return each citation into a unique row
#If a country is only listed in the columns in Excel they need a fix like this, If KR is alone then use the code:::: citation = citation.replace("KR~", "Foreign -KR-" )
citation = citation.replace("$@", "|")
citation = citation.replace("~US~", "|" + "\n" + str(docID) +"|")
citation = citation.replace("US~", "")
citation = citation.replace("~JP~", "|" + "\n" + str(docID) +"|"+ "Foreign -JP-")
citation = citation.replace("JP~", "Foreign -JP-" )
citation = citation.replace("~GB~", "|" + "\n" + str(docID) +"|"+ "Foreign -GB-")
citation = citation.replace("GB~", "Foreign -GB-" )
citation = citation.replace("~WO~", "|" + "\n" + str(docID) +"|"+ "Foreign -WO-")
citation = citation.replace("WO~", "Foreign -WO-" )
citation = citation.replace("~CA~", "|" + "\n" + str(docID) +"|"+ "Foreign -CA-")
citation = citation.replace("~DE~EP~", "~DE~ EP-" )
citation = citation.replace("~DE~", "|" + "\n" + str(docID) +"|"+ "Foreign -DE-")
citation = citation.replace("DE~", "Foreign -DE-" )
citation = citation.replace("~KR~", "|" + "\n" + str(docID) +"|"+ "Foreign -KR-")
citation = citation.replace("KR~", "Foreign -KR-" )
citation = citation.replace("~EM~", "|" + "\n" + str(docID) +"|"+ "Foreign -EM-")
citation = citation.replace("~CH~", "|" + "\n" + str(docID) +"|"+ "Foreign -CH-")
citation = citation.replace("~DE~", "|" + "\n" + str(docID) +"|"+ "Foreign -DE-")
citation = citation.replace("~SE~", "|" + "\n" + str(docID) +"|"+ "Foreign -SE-")
citation = citation.replace("~FR~", "|" + "\n" + str(docID) +"|"+ "Foreign -FR-")
citation = citation.replace("~FR~EP~", "~FR~ EP-" )
citation = citation.replace("FR~", "Foreign -FR-" )
citation = citation.replace("~CN~", "|" + "\n" + str(docID) +"|"+ "Foreign -CN-")
citation = citation.replace("~TW~", "|" + "\n" + str(docID) +"|"+ "Foreign -TW-")
citation = citation.replace("~TW", "|" + "\n" + str(docID) +"|"+ "Foreign -TW-")
citation = citation.replace("TW~", "Foreign -TW-" )
citation = citation.replace("~NL~", "|" + "\n" + str(docID) +"|"+ "Foreign -NL-")
citation = citation.replace("~BR~", "|" + "\n" + str(docID) +"|"+ "Foreign -BR-")
citation = citation.replace("~AU~", "|" + "\n" + str(docID) +"|"+ "Foreign -AU-")
citation = citation.replace("~ES~", "|" + "\n" + str(docID) +"|"+ "Foreign -ES-")
citation = citation.replace("~IT~", "|" + "\n" + str(docID) +"|"+ "Foreign -IT-")
citation = citation.replace("~SU~", "|" + "\n" + str(docID) +"|"+ "Foreign -SU-")
citation = citation.replace("~AT~", "|" + "\n" + str(docID) +"|"+ "Foreign -AT-")
citation = citation.replace("~BE~", "|" + "\n" + str(docID) +"|"+ "Foreign -BE-")
citation = citation.replace("~DK~", "|" + "\n" + str(docID) +"|"+ "Foreign -DK-")
citation = citation.replace("~RU~", "|" + "\n" + str(docID) +"|"+ "Foreign -RU-")
citation = citation.replace("RU~", "Foreign -RU-" )
#citation = citation.replace("~QQ~", "|" + "\n" + str(docID) +"|"+ "Foreign -QQ-")
#These are just end of citation fixes-DONT TOUCH THESE
citation = citation.replace("cited by other~cited by other~cited by other~cited by other~cited by other~cited by other~cited by other~cited by other~cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by other~cited by other", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner~cited by examiner", "cited by other" )
citation = citation.replace("cited by other~cited by other~cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner~cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner", "cited by other" )
citation = citation.replace("cited by examiner~cited by other", "cited by examiner" )
citation = citation.replace("cited by examiner~cited by other~cited by other", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner~cited by examiner", "cited by other" )
citation = citation.replace("cited by other~cited by other~cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner~cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner", "cited by other" )
citation = citation.replace("cited by examiner~cited by other", "cited by examiner" )
citation = citation.replace("cited by examiner~cited by other~cited by other", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner~cited by examiner", "cited by other" )
citation = citation.replace("cited by other~cited by other~cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner~cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner", "cited by other" )
citation = citation.replace("cited by examiner~cited by other", "cited by examiner" )
citation = citation.replace("cited by other~cited by other", "cited by other" )
citation = citation.replace("cited by examiner~cited by examiner", "cited by examiner" )
citation = citation.replace("cited by other~cited by examiner", "cited by other" )
citation = citation.replace("cited by examiner~cited by other", "cited by examiner" )
citation = citation.replace("~", "|" )
citation = citation.replace("US", "||")
#make unique post-processing citation changes here-If needed for the end of the scripts
citation = citation.replace("CA|", "Foreign -CA-" )
citation = citation.replace("EP|", "Foreign -EP-" )
citation = citation.replace("CN|", "Foreign -CN-" )
citation = citation.replace("$", "S")
citation = citation.replace("D!", "DE")
#citation = citation.replace(" ", " " )
#END CITATION FILE-------------------------------------------------------------------------------
#START the inventors file
inventor1 = doc.xpath('//applicants/applicant/addressbook/last-name/text()|//applicants/applicant/addressbook/first-name/text()|//applicants/applicant/addressbook/address/city/text()|//applicants/applicant/addressbook/address/state/text()|//applicants/applicant/addressbook/address/country/text()|//applicants/applicant/nationality/*/text()|//applicants/applicant/residence/*/text()|//sequence-cwu/publication-reference/document-id/country/text()|//sequence-cwu/number/text()')
inventor1 = '~'.join(inventor1).replace('\n-','')
#For files after 2009 use this to replace State errors in the Excel- If the output is short then use this to add in a None value for State
inventor1 = inventor1.replace('~KR~omitted','~None~KR~omitted')
inventor1 = inventor1.replace('~GB~omitted','~None~GB~omitted')
inventor1 = inventor1.replace('~IT~omitted','~None~IT~omitted')
inventor1 = inventor1.replace('~JP~omitted','~None~JP~omitted')
inventor1 = inventor1.replace('~FR~omitted','~None~FR~omitted')
inventor1 = inventor1.replace('~BR~omitted','~None~BR~omitted')
inventor1 = inventor1.replace('~NO~omitted','~None~NO~omitted')
inventor1 = inventor1.replace('~HK~omitted','~None~HK~omitted')
inventor1 = inventor1.replace('~CA~omitted','~None~CA~omitted')
inventor1 = inventor1.replace('~TW~omitted','~None~TW~omitted')
inventor1 = inventor1.replace('~SE~omitted','~None~SE~omitted')
inventor1 = inventor1.replace('~CH~omitted','~None~CH~omitted')
inventor1 = inventor1.replace('~DE~omitted','~None~DE~omitted')
inventor1 = inventor1.replace('~SG~omitted','~None~SG~omitted')
inventor1 = inventor1.replace('~IN~omitted','~None~IN~omitted')
inventor1 = inventor1.replace('~IL~omitted','~None~IL~omitted')
inventor1 = inventor1.replace('~CN~omitted','~None~CN~omitted')
inventor1 = inventor1.replace('~FI~omitted','~None~FI~omitted')
inventor1 = inventor1.replace('~ZA~omitted','~None~ZA~omitted')
inventor1 = inventor1.replace('~NL~omitted','~None~NL~omitted')
inventor1 = inventor1.replace('~AT~omitted','~None~AT~omitted')
inventor1 = inventor1.replace('~AU~omitted','~None~AU~omitted')
inventor1 = inventor1.replace('~BE~omitted','~None~BE~omitted')
inventor1 = inventor1.replace('~CZ~omitted','~None~CZ~omitted')
inventor1 = inventor1.replace('~RU~omitted','~None~RU~omitted')
inventor1 = inventor1.replace('~IE~omitted','~None~IE~omitted')
inventor1 = inventor1.replace('~AR~omitted','~None~AR~omitted')
inventor1 = inventor1.replace('~MY~omitted','~None~MY~omitted')
inventor1 = inventor1.replace('~SK~omitted','~None~SK~omitted')
inventor1 = inventor1.replace('~ES~omitted','~None~ES~omitted')
inventor1 = inventor1.replace('~NZ~omitted','~None~NZ~omitted')
inventor1 = inventor1.replace('~HU~omitted','~None~HU~omitted')
inventor1 = inventor1.replace('~UA~omitted','~None~UA~omitted')
inventor1 = inventor1.replace('~DK~omitted','~None~DK~omitted')
inventor1 = inventor1.replace('~TH~omitted','~None~TH~omitted')
inventor1 = inventor1.replace('~MX~omitted','~None~MX~omitted')
#inventor1 = inventor1.replace('~QQ~omitted','~None~QQ~omitted')
#For the 2005-2008 files use these lines
inventor1 = inventor1.replace('~NO~NO~NO','~None~NO~NO~NO')
inventor1 = inventor1.replace('~NZ~NZ~NZ','~None~NZ~NZ~NZ')
inventor1 = inventor1.replace('~RU~RU~RU','~None~RU~RU~RU')
inventor1 = inventor1.replace('~RO~RO~RO','~None~RO~RO~RO')
inventor1 = inventor1.replace('~SE~SE~SE','~None~SE~SE~SE')
inventor1 = inventor1.replace('~SG~SG~SG','~None~SG~SG~SG')
inventor1 = inventor1.replace('~SI~SI~SI','~None~SI~SI~SI')
inventor1 = inventor1.replace('~TH~TH~TH','~None~TH~TH~TH')
inventor1 = inventor1.replace('~TR~TR~TR','~None~TR~TR~TR')
inventor1 = inventor1.replace('~TW~TW~TW','~None~TW~TW~TW')
inventor1 = inventor1.replace('~VE~VE~VE','~None~VE~VE~VE')
inventor1 = inventor1.replace('~ZA~ZA~ZA','~None~ZA~ZA~ZA')
inventor1 = inventor1.replace('~AN~AN~AN','~None~AN~AN~AN')
inventor1 = inventor1.replace('~AR~AR~AR','~None~AR~AR~AR')
inventor1 = inventor1.replace('~BA~BA~BA','~None~BA~BA~BA')
inventor1 = inventor1.replace('~PH~PH~PH','~None~PH~PH~PH')
inventor1 = inventor1.replace('~HR~HR~HR','~None~HR~HR~HR')
inventor1 = inventor1.replace('~LT~LT~LT','~None~LT~LT~LT')
inventor1 = inventor1.replace('~EE~EE~EE','~None~EE~EE~EE')
inventor1 = inventor1.replace('~BJ~BJ~BJ','~None~BJ~BJ~BJ')
inventor1 = inventor1.replace('~CR~CR~CR','~None~CR~CR~CR')
inventor1 = inventor1.replace('~PL~PL~PL','~None~PL~PL~PL')
inventor1 = inventor1.replace('~CO~CO~CO','~None~CO~CO~CO')
inventor1 = inventor1.replace('~UA~UA~UA','~None~UA~UA~UA')
inventor1 = inventor1.replace('~KW~KW~KW','~None~KW~KW~KW')
inventor1 = inventor1.replace('~CL~CL~CL','~None~CL~CL~CL')
inventor1 = inventor1.replace('~CY~CY~CY','~None~CY~CY~CY')
inventor1 = inventor1.replace('~LI~LI~LI','~None~LI~LI~LI')
inventor1 = inventor1.replace('~SA~SA~SA','~None~SA~SA~SA')
#inventor1 = inventor1.replace('~QQ~QQ~QQ','~None~QQ~QQ~QQ')
#For lines that don't return use these lines in the code for 2009-
inventor1 = inventor1.replace('omitted~US~','omitted~US' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~FR~','omitted~FR' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~DK~','omitted~DK' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~KR~','omitted~KR' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~JP~','omitted~JP' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~GB~','omitted~GB' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~IT~','omitted~IT' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~CH~','omitted~CH' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~SG~','omitted~SG' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~DE~','omitted~DE' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~IN~','omitted~IN' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~TW~','omitted~TW' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('omitted~CN~','omitted~CN' +"|"+ '\n' + str(docID) +"|")
#inventor1 = inventor1.replace('omitted~QQ~','omitted~QQ' +"|"+ '\n' + str(docID) +"|")
#for lines 2005-2008 use this line for returning countries
inventor1 = inventor1.replace('AT~AT~AT~','AT~AT~AT' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('AN~AN~AN~','AN~AN~AN' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('AR~AR~AR~','AR~AR~AR' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('AU~AU~AU~','AU~AU~AU' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('AZ~AZ~AZ~','AZ~AZ~AZ' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('BA~BA~BA~','BA~BA~BA' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('BE~BE~BE~','BE~BE~BE' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('BR~BR~BR~','BR~BR~BR' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('BS~BS~BS~','BS~BS~BS' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('CA~CA~CA~','CA~CA~CA' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('CH~CH~CH~','CH~CH~CH' +"|"+ '\n' + str(docID) +"|")
inventor1 = inventor1.replace('CN~CN~CN~','CN~CN~CN' +"|"+ '\n' + str(docID) +"|")
#inventor1 = inventor1.replace('QQ~QQ~QQ~','QQ~QQ~QQ' +"|"+ '\n' + str(docID) +"|")
#special case fixes- these are for strange names fixes in the code that may not create the correct amount of columns.
inventor1 = inventor1.replace('~None~None~NO~','~None~NO~')
inventor1 = inventor1.replace('Ramandeep~Chandigarh','Ramandeep|None~Chandigarh')
inventor1 = inventor1.replace('Esk~eh~r','Eskehr')
inventor1 = inventor1.replace('Baychar~Eastport','Baychar~None~Eastport')
inventor1 = inventor1.replace('US~1', '||||||')
inventor1 = inventor1.replace('~','|')
#End the inventor file
#-------------------------------------------------------------------------------
#Here are the output print fields- you can change one if you want but remember to comment out all but the one you wish to view.
print "DocID: {0}\nGrantDate: {1}\nApplicationDate: {2}\nNumber of Claims: {3}\nExaminers: {4}\nAssignee: {5}\nInventor: {6}\nUS Cl.: {7}\n".format(docID,grantdate,applicationdate,claimsNum,examiners.encode("UTF-8"),assignees,inventors,uscl1)
#print "DocID: {0}\nU.S Cl: {1}\nPrimary: {2}\n".format(docID,uscl2,primary1)
#print "DocID: {0}\nCitation: {1}\n".format(docID,citation.encode("UTF-8"))
#print "DocID: {0}\nTitle: {1}\nInventors: {2}\n".format(docID,appID,inventor1.encode("UTF-8"))
#------------------------------------------------------------------------------- IGNORE Everything else below this.
#Output first general info bits
outFile.write(str(docID) +"|"+ str(grantdate) +"|"+ str(applicationdate) + "|"+ str(claimsNum) + "|"+ str(examiners.encode("UTF-8")) + "|"+ str(uscl1) + "|"+ str(assignees) + "|"+ str(inventors) +"|"+"\n")
#Output Classifications only
outFile2.write(str(docID) +"|"+ str(uscl2) +"|"+ "\n")
#Output Citations only
outFile3.write(str(docID) +"|"+ str(citation) +"|"+"\n")
#Output inventors only
outFile4.write(str(docID) + "|"+ str(inventor1.encode("UTF-8")) + "|" +"\n")
outFile.close()
outFile2.close()
outFile3.close()
outFile4.close()
print "output files complete"
Answer: The problem you are seeing is not a Python problem. The code unzips a zipfile
and expects to find an XML file inside. The assert statement is a check to make
sure an XML file was found; it is designed to halt your program if it doesn't find
one. If you download the zipfile assigned to `datasrc`, you will find that it does
not contain an .xml file. When the code tries to find the XML file, it doesn't find
one, so `xml_file = None`. Then, when it reaches the `assert` statement, it raises
the AssertionError.
You could probably take out the `assert` and run the code just fine, but then
when your program crashes you won't know why. Having it there gives you a
convenient way to catch the failure when, where, and why it happens.
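If you would rather get a clearer message than a bare AssertionError, one possible tweak (just a sketch) is to replace the assert with an explicit check:

    xml_file = first(x for x in zf.namelist() if x.endswith('.xml'))
    if xml_file is None:
        raise SystemExit("No .xml member found inside %s; this script cannot "
                         "parse this archive." % filename)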
|
Python 2.6 - Parse arguments
Question: Problem: I need to parse some specific arguments which could be in any order;
none are optional: -h -d -src -dst
I am new to Python and have looked at alternatives such as getopt and
argparse, but couldn't get a working example, so I went custom as below:
argv=sys.argv[1:]
args=[]
for idx, arg in enumerate(argv):
# if is arg
if arg.startswith("-"):
# find arg match
for i in ("-h","-d:","-src:","-dst:"):
# requires var
if i == arg + ':' and idx < len(argv)-1:
if not argv[idx+1].startswith("-"):
args.append((arg,argv[idx+1]))
break
# no var
elif i == arg:
args.append((arg,""))
break
else:
continue
# may contain duplicates
print(args)
# no dupes
print(set(args))
Can anyone suggest improvements and/or better examples to achieve the problem
objective?
Answer: In Python 2.6 there is a module called
[optparse](http://docs.python.org/library/optparse.html)
which does what you want.
Example from the docs:
from optparse import OptionParser
[...]
parser = OptionParser()
parser.add_option("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="don't print status messages to stdout")
(options, args) = parser.parse_args()
another example:
usage = "usage: %prog [options] arg1 arg2"
parser = OptionParser(usage=usage)
parser.add_option("-v", "--verbose",
action="store_true", dest="verbose", default=True,
help="make lots of noise [default]")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose",
help="be vewwy quiet (I'm hunting wabbits)")
parser.add_option("-f", "--filename",
metavar="FILE", help="write output to FILE")
parser.add_option("-m", "--mode",
default="intermediate",
help="interaction mode: novice, intermediate, "
"or expert [default: %default]")
|
Import error of Cephes Scipy library within WinPython
Question: I'm trying to test the [WinPython
environment](http://code.google.com/p/winpython/), a portable Python
environment, in order to create a version featuring more packages.
I'm working in Windows Vista 32 bit (but the underlying CPU is 64 bit),
Service Pack 2, Python 2.7.3, Scipy 0.11 in WinPython-32bit-2.7.3.1 and Scipy
0.10.1 in WinPython-32bit-2.7.3.0.
I have the same problem with both WinPython-32bit-2.7.3.1.exe and previous
WinPython-32bit-2.7.3.0.exe versions, trying multiple downloads of the
installers, different installations in different folders (without spaces in
pathnames).
When I do the unit testing from Scipy with:
scipy.test()
I get 8 errors, all related to the failed import of the Cephes library
(_cephes.pyd file). The typical error message is:
packages\scipy\special__init__.py", line 525, in
from _cephes import *
ImportError: DLL load failed: Impossibile trovare la procedura specificata.
The distribution creator is not able to reproduce the error in a similar
environment, and a co-worker has no problems (in Win 7, 64 bit).
I have also [Python(x,y)](http://code.google.com/p/pythonxy/) (non-portable
environment, from the same author) installed, where everything is ok, and I
don't remember to ever had a similar problem with Python/Scipy with non-
portable installations.
Thank you for any hint/suggestion.
Answer: FYI, I also had the problem, downgraded to 0.9, and it works.
**edit:**
I searched all over, and aside from numerous other people having the exact
same problem with 0.10.x, no solutions have been posted. See here:
<http://mail.scipy.org/pipermail/scipy-user/2012-February/031527.html> and
more generally here: <https://www.google.com/search?q=cephes+dll+load+failed>
This leads me to try either downgrading to 0.9.x or upgrading... my distro
doesn't provide beyond 0.10.1, but the downgrade works.
|
Activating and Disabling button after process in python and pyGTK
Question: Essentially, I am trying to make a button "active" first, run a process, and
then after that process has finished running, disable the button again.
Using pyGTK and Python, the code in question looks like this...
self.MEDIA_PLAYER_STOP_BUTTON.set_sensitive(True) #Set button to be "active"
playProcess = Popen("aplay " + str(pathToWAV) + " >/dev/null 2>&1",shell=True) #Run Process
playProcess.wait() #Wait for process to complete
self.MEDIA_PLAYER_STOP_BUTTON.set_sensitive(False) #After process is complete, disable the button again
However, this does not work at all.
Any help would be greatly appreciated.
Answer: Everything works normally here (Python 2.7.3). But if you call playProcess.wait() in
the GUI thread, you freeze the GUI thread so it cannot redraw (sorry, my English isn't
very good). And are you sure you are using subprocess.Popen()? Maybe
os.popen()?
My small test:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pygtk, gtk, gtk.glade
import subprocess
def aplay_func(btn):
btn.set_sensitive(True)
print "init"
playProcess = subprocess.Popen("aplay tara.wav>/dev/null 2>&1", shell=True)
print "aaa"
playProcess.wait()
print "bbb"
btn.set_sensitive(False)
wTree = gtk.glade.XML("localize.glade")
window = wTree.get_widget("window1")
btn1 = wTree.get_widget("button1")
window.connect("delete_event", lambda wid, we: gtk.main_quit())
btn1.connect("clicked", aplay_func)
window.show_all()
gtk.main()
Result:
init
aaa
bbb
And yes, button is working correctly. Sound too.
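If the freeze during wait() is what breaks your real code, here is an untested sketch of a non-blocking variant: start aplay with Popen, poll it from a GTK timeout so the main loop keeps redrawing, and flip the button back when the process exits (the 200 ms interval and the helper name are arbitrary choices):

    import gobject
    import subprocess

    def play_wav(button, path_to_wav):
        button.set_sensitive(True)
        proc = subprocess.Popen(["aplay", path_to_wav])
        def check():
            if proc.poll() is None:
                return True             # still playing; keep polling
            button.set_sensitive(False)
            return False                # finished; remove the timeout
        gobject.timeout_add(200, check)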
|
Python decoding errors with BeautifulSoup, requests, and lxml
Question: I'm attempting to pull some data off a popular browser based game, but am
having trouble with some decoding errors:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.neopets.com/")
p = BeautifulSoup(r.text)
This produces the following stack trace:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "build/bdist.linux-x86_64/egg/bs4/__init__.py", line 172, in __init__
File "build/bdist.linux-x86_64/egg/bs4/__init__.py", line 185, in _feed
File "build/bdist.linux-x86_64/egg/bs4/builder/_lxml.py", line 195, in feed
File "parser.pxi", line 1187, in lxml.etree._FeedParser.close (src/lxml/lxml.etree.c:87912)
File "parsertarget.pxi", line 130, in lxml.etree._TargetParserContext._handleParseResult (src/lxml/lxml.etree.c:97055)
File "lxml.etree.pyx", line 294, in lxml.etree._ExceptionContext._raise_if_stored (src/lxml/lxml.etree.c:8862)
File "saxparser.pxi", line 274, in lxml.etree._handleSaxCData (src/lxml/lxml.etree.c:93385)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xb1 in position 476: invalid start byte
Doing the following:
print repr(r.text[476 - 10: 476 + 10])
Produces:
u'ttp-equiv="X-UA-Comp'
I'm really not sure what the issue here is. Any help is greatly appreciated.
Thank you.
Answer: `.text` on a response returns a decoded unicode value, but perhaps you should
let BeautifulSoup do the decoding for you:
p = BeautifulSoup(r.content, from_encoding=r.encoding)
`r.content` returns the un-decoded raw bytestring, and `r.encoding` is the
encoding detected from the headers.
|
I keep getting ValueError: frequency must be in 37 thru 32767 on Python with Winsound
Question: This is the code I have:
    import winsound
    from myro import *
def main():
HftM1 = makeSong("REST 1; REST 1; REST 1; REST 1; REST 1; REST 1; REST 1; REST 1; D4 1/6; F4 1/6; D5 2/3; D4 1/6; F4 1/6; D5 2/3; E5 1/2; F5 1/6; E5 1/6; F5 1/6; E5 1/6; C5 1/6; A4 2/3; A4 1/3; D4 1/3; F4 1/6; G4 1/6; A4 1; A4 1/3; D4 1/3; F4 1/6; G4 1/6; E4 1; D4 1/6; F4 1/6; D5 2/3; E5 1/2; F5 1/6; E5 1/6; F5 1/6; E5 1/6; C5 1/6; A4 2/3; A4 1/3; D4 1/3; F4 1/6; G4 1/6; A4 2/3; A4 1/3; D4 1; REST 1; REST 1; REST 1")
saveSong(HftM1, "WindmillHut.txt", append=1)
song = readSong("WindmillHut.txt")
play = []
for n in range(len(song)):
play = song[n]
note = play[0]
duration = play[1]
winsound.Beep(int(note), int(duration*2000))
main()
When I try to run this, I keep getting the error:
Traceback (most recent call last):
File "C:/Users/Gerren.Kids-PC/Desktop/Gerren's Files/School/Programming 1/Mod 5/Code/WindmillHut.py", line 23, in -toplevel-
main()
File "C:/Users/Gerren.Kids-PC/Desktop/Gerren's Files/School/Programming 1/Mod 5/Code/WindmillHut.py", line 22, in main
winsound.Beep(int(note), int(duration*2000))
ValueError: frequency must be in 37 thru 32767
What am I doing wrong and what do I need to change it to? Please be specific.
Answer: The winsound.Beep function is just a wrapper around the Windows API Beep
function. The Windows function requires that the first parameter (the frequency)
be between 37 and 32767. I suspect any frequency outside of that range is out
of the human range of hearing; it could also be that way because the old sound
cards this function was meant for only supported that range.
You are calling winsound.Beep() and whatever int(note) returns is outside
that range. You should probably check that the note is valid before calling
Beep.
    note = int(play[0])
    if 37 <= note <= 32767:
        winsound.Beep(note, int(duration * 2000))
    else:
        print("error in input")
|
Return XML Values from Multiple Files in Python?
Question: So I am working on a browser based program, written in Python, that parses XML
data from multiple files in a directory, then returns the values of certain
XML tags on the page. I have successfully been able to return the values from
one of the XML files, but am hoping to collect data from every file within the
directory and return the values in spreadsheet format. How do I parse the data
from every XML file? Also, the XML files are not static, there will be new
files coming and going. Thanks! Below is my code:
from xml.dom.minidom import parseString
import os
path = 'C:\Vestigo\XML'
listing = os.listdir(path)
for infile in listing:
print infile
file = open(os.path.join(path,infile),'r')
data = file.read()
file.close()
dom = parseString(data)
xmlTag0 = dom.getElementsByTagName('Extrinsic')[0].toxml()
xmlData0 = xmlTag0.replace('<Extrinsic>','').replace('</Extrinsic>','')
xmlTag1 = dom.getElementsByTagName('DeliverTo')[0].toxml()
xmlData1 = xmlTag1.replace('<DeliverTo>','').replace('</DeliverTo>','')
xmlTag2 = dom.getElementsByTagName('Street1')[0].toxml()
xmlData2 = xmlTag2.replace('<Street1>','').replace('</Street1>','')
xmlTag3 = dom.getElementsByTagName('City')[0].toxml()
xmlData3 = xmlTag3.replace('<City>','').replace('</City>','')
xmlTag4 = dom.getElementsByTagName('State')[0].toxml()
xmlData4 = xmlTag4.replace('<State>','').replace('</State>','')
xmlTag5 = dom.getElementsByTagName('PostalCode')[0].toxml()
xmlData5 = xmlTag5.replace('<PostalCode>','').replace('</PostalCode>','')
import cherrypy
class Root(object):
def index(self):
return ('Order Number:', ' ', xmlData0, '<br>Name: ', xmlData1, '<br>Street Address: ', xmlData2, '<br>City/State/Zip: ', xmlData3, ', ', xmlData4, ' ', xmlData5, ' ', """<br><br><a href="/exit">Quit</a>""")
index.exposed = True
def exit(self):
raise SystemExit(0)
exit.exposed = True
def start():
import webbrowser
cherrypy.tree.mount(Root(), '/')
cherrypy.engine.start_with_callback(
webbrowser.open,
('http://localhost:8080/',),
)
cherrypy.engine.block()
if __name__=='__main__':
start()
Answer: In order to pull data from every file in the directory I used this code below:
from xml.dom.minidom import parse, parseString
import os, glob, re
import cherrypy
class Root(object):
def index(self):
path = 'C:\Vestigo\XML'
TOTALXML = len(glob.glob(os.path.join(path, '*.xml')))
print TOTALXML
i = 0
for XMLFile in glob.glob(os.path.join(path, '*.xml')):
xmldoc = parse(XMLFile)
order_number = xmldoc.getElementsByTagName('Extrinsic')[0].firstChild.data
order_name = xmldoc.getElementsByTagName('DeliverTo')[0].firstChild.data
street1 = xmldoc.getElementsByTagName('Street1')[0].firstChild.data
state = xmldoc.getElementsByTagName('State')[0].firstChild.data
zip_code = xmldoc.getElementsByTagName('PostalCode')[0].firstChild.data
OUTPUTi = order_number+' '+order_name+' '+street1+' '+state+' '+zip_code
i += 1
print OUTPUTi
return (OUTPUTi, """<br><br><a href="/exit">Quit</a>""")
index.exposed = True
def exit(self):
raise SystemExit(0)
exit.exposed = True
def start():
import webbrowser
cherrypy.tree.mount(Root(), '/')
cherrypy.engine.start_with_callback(
webbrowser.open,
('http://localhost:8080/',),
)
cherrypy.engine.block()
if __name__=='__main__':
start()
Thanks for your help everyone, and for the tip on answering my own question
Sheena!
|
Number of pages of a word document with Python
Question: Is there a way to efficiently get the number of pages of a Word document
(.doc, .docx) with Python?
And for an .odt file?
I want to use this for a web application based on Web2py on Linux.
Thank you!
Answer: Only for those who search for this blog entry....
    from win32com.client import Dispatch

    # open Word (requires pywin32; Windows only)
    word = Dispatch('Word.Application')
    word.Visible = False

    # doc_path is the path to the .doc/.docx file
    doc = word.Documents.Open(doc_path)

    # force repagination, then ask Word for the page count
    # (2 is the wdStatisticPages constant)
    doc.Repaginate()
    num_of_pages = doc.ComputeStatistics(2)
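For the .odt part of the question: an .odt file is a zip archive, and documents saved by OpenOffice/LibreOffice usually record statistics in meta.xml, including a meta:page-count attribute. A minimal sketch under that assumption (the attribute is not guaranteed for every producer):

    import zipfile
    import xml.etree.ElementTree as ET

    META_NS = 'urn:oasis:names:tc:opendocument:xmlns:meta:1.0'

    def odt_page_count(path):
        z = zipfile.ZipFile(path)
        meta = ET.fromstring(z.read('meta.xml'))
        stats = meta.find('.//{%s}document-statistic' % META_NS)
        return int(stats.get('{%s}page-count' % META_NS))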
|
Python 3 - reading text from a file
Question: This is exercise 15 from Learn Python the Hard Way, but I'm using **Python
3**.
from sys import argv
script, filename = argv
txt = open(filename)
print ("Here's your file %r:") % filename
print txt.read()
print ("I'll also ask you to type it again:")
file_again = input()
txt_again = open(file_again)
print txt_again.read()
**The file is saved as ex15.py, and when I run it from the terminal it reads ex15.txt
correctly the first time, but when I request it a second time, I get an error**
    user@user:~/Desktop/python$ python ex15.py ex15.txt
    Here's your file 'ex15.txt':
    This is stuff I typed into a file.
    It is really cool stuff.
    Lots and lots of fun to have in here.
    I'll also ask you to type it again:
    ex15.txt    # now I type this in again, and I get the following error
    Traceback (most recent call last):
      File "ex15.py", line 11, in <module>
        file_again = input()
      File "<string>", line 1, in <module>
    NameError: name 'ex15' is not defined
What's wrong?
Answer: You're definitely not using Python 3. There's a couple things that make this
obvious:
* These `print` statements don't have parenthesis (which are required in Python 3 but not 2):
print ("Here's your file %r:") % filename
print txt.read()
print txt_again.read()
* This is calling [`eval` on `input()`](http://docs.python.org/library/functions.html#input) which was [changed in Python 3](http://docs.python.org/release/3.0.1/library/functions.html#input):
file_again = input()
Most likely Python 2 is the default on your system, but you can make your
script always use Python 3 by adding this as the first line of your script (if
you're running it directly, like `./myscript.py`):
#!/usr/bin/env python3
Or running it explicitly with Python 3:
python3 myscript.py
One more note: You should really close the file when you're done with it. You
can either do this explicitly:
txt = open(filename)
# do stuff needing text
txt.close()
Or use a [`with` statement](http://www.python.org/dev/peps/pep-0343/) and have
it handled when the block ends:
with open(filename) as txt:
# do stuff needing txt
# txt is closed here
|
how do I search for an xml tag with the matching closing tag?
Question: I'm wondering: what is the best way in Python to search for an XML tag and delete it,
the content inside it (whatever it is doesn't matter), as well as its closing tag?
The XML is well formed.
Answer: You could identify the element with XPath and then use the
[remove](http://docs.python.org/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.remove)
method:
import xml.etree.ElementTree as ET
data = '''\
<node1>
<node2 a1="x1"> ... </node2>
<node2 a1="x2"> ... </node2>
<node2 a1="x1"> ... </node2>
</node1>
'''
doc = ET.fromstring(data)
    e = doc.find('node2[@a1="x2"]')
doc.remove(e)
print(ET.tostring(doc))
# <node1>
# <node2 a1="x1"> ... </node2>
# <node2 a1="x1"> ... </node2>
# </node1>
|
Python multiprocessing - probably failure when method name pass to Process
Question: I have a problem with multiprocessing. Below is the code (it is spread across a
couple of classes and files, but I simplified it). I suspect the problem lies in
passing the method name that I want to run with multiprocessing.
Information: "args" is a list like [(0,1),(1,2),(2,3)], so a single "arg" is
like (0,1). The two files are in completely different directories.
!!First file!!
from ... import EF
from ... import someclass
class performs():
def action():
for i, arg in enumerate(args):
data.append(EF(self.method,list(arg),i))
someclass.create_processes(*data)
def method(self,fa,la):
...
!!second file!!
from multiprocessing import Process,Event
class EF(object):
def __init__(self,name,args=list(),proc=1):
self.name=name
self.args=args
self.proc=proc
class someclass:
@staticmethod
def create_processes(*functions):
processes=dict()
for function in functions:
process=Process(target=function.name,args=function.args)
process.start()
processes[process.pid]=process
for process in processes.values():
process.join()
When I'm debugging, the error comes when the program executes the instruction
"process.start()".
Console:
File "C:\Python32\lib\multiprocessing\forking.py", line 371, in main
self = load(from_parent)
AttributeError: 'performs' object has no attribute 'function'
or in other situation
File "C:\Python32\lib\multiprocessing\process.py", line 267, in _bootstrap
self.run()
File "C:\Python32\lib\multiprocessing\process.py", line 116, in run
self._target(*self._args, **self._kwargs)
File "...\performs.py", line 88, in method
...
I don't know if it's important, but I have a 64-bit system and installed the
32-bit Python and accessories.
Answer: The comment box is too small for this, but it seems like your code is working
fine. I tested it by copying it into two files (like your environment), but it
of course also works in a single file. You can find that file below; it works
as expected (save for the print statements' output being mixed up, but
that is to be expected).
So, most likely, your issue lies elsewhere. The error seems to indicate
some import-order problem, perhaps like the one described in
[this question](http://stackoverflow.com/questions/2782961/yet-another-confusion-with-multiprocessing-error-module-object-has-no-attribu)?
from multiprocessing import Process,Event
class EF:
def __init__(self, name, args=list(), proc=1):
self.name = name
self.args = args
self.proc = proc
class someclass:
@staticmethod
def create_processes(*functions):
processes=dict()
for function in functions:
process=Process(target=function.name,args=function.args)
process.start()
processes[process.pid]=process
for process in processes.values():
process.join()
class performs:
def action(self, args):
data = []
for i, arg in enumerate(args):
data.append(EF(self.mult, list(arg), i))
someclass.create_processes(*data)
def mult(self,fa,la):
print '%d x %d = %d' % (fa, la, fa * la)
if __name__ == '__main__':
p = performs()
p.action([(x, x+1) for x in xrange(10)])
|
trouble plotting function
Question: I'm trying to write a function and then plot it. I am new to Python and am
having some trouble. I must be missing something, I'm just not sure what.
Can anyone help?
xv= arange(-4,5,1)
def f(x):
if (x<0):
return log(x)
elif (0<=x<2):
return (-x)
elif (2<x):
return x**2/(3-x)
xv= arange(-4,5,1)
y1= log(x)
y2= (-x)
y3= x**2/3-x
fig1=figure()
plot(xv,y1,y2,y3)
show()
Answer: Maybe you are trying to do the following:
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(-4, 5, 1)
y = np.sin(x)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, y)
plt.show()
In your example, it looks like you are also attempting to take `log(x)`? If
this is the case, remember that, mathematically, `log(x)` is complex if `x <
0`.
Hope this helps.
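If the goal is to plot the piecewise `f` from the question, a minimal sketch that evaluates it point by point (note that `log(x)` is undefined for negative real `x`, so numpy returns `nan` there with a warning):

import numpy as np
import matplotlib.pyplot as plt

def f(x):
    if x < 0:
        return np.log(x)        # nan (plus a RuntimeWarning) for x < 0
    elif 0 <= x < 2:
        return -x
    else:                       # covers x >= 2 (the question leaves x == 2 undefined)
        return x**2 / (3.0 - x)

xv = np.arange(-4, 5, 0.1)
yv = [f(x) for x in xv]         # evaluate point by point instead of on the whole array

plt.plot(xv, yv)
plt.show()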
|
Handling French text Python
Question: I am trying to read some French text and do some frequency analysis of words.
I want the characters with the umlauts and other diacritics to stay. So, I did
this for testing:
>>> import codecs
>>> f = codecs.open('file','r','utf-8')
>>> for line in f:
... print line
...
Faites savoir à votre famille que vous êtes en sécurité.
So far, so good. But, I have a list of French files which I iterate over in
the following way:
import codecs,sys,os
path = sys.argv[1]
for f in os.listdir(path):
french = codecs.open(os.path.join(path,f),'r','utf-8')
for line in french:
print line
Here, it gives the following error:
rdholaki74: python TestingCodecs.py ../frenchResources | more
Traceback (most recent call last):
File "TestingCodecs.py", line 7, in <module>
print line
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe0' in position 14: ordinal not in range(128)
Why is it that the same file throws up an error when passed as an argument and
not when given explicitly in the code?
Thanks.
Answer: Because you're misinterpreting the cause. The fact that you're piping the
output means that Python can't detect what encoding to use. If
[stdout](http://docs.python.org/library/sys.html#sys.stdout) is [not a
TTY](http://docs.python.org/library/stdtypes.html#file.isatty) then you'll
need to encode as UTF-8 manually before outputting.
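For example, a minimal sketch of the loop from the question with an explicit encode, so it also works when stdout is piped:

import codecs, sys, os

path = sys.argv[1]
for f in os.listdir(path):
    french = codecs.open(os.path.join(path, f), 'r', 'utf-8')
    for line in french:
        # encode explicitly instead of relying on the terminal's encoding
        sys.stdout.write(line.encode('utf-8'))
    french.close()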
|
Trying to use PyOpenGL and having problems
Question: I am following the tutorial from
<http://pyopengl.sourceforge.net/context/tutorials/shader_1.xhtml>
The problem is I am using PyOpenGL 3.0.2, which when I import OpenGL from
python3.2 it works perfectly fine. I just can't find a way to get OpenGL
context working for python3.2:
from OpenGLContext import testingcontext
...and error...
No module named "OpenGLContext"
I have been searching all over google and can't find anything. I just want to
follow the tutorial, but using python3. Hoping someone could help me figure
out how to overcome this.
Much thanks in advance.
Answer: It seems like you do not have OpenGLContext installed. It is available as a
separate package to PyOpenGL and according to the [installation
documentation](http://pyopengl.sourceforge.net/documentation/installation.html),
you can install it with
pip install PyDispatcher PyVRML97 OpenGLContext
|
all() returns different values for the same expression
Question: I'm facing a very weird problem in my python script when I use the function
`all()`.
The console gives me `false` (which is obviously correct) for this line:
all(x == 2 for x in (8,2,2,2))
and in my script the same line returns `true`?!
What is going on here? Are there other `all()` functions which could have
overridden it in my script? I'm importing the following modules:
import os
import sys
import string
import time
from time import gmtime, strftime
from optparse import OptionParser, OptionGroup
import cx_Oracle
from pylab import *
import ROOT
from array import array
import logging
from traceback import format_exc
Answer: To access the builtin `all()` when it's being smashed, you can import
[`builtins`](http://docs.python.org/release/3.1.5/library/builtins.html). E.g:
from builtins import all
(Below the line where `pylab` is imported).
Alternatively, if you need to access `pylab.all()`, you can do:
import builtins
...
builtins.all()
Or, better yet, do `import pylab` rather than `from pylab import *`.
You might want to file a bug report with `pylab` too, that's seriously bad
behaviour. Although do note that the `from ... import *` style of import is
discouraged for this reason.
As DSM points out in the comments, this is presuming you are using 3.x; under
2.x, it is `__builtin__`.
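Under 2.x the equivalent of the first import would therefore be:

from __builtin__ import all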
|
Errors in program for calculating integrals
Question: I am trying to calculate the area of a graph using integrals.
The user is supposed to give me 3 numbers:
* x1, x2 - the bounds of the integral
* N is in how many pieces the program will divide the function
However, I keep getting wrong results.
The first difficulty that I faced is that range accepts only integers.
Also, for `z=(x2-x1)/N`: if I try to make it a float, I can't use it as a step
afterwards, and if I don't make it a float it ends up as zero, so Python shows me
an error that the step is zero.
Also, how can I sum up `z*(f(i)+f(i+z))/2`?
Here is my code:
# -*- coding: UTF-8 -*-
import math
def f(x) :
y = (-1/6.0)*(x-1)*(x-2)*(x+2)*(x-4)
return y
x1=int(raw_input ('Δωστε το χ1 οπου αρχιζει η μετρηση του ολοκληρωματος \n ')) #greek letters
x2=int(raw_input ('Δωστε χ2 οπου θελετε να ολοκληρωνεται η μετρηση \n '))
N=int(raw_input('Δωστε τον αριθμο n που θα ειναι το πληθος \n των τραπεζιων που θα χρησιμοπιουνται στη προσσεγγιση \n '))
z=(x2-x1)/N
for i in range(x1,x2,z):
z=float(z)
x1=float(x1)
x2=float(x2)
print (z*(f(i)+f(i+z))/2)
Answer: You are reading `x1`, you cast it to an `int` (read: throw away information),
then you cast it back to a float upon each iteration. The lost information
will not reappear.
z=(x2-x1)/N
Because your values are integers, it will perform an integer division, i.e. if
N > (x2 - x1), `z` will be zero. And since you always want to choose a large
`N`, `z` will always be zero.
Thus, `range()` cannot work.
What you need to do instead is read the inputs as floats:
number = float(raw_input("Please enter a number: "))
* * *
Note that applying `float()` repeatedly doesn't do anything useful. Thus, this
code is not necessary:
for ...
z=float(z)
x1=float(x1)
x2=float(x2)
* * *
> Also, how can I sum up z*(f(i)+f(i+z))/2?
You can't make it shorter, since `f()` isn't a linear function.
* * *
Another thing about `range()` is that the step parameter must be an integer.
You can easily create your own range function though:
def frange(start, stop, steps):
x = start
difference = float(stop - start)
for step in range(0, steps):
next_x = start + difference * (1 + step) / steps
yield x, next_x - x
x = next_x
* * *
## Working code
# -*- coding: UTF-8 -*-
import math
def f(x) :
y = (-1/6.0)*(x-1)*(x-2)*(x+2)*(x-4)
return y
x1 = float(raw_input ('Δωστε το χ1 οπου αρχιζει η μετρηση του ολοκληρωματος \n ')) #greek letters
x2 = float(raw_input ('Δωστε χ2 οπου θελετε να ολοκληρωνεται η μετρηση \n '))
N = int(raw_input('Δωστε τον αριθμο n που θα ειναι το πληθος \n των τραπεζιων που θα χρησιμοπιουνται στη προσσεγγιση \n '))
result = 0
for x, delta_x in frange(x1, x2, N):
result += delta_x * (f(x)+f(x+delta_x)) / 2
print result
|
How to create a two-dimensional array in Python?
Question: I want to create 2-d array in python like this:
n1 n2 n3 n4 n5
w1 1 4 0 1 10
w2 3 0 7 0 3
w3 0 12 9 5 4
w4 9 0 0 9 7
Where w1 w2... are the different words and n1 n2 n3 are different blogs.
How can I achieve this?
Answer: Assuming that the text from each blog is available as a string, and you have a
list of such strings available in `blogs`, this is how you'd create your
matrix.
import re
# Sample input for the following code.
blogs = ["This is a blog.","This is another blog.","Cats? Cats are awesome."]
# This is a list that will contain dictionaries counting the wordcounts for each blog
wordcount = []
# This is a list of all unique words in all blogs.
wordlist = []
# Consider each blog sequentially
for blog in blogs:
# Remove all the non-alphanumeric, non-whitespace characters,
# and then split the string at all whitespace after converting to lowercase.
# eg: "That's not mine." -> "Thats not mine" -> ["thats","not","mine"]
words = re.sub("\s+"," ",re.sub("[^\w\s]","",blog)).lower().split(" ")
# Add a new dictionary to the list. As it is at the end,
# it can be referred to by wordcount[-1]
wordcount.append({})
# Consider each word in the list generated above.
for word in words:
# If that word has been encountered before, increment the count
if word in wordcount[-1]: wordcount[-1][word]+=1
# Else, create a new entry in the dictionary
else: wordcount[-1][word]=1
# If it is not already in the list of unique words, add it.
if word not in wordlist: wordlist.append(word)
# We now have wordlist, which has a unique list of all words in all blogs.
# and wordcount, which contains len(blogs) dictionaries, containing word counts.
# Matrix is the table that you need of wordcounts. The number of rows will be
# equal to the number of unique words, and the number of columns = no. of blogs.
matrix = []
# Consider each word in the unique list of words (corresponding to each row)
for word in wordlist:
# Add as many columns as there are blogs, all initialized to zero.
matrix.append([0]*len(wordcount))
# Consider each blog one by one
for i in range(len(wordcount)):
# Check if the currently selected word appears in that blog
if word in wordcount[i]:
# If yes, increment the counter for that blog/column
matrix[-1][i]+=wordcount[i][word]
# For printing matrix, first generate the column headings
temp = "\t"
for i in range(len(blogs)):
temp+="Blog "+str(i+1)+"\t"
print temp
# Then generate each row, with the word at the starting, and tabs between numbers.
for i in range(len(matrix)):
temp = wordlist[i]+"\t"
for j in matrix[i]: temp += str(j)+"\t"
print temp
Now, `matrix[i][j]` will contain the number of times the word `wordlist[i]`
appears in blog `blogs[j]`.
|
How to check if my list has an item from another list(dictionary)?
Question: I'm a beginner in programming with python, and I've got a question which may
have an easy answer.
So I have a dictionary of words which is imported from a .txt file, next my
program asks you to type a sentence, and then it saves every word which you've
typed into another list.
I have to write a program which checks if every word of a list named
`sentence_list` is in the list which is named `words`. If the word is not
present in the dictionary I have to put it in another list which is being
filled by all the words that are mistyped or not in the dictionary.
For easier understanding, my program should work something like that:
Type your sentence:
My naeme is John, I lieve in Italy, which is beatiful country.
['naeme', 'lieve', 'beatiful']
This is what I could do until now:
words = open("dictionary.txt", encoding="latin2").read().lower().split()
sentence=input("Type your sentence: ")
import re
sentence_list = re.findall("\w+", sentence.lower())
I know I have to do something with for, but for is different in Python than it
is in JavaScript, which I am familiar with.
Answer: **Using sets**
You could use sets to find all of the words not in the dictionary list.
>>> set([1,2,3]).difference([2,3])
set([1])
Note that this will not include duplicates.
So for you, it would be something like this (if you need the result to be a
list):
misspelled_word_list = list( set(sentence_list).difference(words) )
**Using for**
Since you are required to use `for`, here is an alternate (less efficient)
approach:
misspelled_word_list = []
for word in sentence_list:
if (not word in misspelled_word_list) and (not word in words):
misspelled_word_list.append(word)
You just loop over the words in `sentence_list` and check whether they are in
your `words` list or not.
|
Using readlines in python? First time
Question: I have a text file with columns of data and I need to turn these columns into
individual lists or arrays. This is what I have so far
f = open('data.txt', 'r')
temp = []
for row in f.readlines():
Data = row.split()
temp.append(float(Data[0]))
When I run this I get `IndexError: list index out of range`.
Snippet of the data below:
16 0.2000
17 0.3000
18 0.4000
20 0.5000
21 0.6000
22 0.7000
24 0.8000
25 0.9000
26 1.000
I need the first column, if possible to look like this: Data = [16, 17, 18,
20, 21, 22, 24, 25, 26]
Answer: You are getting an empty list `Data=[]` if you read an empty row. You try to
get the first element from the list using `Data[0]`, but because it's an empty
list it doesn't have an element at position 0, so you get an `IndexError`.
Data=''.split()
Data[0]
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-686-0792b03cbbdf> in <module>()
----> 1 Data[0]
IndexError: list index out of range
This will print out `Data` if an `IndexError` occurs - you can see for yourself
that it prints an empty list:
f=open('file','r')
temp = []
for row in f.readlines():
Data = row.split()
try:
temp.append(float(Data[0]))
except IndexError:
print Data
You can use the `with` statement to open the file, which automatically closes
the file after it has been processed. Also, you can loop over the file itself,
without using `readlines()`.
with open(file,'r') as f:
for row in f:
Data = row.split()
try:
print Data[0]
except IndexError:
print 'You have an empty row'
EDIT: You are better off using the csv module:
import csv
with open('file.csv', 'rb') as f:
reader = csv.reader(f, delimiter=' ')
print [row[0] for row in reader if len(row)]
>>>
['16', '17', '18', '20', '21', '22', '24', '25', '26']
|
Python: Get random value from a array dimension
Question: I am trying to get a random element from an array's single dimension in
Python. So in the case below, I would like to retrieve any one of the 5
floats.
ar = rand(1, 5)
ar = array([[ 0.29889882, 0.84955019, 0.52989055, 0.57220576, 0.16841406]])
I have been able to retrieve a float if there are 5 elements and only one
dimension `(ar = rand(5, 1)),`
using:
ar[randrange(0, p.size)]
but how do I get a value from an array from a single dimension?
Answer: Assuming you are referring to `numpy.array`, you can use the following:
>>> import numpy as np
>>> import random
>>> np.array([[ 0.29889882, 0.84955019, 0.52989055, 0.57220576, 0.16841406]])
array([[ 0.29889882, 0.84955019, 0.52989055, 0.57220576, 0.16841406]])
>>>
>>>
>>> ar = np.array([[ 0.29889882, 0.84955019, 0.52989055, 0.57220576, 0.16841406]])
>>> ar[:, random.randint(0,4)]
array([ 0.29889882])
>>> ar[:, random.randint(0,4)]
array([ 0.52989055])
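If you would rather get a plain float instead of a length-1 array, you could also index the row directly (a small variation on the above):

>>> ar[0, random.randrange(ar.shape[1])]
0.52989055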
|
OpenCL Matrix Multiplication - Getting wrong answer
Question: here's a simple OpenCL Matrix Multiplication kernel which is driving me crazy:
By the way I am using pyopencl.
__kernel void matrixMul( __global int* C,
__global int* A,
__global int* B,
int wA, int wB){
int row = get_global_id(1); //2D Threas ID x
int col = get_global_id(0); //2D Threas ID y
//Perform dot-product accumulated into value
int value = 0;
for ( int k = 0; k < wA; k++ ){
value += A[row*wA + k] * B[k*wB+col];
}
C[row*wA+col] = value; //Write to the device memory
}
Where (inputs)
A = [72 45
75 61]
B = [26 53
46 76]
wA = wB = 2
Output I am getting:
Sometimes I get:
C = [3942 0
0 5472]
Other times I get:
C = [3942 7236
3312 5472]
But the output should be:
C = [3942 7236
4756 8611]
I don't know what mistake I am making here. I have spent the entire day with
no luck.
Please help me with this
Here's the full python code:
import pyopencl as cl
import numpy as np
import os
ORDER = 2
LEN = ORDER*ORDER
ctx = cl.create_some_context()
commandQueue = cl.CommandQueue( ctx )
A = np.array((72, 45, 75, 61), dtype = np.int32)
B = np.array((26, 53, 46, 76), dtype = np.int32)
C = np.empty_like(A)
in_buf1 = cl.Buffer( ctx, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR,
hostbuf = A )
in_buf2 = cl.Buffer( ctx, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR,
hostbuf = B )
out_buf = cl.Buffer( ctx, cl.mem_flags.WRITE_ONLY, C.nbytes )
kernelSrc1 = """__kernel void
matrixMul( /*const int Mdim,
const int Ndim,
const int Pdim,*/
__global int* C,
__global int* A,
__global int* B,
int wA, int wB)
{
int row = get_global_id(1); //2D Threas ID x
int col = get_global_id(0); //2D Threas ID y
//Perform dot-product accumulated into value
int value = 0;
for ( int k = 0; k < wA; k++ ){
value += A[row*wA + k] * B[k*wB+col];
}
C[row*wA+col] = value; //Write to the device memory
}"""
program1 = cl.Program(ctx, kernelSrc1 ).build()
event1 = program1.matrixMul( commandQueue, (LEN, ), None,
out_buf, in_buf1, in_buf2, np.int32(ORDER), np.int32(ORDER));
event1.wait()
cl.enqueue_copy(commandQueue, C, out_buf)
print C
I am using Python 2.7.x, pyopencl 2012.1, AMD APP SDK
Answer: You are setting your _global size_ argument incorrectly. Since you are using
two dimensions of global size in your kernel, you need to set your _global
size_ to (ORDER,ORDER). When you change it to that, you get:
[3942 7236
4756 8611]
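In the posted script that means changing the kernel launch so the global size is two-dimensional, roughly:

event1 = program1.matrixMul(commandQueue, (ORDER, ORDER), None,
                            out_buf, in_buf1, in_buf2,
                            np.int32(ORDER), np.int32(ORDER))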
|
Using BeautifulSoup Library with Python
Question: I'm following a tutorial on creating a map using Python and the Beautiful Soup
library.
I have downloaded beautiful soup and the folder is called
"beautifulsoup4-4.1.3". The contents of this folder are in the attached image.
During the tutorial I am given the following code to use to import my data and
beautiful soup:
import csv
from BeautifulSoup import BeautifulSoup
Trouble is that there is no file called "beautiful soup" within the beautiful
soup folder. I have also attached the error message I receive in the terminal.
How should I import beautiful soup when there is no file with that name? I
tried simply changing the folder's name to "beautiful soup". I did not expect
that to work and I was right - it did not.
Any advice on how to proceed is welcome.


Answer: You installed the BeautifulSoup library version 4, which has been renamed:
from bs4 import BeautifulSoup
If you want the old name and matching API, you need to install BeautifulSoup 3
instead:
easy_install BeautifulSoup
Note that since your tutorial is using `from BeautifulSoup import
BeautifulSoup` it may not entirely work with the updated API of BeautifulSoup
version 4. If you run into problems, take a look at the [porting to
BS4](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#porting-code-to-
bs4) section of the BeautifulSoup documentation to 'translate' BS 3 code to
the 4 API.
|
Iterate through XML to get all child nodes text value
Question: I have an XML file with the following data. I need to get the value of every
driver element and all the other attributes. I wrote the Python code below, but I only get the first driver value.
My xml :
<volume name="sp" type="span" operation="create">
<driver>HDD1</driver>
<driver>HDD2</driver>
<driver>HDD3</driver>
<driver>HDD4</driver>
</volume>
My script:
import xml.etree.ElementTree as ET
doc = ET.parse("vol.xml")
root = doc.getroot() #Returns the root element for this tree.
root.keys() #Returns the elements attribute names as a list. The names are returned in an arbitrary order
root.attrib["name"]
root.attrib["type"]
root.attrib["operation"]
print root.get("name")
print root.get("type")
print root.get("operation")
for child in root:
#print child.tag, child.attrib
print root[0].text
My output:
sr-query:~# python volume_check.py aaa
sp
span
create
HDD1
sr-queryC:~#
I am not getting `HDD2`, `HDD3` and `HDD4`. How do I iterate through this XML to get
all the values? Is there an optimized way? I think a for loop can do that, but I am not
familiar with Python.
Answer: In your `for` loop, it should be
child.text
not
root[0].text
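i.e. the loop becomes:

for child in root:
    print child.text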
|
How to get data from the TreelView list
Question: [http://www.vliz.be/vmdcdata/mangroves/aphia.php?p=browser&id=235056&expand=true#ct](http://www.vliz.be/vmdcdata/mangroves/aphia.php?p=browser&id=235056&expand=true#ct)
(That's the information I am trying to scrape)
**I want to scrape these detailed taxonomic trees so that I can manipulate
them any way I like.**
But there are a few problems in getting this tree data.
1. I can't fully expand the taxonomic tree: when some nodes expand, some collapse, as the instructions indicate, so saving the full page as an HTML file cannot solve my problem. I could repeat the process a few times to get separate files and concatenate them, but that seems an ugly way to do it.
2. I am tired of clicking; there are so many "plus" signs and I have to wait.
Is there a way to solve this using **Python**?
Answer: Use `Selenium`; this will expand the tree by clicking on the "plus signs" and
get the entire DOM with all the elements in it after it's done:
from selenium import webdriver
import time
browser=webdriver.Chrome()
browser.get('http://www.vliz.be/vmdcdata/mangroves/aphia.php?p=browser&id=235301&expand=true#ct')
while True:
try:
elem=browser.find_elements_by_xpath('.//*[@src="http://www.marinespecies.org/images/aphia/pnode.gif" or @src="http://www.marinespecies.org/images/aphia/plastnode.gif"]')[1]
elem.click()
time.sleep(2)
except:
break
content=browser.page_source
|
gnuplot - Histogram with histeps connects bars
Question: This is a minimal working example of the code I'm using:
#!/bin/bash
gnuplot << EOF
set term postscript portrait color enhanced
set encoding iso_8859_1
set output 'temp.ps'
set grid noxtics noytics noztics front
set size ratio 1
set multiplot
set lmargin 9; set bmargin 3; set rmargin 2; set tmargin 1
n=32 #number of intervals
max=13. #max value
min=-3.0 #min value
width=(max-min)/n #interval width
hist(x,width)=width*floor(x/width)+width/2.0
set boxwidth width
set style fill solid 0.25 noborder
plot "< awk '{if (3.544068>=\$1) {print \$0}}' /data_file" u (hist(\$2,width)):(1.0) smooth freq w boxes lc rgb "red" lt 1 lw 1.5 notitle
EOF
which gets me this:

What I need is to use `histeps` instead, but when I change `boxes` to
`histeps` in the `plot` command above, I get:

What is going on here??
Here's the [data_file](http://pastebin.com/JCD2Um52). Thank you!
* * *
EDIT: If having `histeps` follow the actual outer bar limits instead of
interpolating values in between (like `boxes` does) is not possible, then how
could I draw **just** the outline of a histogram made with `boxes`?
* * *
EDIT2: As usual mgilson, your answer is beyond useful. One minor glitch
though: this is the output I get when I combine both plots with
the command:
plot "< awk '{if (3.544068>=\$1) {print \$0}}' data_file" u (hist(\$2,width)):(1.0) smooth freq w boxes lc rgb "red" lt 1 lw 1.5 notitle, \
"<python pyscript.py data_file" u 1:2 w histeps lc rgb "red" lt 1 lw 1.5 notitle

Something appears to be shifting the output of the `python` script and I can't
figure out what it might be. (Fixed in comments)
Answer: The binning is quite easy if you have python + numpy. It's a _very_ popular
package, so you should be able to find it in your distribution's repository if
you're on Linux.
#Call this script as:
#python this_script_name.py 3.14159 data_file.dat
import numpy as np
import sys
n=32 #number of intervals
dmax=13. #max value
dmin=-3.0 #min value
#primitive commandline parsing
limit = float(sys.argv[1]) #first argument is the limit
datafile = sys.argv[2] #second argument is the datafile to read
data = [] #empty list
with open(datafile) as f: #Open first commandline arguement for reading.
for line in f: #iterate through file returning 1 line at a time
line = line.strip() #remove whitespace at start/end of line
if line.startswith('#'): #ignore comment lines.
continue
c1,c2 = [float(x) for x in line.split()] #convert line into 2 floats and unpack
if limit >= c1: #Check to make sure first one is bigger than your 3.544...
data.append(c2) #If c1 is big enough, then c2 is part of the data
counts, edges = np.histogram(data, #data to bin
bins=n, #number of bins
range=(dmin,dmax), #bin range
normed=False #numpy2.0 -- use `density` instead
)
centers = (edges[1:] + edges[:-1])/2. #average the bin edges to the center.
for center,count in zip(centers,counts): #iterate through centers and counts at same time
print center,count #write 'em out for gnuplot to read.
and the gnuplot script looks like:
set term postscript portrait color enhanced
set output 'temp.ps'
set grid noxtics noytics noztics front
set size ratio 1
set multiplot
set lmargin 9
set bmargin 3
set rmargin 2
set tmargin 1
set style fill solid 0.25 noborder
plot "<python pyscript.py 3.445 data_file" u 1:2 w histeps lc rgb "red" lt 1 lw 1.5 notitle
I'll explain more when I get a little more free time ...
|
Identify a shape inside another shape using AForge.Net
Question: I am using the AForge.Net library to do some basic image processing stuff as
part of a project. I have found it trivial to identify individual geometric
shapes in an image (like a square, circle etc.,). However when I have an image
like  , the program can only identify
the outer circle. I would want it to be recognized as a circle and a line.
Similarly another example is,  , where
the program identifies only a square but I need it to recognize it as a square
and a circle.
I guess this library itself is outdated and no longer supported, but I will
really appreciate some help! I have found this to be a really easy library to
use but if my requirement cannot be met with it I am open to other libraries
as well. (in Java, C# or Python). Thanks!
Answer: That's an easy task using `Python`.
You'll need libraries like `numpy` and `scipy.ndimage` to be installed, and
with only `scipy.ndimage` you can extract any shape on a black background.
So if your image has a white background, you need to invert it first, which is
an easy job.
import scipy.ndimage
from scipy.misc import imread # so I can read the image as a numpy array
img=imread('image.png') # I assume your image is a grayscale image with a black bg.
labeled,y=scipy.ndimage.label(img) # this will label all connected areas(shapes).
#y returns how many shapes??
shapes=scipy.ndimage.find_objects(labeled)
# shapes returns indexing slices for ever shape
# So if you have 2 shapes in your image,then y=2.
# to extract the 1st shape you do like this.
first_shape=img[shapes[0]] # that's is a numpy feature if you are familiar with numpy .
second_shape=img[shapes[1]]
After extracting your individual shapes you can actually do some math to
identify **what each one is** (e.g. the circularity ratio; you can google it, it's
helpful in your case).
|
Get country code for timezone using pytz?
Question: I'm using [pytz](http://pytz.sourceforge.net/#country-information). I've read
through the entire documentation sheet, but didn't see how this could be done.
I have a timezone: America/Chicago. All I want is to get the respective
country code for this timezone: US.
It shows that I can do the opposite, such as:
>>> country_timezones('ch')
['Europe/Zurich']
>>> country_timezones('CH')
['Europe/Zurich']
but I need to do it the other way around.
Can this be done in Python, using pytz (or any other way for that matter)?
Answer: You can use the `country_timezones` object from `pytz` and generate an inverse
mapping:
from pytz import country_timezones
timezone_country = {}
for countrycode in country_timezones:
timezones = country_timezones[countrycode]
for timezone in timezones:
timezone_country[timezone] = countrycode
Now just use the resulting dictionary:
>>> timezone_country['Europe/Zurich']
u'CH'
|
Which is the preferred method to use jinja2 on App Engine?
Question: I originally implemented Jinja2 on App Engine using the examples shown on the
App Engine site here:
<https://developers.google.com/appengine/docs/python/gettingstartedpython27/templates>
where jinja2 is imported directly:
import jinja2
import os
jinja_environment = jinja2.Environment(
loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))
class MainPage(webapp2.RequestHandler):
def get(self):
greetings = 'somestring'
template_values = {
'greetings': greetings,
}
template = jinja_environment.get_template('index.html')
self.response.out.write(template.render(template_values))
But I'm currently bolting on Simpleauth (https://github.com/crhym3/simpleauth)
which follows the implementation that Nick Johnson described here:
<http://blog.notdot.net/2011/11/Migrating-to-Python-2-7-part-2-Webapp-and-
templates> where jinja2 is imported from webapp2_extras:
import os
import webapp2
from webapp2_extras import jinja2
class BaseHandler(webapp2.RequestHandler):
@webapp2.cached_property
def jinja2(self):
return jinja2.get_jinja2(app=self.app)
def render_template(self, filename, **template_args):
self.response.write(self.jinja2.render_template(filename, **template_args))
class IndexHandler(BaseHandler):
def get(self):
self.render_template('index.html', name=self.request.get('name'))
**Which of these is the preferred method for using jinja2?** (They don't seem
to play together nicely, and would prefer to standardize on the best option.)
Answer: I had the same question, but the answers here do not satisfy me.
I think it's about encapsulation vs performance. For a small application you
can have a global, no problem. So the first solution is just fine. It lets you
solve an easy problem in an easy way, without the overhead to learn the
details of a framework.
For a bigger application you probably want to encapsulate and bring some order
into your objects. Basically you build a framework, an infrastructure for
scalability. But that's what webapp2 is supposed to give you.
The basic problem behind it: if you decide to make a singleton-type object
local to a class that gets instantiated and freed as part of the request logic (like
the webapp2.RequestHandler classes in the official examples), then the referenced
object (jinja2) will be released when the last class instance is gone... you
might get a lot of freeing and reallocating. So it's good to have a link to the
object somewhere (webapp2.registry) to prevent it from being removed even when it is
not referenced anywhere else. It's like a global, but without polluting the global
namespace, and it is accessible from everywhere through webapp2.get_app().registry.
It also acts as a cache. Then, with cached_property, you add just another layer of caching.
In short: if you want to encapsulate, you had better add caching for your app to
stay efficient.
In this case you go for the webapp2_extras jinja2, and in every module you can
access the same Jinja environment with:
jinja2.get_jinja2().environment
|
Streaming continuous data over a network with python
Question: I have a device that continually outputs data and I would like to send that
data to a client on the same network as it is produced and I'm not finding a
good solution. Here is what I'm trying.
Server:
import SocketServer
from subprocess import Popen,PIPE
class Handler(SocketServer.BaseRequestHandler):
def handle(self):
if not hasattr(self, 'Proc'):
self.Proc = Popen('r.sh', stdout=PIPE)
socket = self.request[1]
socket.sendto(self.Proc.stdout.readline(),self.client_address)
if __name__ == "__main__":
HOST, PORT = "192.168.1.1", 6001
server = SocketServer.UDPServer((HOST, PORT), Handler)
server.serve_forever()
Client:
import socket
data = " ".join(sys.argv[1:])
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(data + "\n", ("192.168.1.1", 6001))
try:
received = sock.recv(1024)
while True:
print "Sent: {}".format(data)
print "Received: {}".format(received)
sock.sendto('more' + "\n", ("192.168.1.1", 6001))
received = sock.recv(1024)
except:
print "No more messages"
arg[1] for the client is a program that outputs lines of data for several
minutes that I need to process as it is created. The problem seems to be that
every time the client sends another request, a new Handler object is created,
so I lose Proc. How can I stream Proc.stdout?
Edit: The device is a Korebot2, so I have limited access to other python
libraries due to space.
Answer: Using UDP you get a new "connection" each time you send a datagram, which is
the reason you notice that a new object instance is created each time you send
something. You're probably using the wrong kind of protocol here... UDP is
used mostly for sending distinct "datagrams", or when a longer connection is
not needed. TCP is also called a "streaming" protocol, and is often used for
data that has no fixed end.
Also remember that UDP is not a reliable protocol; if used over a network, it
is almost guaranteed that you will lose packets.
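For example, a minimal TCP sketch of your server, reusing `r.sh` and the address from the question (an illustration rather than a drop-in replacement):

import SocketServer
from subprocess import Popen, PIPE

class Handler(SocketServer.StreamRequestHandler):
    def handle(self):
        # one TCP connection == one long-lived stream of lines
        proc = Popen('r.sh', stdout=PIPE)
        for line in iter(proc.stdout.readline, ''):
            self.wfile.write(line)

if __name__ == "__main__":
    HOST, PORT = "192.168.1.1", 6001
    server = SocketServer.TCPServer((HOST, PORT), Handler)
    server.serve_forever()

The client then opens a `socket.SOCK_STREAM` connection and calls `recv()` in a loop until it returns an empty string.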
|
Embed Python HTTP server into C program as RPC server?
Question: I have a program written in C++ with a web interface for the purpose of
RPC. I can call `http://localhost/ListVariables` or
`http://localhost/RunFunction?var=1` and have the C code execute ListVariables
or RunFunction. It works, but I'd rather not have to manage the web server in
C/C++ when there are so many good Python web servers out there.
What I'm imagining is having the C program call into Python to start a web
server on another thread (i.e. Tornado), return to C and then continue
chugging along doing calculations. Then when the Python server receives a
request on `http://localhost/ListVariables`, it calls _back_ into C and
executes ListVariables on the already running process.
C -----> processing -----------> processing ------------> RPC: FuncA -------->
| ^ |
\---> Python Web Server ---------- Request for: FuncA --/ ... \-------->
^
browser: http://localhost/FuncA ---/
The project has the unfortunate caveat that the program must be started from C
to begin with. After doing some research this seems to be a bit of a border
case, since Python-->C and C-->Python can be done with Cython. However, I
can't find many resources on C-->Python-->C, as most of the examples I've
found describe linking to libraries and not to already-running processes. Is
it possible to have Python call back into a running C program?
Answer: Absolutely. [Create a "fake"
module](http://docs.python.org/extending/extending.html) in the C program
injected into `sys.modules` that the Python code can import and access as it
would any other module.
|
While loop waiting for input python
Question: I have a question about while loops in Python. I want to make a program that
performs a while loop over a certain time. I want to add the extra feature that,
while the program is running, a certain variable can be changed by pressing a key.
from time import sleep
import time
i=0
a=0
while i<10:
i=i+1
i=i+a
a=a+1
time.sleep(1)
print i
I want the variable a to be reset to 0 by pressing any key. The
loop should continue unchanged if no button is pressed. What command should I
add?
Thanks. Edit: I tried:
import pygame
from pygame.locals import *
import time
i=0
a=0
pygame.init()
while i<10:
pygame.event.get()
i=i+a
print i
keys = pygame.key.get_pressed()
if keys[K_ESCAPE]:
i=0
i=i+1
time.sleep(1)
pygame.quit()
But now nothing happens when I press a button. What did I miss?
Answer: What you need is a non-blocking input function
while i<10:
keys = pygame.key.get_pressed()
etc
...
pygame has all sorts of event stuff built in so doing all the hard work of
threading yourself shouldn't be necessary.
If that doesn't work for you check this out:
<http://www.darkcoding.net/software/non-blocking-console-io-is-not-possible/>
|
Interfacing with TUN\TAP for MAC OSX (Lion) using Python
Question: I found the following tun\tap example program and can not get it to work:
<http://www.secdev.org/projects/tuntap_udp/files/tunproxy.py>
I have modified the following lines:
f = os.open("/dev/tun0", os.O_RDWR)
ifs = ioctl(f, TUNSETIFF, struct.pack("16sH", "toto%d", TUNMODE))
ifname = ifs[:16].strip("\x00")
The first line was modified to reflect the real location of the driver. It was
originally
f = os.open("/dev/net/tun", os.O_RDWR)
Upon running I get the following error:
sudo ./tuntap.py -s 9000
Password:
Traceback (most recent call last):
File "./tuntap.py", line 65, in <module>
ifs = ioctl(f, TUNSETIFF, struct.pack("16sH", "toto%d", TUNMODE))
IOError: [Errno 25] Inappropriate ioctl for device
I am using the latest tun\tap drivers installed from
<http://tuntaposx.sourceforge.net/download.xhtml>
Answer: The OSX tun/tap driver seems to work a bit differently. The Linux example
dynamically allocates a tun interface, which does not work in OSX, at least
not in the same way.
I stripped the code to create a basic example of how tun can be used on OSX
using a self-selected tun device, printing each packet to the console. I added
[Scapy](http://www.secdev.org/projects/scapy/doc/) as a dependency for pretty
printing, but you can replace it by a raw packet dump if you want:
import os, sys
from select import select
from scapy.all import IP
f = os.open("/dev/tun12", os.O_RDWR)
try:
while 1:
r = select([f],[],[])[0][0]
if r == f:
packet = os.read(f, 4000)
# print len(packet), packet
ip = IP(packet)
ip.show()
except KeyboardInterrupt:
print "Stopped by user."
You will either have to run this as root, or do a `sudo chown your_username
/dev/tun12` to be allowed to open the device.
To configure it as a point-to-point interface, type:
$ sudo ifconfig tun12 10.12.0.2 10.12.0.1
Note that the `tun12` interface will only be available while `/dev/tun12` is
open, i.e. while the program is running. If you interrupt the program, your
tun interface will disappear, and you will need to configure it again next
time you run the program.
If you now ping your endpoint, your packets will be printed to the console:
$ ping 10.12.0.1
Ping itself will print request timeouts, because there is no tunnel endpoint
responding to your ping requests.
|
Permission issue for apache
Question: Environment Details:
Amazon Ec2 Ubuntu 12.04
Django + mod_wsgi + python 2.6
web server: apache2
I have mounted a `10GB` `ebs volume` to an instance to `/mnt/ebs1/`. After
mounting the volume and formatting, I have placed all my project files in
`/mnt/ebs1/project`. the `wsgi` file is in
`/mnt/ebs1/project/apache/django.wsgi`. The content of wsgi file is:
import os, sys
sys.path.insert(0, '/mnt/ebs1/project')
sys.path.insert(1, '/mnt/ebs1')
os.environ['DJANGO_SETTINGS_MODULE'] = 'project.configs.common.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
My `httpd.conf` file looks as:
LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
WSGIPythonHome /usr/bin/python2.6
WSGIScriptAlias / /mnt/ebs1/project/apache/django.wsgi
<Directory /mnt/ebs1/project>
Order allow,deny
Allow from all
</Directory>
<Directory /mnt/ebs1/project/apache>
Order allow,deny
Allow from all
</Directory>
Alias /static/ /mnt/ebs1/project/static/
<Directory /mnt/ebs1/project/static>
Order deny,allow
Allow from all
</Directory>
The above configurations gives me `Forbidden: You don't have permission to
access / on this server`.
I tried to find the user which is running apache using `ps aux` which is `www-
data` and has group `www-data`. I have tried to change the ownership of
`/mnt/ebs1` and its subdirectories using `chown -R www-data:www-data
/mnt/ebs1` but that still does not solve the problem. Can anyone tell me what
I am doing wrong or what I have missed?
Answer: Duplicate of:
<http://serverfault.com/questions/441444/permission-issue-for-apache>
WSGIPythonHome is wrong.
Go read the response in the duplicate.
|
Running Python scripts on local machine
Question: I am trying to run a simple hello world example in python that runs against
mongodb. I've set up mongo, bottle and pymongo and have the following script
inside `C:\Python27\Scripts`:
import bottle
import pymongo
@bottle.route('/')
def index()
from pymongo import Connection
connection = Connection('localhost', 27017)
db = connection.test
names = db.names
item = names.find_one()
return '<b>Hello %s!</b>' % item['name']
bottle.run(host='localhost', port=8082)
I want to run this locally, and I go to `http://localhost:8082`, but I get
"not found". How can I run that code locally on my computer so
I can test it via the browser? I am running Windows 7 and have WAMP
installed.
Answer: 1) Add `:` after function name:
def index():
2) WAMP does not include MongoDB. You need [to install Mongodb locally as
well](http://docs.mongodb.org/manual/tutorial/install-mongodb-on-windows/).
3) If something doesn't work, then you generally should be looking console for
errors.
|
Python calendar: day/month names in specific locale
Question: I am playing with Python's
[calendar](http://docs.python.org/library/calendar.html) module that's in the
standard library. Basically I need a list of all days of a month, like so:
>>> import calendar
>>> calobject = calendar.monthcalendar(2012, 10)
>>> print calobject
[[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14], [15, 16, 17, 18, 19, 20, 21], [22, 23, 24, 25, 26, 27, 28], [29, 30, 31, 0, 0, 0, 0]]
Now what I also need are the names of the month and days in a specific locale.
I didn't find a way to get these from the `calobject` itself - but I was able
to get them like so:
>>> import calendar
>>> calobject = calendar.LocaleTextCalendar(calendar.MONDAY, 'de_DE')
>>> calobject.formatmonth(2012, 10)
' Oktober 2012\nMo Di Mi Do Fr Sa So\n 1 2 3 4 5 6 7\n 8 9 10 11 12 13 14\n15 16 17 18 19 20 21\n22 23 24 25 26 27 28\n29 30 31\n'
So `Oktober` is the `de_DE` name for October. Fine. The information must be
there. I'm wondering if I can access that month name somehow on a plain
`calendar` object instead of a `calendar.LocaleTextCalendar` object. The first
example (with the list) is really what I need, and I don't like the idea of
creating two calendar objects to get localized names.
Anyone got a smart idea?
Answer: Ha! Found an easy way to get localized day/month names:
>>> import locale
>>> locale.setlocale(locale.LC_ALL, 'de_DE')
'de_DE'
>>> import calendar
>>> calendar.month_name[10]
'Oktober'
>>> calendar.day_name[1]
'Dienstag'
|
Python Regular Expression for validating numbers
Question: I want to create a regular expression for the Python snippet below.
import re
pattern = "\d*\.?\d+[Ee]?[+-]?\d*"
r = re.compile(pattern)
txt = """
12
.12
12.5
12.5E4
12.5e4
12.4E+4
12E4
12e-4
"""
x = r.findall(txt)
print(x)
For filtering all valid input from txt this code is fine, but invalid input
such as
> .12e, 12.3+4
is also allowed. How can I fix this?
Answer: The traditional regexp is along these lines:
pattern = (
    "(?:"
    r"\d+(?:\.\d+)?(?:[Ee][-+]?\d+)?"
    "|"
    r"\.\d+(?:[Ee][+-]?\d+)?"
    ")"
)
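For validating a whole token (rather than using `findall`), you would typically anchor it; a minimal sketch using the pattern above:

import re

number_re = re.compile(pattern + r'\Z')   # anchored, so trailing junk like '.12e' fails

for token in txt.split():
    print token, bool(number_re.match(token))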
But you can always do things the easy way:
def is_number(x):
try:
float(x)
return True
except ValueError:
return False
|
unable to call firefox from selenium in python on AWS machine
Question: I am trying to use Selenium from Python to scrape some dynamic pages with
JavaScript. However, I cannot launch Firefox after following the Selenium instructions
on the PyPI page (http://pypi.python.org/pypi/selenium). I installed
Firefox on AWS Ubuntu 12.04. The error message I got is:
In [1]: from selenium import webdriver
In [2]: br = webdriver.Firefox()
---------------------------------------------------------------------------
WebDriverException Traceback (most recent call last)
/home/ubuntu/<ipython-input-2-d6a5d754ea44> in <module>()
----> 1 br = webdriver.Firefox()
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/webdriver.pyc in __init__(self, firefox_profile, firefox_binary, timeout)
49 RemoteWebDriver.__init__(self,
50 command_executor=ExtensionConnection("127.0.0.1", self.profile,
---> 51 self.binary, timeout),
52 desired_capabilities=DesiredCapabilities.FIREFOX)
53
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/extension_connection.pyc in __init__(self, host, firefox_profile, firefox_binary, timeout)
45 self.profile.add_extension()
46
---> 47 self.binary.launch_browser(self.profile)
48 _URL = "http://%s:%d/hub" % (HOST, PORT)
49 RemoteConnection.__init__(
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.pyc in launch_browser(self, profile)
42
43 self._start_from_profile_path(self.profile.path)
---> 44 self._wait_until_connectable()
45
46 def kill(self):
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/firefox/firefox_binary.pyc in _wait_until_connectable(self)
79 raise WebDriverException("The browser appears to have exited "
80 "before we could connect. The output was: %s" %
---> 81 self._get_firefox_output())
82 if count == 30:
83 self.kill()
WebDriverException: Message: 'The browser appears to have exited before we could connect. The output was: Error: no display specified\n'
I searched the web and found that this problem has happened to other people
(https://groups.google.com/forum/?fromgroups=#!topic/selenium-
users/21sJrOJULZY). But I don't understand the solution, if there is one.
Can anyone help me please? Thanks!
Answer: The problem is Firefox requires a display. I've used
[pyvirtualdisplay](http://pypi.python.org/pypi/PyVirtualDisplay) in my example
to simulate a display. The solution is:
from pyvirtualdisplay import Display
from selenium import webdriver
display = Display(visible=0, size=(1024, 768))
display.start()
driver= webdriver.Firefox()
driver.get("http://www.somewebsite.com/")
<---some code--->
#driver.close() # Close the current window.
driver.quit() # Quit the driver and close every associated window.
display.stop()
**Please note that pyvirtualdisplay requires one of the following back-ends:
Xvfb, Xephyr, Xvnc.**
This should resolve your issue.
|
how do I get momoko working?
Question: I am new to python and even newer to tornado/momoko. I am struggling with an
example from [momoko's
website](https://github.com/FSX/momoko/blob/master/examples/gen_example.py). I
have the database.cfg file configured with my settings.
#!/usr/bin/env python
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
from tornado import gen
import momoko
import settings
class BaseHandler(tornado.web.RequestHandler):
@property
def db(self):
return self.application.db
class OverviewHandler(BaseHandler):
def get(self):
self.write('''
<ul>
<li><a href="/query">A single query</a></li>
<li><a href="/batch">A batch of queries</a></li>
<li><a href="/chain">A chain of queries</a></li>
<li><a href="/multi_query">Multiple queries executed with gen.Task</a></li>
<li><a href="/callback_and_wait">Multiple queries executed with gen.Callback and gen.Wait</a></li>
</ul>
''')
self.finish()
class SingleQueryHandler(BaseHandler):
@tornado.web.asynchronous
@gen.engine
def get(self):
# One simple query
cursor = yield gen.Task(self.db.execute, 'SELECT 42, 12, %s, 11;', (25,))
self.write('Query results: %s' % cursor.fetchall())
self.finish()
class BatchQueryHandler(BaseHandler):
@tornado.web.asynchronous
@gen.engine
def get(self):
# These queries are executed all at once and therefore they need to be
# stored in an dictionary so you know where the resulting cursors
# come from, because they won't arrive in the same order.
cursors = yield gen.Task(self.db.batch, {
'query1': ['SELECT 42, 12, %s, %s;', (23, 56)],
'query2': 'SELECT 1, 2, 3, 4, 5;',
'query3': 'SELECT 465767, 4567, 3454;'
})
for key, cursor in cursors.items():
self.write('Query results: %s = %s<br>' % (key, cursor.fetchall()))
self.finish()
class QueryChainHandler(BaseHandler):
@tornado.web.asynchronous
@gen.engine
def get(self):
# Execute a list of queries in the order you specified
cursors = yield gen.Task(self.db.chain, (
['SELECT 42, 12, %s, 11;', (23,)],
'SELECT 1, 2, 3, 4, 5;'
))
for cursor in cursors:
self.write('Query results: %s<br>' % cursor.fetchall())
self.finish()
class MultiQueryHandler(BaseHandler):
@tornado.web.asynchronous
@gen.engine
def get(self):
cursor1, cursor2, cursor3 = yield [
gen.Task(self.db.execute, 'SELECT 42, 12, %s, 11;', (25,)),
gen.Task(self.db.execute, 'SELECT 42, 12, %s, %s;', (23, 56)),
gen.Task(self.db.execute, 'SELECT 465767, 4567, 3454;')
]
self.write('Query 1 results: %s<br>' % cursor1.fetchall())
self.write('Query 2 results: %s<br>' % cursor2.fetchall())
self.write('Query 3 results: %s' % cursor3.fetchall())
self.finish()
class CallbackWaitHandler(BaseHandler):
@tornado.web.asynchronous
@gen.engine
def get(self):
self.db.execute('SELECT 42, 12, %s, 11;', (25,),
callback=(yield gen.Callback('q1')))
self.db.execute('SELECT 42, 12, %s, %s;', (23, 56),
callback=(yield gen.Callback('q2')))
self.db.execute('SELECT 465767, 4567, 3454;',
callback=(yield gen.Callback('q3')))
cursor1 = yield gen.Wait('q1')
cursor2 = yield gen.Wait('q2')
cursor3 = yield gen.Wait('q3')
self.write('Query 1 results: %s<br>' % cursor1.fetchall())
self.write('Query 2 results: %s<br>' % cursor2.fetchall())
self.write('Query 3 results: %s' % cursor3.fetchall())
self.finish()
def main():
try:
tornado.options.parse_command_line()
application = tornado.web.Application([
(r'/', OverviewHandler),
(r'/query', SingleQueryHandler),
(r'/batch', BatchQueryHandler),
(r'/chain', QueryChainHandler),
(r'/multi_query', MultiQueryHandler),
(r'/callback_and_wait', CallbackWaitHandler),
], debug=True)
application.db = momoko.AsyncClient({
'host': settings.host,
'port': settings.port,
'database': settings.database,
'user': settings.user,
'password': settings.password,
'min_conn': settings.min_conn,
'max_conn': settings.max_conn,
'cleanup_timeout': settings.cleanup_timeout
})
http_server = tornado.httpserver.HTTPServer(application)
http_server.listen(8888)
tornado.ioloop.IOLoop.instance().start() # the problem lies here, I believe
except KeyboardInterrupt:
print('Exit')
if __name__ == '__main__':
main()
When I run the script, I get the error:
[E 121023 15:31:07 ioloop:337] Exception in I/O handler for fd 6
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 327, in start
self._handlers[fd](fd, events)
File "/usr/local/lib/python2.7/dist-packages/momoko/pools.py", line 313, in _io_callback
state = self._conn.poll()
OperationalError: asynchronous connection failed
I can connect to the webserver. If I type in "http://localhost:8888/" I see 5
links (ie "A single query", "A batch of queries", etc). When I click on one,
however, I get:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1021, in _stack_context_handle_exception
raise_exc_info((type, value, traceback))
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 265, in _nested
if exit(*exc):
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 161, in __exit__
return self.exception_handler(type, value, traceback)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 114, in handle_exception
return runner.handle_exception(typ, value, tb)
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 388, in handle_exception
self.run()
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 343, in run
yielded = self.gen.throw(*exc_info)
File "/home/desktop-admin/Desktop/eclipse/workspace/jive_backend/test.py", line 35, in get
cursor = yield gen.Task(self.db.execute, 'SELECT 42, 12, %s, 11;', (25,))
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 258, in _nested
yield vars
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 228, in wrapped
callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/momoko/pools.py", line 313, in _io_callback
state = self._conn.poll()
OperationalError: asynchronous connection failed
Answer: `Momoko` is a wrapper for `psycopg2`, not for MySQL.
With MySQL, try to use something from the list on [this
page](https://github.com/ovidiucp/pymysql-benchmarks):
* [Twisted's adpapi](http://twistedmatrix.com/documents/10.1.0/api/twisted.enterprise.adbapi.html)
* [txMySQL](https://github.com/hybridlogic/txMySQL)
* [Adb.py](https://github.com/ovidiucp/pymysql-benchmarks/blob/master/adb.py)
If you are using PostgreSQL, try to connect to it in a synchronous way first.
|
DataFrame Indexing with a date series
Question: I'm new to Python and Pandas, and am having some trouble indexing by a date
series. I am trying to pull data into a DataFrame from a SQLite db that
consists of a date in format 'mm/dd/yyyy' and an equity price. I then create a
new DataFrame using set_index to index the prices by the dates. How can I set
the new index as a dateseries using the dates from my dataset? Does this
require a datetime conversion or does DataFrame have the ability to convert
from an object to a dateseries?
Below is the code I am using:
import sqlite3 as db
import pandas as p
dbcon = db.connect(...ETF_DATA_TEST.db)
c = dbcon.cursor()
c.execute(""" QUERY """)
rs =p.DataFrame.from_records(c.fetchall(),columns =['Date','Price'])
data = rs.set_index('Date')
Thanks
Answer: You can use datetime.datetime.strptime to parse your 'Date' strings and then
construct numpy.datetime64 values from the datetime.datetime objects:
import datetime, numpy
data = rs.reindex(numpy.array([datetime.datetime.strptime(x, '%m/%d/%Y') for x in rs['Date']], dtype='datetime64[us]'))
|
python parse conditional xml value
Question: Here is my XML file:
<METAR>
<wind_dir_degrees>210</wind_dir_degrees>
<wind_speed_kt>14</wind_speed_kt>
<wind_gust_kt>22</wind_gust_kt>
</METAR>
Here is my script to parse the wind direction and speed. However, the wind
gust is a conditional value and doesn't always appear in my xml file. I'd like
to show the value if it does exist and nothing if it doesn't.
import xml.etree.ElementTree as ET
from urllib import urlopen
link = urlopen('xml file')
tree = ET.parse(link)
root = tree.getroot()
data = root.findall('data/METAR')
for metar in data:
print metar.find('wind_dir').text
I tried something like this but get errors
data = root.findall('wind_gust_kt')
for metar in data:
if metar.find((wind_gust_kt') > 0:
print "Wind Gust: ", metar.find('wind_gust_kt').text
Answer: You can use `findtext` with a default value of `''`, eg:
print "Wind Gust: ", meta.findtext('wind_gust_kt', '')
|
Can't compile python bindings using boost-python on OS X
Question: I'm trying to write Python bindings for C++ using boost-python, but I can't compile
them and I can't find out why.
I'm using a code sample from boost-python `hello.cpp`:
// Copyright Ralf W. Grosse-Kunstleve 2002-2004. Distributed under the Boost
// Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include <boost/python/class.hpp>
#include <boost/python/module.hpp>
#include <boost/python/def.hpp>
#include <iostream>
#include <string>
namespace { // Avoid cluttering the global namespace.
// A friendly class.
class hello
{
public:
hello(const std::string& country) { this->country = country; }
std::string greet() const { return "Hello from " + country; }
private:
std::string country;
};
// A function taking a hello object as an argument.
std::string invite(const hello& w) {
return w.greet() + "! Please come soon!";
}
}
BOOST_PYTHON_MODULE(extending)
{
using namespace boost::python;
class_<hello>("hello", init<std::string>())
// Add a regular member function.
.def("greet", &hello::greet)
// Add invite() as a member of hello!
.def("invite", invite)
;
// Also add invite() as a regular function to the module.
def("invite", invite);
}
With this `Jamroot`:
exe hello : hello.cpp ;
When I run `b2 --debug-configuration -n` everything seems fine:
$ ~/Sites/boost $ b2 --debug-configuration -n
notice: found boost-build.jam at /usr/local/share/boost-build/boost-build.jam
notice: loading Boost.Build from /usr/local/share/boost-build/kernel
notice: Searching /etc /Users/jauneau /usr/local/share/boost-build/kernel /usr/share/boost-build /usr/local/share/boost-build/kernel /usr/local/share/boost-build/util /usr/local/share/boost-build/build /usr/local/share/boost-build/tools /usr/local/share/boost-build/contrib /usr/local/share/boost-build/. for site-config configuration file site-config.jam .
notice: Configuration file site-config.jam not found in /etc /Users/jauneau /usr/local/share/boost-build/kernel /usr/share/boost-build /usr/local/share/boost-build/kernel /usr/local/share/boost-build/util /usr/local/share/boost-build/build /usr/local/share/boost-build/tools /usr/local/share/boost-build/contrib /usr/local/share/boost-build/. .
notice: Searching /Users/jauneau /usr/local/share/boost-build/kernel /usr/share/boost-build /usr/local/share/boost-build/kernel /usr/local/share/boost-build/util /usr/local/share/boost-build/build /usr/local/share/boost-build/tools /usr/local/share/boost-build/contrib /usr/local/share/boost-build/. for user-config configuration file user-config.jam .
notice: Loading user-config configuration file user-config.jam from /Users/jauneau/user-config.jam .
notice: OSX version on this machine is 10.7.5
notice: will use 'g++' for darwin, condition <toolset>darwin-4.2.1
notice: using strip for <toolset>darwin-4.2.1 at /usr/bin/strip
notice: using archiver for <toolset>darwin-4.2.1 at /usr/bin/libtool
notice: available sdk for <toolset>darwin-4.2.1/<macosx-version>10.6 at /Developer/SDKs/MacOSX10.6.sdk
notice: available sdk for <toolset>darwin-4.2.1/<macosx-version>10.7 at /Developer/SDKs/MacOSX10.7.sdk
notice: [python-cfg] Configuring python...
notice: [python-cfg] user-specified version: "2.7"
notice: [python-cfg] Checking interpreter command "python"...
notice: [python-cfg] running command '"python" -c "from sys import *; print('version=%d.%d\nplatform=%s\nprefix=%s\nexec_prefix=%s\nexecutable=%s' % (version_info[0],version_info[1],platform,prefix,exec_prefix,executable))" 2>&1'
notice: [python-cfg] ...requested configuration matched!
notice: [python-cfg] Details of this Python configuration:
notice: [python-cfg] interpreter command: "python"
notice: [python-cfg] include path: "/usr/local/bin/../Cellar/python/2.7.2/include/python2.7"
notice: [python-cfg] library path: "/usr/local/bin/../Cellar/python/2.7.2/lib/python2.7/config" "/usr/local/bin/../Cellar/python/2.7.2/lib"
notice: [python-cfg] no framework directory found; using library path
...found 13 targets...
...updating 2 targets...
darwin.compile.c++ bin/darwin-4.2.1/debug/hello.o
"g++" -ftemplate-depth-128 -O0 -fno-inline -Wall -g -dynamic -no-cpp-precomp -gdwarf-2 -fexceptions -fPIC -c -o "bin/darwin-4.2.1/debug/hello.o" "hello.cpp"
darwin.link bin/darwin-4.2.1/debug/hello
"g++" -o "bin/darwin-4.2.1/debug/hello" "bin/darwin-4.2.1/debug/hello.o" -g
...updated 2 targets...
But when I run b2 to compile my bindings he can't find python header files:
...found 13 targets...
...updating 2 targets...
darwin.compile.c++ bin/darwin-4.2.1/debug/hello.o
In file included from /usr/local/include/boost/python/detail/prefix.hpp:13,
from /usr/local/include/boost/python/class.hpp:8,
from hello.cpp:5:
/usr/local/include/boost/python/detail/wrap_python.hpp:50:23: error: pyconfig.h: No such file or directory
/usr/local/include/boost/python/detail/wrap_python.hpp:75:24: error: patchlevel.h: No such file or directory
/usr/local/include/boost/python/detail/wrap_python.hpp:78:2: error: #error Python 2.2 or higher is required for this version of Boost.Python.
/usr/local/include/boost/python/detail/wrap_python.hpp:142:21: error: Python.h: No such file or directory
In file included from /usr/local/include/boost/python/object/pointer_holder.hpp:14,
from /usr/local/include/boost/python/to_python_indirect.hpp:10,
from /usr/local/include/boost/python/converter/arg_to_python.hpp:10,
from /usr/local/include/boost/python/call.hpp:15,
from /usr/local/include/boost/python/object_core.hpp:14,
from /usr/local/include/boost/python/object/class.hpp:9,
from /usr/local/include/boost/python/class.hpp:13,
from hello.cpp:5:
/usr/local/include/boost/python/instance_holder.hpp:34: error: ‘PyObject’ has not been declared
/usr/local/include/boost/python/instance_holder.hpp:41: error: expected ‘;’ before ‘(’ token
/usr/local/include/boost/python/instance_holder.hpp:45: error: ‘PyObject’ has not been declared
In file included from /usr/local/include/boost/python/object/pointer_holder.hpp:21,
from /usr/local/include/boost/python/to_python_indirect.hpp:10,
from /usr/local/include/boost/python/converter/arg_to_python.hpp:10,
from /usr/local/include/boost/python/call.hpp:15,
from /usr/local/include/boost/python/object_core.hpp:14,
from /usr/local/include/boost/python/object/class.hpp:9,
from /usr/local/include/boost/python/class.hpp:13,
from hello.cpp:5:
...
I have installed both boost and boost-build using Homebrew.
Any ideas ?
Answer: It may be related to the fact that you are using a Homebrew Python while Boost itself is linked against the system Python. Another likely culprit is the `Jamroot`: `exe hello : hello.cpp ;` declares a plain executable, so Boost.Build never puts the Python include directories on the compile line (note that the `g++` invocation above has no `-I.../python2.7`), which is exactly why `pyconfig.h` and `Python.h` cannot be found. The Boost.Python tutorial Jamroot instead declares the target with `python-extension`, which pulls in the include and library paths from your `user-config.jam`.
|
Python Scrapy function to be called just before spider_closed signal sent?
Question: I wrote a spider using scrapy, one that makes a whole bunch of
HtmlXPathSelector Requests to separate sites. It creates a row of data in a
.csv file after each request is (asynchronously) satisfied. It's impossible to
see which request is satisfied last, because the request is repeated if no
data was extracted yet (occasionally it misses the data a few times). Even
though I start with a neat list, the output is jumbled because the rows are
written immediately after data is extracted.
Now I'd like to sort that list based on one column, but after every request is
done. **Can the 'spider_closed' signal be used to trigger a real function?**
As below, I tried connecting the signal with dispatcher, but this function
seems to only print out things, rather than work with variables or even call
other functions.
def start_requests(self)
... dispatcher.connect(self.spider_closed, signal=signals.engine_stopped) ....
def spider_closed(spider):
print 'this gets printed alright' # <-only if the next line is omitted...
out = self.AnotherFunction(in) # <-This doesn't seem to run
Answer: I hacked together a pipeline to solve this problem for you.
file: Project.middleware_module.SortedCSVPipeline
import csv
from scrapy import signals
class SortedCSVPipeline(object):
def __init__(self):
self.items = []
self.file_name = r'YOUR_FILE_PATH_HERE'
self.key = 'YOUR_KEY_HERE'
@classmethod
def from_crawler(cls, crawler):
pipeline = cls()
crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
return pipeline
def spider_closed(self, spider):
for item in sorted(self.items, key=lambda k: k[self.key]):
self.write_to_csv(item)
def process_item(self, item, spider):
self.items.append(item)
return item
def write_to_csv(self, item):
writer = csv.writer(open(self.file_name, 'a'), lineterminator='\n')
writer.writerow([item[key] for key in item.keys()])
file: settings.py
ITEM_PIPELINES = {"Project.middleware_module.SortedCSVPipeline.SortedCSVPipeline" : 1000}
When running this you won't need to use an item exporter anymore because this
pipeline will do the csv writing for you. Also, the 1000 in the pipeline entry
in your settings needs to be a higher value than that of any other pipeline you
want to run before this one. I tested this in my project and it resulted in a
csv file sorted by the column I specified! HTH
Cheers
|
linker command failed at fastmath installing pycrypto on OSX
Question: I did `pip install pycrypto` (actually wanted to install `fabric` but it
failed at pycrypto) and got the error below. I'm on python 2.7.3. Tried 3.3
too but same error. How do I fix this please?
My clang version:
$ clang --version
Apple clang version 4.1 (tags/Apple/clang-421.11.66) (based on LLVM 3.1svn)
Target: x86_64-apple-darwin11.4.2
Thread model: posix
Error:
...
creating build/lib.macosx-10.7-x86_64-2.7/Crypto/Signature
copying lib/Crypto/Signature/__init__.py -> build/lib.macosx-10.7-x86_64-2.7/Crypto/Signature
copying lib/Crypto/Signature/PKCS1_PSS.py -> build/lib.macosx-10.7-x86_64-2.7/Crypto/Signature
copying lib/Crypto/Signature/PKCS1_v1_5.py -> build/lib.macosx-10.7-x86_64-2.7/Crypto/Signature
running build_ext
running build_configure
building 'Crypto.PublicKey._fastmath' extension
creating build/temp.macosx-10.7-x86_64-2.7
creating build/temp.macosx-10.7-x86_64-2.7/src
cc -fno-strict-aliasing -fno-common -dynamic -I/usr/local/include -Wall -Wstrict-prototypes -std=c99 -O3 -fomit-frame-pointer -Isrc/ -I/usr/include/ -I/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c src/_fastmath.c -o build/temp.macosx-10.7-x86_64-2.7/src/_fastmath.o
src/_fastmath.c:1545:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
src/_fastmath.c:1621:20: warning: comparison of unsigned expression < 0 is always false [-Wtautological-compare]
else if (result < 0)
~~~~~~ ^ ~
2 warnings generated.
cc -bundle -undefined dynamic_lookup -L/usr/local/lib build/temp.macosx-10.7-x86_64-2.7/src/_fastmath.o -lgmp -o build/lib.macosx-10.7-x86_64-2.7/Crypto/PublicKey/_fastmath.so
ld: illegal text-relocation to ___gmp_binvert_limb_table in /usr/local/lib/libgmp.a(mp_minv_tab.o) from ___gmpn_divexact_1 in /usr/local/lib/libgmp.a(dive_1.o) for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command 'cc' failed with exit status 1
----------------------------------------
Command /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -c "import setuptools;__file__='/var/folders/_2/fwbd8jjn0mj_y_9w4f31g2nm0000gn/T/pip-build/pycrypto/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/_2/fwbd8jjn0mj_y_9w4f31g2nm0000gn/T/pip-riA3zG-record/install-record.txt --single-version-externally-managed failed with error code 1 in /var/folders/_2/fwbd8jjn0mj_y_9w4f31g2nm0000gn/T/pip-build/pycrypto
Exception information:
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg/pip/basecommand.py", line 107, in main
status = self.run(options, args)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg/pip/commands/install.py", line 261, in run
requirement_set.install(install_options, global_options)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg/pip/req.py", line 1166, in install
requirement.install(install_options, global_options)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg/pip/req.py", line 589, in install
cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg/pip/util.py", line 612, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python -c "import setuptools;__file__='/var/folders/_2/fwbd8jjn0mj_y_9w4f31g2nm0000gn/T/pip-build/pycrypto/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/_2/fwbd8jjn0mj_y_9w4f31g2nm0000gn/T/pip-riA3zG-record/install-record.txt --single-version-externally-managed failed with error code 1 in /var/folders/_2/fwbd8jjn0mj_y_9w4f31g2nm0000gn/T/pip-build/pycrypto
Answer: I was using an HPC version of gcc. So I removed that and reinstalled Command
Line Tools in Xcode. Then I was able to install pycrypto and fabric.
|
How to print module documentation in Python
Question: I know this question is very simple, I know it must have been asked a lot of
times and I did my search on both SO and Google but I could not find the
answer, probably due to my lack of ability of putting what I seek into a
proper sentence.
I want to be able to read the docs of what I import.
For example if I import x by "import x", I want to run this command, and have
its docs printed in Python or ipython.
What is this command-function?
Thank you.
PS. I don't mean dir(), I mean the function that will actually print the docs
for me to see and read what functionalities etc. this module x has.
Answer: `pydoc foo.bar` from the command line or `help(foo.bar)` or `help('foo.bar')`
from Python.
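For example, in an interactive session:

    >>> import math
    >>> help(math)          # full, paged documentation for the module
    >>> help(math.sqrt)     # documentation for a single function
    >>> print math.__doc__  # just the module's docstring

In IPython you can also type `math?` or `math.sqrt?` for the same information.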
|
FFTW3 on complex numpy array directly in scipy.weave.inline
Question: I am trying to implement an FFT based subpixel shifting (translation)
algorithm in `Python`. The Fourier shift theorem allows an array to be
translated by a subpixel amount by: 1\. Forward FFT array 2\. Multiply array
by linear phase ramp in Fourier space 3\. Inverse FFT array
This algorithm is easy to implement in python using numpy/scipy but its
incredibly slow (~10msec) per shift for 256**2 array. I am trying to speed
this up by calling c code directly from python using scipy.weave.inline.
I'm having trouble however in passing complex numpy arrays to FFTW. The c code
looks like:
#include <fftw3.h>
#include <stdlib.h>
#define INVERSE +1
#define FORWARD -1
fftw_complex *i, *o;
int n, m;
fftw_plan pf, pi;
#line 22 "test_scipy_weave.py"
i = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * xdim*ydim);
o = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * xdim*ydim);
pf = fftw_plan_dft_2d(xdim, ydim, i, o, -1, FFTW_PATIENT);
pi = fftw_plan_dft_2d(xdim, ydim, o, i, 1, FFTW_PATIENT);
/* Copy data to fftw_complex array. How to use python arrays directly? */
for (n=0; n<xdim;n++){
for (m=0; m<ydim; m++){
i[n*xdim+m][0]=a[n*xdim+m].real();
i[n*xdim+m][1]=a[n*xdim+m].imag();
}
}
fftw_execute(pf);
/* Mult by linear phase ramp here */
fftw_execute(pi);
for (n=0; n<xdim;n++){
for (m=0; m<ydim; m++){
b[n*xdim+m] = std::complex<double>(i[n*xdim+m][0], i[n*xdim+m][1]);
}
}
fftw_destroy_plan(p);
So you can see I have to copy the data stored in the numpy array "a" into the
fftw_complex array "i". And again at the end I have to copy the result "i"
into the output numpy array "b". It would be much more efficient to use the
numpy arrays "a" and "b" directly in the fftw but I cannot get this to work.
Does anyone have an idea on how to get fftw to use complex numpy arrays
directly in `scipy.weave.inline`?
Thanks
Answer: According to the [fftw manual](http://www.fftw.org/doc/Complex-numbers.html),
you can import `complex.h` before `fftw.h`, which will guarantee that
`fftw_complex` will correspond to the native C data type. I'm pretty sure that
numpy data types are also guaranteed to be (or in practice are likely to be)
compatible with native C data types.
In this case you can access a pointer to the array data as
`a.ctypes.data_as(ctypes.c_void_p)`. Unfortunately ctypes doesn't recognise
complex types, but hopefully casting to a void pointer will do the trick.
When doing this, you have to be careful that your array `a` is stored in
C-contiguous fashion, specified by the parameter `order='C'` when creating the
array.
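A sketch of the Python-side preparation (illustrative only; `a` is the array you pass into `weave.inline`):

    import ctypes
    import numpy as np

    # complex128 is two contiguous doubles per element, the same layout as fftw_complex
    a = np.ascontiguousarray(a, dtype=np.complex128)
    assert a.flags['C_CONTIGUOUS']
    ptr = a.ctypes.data_as(ctypes.c_void_p)  # raw pointer to the underlying buffer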
|
Cannot create browser process when using selenium from python on RHEL5
Question: I'm trying to use selenium from python but I'm having a problem running it on
a RHEL5.5 server. I don't seem to be able to really start firefox.
from selenium import webdriver
b = webdriver.Firefox()
On my laptop with ubuntu this works fine and it brings up a firefox
window. When I log in to the server with ssh I can run firefox from the
command line and get it displayed on my laptop. It is clearly firefox from the
server since it has the RHEL5.5 home page.
When I run the python script above on the server (or run it in ipython), the
script hangs at webdriver.Firefox()
I have also tried
from selenium import webdriver
fb = webdriver.FirefoxProfile()
fb.native_events_enabled=True
b=webdriver.Firefox(fb)
Which also hangs on the final line there.
I'm using python2.7 installed in /opt/python2.7. In installed selenium with
/opt/python2.7/pip-2.7.
I can see the firefox process on the server with top and it is using a lot of
CPU. I can also see from /proc/#/environ that the DISPLAY is set to
localhost:10.0 which seems right.
How can I get a browser started with selenium on RHEL5.5? How can I figure out
why Firefox is not starting?
Answer: It looks like the problem I'm encountering is this selenium bug:
<http://code.google.com/p/selenium/issues/detail?id=2852>
I used the fix described in comment #9
<http://code.google.com/p/selenium/issues/detail?id=2852#c9>
That worked for me.
|
Python embedded in c . Is calling PyRun_SimpleString synchronous?
Question: I would be grateful if you can help me. The application's purpose is
to translate the lemmas of the words in a sentence from Russian to English.
I'm doing it with the help of an sdict-formatted vocabulary, which is queried
by a Python script called from a C++ program.
My purpose is to get the following output :
> Выставка/exhibition::1 конгресс/congress::2 организаторами/organizer::3
> которой/ which::4 являются/appear::5 РАО/NONE::6 ЕЭС/NONE::7 России/NONE::8
> EESR/_NONE_ ::9 нефтяная/oil::10 компания/company::11 ЮКОС/NONE::12
> YUKOS/NONE::13 и/and::14 администрация/administration::15 Томской/NONE::16
> области/region::17 продлится/last::18 четыре/four::19 дня/day::20
The following code succeeds for the first sentence; from the second sentence
onwards, however, I get wrong output:
> Егор/_NONE_ ::1 Гайдар/NONE::2 возглавлял/NONE::3 первое/head::4
> российское/first::5 правительство/NONE::6 которое/government::7
> называли/which::8 правительством/call::9 камикадзе/government::10
**Note:** `NONE` is used for words lacking translation.
I'm running the following C++ code excerpt which actually calls
`PyRun_SimpleString`:
for (unsigned int i = 0; i < theSentenceRows->size(); i++){
stringstream ss;
ss << (i + 1);
parsedFormattedOutput << theSentenceRows->at(i)[FORMINDEX] << "/";
getline(lemmaOutFileForTranslation, lemma);
PyObject *main_module, *main_dict;
PyObject *toTranslate_obj, *translation, *emptyString;
/* Setup the __main__ module for us to use */
main_module = PyImport_ImportModule("__main__");
main_dict = PyModule_GetDict(main_module);
/* Inject a variable into __main__, in this case toTranslate */
toTranslate_obj = PyString_FromString(lemma.c_str());
PyDict_SetItemString(main_dict, "start_word", toTranslate_obj);
/* Run the code snippet above in the current environment */
PyRun_SimpleString(pycode);
usleep(2);
translation = PyDict_GetItemString(main_dict, "translation");
Py_XDECREF(toTranslate_obj);
/* writing results */
parsedFormattedOutput << PyString_AsString(translation) << "::" << ss.str() << " ";
Where pycode is defined as:
const char *pycode =
"import sys\n"
"import re\n"
"import sdictviewer.formats.dct.sdict as sdict\n"
"import sdictviewer.dictutil\n"
"dictionary = sdict.SDictionary( 'rus_eng_full2.dct' )\n"
"dictionary.load()\n"
"translation = \"*NONE*\"\n"
"p = re.compile('( )([a-z]+)(.*?)( )')\n"
"for item in dictionary.get_word_list_iter(start_word):\n"
" try:\n"
" if start_word == str(item):\n"
" instance, definition = item.read_articles()[0]\n"
" translation = p.findall(definition)[0][1]\n"
" except:\n"
" continue\n";
I noticed some delay in the second sentence's output, so I added the
usleep(2) call to the C++ code, thinking it happens because the call to
`PyRun_SimpleString` is not synchronous. It didn't help, however, and I'm not
sure that this is the reason. The lag affects every following sentence and
keeps growing.
So, is the call to `PyRun_SimpleString` synchronous? Or is the way I share
variable values between C++ and Python wrong? Thank you in advance.
Answer: According to [the
docs](http://docs.python.org/c-api/veryhigh.html#PyRun_SimpleString), it is
synchronous.
I would advise you to test the python code separately from the C++ code; that
would make debugging it much easier. One way of doing that is pasting the code
in the interactive interpreter and executing it line by line. And when
debugging, I would second Winston Ewert's comment to not discard exceptions.
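For example, when testing the snippet on its own, a small change to the loop makes failures visible instead of silently swallowed (a sketch, assuming the same sdictviewer objects as in the question):

    import traceback

    for item in dictionary.get_word_list_iter(start_word):
        try:
            if start_word == str(item):
                instance, definition = item.read_articles()[0]
                translation = p.findall(definition)[0][1]
        except Exception:
            traceback.print_exc()  # shows why a lookup failed instead of hiding it
            continue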
|
Find whether a numpy array is a subset of a larger array in Python
Question: I have 2 arrays, for the sake of simplicity let's say the original one is a
random set of numbers:
import numpy as np
a=np.random.rand(N)
Then I sample and shuffle a subset from this array:
b=np.array() <------size<N
The shuffling I do does not store the index values, so b is an unordered
subset of a.
Is there an easy way to get the original indexes of b's elements in a? Say, if
element 2 of b sits at index 4 in a, I want to build an array of those index
assignments.
I could use a for loop checking element by element, but perhaps there is a
more pythonic way.
Thanks
Answer: I think the most computationally efficient thing to do is to keep track of the
indices that associate `b` with `a` as `b` is created.
For example, instead of sampling `a`, sample the indices of `a`:
import random
indices = random.sample(range(len(a)), k)  # k < N
b = a[indices]
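If `b` has already been built without recording indices, and the values of `a` are unique (effectively guaranteed for `np.random.rand`), a sketch that avoids an explicit Python loop:

    import numpy as np

    order = np.argsort(a)               # permutation that sorts a
    pos = np.searchsorted(a[order], b)  # where each value of b sits in the sorted a
    indices = order[pos]                # original positions of b's elements in a
    assert np.allclose(a[indices], b)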
|
Using Flask-SQLAlchemy in Blueprint models without reference to the app
Question: I'm trying to create a "modular application" in Flask using Blueprints.
When creating models, however, I'm running into the problem of having to
reference the app in order to get the `db`-object provided by Flask-
SQLAlchemy. I'd like to be able to use some blueprints with more than one app
(similar to how Django apps can be used), so this is not a good solution.*
* It's possible to do a switcharoo, and have the Blueprint create the `db` instance, which the app then imports together with the rest of the blueprint. But then, any other blueprint wishing to create models need to import from **that** blueprint instead of the app.
My questions are thus:
* Is there a way to let Blueprints define models without any awareness of the app they're being used in later -- and have several Blueprints come together? By this, I mean having to import the app module/package from your Blueprint.
* Am I wrong from the outset? Are Blueprints not meant to be independent of the app and be redistributable (à la Django apps)?
* If not, then what pattern _should_ you use to create something like that? Flask extensions? Should you simply not do it -- and maybe centralize all models/schemas à la Ruby on Rails?
> _Edit_ : I've been thinking about this myself now, and this might be more
> related to SQLAlchemy than Flask because you have to have the
> `declarative_base()` when declaring models. And _that's_ got to come from
> somewhere, anyway!
>
> Perhaps the best solution is to have your project's schema defined in one
> place and spread it around, like Ruby on Rails does. Declarative SQLAlchemy
> class definitions are really more like schema.rb than Django's models.py. I
> imagine this would also make it easier to use migrations (from
> [alembic](http://pypi.python.org/pypi/alembic/) or [sqlalchemy-
> migrate](http://code.google.com/p/sqlalchemy-migrate/)).
* * *
I was asked to provide an example, so let's do something simple: Say I have a
blueprint describing "flatpages" -- simple, "static" content stored in the
database. It uses a table with just shortname (for URLs), a title and a body.
This is `simple_pages/__init__.py`:
from flask import Blueprint, render_template
from .models import Page
flat_pages = Blueprint('flat_pages', __name__, template_folder='templates')
@flat_pages.route('/<page>')
def show(page):
page_object = Page.query.filter_by(name=page).first()
return render_template('pages/{}.html'.format(page), page=page_object)
Then, it would be nice to let this blueprint define its own model (this in
`simple_page/models.py`):
# TODO Somehow get ahold of a `db` instance without referencing the app
# I might get used in!
class Page(db.Model):
name = db.Column(db.String(255), primary_key=True)
title = db.Column(db.String(255))
content = db.Column(db.String(255))
def __init__(self, name, title, content):
self.name = name
self.title = title
self.content = content
* * *
This question is related to:
* [Flask-SQLAlchemy import/context issue](http://stackoverflow.com/questions/9692962/flask-sqlalchemy-import-context-issue)
* [What's your folder layout for a Flask app divided in modules?](http://stackoverflow.com/questions/6089020/whats-your-folder-layout-for-a-flask-app-divided-in-modules?rq=1)
And various others, but all replies seem to rely on import the app's `db`
instance, or doing the reverse. The ["Large app how
to"](https://github.com/mitsuhiko/flask/wiki/Large-app-how-to) wiki page also
uses the "import your app in your blueprint" pattern.
* Since the official documentation shows how to create routes, views, templates and assets in a Blueprint without caring about what app it's "in", I've assumed that Blueprints should, in general, be reusable across apps. However, this modularity doesn't seem _that_ useful without also having independent models.
Since Blueprints can be hooked into an app more than once, it might simply be
the wrong approach to have models in Blueprints?
Answer: I believe the truest answer is that modular blueprints shouldn't concern
themselves directly with data access, but instead rely on the application
providing a compatible implementation.
So given your example blueprint.
from flask import current_app, Blueprint, render_template
flat_pages = Blueprint('flat_pages', __name__, template_folder='templates')
@flat_pages.record
def record(state):
db = state.app.config.get("flat_pages.db")
if db is None:
raise Exception("This blueprint expects you to provide "
"database access through flat_pages.db")
@flat_pages.route('/<page>')
def show(page):
db = current_app.config["flat_pages.db"]
page_object = db.find_page_by_name(page)
return render_template('pages/{}.html'.format(page), page=page_object)
From this, there is nothing preventing you from providing a default
implementation.
def setup_default_flat_pages_db(db):
class Page(db.Model):
name = db.Column(db.String(255), primary_key=True)
title = db.Column(db.String(255))
content = db.Column(db.String(255))
def __init__(self, name, title, content):
self.name = name
self.title = title
self.content = content
class FlatPagesDBO(object):
def find_page_by_name(self, name):
return Page.query.filter_by(name=name).first()
return FlatPagesDBO()
And in your configuration.
app.config["flat_pages.db"] = setup_default_flat_pages_db(db)
The above could be made cleaner by not relying on direct inheritance from
db.Model and instead using a vanilla declarative_base from sqlalchemy, but
this should represent the gist of it; a sketch of that variant follows.
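A rough sketch of that cleaner variant (names are illustrative; the app creates the engine/session and hands a session factory to the blueprint):

    from sqlalchemy import Column, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Page(Base):
        __tablename__ = 'flat_pages'

        name = Column(String(255), primary_key=True)
        title = Column(String(255))
        content = Column(String(255))

    class FlatPagesDBO(object):
        def __init__(self, session_factory):
            self.session_factory = session_factory

        def find_page_by_name(self, name):
            return self.session_factory().query(Page).filter_by(name=name).first()

The application then calls `Base.metadata.create_all(engine)` (or runs its migrations), builds the DBO with its own session factory, and stores it under `app.config["flat_pages.db"]` exactly as before.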
|
can you print a file from python?
Question: Is there some way of sending output to the printer instead of the screen in
Python? Or is there a service routine that can be called from within python to
print a file? Maybe there is a module I can import that allows me to do this?
Answer: Most platforms—including Windows—have special file objects that represent the
printer, and let you print text by just writing that text to the file.
On Windows, the special file objects have names like `LPT1:`, `LPT2:`,
`COM1:`, etc. You will need to know which one your printer is connected to (or
ask the user in some way).
It's possible that your printer is not connected to any such special file, in
which case you'll need to fire up the Control Panel and configure it properly.
(For remote printers, this may even require setting up a "virtual port".)
At any rate, writing to `LPT1:` or `COM1:` is exactly the same as writing to
any other file. For example:
with open('LPT1:', 'w') as lpt:
lpt.write(mytext)
Or:
lpt = open('LPT1:', 'w')
print >>lpt, mytext
print >>lpt, moretext
lpt.close()
And so on.
If you've already got the text to print in a file, you can print it like this:
with open(path, 'r') as f, open('LPT1:', 'w') as lpt:
while True:
buf = f.read()
if not buf: break
lpt.write(buf)
Or, more simply (untested, because I don't have a Windows box here), this
should work:
import shutil
with open(path, 'r') as f, open('LPT1:', 'w') as lpt:
shutil.copyfileobj(f, lpt)
It's possible that just `shutil.copyfile(path, 'LPT1:')` would work, but the
documentation says "Special files such as character or block devices and pipes
cannot be copied with this function", so I think it's safer to use
`copyfileobj`.
|
calling python module dynamically using exec
Question: I have 2 modules
mexec1.py
def exec1func():
print 'exec1'
exec 'c:/python27/exec2.py'
if __name__ == '__main__':
exec1func()
exec2.py
def exec2func(parm=''):
print 'exec2 parm',parm
if __name__ == '__main__':
exec2func(parm='')
From exec1.py I want to call exec2func of the exec2.py using only exec or
execfile...I don't want subprocess.Popen..
Answer: Use `import` instead:
def exec1func():
from exec2 import exec2func
exec2func()
If you want to import using the full path, use `imp.load_source`:
import imp
def exec1func():
exec2 = imp.load_source('exec2', 'c:/python27/exec2.py')
exec2.exec2func()
|
In Python, how can I increment a count conditionallly?
Question: Say I have the list:
list = [a,a,b,b,b]
I'm looping over the list. The variable "count" increments by 1 when the
previous letter is the same as the current letter. Below is only part of the
code:
for item in list:
if item == previous:
count +=1
return count
The example above returns 3: 1 for the repeated a and 2 for the b's. What can I
use to make the count only increase once for each run, for a total of 2? I tried
using a variable "found" that returns True or False depending on whether the
letter has been seen before, but this of course doesn't work for something
like [a,a,a,c,a,a,a], which returns 1 for the first run of "a"s and not 2, as
I want.
Edit: I'm probably making this harder than it needs to be. All I want is for
the count to be incremented by one each time a string repeats contiguously.
[a,b,b,c,a,a,a,a,c,c,c] should return 3. [a,a,a,a,a,a,a,a] should return 1.
Answer: Wild guess: since you want `a,a,b,b,b` to be 2 and not 3, and you also want
`a,a,a,c,a,a,a` to give two, I think you're trying to count distinct
contiguous groups of equal elements of length >= 2. If so, you can use
`itertools.groupby`:
>>> import itertools
>>> seq1 = ['a','a','b','b','b']
>>> [(k, list(g)) for k,g in itertools.groupby(seq1)]
[('a', ['a', 'a']), ('b', ['b', 'b', 'b'])]
>>> seq2 = ['a','a','a','c','a','a','a']
>>> [(k, list(g)) for k,g in itertools.groupby(seq2)]
[('a', ['a', 'a', 'a']), ('c', ['c']), ('a', ['a', 'a', 'a'])]
and thus
>>> sum(len(list(g)) >= 2 for k,g in itertools.groupby(seq1))
2
>>> sum(len(list(g)) >= 2 for k,g in itertools.groupby(seq2))
2
but this is just a guess. It's the only thing I can think of which matches the
only two data points you've given, at least assuming that I'm interpreting "1
for the first run of "a"s and not 2, as I want" correctly. That leaves it
unclear whether you want the total to be 2 or the contribution from the first
run of "a" to be 2.
|
convert a dict to sorted dict in python
Question: I want to convert a dict into sorted dict in python
data = pandas.read_csv('D:\myfile.csv')
for colname, dtype in data.dtypes.to_dict().iteritems():
if dtype == 'object':
print colname
count = data[colname].value_counts()
d = dict((str(k), int(v)) for k, v in count.iteritems())
f = dict(sorted(d.iteritems(), key=lambda item: item[1], reverse = True)[:5])
print f
m ={}
m["count"]= int(sum(count))
m["Top 5"]= f
print m
k = json.dumps(m)
print k
f = {'Gears of war 3': 6, 'Batman': 5, 'gears of war 3': 4, 'Rocksmith': 5, 'Madden': 3}
My desired Output is :
f = {'Gears of war 3': 6, 'Batman': 5, 'Rocksmith': 5, 'gears of war 3': 4, 'Madden': 3}
k = {'count':24, 'top 5':{'Gears of war 3': 6, 'Batman': 5, 'Rocksmith': 5, 'gears of war 3': 4, 'Madden': 3}}
(in the descending order of values and the result should be a dict)
Answer: You cannot sort a `dict` because dictionary has no ordering.
Instead, use
[`collections.OrderedDict`](http://docs.python.org/library/collections.html#collections.OrderedDict):
>>> from collections import OrderedDict
>>> d = {'Gears of war 3': 6, 'Batman': 5, 'gears of war 3': 4, 'Rocksmith': 5, 'Madden': 3}
>>> od = OrderedDict(sorted(d.items(), key=lambda x:x[1], reverse=True))
>>> od
OrderedDict([('Gears of war 3', 6), ('Batman', 5), ('Rocksmith', 5), ('gears of war 3', 4), ('Madden', 3)])
>>> od.keys()
['Gears of war 3', 'Batman', 'Rocksmith', 'gears of war 3', 'Madden']
>>> od.values()
[6, 5, 5, 4, 3]
>>> od['Batman']
5
* * *
The "order" you see in an JSON object is not meaningful, as JSON object is
unordered[[RFC4267](http://www.ietf.org/rfc/rfc4267.txt)].
If you want meaningful ordering in your JSON, you need to use a list (that's
sorted the way you wanted). Something like this is what you'd want:
{
"count": 24,
"top 5": [
{"Gears of war 3": 6},
{"Batman": 5},
{"Rocksmith": 5},
{"gears of war 3": 4},
{"Madden": 3}
]
}
Given the same dict `d`, you can generate a sorted list (which is what you
want) by:
>>> l = sorted(d.items(), key=lambda x:x[1], reverse=True)
>>> l
[('Gears of war 3', 6), ('Batman', 5), ('Rocksmith', 5), ('gears of war 3', 4), ('Madden', 3)]
Now you just pass `l` to `m['Top 5']` and dump it:
m["Top 5"]= l
k = json.dumps(m)
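Putting it together (note that the tuples in `l` serialize as JSON arrays, which is exactly what keeps the ordering meaningful):

    import json

    m = {"count": 24, "Top 5": l}
    k = json.dumps(m)
    # e.g. '{"count": 24, "Top 5": [["Gears of war 3", 6], ["Batman", 5], ...]}'
    # (the outer object's key order is not guaranteed for a plain dict)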
|
when does import module at bottom of file
Question: While reading the file `riak-python-client/riak/riak_object.py`, I saw this at
the bottom:
from mapreduce import *
What is it used for? Why not just import at the top of the file?
Answer: This is designed to put all of the module `mapreduce` in the `riak_object`
namespace. If you put this import at the top of `riak_object.py`, then there
would be an error because `mapreduce` imports `RiakObject` from `riak_object`,
which is not defined yet.
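A tiny illustration of the circular-import problem (hypothetical, stripped-down module names and classes):

    # riak_object.py
    class RiakObject(object):
        pass

    from mapreduce import *  # fine here: RiakObject already exists when mapreduce imports it

    # mapreduce.py
    from riak_object import RiakObject  # fails if riak_object hasn't defined RiakObject yet

    class RiakMapReduce(object):
        def __init__(self, obj):
            self.obj = obj

Moving the `from mapreduce import *` line to the top of `riak_object.py` would make `mapreduce`'s own import run while `RiakObject` is still undefined, which is exactly the error the bottom placement avoids.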
|
implementation of command line arg passing in python, doesn't work
Question: I'm very new to python. I want to implement command-line argument passing in
python as my first python script. I have written this code:
def main(argv):
try:
opts, args = getopt.getopt(argv, "hb:b:f", ["help", "bucket=", "folder"])
except getopt.GetoptError:
usage()
sys.exit(2)
s3 = get_files()
for opt, arg in opts:
if opt in ("-h", "--help"):
usage()
sys.exit()
elif opt in ("-b", "--bucket"):
s3.bucketname = arg
elif opt in ("-f", "--folder"):
s3.foldername = arg
print("Came here" + s3.foldername)
s3.download_files()
The `-f` command-line argument is where I'm facing the problem. For
debugging I used the `print` statement. I pass the arguments like this:
./hello -b something -f /path
but sadly it prints out:
Came here
But the `foldername` attribute is not getting assigned any value. Where am I
making the mistake?
Thanks in advance.
Answer: Argparse really is a much simpler way of solving problems like this
import argparse
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-b', '--bucket', dest='bucketname', action='store')
parser.add_argument('-f', '--folder', dest='folder', action='store')
args = parser.parse_args('-b foo'.split())
print args
args = parser.parse_args('-b foo -f /foo'.split())
print args
Gives:
> Namespace(bucketname='foo', folder=None)
> Namespace(bucketname='foo', folder='/foo')
Which means you could rewrite your function as:
import sys
import argparse
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-b', '--bucket', dest='bucketname', action='store')
parser.add_argument('-f', '--folder', dest='folder', action='store')
args = parser.parse_args()
s3 = get_files()
s3.foldername = args.folder
s3.download_files()
Without any further work, the output of `script.py --help` is:
usage: script.py [-h] [-b BUCKETNAME] [-f FOLDER]
optional arguments:
-h, --help show this help message and exit
-b BUCKETNAME, --bucket BUCKETNAME
-f FOLDER, --folder FOLDER
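For reference, the `getopt` spec in the question is the immediate culprit: in the short-option string a trailing `:` means "takes an argument", and in the long-option list a trailing `=` does the same, so `"hb:b:f"` with `["help", "bucket=", "folder"]` declares `-f`/`--folder` as flag-style options and `arg` stays empty. A corrected call would look like:

    opts, args = getopt.getopt(argv, "hb:f:", ["help", "bucket=", "folder="])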
|
Parallelise python loop with numpy arrays and shared-memory
Question: I am aware of several questions and answers on this topic, but haven't found a
satisfactory answer to this particular problem:
What is the easiest way to do a simple shared-memory parallelisation of a
python loop where numpy arrays are manipulated through numpy/scipy functions?
I am not looking for the most efficient way, I just wanted something simple to
implement that doesn't require a significant rewrite when the loop is not run
in parallel. Just like OpenMP implements in lower level languages.
The best answer I've seen in this regard is [this
one](http://stackoverflow.com/a/5550156/1580351), but this is a rather clunky
way that requires one to express the loop into a function that takes a single
argument, several lines of shared-array converting crud, seems to require that
the parallel function is called from `__main__`, and it doesn't seem to work
well from the interactive prompt (where I spend a lot of my time).
With all of Python's simplicity is this really the best way to parellelise a
loop? Really? This is something trivial to parallelise in OpenMP fashion.
I have painstakingly read through the opaque documentation of the
multiprocessing module, only to find out that it is so general that it seems
suited to everything but a simple loop parallelisation. I am not interested in
setting up Managers, Proxies, Pipes, etc. I just have a simple loop, fully
parallel that doesn't have any communication between tasks. Using MPI to
parallelise such a simple situation seems like overkill, not to mention it
would be memory-inefficient in this case.
I haven't had time to learn about the multitude of different shared-memory
parallel packages for Python, but was wondering if someone has more experience
in this and can show me a simpler way. Please do not suggest serial
optimisation techniques such as Cython (I already use it), or using parallel
numpy/scipy functions such as BLAS (my case is more general, and more
parallel).
Answer: With Cython parallel support:
# asd.pyx
from cython.parallel cimport prange
import numpy as np
def foo():
cdef int i, j, n
x = np.zeros((200, 2000), float)
n = x.shape[0]
for i in prange(n, nogil=True):
with gil:
for j in range(100):
x[i,:] = np.cos(x[i,:])
return x
On a 2-core machine:
$ cython asd.pyx
$ gcc -fPIC -fopenmp -shared -o asd.so asd.c -I/usr/include/python2.7
$ export OMP_NUM_THREADS=1
$ time python -c 'import asd; asd.foo()'
real 0m1.548s
user 0m1.442s
sys 0m0.061s
$ export OMP_NUM_THREADS=2
$ time python -c 'import asd; asd.foo()'
real 0m0.602s
user 0m0.826s
sys 0m0.075s
This runs fine in parallel, since `np.cos` (like other ufuncs) releases the
GIL.
If you want to use this interactively:
# asd.pyxbdl
def make_ext(modname, pyxfilename):
from distutils.extension import Extension
return Extension(name=modname,
sources=[pyxfilename],
extra_link_args=['-fopenmp'],
extra_compile_args=['-fopenmp'])
and (remove `asd.so` and `asd.c` first):
>>> import pyximport
>>> pyximport.install(reload_support=True)
>>> import asd
>>> q1 = asd.foo()
# Go to an editor and change asd.pyx
>>> reload(asd)
>>> q2 = asd.foo()
So yes, in some cases you can parallelize just by using threads. OpenMP is
just a fancy wrapper for threading, and Cython is therefore only needed here
for the easier syntax. Without Cython, you can use the `threading` module ---
works similarly as multiprocessing (and probably more robustly), but you don't
need to do anything special to declare arrays as shared memory.
However, not all operations release the GIL, so YMMV for the performance.
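For reference, a plain `threading` sketch of the same idea (no Cython at all); it only helps because `np.cos` releases the GIL while it works on each slab:

    import threading
    import numpy as np

    def worker(x, rows):
        for i in rows:
            for _ in range(100):
                x[i, :] = np.cos(x[i, :])

    x = np.zeros((200, 2000), float)
    nthreads = 2
    chunks = np.array_split(range(x.shape[0]), nthreads)
    threads = [threading.Thread(target=worker, args=(x, rows)) for rows in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()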
* * *
And another possibly useful link scraped from other Stackoverflow answers ---
another interface to multiprocessing:
<http://packages.python.org/joblib/parallel.html>
|
how to use webkit code in scrapy/python
Question: In the question at [How to combine scrapy and htmlunit to crawl urls with
javascript](http://stackoverflow.com/questions/8047666/how-to-combine-scrapy-
and-htmlunit-to-crawl-urls-with-javascript), it is advised to use webkit with
scrapy to go through javascript. However, this [example Snipplr
snippet](http://snipplr.com/view/66996/renderedinteractive-javascript-with-
gtkwebkitjswebkit/) says:
import gtk
import webkit
import jswebkit
I am new to Python. From what I understand, webkit (written in C/C++) needs to
have Python wrappers. I looked around but could not find anything like that.
Any advice appreciated.
Answer: This is the simplest example; you can start here:
import gtk
import webkit
view = webkit.WebView()
sw = gtk.ScrolledWindow()
sw.add(view)
win = gtk.Window(gtk.WINDOW_TOPLEVEL)
win.add(sw)
win.show_all()
view.open("http://w3.org/")
gtk.main()
You can learn more from here <http://code.google.com/p/pywebkitgtk/> Also have
a look here <http://dvlabs.tippingpoint.com/blog/2011/11/28/malicious-content-
harvesting>
|
Syntax Error in Dice game
Question: I made this dice game in python, but am getting a syntax error with my
inputdice function. Below is the dice game in its entirety. When run, the game
should go through 10 rounds and stop after round 10 or when the user runs out
of money. Any suggestions?
from random import *
def dice1():
print("+-----+")
print("| |")
print("| * |")
print("| |")
print("+-----+")
def dice2():
print("+-----+")
print("|* |")
print("| |")
print("| *|")
print("+-----+")
def dice3():
print("+-----+")
print("|* |")
print("| * |")
print("| *|")
print("+-----+")
def dice4():
print("+-----+")
print("| * * |")
print("| |")
print("| * * |")
print("+-----+")
def dice5():
print("+-----+")
print("|* *|")
print("| * |")
print("|* *|")
print("+-----+")
def dice6():
print("+-----+")
print("|* *|")
print("|* *|")
print("|* *|")
print("+-----+")
def drawdice(d):
if d==1:
dice1()
elif d==2:
dice2()
elif d==3:
dice3()
elif d==4:
dice4()
elif d==5:
dice5()
elif d==6:
dice6()
print()
def inputdie():
dice=input(eval("Enter the number you want to bet on --> "))
while dice<1 or dice>6:
print("Sorry, that is not a good number.")
dice=input(eval("Try again. Enter the number you want to bet on --> "))
return dice
def inputbet(s):
bet=input(eval("What is your bet?"))
while bet>s or bet<=0:
if bet>s:
print("Sorry, you can't bet more than you have")
bet=input(eval("What is your bet?"))
elif bet<=0:
print("Sorry, you can't bet 0 or less than 0")
bet=input(eval("What is your bet?"))
return bet
def countmatches(numbet,r1,r2,r3):
n=0
if numbet==r1:
n+=1
if numbet==r2:
n+=1
if number==r3:
n+=1
return n
def payoff(c,betam):
payoff=0
if c==1:
print("a match")
payoff=betam
elif c==2:
print("a double match!")
payoff=betam*5
elif c==3:
print("a triple match!")
payoff=betam*10
else:
payoff=betam*(-1)
return payoff
def main():
dollars=1000
rounds=1
roll=0
single=0
double=0
triple=0
misses=0
flag=True
print("Play the game of Three Dice!!")
print("You have", dollars, "dollars to bet with.")
while dollars>0 and rounds<11 and flag==True:
print("Round", rounds)
dicebet=inputdie()
stake=inputbet(dollars)
for roll in randrange(1,7):
roll1=roll
for roll in randrange(1,7):
roll2=roll
for roll in randrange(1,7):
roll3=roll
drawdice(roll1)
drawdice(roll2)
drawdice(roll3)
matches=countmatches(dicebet,roll1,roll2,roll3)
dollarswon=payoff(matches,stake)
if matches==1:
single+=1
elif matches==2:
double+=1
elif matches==3:
triple+=1
elif matches==0:
misses+=1
if dollarswon>0:
print("You got a match!")
print("You won $", dollarswon, sep='')
dollars=dollars+dollarswon
print("Your stake is $", dollars, sep='')
else:
print("You lost your bet! $", stake, sep='')
dollars=dollarswon+dollars
rounds+=1
if rounds==10:
print("*******Singles", single, "Doubles", double, "Triples", triple, "Misses", misses)
answer=input("Want to play some more? (y or n)")
if answer=="y":
main()
else:
print("Have a good day")
main()
Any help is appreciated. Thanks!
Answer: The proximate error is that `eval()` expects an expression that is valid
python syntax;
`"Enter the number you want to bet on -->"` or any of the other strings in
this program are not valid python expressions, hence the syntax error produced
at run time.
The broader problem with the program, is that `eval()` is not necessary and
should be avoided.
A rule of thumb, particularly for beginners, is that **"eval() is evil" and
should "never" be used.**
Note that "_never_ " is in quotes, to hint at the fact that there are indeed a
[very] few use cases where eval() can be very useful.
The reason why `eval()` is such a "dangerous ally" is that it introduces
[typically user-provided] arbitrary python expressions at run-time, and
there's a good chance that such expression could have an invalid syntax (no
big deal) or worse, could include rather harmful or possibly even malicious
code, which when invoked would perform all sorts of bad things on the host...
This said, **you do not need eval() at all to process the input obtained from
the input() method.**
I think that you may have meant to use patterns like:
`myVar = eval(input("Enter some value for myVar variable"))`
(i.e. with the eval and input in the reverse order)
Actually this would still not work for eval() requires a _string_ argument,
and hence you would have needed
`myVar = eval(str(input("Enter some value for myVar variable")))`
but as said eval() is not warranted here.
Another guess is that you used `eval()` because you expected the return from
input() to be of type string, and that eval() would turn this into a integer
for use with the program logic...
`raw_input()` is the method returning a string, and it is plausibly the one
that you should use to avoid getting run-time errors when the user types in
text without quotes and other invalid values. A common idiom to get the user
to input integer values, is something like
int_in = None
while int_in is None:
str_in = raw_input('some text telling which data is expected')
try:
int_in = int(str_in)
except ValueError:
# optional output of some message to user
int_in = None
Typically we put this kind of logic in a method for easy reuse.
Hope this helps. You seem to be doing practical things with Python: no better
way to learn than to code - along with the occasional review of the
documentation and reading of a related book. A plug for a good book: [Python
Cookbook by Alex Martelli](http://books.google.com/books?isbn=0596007973)
|
Call current PyMOL session from python script
Question: I'm trying to connect to the current PyMOL session from a Python script
(a wxPython GUI), then load some data from PyMOL and send a few commands to it.
At the moment I can open a new PyMOL session from the Python script:
import sys, os
from wx import *
app = wx.App()
dialog = wx.FileDialog ( None, message = 'Set PyMOL directiry', style = wx.OPEN)
if dialog.ShowModal() == wx.ID_OK:
# Pymol path
moddir = dialog.GetDirectory()
sys.path.insert(0, moddir)
os.environ['PYMOL_PATH'] = moddir
import pymol
pymol.finish_launching()
else:
print 'Nothing was selected.'
from pymol import *
dialog1 = wx.FileDialog ( None, message = 'Set PDB file', style = wx.OPEN)
if dialog1.ShowModal() == wx.ID_OK:
pdbfile = dialog1.GetPath()
cmd.load(pdbfile)
else:
print 'Nothing was selected.'
dialog.Destroy()
app.MainLoop()
BUT actually I'd like to check in my python script whether any PyMOL session is
already opened. I found discussion corresponding to this topic here: [Only
call function if PyMOL
running](http://stackoverflow.com/questions/11858681/only-call-function-if-
pymol-running) Following this discussion I tried to call 'print' function in
PyMOL:
try:
from pymol import cmd
print 'Connected'
except:
<open new Pymol sesssion>
but I do not see any text in PyMOL cmd. I tried to determine PyMOL path before
this statement and again I failed.
Does anyone know how to call current PyMOL session from python script?
Answer: To my knowledge, there is no way to interact with an existing PyMOL session
from the Python interpreter. There are a few alternatives:
1. The existing PyMOL command prompt accepts valid Python and you can [run any script](http://www.pymolwiki.org/index.php/Running_Scripts) directly from the prompt. Perhaps you could execute your script from there instead?
2. You can start up a new PyMOL session as part of your script. PyMOL's `-i` flag may be especially helpful- it enables you to work in a headless environment:
-i Disable the internal OpenGL GUI (object list, menus, etc.)
3. Take a careful look at PyMOL's `__init__.py`. You'll find the intricacies of PyMOL's threading there. Perhaps you can find something useful to manipulate?
### Side note:
Calling `print` from your script will _not_ make the text appear in the PyMOL
session, it will merely write the text to standard output (i.e. it will be
printed to your terminal).
* * *
The following PyMOL wiki pages may be of help to you:
* [Running scripts](http://www.pymolwiki.org/index.php/Running_Scripts)
* [Launching PyMOL](http://www.pymolwiki.org/index.php/Category%3aLaunching)
* [Launching from a script](http://www.pymolwiki.org/index.php/Launching_From_a_Script)
|
How to stop the program when the output fills the screen in python?
Question: How can I get the status of the terminal from a Python program? I want the
program to stop printing lines to the screen when the screen is full and wait
for user input.
Answer: The simplest (no code) way to accomplish that is to pipe your program's output
through a pager program like `less` or `more` (assuming *nix), like:
### infinity.py
import random
while 1:
print random.randint(0, 0xffffffff)
### command line
python infinty.py | less
...gives output like:
848605718
899092472
2576425641
3098821373
259379057
164782822
416064876
2488878735
1216764788
2682214542
531576871
2175787865
869960770
:
...and waits for user input.
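If you would rather drive it from inside the program, you can also spawn the pager yourself and write to its stdin (a sketch, again for *nix):

    import random
    import subprocess

    pager = subprocess.Popen(['less'], stdin=subprocess.PIPE)
    try:
        while 1:
            pager.stdin.write('%d\n' % random.randint(0, 0xffffffff))
    except IOError:
        pass  # the user quit the pager
    finally:
        pager.stdin.close()
        pager.wait()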
|
python struct pack with space padding
Question: I need to create/send binary data in python using a given protocol. The
protocol calls for fixed width fields , with space padding thrown in. Using
python's struct.pack, the only thing I can think of is, calculating the space
padding and adding it in myself. Is there a better way to achieve this?
thanks
Answer: struct has a [placeholder](http://docs.python.org/library/struct.html#format-
strings) (`x`) for a padding byte you can use:
# pack 2 16 bit values plus one pad byte
from struct import pack
packedStrWithOneBytePad = pack("hhx", 1000, 2000)
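Note that `x` pads with NUL bytes, not spaces. If the protocol really wants space-padded fixed-width text fields, one sketch is to pad the string yourself and pack it with an `s` format of the full field width:

    from struct import pack

    name = 'ABC'
    # 16-bit int, 10-byte space-padded text field, 16-bit int
    packed = pack('>h10sh', 1000, name.ljust(10), 2000)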
|
reading column from excel and doing specific math
Question: I have an excel file that has 27 columns. I want to write python code which
reads column by column and stores the final 5 values in each column, which
will then have math done on them.
I have this so far:
from math import tan
#Write Header
#outFile.write('Test Name,X+ avg,X+ std,X+ count,X- avg,X- std,X- count,X angle,Y+ avg,Y+ std,Y+ count,Y- avg,Y- std,Y- count,Y angle\n')
#for line in inFile:
if 1==1:
line = [1,2,38.702,37.867,35.821, 44, 49,55,65,20,25,28,89.]
line0= len(line)
print "the list size"
print line0
line1 = len(line) -5 #takes the overall line and subtracts 5.
print "the is the start of the 5 #'s we need"
print line1 #prints the line
d= line[line1:line0] #pops all the values from in the line (..)
print "these are the 5 #'s we need"
print d
lo=min(d)
hi=max(d)
print "the lowest value is"
print lo
print "the highest value is"
print hi
average1 = float (sum(d))/ len(d) #sum
print "the average of the values is"
print average1
the "line = [1,2,38.702,37.867,35.821, 44, 49,55,65,20,25,28,89.]" part I want
python to automatically read in the coloumn, store the 5 last vales and do the
above math analysis.
Answer: You'll probably want to use a library like
[openpyxl](http://packages.python.org/openpyxl/), here's an example from the
docs on how to access certain cells.
columnOffset = 2 # column offset is (the letter as a number - 1), a = (1-1), b = (2-1)
rowOffset = 35 # row offset is row number - 1
wb = load_workbook('test.xlsx')
ws = wb.get_active_sheet()
for column in ws.columns[columnOffset:27]: #because you have 27 columns to parse through
for rowIndex, cell in enumerate(column[-5:]):
if rowIndex >= rowOffset:
print cell.value
Unfortunately openpyxl doesn't have really good support for columns so you'll
have to use indexes to get your columns.
**Based on xlrd**
import xlrd
wb = xlrd.open_workbook(r"C:\Users\Asus\Desktop\herp.xls")
sh = wb.sheet_by_name(u'A Snazzy Title')
for index in xrange(2, 4):
lastFive = sh.col_values(index, start_rowx=59, end_rowx=64)
print lastFive
It looks like xlrd treats the entire spreadsheet like a 2D list, so remember
to start your indexes at 0. Yes xlrd is based entirely in python.
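From there, feeding each column's last five values into the stats from the question is straightforward (a sketch reusing `lastFive` from the xlrd example above):

    lo = min(lastFive)
    hi = max(lastFive)
    average1 = float(sum(lastFive)) / len(lastFive)
    print "lowest:", lo, "highest:", hi, "average:", average1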
_credit to[alexis](http://stackoverflow.com/users/699305/alexis) for pointing
out this module._
|
How to iterate over each pair of items in a dictionary
Question: I'm making a gravity simulator and I need to calculate the resultant force
acting upon each body.
In order to do this, I need to iterate through every pair of bodies in a
dictionary `(id: instance of Body class)` and get the gravitational force
between those two bodies. Then, I would add up all the forces and get the
resultants.
But, how do I iterate over each pair of items in a dictionary only once in
Python? If the celestial bodies were kept in a list, it would be simple:
for i in range(len(bodies)):
for j in range(len(bodies) - i - 1):
k = j - i + 1
b1 = bodies[i]
b2 = bodies[k]
Answer: [`values()`](http://docs.python.org/library/stdtypes.html#dict.values) and
`itertools`'
[`combinations`](http://docs.python.org/library/itertools.html#itertools.combinations)
are ideal for this use case.
from itertools import combinations
for a, b in combinations(bodies.values(), 2):
print a, b
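Since you ultimately want the resultant force on each body, you can accumulate per id while walking the pairs (a sketch; `force_between` is a placeholder for your own gravity calculation, and the force is treated as a single number for brevity):

    from itertools import combinations

    resultants = dict((body_id, 0.0) for body_id in bodies)

    for (id1, b1), (id2, b2) in combinations(bodies.items(), 2):
        f = force_between(b1, b2)  # placeholder: force that b2 exerts on b1
        resultants[id1] += f
        resultants[id2] -= f       # Newton's third law: equal and opposite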
|