To send keys using send_keys() in selenium python webdriver
Question: I am trying to type a float number into a textbox with a default value of
0.00, but my input gets appended to it instead of overwriting it. I tried .clear() and
then send_keys('123.00'), but it still gets appended. Then I tried
send_keys(Keys.CONTROL+'a','123.00'); the box still shows 0.00 only.
Any help is really appreciated.
For more info, URL: <http://new.ossmoketest.appspot.com>, userid:
[email protected] -- mycompanyname = orangescape (sorry, to
avoid spam mails); no password is needed now. Click purchaseorder... in the
form, please add a new product and a new price... it is a sample application for automation. Thanks.
Answer: Unless you have a custom editbox, `click()` should work for you:
from selenium.webdriver import Firefox
b = Firefox()
b.get('http://google.com')
e = b.find_element_by_id('lst-ib')
e.click() # is optional, but makes sure the focus is on editbox.
e.send_keys('12.34')
e.get_attribute('value')
# outputs: u'12.34'
e.click()
e.clear()
e.get_attribute('value')
# outputs: u''
e.send_keys('56.78')
e.get_attribute('value')
# outputs: u'56.78'
|
Restricting the User Input to Alphabets
Question: I'm a technical writer learning python. I wanted to write a program for
validating the Name field input, as a practice, restricting the user
entries to alphabets. I saw similar code for validating a number (Age) field
here, and adapted it for alphabets as below:
import string
import re
r = re.compile(r'[a-zA-Z]+')
print "WELCOME FOR NAME VERIFICATION. TYPE ALPHABETS ONLY!"
print raw_input("Your Name:")
x = r
if x == r:
print x
elif x != r:
print "Come on,'", x,"' can't be your name"
print raw_input("Your Name:")
if 5<=len(x)<=10:
print "Hi,", x, "!"
elif len(x)>10:
print "Mmm,Your name is too long!"
elif len(x)<5:
print "Alas, your name is too short!"
raw_input("Press 'Enter' to exit!")
I intend this code block to do two things. Namely, display the input prompt
until the user inputs alphabets only as 'Name'. Then, if that happens, process
the length of that input and display messages as coded. But I get two
problems that I could not solve even after a lot of attempts: either even
correct entries are rejected by the exception code, or wrong entries are
accepted and their length is processed.
Please help me to debug my code. And, is it possible to do it without using
the reg exp?
Answer: If you're using Python, you don't need regular expressions for this--there are
included libraries which include functions which might help you. From this
page on [String methods](http://docs.python.org/release/2.5.2/lib/string-
methods.html), you can call `isalpha()`:
> Return true if all characters in the string are alphabetic and there is at
> least one character, false otherwise.
I would suggest using `isalpha()` in your if-statement instead of `x==r`.
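A minimal sketch of that suggestion, keeping the asker's prompts (the length
bounds come from the question's own code):
    print "WELCOME FOR NAME VERIFICATION. TYPE ALPHABETS ONLY!"
    while True:
        name = raw_input("Your Name:")
        if name.isalpha():  # alphabetic and non-empty
            break
        print "Come on, '", name, "' can't be your name"
    if 5 <= len(name) <= 10:
        print "Hi,", name, "!"
    elif len(name) > 10:
        print "Mmm, your name is too long!"
    else:
        print "Alas, your name is too short!"
    raw_input("Press 'Enter' to exit!")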
|
Python multiprocessing pickling error
Question: I am sorry that I can't reproduce the error with a simpler example, and my
code is too complicated to post. If I run the program in IPython shell instead
of the regular python, things work out well.
I looked up some previous notes on this problem. They were all caused by using
pool to call function defined within a class function. But this is not the
case for me.
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 313, in _handle_tasks
put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
I would appreciate any help.
UPDATE: The function I pickle is defined at the top level of the module.
Though it calls a function that contains a nested function. i.e, f() calls g()
calls h() which has a nested function i(), and I am calling
pool.apply_async(f). f(), g(), h() are all defined at the top level. I tried
simpler example with this pattern and it works though.
Answer: Here is a [list of what can be
pickled](http://docs.python.org/library/pickle.html#what-can-be-pickled-and-
unpickled). In particular, functions are only picklable if they are defined at
the top-level of a module.
This piece of code:
import multiprocessing as mp
class Foo():
@staticmethod
def work(self):
pass
pool = mp.Pool()
foo = Foo()
pool.apply_async(foo.work)
pool.close()
pool.join()
yields an error almost identical to the one you posted:
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 315, in _handle_tasks
put(task)
PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed
The problem is that the `pool` methods all use a `queue.Queue` to pass tasks
to the worker processes. Everything that goes through the `queue.Queue` must
be picklable, and `foo.work` is not picklable since it is not defined at the
top level of the module.
It can be fixed by defining a function at the top level, which calls
`foo.work()`:
def work(foo):
foo.work()
pool.apply_async(work,args=(foo,))
Notice that `foo` is picklable, since `Foo` is defined at the top level and
`foo.__dict__` is picklable.
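Putting it together, a minimal runnable sketch of the fix (same names as
above; the `__main__` guard is needed on Windows):
    import multiprocessing as mp
    class Foo():
        def work(self):
            print 'working'
    def work(foo):  # defined at the top level, so it is picklable
        foo.work()
    if __name__ == '__main__':
        pool = mp.Pool()
        foo = Foo()
        pool.apply_async(work, args=(foo,))
        pool.close()
        pool.join()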
|
How to get cookies from web-browser with Python?
Question: **Context:**
_I am working on a backend access to an OpenID consumer (StackExchange in
fact). If I am to provide all possible OpenID providers as an option to the
user, then I'd have to simulate browser interaction to authenticate to each of
these providers before I could submit the Open ID URL. However, I think I
could cut this short by accessing the existing cookies of the user's web-
browser, and requesting authentication to the consumer directly with the URL._
**Problem:**
How to access the user's web-browser's cookies? I've seen very little
information on how to do it with Python. This [previous
question](http://stackoverflow.com/questions/5537592/how-to-extract-firefox-
session-cookie-python-language-preferred) partly answers the problem regarding
Firefox, pointing especially to [the code
sample](http://blog.mithis.net/archives/python/90-firefox3-cookies-in-python)
below. However, I would need to access cookies from the most common web
browsers used on Linux, not just Firefox.
#! /usr/bin/env python
# Protocol implementation for handling gsocmentors.com transactions
# Author: Noah Fontes nfontes AT cynigram DOT com
# License: MIT
import cookielib  # needed for MozillaCookieJar below
def sqlite2cookie(filename):
from cStringIO import StringIO
from pysqlite2 import dbapi2 as sqlite
con = sqlite.connect(filename)
cur = con.cursor()
cur.execute("select host, path, isSecure, expiry, name, value from moz_cookies")
ftstr = ["FALSE","TRUE"]
s = StringIO()
s.write("""\
# Netscape HTTP Cookie File
# http://www.netscape.com/newsref/std/cookie_spec.html
# This is a generated file! Do not edit.
""")
for item in cur.fetchall():
s.write("%s\t%s\t%s\t%s\t%s\t%s\t%s\n" % (
item[0], ftstr[item[0].startswith('.')], item[1],
ftstr[item[2]], item[3], item[4], item[5]))
s.seek(0)
cookie_jar = cookielib.MozillaCookieJar()
cookie_jar._really_load(s, '', True, True)
return cookie_jar
**Question: Does Python provide a module that can facilitate cookie extraction
from web-browsers?** Otherwise, how should I adapt the above code to draw
cookies from other browsers, like Chromium etc.?
_PS: Or am I looking at the initial problem (i.e. authenticate to the OpenID
provider) the wrong way? (I feel I am just replacing a problem by another.)_
Answer: I created a module to do exactly that, available here:
<https://bitbucket.org/richardpenman/browsercookie/>
Example usage:
import requests
import browser_cookie
cj = browser_cookie.chrome()
r = requests.get('http://stackoverflow.com', cookies=cj)
**python3 fork:** <https://github.com/borisbabic/browser_cookie3>
|
Is it possible to return all computers in AD in python
Question: How could I search AD for all computer names?
Answer: It's not great code -- if you're familiar enough with the Windows API I'd
recommend using the `win32con` module instead. However, this is quite functional:
import subprocess
splitstring = "\\r\\n"
bCompList = subprocess.check_output(['dsquery', 'computer', 'domainroot',
'-scope', 'subtree', '-limit', '0'])
compList = str(bCompList)[2:-5].split(splitstring)
The `subprocess` module is used to run external commands; I suggest [reading
up on the
documentation](http://docs.python.org/py3k/library/subprocess.html#module-
subprocess). `subprocess.check_output` captures and returns the output of the
command from stdout. (Note that the command is a list. That's important!)
In this case, we're using `dsquery computer` to query the entire domain
(`'domainroot'`) for all computer objects (`'limit', '0'`). This returns a
binary string -- stored here in `bCompList`.
Since it's a binary string, we'll probably want to convert it to a 'standard'
string format to work with -- that's why we use `str()`. We can use slicing to
trim off the 'junk' characters (the "`b`" which indicated the string was
binary, and the trailing `'\\r\\n'` garbage.) That resultant string then gets
used by `split()` to break it up into a list.
Incidentally, if you want to trim off the quotation marks `dsquery computer`
places around each computer object, change the slicing to `[3:-6]` and change
`splitstring = "\\r\\n"` to `splitstring = "\"\\r\\n\""`.
There are two significant caveats to this code, however:
* It must be run on a computer with the Windows "AD DS Snap-Ins and Command-Line Tools" feature installed (at least, that's what they're called in Windows Server 2008 R2.)
* You need to be logged in as an AD user with appropriate privileges. (Alternatively, you could include `'-u', username, '-p', password` in the command argument list, replacing `username` and `password` with the appropriate credentials -- see the sketch below.)
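A minimal sketch of that credentialed variant (`username` and `password` are
placeholders you must supply):
    import subprocess
    username = 'DOMAIN\\someuser'  # placeholder credentials
    password = 'secret'
    bCompList = subprocess.check_output(['dsquery', 'computer', 'domainroot',
                                         '-scope', 'subtree', '-limit', '0',
                                         '-u', username, '-p', password])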
|
How to install python database library to work with robotframework - API issue
Question: I am on a Windows 7 64bit machine, using Python 2.7 and I am trying to use the
python database library in robotframework. I have previously used a java
library file but now I want to use the python library.
I have gone to [github](http://franz-see.github.com/Robotframework-Database-
Library/) and [downloaded version 0.6](https://github.com/franz-
see/Robotframework-Database-Library/archives/0.2).
I have also installed a setup file and MySQL-python from
[here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python)
However, when I try to install the database library (using `python setup.py
build`) I get the following error:
Traceback (most recent call last):
File "setup.py", line 25, in <module>
from DatabaseLibrary import __version__
File "src\DatabaseLibrary\__init__.py", line 15, in <module>
from connection_manager import ConnectionManager
File "src\DatabaseLibrary\connection_manager.py", line 16, in <module>
from robot.api import logger
ImportError: No module named api
Why do I not have `robot.api` and how do I get it and install it? Or is there
an easier way to install the python database library?
Answer: It seems that the Database library uses Robot Framework internals, but does
not list Robot Framework as its dependency. The `robot.api` package was
introduced in RF 2.6, so upgrading/installing the latest Robot Framework (from
the [project pages](http://code.google.com/p/robotframework/downloads)) should
resolve your issue.
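If you have pip available, upgrading is just `pip install --upgrade
robotframework`; the project pages above also offer installers.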
|
Python closures and replacing surrounding scope
Question: I know when using Groovy closures, I can change the `delegate` on the closure
so function calls made within the closure could be defined externally.
Can I do something similar in Python?
Specifically, if you take the following code:
def configure():
build()
def wrap(function):
def build():
print 'build'
function()
wrap(configure)
I'd like it to print 'build' (only making changes to `wrap()`).
Some notes:
I don't want to pass functions into `configure()` since there may be a large
number of functions that can be called by `configure()`.
I also don't want to define those globally, because, once again, there may be
a large number of functions that can be called by `configure()` and I don't
want to pollute the global namespace.
Answer: Whether or not this is a good way to do it is debatable, but here's a solution
that doesn't modify the global namespace.
def configure():
build()
def wrap(f):
import new
def build():
print 'build'
new.function(f.func_code, locals(), f.func_name, f.func_defaults, f.func_closure)()
wrap(configure)
I found it at [How to modify the local namespace in
python](http://stackoverflow.com/questions/1142068/how-to-modify-the-local-
namespace-in-python)
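Note that the `new` module is Python 2 only; a roughly equivalent sketch for
Python 3 (an untested assumption, reusing the same `configure`/`wrap` names)
uses `types.FunctionType`:
    import types
    def configure():
        build()
    def wrap(f):
        def build():
            print('build')
        # Rebuild configure with wrap's locals as its globals, so the name
        # 'build' resolves inside its body, then call it.
        types.FunctionType(f.__code__, locals(), f.__name__,
                           f.__defaults__, f.__closure__)()
    wrap(configure)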
|
How to get string data from a python PIL image object?
Question: I'm trying to send the data of a gif file of a desktop through a socket to a
remote desktop (for desktop sharing), but I can't get the string for the data
using PIL; I don't know how to convert the PIL objects to a string. Here is my
code (btw I know I can just write to a file and then read the data back, but I
think that is inefficient and that there must be a better way):
from PIL import ImageGrab
import cStringIO
fakie = cStringIO.StringIO()
ImageGrab.grab().save(fakie, 'GIF')
data = fakie.getvalue()
fakie.close()
# This last bit of code is to see if the var data stored the right info in a str bc i need to send it through a socket
with open('C:\something\something\Desktop\image.gif', 'w') as f:
f.write(data)
The problem is that after the file is written the gif picture only displays
the top 1/10 of the page (the gif file is messed up), and so I'm wondering if
the problem lies within my computer or my code (I'm using Vista on a VERY old
computer, at least 6 years old I think, and I'm getting a new one soon). Any
input is appreciated.
Answer: As @HYRY puts it, you must open the image file in "wb" mode instead of "w" -
without the "b", Python defaults to opening it in text mode - on Windows that
means that whenever a 0x0a byte is written to the file, the O.S. writes a 0x0d
0x0a sequence instead, because it translates line ending sequences to Windows
native line endings.
In the "wb" mode, there is no translation, and your image file won't be
corrupted.
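The fix is a one-character change to the asker's last block (the path here is
just a placeholder):
    with open(r'C:\path\to\image.gif', 'wb') as f:  # 'wb', not 'w'
        f.write(data)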
|
How to decode POST data from github in web.py?
Question: (This question is related - but not the same - to
[this](http://stackoverflow.com/questions/8740353/how-to-convert-a-string-
data-to-a-json-object-in-python) one)
This is the POST data that I get from a github hook:
payload=%7B%22pusher%22%3A%7B%22name%22%3A%22none%22%7D%2C%22repository%22%3A%7B%22name%22%3A%22test%22%2C%22size%22%3A84%2C%22has_wiki%22%3Atrue%2C%22created_at%22%3A%222012%2F01%2F12%2001%3A04%3A25%20-0800%22%2C%22watchers%22%3A1%2C%22private%22%3Afalse%2C%22fork%22%3Afalse%2C%22url%22%3A%22https%3A%2F%2Fgithub.com%2Fgonvaled%2Ftest%22%2C%22pushed_at%22%3A%222012%2F01%2F12%2001%3A05%3A26%20-0800%22%2C%22has_downloads%22%3Atrue%2C%22open_issues%22%3A0%2C%22has_issues%22%3Atrue%2C%22homepage%22%3A%22%22%2C%22description%22%3A%22%22%2C%22forks%22%3A1%2C%22owner%22%3A%7B%22name%22%3A%22gonvaled%22%2C%22email%22%3A%22gonvaled%40gonvaled.com%22%7D%7D%2C%22forced%22%3Afalse%2C%22after%22%3A%2214209371dcbdd95cc3ef5c4a07d80edd42f1295c%22%2C%22deleted%22%3Afalse%2C%22ref%22%3A%22refs%2Fheads%2Fmaster%22%2C%22commits%22%3A%5B%5D%2C%22before%22%3A%2214209371dcbdd95cc3ef5c4a07d80edd42f1295c%22%2C%22compare%22%3A%22https%3A%2F%2Fgithub.com%2Fgonvaled%2Ftest%2Fcompare%2F1420937...1420937%22%2C%22created%22%3Afalse%7D
Which I can decode using this:
urllib.unquote(data)
Getting this:
payload={"pusher":{"name":"none"},"repository":{"name":"test","size":84,"has_wiki":true,"created_at":"2012/01/12 01:04:25 -0800","watchers":1,"private":false,"fork":false,"url":"https://github.com/gonvaled/test","pushed_at":"2012/01/12 01:05:26 -0800","has_downloads":true,"open_issues":0,"has_issues":true,"homepage":"","description":"","forks":1,"owner":{"name":"gonvaled","email":"[email protected]"}},"forced":false,"after":"14209371dcbdd95cc3ef5c4a07d80edd42f1295c","deleted":false,"ref":"refs/heads/master","commits":[],"before":"14209371dcbdd95cc3ef5c4a07d80edd42f1295c","compare":"https://github.com/gonvaled/test/compare/1420937...1420937","created":false}
I can see the JSON there, after the `payload=` bit. The question I have is:
what format is the full data? How can I get just the payload bit, using
standard python libraries; I would prefer to avoid splitting the string
myself, since I do not know the special cases.
The [github help page](http://help.github.com/post-receive-hooks/) gives this
suggested implementation for a Sinatra server:
post '/' do
push = JSON.parse(params[:payload])
"I got some JSON: #{push.inspect}"
end
How can this `params` array be handled in python, with standard libraries?
What is the most pythonic implementation of that Ruby code? My end goal is to
have the full POST data accessible as a python dictionary.
Answer: Try this:
import json
import urlparse
data = urlparse.parse_qs(r)
print json.loads(data['payload'][0])
where `r` is the raw POST string you received.
See <http://docs.python.org/library/urlparse.html#urlparse.parse_qs>
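Since the question mentions web.py, here is a minimal handler sketch along
those lines (the route and response text are assumptions):
    import json
    import web
    urls = ('/', 'Hook')
    class Hook:
        def POST(self):
            # web.input() parses the form-encoded POST body for us
            push = json.loads(web.input()['payload'])
            return 'I got some JSON: %r' % push
    if __name__ == '__main__':
        web.application(urls, globals()).run()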
|
login a webpage using python and display the result in the browser
Question: I'm a beginner in python and as a learning project I decided to make a
little script to launch a webpage and log into it. After lots of googling
(which got me many messy bits and pieces of knowledge) I found out that each
website has its own way of handling login requests, and that there are many
ways to emulate an instance of a browser to retrieve and post data (urllib2,
selenium, twill and so on).
So I know it is possible to log in to a website from python (for a specific
site and using a specific way) but I can't seem to display the logged-in page
in a browser.
Can anyone help me please?
Answer: I think selenium can do this job.
code snippet:
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
browser = webdriver.Firefox() # Get local session of firefox
browser.get(yoursite) # Load page
elem = browser.find_element_by_name("email") # Find the email field
elem.send_keys(email)
ps = browser.find_element_by_name("password") # Find the password field
ps.send_keys(passwd + Keys.RETURN)
This piece of code will drive your Firefox to open the logged-in page.
|
Error importing a .pyd file (as a python module) from a .pyo file
Question: I am running pygame (for Python) on Windows. I have some .pyo files and some
.pyd files. I have another script from somewhere else that is trying to import
one of the .pyd files as a module, but I keep getting the error that no such
module exists.
Do .pyo files have issues importing .pyd files as modules? What can I do to
solve this issue?
Answer: It's typically because of one or more of the following:
* **The .pyd is not in your current path** (you said it was in the same folder so that should not be the problem)
* **A DLL the .pyd depends on is not in your current path.** Locate the missing DLLs using depends.exe and either copy these DLLs to the same folder or add their containing directories to your system path
* **You're using a debug version of python.** Then the module must be renamed from xyz.pyd to xyz_d.pyd.
|
'str' object has no attribute '_meta' error comes when I uncomment admin.autodiscover()
Question: I am facing this error in django:
AttributeError at /
'str' object has no attribute '_meta'
Request Method: GET
Request URL: http://localhost:8000/
Django Version: 1.3
Exception Type: AttributeError
Exception Value:
'str' object has no attribute '_meta'
Exception Location: C:\Python27\lib\site-packages\django\contrib\admin\sites.py in register, line 80
Python Executable: C:\Python27\python.exe
Python Version: 2.7.2
Python Path: ['D:\\programming\\django_projects\\ecomstore',
'C:\\Python27\\lib\\site-packages\\setuptools-0.6c9-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\pymysql-0.3-py2.6.egg',
'C:\\Windows\\system32\\python27.zip',
'C:\\Python27\\DLLs',
'C:\\Python27\\lib',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk',
'C:\\Python27',
'C:\\Python27\\lib\\site-packages',
'C:\\Python27\\lib\\site-packages\\PIL']
Server time: Fri, 13 Jan 2012 16:44:18 +0500
This error doesn't occur if I comment out `admin.autodiscover()`. I also used a
ModelForm in django, so is it because of the ModelForm? Following is the
ModelForm code:
from django import forms
from catalog.models import Product
class ProductAdminForm(forms.ModelForm):
class Meta:
model=Product
def clean_price(self):
if self.cleaned_data['price']<=0:
raise forms.ValidationError('Price must be greater than zero')
return self.cleaned_data['price']
So if someone understands the problem, please tell.
Following is the traceback, pasted:
Environment:
Request Method: GET
Request URL: http://localhost:8000/
Django Version: 1.3
Python Version: 2.7.2
Installed Applications:
['django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'ecomstore.catalog']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "C:\Python27\lib\site-packages\django\core\handlers\base.py" in get_response
101. request.path_info)
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in resolve
250. for pattern in self.url_patterns:
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in _get_url_patterns
279. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Python27\lib\site-packages\django\core\urlresolvers.py" in _get_urlconf_module
274. self._urlconf_module = import_module(self.urlconf_name)
File "C:\Python27\lib\site-packages\django\utils\importlib.py" in import_module
35. __import__(name)
File "D:\programming\django_projects\ecomstore\..\ecomstore\urls.py" in <module>
5. admin.autodiscover()
File "C:\Python27\lib\site-packages\django\contrib\admin\__init__.py" in autodiscover
26. import_module('%s.admin' % app)
File "C:\Python27\lib\site-packages\django\utils\importlib.py" in import_module
35. __import__(name)
File "D:\programming\django_projects\ecomstore\..\ecomstore\catalog\admin.py" in <module>
19. admin.site.register('Product', 'ProductAdmin')
File "C:\Python27\lib\site-packages\django\contrib\admin\sites.py" in register
80. if model._meta.abstract:
Exception Type: AttributeError at /
Exception Value: 'str' object has no attribute '_meta'
Answer: The issue is this line:
admin.site.register('Product', 'ProductAdmin')
You have to pass in the class objects, not strings. This should work:
admin.site.register(Product, ProductAdmin)
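For reference, a minimal admin.py sketch of the fixed registration (the import
location of the form is an assumption):
    from django.contrib import admin
    from catalog.models import Product
    from catalog.forms import ProductAdminForm  # assumed location of the form
    class ProductAdmin(admin.ModelAdmin):
        form = ProductAdminForm  # attach the validating ModelForm
    admin.site.register(Product, ProductAdmin)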
|
crawler by using urls
Question: Hi everyone, I need to crawl a site using python (my python version is 2.6.6),
only using urls.
* First I need to open this link (<http://www.google.com/>) and crawl it, and I should collect all http links which are present in this page;
* second, from the collected http links, I should go to each and every link, open it, do the crawling process, and save the crawled links in a separate folder. I am just trying it in this way; if anyone knows the solution please reply, I need your help.
Answer: Step 1: Use the `urllib` or `urllib2` library of python to visit your page.
Step 2: Use any xml/html parser like [lxml](http://lxml.de/) or [Beautiful
Soup](http://www.crummy.com/software/BeautifulSoup/) to fetch all the links
from your web page.
Step 3: Now pass these links again to urllib or urllib2 and save your webpage.
And you can also give [Scrapy](http://scrapy.org/) or
[Mechanize](http://wwwsearch.sourceforge.net/mechanize/) a try.
And here is an example of using xpath expressions with `lxml`. You can get the
xpath of any element using `firebug`.
import lxml
from lxml import etree
import urllib2
response = urllib2.urlopen('your url')
html = response.read()
link = etree.HTML(html)
links = link.xpath('xpath expr')
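For example (a generic XPath, not specific to the question), `'//a/@href'`
collects the href of every anchor on the page:
    links = link.xpath('//a/@href')  # all link targets in the document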
|
Running Django Custom Management Command - Path Issues
Question: I don't understand why this doesn't find the command:
khandelwal@simba:/opt/code/cdcschool$ python schoolcommand/manage.py createcampaign
Unknown command: 'createcampaign'
Type 'manage.py help' for usage.
When this works fine:
khandelwal@simba:/opt/code/cdcschool$ cd schoolcommand/
khandelwal@simba:/opt/code/cdcschool/schoolcommand$ python manage.py createcampaign
Error: Provide: <type start_date end_date>
This is where my command is located:
$ ls schoolcommand/management/
commands __init__.py __init__.pyc
$ ls schoolcommand/management/commands/
campaignmanager.py createcampaign.py __init__.py
campaignmanager.pyc createcampaign.pyc __init__.pyc
How do I fix it so that I can do:
khandelwal@simba:/opt/code/cdcschool$ python schoolcommand/manage.py createcampaign
Here are the values of my PYTHONPATH, DJANGO_SETTINGS_MODULE and the one place
my settings.py is located.
khandelwal@simba:/opt/code/cdcschool$ ls
Procfile README requirements.txt schoolcommand
khandelwal@simba:/opt/code/cdcschool$ echo $PYTHONPATH
khandelwal@simba:/opt/code/cdcschool$ echo $DJANGO_SETTINGS_MODULE
khandelwal@simba:/opt/code/cdcschool$ find . -name settings.py
./schoolcommand/settings.py
khandelwal@simba:/opt/code/cdcschool$
Answer: When you use Django, there are two important rules.
First.
You have a `settings.py` file which **must** be used by the web server and
**all** the manage.py commands. All of them.
The default place to look for the `settings.py` file is the current working
directory. You can change this with the `PYTHONPATH` and the
`DJANGO_SETTINGS_MODULE` environment variable.
The `manage.py` is created for you in the same directory as the `settings.py`.
You can use `django-admin.py --settings=some.module` if you don't want to use
`manage.py`.
Second.
The `manage.py` commands do not have any "path" to them. They're all just one-
word commands, no matter where they happen to live in your application tree.
You **never** do this: `python schoolcommand/manage.py createcampaign` unless
(somehow) your `settings.py` is not in the same directory as your `manage.py`.
You **normally** do this:
cd /path/to/your/settings
python manage.py createcampaign
If your settings is in `code/schoolcommand` that means that your web site and
all your commands will operate in that directory.
|
Add on to: How do I protect my Python codebase so that guests can't see certain modules but so it still works?
Question: [How do I protect my Python codebase so that guests can't see certain modules
but so it still works?](http://stackoverflow.com/questions/1443146/how-do-i-
protect-my-python-codebase-so-that-guests-cant-see-certain-modules-but)
My question is an add on question posted on the page above.
If there are two svn directories, for example src/private and src/public,
internal users will have both the public and private directories and things
will just work fine.
The public users will have only src/public. Is it possible to import
src/private in `__init__.py` even though the user doesn't have it checked out?
The user should be able to link to it to resolve any functional dependencies
on src/private, but should not be able to view the content of the files.
Are there any other solutions for this problem?
Answer: Give it up. It's essentially impossible to keep curious eyes out. For
instance, look at the [dis](http://docs.python.org/library/dis.html) module:
import dis
def foo(): print 'bar'
dis.dis(foo)
which would yield:
1 0 LOAD_CONST 1 ('bar')
3 PRINT_ITEM
4 PRINT_NEWLINE
5 LOAD_CONST 0 (None)
8 RETURN_VALUE
Voila - there are the strings you wanted to hide, revealed simply by importing
your module. There are other modules and services that can do a pretty good
job of converting such disassemblies back into readable Python code.
What exactly are you trying to accomplish? That is, what specifically are you
trying to protect?
|
Python Scraper - Socket Error breaks script if target is 404'd
Question: I encountered an error while building a web scraper to compile data and output
it into XLS format; when testing it against a list of domains I wish to
scrape, the program falters when it receives a socket error. I'm hoping to
find an 'if' statement that would skip parsing a broken website and continue
through my while-loop. Any ideas?
workingList = xlrd.open_workbook(listSelection)
workingSheet = workingList.sheet_by_index(0)
destinationList = xlwt.Workbook()
destinationSheet = destinationList.add_sheet('Gathered')
startX = 1
startY = 0
while startX != 21:
workingCell = workingSheet.cell(startX,startY).value
print ''
print ''
print ''
print workingCell
#Setup
preSite = 'http://www.'+workingCell
theSite = urlopen(preSite).read()
currentSite = BeautifulSoup(theSite)
destinationSheet.write(startX,0,workingCell)
And here's the error:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
homeMenu()
File "C:\Python27\farming.py", line 31, in homeMenu
openList()
File "C:\Python27\farming.py", line 79, in openList
openList()
File "C:\Python27\farming.py", line 83, in openList
openList()
File "C:\Python27\farming.py", line 86, in openList
homeMenu()
File "C:\Python27\farming.py", line 34, in homeMenu
startScrape()
File "C:\Python27\farming.py", line 112, in startScrape
theSite = urlopen(preSite).read()
File "C:\Python27\lib\urllib.py", line 84, in urlopen
return opener.open(url)
File "C:\Python27\lib\urllib.py", line 205, in open
return getattr(self, name)(url)
File "C:\Python27\lib\urllib.py", line 342, in open_http
h.endheaders(data)
File "C:\Python27\lib\httplib.py", line 951, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 811, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 773, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 754, in connect
self.timeout, self.source_address)
File "C:\Python27\lib\socket.py", line 553, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno 11004] getaddrinfo failed
Answer: Ummm that looks like the error I get when my internet connection is down. HTTP
404 errors are what you get when you do have a connection but the URL that you
specify can't be found.
There's no if statement to handle exceptions; you need to "catch" them using
the [try/except
construct.](http://docs.python.org/tutorial/errors.html#exceptions)
**Update** : Here's a demonstration:
import urllib
def getconn(url):
try:
conn = urllib.urlopen(url)
return conn, None
except IOError as e:
return None, e
urls = """
qwerty
http://www.foo.bar.net
http://www.google.com
http://www.google.com/nonesuch
"""
for url in urls.split():
print
print url
conn, exc = getconn(url)
if conn:
print "connected; HTTP response is", conn.getcode()
else:
print "failed"
print exc.__class__.__name__
print str(exc)
print exc.args
Output:
qwerty
failed
IOError
[Errno 2] The system cannot find the file specified: 'qwerty'
(2, 'The system cannot find the file specified')
http://www.foo.bar.net
failed
IOError
[Errno socket error] [Errno 11004] getaddrinfo failed
('socket error', gaierror(11004, 'getaddrinfo failed'))
http://www.google.com
connected; HTTP response is 200
http://www.google.com/nonesuch
connected; HTTP response is 404
Note that so far we have just opened the connection. Now what you need to do
is check the HTTP response code and decide whether there is anything worth
retrieving using `conn.read()`
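Applied to the asker's while-loop, the body could look roughly like this
(a fragment; `getconn()` is the helper defined above and the other names come
from the question's code):
    conn, exc = getconn(preSite)
    if conn is not None and conn.getcode() == 200:
        currentSite = BeautifulSoup(conn.read())
        destinationSheet.write(startX, 0, workingCell)
    else:
        # DNS failure or a non-200 response: skip this domain and move on
        print 'skipping', workingCell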
|
(GObjectIntrospection) Segmentation fault when trying to create ibus engine using javascript
Question: GObjectIntrospection allows using C objects in any high level language.
<https://live.gnome.org/GObjectIntrospection>
IBus is an input method framework for linux. code.google.com/p/ibus
I'm in a bit of trouble using GObjectIntrospection / javascript. I tried to
create an ibus engine. The same code works in vala and python, but in
javascript it segfaults. I'm using opensuse 12.1 with gnome3. The "ibus-devel"
package provides the /usr/share/gir-1.0/IBus-1.0.gir required for
GObjectIntrospection.
I'm trying to run the following code.
#!/usr/bin/env gjs
const IBus = imports.gi.IBus;
//get the ibus bus
var bus = new IBus.Bus();
if(bus.is_connected()){
var factory = new IBus.Factory({
connection: bus.get_connection()
});
factory.add_engine({
engine_name:"ibus-sarim",
engine_type:typeof(this)
});
}
It crashes on line 6, in "new IBus.Factory".
Terminal output,
(gjs:13353): GLib-GIO-CRITICAL **: g_dbus_connection_register_object:
assertion `object_path != NULL && g_variant_is_object_path
(object_path)' failed
Segmentation fault
I can't figure out where the problem is. I tried the vala test code provided
with ibus at
<https://github.com/ibus/ibus/blob/master/bindings/vala/test/enchant.vala>. It
compiles and runs fine. In enchant.vala line 148,
var factory = new Factory(bus.get_connection());
The code for creating the Factory is the same as what I tried in javascript.
Also in python,
from gi.repository import IBus
from gi.repository import GLib
from gi.repository import GObject
IBus.init()
bus = IBus.Bus()
if bus.is_connected():
factory = IBus.Factory.new(bus.get_connection())
This also seems to work fine, no seg fault. But in javascript it fails every
time. Any idea? I've been banging on this for a couple of days without any
avail :(
Answer: In IBusFactory:
"connection" IBusConnection* : Read / Write / Construct Only
The documentation says `"Construct Only"`. That's subject to interpretation
for now, but it means to me that it is probably a private or protected class
member. That said, the constructor is defined as:
IBusFactory * ibus_factory_new (IBusConnection *connection);
There's that connection variable, in the constructor. Notice when you provide
it exactly that way, your app works fine.
const IBus = imports.gi.IBus;
//get the ibus bus
var bus = new IBus.Bus();
if(bus.is_connected()){
var factory = new IBus.Factory(bus.get_connection());
}
Now as for `factory.add_engine()`, the definition is here:
void ibus_factory_add_engine (IBusFactory *factory,
const gchar *engine_name,
GType engine_type);
That means you will have to provide the `engine_name` and `engine_type` as
function parameters. This works:
factory.add_engine('ibus-engine-name', some-engine-type);
See <http://ibus.googlecode.com/svn/docs/ibus/ch03.html> for engine ideas.
This code doesn't segfault, but it doesn't work either. It indicates the
correct syntax up until `add_engine()`'s second parameter.
#!/usr/bin/env gjs
const IBus = imports.gi.IBus;
//get the ibus bus
var bus = new IBus.Bus();
if(bus.is_connected()){
var factory = new IBus.Factory(bus.get_connection());
factory.add_engine("ibus-sarim", typeof(this));
}
|
"repeat" structure in Java
Question: I'm teaching programming to beginners (starting at 12-15 years old) and one of
the choices we made (because it was natural in Python) was to teach the notion
of "repeating an action" before the notion of variables.
We started in Python with
for loop in range(10):
without speaking about variables or arrays, and in C++ with
#define repeat(nb) for(int _loop = 0 ; _loop < (nb) ; _loop++)
The idea was to hide the complexity of a classical loop in order to insist on
the "repeat" part. We are not hiding from the students the fact that
"repeat(10)" is not a part of C++, it's just a way to simplify the learning.
In Pascal we can't do much more than
for loop := 1 to 10 do
but that's ok because its's not that difficult to remember.
I was looking for something similar in Java and I found that :
import java.util.List;
import java.util.AbstractList;
class Range {
public static List<Integer> size(final int end) {
return new AbstractList<Integer>() {
@Override
public Integer get(int index) {
return 0 + index;
}
@Override
public int size() {
return end;
}
};
};
}
public class Main {
public static void main(String[] argv) {
for (int loop : Range.size(10)) {
System.out.println("xx");
}
}
}
The
for (int loop : Range.size(10))
is still easier to remember than
for(int loop = 0 ; loop < 10 ; loop++)
but there are two problems:
* two variables are needed for nested for loops: I don't think we can do much about that
* we are getting warnings because the variable `loop` is not used
Do you see a better solution than what we have?
Once again, we only want to provide some "tool" at the beginning phase in
order for the students to "repeat" actions, before knowing anything about
"variables". We are not hiding from them that it's not in the language, and
after a few exercises (~80-100) we are asking them to use the real syntax.
* * *
We have approximately 20 exercises before introducing variables: some about
printing text, but mostly we are providing a library with objects you can
manipulate (hence the variables are hidden in the object state). You can think
of the "logo-turtle" for example. This way the notion of "loop" can be
manipulated and "seen" before introducing explicit variables, and you can have
interesting exercises really fast.
One example, in Python, where you want to visit every case of a 10x10 table
once and only once and then be back at your starting point (lower-left corner)
:
from robot import *
top()
for loop in range(4):
for loop in range(8):
top()
right()
for loop in range(8):
bottom()
right()
for loop in range(8):
top()
right()
for loop in range(9):
bottom()
for loop in range(9):
left()
This exercise is not that easy but the syntax is really simple and allows the
student to concentrate on the "algorithmic" part and not the "language" part.
After a few exercises the students are getting interested and we can
introduce more syntax and more difficult concepts like variables.
Answer: Do you really need to use Java for those exercises? If other languages work
for you then why not use them? You can always move to Java when your
students know basics like variables.
I agree that variables can be quite confusing for beginners - especially that
their value can change all the time; that is not something people are used to
from algebra, where values don't change once "assigned".
If you want to use Java, you could use a while loop, which seems to fit
better. One dirty trick to avoid the use of a variable is the following code -
it uses a StackTraceElement instead of a variable.
It prints
Hello A
Hello B
Hello C
Hello C
Hello C
Hello B
Hello C
Hello C
Hello C
Hello A
Hello B
Hello C
Hello C
Hello C
Hello B
Hello C
Hello C
Hello C
Hello A
Hello B
Hello C
Hello C
Hello C
Hello B
Hello C
Hello C
Hello C
Here is the full source. The main(String[] args) method is the code with
loops; the rest is supporting code.
import java.util.HashMap;
import java.util.Map;
public class Repeater {
public static void main(String[] args) {
while(range(3)) {
System.out.println("Hello A");
while (range(2)) {
System.out.println("Hello B");
while (range(3)) {
System.out.println("Hello C");
}
}
}
}
public static boolean range(int size) {
return Range.range(size);
}
public static class Range {
static Map<StackTraceElement, RangePosition> ranges = new HashMap<StackTraceElement, RangePosition>();
public static boolean range(int size) {
final StackTraceElement stackTraceElement = Thread.currentThread().getStackTrace()[3];
//System.out.println(stackTraceElement);
RangePosition position = ranges.get(stackTraceElement);
if (position == null) {
position = new RangePosition();
position.size = size;
ranges.put(stackTraceElement, position);
}
final boolean next = position.next();
if (!next) {
ranges.remove(stackTraceElement);
}
return next;
}
}
public static class RangePosition {
int current,size;
boolean next() {
current++;
return current <= size;
}
}
}
But I'd prefer to use some language which supports this naturally.
|
Using Python decorators when the class def is in another file?
Question: Say I have `class myClass` in file myClass.py and I want to be able to invoke
`@myClass` without needing to clutter up my file with the myClass code.
Is this doable?
I tried importing `myClass` but it did not seem to work.
Answer:
import mymodule
@mymodule.myClass
class myOtherClass:
pass
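For completeness, a minimal sketch of what mymodule.py could contain (the
decorator semantics here are an assumption; only the names come from the
question):
    # mymodule.py
    class myClass:
        def __init__(self, cls):
            self.cls = cls  # wrap the decorated class
        def __call__(self, *args, **kwargs):
            return self.cls(*args, **kwargs)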
|
How to update a remote ms access database?
Question: I need to create a webapp to show and allow editing of a set of data.
This data is contained in an Access Database file, used by another application
(a desktop application).
I'm evaluating the best way to carry out this job.
Unfortunately my proposal to migrate to another database solution (an rdbms
such as MySQL or Postgres) was rejected by the customer.
The issue here is how to keep data integrity and keep the data synchronized
between the server and the desktop that executes the application that also
uses this data.
All I need to do is read data, store edited or new data, give authorized
users an interface to review this new inserted data - thus validating it - and
import it into the original access database.
I've found the following possible solutions (to update the desktop mdb copy),
but each of them has pros and cons:
* remote access to the windows machine
* exposes the machine to unauthorized access
* use rsync to keep files synchronized (once a day)
* if the mdb on the client has been edited with the desktop application there will be data loss
* can be updated only when all data has been validated
* there won't be truly synchronized data (until rsync runs)
* client-server applications
* can use secure layers to protect data against attackers
* a 3rd application (on the desktop) is required
* synchronization requires authorized users to use this 3rd application to import data (which will query the remote db and update the local mdb)
Do you know some other way that could help me get this done? I'm leaning
toward the client-server model, even if this would be more expensive, but it's
the only way I see to make this work.
Do you see some other pros/cons of the proposed solutions?
I haven't chosen the language to develop this in, but I was thinking of using
either PHP and/or Python. The remote environment (for the server) can be
either Windows or *nix (preferred).
Thanks.
Answer: The first idea:
>
> exposes the machine to unauthorized access
>
This is not really a valid argument. Everything you put on the Internet is
exposed. And it is not like it cannot be further protected via SSL/TLS. Even
RDP can be secured via an SSH tunnel, for example.
To my mind, the easiest and most elegant way to do it is by using web
services (SOAP). Write the server code that does inserts/updates on the Access
database with something like Python or Java. Generate a WSDL from the
working code. From the WSDL you can generate a client for PHP/Python. Now all
you have to do is to write the web interface that uses the PHP/Python client.
For security using SSL and Basic authentication should be enough (supported by
SOAPpy in the case of Python, for example). You can use `pyodbc` to connect to
the Access database.
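A minimal sketch of the `pyodbc` side (the .mdb path and table name are
placeholders; the driver name is the standard Access ODBC driver on Windows):
    import pyodbc
    conn = pyodbc.connect(
        r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
        r'DBQ=C:\path\to\data.mdb')
    cur = conn.cursor()
    cur.execute("SELECT * FROM sometable")  # placeholder table name
    for row in cur.fetchall():
        print row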
|
Best hashing method before saving password into database table for django
Question:
def register(request):
flag = True
possible = '0123456789abcdefghijklmnopqrstuvwxyz'
token = ''
current_datetime = datetime.datetime.now()
user = UsersModelForm()
if request.method == 'POST':
userf = UsersModelForm(request.POST)
username = userf.data['username']
password = userf.data['password']
passwordrepeat = userf.data['passwordrepeat']
email = userf.data['email']
if password != passwordrepeat:
flag = False
passVariable = {'user':user, 'flag': False}
return render_to_response('register.html', passVariable, context_instance=RequestContext(request))
elif password == passwordrepeat:
for i in range(1,10):
temp = random.choice(possible)
token = token + temp
print token
if userf.is_valid():
check = userf.save(commit=False)
check.email_token = token
check.email_token_expiry = current_datetime + timedelta(1)
check.save()
return HttpResponseRedirect('/')
else:
return render_to_response('register.html', {"user": user, 'flag': True}, context_instance=RequestContext(request))
I need to apply a hashing method to `userf.data['password']` and
`userf.data['passwordrepeat']` before saving into the database table.
Which hashing method is preferable for hashing using python?
Answer: Use [`bcrypt`](http://code.google.com/p/py-bcrypt/).
Here's an example taken from the [README](http://code.google.com/p/py-
bcrypt/source/browse/README):
import bcrypt
# Hash a password for the first time
hashed = bcrypt.hashpw(password, bcrypt.gensalt())
# gensalt's log_rounds parameter determines the complexity
# the work factor is 2**log_rounds, and the default is 12
hashed = bcrypt.hashpw(password, bcrypt.gensalt(10))
# Check that an unencrypted password matches one that has
# previously been hashed
if bcrypt.hashpw(plaintext, hashed) == hashed:
print "It matches"
else:
print "It does not match"
|
Artefacts from Riemann sum in scipy.signal.convolve
Question: **Short summary** : How do I quickly calculate the finite convolution of two
arrays?
# Problem description
I am trying to obtain the finite convolution of two functions f(x), g(x)
defined by
(f * g)(t) = \int_0^t f(\tau) g(t - \tau) \, d\tau
To achieve this, I have taken discrete samples of the functions and turned
them into arrays of length `steps`:
xarray = [x * i / steps for i in range(steps)]
farray = [f(x) for x in xarray]
garray = [g(x) for x in xarray]
I then tried to calculate the convolution using the `scipy.signal.convolve`
function. This function gives the same results as the algorithm `conv`
suggested
[here](http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html).
However, the results differ considerably from analytical solutions. Modifying
the algorithm `conv` to use the trapezoidal rule gives the desired results.
To illustrate this, I let
f(x) = exp(-x)
g(x) = 2 * exp(-2 * x)
the results are:
[plot comparing the Riemann, trapezoidal, scipy.signal.convolve and analytical results]
Here `Riemann` represents a simple Riemann sum, `trapezoidal` is a modified
version of the Riemann algorithm to use the trapezoidal rule,
`scipy.signal.convolve` is the scipy function and `analytical` is the
analytical convolution.
Now let `g(x) = x^2 * exp(-x)` and the results become:
[plot of the ratio of the scipy values to the analytical values]
Here 'ratio' is the ratio of the values obtained from scipy to the analytical
values. The above demonstrates that the problem cannot be solved by
renormalising the integral.
# The question
Is it possible to use the speed of scipy but retain the better results of a
trapezoidal rule or do I have to write a C extension to achieve the desired
results?
# An example
Just copy and paste the code below to see the problem I am encountering. The
two results can be brought to closer agreement by increasing the `steps`
variable. I believe that the problem is due to artefacts from right hand
Riemann sums because the integral is overestimated when it is increasing and
approaches the analytical solution again as it is decreasing.
**EDIT** : I have now included the original algorithm
[2](http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html)
as a comparison which gives the same results as the `scipy.signal.convolve`
function.
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
import math
def convolveoriginal(x, y):
'''
The original algorithm from http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
'''
P, Q, N = len(x), len(y), len(x) + len(y) - 1
z = []
for k in range(N):
t, lower, upper = 0, max(0, k - (Q - 1)), min(P - 1, k)
for i in range(lower, upper + 1):
t = t + x[i] * y[k - i]
z.append(t)
return np.array(z) #Modified to include conversion to numpy array
def convolve(y1, y2, dx = None):
'''
Compute the finite convolution of two signals of equal length.
@param y1: First signal.
@param y2: Second signal.
@param dx: [optional] Integration step width.
@note: Based on the algorithm at http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
'''
P = len(y1) #Determine the length of the signal
z = [] #Create a list of convolution values
for k in range(P):
t = 0
lower = max(0, k - (P - 1))
upper = min(P - 1, k)
for i in range(lower, upper):
t += (y1[i] * y2[k - i] + y1[i + 1] * y2[k - (i + 1)]) / 2
z.append(t)
z = np.array(z) #Convert to a numpy array
if dx != None: #Is a step width specified?
z *= dx
return z
steps = 50 #Number of integration steps
maxtime = 5 #Maximum time
dt = float(maxtime) / steps #Obtain the width of a time step
time = [dt * i for i in range (steps)] #Create an array of times
exp1 = [math.exp(-t) for t in time] #Create an array of function values
exp2 = [2 * math.exp(-2 * t) for t in time]
#Calculate the analytical expression
analytical = [2 * math.exp(-2 * t) * (-1 + math.exp(t)) for t in time]
#Calculate the trapezoidal convolution
trapezoidal = convolve(exp1, exp2, dt)
#Calculate the scipy convolution
sci = signal.convolve(exp1, exp2, mode = 'full')
#Slice the first half to obtain the causal convolution and multiply by dt
#to account for the step width
sci = sci[0:steps] * dt
#Calculate the convolution using the original Riemann sum algorithm
riemann = convolveoriginal(exp1, exp2)
riemann = riemann[0:steps] * dt
#Plot
plt.plot(time, analytical, label = 'analytical')
plt.plot(time, trapezoidal, 'o', label = 'trapezoidal')
plt.plot(time, riemann, 'o', label = 'Riemann')
plt.plot(time, sci, '.', label = 'scipy.signal.convolve')
plt.legend()
plt.show()
Thank you for your time!
Answer: Or, for those who prefer numpy to C: it will be slower than a C
implementation, but it's just a few lines.
>>> t = np.linspace(0, maxtime-dt, 50)
>>> fx = np.exp(-np.array(t))
>>> gx = 2*np.exp(-2*np.array(t))
>>> analytical = 2 * np.exp(-2 * t) * (-1 + np.exp(t))
this looks like trapezoidal in this case (but I didn't check the math)
>>> s2a = signal.convolve(fx[1:], gx, 'full')*dt
>>> s2b = signal.convolve(fx, gx[1:], 'full')*dt
>>> s = (s2a+s2b)/2
>>> s[:10]
array([ 0.17235682, 0.29706872, 0.38433313, 0.44235042, 0.47770012,
0.49564748, 0.50039326, 0.49527721, 0.48294359, 0.46547582])
>>> analytical[:10]
array([ 0. , 0.17221333, 0.29682141, 0.38401317, 0.44198216,
0.47730244, 0.49523485, 0.49997668, 0.49486489, 0.48254154])
largest absolute error:
>>> np.max(np.abs(s[:len(analytical)-1] - analytical[1:]))
0.00041657780840698155
>>> np.argmax(np.abs(s[:len(analytical)-1] - analytical[1:]))
6
|
Understanding eclipse project natures and workflow for web development
Question: I need to code a complicated web page with new html5 features, using
javascript a lot. Later I will use this as a template for Django. I was
advised to use Eclipse for JavaScript Web Developers, and to create a project
«with Javascript nature». Okay, I could do the entire client side development
in this project. The question is: when it comes to server coding, should I
just create a Django project and copy the template files there, or rather
change something in the project «nature» (or whatever you call it) to make the
project handle both javascript and Python editing features? Or are there other
good ways to organise the work?
Answer: I use Eclipse all the time for Django development of both front-end and back-
end code. I usually create the project with the "django project nature" using
the Aptana PyDev plugin (typically after importing my code, I'll right click
on the django project directory and choose from the menu "PyDev > Set as
Django Project").
I don't know your specific situation, but I've never had to explicitly set
anything to get javascript files to work. Typically clicking on a file with a
.js extension will open a javascript editor and give me javascript syntax
highlighting and such. If you have the PyDev plugin installed, perhaps you can
just click the "Set as Django Project" menu item mentioned above.
I'm not sure if I'm understanding your question properly, but Eclipse does
allow you to edit your javascript and Django code fairly well within its
interface. Hope that helps,
Joe
|
Pythonic way to Validate time input (only for Hr, Min, Sec)
Question: In my application, I am getting time input in a variable in string format,
e.g. `values = '12,12,12'`.
Now I need to validate it (`h<24, M<60, S<60` etc.) and I want the final
output in `'%H:%M:%S'` format.
To get this I tried `datetime.time()`.
I first tried with `values = '12'`, then with `'12,12,12'`.
In [1]: import datetime
In [2]: values = '12'
In [3]: d = datetime.time(values)
TypeError Traceback (most recent call last)
/mypc/test/<ipython console> in <module>()
TypeError: an integer is required
In [4]: d = datetime.time(int(values))
In [5]: d
Out[5]: datetime.time(12, 0)
In [6]: d.strftime('%H:%M:%S')
Out[6]: '12:00:00'
In [7]: s = d.strftime('%H:%M:%S')
In [8]: s
Out[8]: '12:00:00'
In [9]: values = '12,12,12'
In [10]: d = datetime.time(int(values))
ValueError: invalid literal for int() with base 10: '12,12,12'
But it works as below.
In [24]: datetime.time(12,12,12).strftime('%H:%M:%S')
Out[24]: '12:12:12'
So the problem is that datetime.time() takes integer input and the string
`'12,12,12'` cannot be converted to an int.
Is there any other way (other than regexp) to do the validation for only
Hr:M:S?
Answer: You have to unpack the values:
>>> import ast
>>> values = '12,12,12'
>>> values = ast.literal_eval(values)
>>> datetime.time(*values)
datetime.time(12, 12, 12)
This last statement will raise an error if the time given is invalid.
To avoid problems with zero-padded numbers, as "wim" pointed out, it's
possible to change the second line to:
values = (int(i) for i in values.split(','))
or
values = map(int, values.split(','))
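Putting it together, a small helper sketch (the name `parse_hms` is
hypothetical; `datetime.time` raises ValueError for out-of-range values):
    import datetime
    def parse_hms(values):
        h, m, s = map(int, values.split(','))
        # datetime.time validates h<24, m<60, s<60 for us
        return datetime.time(h, m, s).strftime('%H:%M:%S')
    print parse_hms('12,12,12')  # '12:12:12'
    print parse_hms('25,0,0')    # raises ValueError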
|
How do i add a python script to the startup registry?
Question: I'm trying to make my python script run upon startup, but I get the error
message WindowsError: access denied. I should be able to make programs start
upon boot, because teamviewer (a third-party program I downloaded) runs every
time I restart my computer, so I know it can be done (teamviewer might be
doing something different though, so if you could shed some light on what it
does to get its program to run at startup that would be helpful).
Here is my script
import _winreg, webbrowser
key = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER,'Software\Microsoft\Windows\CurrentVersion\Run')
_winreg.SetValueEx(key,'pytest',0,_winreg.REG_BINARY,'C:\Users\"USERNAME"\Desktop\test.py')
key.Close()
webbrowser.open('www.youtube.com')
Any input is appreciated.
Answer:
import webbrowser
webbrowser.open('www.youtube.com')
Get rid of all of that _winreg stuff. Instead, you (assuming double-clicking
on a py file opens the console) should simply place it in your startup folder
(`C:\Users\yourusername\AppData\Roaming\Microsoft\Windows\Start
Menu\Programs\Startup` on Windows 7, and `C:\Documents and
Settings\yourusername\Start Menu\Programs\Startup` in XP). This works because
Windows tries to open all files in the startup folder, and if Python opens PYs
by default, Windows will open the Python console. Try restarting, that should
work.
|
retrieve data from post requests in the form of JSON structure
Question: I am working on app engine in python, with a handler to respond to the post
requests from javascript. The post requests are sent in JSON structure; in
firebug's console window, the Http request is shown as:
[{"name":"John","value":"student"},{"name":"Mike","value":"teacher"}]
Can anyone tell me how to get this JSON data off the post request in the
handler? Thanks in advance.
Answer:
from google.appengine.ext import webapp
import json
class Handler(webapp.RequestHandler):
def post(self):
# Convert JSON from the request to Python objects.
data = json.loads(self.request.body)
# For your sample input, data will be a list of dicts.
for person in data:
name = person['name']
value = person['value']
|
New-style Signal and Slot Support, PyQt
Question: I'm writing a GUI application with PyQt4 (Python3). A friend of mine pointed
out that using pyuic4 is bad practice and referred me to the uic module and
the Connecting Slots By Name feature. He didn't have time to explain more and
the references I have are rather short; I couldn't grasp the idea from them ([uic
module](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/designer.html#the-
uic-module),
[LoadingUIFilesAtRuntime](http://diotavelli.net/PyQtWiki/LoadingUIFilesAtRuntime),
[connecting slots by
name](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/new_style_signals_slots.html#connecting-
slots-by-name)).
On StackOverflow there is at least one [related
question](http://stackoverflow.com/questions/3113002/starting-python-and-pyqt-
tutorials-books-general-approaches) but the links to the literature are broken
there.
I could follow the standard tutorials and did a simple GUI using pyuic, but
now I feel a little bit confused... Any good examples and/or references are
welcome. Thanks.
Answer: Firstly, using `pyuic4` is certainly not "bad practice".
There are three main ways to get PyQt4 UI's into your code:
* Write it all by hand yourself
* Use `pyuic4` to auto-generate a python module that can be imported
* Use the `uic` package to load `ui` files directly at runtime
Of these, the first two are by far the most common, and most documentation,
tutorials, advice, etc that you will come across will use those methods.
A good source for PyQt4 tutorials can be found in [this
section](http://www.diotavelli.net/PyQtWiki/Tutorials) of the [PyQt4
Wiki](http://www.diotavelli.net/PyQtWiki/StartPage). However, I should
probably point out that, although still relevant, many of them are quite old
and so still use the old-style signals and slots.
However, the difference between the old- and new- styles is not that difficult
to understand, so maybe a simple example is all that's needed.
Here's the old-style way to connect a button-click signal to a handler method
(aka slot):
self.connect(self.button, QtCore.SIGNAL('clicked()'), self.handleButtonClick)
and here's the new-style way:
self.button.clicked.connect(self.handleButtonClick)
As you can see, the new-style is much simpler and more pythonic. On the other
hand, the old-style is quite similar to how signals are connected using C++
(and for this reason can still be useful in certain circumstances).
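For context, here is a minimal self-contained sketch using the new-style
connection (the widget and handler names are made up):
import sys
from PyQt4 import QtGui

class Window(QtGui.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.button = QtGui.QPushButton('Click me', self)
        # New-style connection: no SIGNAL() string required.
        self.button.clicked.connect(self.handleButtonClick)

    def handleButtonClick(self):
        print('button clicked')

if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    window = Window()
    window.show()
    sys.exit(app.exec_())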
If you have problems with connecting signals when writing your GUIs, you can
always ask a question here - but it's much easier to get good answers if you
ask specific questions that include example code.
|
Global Variable in Python
Question: I am very new to Python and have not learnt classes yet. Using Python 3.2.2,
I tried to implement some procedural C-style logic. My code is spread over 2
files as follows.
## this file is called date.py
dd,mm,yy=0,0,0
def isValidDate(d,m,y):
    if(d>=1 and d<=31 and m>=1 and m<=12):
        dd,mm,yy=d,m,y #problem
        print(dd,mm,yy) #problem
        print(d,m,y) #problem
        return True
    else:
        return False

def printDate():
    print(dd,mm,yy) #problem
## this file is called module1.py
import date
def main():
    dd,mm,yy = 23,1,1984
    valid = date.isValidDate(dd,mm,yy) #problem
    print (valid)
    date.printDate()
    date.dd=22 #problem
    date.printDate()

if __name__ == '__main__':
    main()
The lines that I have commented as "problem" are giving me problems.
When the statement in `module1.py` (which is `"date.dd=22"`) executes, the
value of the global variable in `date.py` changes to `22` and stays that way.
But when I call the `isValidDate()` function, even though the test condition
is passed, the assignment does not seem to be permanent. The local print
statement in `isValidDate()` function shows that `"dd"` is assigned the value,
but in the `main()` function the call to `printDate()` shows the original
values.
What is going on?
Answer: To assign values to a global variable in your code you've got to mark it as
such, otherwise the code would be assigning to a local variable with the same
name (that would be actually shadowing the global variable).
As explained in the
[documentation](http://docs.python.org/reference/simple_stmts.html#the-global-
statement):
> It would be impossible to assign to a global variable without global,
> although free variables may refer to globals without being declared global.
Hence, to fix the problem, you'd need something like this:
def isValidDate(d,m,y):
    if(d>=1 and d<=31 and m>=1 and m<=12):
        global dd, mm, yy
        dd,mm,yy=d,m,y
        ...
Note that `date.dd = 22` in `module1.py` already worked without any declaration
because it assigns an attribute of the imported module object; the `global`
statement is only needed for bare-name assignments inside a function.
|
What is the recommended method for creating a library with multiple classes?
Question: I am relatively new to C++ (previous experience in Python and dabbled in Java)
and I am writing a small program as a familiarization project. As part of the
program I am writing a class to decode some data and will eventually write a
similar one that will perform the encoding. The code is something that I am
sure I will reuse quite often and thought it would be interesting to create a
library as part of the project. My question is, what is considered the best
practice for creating a library?
**Edit: (Revised)**
After asking this question, I realized that I didn't know what I didn't know.
I have done some more research and this should help make my question more
specific:
* I am developing in Qt Creator. So specifics relating to Qt would be helpful, but not necessary.
* I have created a new static library project in Qt (MyCodec) that, currently, has one class defined called MyDecoder.
* As a library, my assumption is that, to add MyEncoder, I just create another class/header file.
* What happens next is where I am uncertain. Do I just build the library? My understanding is that it will create (in Windows) a .lib and a .h file. Is there something I should do before this step? Are there options that will affect the way I interact with it?
* Do I just include that header file in my program to access both classes that I wrote?
* I found lots of answers on adding a .lib file to a project in Qt, so I don't need that information.
**Original Question: (for context)**
My initial thought is that it would be the most convenient to create MyLib
that includes both MyEncoder and MyDecoder classes.
* If I were to do it that way, do I just declare both classes in the header?
* I would like to create a DLL out of this library for portability and experience. I'm sure there is lots of information out there about creating and using DLLs (which is not the subject of this question), but if there is a particularly good tutorial (for Qt) please pass it along.
* My assumption is that it would be best to use separate namespaces for MyEncoder and MyDecoder for this implementation vs. one namespace for MyLib?
I can see one trade off of this method being the size of the application,
since including MyLib.h would include the code for the encoder and decoder (if
the encoder and decoder were separate applications). This is assuming I am not
using a DLL.
I guess what I am getting at is:
* What methods are available (and recommended)?
* What are the trade offs of each?
* Where can I find documentation (tutorials/examples) on this specific subject? My search efforts have not yielded much for results.
If it helps to be more specific, I am doing my development using Qt 4.7.4 in
Qt Creator.
Answer: One "best practice" in C++ with regards to libraries is typically "you pay for
what you use."
How this applies to your question is that you would have MyEncoder and
MyDecoder in separate header files. So if the user wants to use a MyEncoder he
would include MyEncoder.h, if he wanted to use MyDecoder he would include
MyDecoder.h, and if he wanted to use both he would include both headers.
The linker will typically only include the parts of the code that you use in
the executable, so there is no penalty as far as code size goes, but there is
a penalty in compile times, particularly if you start using advanced template
techniques in your classes. Compile times can get pretty long in large
projects, so it is important to be able to only include what you are going to
use.
Of course, sometimes it is also convenient to include everything with one
header. So what you could have is this:
* MyEncoder.h
* MyDecoder.h
* MyCodec.h
and then MyCodec.h could include both MyEncoder.h and MyDecoder.h
There is probably no good reason to have MyEncoder and MyDecoder in different
namespaces, assuming they are meant to operate on the same type of data.
You might want to have something like a MyCodec namespace, and declare
MyEncoder and MyDecoder within that namespace.
**Updated for your revision:**
> As a library, my assumption is that, to add MyEncoder, I just create another
> class/header file.
That is a correct assumption.
> What happens next is where I am uncertain. Do I just build the library? My
> understanding is that it will create (in Windows) a .lib and a .h file. Is
> there something I should do before this step? Are there options that will
> affect the way I interact with it?
I haven't used Qt creator in a while, so I can't speak with authority on it or
how to access the relevant options. But as a general rule you will want to
have at least 2 versions of your library; a debug version and a release
version. If your library uses the Qt libraries, then when an application links
to the debug version of your library, they will need to have the debug version
of the Qt shared libraries in their path, and if they link to your release
version they will need to have the release version of the Qt libs.
There may also be options of whether you want to statically link to the C++
standard runtime libraries, or dynamically link to the DLLs.
But essentially yes, you just build the library and then the application that
uses it will link the library to the executable.
> Do I just include that header file in my program to access both classes that
> I wrote?
You include the header file, and link to the .lib file. That's all you should
need to do.
|
Invalid syntax in "for item in L" loop
Question: I have a feeling I'm missing something pretty simple here but, in this one
function:
def triplets(perimeter):
    triplets, n, a, b, c = 0 #number of triplets, a, b, c, sides of a triangle, n is used to calculate a triple
    L = primes(int(math.sqrt(perimeter)) #list of primes to divide the perimeter
    for item in L: #iterate through the list of primes
        if perimeter % item == 0: #check if a prime divides the perimeter
            n = perimeter / item
            a = n**2 - (n+1)**2 #http://en.wikipedia.org/wiki/Pythagorean_triple
            b = 2n*(n+1)
            c = n**2 + n**2
            if a+b+c == perimeter: #check if it adds up to the perimeter of the triangle
                triplets = triplets + 1
    return triplets
I am getting the error:
for item in L:
^
SyntaxError: invalid syntax
For completeness my entire program looks like this:
import math
def primes(n): #get a list of primes below a number
    if n==2: return [2]
    elif n<2: return []
    s=range(3,n+1,2)
    mroot = n ** 0.5
    half=(n+1)/2-1
    i=0
    m=3
    while m <= mroot:
        if s[i]:
            j=(m*m-3)/2
            s[j]=0
            while j<half:
                s[j]=0
                j+=m
        i=i+1
        m=2*i+3
    return [2]+[x for x in s if x]
def triplets(perimeter):
    triplets, n, a, b, c = 0 #number of triplets, a, b, c, sides of a triangle, n is used to calculate a triple
    L = primes(int(math.sqrt(perimeter)) #list of primes to divide the perimeter
    for item in L: #iterate through the list of primes
        if perimeter % item == 0: #check if a prime divides the perimeter
            n = perimeter / item
            a = n**2 - (n+1)**2 #http://en.wikipedia.org/wiki/Pythagorean_triple
            b = 2n*(n+1)
            c = n**2 + n**2
            if a+b+c == perimeter: #check if it adds up to the perimeter of the triangle
                triplets = triplets + 1
    return triplets
def solve():
    best = 0
    perimeter = 0
    for i in range(1, 1000):
        if triplets(i) > best:
            best = triplets(i)
            perimeter = i
    return perimeter
print solve()
I am using Python 2.7.1. I have a colon after the for loop, and the
`primes(n)` function works. I have a feeling it is probably something stupid,
but I can't figure out what is causing this invalid syntax.
Answer: You are missing a closing parenthesis on the line before:
L = primes(int(math.sqrt(perimeter)) #list of primes to divide the perimeter
#         ^   ^         ^         ^^
# nesting 1   2         3         21
See how we don't reach 0 in the "nesting count" below the line? Add a third
closing parenthesis before the comment.
Once that is fixed, two more problems are waiting in the same function:
`b = 2n*(n+1)` is also a syntax error (write `2*n*(n+1)`), and
`triplets, n, a, b, c = 0` will fail at runtime because a single integer
cannot be unpacked into five names (use `triplets = n = a = b = c = 0`).
|
Python ICMP ping implementation when pinging multiple ips from threads?
Question: I've been using [jedie's python `ping`
implementation](https://github.com/jedie/python-ping/blob/master/ping.py) on
Windows. I could be wrong, but when pinging two computers (A and B) from
separate threads, ping will return the first ping it receives, regardless of
source.
Since it could be an issue with jedie's fork, I [reverted to the previous
version](http://www.g-loaded.eu/2009/10/30/python-ping/). (This is the version
I'm going to explore below)
I added in a line of code in `receive_one_ping`: (Line 134 or similar)
recPacket, addr = my_socket.recvfrom(1024) # Existing line
print "dest: {}, recv addr: {}.".format(dest_addr, addr) # New line
This allows us to see the address of the ping we're receiving. (Should be same
as the destination IP, right?)
**Testing:**
ping1() pings a known offline IP (1.2.3.4),
ping2() pings a known online IP (192.168.1.1 - my router)
>>> from ping import do_one
>>> def ping1():
print "Offline:", do_one("1.2.3.4",1)
>>> ping1()
Offline: None
>>> def ping2():
print "Online:", do_one("192.168.1.1",1)
>>> ping2()
Online: dest: 192.168.1.1, recv addr: ('192.168.1.1', 0).
0.000403682590942
Now if we do them together: (Using Timer for simplicity)
>>> from threading import Timer
>>> t1 = Timer(1, ping1)
>>> t2 = Timer(1, ping2)
>>> t1.start(); t2.start()
>>> Offline:Online: dest: 192.168.1.1, recv addr: ('192.168.1.1', 0).dest: 1.2.3.4, recv addr: ('192.168.1.1', 0).
0.0004508952953870.000423517514093
It's a little mangled (due to print not working nicely with threading), so
here it is a bit clearer:
>>> Online: dest: 192.168.1.1, recv addr: ('192.168.1.1', 0).
Offline:dest: 1.2.3.4, recv addr: ('192.168.1.1', 0). # this is the issue - I assume dest should be the same as recv address?
0.000450895295387
0.000423517514093
**My questions:**
1. Can anyone recreate this?
2. Should ping be behaving like this? I assume not.
3. Is there an existing ICMP ping for python that will not have this behaviour?
Alternatively, can you think of an easy fix - ie polling `receive_one_ping`
until our destination matches our receive address?
**Edit:** I've created an issue on the [python-ping github
page](https://github.com/jedie/python-ping/issues/10)
Answer: This is happening because of the nature of ICMP. ICMP has no concept of ports,
so _all_ ICMP messages are received by _all_ listeners.
The usual way to disambiguate is to set a unique identifier in the ICMP ECHO
REQUEST payload, and look for it in the response. This code appears to do
that, but it uses the current process id to compose the ID. Since this is
multithreaded code, they will share a process id and all listeners in the
current process will think all ECHO REPLYs are ones they themselves sent!
You need to change the `ID` variable in `do_one()` so that it is per-thread
unique. You will need to change this line in `do_one()`:
my_ID = os.getpid() & 0xFFFF
_Possibly_ this will work as an alternative, but ideally you should use a real
16-bit hashing function:
# add to module header
try:
    from thread import get_ident
except ImportError:
    try:
        from _thread import get_ident
    except ImportError:
        def get_ident():
            return 0

# now in do_one() body:
my_ID = (get_ident() ^ os.getpid()) & 0xFFFF
I don't know if this module has any other thread issues, but it _seems_ to be
ok from a cursory examination.
Using the jedie implementation, you would make a similar change for the
`Ping()` `own_id` constructor argument. You can either pass in an id you know
to be unique (like above) and manage `Ping()` objects yourself, or you can
change this line (110) in the constructor:
self.own_id = os.getpid() & 0xFFFF
Also see [this question and answer and answer comment
thread](http://stackoverflow.com/q/6904135/1002469) for more info.
|
Python: Writing output to text file, but text file does not contain the entire string and truncates randomly?
Question: I am parsing an xls file using Python then converting that information into
[SBML](http://en.wikipedia.org/wiki/SBML) (a version of XML).
from mod2sbml import Parser
s = open('sbmltest3.mod', 'r').read()
p = Parser()
d = p.parse(s)
outfile2 = open('sbmlconvert.xml', 'w')
print >> outfile2, d.toSBML()
outfile2.close()
This is a fairly long file (>3000 lines) and when I open the .xml, the string
is truncated randomly around 1400 or 3000 lines. However, when I type: `print
d.toSBML()` and print this string to console, the string is not truncated and
I can see the end of the parsed string.
What could be the problem here?
**Edit:** To further dissect the problem, I have closed the file with
outfile2.close() and also tried printing to the console from my script.
This prints both `s` and `d` truncated. However, when I type the exact
commands into the interpreter separately, both print correctly. Does anyone
know what's going on with this discrepancy?
Answer: The likeliest cause of randomly truncated output is buffered data that
never reaches the disk because the file is not flushed and closed properly. A
`with` block guarantees that the file is flushed and closed even if an
exception is raised part-way through. Try this:
from mod2sbml import Parser
p = Parser()
with open('sbmlconvert.xml', 'w') as of:
    s = open('sbmltest3.mod', 'r').read()
    d = p.parse(s)
    of.write(d.toSBML())
|
Splitting letters and numbers to figure out and assign value in table
Question: I currently have some code in Python where I have entered the values from a
table:
rules = { "213" : ( 0.00019, 3.5, 0.00019, 3.5 ),
"222" : ( 0.00019, 4.0, 0.00019, min( 4.0, 4.1E-8 * dm**3 - 4.1E-5 * dm**2 + 0.017 * dm + 1.35 ) ),
"223" : ( 0.0003, 4.5, 0.0003, 4.5 ),
"230" : ( 0.00017, 4.5, 0.00017, 3.3 ),
"231" : ( 0.00027, 5.5, 0.00027, 5.1E-6 * dm**2 - 0.0057 * dm + 4.6 ),
"232" : ( 0.00036, 6.0, 0.00036, 7.1E-6 * dm**2 - 0.007 * dm + 5.79 ),
"239" : ( 0.00017, 4.5, 0.00017, 2.9 ),
"240" : ( 0.00027, 6.5, 0.00027, 9.1E-6 * dm**2 - 0.01 * dm + 6.9 ),
"241" : ( 0.00049, 7.0, 0.00049, 3.1E-5 * dm**2 - 0.032 * dm + 8.7 ) }
serialNumber = [ "name" ][ 0 : 3 ]
try:
return rules[ serialNumber ]
The columns (readings from L-R in the brackets): F1ISO, F0ISO, F1COR, F0COR
I would like to process an input 'name' (the name being a combination of
numbers and letters, always in a form like 11111A, though there can be two
letters at the end).
I want to be able to split the 'name' into the letters and numbers but more
importantly I am looking at the first 3 numbers and the letters. With these
first three numbers I want to be able to read from the 'table' above but the
values which are chosen also depend on the letter.
The main rules are: If the letter is equal to V then the 'ISO' values are
taken. Any other combination takes the 'COR' values.
Thank you to anyone who can help.
Answer: This could help you understand how to get the two parts from your entry:
>>> name = "11111A"
>>> ser = name[:3] # first 3 characters
>>> code = name[3:] # rest of chars after the third
>>> ser
'111'
>>> code
'11A'
>>>
or maybe:
>>> code = name[-1] # last character
>>> code
'A'
then, in the same way:
>>> contents = rules[ser]
>>> if code == 'V':
... print contents[:2]
... else:
... print contents[2:]
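Putting the two parts together, a minimal sketch of a lookup function (the
function name is made up; it assumes the `rules` dict from the question is in
scope):
def get_values(name, rules):
    """Return (F1, F0): the ISO pair if the name ends in 'V', else the COR pair."""
    f1_iso, f0_iso, f1_cor, f0_cor = rules[name[:3]]
    if name.endswith('V'):
        return f1_iso, f0_iso
    return f1_cor, f0_cor

print get_values('21300V', rules)   # -> (0.00019, 3.5)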
|
Rename Files Based on File Content
Question: Using Python, I'm trying to rename a series of .txt files in a directory
according to a specific phrase in each given text file. Put differently and
more specifically, I have a few hundred text files with arbitrary names but
within each file is a unique phrase (something like No. 85-2156). I would like
to replace the arbitrary file name with that given phrase for every text file.
The phrase is not always on the same line (though it doesn't deviate that
much) but it always is in the same format and with the No. prefix.
I've looked at the [os module](http://docs.python.org/library/os.html) and I
understand how
* [`os.listdir`](http://docs.python.org/library/os.html#os.listdir)
* [`os.path.join`](http://docs.python.org/library/os.html#os.rename)
* [`os.rename`](http://docs.python.org/library/os.path.html#os.path.join)
could be useful but I don't understand how to combine those functions with
intratext manipulation functions like
[linecache](http://docs.python.org/library/linecache.html) or general line
reading functions.
I've thought through many ways of accomplishing this task but it seems like
easiest and most efficient way would be to create a loop that finds the unique
phrase in a file, assigns it to a variable and use that variable to rename the
file before moving to the next file.
This seems like it should be easy, so much so that I feel silly writing this
question. I've spent the last few hours looking reading documentation and
parsing through StackOverflow but it doesn't seem like anyone has quite had
this issue before -- or at least they haven't asked about their problem.
Can anyone point me in the right direction?
EDIT 1: When I create the regex pattern using [this
website](http://www.txt2re.com/), it creates bulky but seemingly workable
code:
import re
txt='No. 09-1159'
re1='(No)' # Word 1
re2='(\\.)' # Any Single Character 1
re3='( )' # White Space 1
re4='(\\d)' # Any Single Digit 1
re5='(\\d)' # Any Single Digit 2
re6='(-)' # Any Single Character 2
re7='(\\d)' # Any Single Digit 3
re8='(\\d)' # Any Single Digit 4
re9='(\\d)' # Any Single Digit 5
re10='(\\d)' # Any Single Digit 6
rg = re.compile(re1+re2+re3+re4+re5+re6+re7+re8+re9+re10,re.IGNORECASE|re.DOTALL)
m = rg.search(txt)
name = m.group(0)
print name
When I manipulate that to fit the `glob.glob` structure, and make it like
this:
import glob
import os
import re
re1='(No)' # Word 1
re2='(\\.)' # Any Single Character 1
re3='( )' # White Space 1
re4='(\\d)' # Any Single Digit 1
re5='(\\d)' # Any Single Digit 2
re6='(-)' # Any Single Character 2
re7='(\\d)' # Any Single Digit 3
re8='(\\d)' # Any Single Digit 4
re9='(\\d)' # Any Single Digit 5
re10='(\\d)' # Any Single Digit 6
rg = re.compile(re1+re2+re3+re4+re5+re6+re7+re8+re9+re10,re.IGNORECASE|re.DOTALL)
for fname in glob.glob("\file\structure\here\*.txt"):
    with open(fname) as f:
        contents = f.read()
    tname = rg.search(contents)
    print tname
Then this prints out the byte location of the pattern -- signifying that
the regex pattern is correct. However, when I add in the `nname =
tname.group(0)` line after the original `tname = rg.search(contents)` and
change around the print function to reflect the change, it gives me the
following error: AttributeError: 'NoneType' object has no attribute 'group'.
When I tried copying and pasting @joaquin's code line for line, it came up
with the same error. I was going to post this as a comment to the @spatz
answer, but I wanted to include so much code that this seemed to be a better
way to express the 'new' problem. Thank you all for the help so far.
Edit 2: This is for the @joaquin answer below:
import glob
import os
import re
for fname in glob.glob("/directory/structure/here/*.txt"):
    with open(fname) as f:
        contents = f.read()
    tname = re.search('No\. (\d\d\-\d\d\d\d)', contents)
    nname = tname.group(1)
    print nname
Last Edit: I got it to work using mostly the code as written. What was
happening is that there were some files that didn't have that regex expression
so I assumed Python would skip them. Silly me. So I spent three days learning
to write two lines of code (I know the lesson is more than that). I also used
the error catching method recommended here. I wish I could check all of you as
the answer, but I bothered @Joaquin the most so I gave it to him. This was a
great learning experience. Thank you all for being so generous with your time.
The final code is below.
import os
import re
pat3 = "No\. (\d\d-\d\d\d\d)"
ext = '.txt'
mydir = '/directory/files/here'
for arch in os.listdir(mydir):
    archpath = os.path.join(mydir, arch)
    with open(archpath) as f:
        txt = f.read()
    s = re.search(pat3, txt)
    if s is None:
        continue
    name = s.group(1)
    newpath = os.path.join(mydir, name)
    if not os.path.exists(newpath):
        os.rename(archpath, newpath + ext)
    else:
        print '{} already exists, passing'.format(newpath)
Answer: There is no checking or protection for failures (checking if archpath is
a file, if newpath already exists, if the search is successful, etc...), but
this should work:
import os
import re
pat = "No\. (\d\d\-\d\d\d\d)"
mydir = 'mydir'
for arch in os.listdir(mydir):
    archpath = os.path.join(mydir, arch)
    with open(archpath) as f:
        txt = f.read()
    s = re.search(pat, txt)
    name = s.group(1)
    newpath = os.path.join(mydir, name)
    os.rename(archpath, newpath)
* * *
**Edit:** I tested the regex to show how it works:
>>> import re
>>> pat = "No\. (\d\d\-\d\d\d\d)"
>>> txt='nothing here or whatever No. 09-1159 you want, does not matter'
>>> s = re.search(pat, txt)
>>> s.group(1)
'09-1159'
>>>
The regex is very simple:
\. -> a dot
\d -> a decimal digit
\- -> a dash
So, it says: search for the string `"No. "` followed by 2+4 decimal digits
separated by a dash. The parentheses are to create a group that I can recover
with `s.group(1)` and that contains the code number.
And that is what you get, before and after:
(screenshot: the directory listing before and after the renaming)
Text of files one.txt, two.txt and three.txt is always the same, only the
number changes:
this is the first
file with a number
nothing here or whatever No. 09-1159 you want, does not matter
the number is
|
Scrapy start_urls
Question: [The
script](https://github.com/scrapy/dirbot/blob/master/dirbot/spiders/dmoz.py)
(below) from [this](http://doc.scrapy.org/en/latest/intro/tutorial.html)
tutorial contains two `start_urls`.
from scrapy.spider import Spider
from scrapy.selector import Selector
from dirbot.items import Website

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below is a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html

        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []
        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
            items.append(item)
        return items
But why does it scrape only these 2 web pages? I see `allowed_domains =
["dmoz.org"]`, but these two pages also contain links to other pages which are
within the `dmoz.org` domain! Why doesn't it scrape them too?
Answer: The `start_urls` class attribute contains only the starting URLs,
nothing more. If you have extracted URLs of other pages you want to scrape,
yield the corresponding requests from the `parse` callback, each with
[another] callback:
import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider

class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = [
        'http://www.domain.com/'
    ]
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse main page and extract categories links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback = self.parseCategory)

    def parseCategory(self, response):
        '''Parse category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback = self.parseItem)

    def parseItem(self, response):
        ...
If you still want to customize the creation of start requests, override the method
[BaseSpider.start_requests()](http://readthedocs.org/docs/scrapy/en/latest/topics/spiders.html#scrapy.spider.BaseSpider.start_requests)
|
Unpickling mid-stream (python)
Question: I am writing scripts to process (very large) files by repeatedly unpickling
objects until EOF. I would like to partition the file and have separate
processes (in the cloud) unpickle and process separate parts.
However my partitioner is not intelligent, it does not know about the
boundaries between pickled objects in the file (since those boundaries depend
on the object types being pickled, etc.).
Is there a way to scan a file for a "start pickled object" sentinel? The naive
way would be to attempt unpickling at successive byte offsets until an object
is successfully pickled, but that yields unexpected errors. It seems that for
certain combinations of input, the unpickler falls out of sync and returns
nothing for the rest of the file (see code below).
import cPickle
import os

def stream_unpickle(file_obj):
    while True:
        start_pos = file_obj.tell()
        try:
            yield cPickle.load(file_obj)
        except (EOFError, KeyboardInterrupt):
            break
        except (cPickle.UnpicklingError, ValueError, KeyError, TypeError, ImportError):
            file_obj.seek(start_pos+1, os.SEEK_SET)

if __name__ == '__main__':
    import random
    from StringIO import StringIO
    # create some data
    sio = StringIO()
    [cPickle.dump(random.random(), sio, cPickle.HIGHEST_PROTOCOL) for _ in xrange(1000)]
    sio.flush()
    # read from subsequent offsets and find discontinuous jumps in object count
    size = sio.tell()
    last_count = None
    for step in xrange(size):
        sio.seek(step, os.SEEK_SET)
        count = sum(1 for _ in stream_unpickle(sio))
        if last_count is None or count == last_count - 1:
            last_count = count
        elif count != last_count:
            # if successful, these should never print (but they do...)
            print '%d elements read from byte %d' % (count, step)
            print '(%d elements read from byte %d)' % (last_count, step-1)
            last_count = count
Answer: The _[pickletools](http://docs.python.org/library/pickletools.html#module-
pickletools)_ module has a _dis_ function that shows the opcodes. It shows
that there is a STOP opcode that you can scan for:
>>> import pickle, pickletools, StringIO
>>> s = StringIO.StringIO()
>>> pickle.dump('abc', s)
>>> p = s.getvalue()
>>> pickletools.dis(p)
0: S STRING 'abc'
7: p PUT 0
10: . STOP
highest protocol among opcodes = 0
Note, using the STOP opcode is a bit tricky because the codes are of variable
length, but it may serve as a useful hint about where the cutoffs are.
If you control the pickling step on the other end, then you can improve the
situation by adding your own unambiguous alternative separator:
>>> sep = '\xDE\xAD\xBE\xEF'
>>> s = StringIO.StringIO()
>>> pickle.dump('abc', s)
>>> s.write(sep)
>>> pickle.dump([10, 20], s)
>>> s.write(sep)
>>> pickle.dump('def', s)
>>> s.write(sep)
>>> pickle.dump([30, 40], s)
>>> p = s.getvalue()
Before unpacking, split into separate pickles using the known separator:
>>> for pick in p.split(sep):
print pickle.loads(pick)
abc
[10, 20]
def
[30, 40]
|
import data to a database from another database in python
Question: I have two databases: one is located on localhost, the other one is located on
another server.
In Python, I want to import some data from database 2 into database 1:
import MySQLdb as mdb

con1 = mdb.connect(host=xx, user=xx, passwd=xx, db=xx)  # connect to database 1
cur1 = con1.cursor()
con2 = mdb.connect(host=xx, user=xx, passwd=xx, db=xx)  # connect to database 2
cur2 = con2.cursor()
query = "update database1.table1 set area = (select database2.table2.area from database2.table2)"
cur(1 or 2?).execute(query)  # Don't know how to solve this problem
Answer: No, you can't do cross-server operations like that.
1. Fetch your data from source database (in batches, if it's big)
2. Prepare and send update statements to target database.
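For example, a minimal sketch of that two-step approach, reusing the
connections from the question (the `id` join column is an assumption; adjust
the table and column names to your schema):
# Step 1: fetch the source rows from database 2.
cur2.execute("SELECT id, area FROM table2")
rows = cur2.fetchall()

# Step 2: apply them to database 1 in one batch.
cur1.executemany(
    "UPDATE table1 SET area = %s WHERE id = %s",
    [(area, row_id) for (row_id, area) in rows])
con1.commit()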
|
Passing Variable to Java Through JSP?
Question: I'm wondering if it is possible to pass form data from index.jsp file to a
java class and then back to the response.jsp. I'm trying to compute something
with the data they have entered into index.jsp but it can't be done in
Javascript. Is there a way that I can retrieve the data?
The other thing is using http servelts and writing the entire html in Java,
but that seems overly complex and not worth the effort if there are simpler
ways of doing this.
Thanks in advance for the help!
This is one of the scripts (in JavaScript) that I wrote to try to solve
this problem, but my class, RunPython.java, always comes up with an error:
<script type="text/javascript">
function onSubmit(){
var Bugfile = document.forms[0]["BugFile"].value;
var GD = document.forms[0]["GD"].value;
<%
String s = request.getParameter("Bugfile");
String d = request.getParameter("GD");
RunPython re = new RunPython(s,d);
%>
}
**Error:**
org.apache.jasper.JasperException: PWC6033: Error in Javac compilation for JSP
PWC6197: An error occurred at line: 61 in the jsp file: /index.jsp
PWC6199: Generated servlet error:
string:///index_jsp.java:106: cannot find symbol
symbol  : class RunPython
location: class org.apache.jsp.index_jsp
Answer: > Generated servlet error: string:///index_jsp.java:106: cannot find symbol
> symbol : class RunPython
> location: class org.apache.jsp.index_jsp
The compiler is just trying to tell you that it cannot resolve the mentioned
class in any of the imports. You need to import the mentioned class in JSP.
<%@ page import="com.example.RunPython" %>
This has **nothing** to do with passing variables around, although your
attempt seems to be pretty clumsy, but this is subject to a new question.
* * *
**Unrelated** to the concrete problem, you seem to misunderstand the purpose
and capabilities of servlets. They are not meant to write the HTML entirely in
it. Just only Java code. Start at [our servlets wiki
page](http://stackoverflow.com/tags/servlets/info) to learn about them.
|
Python- run socket script on the remote linux host
Question: I have uploaded the server script to the public directory on the server
machine. Then I try to connect to the server from a client, but the connection
is never established. Here are my code snippets:
# Echo client program
import socket
HOST = 'www.dotpy.ir/server.py' # The remote host
PORT = 50007 # The same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('Hello, world')
data = s.recv(1024)
s.close()
print 'Received', repr(data)
server:
# Echo server program
import socket
HOST = '' # Symbolic name meaning all available interfaces
PORT = 50007 # Arbitrary non-privileged port
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print 'Connected by', addr
while 1:
    data = conn.recv(1024)
    if not data: break
    conn.send(data)
conn.close()
Answer: These scripts seem to work well.
However, the server script must actually be running on the server for this to work.
It's not enough for it to be uploaded to the public file area. What access do
you have to the servers? Can you have scripts running on them?
If you succeed in running the script, then you will have to change the client
script from:
HOST = 'www.dotpy.ir/server.py' # The remote host
to
HOST = 'www.dotpy.ir' # The remote host
The reason is that you will connect to the host itself. There the script will
be running, listening to any inbound connections on the port specified. You
can't connect to a specific script.
Good luck!
|
Disabling HTML Encoding within Output of Custom Python Markdown Extension
Question: I have been working on creating a python markdown extension that will insert
an image gallery within my django project when a custom tag is used. The
actual extension is working, but the HTML that the extension returns is all
encoded. Here is the extension that I am using:
#!/usr/bin/env python
from django.template.loader import render_to_string
from main.models import *
import markdown

version = "0.1.0"

class GalleriaExtension(markdown.Extension):
    def __init__(self, configs):
        self.config = {
        }
        # Override defaults with user settings
        for key, value in configs:
            self.setConfig(key, value)

    def add_inline(self, md, name, klass, re):
        pattern = klass(re)
        pattern.md = md
        pattern.ext = self
        md.inlinePatterns.add(name, pattern, "<reference")

    def extendMarkdown(self, md, md_globals):
        self.add_inline(md, 'galleria', Galleria,
            r'\[\[(G|g)allery (?P<superpage_id>\w+)\]\]')

class Galleria(markdown.inlinepatterns.Pattern):
    def handleMatch(self, m):
        try:
            images = SuperPage.objects.get(id=m.group('superpage_id')).superpageimage_set.all()
        except:
            images = None
        if images:
            rendered = render_to_string('galleria.html', { 'images': images })
        else:
            rendered = '<b>There are no images for the given ID</b>'
        return rendered

def makeExtension(configs=None):
    return GalleriaExtension(configs=configs)
I ensured that the `render_to_string` was actually returning html that was not
encoded. From the shell here is an example of the output:
Output from `render_to_string`:
>>> from django.template.loader import render_to_string
>>> images = SuperPage.objects.get(id=8).superpageimage_set.all()
>>> render_to_string('galleria.html', { 'images': images })
u'<div class=\'galleria_std\'>\n <div class=\'gallery\' >\n <div id=\'stage_gallery\' >\n'
Here is output from `markdown` extension that is encoded:
>>>markdown.markdown('test [[gallery 8]] test', ['galleria'])
u'<p>test <div class=\'galleria_std\'>\n <div class=\'gallery\' >\n'
How can I make `rendered` return HTML mark up instead of encoded markup when
using the markdown extension? Also, I would appreciate any pointers on coding
this differently (syntax, layout, etc). I appreciate it.
Answer: If what you're looking for is a way to keep the contents rendered by your
custom tag from being autoescaped, then you can mark the string as safe with
[django.utils.safestring.mark_safe](https://docs.djangoproject.com/en/1.3/ref/utils/#django.utils.safestring.mark_safe).
You can see some examples in the django code itself. For example in
[`defaulttags.py`](https://code.djangoproject.com/browser/django/trunk/django/template/defaulttags.py).
|
Python crashing on Mac
Question: I just download a graphics module (found at
<http://mcsp.wartburg.edu/zelle/python/>) and wrote a quick program to test it
out. All it is supposed to do is create a window. It works, but the second the
window is created Python (not IDLE) goes non-responsive and I have to force
quit. What could be causing this? The code (that they provide as an example)
is:
from graphics import *
def main():
    win = GraphWin("My Circle", 100, 100)
    c = Circle(Point(50,50), 10)
    c.draw(win)
    win.getMouse()
    win.close()
After I click it suddenly crashes.
Answer: This is actually expected behavior. The line
win.getMouse()
blocks the interpreter and the window until you click. After the click, the line
win.close()
destroys the window, then your program terminates. This may appear as a
"crash" to you, but is actually the expected end of your Python program's run.
(If you're getting an error, post the trace in your question.)
|
python_d 2.7 will not load __m2crypto_d.pyd
Question: I've built python 2.7 debug with MSVC 2008 to debug a script that imports
M2Crypto. When I run my script, python_d correctly creates a Python_Eggs
cache, and copies both the __m2crypto.pyd and __m2crypto_d.pyd into it. But
then it attempts to load the non-debug python module from the cache, and
terminates because it contains no debug information. I've rebuilt both openssl
and M2Crypto and made certain that no other copies exist on the build machine
(a VM.) I've traced through python itself and cannot discover why it will not
load the _d.pyd.
Any ideas why this is happening?
Answer: First, the problem was that python attempted to load the non-debug version of
__m2crypto.pyd, which failed, because it lacked dependent components. This
caused python to terminate - not because the module was not found, but because
one if its children was not found. This is a critical error for python, and
whether this is a bug in python is for other minds to contemplate.
Using Dependency Walker, I discovered that the openssl libraries were not installed
into the python home folder. This was because the script that makes the
M2Crypto distribution package has a "feature" which does not include these
files. So the following resolved the issue:
1. Build openssl with debug
2. Modify the setup() call in M2Crypto\setup.py to include **data_files=['ssleay32.dll','libeay32.dll']**
3. Build M2Crypto with debug, using the openssl debug
4. Install M2Crypto.
5. Profit!
Afterwards, I was able to import M2Crypto into both python and python_d.
|
Built-in method to Generate Random Strings of Fixed Length From Given Characters
Question: This is what my problem is: I need to make a random string 50 characters long,
made up of `1`s and `0`s.
I know how to solve this problem, and even have a one-liner for it. I have
also looked for various solutions to this problem on SO, only to get back what
I already know([1](http://stackoverflow.com/questions/2257441/python-random-
string-generation-with-upper-case-letters-and-digits),
[2](http://stackoverflow.com/questions/2257441/python-random-string-
generation-with-upper-case-letters-and-digits), etc). But what I really want
is **the most** Pythonic way of doing this.
Currently, I'm leaning towards `''.join(random.choice('01') for i in
xrange(50))` (the choices must be strings, not ints, for `join` to work).
Is there a more pythonic way of doing this? Is there a built-in that does
something like this, perhaps in `itertools`?
Answer: For Python 2.7 or later:
In [83]: import random
In [84]: '{:050b}'.format(random.randrange(1<<50))
Out[84]: '10011110110110000011111000011100101111101001001011'
(In Python2.6, use `'{0:050b}'` instead of `'{:050b}'`.)
* * *
**Explanation** :
The `string.format` method can convert integers into their binary string
representations. The basic format code to do this is `'{:b}'`:
In [91]: '{:b}'.format(10)
Out[91]: '1010'
To make a string of width 50, use the format code `'{:50b}'`:
In [92]: '{:50b}'.format(10)
Out[92]: ' 1010'
and to fill in the whitespace with zeros, use `{:050b}`:
In [93]: '{:050b}'.format(10)
Out[93]: '00000000000000000000000000000000000000000000001010'
The [syntax for str.format](http://docs.python.org/library/string.html#format-
string-syntax) is a bit daunting at first. Here is my cheat sheet:
http://docs.python.org/library/string.html#format-string-syntax
replacement_field ::= "{" field_name ["!" conversion] [":" format_spec] "}"
field_name ::= (identifier|integer)("."attribute_name|"["element_index"]")*
attribute_name ::= identifier
element_index ::= integer
conversion ::= "r" | "s"
format_spec ::= [[fill]align][sign][#][0][width][,][.precision][type]
fill ::= <a character other than '}'>
align ::= "<" | ">" | "=" | "^"
"=" forces the padding to be placed after the sign (if any)
but before the digits. (for numeric types)
"<" left justification
">" right justification
"^" center justification
sign ::= "+" | "-" | " "
"+" places a plus/minus sign for all numbers
"-" places a sign only for negative numbers
" " places a leading space for positive numbers
# for integers with type b,o,x, tells format to prefix
output with 0b, 0o, or 0x.
0 enables zero-padding. equivalent to 0= fill align.
width ::= integer
, tells format to use a comma for a thousands separator
precision ::= integer
type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" |
"o" | "x" | "X" | "%"
c convert integer to corresponding unicode character
n uses a locale-aware separator
% multiplies number by 100, display in 'f' format, with percent sign
|
how to safely eval an urlencode-ed dictionary in Python?
Question: I should communicate with an online service that sends me the following data
urlencoded:
data_to_process = {...}
args = {'args0': data_to_process, 'action': 'save'}
result = urllib2.urlopen(..., urllib.urlencode(args), ...)
The main data of interest is stored in the `data_to_process` dictionary. I
would like to get back the original dictionary that's supposed to contain only
string, numeric and boolean values. How can you achieve this?
I've tried
eval(dict_str,{'__builtins__': None})
but this fails for `False` values (and who knows what else).
thanks for any ideas!
Answer: Use `ast.literal_eval`, which safely evaluates a string containing only
Python literals (strings, numbers, tuples, lists, dicts, booleans and `None`),
so arbitrary code in the input can never run:
>>> from ast import literal_eval
>>> test = repr({"ham": True, 42: "spam", "foo": "bar"})
>>> test
"{42: 'spam', 'foo': 'bar', 'ham': True}"
>>> literal_eval(test)
{42: 'spam', 'foo': 'bar', 'ham': True}
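Tying this back to the question's round trip, a small sketch (Python 2; the
payload is made up): `urllib.urlencode` stores a dict value as its `str()`
representation, which `literal_eval` can rebuild after URL decoding:
import urllib
import urlparse
from ast import literal_eval

args = {'args0': {'imsi': 42, 'ok': False}, 'action': 'save'}
encoded = urllib.urlencode(args)        # what travels over the wire
fields = urlparse.parse_qs(encoded)     # values come back as lists of strings
data = literal_eval(fields['args0'][0])
# data == {'imsi': 42, 'ok': False}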
|
Processing HTTP GET input parameter on server side in python
Question: I wrote a simple HTTP client and server in python for experienmenting. The
first code snippet below shows how I send an HTTP get request with a parameter
namely imsi. In the second code snippet I show my doGet function
implementation in the server side. My question is how I can extract the imsi
parameter in the server code and send a response back to the client in order
to signal the client that imsi is valid. Thanks.
P.S.: I verified that the client sends the request successfully.
**CLIENT code snippet**
params = urllib.urlencode({'imsi': str(imsi)})
conn = httplib.HTTPConnection(host + ':' + str(port))
#conn.set_debuglevel(1)
conn.request("GET", "/index.htm", 'imsi=' + str(imsi))
r = conn.getresponse()
**SERVER code snippet**
import sys, string,cStringIO, cgi,time,datetime
from os import curdir, sep
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    # I WANT TO EXTRACT imsi parameter here and send a success response to
    # back to the client.
    def do_GET(self):
        try:
            if self.path.endswith(".html"):
                #self.path has /index.htm
                f = open(curdir + sep + self.path)
                self.send_response(200)
                self.send_header('Content-type','text/html')
                self.end_headers()
                self.wfile.write("<h1>Device Static Content</h1>")
                self.wfile.write(f.read())
                f.close()
                return
            if self.path.endswith(".esp"): #our dynamic content
                self.send_response(200)
                self.send_header('Content-type','text/html')
                self.end_headers()
                self.wfile.write("<h1>Dynamic Dynamic Content</h1>")
                self.wfile.write("Today is the " + str(time.localtime()[7]))
                self.wfile.write(" day in the year " + str(time.localtime()[0]))
                return
            # The root
            self.send_response(200)
            self.send_header('Content-type','text/html')
            self.end_headers()
            lst = list(sys.argv[1])
            n = lst[len(lst) - 1]
            now = datetime.datetime.now()
            output = cStringIO.StringIO()
            output.write("<html><head>")
            output.write("<style type=\"text/css\">")
            output.write("h1 {color:blue;}")
            output.write("h2 {color:red;}")
            output.write("</style>")
            output.write("<h1>Device #" + n + " Root Content</h1>")
            output.write("<h2>Device Addr: " + sys.argv[1] + ":" + sys.argv[2] + "</h1>")
            output.write("<h2>Device Time: " + now.strftime("%Y-%m-%d %H:%M:%S") + "</h2>")
            output.write("</body>")
            output.write("</html>")
            self.wfile.write(output.getvalue())
            return
        except IOError:
            self.send_error(404,'File Not Found: %s' % self.path)
Answer: You can parse the query of a GET request using urlparse, then split the query
string.
from urlparse import urlparse
query = urlparse(self.path).query
query_components = dict(qc.split("=") for qc in query.split("&"))
imsi = query_components["imsi"]
# query_components = { "imsi" : "Hello" }
# Or use the parse_qs method
from urlparse import urlparse, parse_qs
query_components = parse_qs(urlparse(self.path).query)
imsi = query_components["imsi"]
# query_components = { "imsi" : ["Hello"] }
You can confirm this by using
curl http://your.host/?imsi=Hello
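One caveat about the client snippet in the question: it passes `'imsi=' +
str(imsi)` as the request body (the third argument to `conn.request`), where
the query-string parsing above will never see it; append it to the URL
instead, e.g. `conn.request("GET", "/index.htm?" + params)`. With that in
place, a minimal sketch of the success/failure response (the response text is
made up):
from urlparse import urlparse, parse_qs

# Inside MyHandler(BaseHTTPRequestHandler):
def do_GET(self):
    query_components = parse_qs(urlparse(self.path).query)
    imsi = query_components.get('imsi', [None])[0]
    if imsi is not None:
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write('imsi %s accepted' % imsi)
    else:
        self.send_error(400, 'missing imsi parameter')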
|
Django redirect with kwarg
Question: I am new to python and django and i have a question regarding the redirect
function.
This is a reduced version of my views.py file.
def page_index(request, error_message=''):
    print error_message

def add_page(request):
    return redirect('page_index') # this works fine
    return redirect('page_index', error_message='test') # this does not work
And here is a short version of my urls.py
urlpatterns = patterns(
'x.views',
url(r'^$', 'page_index', {'error_message': 't'}, name='page_index'),
url(r'^add/$', 'add_page', name='add_page'),
)
When i try redirecting to page_index without the keyword argument everything
works fine, but when i use the kwag i get the following error message:
> NoReverseMatch at /pages/add/
>
> Reverse for 'page_index' with arguments '()' and keyword arguments
> '{'error_message': 'test'}' not found.
What am i doing wrong?
Answer: Short answer: There is no place in your url scheme for the 'error_message'
keyword.
Longer answer:
The redirect() function is calling reverse() to build a URL; it is going to
send the user's browser to that URL by returning an HTTP response with a 302
redirect status code, and the new url. Any keyword arguments that you supply
to reverse() are supposed to end up as part of the url -- that's how they get
communicated to the user.
In your case, though, the url for 'page_index` is just defined as '^$' -- this
is the root url, which looks like 'http://yoursite.com/' in the browser.
If you want to be able to issue a redirect that contains other information,
you will need to define a place for it in the url, or add it in a different
way.
~~Two~~ Three ways are fairly common for this:
1. Use a query parameter -- this sends the message to the client explicitly; if you aren't careful, people can craft urls to make your index page say whatever they want it to.
return redirect(reverse('page_index') + "?error_message=test")
2. Stash the message in the session and pull it out when the next page loads -- this requires that you have sessions configured, and keeps track of everything on the server side, rather than relying on the client to send you back the error message:
def add_page(request):
    request.session['error_message'] = 'test'
    return redirect('page_index')

def page_index(request):
    print request.session.get('error_message','')
3. Use the messages framework for this -- this is preferred over ad-hoc session attributes, as long as you don't need too many 'types' of message on the same page. If all you have is a space in your template for error message, though, then this is really easy:
from django.contrib.messages import error

def add_page(request):
    error(request, 'test')
    return redirect('page_index')
And then in your base template, have a block like this somewhere (probably
more complex than this; styled, even):
{% for message in messages %}
<p>{{ message }}</p>
{% endfor %}
In ~~both~~ all cases, though, you can remove the arguments from your urls.py
-- the message itself is not going to be part of the path component of the
URL.
urlpatterns = patterns(
'x.views',
url(r'^$', 'page_index', name='page_index'),
url(r'^add/$', 'add_page', name='add_page'),
)
|
osx python - how do I decode sys.argv?
Question: I don't think I'm even using funky characters - just trying to pass in a "-d"
but my dash seems to get munged. If I just print sys.argv[1] it looks okay,
but if I print the entire list sys.argv, I can see funky characters instead of
my dash. Mac OSX 10.6.8 Python 2.6.1
#!/usr/bin/env python
import sys
if __name__ == "__main__":
    try:
        print "SVH FLAG sys.argv ",sys.argv
        num_args = len(sys.argv)
        for i in range(0,num_args):
            print "SVH FLAG sys.argv[",i,"] ",sys.argv[i]
    except:
        print "problem with sys.argv"
Which gives me this when I call it with -d: ./deleteme.py –d
/Library/Python/2.6/site-packages
SVH FLAG sys.argv ['./deleteme.py', '\xe2\x80\x93d', '/Library/Python/2.6/site-packages']
SVH FLAG sys.argv[ 0 ] ./deleteme.py
SVH FLAG sys.argv[ 1 ] –d
SVH FLAG sys.argv[ 2 ] /Library/Python/2.6/site-packages
That funky string on the first line of output seems to really mess up
something like optparse, which doesn't see my dash.
Is there something I need to tell sys to give me a normal looking argv?
Thanks in advance!
Answer: Somehow, you are not typing a hyphen, but an actual en-dash character
(typed with Option+hyphen on a Mac), Unicode code point 8211 (U+2013), whose
UTF-8 representation is the three-byte sequence `\xe2\x80\x93`. We always
refer to the character that precedes command-line options as a "dash", but
it's really a hyphen!
|
Using BeautifulSoup to search html for string
Question: I am using BeautifulSoup to look for user entered strings on a specific page.
For example, I want to see if the string 'Python' is located on the page:
<http://python.org>
When I used: `find_string = soup.body.findAll(text='Python')` find_string
returned `[]`
But when I used: `find_string = soup.body.findAll(text=re.compile('Python'),
limit=1)` find_string returned `[u'Python Jobs']` as expected
What is the difference between these two statements that makes the second
statement work when there are more than one instances of the word to be
searched
Answer: The following line is looking for the **exact** NavigableString 'Python':
>>> soup.body.findAll(text='Python')
[]
Note that the following NavigableString is found:
>>> soup.body.findAll(text='Python Jobs')
[u'Python Jobs']
Note this behaviour:
>>> import re
>>> soup.body.findAll(text=re.compile('^Python$'))
[]
So your regexp is looking for an occurrence of 'Python' not the exact match to
the NavigableString 'Python'.
|
BeautifulSoup and line breaks within a table row?
Question: Sample code:
from BeautifulSoup import BeautifulSoup, SoupStrainer
html='''<tr>
<td align="left">Foo<br />
Bar<br /></td>
</tr>'''
soup=BeautifulSoup(html)
rows=soup.findAll('tr')
print rows
print rows[0].text.encode("utf8")
I would like the output to be something like "Foo Bar" or even if there was an
actual newline between the two lines that would be fine, but the output I get
just has "FooBar", note that there is no whitespace between the two lines.
Very new to python and beautifulsoup, can someone give a hand?
Answer: You can go one level further using `cell = rows[0].find('td')`, then see its
contents using `cell.contents`, then filter the elements you need, then `join`
them by spaces.
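For the question's sample, a short sketch of that first approach
(BeautifulSoup 3; `html` is the markup from the question):
from BeautifulSoup import BeautifulSoup, NavigableString

soup = BeautifulSoup(html)
cell = soup.find('tr').find('td')
# Keep only the text nodes, dropping the <br /> tags between them.
parts = [unicode(c).strip() for c in cell.contents
         if isinstance(c, NavigableString)]
print ' '.join(p for p in parts if p)   # -> Foo Bar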
Another option: you can use a regular expression to replace the `<br />` tags
with a space. For that you can write:
import re
s = re.sub('<br\s*/?>', ' ', str(rows[0]))
Then you can replace multiple consecutive whitespaces by
s = re.sub('\s+', ' ', s)
Then the string should look like this:
>>> print s
<tr> <td align="left">Foo Bar </td> </tr>
Then you can easily extract the part you need.
|
ImportError: No module named mapnik (Ubuntu 10.10)
Question: An error occurred when running `generate_image.py` in `~/bin/mapnik` to
display image.png:
dewirobiatul@dewi:~/bin/mapnik$ ./generate_image.py
Traceback (most recent call last):
  File "./generate_image.py", line 6, in <module>
    import mapnik
ImportError: No module named mapnik
dewirobiatul@dewi:~/bin/mapnik$
I tried to install mapnik again with the command:
sudo python scons/scons.py configure INPUT_PLUGINS=all OPTIMIZATION=3 SYSTEM_FONTS=/usr/share/fonts/truetype/
but
python: can't open file 'scons/scons.py': [Errno 2] No such file or directory
Please help and suggest a solution :) thanks!
Answer: I don't have Ubuntu 10.10 here but you could follow the advice on
<https://github.com/mapnik/mapnik/wiki/UbuntuInstallation>
|
Converter for std::vector when passed by reference
Question: This is an follow up question to [std::vector to
boost::python::list](http://stackoverflow.com/questions/6157409/stdvector-to-
boostpythonlist/8949185#8949185)
I tried the provided example:
// C++ code
typedef std::vector<std::string> MyList;
class MyClass {
MyList myFuncGet();
void myFuncSet(MyList& list)
{
list.push_back("Hello");
}
};
// My Wrapper code
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
using namespace boost::python;
BOOST_PYTHON_MODULE(mymodule)
{
class_<MyList>("MyList")
.def(vector_indexing_suite<MyList>() );
class_<myClass>("MyClass")
.def("myFuncGet", &MyClass::myFuncGet)
.def("myFuncSet", &MyClass::myFuncSet)
;
}
But when I try to actually use it in Python I get an error (See bottom).
Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from mymoduleimport *
>>> mc = MyClass()
>>> p = []
>>> mc.myFuncSet(p)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Boost.Python.ArgumentError: Python argument types in
MyClass.myFuncSet(MyClass, list)
did not match C++ signature:
myFuncSet(MyClass {lvalue}, std::vector<std::string, std::allocator<std::string> > {lvalue})
From what I've been able to gather by reading various other sites & posts, a
converter is required. Can someone complete my example by adding in the
necessary converter code? I'd do it myself, but I'm not familiar enough with
boost to know what such a converter looks like.
Thanks in advance!
Answer: I believe you can only use converters when passing the argument by value or
const reference. Passing by nonconst reference requires that the type be
directly exposed. This means that if you want to pass a list from python to
c++, without copying the list items, you'll need to change your code to work
with a `boost::python::list` rather than `MyList`, which'll be something like
(untested)
void myFuncSet(boost::python::list& list)
{
list.append("Hello");
}
The vector indexing suite adds python list like behaviour to your MyList
binding, it doesn't let you pass a python list in its place.
The error you're getting in your example is because you're trying to pass a
python list to a function which takes a `std::vector<std::string>`. I suspect
this would work:
p = mc.myFuncGet()
mc.myFuncSet(p)
This is quite a helpful article on writing converters.
<http://misspent.wordpress.com/2009/09/27/how-to-write-boost-python-
converters/>
|
Python importing subdirectories
Question: I'm trying to import all files from a sub-directory, so I figured I could
write `__init__.py` in that sub-directory to import the files. However, when I
do this it does not seem to import anything.
File structure:
prog.py
module/
__init__.py
code.py
Code for `prog.py`: `pass`
Code for `__init__.py`: `import code`
Code for `code.py`: `print('hello')`
When I run prog.py nothing happens. Why does it not print `hello`, and is
there a better way to easily import everything from a sub-directory?
Answer: If you have the following structure:
package
__init__.py
module.py
In `__init__.py` you can either try this:
import package.module
or this:
from . import module
This way, if `package` is in your `PYTHONPATH`, you'll get the expected
behaviour:
>>> import package
hello
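Note also that nothing runs until something actually imports the package: in
your original layout, `prog.py` contains only `pass`, so the `__init__.py`
never executes. A minimal fix (assuming the layout above):

    # prog.py
    import package   # running prog.py now prints 'hello'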
|
How to portably parse the (Unicode) degree symbol with regular expressions?
Question: I'm writing a simple regular expression parser for the output of the `sensors`
utility on Ubuntu. Here's an example of a line of text I'm parsing:
temp1: +31.0°C (crit = +107.0°C)
And here's the regex I'm using to match that (in Python):
temp_re = re.compile(r'(temp1:)\s+(\+|-)(\d+\.\d+)\W\WC\s+'
r'\(crit\s+=\s+(\+|-)(\d+\.\d+)\W\WC\).*')
This code works as expected and matches the example text I've given above. The
only bits I'm really interested in are the numbers, so this bit:
(\+|-)(\d+\.\d+)\W\WC
which starts by matching the `+` or `-` sign and ends by matching the `°C`.
My question is, why does it take two `\W` (non-alphanumeric) characters to
match `°` rather than one? Will the code break on systems where Unicode is
represented differently to mine? If so, how can I make it portable?
Answer: Possible portable solution:
Convert input data to unicode, and use `re.UNICODE` flag in regular
expressions.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
data = u'temp1: +31.0°C (crit = +107.0°C)'
temp_re = re.compile(ur'(temp1:)\s+(\+|-)(\d+\.\d+)°C\s+'
ur'\(crit\s+=\s+(\+|-)(\d+\.\d+)°C\).*', flags=re.UNICODE)
print temp_re.findall(data)
Output
[(u'temp1:', u'+', u'31.0', u'+', u'107.0')]
**EDIT**
[@netvope](http://stackoverflow.com/users/250827/netvope) already pointed
this out in the comments on the question.
**Update**
Notes from [J.F. Sebastian](http://stackoverflow.com/users/4279/j-f-sebastian)
comments about input encoding:
> `check_output()` returns binary data that sometimes can be text (that should
> have a known character encoding in this case and you can convert it to
> Unicode). Anyway ord(u'°') == 176 so it can not be encoded using ASCII
> encoding.
So, to decode input data to `unicode`, basically* you should use encoding from
system locale using `locale.getpreferredencoding()` e.g.:
data = subprocess.check_output(...).decode(locale.getpreferredencoding())
With data encoded correctly:
> you'll get the same output without re.UNICODE in this case.
* * *
Why basically? Because on Russian Win7 with `cp1251` as `preferredencoding`, if
we have for example a `script.py` which encodes its output as `utf-8`:
#!/usr/bin/env python
# -*- coding: utf8 -*-
print u'temp1: +31.0°C (crit = +107.0°C)'.encode('utf-8')
And we need to parse its output:
subprocess.check_output(['python',
'script.py']).decode(locale.getpreferredencoding())
will produce wrong results: `'В°'` instead of `°`.
So in some cases you need to know the encoding of the input data.
|
How to not import django here?
Question: I have a filter that I took from Django, but now I must import too much of
Django, which I don't want, and I must put this strange line in my file:
`os.environ['DJANGO_SETTINGS_MODULE'] = 'locale'`
This shouldn't be needed since I'm not using django. I use Jinja2 and I took
the floatformat filter from django and using it with jinja2 which works if I
do the crazy imports:
os.environ['DJANGO_SETTINGS_MODULE'] = 'locale'
from django.utils.encoding import force_unicode, iri_to_uri
from django.utils.safestring import mark_safe, SafeData, mark_for_escaping
from django.utils import formats
# Values for testing floatformat input against infinity and NaN representations,
# which differ across platforms and Python versions. Some (i.e. old Windows
# ones) are not recognized by Decimal but we want to return them unchanged vs.
# returning an empty string as we do for completely invalid input. Note these
# need to be built up from values that are not inf/nan, since inf/nan values do
# not reload properly from .pyc files on Windows prior to some level of Python 2.5
# (see Python Issue757815 and Issue1080440).
pos_inf = 1e200 * 1e200
neg_inf = -1e200 * 1e200
nan = (1e200 * 1e200) // (1e200 * 1e200)
special_floats = [str(pos_inf), str(neg_inf), str(nan)]
def floatformat(text, arg=-1):
"""
Displays a float to a specified number of decimal places.
If called without an argument, it displays the floating point number with
one decimal place -- but only if there's a decimal place to be displayed:
* num1 = 34.23234
* num2 = 34.00000
* num3 = 34.26000
* {{ num1|floatformat }} displays "34.2"
* {{ num2|floatformat }} displays "34"
* {{ num3|floatformat }} displays "34.3"
If arg is positive, it will always display exactly arg number of decimal
places:
* {{ num1|floatformat:3 }} displays "34.232"
* {{ num2|floatformat:3 }} displays "34.000"
* {{ num3|floatformat:3 }} displays "34.260"
If arg is negative, it will display arg number of decimal places -- but
only if there are places to be displayed:
* {{ num1|floatformat:"-3" }} displays "34.232"
* {{ num2|floatformat:"-3" }} displays "34"
* {{ num3|floatformat:"-3" }} displays "34.260"
If the input float is infinity or NaN, the (platform-dependent) string
representation of that value will be displayed.
"""
try:
input_val = force_unicode(text)
d = Decimal(input_val)
except UnicodeEncodeError:
return u''
except InvalidOperation:
if input_val in special_floats:
return input_val
try:
d = Decimal(force_unicode(float(text)))
except (ValueError, InvalidOperation, TypeError, UnicodeEncodeError):
return u''
try:
p = int(arg)
except ValueError:
return input_val
try:
m = int(d) - d
except (ValueError, OverflowError, InvalidOperation):
return input_val
if not m and p < 0:
return formats.number_format(u'%d' % (int(d)), 0)
if p == 0:
exp = Decimal(1)
else:
exp = Decimal(u'1.0') / (Decimal(10) ** abs(p))
try:
# Avoid conversion to scientific notation by accessing `sign`, `digits`
# and `exponent` from `Decimal.as_tuple()` directly.
sign, digits, exponent = d.quantize(exp, ROUND_HALF_UP).as_tuple()
digits = [unicode(digit) for digit in reversed(digits)]
while len(digits) <= abs(exponent):
digits.append(u'0')
digits.insert(-exponent, u'.')
if sign:
digits.append(u'-')
number = u''.join(reversed(digits))
return formats.number_format(number, abs(p))
except InvalidOperation:
return input_val
Can I achieve this filter's function without the django imports and with no
fake django setting that I just put there just because it asked for it?
Thank you for any help
## Update
I copied insane amounts of Django code for this little use case and it's still
not working, since I can't find the function `real_get_language` that I must
take from Django:
# -*- coding: utf-8 -*-
from datetime import datetime
from decimal import Decimal, InvalidOperation, ROUND_HALF_UP
import os
def datetimeformat(value, format='%H:%M / %d-%m-%Y'):
return value.strftime(format)
def timesince(value, default="just now"):
now = datetime.utcnow()
diff = now - value
periods = (
(diff.days / 365, "year", "years"),
(diff.days / 30, "month", "months"),
(diff.days / 7, "week", "weeks"),
(diff.days, "day", "days"),
(diff.seconds / 3600, "hour", "hours"),
(diff.seconds / 60, "minute", "minutes"),
(diff.seconds, "second", "seconds"),
)
for period, singular, plural in periods:
if period:
return "%d %s ago" % (period, singular if period == 1 else plural)
return default
def makeid(n, countrycode="46"):
countrycode = str(countrycode)
n = str(n)
return "%s%s%s" % (countrycode, '0'*(12-len(countrycode)-len(n)), n)
# Values for testing floatformat input against infinity and NaN representations,
# which differ across platforms and Python versions. Some (i.e. old Windows
# ones) are not recognized by Decimal but we want to return them unchanged vs.
# returning an empty string as we do for completely invalid input. Note these
# need to be built up from values that are not inf/nan, since inf/nan values do
# not reload properly from .pyc files on Windows prior to some level of Python 2.5
# (see Python Issue757815 and Issue1080440).
pos_inf = 1e200 * 1e200
neg_inf = -1e200 * 1e200
nan = (1e200 * 1e200) // (1e200 * 1e200)
special_floats = [str(pos_inf), str(neg_inf), str(nan)]
def floatformat(text, arg=-1):
"""
Displays a float to a specified number of decimal places.
If called without an argument, it displays the floating point number with
one decimal place -- but only if there's a decimal place to be displayed:
* num1 = 34.23234
* num2 = 34.00000
* num3 = 34.26000
* {{ num1|floatformat }} displays "34.2"
* {{ num2|floatformat }} displays "34"
* {{ num3|floatformat }} displays "34.3"
If arg is positive, it will always display exactly arg number of decimal
places:
* {{ num1|floatformat:3 }} displays "34.232"
* {{ num2|floatformat:3 }} displays "34.000"
* {{ num3|floatformat:3 }} displays "34.260"
If arg is negative, it will display arg number of decimal places -- but
only if there are places to be displayed:
* {{ num1|floatformat:"-3" }} displays "34.232"
* {{ num2|floatformat:"-3" }} displays "34"
* {{ num3|floatformat:"-3" }} displays "34.260"
If the input float is infinity or NaN, the (platform-dependent) string
representation of that value will be displayed.
"""
try:
input_val = force_unicode(text)
d = Decimal(input_val)
except UnicodeEncodeError:
return u''
except InvalidOperation:
if input_val in special_floats:
return input_val
try:
d = Decimal(force_unicode(float(text)))
except (ValueError, InvalidOperation, TypeError, UnicodeEncodeError):
return u''
try:
p = int(arg)
except ValueError:
return input_val
try:
m = int(d) - d
except (ValueError, OverflowError, InvalidOperation):
return input_val
if not m and p < 0:
return number_format(u'%d' % (int(d)), 0)
if p == 0:
exp = Decimal(1)
else:
exp = Decimal(u'1.0') / (Decimal(10) ** abs(p))
try:
# Avoid conversion to scientific notation by accessing `sign`, `digits`
# and `exponent` from `Decimal.as_tuple()` directly.
sign, digits, exponent = d.quantize(exp, ROUND_HALF_UP).as_tuple()
digits = [unicode(digit) for digit in reversed(digits)]
while len(digits) <= abs(exponent):
digits.append(u'0')
digits.insert(-exponent, u'.')
if sign:
digits.append(u'-')
number = u''.join(reversed(digits))
return number_format(number, abs(p))
except InvalidOperation:
return input_val
def force_unicode(s, encoding='utf-8', strings_only=False, errors='strict'):
"""
Similar to smart_unicode, except that lazy instances are resolved to
strings, rather than kept as lazy objects.
If strings_only is True, don't convert (some) non-string-like objects.
"""
if strings_only and is_protected_type(s):
return s
try:
if not isinstance(s, basestring,):
if hasattr(s, '__unicode__'):
s = unicode(s)
else:
try:
s = unicode(str(s), encoding, errors)
except UnicodeEncodeError:
if not isinstance(s, Exception):
raise
# If we get to here, the caller has passed in an Exception
# subclass populated with non-ASCII data without special
# handling to display as a string. We need to handle this
# without raising a further exception. We do an
# approximation to what the Exception's standard str()
# output should be.
s = ' '.join([force_unicode(arg, encoding, strings_only,
errors) for arg in s])
elif not isinstance(s, unicode):
# Note: We use .decode() here, instead of unicode(s, encoding,
# errors), so that if s is a SafeString, it ends up being a
# SafeUnicode at the end.
s = s.decode(encoding, errors)
except UnicodeDecodeError, e:
if not isinstance(s, Exception):
raise DjangoUnicodeDecodeError(s, *e.args)
else:
# If we get to here, the caller has passed in an Exception
# subclass populated with non-ASCII bytestring data without a
# working unicode method. Try to handle this without raising a
# further exception by individually forcing the exception args
# to unicode.
s = ' '.join([force_unicode(arg, encoding, strings_only,
errors) for arg in s])
return s
def pluralize(value, arg=u's'):
"""
Returns a plural suffix if the value is not 1. By default, 's' is used as
the suffix:
* If value is 0, vote{{ value|pluralize }} displays "0 votes".
* If value is 1, vote{{ value|pluralize }} displays "1 vote".
* If value is 2, vote{{ value|pluralize }} displays "2 votes".
If an argument is provided, that string is used instead:
* If value is 0, class{{ value|pluralize:"es" }} displays "0 classes".
* If value is 1, class{{ value|pluralize:"es" }} displays "1 class".
* If value is 2, class{{ value|pluralize:"es" }} displays "2 classes".
If the provided argument contains a comma, the text before the comma is
used for the singular case and the text after the comma is used for the
plural case:
* If value is 0, cand{{ value|pluralize:"y,ies" }} displays "0 candies".
* If value is 1, cand{{ value|pluralize:"y,ies" }} displays "1 candy".
* If value is 2, cand{{ value|pluralize:"y,ies" }} displays "2 candies".
"""
if not u',' in arg:
arg = u',' + arg
bits = arg.split(u',')
if len(bits) > 2:
return u''
singular_suffix, plural_suffix = bits[:2]
try:
if int(value) != 1:
return plural_suffix
except ValueError: # Invalid string that's not a number.
pass
except TypeError: # Value isn't a string or a number; maybe it's a list?
try:
if len(value) != 1:
return plural_suffix
except TypeError: # len() of unsized object.
pass
return singular_suffix
pluralize.is_safe = False
def number_format(value, decimal_pos=None):
"""
Formats a numeric value using localization settings
"""
return format(
value,
get_format('DECIMAL_SEPARATOR'),
decimal_pos,
get_format('NUMBER_GROUPING'),
get_format('THOUSAND_SEPARATOR'),
)
def format(number, decimal_sep, decimal_pos, grouping=0, thousand_sep=''):
"""
Gets a number (as a number or string), and returns it as a string,
using formats definied as arguments:
* decimal_sep: Decimal separator symbol (for example ".")
* decimal_pos: Number of decimal positions
* grouping: Number of digits in every group limited by thousand separator
* thousand_sep: Thousand separator symbol (for example ",")
"""
use_grouping = True#settings.USE_L10N and \
#settings.USE_THOUSAND_SEPARATOR and grouping
# Make the common case fast:
if isinstance(number, int) and not use_grouping and not decimal_pos:
return mark_safe(unicode(number))
# sign
if float(number) < 0:
sign = '-'
else:
sign = ''
str_number = unicode(number)
if str_number[0] == '-':
str_number = str_number[1:]
# decimal part
if '.' in str_number:
int_part, dec_part = str_number.split('.')
if decimal_pos:
dec_part = dec_part[:decimal_pos]
else:
int_part, dec_part = str_number, ''
if decimal_pos:
dec_part = dec_part + ('0' * (decimal_pos - len(dec_part)))
if dec_part: dec_part = decimal_sep + dec_part
# grouping
if use_grouping:
int_part_gd = ''
for cnt, digit in enumerate(int_part[::-1]):
if cnt and not cnt % grouping:
int_part_gd += thousand_sep
int_part_gd += digit
int_part = int_part_gd[::-1]
return sign + int_part + dec_part
def get_format(format_type):
"""
For a specific format type, returns the format for the current
language (locale), defaults to the format in the settings.
format_type is the name of the format, e.g. 'DATE_FORMAT'
"""
format_type = smart_str(format_type)
if True:#settings.USE_L10N:
cache_key = (format_type, get_language())
try:
return _format_cache[cache_key] or getattr(settings, format_type)
except KeyError:
for module in get_format_modules():
try:
val = getattr(module, format_type)
_format_cache[cache_key] = val
return val
except AttributeError:
pass
_format_cache[cache_key] = None
return getattr(settings, format_type)
def smart_str(s, encoding='utf-8', strings_only=False, errors='strict'):
"""
Returns a bytestring version of 's', encoded as specified in 'encoding'.
If strings_only is True, don't convert (some) non-string-like objects.
"""
if strings_only and isinstance(s, (types.NoneType, int)):
return s
if isinstance(s, Promise):
return unicode(s).encode(encoding, errors)
elif not isinstance(s, basestring):
try:
return str(s)
except UnicodeEncodeError:
if isinstance(s, Exception):
# An Exception subclass containing non-ASCII data that doesn't
# know how to print itself properly. We shouldn't raise a
# further exception.
return ' '.join([smart_str(arg, encoding, strings_only,
errors) for arg in s])
return unicode(s).encode(encoding, errors)
elif isinstance(s, unicode):
return s.encode(encoding, errors)
elif s and encoding != 'utf-8':
return s.decode('utf-8', errors).encode(encoding, errors)
else:
return s
class Promise(object):
"""
This is just a base class for the proxy class created in
the closure of the lazy function. It can be used to recognize
promises in code.
"""
pass
def get_language():
return real_get_language()
Answer: You are using only `formats` and `force_unicode`. Just copy these into your
code and get rid of all imports.
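If you don't need Django's locale-aware separators at all, a minimal sketch of
drop-in replacements (plain formatting only -- an assumption, not Django's
actual behaviour) could look like this; in the first version of `floatformat`,
call `number_format(...)` directly instead of `formats.number_format(...)`:

    def force_unicode(s, encoding='utf-8', errors='strict'):
        # Coerce s to unicode without any Django machinery.
        if isinstance(s, unicode):
            return s
        if not isinstance(s, basestring):
            return unicode(s)
        return s.decode(encoding, errors)

    def number_format(value, decimal_pos=None):
        # No locale handling: return the already-formatted value as unicode.
        return unicode(value)

With these two defined, `floatformat` runs without any Django imports or the
fake `DJANGO_SETTINGS_MODULE` line.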
|
assertRaises failure
Question: I'm trying to write a unit test that will ensure an HTTPException is raised
when necessary. Here is the test:
import unittest
from requests import HTTPError
import pyport
# Code omitted...
def test_bad_item_type(self):
"""A bad item type should raise a HTTPError"""
test_type = 'bad'
test_id = 1986134
self.assertRaises(HTTPError, pyport.get_item(test_type, test_id))
Which produces the following:
ERROR: test_bad_item_type (__main__.TestPyportFunctions)
A bad item type should raise requests.HTTPError
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./tests.py", line 65, in test_bad_item_type
    self.assertRaises(HTTPError, pyport.get_item(test_type, test_id))
  File "/home/sean/workspace/pyport/pyport.py", line 54, in get_item
    response.raise_for_status()
  File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 741, in raise_for_status
    raise HTTPError('%s Client Error' % self.status_code)
HTTPError: 404 Client Error
The exception is raised, but it is not caught by the test. This is similar to
what happened in [this
question](http://stackoverflow.com/questions/6972703/assertraises-fails-even-
though-exception-is-raised), but it is not quite the same. Can someone tell me
what I'm missing?
Answer: It should be:
self.assertRaises(HTTPError, pyport.get_item, test_type, test_id)
See the signature of
[`assertRaises`](http://docs.python.org/library/unittest.html#unittest.TestCase.assertRaises):
assertRaises(exception, callable, *args, **kwds)
This is defined this way because if you do it your way, the Python interpreter
**first** calls `pyport.get_item(test_type, test_id)` and then passes its
result to `assertRaises`. Since `get_item` raises before that can happen,
`assertRaises` is never actually called
and the exception is not caught. Now if assertRaises has access to the
function and its arguments it can call the function itself and catch the
appropriate exception.
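Alternatively, from Python 2.7 onwards `assertRaises` can be used as a context
manager, which sidesteps the callable/arguments split entirely:

    with self.assertRaises(HTTPError):
        pyport.get_item(test_type, test_id)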
|
AES Encryption in Python is different from iOS
Question: I am trying to encrypt a string in iOS and then send it to a TCP server.
The Python and iOS versions of the code are shown below. Please see the outputs
of both versions. They look quite similar, but the lengths are different and I
do not know the reason. Can anybody check what the reason could be?
Please note that the PADDING in the Python script can be disregarded, as I gave
a text of length 16 already.
PYTHON Code:
#!/usr/bin/env python
from Crypto.Cipher import AES
import base64
import os
# the block size for the cipher object; must be 16, 24, or 32 for AES
BLOCK_SIZE = 16
PADDING = '{'
# one-liner to sufficiently pad the text to be encrypted
pad = lambda s: s + (BLOCK_SIZE - len(s) % BLOCK_SIZE) * PADDING
# one-liners to encrypt/encode and decrypt/decode a string
# encrypt with AES, encode with base64
EncodeAES = lambda c, s: base64.b64encode(c.encrypt(pad(s)))
DecodeAES = lambda c, e: c.decrypt(base64.b64decode(e)).rstrip(PADDING)
secret = "1234567890123456"
# create a cipher object using the random secret
cipher = AES.new(secret)
encoded = EncodeAES(cipher, 'password12345678')
print 'Encrypted string:', encoded
decoded = DecodeAES(cipher, encoded)
print 'Decrypted string:', decoded
OUTPUT:
Encrypted string: 57AayWF4jKYx7KzGkwudIBZUsn1ULOC0C4c5YF3xeI8=
Decrypted string: password12345678
iOS Code:
NSString *forKey=@"1234567890123456";
NSString *mystr =@"password12345678";
const char *utfString = [mystr UTF8String];
NSData *aData=[NSData dataWithBytes: utfString length: strlen(utfString)];
aData=[mystr dataUsingEncoding:NSUTF8StringEncoding];
NSData *data;//=[aData AES128EncryptWithKey:forKey];
data=[aData AES128EncryptWithKey:forKey];
NSString *base64 = [data base64EncodedString];
aData=[data AES128DecryptWithKey:forKey];
mystr=[[NSString alloc] initWithData:aData encoding:NSUTF8StringEncoding];
NSLog(@"AES data : %@ \n %@",mystr,base64 );
OUTPUT:
AES data : password12345678
57AayWF4jKYx7KzGkwudIKNlwA+HErrmiy1Z0szzZds=
Answer: OK, here it is. Thanks sarnold for the clue :)
from Crypto.Cipher import AES
import base64
import os
# the block size for the cipher object; must be 16, 24, or 32 for AES
BLOCK_SIZE = 16
mode = AES.MODE_CBC
secret = "1234567890123456" #os.urandom(BLOCK_SIZE)
# create a cipher object using the random secret
cipher = AES.new(secret,mode)
# encode a string
#tx=cipher.encrypt('1234567890123456')
#print base64.b64encode(tx)
myData='aaaaaaaaaaaaaaaa'
#encoded = EncodeAES(cipher, myData)
encoded = cipher.encrypt(myData)
print 'Encrypted string:', base64.b64encode(encoded)
mode = AES.MODE_ECB
cipher=AES.new(secret,mode)
decoded = cipher.decrypt(encoded)
print 'Decrypted string:', decoded
**Python OUTPUT:**
Encrypted string: C9pEG6g8ge76xt2q9XLbpw==
Decrypted string: aaaaaaaaaaaaaaaa
> Changed the AES CCOptions to kCCOptionECBMode in iOS:
CCCryptorStatus cryptStatus = CCCrypt(kCCEncrypt, kCCAlgorithmAES128, kCCOptionECBMode,keyPtr, CCKeySizeAES128, NULL,[self bytes], dataLength, buffer, bufferSize, &numBytesEncrypted);
And now output is:
**iOS Output:**
AES data : aaaaaaaaaaaaaaaa
C9pEG6g8ge76xt2q9XLbpw==
|
Django is throwing a "[123L, 123L] is not JSON serializable"
Question: in my code i have:
json.dumps({'foo': {'bar': [123L, 123L]}})
The long ints are primary keys. This line for some reason generates the error:
"[123L, 123L] is not JSON serializable"
When I try to execute it in `python manage.py shell`:
json.dumps({'foo': {'bar': [123L, 123L]}})
works fine.
Can anyone tell me what's going on?
exact Django error:
Exception Type: TypeError
Exception Value: [123L] is not JSON serializable
Exception Location: /usr/lib/python2.7/dist-packages/simplejson/encoder.py in default, line 192
Answer: This works for me:
from django.db.models.query import ValuesListQuerySet
def default(o):
if isinstance(o, ValuesListQuerySet):
return list(o)
raise TypeError(repr(o) + " is not JSON serializable")
json.dumps(valueListObject, default=default)
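The underlying issue is that the value coming out of your view is a lazy
`ValuesListQuerySet` rather than a real `list` (which is why the literal you
typed in the shell serialized fine). An equivalent fix is to force evaluation
at the call site -- here `queryset` is a placeholder for whatever
`values_list()` result you are serializing:

    json.dumps({'foo': {'bar': list(queryset)}})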
|
Python setdefault not lvalue, any work-arounds?
Question: I have the following small program:
ages=[23,23,43,54,34,22,43,23]
histogram={}
for age in ages:
if not age in histogram:
histogram[age]=1
else:
histogram[age]+=1
for age,count in sorted(histogram.items()):
print "Age: %d Number of people: %d"%(age,count)
It creates a simple histogram of the ages of people in a list. However, I find
the double lookup in the histogram hash quite unsightly. I know hash access is
basically O(1), so this isn't as inefficient as it may seem, but still...
I've tried various work-arounds, such as trying to use setdefault, but the
following won't fly:
histogram.setdefault("age",0)+=1 #<- Incorrect
I know I can use defaultdict, but it changes the behaviour of the created
histogram dict object, which is not what I want.
If there were a way for me to set this question to a "low priority" I would,
because obviously this isn't very important. But I've been looking for a
clever and/or elegant solution to this problem for a long time.
So, the question is: How to increment an integer by a key in a dict, or set it
to 1 if it doesn't exist?
Answer: For this particular application, you should use [a
`Counter`](http://docs.python.org/library/collections.html#collections.Counter).
from collections import Counter
ages = [23,23,43,54,34,22,43,23]
histogram = Counter(ages)
for age,count in sorted(histogram.items()):
print "Age: %d Number of people: %d"%(age,count)
If you really need a `dict`, you could convert the Counter back to a dict with
the `dict` constructor.
histogram = dict(Counter(ages))
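If you do want a plain `dict` after all, the direct answer to "increment by
key, or set to 1 if missing" is `dict.get` with a default:

    histogram = {}
    for age in ages:
        histogram[age] = histogram.get(age, 0) + 1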
|
draw random element in numpy
Question: I have an array of element probabilities, let's say `[0.1, 0.2, 0.5, 0.2]`.
The array sums up to 1.0.
Using plain Python or numpy, I want to draw elements proportional to their
probability: the first element about 10% of the time, second 20%, third 50%
etc. The "draw" should return index of the element drawn.
I came up with this:
def draw(probs):
cumsum = numpy.cumsum(probs / sum(probs)) # sum up to 1.0, just in case
return len(numpy.where(numpy.random.rand() >= cumsum)[0])
It works, but it's too convoluted, there must be a better way. Thanks.
Answer:
import numpy as np
def random_pick(choices, probs):
'''
>>> a = ['Hit', 'Out']
>>> b = [.3, .7]
>>> random_pick(a,b)
'''
cutoffs = np.cumsum(probs)
idx = cutoffs.searchsorted(np.random.uniform(0, cutoffs[-1]))
return choices[idx]
* * *
**How it works:**
In [22]: import numpy as np
In [23]: probs = [0.1, 0.2, 0.5, 0.2]
Compute the cumulative sum:
In [24]: cutoffs = np.cumsum(probs)
In [25]: cutoffs
Out[25]: array([ 0.1, 0.3, 0.8, 1. ])
Compute a uniformly distributed random number in the half-open interval `[0,
cutoffs[-1])`:
In [26]: np.random.uniform(0, cutoffs[-1])
Out[26]: 0.9723114393023948
Use
[searchsorted](http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html#numpy-
searchsorted) to find the index where the random number would be inserted into
`cutoffs`:
In [27]: cutoffs.searchsorted(0.9723114393023948)
Out[27]: 3
Return `choices[idx]`, where `idx` is that index.
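As a side note, NumPy 1.7+ ships a built-in for exactly this, so if your NumPy
is recent enough the whole draw is a single call:

    idx = np.random.choice(len(probs), p=probs)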
|
storages in OpenCV with Python
Question: I want to find contours in an image and further process them e.g. drawing them
on the image. To do that I have two functions running in different threads:
storage = cv.CreateMemStorage(0)
contour = cv.FindContours(inData.content, storage, cv.CV_RETR_EXTERNAL, cv.CV_CHAIN_APPROX_SIMPLE)
and
while contours:
bound_rect = cv.BoundingRect(list(contours))
contours = contours.h_next()
pt1 = (bound_rect[0], bound_rect[1])
pt2 = (bound_rect[0] + bound_rect[2], bound_rect[1] + bound_rect[3])
cv.Rectangle(inImg.content, pt1, pt2, cv.CV_RGB(255,0,0), 1)
Each function runs in a loop processing one image after the other. When a
function is done it puts the image in a buffer from which the other function
can get it. This works except that in the result the contours are drawn in the
image one or two images before their corresponding image.
I think this has something to do with the storage of OpenCV but I don't
understand why the storage is needed and what it does
**EDIT** Here is some more code:
My program is meant to be node-based image analysis software.
This is what the node graph of my current code looks like:
|---------| |--------|
|-----| |-----|------>|Threshold|--->|Contours|--->|-------------| |------|
|Input|--->|Split| |---------| |--------| |Draw Contours|--->|Output|
|-----| |-----|----------------------------------->|-------------| |------|
* * *
This is the class from which all nodes derive:
from Buffer import Buffer
from threading import Thread
from Data import Data
class Node(Thread):
def __init__(self, inputbuffers, outputbuffers):
Thread.__init__(self)
self.inputbuffers = inputbuffers
self.outputbuffers = outputbuffers
def getInputBuffer(self, index):
return self.inputbuffers[index]
def getOutputBuffer(self, index):
return self.outputbuffers[index]
def _getContents(self, bufferArray):
out = []
for bufferToGet in bufferArray:
if bufferToGet and bufferToGet.data:
out.append(bufferToGet.data)
for bufferToGet in bufferArray:
bufferToGet.data = None
return out
def _allInputsPresent(self):
for bufferToChk in self.inputbuffers:
if not bufferToChk.data:
return False
return True
def _allOutputsEmpty(self):
for bufferToChk in self.outputbuffers:
if bufferToChk.data != None:
return False
return True
def _applyOutputs(self, output):
for i in range(len(output)):
if self.outputbuffers[i]:
self.outputbuffers[i].setData(output[i])
def run(self):
#Thread loop <------------------------------------
while True:
while not self._allInputsPresent(): pass
inputs = self._getContents(self.inputbuffers)
output = [None]*len(self.outputbuffers)
self.process(inputs, output)
while not self._allOutputsEmpty(): pass
self._applyOutputs(output)
def process(self, inputs, outputs):
'''
inputs: array of Data objects
outputs: array of Data objects
'''
pass
* * *
The nodes pass around these Data objects:
class Data(object):
def __init__(self, content = None, time = None, error = None, number = -1):
self.content = content #Here the actual data is stored. Mostly images
self.time = time #Not used yet
self.error = error #Not used yet
self.number = number #Used to see if the correct data is put together
* * *
This are the nodes:
from Node import Node
from Data import Data
import copy
import cv
class TemplateNode(Node):
def __init__(self, inputbuffers, outputbuffers):
super(type(self), self).__init__(inputbuffers, outputbuffers)
def process(self, inputs, outputs):
inData = inputs[0]
#Do something with the content e.g.
#cv.Smooth(inData.content, inData.content, cv.CV_GAUSSIAN, 11, 11)
outputs[0] = inData
class InputNode(Node):
def __init__(self, inputbuffers, outputbuffers):
super(InputNode, self).__init__(inputbuffers, outputbuffers)
self.capture = cv.CaptureFromFile("video.avi")
self.counter = 0
def process(self, inputs, outputs):
image = cv.QueryFrame(self.capture)
if image:
font = cv.InitFont(cv.CV_FONT_HERSHEY_SIMPLEX, 1, 1, 0, 3, 8)
x = 30
y = 50
cv.PutText(image, str(self.counter), (x,y), font, 255)
outputs[0] = Data(image,None,None,self.counter)
self.counter = self.counter+1
class OutputNode(Node):
def __init__(self, inputbuffers, outputbuffers, name):
super(type(self), self).__init__(inputbuffers, outputbuffers)
self.name = name
def process(self, inputs, outputs):
if type(inputs[0].content) == cv.iplimage:
cv.ShowImage(self.name, inputs[0].content)
cv.WaitKey()
class ThresholdNode(Node):
def __init__(self, inputbuffers, outputbuffers):
super(type(self), self).__init__(inputbuffers, outputbuffers)
def process(self, inputs, outputs):
inData = inputs[0]
inimg = cv.CreateImage(cv.GetSize(inData.content), cv.IPL_DEPTH_8U, 1);
cv.CvtColor(inData.content, inimg, cv.CV_BGR2GRAY)
outImg = cv.CreateImage(cv.GetSize(inimg), cv.IPL_DEPTH_8U, 1);
cv.Threshold(inimg, outImg, 70, 255, cv.CV_THRESH_BINARY_INV);
inData.content = outImg
outputs[0] = inData
class SplitNode(Node):
def __init__(self, inputbuffers, outputbuffers):
super(type(self), self).__init__(inputbuffers, outputbuffers)
def process(self, inputs, outputs):
inData = inputs[0]
if type(inData.content) == cv.iplimage:
imagecpy = cv.CloneImage(inData.content)
outputs[1] = Data(imagecpy, copy.copy(inData.time), copy.copy(inData.error), copy.copy(inData.number))
else:
outputs[1] = copy.deepcopy(inData)
print
class ContoursNode(Node):
def __init__(self, inputbuffers, outputbuffers):
super(type(self), self).__init__(inputbuffers, outputbuffers)
def process(self, inputs, outputs):
inData = inputs[0]
storage = cv.CreateMemStorage(0)
contours = cv.FindContours(inData.content, storage, cv.CV_RETR_EXTERNAL, cv.CV_CHAIN_APPROX_SIMPLE)
contoursArr = []
while contours:
points = []
for (x,y) in contours:
points.append((x,y))
contoursArr.append(points)
contours = contours.h_next()
outputs[0] = Data(contoursArr, inData.time, inData.error, inData.number)
pass
class DrawContoursNode(Node):
def __init__(self, inputbuffers, outputbuffers):
super(type(self), self).__init__(inputbuffers, outputbuffers)
def process(self, inputs, outputs):
inImg = inputs[0]
contours = inputs[1].content
print "Image start"
for cont in contours:
for (x,y) in cont:
cv.Circle(inImg.content, (x,y), 2, cv.CV_RGB(255, 0, 0))
print "Image end"
outputs[0] = inImg
* * *
This is the main function. Here all the nodes and buffers are created.
from NodeImpls import *
from Buffer import Buffer
buffer1 = Buffer()
buffer2 = Buffer()
buffer3 = Buffer()
buffer4 = Buffer()
buffer5 = Buffer()
buffer6 = Buffer()
innode = InputNode([], [buffer1])
split = SplitNode([buffer1], [buffer2, buffer3])
thresh = ThresholdNode([buffer3], [buffer4])
contours = ContoursNode([buffer4], [buffer5])
drawc = DrawContoursNode([buffer2, buffer5],[buffer6])
outnode = OutputNode([buffer6], [], "out1")
innode.start()
split.start()
thresh.start()
contours.start()
drawc.start()
outnode.start()
while True:
pass
* * *
The buffer:
class Buffer(object):
def __init__(self):
self.data = None
def setData(self, data):
self.data = data
def getData(self):
return self.data
Answer: > I think this has something to do with the storage of OpenCV but I don't
> understand why the storage is needed and what it does
Storage is just a place to keep the results. OpenCV is a C++ library, and
relies on manual memory allocation, C++ style. Python bindings are just a thin
wrapper around it, and are not very _pythonic_. That's why you have to
allocate storage manually, just as you would in C or C++.
> I have two functions running in different threads ... This works except that
> in the result the contours are drawn in the image one or two images before
> their corresponding image.
I assume your threads are not properly synchronized. This problem is not
likely to be related to OpenCV, but to what functions you have, what data they
use and pass around, and how you share the data between them.
In short, please post your code where you create threads and call these
functions, as well where `inImg`, `inData`, `contour`, `contours` and
`storage` are accessed or modified.
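For what it's worth, here is a sketch of one way to make the hand-off in the
posted `Buffer` class thread-safe (assuming the rest of the Node design stays
as-is): back it with Python's `Queue`, whose blocking put/get replaces the
`_allInputsPresent`/`_allOutputsEmpty` spin loops:

    from Queue import Queue

    class Buffer(object):
        def __init__(self):
            # At most one item in flight keeps producer and consumer in lockstep.
            self._queue = Queue(maxsize=1)

        def setData(self, data):
            self._queue.put(data)     # blocks until the previous item is consumed

        def getData(self):
            return self._queue.get()  # blocks until a producer has put an item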
|
best way to parse a big (500mb) json file and insert into mongoDB
Question: > **Possible Duplicate:**
> [Importing large json file into
> mongodb](http://stackoverflow.com/questions/5126718/importing-large-json-
> file-into-mongodb)
I have a fairly big JSON file (500+MB) that'd I'd like to edit and insert into
a database in MongoDB. I tried using a python script but it crashes. what
would be the best way to do this?
Answer: First of all, it is important to note that Mongo has a maximum document size
of 16MB, explained here: <http://www.mongodb.org/display/DOCS/Documents> This
is why Mike Christensen was asking if the data inside your file was one giant
object, or if it is split into multiple documents.
If you are completely new to MongoDB, it will help to familiarize yourself
with the way information is stored in a Mongo Database. In a nutshell, a
Database is made of Collections, which are made up of JSON-like objects called
Documents, which contain keys and values.
This is explained in the "Mongo data model" section of the Introduction page:
<http://www.mongodb.org/display/DOCS/Introduction>
Here are some other good resources for getting started with MongoDB:
<http://www.mongodb.org/display/DOCS/Inserting> This gives an explanation of
Mongo Documents, what they look like, and how they are stored in a Collection.
<http://www.mongodb.org/display/DOCS/Tutorial> This is an introductory
document which contains notes on how to get up and running with Mongo, as well
as notes on schema design.
<http://try.mongodb.org/> This is an interactive tutorial, which runs right in
your web browser, and will give you a better sense of how documents are stored
in a Mongo Collection.
<http://mongly.com/> This is a website created by a very active member of the
MongoDB community named Karl Seguin. It includes many resources that are
perfect for getting started with Mongo, including "The Little MongoDB Book"
and a more in-depth interactive tutorial.
<http://www.mongodb.org/display/DOCS/Python+Language+Center> Because you
mentioned working with Python, here is the link to the Python Language center.
The official Python driver for MongoDB is called PyMongo.
<http://api.mongodb.org/python/current/> This is the official documentation
for the PyMongo Driver.
Forgive me if you have already discovered the above resources on your own.
This response is also for the benefit of any other readers who may be new to
MongoDB.
Now to address your actual question of how to import the information inside
your JSON file.
There is a utility called mongoimport that will take a single file that
contains one JSON string per line and insert it into a Mongo collection. The
documentation on this may be found here:
<http://www.mongodb.org/display/DOCS/Import+Export+Tools>
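For example (a sketch, assuming one JSON document per line in `data.json` and a
mongod running locally with default settings):

    mongoimport --db mydb --collection mycollection --file data.json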
The caveat is that the code for mongoimport is not terribly complex, and you
may find that it does not create documents in exactly the structure that you
are looking for. When importing data into a new Mongo collection from another
source (such as a JSON file) the official recommendation is to write a custom
script, as you first attempted to do.
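A sketch of such a script (again assuming one JSON object per line; reading
line by line avoids loading the whole 500MB file into memory):

    import json
    from pymongo import Connection

    db = Connection()['mydb']              # connects to localhost:27017
    with open('data.json') as f:
        for line in f:
            doc = json.loads(line)         # edit/transform doc here as needed
            db.mycollection.insert(doc)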
What, exactly was the error that you received from your Python script? Was it
an error that the document size was greater than 16MB and could not be
inserted into a collection, or did Python itself choke trying to open your
500MB file? If the answer is the former, you will have to modify your script
to split up your giant document(s) into smaller ones. If the answer is the
latter, we might have a more difficult issue to deal with. If the error was
something else entirely, please post it and we can work through it.
Hopefully the resources I have provided will give you a better understanding
of how documents are stored in a Mongo Collection, and what your original JSON
file should look like before it can be inserted into a Mongo Database.
Finally, thanks for trying Mongo! Welcome to the Community! If you have any
follow-up questions, we are here to help!
|
Persistent python subprocess
Question: Is there a way to make a subprocess call in Python "persistent"? I'm calling,
multiple times, a program that takes a while to load. So it would be great if I
could just leave that program open and communicate with it without killing it.
The cartoon version of my python script looks like this:
for text in textcollection:
myprocess = subprocess.Popen(["myexecutable"],
stdin = subprocess.PIPE, stdout = subprocess.PIPE,
stderr = None)
myoutputtext, err = myprocess.communicate(input=text)
I need to process each text separately, so joining it all into one large text
file and processing it once is not an option.
Preferably, if there's an option like this
myprocess = subprocess.Popen(["myexecutable"],
    stdin = subprocess.PIPE, stdout = subprocess.PIPE,
    stderr = None)

for text in textcollection:
    myoutputtext, err = myprocess.communicate(input=text)
where I can leave the process open, I'd really appreciate it.
Answer: You can use `myprocess.stdin.write()` and `myprocess.stdout.read()` to
communicate with your subprocess; you just need to be careful to
handle buffering correctly to prevent your calls from blocking.
If the output from your subprocess is well-defined, you should be able to
reliably communicate with it using line-buffering and
`myprocess.stdout.readline()`.
Here is an example:
>>> p = subprocess.Popen(['cat'], bufsize=1, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> p.stdin.write('hello world\n')
>>> p.stdout.readline()
'hello world\n'
>>> p.stdout.readline() # THIS CALL WILL BLOCK
An alternative to this method for Unix is to put the file handle in non-
blocking mode, which will allow you to call functions like
`myprocess.stdout.read()` and have it return data if any is available, or
raise an `IOError` if there isn't any data:
>>> p = subprocess.Popen(['cat'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> import fcntl, os
>>> fcntl.fcntl(p.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
0
>>> p.stdout.read() # raises an exception instead of blocking
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IOError: [Errno 11] Resource temporarily unavailable
This would allow you to do something like this:
fcntl.fcntl(p.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
for text in textcollection:
    myprocess.stdin.write(text + '\n')
    myoutputtext = ''  # accumulate across reads so partial output isn't lost
    while True:
        try:
            myoutputtext += myprocess.stdout.read()
        except IOError:
            pass
        if validate_output(myoutputtext):
            break
        time.sleep(.1) # short sleep before attempting another read
In this example, `validate_output()` is a function you would need to write
that returns `True` if the data you have received so far is all of output that
you expect to get.
|
Can anyone give me a quick tutorial in stdin and stdout in Python 3?
Question: I know this sounds like something I can google, but the truth is that I don't
find or do not understand what the very few Python 3 sources explains.
So here are my questions:
* Is `input()` the `stdin` function in Python 3? Does that mean that when you open your filename.py program, the `stdin` is what the user types?
* Is `print()` the `stdout` function in Python 3, or do you have to write to a file?
* For the **[Spotify puzzle](http://www.spotify.com/us/jobs/tech/best-before/)**, it says "Input is read from stdin". What should my file include for `stdin` and `stdout`?
* * *
Update: Does that mean that I can use:
import sys
unfmtdDate = str(sys.stdin.read())
...instead of...
unfmtdDate = str(input())
?
Answer: `stdin` and `stdout` are file-like objects provided by the OS. In general,
when a program is run in an interactive session, `stdin` is keyboard input and
`stdout` is the user's tty, but the shell can be used to redirect them from
normal files or piped output from and input to other programs.
`input()` is used to prompt the user for typed input. In the case of something
like a programming puzzle, it's normally assumed that `stdin` is redirected
from a data file, and when the input format is given it's usually best to use
`sys.stdin.read()` rather than prompting for input with `input()`. `input()`
is intended for interactive user input, it can display a prompt (on
sys.stdout) and use the GNU readline library (if present) to allow line
editing, etc.
`print()` is, indeed, the most common way of writing to `stdout`. There's no
need to do anything special to specify the output stream. `print()` writes to
`sys.stdout` if no alternate file is given to it as a `file=` parameter.
|
How to make python function always unbound even assigned to a class attribute
Question: Given the following code, what can I do with the 'foo' function to make it
unbound in any case, even if it is assigned to a class attribute? Overriding
`__get__` doesn't help -- as far as I understand, because it is not used when
the function is absent from an instance's `__dict__` (as is the case for class
attributes). But what else can be done here?
def foo(x):
print(x)
def foo_get(self, obj, type=None):
return foo
foo.__get__ = foo_get
class A(object):
def __init__(self):
self.f = foo
class B(object):
f = foo
a = A()
print(a.f) #<function foo at 0x2321d10>
print(a.f.__get__(a, A)) #<function foo at 0x2321d10>
b = B()
print(b.f) #<bound method B.foo of <__main__.B object at 0x23224d0>>
Answer: I'm pretty sure you want
[@staticmethod](http://docs.python.org/library/functions.html#staticmethod).
`foo = staticmethod(foo)`
You can also define `__get__` on a callable class:
class Foo(object):
def __get__(self, obj, type=None):
return self
def __call__(self, x):
print(x)
foo = Foo()
class A(object):
def __init__(self):
self.f = foo
class B(object):
f = foo
foo(1) # 1
a = A()
a.f(1) # 1
print(a.f) #<__main__.Foo object at 0x2321d10>
print(a.f.__get__(a, A)) #<__main__.Foo object at 0x2321d10>
b = B()
b.f(1) # 1
print(b.f) #<__main__.Foo object at 0x7fa8c260be10>
It's one extra level of indentation and a few extra lines of code, but that's
how it can be done, if it's important.
|
Python conditional 'module object has no attribute' error with personal package distinct from circular import issue
Question: I'm getting a 'module object has no attribute ..." error when trying to use a
package hierarchy I created. The error is reminiscent of the error you get
when there is a circular import (i.e. module a imports b and module b imports
a), but I can't see that issue here. I've gone through many posts with a
similar error, but none of the explanations I saw quite fit.
This was seen with python 2.7.1 and python 2.4.3.
I've watered it down to the following example:
Consider the following hierarchy (see code below):
alpha
alpha/__init__.py
alpha/bravo
alpha/bravo/__init__.py
alpha/bravo/charlie.py
alpha/bravo/delta.py
alpha/bravo/echo.py
The module charlie imports echo which in turn imports delta. If the
alpha/bravo/__init__.py (like alpha/__init__.py) is essentially blank, a
script can do:
import alpha.bravo.charlie
The problem surfaces if I try to import alpha.bravo.charlie in
alpha/bravo/__init__.py (with the thinking I could surface relevant
classes/methods there, and a script would do 'import alpha.bravo').
Code:
alpha/__init__.py
(blank)
alpha/bravo/__init__.py
import alpha.bravo.charlie
alpha/bravo/charlie.py
import alpha.bravo.echo
def charlie_foo(x): return str(x)
def charlie_bar(x): return alpha.bravo.echo.echo_biz()
alpha/bravo/delta.py
def delta_foo(x): return str(x)
alpha/bravo/echo.py
import alpha.bravo.delta
print alpha.bravo.delta.delta_foo(1)
def echo_biz(): return 'blah'
If I try:
python -c 'import alpha.bravo'
I get:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/kmkc980/svn/working/azcif/python/lib/alpha/bravo/__init__.py", line 1, in <module>
import alpha.bravo.charlie
File "/home/kmkc980/svn/working/azcif/python/lib/alpha/bravo/charlie.py", line 1, in <module>
import alpha.bravo.echo
File "/home/kmkc980/svn/working/azcif/python/lib/alpha/bravo/echo.py", line 2, in <module>
print alpha.bravo.delta.delta_foo(1)
AttributeError: 'module' object has no attribute 'bravo'
But, if I comment out the import line in alpha/bravo/__init__.py, then all
seems OK:
python -c 'import alpha.bravo'
python -c 'import alpha.bravo.charlie'
1
Moreover, if I use the same code above (including the import line in
alpha/bravo/__init__.py), but edit everything to exclude the 'alpha' level of
the hierarchy, it seems to work fine.
So the hierarchy is now just:
bravo
bravo/__init__.py
bravo/charlie.py
bravo/delta.py
bravo/echo.py
and I change all the lines with "alpha.bravo.*" to "bravo.*"
Then no problem:
python -c 'import bravo'
1
I've been able to work around the issue, but I'd still like to understand it.
Thanks.
Answer: # Here's the why
(This is, I believe, mostly supported by the explanation at
<http://docs.python.org/faq/programming.html#how-can-i-have-modules-that-
mutually-import-each-other>)
When the Python interpreter encounters a line of the form `import a.b.c`, it
runs through the following steps. In pseudo-python:
for module in ['a', 'a.b', 'a.b.c']:
if module not in sys.modules:
sys.modules[module] = (A new empty module object)
run every line of code in module # this may recursively call import
add the module to its parent's namespace
return module 'a'
There are three important points here:
1. The modules a, a.b, and a.b.c get imported in order, if they haven't been imported already
2. A module _does not exist_ in its parent's namespace until it has completely finished being imported. So module `a` does not have a `b` attribute until `a.b` has been imported completely.
3. No matter how deep your module chain is, even if you `import a.b.c.d.e.f.g`, _your code_ only gets one symbol added to its namespace: `a`.
So when you later try to run `a.b.c.d.e.f.g.some_function()`, the interpreter
has to traverse all the way down the chain of modules to get to that method.
# Here's what is happening
Based on the code that you have posted, the problem seems to lie in the print
statement in `alpha/bravo/echo/__init__.py`. What the interpreter has done by
the time it gets there is roughly this:
1. Set up an empty module object for alpha in sys.modules
2. Run the code in alpha/__init__.py (Note that dir(alpha) won't contain 'bravo' at this point)
3. Set up an empty module object for alpha.bravo in sys.modules
4. Run the code in alpha/bravo/__init__.py:
4.1 Set up an empty module object for alpha.bravo.charlie in sys.modules
4.2 Run the code in alpha/bravo/charlie/__init__.py:
4.2.1 Set up an empty module object for alpha/bravo/echo in sys.modules
4.2.2 Run the code in alpha/bravo/echo/__init__.py:
4.2.2.1 Set up an empty module object for alpha/bravo/delta in sys.modules
4.2.2.2 Run the code in alpha/bravo/delta/__init__.py -- This finishes, so
'delta' is added to 'alpha.bravo's symbols.
4.2.2.3 Add 'alpha' to echo's symbols. This is the last step in `import
alpha.bravo.delta`.
At this point, if we call dir() on all of the modules in sys.modules, we will
see this:
* 'alpha': `['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__return__']` (this is essentially empty)
* 'alpha.bravo': `['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'delta']` (delta has finished being imported, so it's here)
* 'alpha.bravo.charlie': `['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__']` (empty)
* 'alpha.bravo.delta': `['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__return__', 'delta_foo']` (This is the only one that has completed)
* 'alpha.bravo.echo': `['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__return__', 'alpha']`
Now the intepreter continues with alpha/bravo/echo/__init__.py, where it
encounters the line `print alpha.bravo.delta.delta_foo(1)`. That starts this
sequence:
1. get the global variable `alpha` -- this returns the still-empty `alpha` module.
2. call getattr(alpha, 'bravo') -- this fails, because `alpha.bravo` isn't finished being initialized yet, so bravo hasn't been inserted into alpha's symbol table.
This is the same thing that happens during a circular import -- the module
isn't finished being initialized, so the symbol table isn't completely
updated, and attribute access fails.
If you were to replace the offending line in echo/__init__.py with this:
import sys
sys.modules['alpha.bravo.delta'].delta_foo(1)
That would probably work, since delta is completely initialized. But until
bravo is complete (after echo and charlie return), the symbol table for alpha
won't be updated, and you won't be able to access bravo through it.
Also, as @Ric Poggi says, if you change the import line to
from alpha.bravo.delta import delta_foo
Then that will work. In this case, because `from alpha.bravo.delta` goes right
to the sys.modules dict, rather than traversing from alpha to bravo to delta,
it can get the function from the delta module and assign it to a local
variable, which you can then access without any trouble.
|
google app engine get_serving_url() is not defined
Question: It seems like an easy problem yet I cant figure it out:
I call `get_serving_url()` function in my code and get en error:
> NameError: global name 'get_serving_url' is not defined
my import statement currently looks like: from google.appengine.api import
images
Before, I tried various "from PIL import Image" statements and got import
errors. I recently installed the PIL library.
I have site-packages and the PIL folder on my Python path.
how do I make `get_serving_url()` work?
Answer:
from google.appengine.api import images
^ Is correct.
Then use `images.get_serving_url()` to make it look for the function in the
module you just imported.
If you don't prefix it with `images.`, it will look in the `globals()` of your
current module; when Python can't find it there, it raises `NameError`.
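For example (a sketch, assuming you already have a `blob_key` for an image
stored in the Blobstore):

    from google.appengine.api import images

    url = images.get_serving_url(blob_key)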
|
Python and py2exe - Implicitely Importing Modules
Question: I've used py2exe several times in the past to create *.exe files for my python
programs. However, I'm getting an error this time. I think I know what the
issue is, but I don't know how to resolve it.
I have a handful of wx.Panels in a subfolder and it could be a variable
amount, so I import them via a function that finds the *.py files in the
folder and calls the function below to actually import each panel.
In normal python, this works well. However, py2exe leaves these files out. I
assume that because they are not explicitly imported, py2exe doesn't believe
they are needed. Is there a solution to this? Some option in py2exe that I'm
unaware of?
Thanks!
# module = Module to be imported (string)
# folder = Folder containing the module (string)
def import_module(module, folder=None):
if folder is None:
return __import__(module)
return getattr(__import__('%s.%s' % (folder.replace(os.path.sep, '.'),
module)), module)
...within some other function...
modules = [import_module(os.path.basename(os.path.splitext(filename)[0]), 'Panels') for filename in glob.glob('Panels//*.py')]
**EDIT**
I'm adding a sample setup.py script I've used. But I've used probably 20
different variations and several completely different scripts (what I could
find on the internet). Note that one requirement is that it is completely
self-contained in one executable file.
from distutils.core import setup
import py2exe
import wxversion
wxversion.select("2.8.12.1")
import wx
import wx.lib.pubsub
includes = []
excludes = ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'email', 'pywin.debugger',
'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl',
'Tkconstants', 'Tkinter']
packages = ['wx.lib.pubsub']
dll_excludes = ['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll',
'tk84.dll']
import glob
my_data_files = [('Panels', glob.glob('Panels/*.py'))]
setup(
options = {"py2exe": {"compressed": 2,
"optimize": 2,
"includes": includes,
"excludes": excludes,
"packages": packages,
"dll_excludes": dll_excludes,
"bundle_files": 2,
"dist_dir": "dist",
"xref": False,
"skip_archive": False,
"ascii": False,
"custom_boot_script": '',
}
},
zipfile = None,
#data_files = my_data_files,
windows=['Main.py']
)
Answer: I believe I've found the solution for my issue. In my setup.py file, I
replaced the 'includes = []' line with:
includes = ['Panels.%s' % os.path.basename(os.path.splitext(filename)[0])
            for filename in glob.glob('Panels//*.py')]

(Note that this requires `import os` at the top of setup.py, alongside the
existing `import glob`.)
In my code where I use the 'import_module' function, it used to use that glob
to import the modules within the Panels directory. Instead, I've hard-coded a
list of modules to include.
This isn't the exact solution I wanted (I don't want to hard-code that list of
panels), but it does seem to work. Unless I discover anything better, this is
what I'll continue to use.
|
Add clr reference to WordPerfect dll using python
Question: After trying several variations on the theme of clr.AddReference, I am still
unable to add a reference to the "wpwin14.dll", which is the interop reference
for WordPerfect X4. I am using Eclipse on Windows, and other clr calls work
fine, so I assume my IronPython/IDE setup is fine. I have successfully added
that file name to Visual Studio for a project in VB.NET, so I believe that
file is the correct file to use.
The simple code I am using is this:
import clr
clr.AddReferenceToFile("wpwin14.dll")
I have also tried clr.AddReferenceToFileAndPath using the full path on my
machine. I have also tried these commands from the IronPython command line.
Just in case the wpwin14.dll file was not appropriate for the .NET reference,
I converted the associated wpwin14.tlb file into a dll, using the tlbimp
command. That did not work either.
The error message I get is this: IOError: System.IO.IOException: Could not add
reference to assembly wpwin14.dll
Am I missing something basic here?
Answer: Probably the easiest way to interact with the COM library would be to use
something like the following
from System import Activator, Type
shell = Activator.CreateInstance(Type.GetTypeFromProgID('Shell.Application'))
Just replace Shell.Application with the ProgID for the WordPerfect application
COM object. IronPython will take care of the method resolution when you start
calling methods.
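So for WordPerfect, something like the following (a sketch -- the ProgID here
is an assumption based on WordPerfect's PerfectScript automation interface;
verify the exact ProgID in the registry or the WordPerfect SDK docs):

    from System import Activator, Type
    # 'WordPerfect.PerfectScript' is assumed, not verified
    wp = Activator.CreateInstance(Type.GetTypeFromProgID('WordPerfect.PerfectScript'))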
|
Running Python scripts in Java
Question: I'm trying to run a python script during the execution of my java code,
because it will depend on the output received from the python script. So far
I've tried using jythonc, unfortunately without success, and now I'm trying to
use the java
[Runtime](http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Runtime.html)
and java
[Process](http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/Process.html)
to execute the python script.
Now I've run into a problem when trying to call the python script. I feel as
though it doesn't even call the script because it takes less than a couple
seconds to get to the next page....
Could the problem be how I am calling the python script?? I am trying to run
this through a web application...
Here is some of my code:
String run = "cmd /c python duplicatetestingoriginal.py";
boolean isCreated = fwr.writeFile(BugFile, GD, 500, true, 5, "LET");
if(isCreated){
try{
r = Runtime.getRuntime();
p = r.exec(run);
BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream()));
BufferedReader stdError = new BufferedReader(new InputStreamReader(p.getErrorStream()));
String line = "";
while ((line = stdInput.readLine()) != null) {
System.out.println(line);
}
while ((line = stdError.readLine()) != null) {
errorW.write(line);
}
int exitVal = p.waitFor();
arrayList = fwr.readResults();
}catch(Exception e){
}
}
else{
// troubleshoot....
}
Answer: Instead of a single String for the command, split it into chunks and pass a
String[]; for the question's case that would be something like
new String[]{"python", "duplicatetestingoriginal.py"}. There's no need to state
`cmd /c`, I think.
This is a sample code from my application:
//Running on windows
command = new String[4];
command[0]=directory.getCanonicalPath()+"/data/ExtenalApp.exe"; //extenal commandline app, not placed in path, but in subfolder
command[1]=directory.getCanonicalPath()+"/data/SomeFile.txt"; //file needed for the external app, sent as an argument
command[2]=arg1; //argument for the app
command[3]=arg2; //argument for the app
//Running on Mac
command = new String[6];
command[0]="python";
command[1]=directory.getCanonicalPath()+"/data/wp.py"; //path to the script
command[2]="-F"; //argument/Flag/option
command[3]="--dir="+path; //argument/option
command[4]="--filename="+filename; //argument/option
command[5]=argument; //argument/option
Process process = Runtime.getRuntime().exec(command);
process.waitFor();
process.destroy();
I don't handle the Input/Output streams because the script/app doesn't require
input, and outputs only when finished, nothing important. Which might not be
the case for you.
|
isinstance() and type() equivelence failure due to import mechanism (python/django)
Question: In a Django project I'm working on I import a form in the view as follows
#views.py
from forms import SomeForm
then in a test file I have
#form_test.py
from app.forms import SomeForm
.
.
.
self.assertTrue(isinstance(response.context['form'], SomeForm))
Why doesn't the isinstance work?
If I inspect the output of type() for the two objects I get this:
response.context form: `<class 'forms.SomeForm'>` Expected form: `<class 'app.forms.SomeForm'>`
I can fix this by making my import mechanism in views.py match that in the
form_test.py, but this seems like the wrong approach.
for reference, file structure as follows:
* site/
* manage.py
* app/
* forms.py
* views.py
* tests/
* form_test.py
Answer: `isinstance` also compares the module location: the class of
`response.context['form']` has module `forms`, whereas `SomeForm`'s module is
`app.forms`. You can check this by inspecting `__class__.__module__` and
`__module__` respectively.
To make isinstance work you can:
* fix the import in the views.py (recommended)
  * alter sys.path in form_test.py to be able to import the form as from forms import SomeForm
* try [intrapackage references](http://docs.python.org/tutorial/modules.html#intra-package-references)
|
Python : 2d contour plot from 3 lists : x, y and rho?
Question: I have a simple problem in python and matplotlib. I have 3 lists : x, y and
rho with rho[i] a density at the point x[i], y[i]. All values of x and y are
between -1. and 1. but they are not in a specific order.
How to make a contour plot (like with imshow) of the density rho (interpolated
at the points x, y).
Thank you very much.
EDIT : I work with large arrays : x, y and rho have between 10,000 and
1,000,000 elements
Answer: You need to interpolate your `rho` values. There's no one way to do this, and
the "best" method depends entirely on the a-priori information you should be
incorporating into the interpolation.
Before I go into a rant on "black-box" interpolation methods, though, a radial
basis function (e.g. a "thin-plate-spline" is a particular type of radial
basis function) is often a good choice. If you have millions of points, this
implementation will be inefficient, but as a starting point:
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate
# Generate data:
x, y, z = 10 * np.random.random((3,10))
# Set up a regular grid of interpolation points
xi, yi = np.linspace(x.min(), x.max(), 100), np.linspace(y.min(), y.max(), 100)
xi, yi = np.meshgrid(xi, yi)
# Interpolate
rbf = scipy.interpolate.Rbf(x, y, z, function='linear')
zi = rbf(xi, yi)
plt.imshow(zi, vmin=z.min(), vmax=z.max(), origin='lower',
extent=[x.min(), x.max(), y.min(), y.max()])
plt.scatter(x, y, c=z)
plt.colorbar()
plt.show()

|
Why does Python CSV module make the CSV file read-only when I try to open it after running program?
Question: I'm using a program (follows) to see the similarities in certain columns
between two CSV files, then create a third when data matches certain
specifications (two columns are the same but the third is not) so that I can
update an e-mail list.
When I try to open the results.csv file after running the program however,
Windows Excel will only open the program in read-only mode.
Any thoughts?
Here's my code:
import csv
sample_data = open("sample.csv", "r")
lib_data = open("library.csv", "r")
csv1 = csv.reader(sample_data)
csv2 = csv.reader(lib_data)
results = open("results.csv", "w")
res_csv = csv.writer(results)
limit = 1071
limit2 = 1001
x = 0
y = 0
while (y != limit):
    row1 = csv1.__next__()
    while (x != limit2):
        row2 = csv2.__next__()
        if (row1[0] == row2[3] and row1[1] == row2[2] and row1[2] != row2[5]):
            print ("SAMPLE:")
            print (row1[0], ", ", row1[1], ", ", row1[2])
            print ("LIBRARY:")
            print (row2[3], ", ", row2[2], ", ", row2[5])
            print("\n")
            res_csv.writerow(row1)
        x = x+1
    y = y+1
    x = 0
    lib_data.seek(0)
Answer: Your script never closes `results.csv`, so the process keeps a write lock on
the file, which is why Excel will only open it read-only. Use `with` to ensure
files will be closed properly:
with open("sample.csv", "r") as sample_data:
    with open("library.csv", "r") as lib_data:
        with open("results.csv", "w") as results:
            # other code
You can even put several variables into one `with` if you're using Python >=
2.7.
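For example:

with open("sample.csv", "r") as sample_data, \
     open("library.csv", "r") as lib_data, \
     open("results.csv", "w") as results:
    # other code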
|
Syncing directories with python's ftplib
Question: I'm learning python and trying to write a code to sync two directories: one is
on ftp server, the other is on my local disk. So far, I wrote a working code
but I have a question or two about it :)
import os
from ftplib import FTP

h_local_files = []  # create local dir list
h_remote_files = [] # create remote dir list
h_local = 'C:\\something\\bla\\' # local dir

ftp = FTP('ftp.server.com')
ftp.login('user', 'pass')

if os.listdir(h_local) == []:
    print 'LOCAL DIR IS EMPTY'
else:
    print 'BUILDING LOCAL DIR FILE LIST...'
    for file_name in os.listdir(h_local):
        h_local_files.append(file_name) # populate local dir list

ftp.sendcmd('CWD /some/ftp/directory')
print 'BUILDING REMOTE DIR FILE LIST...\n'
for rfile in ftp.nlst():
    if rfile.endswith('.jpg'): # i need only .jpg files
        h_remote_files.append(rfile) # populate remote dir list

h_diff = sorted(list(set(h_remote_files) - set(h_local_files))) # difference between two lists

for h in h_diff:
    with open(os.path.join(h_local,h), 'wb') as ftpfile:
        s = ftp.retrbinary('RETR ' + h, ftpfile.write) # retrieve file
        print 'PROCESSING', h
        if str(s).startswith('226'): # comes from ftp status: '226 Transfer complete.'
            print 'OK\n' # print 'OK' if transfer was successful
        else:
            print s # if error, print retrbinary's return
This piece of code should make two python lists: a list of files in local
directory and a list of files in ftp directory. After removing duplicates from
lists, the script should download 'missing' files to my local directory.
For now, this piece of code is doing what I need, but I have noticed that when
I run it my output is not acting how I imagine it would act :)
For example, my current output goes:
PROCESSING 2012-01-17_07.05.jpg
OK
# LONG PAUSE HERE
PROCESSING 2012-01-17_07.06.jpg
OK
# LONG PAUSE HERE
PROCESSING 2012-01-17_07.06.jpg
OK
etc...
but I imagine that it should work like this:
PROCESSING 2012-01-17_07.05.jpg
# LONG PAUSE HERE (WHILE DOWNLOADING)
OK
PROCESSING 2012-01-17_07.06.jpg
# LONG PAUSE HERE (WHILE DOWNLOADING)
OK
PROCESSING 2012-01-17_07.06.jpg
# LONG PAUSE HERE (WHILE DOWNLOADING)
OK
etc...
As I said, I just started to learn python, and maybe I'm doing some stuff here
completely wrong (`if str(s).startswith('226')`????). Maybe I cannot achieve
this with`ftplib` only? So in the end my questions are:
What am I doing wrong here? :)
How to produce 'proper' output and is there a way to print some kind of status
while downloading a file (at least a line of dots), for example:
PROCESSING 2012-01-17_07.05.jpg
..........
OK
PROCESSING 2012-01-17_07.06.jpg
......
OK
PROCESSING 2012-01-17_07.06.jpg
...............
OK
etc...
Thanks a lot for helping!
Answer: `retrbinary` blocks until it is complete. This is why you see `Processing ZZZ\nOK`
immediately, because it occurs after the call to retrbinary has completed.
If you want to print `.` for each call, then you need to provide a callback
function to do this. here is the docstring for retrbinary:
"""Retrieve data in binary mode. A new port is created for you.
Args:
cmd: A RETR command.
callback: A single parameter callable to be called on each
block of data read.
blocksize: The maximum number of bytes to read from the
socket at one time. [default: 8192]
rest: Passed to transfercmd(). [default: None]
Returns:
The response code.
"""
So, you need to provide a different callback that both writes the file and
prints out '.'
import sys  # At the top of your module.
# Modify your retrbinary call. Note: file.write() returns None, so chaining
# the two calls with `and` would short-circuit and never print the dot;
# group them in a tuple instead so both always run.
ftp.retrbinary('RETR ' + h, lambda s: (ftpfile.write(s), sys.stdout.write('.')))
You may have to edit that snippet of code, but it ought to give you an idea of
what to do.
|
My python twisted irc bot responding to commands
Question: Hello people, I'm building an IRC bot using python twisted. Everything is built,
but the bot doesn't respond to commands like I want it to. For example, if I
want to call a bot command on the IRC channel, I want to be able to call it
like this: $time, and have the bot reply with what time it is. I am able to get
it to work like this -> crazybot $time, and it prints the time, but I don't
want to have to type the name every time... How do I get the bot to run the
commands without calling the name first? Here is the update -> everything
connects .......
def privmsg(self, user, channel, msg):
    user = user.split('!', 1)[0]
    if not msg.startswith('#'): # not a trigger command
        return # do nothing
    command, sep, rest = msg.lstrip('#').partition(' ')
    func = getattr(self, 'command_' + command, None)

def command_time(self, *args):
    return time.asctime()
.... When I type !time there is no error and no output ..
Answer: You could modify [`MyFirstIrcBot`](http://www.habnabit.org/twistedex.html):
Replace `!` by `$` in:
if not message.startswith('!'): # not a trigger command
    return # do nothing
command, sep, rest = message.lstrip('!').partition(' ')
Add:
from datetime import datetime
# ...
def command_time(self, rest):
    return datetime.utcnow().isoformat()
|
Python SEM_OPEN Error
Question: I am trying to use Celery running with RabbitMQ and Django. So far i have
RabbitMQ 2.7 installed on my server, along with python 2.7 , Django 1.3,
celery 2.4.6 and django-celery 2.4.2
i followed the simple instructions here <http://django-
celery.readthedocs.org/en/latest/introduction.html>
and then i tried to start celery by running
$ python manage.py celeryd
but it gave me this error
Unrecoverable error: ImportError('This platform lacks a functioning sem_open implementation, therefore, the required synchronization primitives needed will not function, see issue 3770.
i did some research and found the bug here <http://bugs.python.org/issue3770>,
and its supposedly fixed, but i guess not for python 2.7.
Also, my OS is CentOS release 4.9 (Final), and I built Python from source.
I started the python shell and ran :
import multiprocessing
with no errors, however i then ran:
from multiprocessing import synchronize
and i got the error ( above ).
Any help is appreciated.
Answer: I ran into the same problem on CentOS 6 running under OpenVZ. I had to mount
`/dev/shm` because it was missing. Add the following to `/etc/fstab`:
tmpfs /dev/shm tmpfs defaults 0 0
And then run `sudo mount /dev/shm` and see if it works. I had my own custom-built
Python 2.7.3 and this device _needs to be present when building as well_,
otherwise Python will not be built with `sem_open` support. Check for the
following output when running `./configure`:
checking for sem_open... yes
|
Makefile for C program that uses numpy extensions
Question: Please what is the simplest / most elegant way of how to determine correct
paths for numpy include as they are present on target system ? And then use it
by make command ? At the moment I am using
gcc ... -I/usr/include/python2.7/ -I/usr/lib/python2.7/site-
packages/numpy/core/include/numpy/
and I would like to have those two includes automatically selected based on
system on which the build is perfromed on.
It seems like I can get the second include like this:
python -c "import numpy; print numpy.__path__[0] + 'core/include/numpy/'"
but I am not sure about the first one and even if I was I still wouldn't be
sure how to best use it from makefile (in an easy / elegant way)
Answer: `numpy.get_include()` is the easiest/best way to get the includes. If your C
extension module uses numpy then in
[Extension](http://docs.python.org/distutils/apiref.html#distutils.core.Extension)
you have to use `include_dirs=[numpy.get_include()]`. Why
`numpy.get_include()` doesn't seem to have any documentation I don't know.
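For example, a minimal setup.py sketch (the module and source file names here
are hypothetical):

import numpy
from distutils.core import setup, Extension

# 'mymodule' and 'mymodule.c' are placeholder names
ext = Extension('mymodule',
                sources=['mymodule.c'],
                include_dirs=[numpy.get_include()])
setup(name='mymodule', ext_modules=[ext])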
Then you can do as user1056255 suggests but just a bit better...
CFLAGS = $(shell python-config --includes) $(shell python -c "import numpy; print '-I' + numpy.get_include()")
|
python-daemon context fails to start when a stale PID file is present
Question: I'm using python-daemon, and having the problem that when I `kill -9` a
process, it leaves a pidfile behind (ok) and the next time I run my program it
doesn't work unless I have already removed the pidfile by hand (not ok).
I catch all exceptions in order that `context.close()` is called before
terminating -- when this happens (e.g. on a `kill`) the /var/run/mydaemon.pid*
files are removed and a subsequent daemon run succeeds. However, when using
SIGKILL (`kill -9`), I don't have the chance to call `context.close()`, and
the /var/run files remain. In this instance, the next time I run my program it
does not start successfully -- the original process returns, but the
daemonized process blocks at `context.open()`.
It seems like python-daemon ought to be noticing that there is a pidfile for a
process that no longer exists, and clearing it out, but that isn't happening.
Am I supposed to be doing this by hand?
Note: I'm not using `with` because this code runs on Python 2.4
from daemon import DaemonContext
from daemon.pidlockfile import PIDLockFile
context = DaemonContext(pidfile = PIDLockFile("/var/run/mydaemon.pid"))
context.open()
try:
    retry_main_loop()
except Exception, e:
    pass
context.close()
Answer: With the script provided [here](http://stackoverflow.com/questions/473620/how-
do-you-create-a-daemon-in-python/9047339#9047339) the pid file remains on kill
-9 as you say, but the script also cleans up properly on a restart.
|
urllib3 maxretryError
Question: I have just started using urllib3, and I am running into a problem
straightaway. According to their manuals, I started off with the simple
example:
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib3
>>>
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'http://google.com/')
I get thrown the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 65, in request
**urlopen_kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 78, in request_encode_url
return self.urlopen(method, url, **urlopen_kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
return self.urlopen(method, e.new_url, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
return self.urlopen(method, e.new_url, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
return self.urlopen(method, e.new_url, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 113, in urlopen
return self.urlopen(method, e.new_url, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 109, in urlopen
return conn.urlopen(method, url, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 309, in urlopen
raise MaxRetryError(url)
urllib3.exceptions.MaxRetryError: Max retries exceeded for url: http://google.com/
Any clues as to why this happens? Many thanks.
Answer: This is a known bug which has been fixed in the master branch:
* [#28: "Usage" example at top of docs raises MaxRetryError (Fixed)](https://github.com/shazow/urllib3/issues/28)
I really should have published a bugfix release last weekend with this fix,
but I ran out of time. The next release this coming weekend should include
this fix (and a bunch of other cool improvements). Sorry for the troubles!
**Update:** [urllib3 v1.2](http://pypi.python.org/pypi/urllib3#changes) is now
on PyPI which includes this fix and more. :)
|
Running non-django commands from a sub-directory for a Django project hosted on Heroku?
Question: I've deployed a Django application on Heroku. The application by itself works
fine. I can run commands such as `heroku run python project/manage.py
syncdb`and `heroku run python project/manage.py shell` and this works well.
My Django project makes use of the Python web scraping library called Scrapy.
Scrapy comes with a command called `scrapy crawl abc` which helps me scrape
websites I have defined in the scrapy application. When I run a scrapy command
such as `scrapy crawl spidername` on my local machine, the application is able
to scrape date and copy it to my database. However when I run the same command
on Heroku under a sub-directory of my project directory `heroku run scrapy
crawl spidername`, nothing happens.
I don't see anything in the Heroku logs which can point to where I'm going
wrong:
2012-01-26T15:45:38+00:00 heroku[run.1]: State changed from created to starting
2012-01-26T15:45:43+00:00 app[run.1]: Awaiting client
2012-01-26T15:45:43+00:00 app[run.1]: Starting process with command `project/spiderMainDir scrapy crawl spidername`
2012-01-26T15:45:44+00:00 heroku[run.1]: State changed from starting to up
2012-01-26T15:45:46+00:00 heroku[run.1]: State changed from up to complete
2012-01-26T15:45:46+00:00 heroku[run.1]: Process exited
Some additional information:
My scrapy app calls `pipelines.py` to save the scraped items to the database.
In the `pipelines.py` file, this is what I've written to invoke the Django
settings so that I can import my models and save data to the database from the
scrapy application.
import os,sys
PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(PROJECT_PATH)
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
Any pointers on where exactly am I going wrong? How do I execute the `scrapy`
command on Heroku such that my application can scrape an external website and
save that data to the database. Isn't the way external commands are run in
Heroku like - `heroku run command`?
Answer: I'm answering my own question because I discovered what the problem was.
Heroku for some reason was not able to find `scrapy` when I executed the
command from a sub-directory and not the top-level directory.
The command `heroku run ...` is generally run from the top-level directory.
For my project which uses scrapy, I was required to go to a sub-directory and
run the `scrapy` command from the sub-directory (this is how scrapy is
designed). This wasn't working in Heroku. So I went to the Heroku bash by
typing `heroku run bash` to see what was going on. When I ran the `scrapy`
command from the top-level directory, Heroku recognized the command but when I
went to a sub-directory, it failed to recognize the `scrapy` command. I
suppose there is some problem related to the path. From the sub-directory, I
had to specify the complete path to `scrapy` (`~/bin/scrapy crawl spidername`)
to be able to execute it.
To run the `scrapy` command without going to the Heroku bash manually each
time, my work around this problem was that I created a shell script containing
the following code and put it under the bin directory of my top-level
directory and pushed the changes to Heroku.
bin/scrapy.sh :
#!/usr/bin/env bash
cd ~/project/spiderSubDirectory
~/bin/scrapy $@
After this was done, I could execute `$ heroku run scrapy.sh crawl spidername`
from my local bash. I suppose its not the best solution but this works.
|
dpkt throws NeedData on valid pcap
Question: I have this python code:
import sys
import dpkt
f = file("pcaop.Pcap")
pcap = dpkt.pcap.Reader(f)
i = 0
for ts, buf in pcap:
    print "Ya"
dpkt throws NeedData on the 52nd packet. The same one every time - I've
checked packet 52 and it is the same as everyone else on wireshark.
What causes this?
Answer: Solution is provided here: [Python stops reading file using
read](http://stackoverflow.com/questions/8440819/python-stops-reading-file-
using-read) I had the same problem when dpkt.pcap was working fine under Linux
but failed instantly when run in Windows. The problem is that when a file is
opened in text mode with `open("filename", "r")`, reading stops at the first
EOF character (Ctrl-Z) encountered. The fix is to open the file in binary
mode: `open("filename", "rb")`.
|
How to generate a repeatable random number sequence?
Question: I would like a function that can generate a pseudo-random sequence of values,
but for that sequence to be repeatable every run. The data I want has to be
reasonably well randomly distributed over a given range, it doesn't have to be
perfect.
I want to write some code which will have performance tests run on it, based
on random data. I would like that data to be the same for every test run, on
every machine, but I don't want to have to ship the random data with the tests
for storage reasons (it might end up being many megabytes).
The library for the `random` module doesn't appear to say that the same seed
will always give the same sequence on any machine.
EDIT: If you're going to suggest I seed the data (as I said above), please
provide the documentation that says the approach valid, and will work on a
range of machines/implementations.
EDIT: CPython 2.7.1 and PyPy 1.7 on Mac OS X and CPython 2.7.1 and CPython
2.52=.2 Ubuntu appear to give the same results. Still, no docs that stipulate
this in black and white.
Any ideas?
Answer: For this purpose, I've used a repeating MD5 hash, since the intention of a
hashing function is a cross-platform one-to-one transformation, so it will
always be the same on different platforms.
import md5

def repeatable_random(seed):
    hash = seed
    while True:
        hash = md5.md5(hash).digest()
        for c in hash:
            yield ord(c)

def test():
    for i, v in zip(range(100), repeatable_random("SEED_GOES_HERE")):
        print v
Output:
184 207 76 134 103 171 90 41 12 142 167 107 84 89 149 131 142 43 241 211 224 157 47 59 34 233 41 219 73 37 251 194 15 253 75 145 96 80 39 179 249 202 159 83 209 225 250 7 69 218 6 118 30 4 223 205 91 10 122 203 150 202 99 38 192 105 76 100 117 19 25 131 17 60 251 77 246 242 80 163 13 138 36 213 200 135 216 173 92 32 9 122 53 250 80 128 6 139 49 94
Essentially, the code will take your seed (any valid string) and repeatedly
hash it, thus generating integers from 0 to 255.
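Note that the standalone md5 module is deprecated in favor of hashlib since
Python 2.5; the equivalent generator, for example:

import hashlib

def repeatable_random(seed):
    hash = seed
    while True:
        hash = hashlib.md5(hash).digest()
        for c in hash:
            yield ord(c)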
|
importing target functions | multiprocessing
Question: I want to learn multiprocessing in python. I started reading
<http://www.doughellmann.com/PyMOTW/multiprocessing/basics.html> and I am not
able to understand the section on importing target functions.
In particular what does the following sentence mean..
"Wrapping the main part of the application in a check for __main__ ensures
that it is not run recursively in each child as the module is imported."
Can someone explain this in more detail with an example ?
Answer: <http://effbot.org/pyfaq/tutor-what-is-if-name-main-for.htm>
<http://docs.python.org/tutorial/modules.html#executing-modules-as-scripts>
[What does <if __name__=="__main__":>
do?](http://stackoverflow.com/questions/419163/what-does-if-name-main-do)
<http://en.wikipedia.org/wiki/Main_function#Python>
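In short: on Windows, multiprocessing starts each child process by importing
your module, so any top-level code would run again in every child. Guarding the
entry point prevents that. A minimal sketch:

import multiprocessing

def worker():
    print 'worker running'

if __name__ == '__main__':
    # Without this guard, importing this module in a child process on
    # Windows would spawn another Process, recursing indefinitely.
    p = multiprocessing.Process(target=worker)
    p.start()
    p.join()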
|
break two for loops
Question: > **Possible Duplicate:**
> [How to break out of multiple loops in
> Python?](http://stackoverflow.com/questions/189645/how-to-break-out-of-
> multiple-loops-in-python)
is it possible to break out of two **for** loops in Python?
i.e.
for i in range(1,100):
    for j in range(1,100):
        break ALL the loops!
Answer: No, there is no nested `break` statement in python.
Instead, you can simplify your function, like this:
import itertools
for i,j in itertools.product(range(1, 100), repeat=2):
    break
.. or put the code into its own function, and use `return`:
def _helper():
    for i in range(1,100):
        for j in range(1,100):
            return

_helper()
.. or use an exception:
class BreakAllTheLoops(BaseException): pass

try:
    for i in range(1,100):
        for j in range(1,100):
            raise BreakAllTheLoops()
except BreakAllTheLoops:
    pass
.. or use for-else-continue:
for i in range(1,100):
    for j in range(1,100):
        break
    else:
        continue
    break
.. or use a flag variable:
exitFlag = False
for i in range(1,100):
    for j in range(1,100):
        exitFlag = True
        break
    if exitFlag:
        break
|
getting arguments from command line python
Question: I am trying to get three arguments from command line:
-o (for outputfile) -k (number of clusters) -l (data to be clustered)
So I wrote this.
def get_input():
    print 'ARGV :', sys.argv[1:]
    options, remainder = getopt.getopt(sys.argv[1:], 'o:v:k:l', ['output=',
                                                                 'verbose',
                                                                 'k_clust=',
                                                                 'limit='])
    print "options ", options
    file_flag, k_flag, count_flag = False, False, False
    for opt, arg in options:
        print opt
        if opt in ('-o', '--output'):
            print "here ", opt, arg
            output_filename = arg
            o_flag = True
        if opt in ('-v', '--verbose'):
            verbose = True
        if opt == '--version':
            version = arg
        if opt in ('-k','--k_clust'):
            print "here", opt, arg
            k_clust = arg
            k_flag = True
        if opt in ('-l','--limit'):
            kcount = arg
            assert kcount != 0 and kcount != ''
            print "limit ", arg
            count_flag = True
    if k_flag == False:
        sys.exit(" no cluster specified, will be exiting now")
    if o_flag == False:
        print "using default outfile name ", output_filename
    if count_flag == False:
        kcount = 10000000
    return output_filename, k_clust, kcount
Everything is working fine except the -l flag. So if my command-line command
is this:
$python foo.py -o foo.txt -k 2 -l 2
and the print argv prints
ARGV : ['-o', 'demo.txt', '-k', '2', '-l', '2']
but the options list is:
options [('-o', 'demo.txt'), ('-k', '2'), ('-l', '')]
Notice that nothing is being parsed in the "l" field. What am I doing wrong?
Thanks
Answer: The immediate bug is in your short-option string: in `'o:v:k:l'` there is no
colon after `l`, so getopt treats `-l` as a flag that takes no argument (and
the colon after `v` makes `-v` expect one). It should be `'o:vk:l:'`. That
said, `getopt` is a rather old module. If you have Python2.7, use
[argparse](http://docs.python.org/library/argparse.html#module-argparse). If
you have a slightly older version of Python >= 2.3, you can still [install
argparse](http://pypi.python.org/pypi/argparse):
With
import argparse
parser=argparse.ArgumentParser()
parser.add_argument('-o', help = 'outputfile')
parser.add_argument('-k', help = 'number of clusters')
parser.add_argument('-l', help = 'data to be clustered')
args=parser.parse_args()
print(args)
running
test.py -o foo.txt -k 2 -l 2
yields
Namespace(k='2', l='2', o='foo.txt')
|
Break statement in Python
Question: I am trying to break out of a for loop, but for some reason the following
doesn't work as expected:
for out in dbOutPut:
    case_id = out['case_id']
    string = out['subject']
    vectorspace = create_vector_space_model(case_id, string, tfidf_dict)
    vectorspace_list.append(vectorspace)
    case_id_list.append(case_id)
    print len(case_id_list)
    if len(case_id_list) >= kcount:
        print "true"
        break
It just keeps iterating until the end of `dbOutPut`. What am I doing wrong?
Answer: I'm guessing, based on your [previous
question](http://stackoverflow.com/questions/9039320/getting-arguments-from-
command-line-python), that `kcount` is a string, not an int. Note that when
you compare an int with a string, (in CPython version 2) the [int is always
less than the
string](http://docs.python.org/library/stdtypes.html#comparisons) because
`'int'` comes before `'str'` in alphabetic order:
In [12]: 100 >= '2'
Out[12]: False
If `kcount` is a string, then the solution is add a type to the `argparse`
argument:
import argparse
parser=argparse.ArgumentParser()
parser.add_argument('-k', type = int, help = 'number of clusters')
args=parser.parse_args()
print(type(args.k))
print(args.k)
running
% test.py -k 2
yields
<type 'int'>
2
* * *
This confusing error would not arise in Python3. There, comparing an `int` and
a `str` raises a TypeError.
|
What's the pythonic way of generating a range of chars?
Question: In other languages, I would use a construct like this:
a..z
I couldn't come up with a better solution than this:
[chr(x) for x in range(ord("a"), ord("z") + 1)]
Is there a shorter, more readable way of building such a list ?
Answer: Not necessarily great if you want to do something other than a to z, but you
can do this:
from string import ascii_lowercase
for c in ascii_lowercase:
    print c
|
Lines splines in graphic of values but y
Question: I would like to generate a graph like the link below
<http://en.wikipedia.org/wiki/Reaction_coordinate>
The graph is generated from a calculation using the ASE python library. I
would like the line to be smooth, like gnuplot's csplines smoothing.
The values are E_ads=234.4211, E_dis=0.730278 and E_reac=-0.8714.
Could anyone help me?
from ase import *
from ase.calculators.jacapo import *
import Gnuplot as gp
import numpy as np  # moved up: np.asarray is used before the plotting imports below

# -- Read in all energies
datadict = {'H2O'   : 'water.nc',
            'Pt'    : 'out-Pt.nc',
            'H2OPt' : 'H2O.Pt.nc',
            'OHPt'  : 'OHPt.nc',
            'HPt'   : 'HPt.nc',
            }
E = {}
for label, file in datadict.items():
    print 'Reading energy for %5s from file %20s' % (label, file),
    atoms = Jacapo.read_atoms(file)
    E[label] = atoms.get_potential_energy()
    print '\tE = %14.6f eV' % E[label]
print

# -- Calculate adsorption and disassociation energies
E_ads = (E['H2OPt'] - 2*E['H2O'] - E['Pt'])/2
print 'H2O adsorption energy on Pt:'
print 'E_ads =', E_ads, 'eV\n'
E_dis = E['HPt'] - E['Pt'] + E['OHPt'] - E['Pt'] - E['H2O']
print 'H2O -> OH + H disassociation energy on Pt:'
print 'E_dis =', E_dis, 'eV\n'
E_reac = E['H2OPt'] - E['HPt'] - E['OHPt'] + E['Pt']
print 'H2O@Pt -> OH@Pt +H@Pt reaction energy on Pt:'
print 'E_reac =', E_reac, 'eV\n'

# -- Collect reaction path
Epath = np.asarray([1.0, E_ads, E_dis, E_reac])
PathLabels = ['']

# -- Plot the reaction path (duplicate imports removed)
import pylab as p
import matplotlib.path as mpath
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
from numpy import array, linspace
from scipy.interpolate import spline

fig = p.figure(1)
sp = p.subplot(1,1,1)
p.plot(Epath, color='black', linestyle=':', linewidth=2.0, alpha=0.8)
p.text(0.37, 10.05, 'Free H$_2$O', fontsize=12, color='black', ha='right', va='bottom', alpha=1.0)
p.text(1.1, 238, 'H$_2$O + Pt', fontsize=12, color='black', ha='right', va='bottom', alpha=1.0)
p.title('H$_2$O disassociation')
p.ylabel('Energy [eV]')
p.xlabel('Reaction path')
#p.xlim([-0.5, 2.5])
#p.ylim([-0.5, 1.5])
sp.get_xaxis().set_ticks([]) # Turn off ticks on xaxis
#p.savefig('Teste.png')
p.show()
Answer: You can plot a regular cubic spline version of your data by just doing
something like this:
plot(np.linspace(0,3),spline([0,1,2,3],Epath,np.linspace(0,3)))
which will yield something like:

But I suspect that isn't what you want. You might need to resort to something
like [Monotone
splines](http://en.wikipedia.org/wiki/Monotone_cubic_interpolation) or [shape
preserving
splines](http://scholar.google.com/scholar?q=shape%20preserving%20spline&hl=en&as_sdt=0&as_vis=1&oi=scholart)
to get the shape which looks the same as those curves shown in your Wikipedia
link. I don't believe either of those interpolation methods are presently
implemented in scipy.
If you have a rough idea of the mathematical form of those curves, you could
always fit your own approximate function for the continuous section and just
clamp the function outside of that range. For example:
plot(np.linspace(0,3),np.maximum(E_react,spline([0,1,2,3],Epath,np.linspace(0,3))))
would yield:

which at least "looks" like the curve you linked to, even if it isn't the
correct fit.
|
Python requests exception handling
Question: How to handle exceptions with python library requests? For example how to
check is PC connected to internet?
When I try
try:
    requests.get('http://www.google.com')
except ConnectionError:
    # handle the exception
it gives me error name `ConnectionError` is not defined
Answer: Assuming you did `import requests`, you want `requests.ConnectionError`.
`ConnectionError` is an exception defined by `requests`. See the [API
documentation](http://docs.python-requests.org/en/latest/api/) here.
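For example:

import requests

try:
    requests.get('http://www.google.com')
except requests.ConnectionError:
    # handle the exception
    print 'failed to connect'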
|
reshaping ndarrays versus regular arrays in numpy?
Question: I have an object of type 'numpy.ndarray', called "myarray", that when printed
to the screen using python's "print", looks like hits
[[[ 84 0 213 232] [153 0 304 363]]
[[ 33 0 56 104] [ 83 0 77 238]]
[[ 0 0 9 61] [ 0 0 2 74]]]
"myarray" is made by another library. The value of `myarray.shape` equals (3,
2). I expected this to be a 3dimensional array, with three indices. When I try
to make this structure myself, using:
second_array = array([[[84, 0, 213, 232], [153, 0, 304, 363]],
[[33, 0, 56, 104], [83, 0, 77, 238]],
[[0, 0, 9, 61], [0, 0, 2, 74]]])
I get that `second_array.shape` is equal to `(3, 2, 4)`, as expected. Why is
there this difference? Also, given this, how can I reshape "myarray" so that
the two columns are merged, i.e. so that the result is:
[[[ 84 0 213 232 153 0 304 363]]
[[ 33 0 56 104 83 0 77 238]]
[[ 0 0 9 61 0 0 2 74]]]
**Edit:** to clarify, I know that in the case of `second_array`, I can do
`second_array.reshape((3,8))`. But how does this work for the ndarray which
has the format of `myarray` but does not have a 3d index?
`myarray.dtype` is "`object`" but can be changed to be ndarray too.
**Edit 2** : Getting closer, but still cannot quite get the `ravel`/`flatten`
followed by reshape. I have:
a = array([[1, 2, 3],
[4, 5, 6]])
b = array([[ 7, 8, 9],
[10, 11, 12]])
arr = array([a, b])
I try:
arr.ravel().reshape((2,6))
But this gives `[[1, 2, 3, 4, 5, 6], ...]` and I wanted `[[1, 2, 3, 7, 8, 9],
...]`. How can this be done?
thanks.
Answer: Indeed, `ravel` and `hstack` can be useful tools for reshaping arrays:
import numpy as np
myarray = np.empty((3,2),dtype = object)
myarray[:] = [[np.array([ 84, 0, 213, 232]), np.array([153, 0, 304, 363])],
[np.array([ 33, 0, 56, 104]), np.array([ 83, 0, 77, 238])],
[np.array([ 0, 0, 9, 61]), np.array([ 0, 0, 2, 74])]]
myarray = np.hstack(myarray.ravel()).reshape(3,2,4)
print(myarray)
# [[[ 84 0 213 232]
# [153 0 304 363]]
# [[ 33 0 56 104]
# [ 83 0 77 238]]
# [[ 0 0 9 61]
# [ 0 0 2 74]]]
myarray = myarray.ravel().reshape(3,8)
print(myarray)
# [[ 84 0 213 232 153 0 304 363]
# [ 33 0 56 104 83 0 77 238]
# [ 0 0 9 61 0 0 2 74]]
* * *
Regarding Edit 2:
import numpy as np
a = np.array([[1, 2, 3],
[4, 5, 6]])
b = np.array([[ 7, 8, 9],
[10, 11, 12]])
arr = np.array([a, b])
print(arr)
# [[[ 1 2 3]
# [ 4 5 6]]
# [[ 7 8 9]
# [10 11 12]]]
Notice that
In [45]: arr[:,0,:]
Out[45]:
array([[1, 2, 3],
[7, 8, 9]])
Since you want the first row to be `[1,2,3,7,8,9]`, the above shows that you
want the second axis to be the first axis. This can be accomplished with the
`swapaxes` method:
print(arr.swapaxes(0,1).reshape(2,6))
# [[ 1 2 3 7 8 9]
# [ 4 5 6 10 11 12]]
Or, given `a` and `b`, or equivalently, `arr[0]` and `arr[1]`, you could form
`arr` directly with the `hstack` method:
arr = np.hstack([a, b])
# [[ 1 2 3 7 8 9]
# [ 4 5 6 10 11 12]]
|
Python extension (Boost.Python & Py++) and dlopen confusion
Question: I'm wrapping a C++ project with Py++/Boost.Python under Windows and Linux.
Everything in Windows is working fine, but I'm a bit confused over the
behavior in Linux. The C++ project is built into a single shared library
called libsimif, but I'd like to split it up into 3 separate extension
modules. For simplicity, I'll only discuss two of them, since the behavior for
the third is identical. The first, called storage contains definitions of data
structures. It has no dependencies on anything defined in either of the other
two extension modules. The second module, control, uses data structures that
are defined in storage. On the C++ side of things, the headers and source
files for storage and control are in entirely different directories. I've
tried a number of different configurations to build the extensions, but one
thing that has remained consistent is that for storage, I am only generating
Py++ wrappers for the headers included in the storage directory and only
building source files in that directory along with the Py++ generated sources.
Ditto for the control extension.
The current configuration that I am using that works passes in libsimif as a
library to the distutils.Extension constructor. Then before starting Python, I
need to ensure that libsimif is found in LD_LIBRARY_PATH. Then I can launch
Python and import either module (or from them) and everything works as-
expected. Here is some sample output from this working configuration:
>>> import ast.simif.model_io.storage as storage
>>> import ast.simif.model_io.control as control
>>> dir(storage)
['DiscreteStore', 'PulseStore', 'RtStore', 'SerialStore', 'SharedMemoryBuilder', 'SharedMemoryDeleter', 'SpaceWireStore', '__doc__', '__file__', '__name__', '__package__']
>>> dir(control)
['DiscreteController', 'ModelIoController', 'PulseController', 'RtController', 'SerialController', 'SpaceWireController', '__doc__', '__file__', '__name__', '__package__']
>>> storage.__file__
'ast/simif/model_io/storage.so'
>>> control.__file__
'ast/simif/model_io/control.so'
As you can see, both modules have their own shared library and unique set of
symbols. Now here is why I am confused. In Linux, we've always set the dlopen
flags to include RTLD_NOW and RTLD_GLOBAL. If I do that, this is what happens:
>>> import sys
>>> import DLFCN
>>> sys.setdlopenflags(DLFCN.RTLD_NOW | DLFCN.RTLD_GLOBAL)
>>> import ast.simif.model_io.storage as storage
>>> import ast.simif.model_io.control as control
__main__:1: RuntimeWarning: to-Python converter for DiscreteStore::FrameData already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for PulseStore::FrameData already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for RtStore::Link already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for RtStore::FrameData already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for RtStore::RtData already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for SerialStore::FrameData already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for SharedMemoryBuilder already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for SharedMemoryDeleter already registered; second conversion method ignored.
>>> dir(storage)
['DiscreteStore', 'PulseStore', 'RtStore', 'SerialStore', 'SharedMemoryBuilder', 'SharedMemoryDeleter', 'SpaceWireStore', '__doc__', '__file__', '__name__', '__package__']
>>> dir(control)
['DiscreteStore', 'PulseStore', 'RtStore', 'SerialStore', 'SharedMemoryBuilder', 'SharedMemoryDeleter', '__doc__', '__file__', '__name__', '__package__']
>>> storage.__file__
'ast/simif/model_io/storage.so'
>>> control.__file__
'ast/simif/model_io/control.so'
So, here storage imports ok, but control complains about a bunch of duplicate
registrations. Then when inspecting the modules, control is completely wrong.
It's like it tried to import storage twice even though `__file__` reports the
correct shared libraries. Perhaps not surprising, if I change the import order
and import control ahead of storage, this is what happens:
>>> import sys
>>> import DLFCN
>>> sys.setdlopenflags(DLFCN.RTLD_NOW | DLFCN.RTLD_GLOBAL)
>>> import ast.simif.model_io.control as control
>>> dir(control)
['DiscreteController', 'ModelIoController', 'PulseController', 'RtController', 'SerialController', 'SpaceWireController', '__doc__', '__file__', '__name__', '__package__']
>>> import ast.simif.model_io.storage as storage
__main__:1: RuntimeWarning: to-Python converter for DiscreteController already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for PulseController already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for RtController already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for SerialController already registered; second conversion method ignored.
__main__:1: RuntimeWarning: to-Python converter for SpaceWireController already registered; second conversion method ignored.
>>> dir(storage)
['DiscreteController', 'ModelIoController', 'PulseController', 'RtController', 'SerialController', 'SpaceWireController', 'SpaceWireStore', '__doc__', '__file__', '__name__', '__package__']
Similar behavior, but now the storage import is FUBAR. Does anyone understand
what is going on here?
I'm using:
* x64 Python 2.6.6 on x64 RHEL6. Gcc version 4.4.6
* x64 Python 2.6.5 on x64 RHEL5. Gcc version 4.1.2
Answer: Turns out this was actually due to a quirk with how Boost.Python registration
code is generated when using balanced_split_module in Py++.
balanced_split_module basically splits up all of the registration code into a
fixed number of source files, each with its own registration function. The
source files are named using the extension name plus the generated file number
(e.g. _.cpp, but the gotcha is that the actual functions they contain do not
contain the extension name and are just a simple register_1(), register_2(),
etc. This is find and dandy when you are only importing a single module or not
making making a module's symbols global. What happens in this case is that
when you set RTLD_GLOBAL the first module import successfully, but then all
subsequent modules will call the registration functions that were loaded in as
part of the initial module.
|
Python: File doesn't read whole file, io.FileIO does - why?
Question: The following code, executed in python 2.7.2 _on windows_ , only reads in a
fraction of the underlying file:
import os
in_file = open(os.path.join(settings.BASEPATH,'CompanyName.docx'))
incontent = in_file.read()
in_file.close()
while this code works just fine:
import io
import os
in_file = io.FileIO(os.path.join(settings.BASEPATH,'CompanyName.docx'))
incontent = in_file.read()
in_file.close()
Why the difference? From my reading of the docs, they should perform
identically.
Answer: You need to open the file in binary mode, or on Windows `read()` will stop at
the first EOF character (Ctrl-Z, 0x1A) it finds. And a `docx` is a ZIP file,
which is all but guaranteed to contain such a byte somewhere.
Try
in_file = open(os.path.join(settings.BASEPATH,'CompanyName.docx'), "rb")
`FileIO` reads [raw
bytestreams](http://docs.python.org/library/io.html#io.RawIOBase) and those
are "binary" by default.
|
How to retrieve data from Google Reader?
Question: Problem: I want to download rss feed data from Google Reader. From direct feed
I could download around 20 entries, but from Google Reader I could get 5000
last posts.
My solution (that does not work):
1.) Authenticate (using this post: [Google Reader API Unread
Count](http://stackoverflow.com/questions/52880/google-reader-api-unread-
count))
2.)
[http://www.google.com/reader/atom/feed/FEED_URL?r=n&n=NUMBER_OF_ITEMS](http://www.google.com/reader/atom/feed/FEED_URL?r=n&n=NUMBER_OF_ITEMS)
this address should return X last posts. Specific url for NY Times feed and
for last 100 posts:
[http://www.google.com/reader/atom/feed/http://feeds.nytimes.com/nyt/rss/HomePage?r=n&n=100](http://www.google.com/reader/atom/feed/http://feeds.nytimes.com/nyt/rss/HomePage?r=n&n=100)
3.) use python and feedparser to get the content
> import feedparser
>
> url =
> "http://www.google.com/reader/atom/feed/http://feeds.nytimes.com/nyt/rss/HomePage?r=n&n=100"
>
> d = feedparser.parse(url)
>
> print d["feed"]["title"] # to get the title of the feed
My error message after running this code:
> Traceback (most recent call last):
>
> File "---", line 40, in print d["feed"]["title"]
>
> File "C:\Python27\lib\site-packages\feedparser.py", line 346, in __
> getitem__ return dict.__ getitem__(self, key)
>
> KeyError: 'title'
Could I use feedparser to parse the content? Or is the only way to use regular
expressions?
Answer: Looks like you have to paginate. Look at the documentation for "count" and
"continuation":
<http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI>
|
Python generator to return series of times
Question: I hope this is not outside of the abilities of Python generators, but I'd like
to build one so that every time the function is called, it returns the next
minute up until the end time.
So the function reads in a start and end time, and returns the time on a
minute by minute basis until all the time in between has been covered.
How would this be implemented? TIA
Answer: The [datetime module](http://docs.python.org/library/datetime.html) is quite
awesome. There are two datatypes you need to know about:
[`datetime`](http://docs.python.org/library/datetime.html#datetime-objects)
and [`timedelta`](http://docs.python.org/library/datetime.html#timedelta-
objects). `datetime` is a point in time, while `timedelta` is a period of
time. Basically, what I'm going to do here is start at a time and end at a
time (as a `datetime` object), and progressively add 1 minute.
This obviously has the caveat that you have to figure out how to get your
start and end time into a `datetime`. There are a number of ways to do this:
through [the
constructor](http://docs.python.org/library/datetime.html#datetime.datetime),
[right
now](http://docs.python.org/library/datetime.html#datetime.datetime.now), from
[UTC
timestamp](http://docs.python.org/library/datetime.html#datetime.datetime.fromtimestamp),
etc.
import datetime
def minute_range(start, end, step=1):
    cur = start
    while cur < end:
        yield cur
        cur += datetime.timedelta(minutes=step)
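For example, a quick usage sketch:

start = datetime.datetime(2012, 1, 1, 9, 0)
end = datetime.datetime(2012, 1, 1, 9, 5)
for t in minute_range(start, end):
    print t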
|
Python string formating using values from multiple lists
Question: I'm trying to format a string using values from several lists. The following
is pseudo-code but should give an idea of the expected output. The output
would be combination of each item in each list: **each person likes to eat all
fruits while doing all hobbies**. So how to do this in python?
There should be `len(names)*len(fruits)*len(hobbies)` possibilities (64 in my
example)
names = ['tom','marry','jessica','john']
fruits = ['oranges','apples','grapes','bananas']
hobbies = ['dancing','sitting','bicycling','watching tv']
print '%(name)s likes to eat %(fruit)s while %(hobby)s \n'
% {'name':names, 'fruit':fruits, 'hobby':hobbies}
Answer: If I understand your "The output would be combination of each item in each
list: each person likes all fruits while doing each hobby" line, you want
every possible combination. You can do this in a nested loop way:
names = ['tom','mary','jessica','john']
fruits = ['oranges','apples','grapes','bananas']
hobbies = ['dancing','sitting','bicycling','watching tv']
for name in names:
    for fruit in fruits:
        for hobby in hobbies:
            print '%(name)s likes to eat %(fruit)s while %(hobby)s' % {'name':name, 'fruit':fruit, 'hobby':hobby}
which produces
tom likes to eat oranges while dancing
tom likes to eat oranges while sitting
tom likes to eat oranges while bicycling
tom likes to eat oranges while watching tv
tom likes to eat apples while dancing
[etc.]
john likes to eat bananas while bicycling
john likes to eat bananas while watching tv
or you could use the itertools module, which has a function `product` which
gives you every possible combination of the input lists:
import itertools
for name, fruit, hobby in itertools.product(names, fruits, hobbies):
    print '%(name)s likes to eat %(fruit)s while %(hobby)s' % {'name':name, 'fruit':fruit, 'hobby':hobby}
|
extracting from a tagged corpus in python
Question: Hi, I'm trying to extract proper nouns from a tagged corpus. For example, from
the NLTK tagged corpus brown, I'm trying to extract only the words tagged
with "NP".
my code:
import nltk
from nltk.corpus import brown
f = brown.raw('ca01')
print nltk.corpus.brown.tagged_words()
w=[nltk.tag.str2tuple(t) for t in f.split() if t.split('/')[1] == 'NP']
print w
but it is not showing the words; instead it is showing only
[]
sample output:
[('The', 'AT'), ('Fulton', 'NP-TL'), ...]
[]
Why is that?
Thanks.
If I only print f.split(), then I get:
[('The', 'AT'), ('Fulton', 'NP-TL'), ('County', 'NN-TL'), ('Grand', 'JJ-TL'), ('Jury', 'NN-TL'), ('said', 'VBD'), ('Friday', 'NR'), ('an', 'AT'), ('investigation', 'NN'), ('of', 'IN'), ("Atlanta's", 'NP$'), ('recent', 'JJ'), ('primary', 'NN'), ('election', 'NN'), ('produced', 'VBD'), ('``', '``'), ('no', 'AT'), ('evidence', 'NN'), ("''", "''"), ('that', 'CS'), ('any', 'DTI'), ('irregularities', 'NNS'), ('took', 'VBD'), ('place', 'NN'), ('.', '.'), ('The', 'AT'), ('jury', 'NN'), ('further', 'RBR'), ('said', 'VBD'), ('in', 'IN'), ('term-end', 'NN'), ('presentments', 'NNS'), ('that', 'CS'), ('the', 'AT'), ('City', 'NN-TL').....
Answer: Can't really tell from what you've given us, but have you tried going into the
problem step by step? It seems that under no circumstances does
`t.split('/')[1] == 'NP'` evaluate to True. So you should start by:
1. print/debug to see what exactly does `f.split()` contain
2. make sure your condition is actually the correct one, from the little sample of output you gave there it looks to me you are looking more for: `if t.split('/')[1].startswith('NP')` but can't really tell.
EDIT:
Ok, first if that is really what `f.split()` prints to you then you should get
an exception sicne `t` is a tuple and a tuple doesnt have a `split()` method.
So you made me curious and I installed `nltk` and downloaded the 'brown'
corpus and tried your code. Now first, to me if I do:
import nltk
from nltk.corpus import brown
f = brown.raw('ca01')
print f.split()
['The/at', 'Fulton/np-tl', 'County/nn-tl', 'Grand/jj-tl', 'Jury/nn-tl', 'said/vbd', 'Friday/nr', 'an/at', 'investigation/nn', 'of/in', "Atlanta's/np$", 'recent/jj', 'primary/nn', 'election/nn', 'produced/vbd', '``/``', 'no/at', 'evidence/nn', "''/''", 'that/cs', 'any/dti', 'irregularities/nns', 'took/vbd', 'place/nn', './.', 'The/at', 'jury/nn', 'further/rbr', 'said/vbd', 'in/in', 'term-end/nn', 'presentments/nns', 'that/cs', 'the/at', 'City/nn-tl', 'Executive/jj-tl', 'Committee/nn-tl', ',/,', 'which/wdt', 'had/hvd', 'over-all/jj', 'charge/nn', 'of/in', 'the/at', 'election/nn', ',/,', '``/``', 'deserves/vbz', 'the/at', 'praise/nn', 'and/cc', 'thanks/nns', 'of/in', 'the/at', 'City/nn-tl' .....]
So I have no idea what you did there to get the result, but it was incorrect.
Now as you can see from the groups, the second part of the word is in
lowercase, that is why your code failed. So if you do:
w=[nltk.tag.str2tuple(t) for t in f.split() if t.split('/')[1].lower() == 'np']
This will get you the result:
[('September-October', 'NP'), ('Durwood', 'NP'), ('Pye', 'NP'), ('Ivan', 'NP'), ('Allen', 'NP'), ('Jr.', 'NP'), ('Fulton', 'NP'), ('Atlanta', 'NP'), ('Fulton', 'NP'), ('Fulton', 'NP'), ('Jan.', 'NP'), ('Fulton', 'NP'), ('Bellwood', 'NP'), ('Alpharetta', 'NP'), ('William', 'NP'), ('B.', 'NP'), ('Hartsfield', 'NP'), ('Pearl', 'NP'), ('Williams', 'NP'), ('Hartsfield', 'NP'), ('Aug.', 'NP'), ('William', 'NP'), ('Berry', 'NP'), ('Jr.', 'NP'), ('Mrs.', 'NP'), ('J.', 'NP'), ('M.', 'NP'), ('Cheshire', 'NP'), ('Griffin', 'NP'), ('Opelika', 'NP'), ('Ala.', 'NP'), ('Hartsfield', 'NP'), ('E.', 'NP'), ('Pelham', 'NP'), ('Henry', 'NP'), ('L.', 'NP'), ('Bowden', 'NP'), ('Hartsfield', 'NP'), ('Atlanta', 'NP'), ('Jan.', 'NP'), ('Ivan', 'NP'), ....]
Now, for future reference, double check before you post information like the one
I asked for, because if it's not correct then it's misleading and it helps
neither the ones who try to help you nor yourself. Not as criticism
but as constructive advice :)
|
How to upload files to Soundcloud using Python?
Question: I am building an application that would record what people say, generate an
audio file and upload it to SoundCloud and get the URL of the uploaded track
using Python.
I used PyAudio to record and generate an audio file - a wave file.
But I need to know how to upload the file to SoundCloud. By research I found
there is a Python wrapper for SoundCloud API and with Python library Poster,
one can easily upload files to SoundCloud.
How do I do it? I have not used this API thing before and I don't find a
proper tutorial or a guide to how to make use of it. So if anybody can help me
with this, please answer my question here.
How to use this [SoundCloud Python API
wrapper](https://github.com/soundcloud/python-api-wrapper) to upload files to
SoundCloud using Python with the help of the Python library
[Poster](http://atlee.ca/software/poster/)?
Answer: We just released a new Python API wrapper. You can get it on
[PyPi](http://pypi.python.org/pypi/soundcloud/) or from our [Github
account](https://github.com/soundcloud/soundcloud-python). To upload a track,
you'll want to first get an access token using one of the supported OAuth2
auth flows. You can read about that in the [README
file](https://github.com/soundcloud/soundcloud-python/blob/master/README.rst).
Let me know if you want me to elaborate on auth and I can edit my answer.
To get an access token, first [register your application on
soundcloud.com](http://soundcloud.com/you/apps). You will need to provide a
URI that users will be directed to after authorizing your application and you
will be given a client id and client secret. Once you have those credentials,
pass them to the Client constructor:
import soundcloud
client = soundcloud.Client(client_id=YOUR_CLIENT_ID,
client_secret=YOUR_CLIENT_SECRET,
redirect_uri="http://your/redirect/uri")
You'll then be able to redirect the user to the authorization URL in order to
authorize your app. The user will be sent to soundcloud.com to log in (if they
do not have an active session) and approve access for your app. Depending on
the framework you're using (e.g. Django, Flask, etc) it could look something
like this:
return redirect(client.authorize_url)
After approving access for your app, the user will be redirected to the
redirect uri you specified when registering your app and in the constructor.
The URL will have a query string that includes a 'code' parameter which you
can then use to obtain an access token. Again, depending on the framework
you're using, this could look like this:
code = request.params.get('code')
token = client.exchange_token(code)
print token.access_token # don't actually print it, just showing how you would access it
You should probably store the access token (i.e. in some kind of data store
like MySQL or Redis) so you can use it whenever that user wants to access
SoundCloud in the future.
Once you've got an access token, uploading a track should be pretty simple.
Once you've got your audio file, just send a POST request to the tracks
resource. Here's an example:
import soundcloud
client = soundcloud.Client(access_token='YOUR_ACCESS_TOKEN')
track = client.post('/tracks', track={
'title': 'The title you want to give your track',
'sharing': 'private', # make this 'public' if you want
'asset_data': open('yourtrack.mp4', 'rb')
})
You'll get back a track resource, which you can then use to get the Soundcloud
URL:
print track.permalink_url
Hope that helps! Let me know if you have any questions.
|
Python expected an indented block
Question: I am a newbie to Python and would like to generate some numbers according to a
geometric distribution. I found this code on the Internet, but it doesn't work:
import random
from math import ceil, log
def geometric(p):
# p should be in (0.0, 1.0].
if ((p <= 0.0) or (p >=1.0)):
raise ValueError("p must be in the interval (0.0, 1.0]")
elif p == 1.0:
# If p is exactly 1.0, then the only possible generated value is 1.
# Recognizing this case early means that we can avoid a log(0.0) later.
# The exact floating point comparison should be fine. log(eps) works just
# dandy.
return 1
# random() returns a number in [0, 1). The log() function does not
# like 0.
U = 1.0 - random.random()
# Find the corresponding geometric variate by inverting the uniform variate.
G = int(ceil(log(U) / log(1.0 - p)))
return G
p=1.0/2.0
for i in range(10):
print geometric(p)
When I try to run it tells me the following error:
File "test.py", line 8
if (p <= 0.0) or (p >=1.0):
^
IndentationError: expected an indented block
What is the error, and how can I fix it?
Answer: In Python, indentation is significant. [PEP
8](http://www.python.org/dev/peps/pep-0008/) covers good indentation style.
To take your geometric() function as an example, it should look like this
(note I've also changed the >= 1.0 test to > 1.0 so that p == 1.0 is accepted,
matching the comment):

    def geometric(p):
        # p should be in (0.0, 1.0].
        if (p <= 0.0) or (p > 1.0):  # > rather than >=, or p == 1.0 would be rejected
            raise ValueError("p must be in the interval (0.0, 1.0]")
        elif p == 1.0:
            # If p is exactly 1.0, then the only possible generated value is 1.
            # Recognizing this case early means that we can avoid a log(0.0) later.
            # The exact floating point comparison should be fine. log(eps) works just
            # dandy.
            return 1
If it's not indented properly, it's not valid Python code.
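The same goes for the loop at the bottom of your script; the body of the
`for` needs to be indented too:

    p = 1.0 / 2.0
    for i in range(10):
        print geometric(p)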
|
Parsing Python Module Docstrings
Question: Is it possible to parse module-level docstrings with the AST?
I am working on a Python documenter [here](http://github.com/mvanveen/Tacco),
and visiting the module tokens and grabbing the documentation does not yield
the module-level docstring. So far, I've had to resort to importing the module
and grabbing its `__doc__` or using `inspect` to grab the documentation.
I looked into the [pydoc module](http://docs.python.org/library/pydoc.html)
source for clues as to how other documenters parse docstrings, and discovered
that pydoc ends up having to do basically the same thing as my documenter in
order to grab the module-level strings.
Am I missing something? Is the only way to parse module-level docstrings
through actually importing the module, or is it possible to parse the
docstrings out of the AST directly?
Answer: Maybe I misunderstand the question, but can't you just do this (Python
2.7.1)?
test file:
"""
DOC STRING!!
"""
def hello():
'doc string'
print 'hello'
hello()
Interactive session:
>>> import ast
>>> M = ast.parse(''.join(open('test.py')))
>>> ast.get_docstring(M)
'DOC STRING!!'
You can also walk through the AST, looking for the slot the docstring would
be in.
>>> M._fields
('body',)
>>> M.body
[<_ast.Expr object at 0x10e5ac710>, <_ast.FunctionDef object at 0x10e5ac790>, <_ast.Expr object at 0x10e5ac910>]
>>> # doc would be in the first slot
>>> M.body[0]._fields
('value',)
>>> M.body[0].value
<_ast.Str object at 0x10e5ac750>
>>> # it contains a string object, so maybe it's the doc string
>>> M.body[0].value._fields
('s',)
>>> M.body[0].value.s
'\nDOC STRING!!\n'
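Note that `ast.get_docstring` also works on function and class definition
nodes, so you can pull the function's docstring straight from the tree as
well:

    >>> ast.get_docstring(M.body[1])
    'doc string'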
|
Symbol Table in Python
Question: How can we see the symbol table of Python source code?
I mean, Python makes a symbol table for each program before actually running
it, so my question is: how can I get that symbol table as output?
Answer: Python is dynamic rather than static in nature. Rather than a symbol table as
in compiled object code, the virtual machine has an addressable namespace for
your variables.
The `dir()` or `dir(module)` function returns the effective namespace at that
point in the code. It's mainly used in the interactive interpreter but can be
used by code as well. It returns a list of strings, each of which is the name
of a variable in that namespace.
The `globals()` function returns a dictionary of variable names to variable
values, where the variable names are considered global in scope at that
moment.
The `locals()` function returns a dictionary of variable names to variable
values, where the variable names are considered local in scope at that moment.
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> locals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, '__package__': None}
>>> globals()
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', '__doc__': None, '__package__': None}
>>> dir()
['__builtins__', '__doc__', '__name__', '__package__']
>>> import base64
>>> dir(base64)
['EMPTYSTRING', 'MAXBINSIZE', 'MAXLINESIZE', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '_b32alphabet', '_b32rev', '_b32tab', '_translate', '_translation', '_x', 'b16decode', 'b16encode', 'b32decode', 'b32encode', 'b64decode', 'b64encode', 'binascii', 'decode', 'decodestring', 'encode', 'encodestring', 'k', 're', 'standard_b64decode', 'standard_b64encode', 'struct', 'test', 'test1', 'urlsafe_b64decode', 'urlsafe_b64encode', 'v']
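For example, calling `locals()` inside a function returns just that function's
names at that moment:

    >>> def f(a):
    ...     b = a + 1
    ...     return locals()
    ...
    >>> f(1)
    {'a': 1, 'b': 2}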
|
How can python subprocess.Popen see select.poll and then later not? (select 'module' object has no attribute 'poll')
Question: I'm using the (awesome) [mrjob](http://packages.python.org/mrjob/) library
from Yelp to run my Python programs in Amazon's Elastic MapReduce. It depends
on subprocess in the standard Python library. From my Mac running Python 2.7.2,
everything works as expected.
However, when I switched to the exact same code on Ubuntu 11.04, also with
Python 2.7.2, I encountered something strange: mrjob loads the job and then
attempts to communicate with its child processes using subprocess, and
generates this error:
File "/usr/local/lib/python2.7/dist-packages/mrjob-0.3.1-py2.7.egg/mrjob/emr.py", line 1212, in _build_steps
steps = self._get_steps()
File "/usr/local/lib/python2.7/dist-packages/mrjob-0.3.1-py2.7.egg/mrjob/runner.py", line 1003, in _get_steps
stdout, stderr = steps_proc.communicate()
File "/usr/lib/python2.7/subprocess.py", line 754, in communicate
return self._communicate(input)
File "/usr/lib/python2.7/subprocess.py", line 1302, in _communicate
stdout, stderr = self._communicate_with_poll(input)
File "/usr/lib/python2.7/subprocess.py", line 1332, in _communicate_with_poll
poller = select.poll()
AttributeError: 'module' object has no attribute 'poll'
This appears to be a problem with subprocess and not mrjob.
I dug into /usr/lib/python2.7/subprocess.py and found that during import it
runs:
    if mswindows:
        ... snip ...
    else:
        import select
        _has_poll = hasattr(select, 'poll')
By editing that file, I verified that it really does set _has_poll == True.
And this is correct; it's easily verified on the command line.
However, by the time execution reaches Popen._communicate_with_poll, somehow
the select module has changed! The following was generated by printing
dir(select) right before it attempts to use select.poll():
['EPOLLERR', 'EPOLLET', 'EPOLLHUP', 'EPOLLIN', 'EPOLLMSG',
'EPOLLONESHOT', 'EPOLLOUT', 'EPOLLPRI', 'EPOLLRDBAND',
'EPOLLRDNORM', 'EPOLLWRBAND', 'EPOLLWRNORM', 'PIPE_BUF',
'POLLERR', 'POLLHUP', 'POLLIN', 'POLLMSG', 'POLLNVAL',
'POLLOUT', 'POLLPRI', 'POLLRDBAND', 'POLLRDNORM',
'POLLWRBAND', 'POLLWRNORM', '__doc__', '__name__',
'__package__', 'error', 'select']
no attribute called 'poll'!?!? How did it go away?
So, I hardcoded _has_poll=False and then mrjob happily continues with its
work, runs my job in AWS EMR, with subprocess using _communicate_with_select...
and I'm stuck with a hand-modified standard library...
Any advice? :-)
Answer: I had a similar problem, and it turns out that gevent replaces the built-in
`select` module with `gevent.select.select`, which doesn't have a `poll`
method (as `poll` is a blocking call). However, for some reason gevent doesn't
patch `subprocess`, which uses `select.poll`, by default.
An easy fix is to replace `subprocess` with `gevent.subprocess`:
import gevent.monkey
gevent.monkey.patch_all(subprocess=True)
import sys
import gevent.subprocess
sys.modules['subprocess'] = gevent.subprocess
If you do this before importing the mrjob library, it should work fine.
|
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.