How can I download a file to a specific directory?
Question: I have recently been trying to make a program in Python that downloads files
to a specific directory. I am using Ubuntu, and so far I have this:
import os
import getpass
import urllib2
y = getpass.getuser()
if not os.access('/home/' + y + '/newdir/', os.F_OK):
print("Making New Directory")
os.mkdir('/home/' + y + '/newdir/')
url = ("http://example.com/Examplefile.ex")
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 8192
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
print status,
f.close()
This currently downloads the file to the current working directory. How could I change the
directory it downloads to?
Fixed it; the new code is:
import os
import getpass
import urllib2
y = getpass.getuser()
if not os.access('/home/' + y + '/newdir/', os.F_OK):
print("Making New Directory")
os.mkdir('/home/' + y + '/newdir/')
os.chdir('/home/'+y+'/newdir/')
url = ("http://example.com/Examplefile.ex")
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 8192
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
print status,
f.close()
Answer: Sorry guys, I was being stupid, but to answer the question: I added
os.chdir('/home/' + y + '/newdir/')
right after the first if statement, e.g.:
import os
import getpass
import urllib2
y = getpass.getuser()
if not os.access('/home/' + y + '/newdir/', os.F_OK):
print("Making New Directory")
os.mkdir('/home/' + y + '/newdir/')
os.chdir('/home/'+y+'/newdir/')
url = ("http://example.com/Examplefile.ex")
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 8192
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
print status,
f.close()
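A slightly cleaner alternative (a minimal sketch, using the same example URL and directory as above) is to skip `os.chdir` and build the output path with `os.path.join`, so the download lands in the target directory regardless of the current working directory:
import os
import getpass
import urllib2

user = getpass.getuser()
target_dir = '/home/' + user + '/newdir/'
if not os.path.isdir(target_dir):
    os.mkdir(target_dir)

url = "http://example.com/Examplefile.ex"
file_name = url.split('/')[-1]

u = urllib2.urlopen(url)
# open the file inside the target directory instead of relying on chdir
f = open(os.path.join(target_dir, file_name), 'wb')
f.write(u.read())
f.close()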
|
Python Rar All File In Directory, Each File Different Directory
Question: It feels like this is a colossally stupid question, but the documentation for
rar as a whole is pretty sketchy, and using python to rar pulls an insane
number of hits, none of them even seem to be attempting what I'm trying to do
(which I find somewhat odd).
I have a directory with a bunch of files: FILE_1.ext FILE_2.ext FILE_3.ext ...
FILE_N.ext
The names aren't necessarily uniform, and neither are the extensions. I'm looking
for a Python script that, for all files in the directory that don't start with ".", runs:
rar a -m0 -R -v1g FILE_NAME.rar "FILE_NAME" (note: FILE_NAME.rar doesn't have the ".ext").
The "rar a -m0 -R -v1g FILE_NAME.rar "FILE_NAME"" part is what I use when I'm
sending a shell command, for one file, and I have to enter the FILE_NAME
myself, etc. Hasn't been a problem, but now I'm dealing with a lot of large
files, and it's too much to enter them all in one-by-one, but I need to have
each file be it's own volume.
Answer: How about:
import os
for file_n in os.listdir(DIRECTORY_NAME):
if not file_n.startswith('.') and not file_n.endswith('.rar'):
os.system('rar a -m0 -R -v1g %s.rar "%s"' %(os.path.splitext(file_n)[0], file_n))
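If the file names may contain shell metacharacters, a variant of the same loop using `subprocess` avoids shell quoting problems (a sketch assuming the `rar` binary is on the PATH):
import os
import subprocess

for file_n in os.listdir(DIRECTORY_NAME):
    if not file_n.startswith('.') and not file_n.endswith('.rar'):
        base = os.path.splitext(file_n)[0]
        # passing the arguments as a list means no shell quoting is needed
        subprocess.call(['rar', 'a', '-m0', '-R', '-v1g', base + '.rar', file_n])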
|
AttributeError: 'NoneType' object has no attribute 'str' in suds
Question: I am using the suds client for WSDL in our project.
I have this code:
sudsclient = sudsClient(settings.WSDL_URL)
values = {
"MerchantCode": settings.YP_MERCHANT_CODE,
"MerchantReference": str(reference_id),
"TransactionType":settings.YP_TRANSACTION_TYPE,
"Amount":int(charged),
"CurrencyCode":client.currency,
"CardHolderName":str(form.cleaned_data['name_on_card']),
"CardNumber": str(form.cleaned_data['card_number']),
"ExpiryMonth":int(form.cleaned_data['exp_month']),
"ExpiryYear":int(form.cleaned_data['exp_year']),
"CardID":0,
"CardSecurityCode":str(form.cleaned_data['security_code']),
"CustomerAccountNumber":"",
"BillNumber":0,
"CardHolderEmail":str(form.cleaned_data['email']),
"ClientIPAddress":get_ip,
"Notes":"OK",
}
response = sudsclient.service.OnlineTransaction(**values)
When I run my program I get this error:
Exception Type: AttributeError
Exception Value:
'NoneType' object has no attribute 'str'
Exception Location: /usr/local/lib/python2.7/dist-packages/suds/sax/document.py in str, line 48
I am sure that my code is the same locally and on the test environment.
I think the problem is in `suds`, but I don't have any idea how to
solve it.
Can anyone help me with this? Thanks in advance.
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/1/book/save/?csrfmiddlewaretoken=05e5bdb542c3be7515b87e8160c347a0&check_in=2012-04-24&check_out=2012-04-25&no_of_nights=1&quantity=1&product=4&price=900.0&chargedMasterCard=180.0&chargedVisa=90.0&totalcostMasterCard=720.0&totalcostVisa=810.0&totalcost=900.0&charged=10.0&price_rate=1000.0&old_totalcost=1000.0&discount_charged=100.0&first_name=dsnmbmh&last_name=jhbjhb&email=jdlabandero%40agile.com.ph&contact=657879&address=gjkj&no_of_adult=1&no_of_kid=0&memo=&card_type=MasterCard&card_number=40000234234210&security_code=788&name_on_card=ghjk&exp_month=1&exp_year=2012
Django Version: 1.3.1
Python Version: 2.7.1
Installed Applications:
['admin_tools',
'admin_tools.theming',
'admin_tools.menu',
'admin_tools.dashboard',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'django.contrib.admin',
'surebooked.booking',
'surebooked.api',
'surebooked.account_app',
'surebooked.client_app',
'surebooked.product_app',
'surebooked.report_app',
'debug_toolbar',
'billing',
'south',
'paypal.standard.ipn',
'django_extensions',
'cms',
'menus',
'mptt',
'south',
'cms.plugins.text',
'cms.plugins.picture',
'cms.plugins.link',
'cms.plugins.file',
'cms.plugins.snippet',
'cms.plugins.googlemap',
'sekizai',
'django.contrib.admin',
'filer',
'sorl.thumbnail',
'easy_thumbnails',
'cmsplugin_filer_file',
'cmsplugin_filer_folder',
'cmsplugin_filer_image',
'cmsplugin_filer_teaser',
'cmsplugin_filer_video',
'media_tree',
'django_cron']
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.csrf.CsrfResponseMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware',
'media_tree.middleware.SessionPostMiddleware',
'cms.middleware.page.CurrentPageMiddleware',
'cms.middleware.user.CurrentUserMiddleware',
'cms.middleware.toolbar.ToolbarMiddleware')
Traceback:
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py" in get_response
111. response = callback(request, *callback_args, **callback_kwargs)
File "/home/agileone/workspace/surebooked/surebooked/../surebooked/booking/views.py" in booking_save_page
752. response = sudsclient.service.OnlineTransaction(**values)
File "/usr/local/lib/python2.7/dist-packages/suds/client.py" in __call__
542. return client.invoke(args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/suds/client.py" in invoke
595. soapenv = binding.get_message(self.method, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/suds/bindings/binding.py" in get_message
120. content = self.bodycontent(method, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/suds/bindings/document.py" in bodycontent
63. p = self.mkparam(method, pd, value)
File "/usr/local/lib/python2.7/dist-packages/suds/bindings/document.py" in mkparam
105. return Binding.mkparam(self, method, pdef, object)
File "/usr/local/lib/python2.7/dist-packages/suds/bindings/binding.py" in mkparam
287. return marshaller.process(content)
File "/usr/local/lib/python2.7/dist-packages/suds/mx/core.py" in process
62. self.append(document, content)
File "/usr/local/lib/python2.7/dist-packages/suds/mx/core.py" in append
73. log.debug('appending parent:\n%s\ncontent:\n%s', parent, content)
File "/usr/lib/python2.7/logging/__init__.py" in debug
1120. self._log(DEBUG, msg, args, **kwargs)
File "/usr/lib/python2.7/logging/__init__.py" in _log
1250. self.handle(record)
File "/usr/lib/python2.7/logging/__init__.py" in handle
1260. self.callHandlers(record)
File "/usr/lib/python2.7/logging/__init__.py" in callHandlers
1300. hdlr.handle(record)
File "/usr/lib/python2.7/logging/__init__.py" in handle
744. self.emit(record)
File "/home/agileone/workspace/surebooked/surebooked/.ve/src/django-debug-toolbar/debug_toolbar/panels/logger.py" in emit
51. 'message': record.getMessage(),
File "/usr/lib/python2.7/logging/__init__.py" in getMessage
328. msg = msg % self.args
File "/usr/local/lib/python2.7/dist-packages/suds/sax/document.py" in __str__
58. return unicode(self).encode('utf-8')
File "/usr/local/lib/python2.7/dist-packages/suds/sax/document.py" in __unicode__
61. return self.str()
File "/usr/local/lib/python2.7/dist-packages/suds/sax/document.py" in str
48. s.append(self.root().str())
Exception Type: AttributeError at /1/book/save/
Exception Value: 'NoneType' object has no attribute 'str'
I really don't know why I get this error. Now I get the same error both
locally and in production. By the way, when I separate the code and run it on
its own, it runs OK.
sudstest.py
#!/usr/bin/env python
import os
from suds.client import Client as abo
WSDL = 'DirectConnect.production.wsdl'
#def test_api():
url = 'file://' + os.path.join(os.path.abspath(os.path.dirname(__file__)), WSDL)
print url
client = abo(url)
data = {
'MerchantCode': 'HELLO',
'MerchantReference': '3252',
'TransactionType': 20,
'Amount': 10,
'CurrencyCode': 'USD',
'CardHolderName': 'RAUL O REVECHE',
'CardNumber': 4005550000000001,
'ExpiryMonth': 5,
'ExpiryYear': 2013,
'CardID': 0,
'CardSecurityCode': 400,
'CustomerAccountNumber': '',
'BillNumber': 0,
'CardHolderEmail': '[email protected]',
'ClientIPAddress': 'http://127.0.0.1:8000/',
'Notes': 'This is test',
}
print data
result = client.service.OnlineTransaction(**data)
print result.ResponseDescription
Answer: The bug is in suds. @okm was close, but the problem is really in
`Document.__str__`. However the bug is only exposed when using django-debug-
toolbar, as the logging panel shows all the messages that have been logged.
This triggers the suds bug.
I've created a patched version of suds that fixes this problem:
<https://github.com/bradleyayers/suds-htj>
**Edit:** My patch has now been merged into <https://github.com/htj/suds-htj>
– use that repository instead
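If patching suds isn't an option, one possible workaround (my own suggestion, not part of the original answer) is to stop the offending log records from ever being formatted by raising the suds log level before invoking the service, e.g. in your settings or view code:
import logging

# suppress suds debug records so the debug toolbar's logging panel never
# formats the sax Document whose __str__ triggers the AttributeError
logging.getLogger('suds').setLevel(logging.ERROR)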
|
Incorrect value - python to model in Django
Question: The problem I have is when trying to insert some values into a database
table created by a Django model. The values are strictly numeric, but when trying to
insert them I get an error:
567523
<type 'int'>
Warning: Incorrect integer value: 'id' for column 'id_unidad' at row 1
cursor.execute(sql, ('id',))
The code I'm using to insert via a MySQL query is:
import MySQLdb
import _mysql
id = 567523
print id
print type(id)
sql = """INSERT INTO gprs_evento ( id_unidad ) VALUES (%s)"""
db = MySQLdb.Connect(host="localhost", user="*****",passwd="******",db="gp")
cursor = db.cursor()
try :
cursor.execute(sql, ('id',))
db.commit()
except _mysql.Error, e:
print "Error %d: %s" % (e.args[0], e.args[1])
Django model I'm using is:
from django.db import models
class Evento(models.Model):
id_unidad = models.IntegerField(max_length=15)
The values I attempt to insert into each field in the database:
id: 13831010240120
<type 'long'>
ip: 3235021102
<type 'long'>
e: 2
<type 'int'>
H: 102718.0
<type 'float'>
LN: 210.128871
<type 'float'>
LO: 3203.323664
<type 'float'>
V: 28.0
<type 'float'>
A: 90.0
<type 'float'>
F: 40101.0
<type 'float'>
In the model I'm following the [Field
types](https://docs.djangoproject.com/en/dev/ref/models/fields) reference and have tried
IntegerField, BigIntegerField and DecimalField, but nothing works; I'm just inserting a
normal number of up to 15 digits from Python.
Answer: You're trying to insert the _string_ `"id"`, rather than the value of the
variable `id`.
It should be:
cursor.execute(sql, (id,))
Of course, there's no reason to be using raw SQL for this at all.
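For example (a minimal sketch assuming the `Evento` model shown above; the `gprs` import path is a guess based on the table name, so adjust it to your app), the same insert through the ORM would be:
from gprs.models import Evento  # hypothetical import path

Evento.objects.create(id_unidad=567523)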
|
Python cdecimal InvalidOperation
Question: I am trying to read financial data and store it. The source I get the financial
data from stores the data with incredible precision; however, I am only
interested in 5 figures after the decimal point. Therefore, I have decided to
use t = .quantize(cdecimal.Decimal('.00001'), rounding=cdecimal.ROUND_UP) on
the Decimal I create, but I keep getting an InvalidOperation exception. Why is
this?
>>> import cdecimal
>>> c = cdecimal.getcontext()
>>> c.prec = 5
>>> s = '45.2091000080109'
>>> # s = '0.257585003972054' works!
>>> t = cdecimal.Decimal(s).quantize(cdecimal.Decimal('.00001'), rounding=cdecimal.ROUND_UP)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
cdecimal.InvalidOperation: [<class 'cdecimal.InvalidOperation'>]
Why is there an invalid operation here? If I change the precision to 7 (or
greater), it works. If I set s to be '0.257585003972054' instead of the
original value, that also works! What is going on?
Thanks!
Answer: The `decimal` version gives a better description of the error:
Python 2.7.2+ (default, Feb 16 2012, 18:47:58)
>>> import decimal
>>> s = '45.2091000080109'
>>> decimal.getcontext().prec = 5
>>> decimal.Decimal(s).quantize(decimal.Decimal('.00001'), rounding=decimal.ROUND_UP)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/decimal.py", line 2464, in quantize
'quantize result has too many digits for current context')
File "/usr/lib/python2.7/decimal.py", line 3866, in _raise_error
raise error(explanation)
decimal.InvalidOperation: quantize result has too many digits for current context
>>>
[Docs](http://docs.python.org/library/decimal.html#decimal.Decimal.quantize):
> Unlike other operations, if the length of the coefficient after the quantize
> operation would be greater than precision, then an InvalidOperation is
> signaled. This guarantees that, unless there is an error condition, the
> quantized exponent is always equal to that of the right-hand operand.
But I must confess I don't know what this means.
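In short (my reading of the quoted docs, not part of the original answer): quantizing 45.2091000080109 to five decimal places with ROUND_UP gives 45.20911, whose coefficient has seven digits, more than the context precision of 5, so InvalidOperation is signalled; 0.257585003972054 quantizes to 0.25759, only five digits, which is why it works. Raising the precision accordingly avoids the error:
>>> import cdecimal
>>> cdecimal.getcontext().prec = 7   # enough digits to hold 45.20911
>>> cdecimal.Decimal('45.2091000080109').quantize(cdecimal.Decimal('.00001'), rounding=cdecimal.ROUND_UP)
Decimal('45.20911')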
|
Sending stdin to subprocess in python script
Question: I have a python script which requires the opening of another terminal window
to send some commands. Stackoverflowers have already gotten me that far:
import subprocess
server_terminal = subprocess.Popen(['open','-a','/Applications/Utilities/Terminal.app','/Applications/Utilities/Terminal.app']) #Not sure about the inelegance of the doubled path, but it does open another terminal window, which I need.
The next step I need is to send this terminal some commands so that I can
get a local server set up on the user's desktop. I'm having some significant
trouble figuring out how to control this subprocess.
Can anyone direct me to the method I would use to make this terminal
subprocess take stdin from me? I would prefer to avoid 3rd party modules.
Edit: Some context -- My major goal is to open a [selenium
server](http://seleniumhq.org/) in a separate terminal window so that I can
issue it commands. I already know how to control the server once it's started,
I just can't figure out how to automatically start one.
Thanks for all your help!
Answer: You can use `Popen` to start the server directly:
selenium_jar = '/full/path/to/selenium-server-standalone-2.0b3.jar'
path_to_fifo_profile = '...'
# start external process
p = subprocess.Popen(['java', '-jar', selenium_jar,
'-firefoxProfileTemplate', path_to_fifo_profile,
])
# wait for it to finish before exiting
p.wait()
If you want to see the output, you can use `os.system` instead:
os.system('java '
'-jar /path/to/selenium-server-standalone-2.0b3.jar '
'-firefoxProfileTemplate /path/to/profile')
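If you really do need to feed a child process commands on stdin (rather than passing everything on the command line), `Popen` can do that as well. A generic sketch, not specific to the selenium server:
import subprocess

# start a shell whose stdin we control
p = subprocess.Popen(['/bin/sh'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
# send it a command and collect the output
out, err = p.communicate('echo hello from the child\n')
print out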
|
Image spot detection in Python
Question: I have millions of images containing every day photos. I'm trying to find a
way to pick out those in which some certain colours are present, say, red and
orange, disregarding the shape or object. The size may matter - e.g., at least
50x50 px.
Is there an efficient and lightweight library for achieving this? I know there
is OpenCV and it seems quite powerful, but would it be too bloated for this
task? It's a relatively simple task, right?
Thanks
Answer: Certainly OpenCV can do this, but you could also use the Python Imaging
Library [PIL](http://www.pythonware.com/products/pil/) and just create a
function to iterate through the image cropping small blocks of the image set
at your minimum size, and testing these blocks average colour and tolerance
against the matching criteria. Something along the lines of (untested pseudo
code):
import Image  # PIL

def contains_colour(im, block_width=50, block_height=50):
    image_width, image_height = im.size
    for y in xrange(image_height - block_height):
        for x in xrange(image_width - block_width):
            # crop() takes a single 4-tuple: (left, upper, right, lower)
            block = im.crop((x, y, x + block_width, y + block_height))
            if colour_test(block):  # test for match
                return True
    return False

im = Image.open("test_picture.png")
print contains_colour(im)
It's very easy to get the colour frequency info of an image using
`block.getcolors()`, so you can easily write the `colour_test()` function.
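For instance, a possible `colour_test()` (a sketch; the target colours, tolerance and threshold are placeholders to tune) could check what fraction of the block's pixels fall near red or orange:
def colour_test(block, threshold=0.5):
    # hypothetical target colours (RGB) and per-channel tolerance
    targets = [(255, 0, 0), (255, 165, 0)]  # red, orange
    tolerance = 40
    block = block.convert('RGB')  # make sure we get (r, g, b) tuples
    total = block.size[0] * block.size[1]
    matched = 0
    # getcolors(total) returns a list of (count, (r, g, b)) tuples
    for count, (r, g, b) in block.getcolors(total):
        for tr, tg, tb in targets:
            if (abs(r - tr) <= tolerance and abs(g - tg) <= tolerance
                    and abs(b - tb) <= tolerance):
                matched += count
                break
    return float(matched) / total >= threshold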
|
Importing Archived TweetStream Twitter Ouput into Mongodb?
Question: I have around 1000 lines of twitter data captured using python tweetstream.
The data was collected using the simple tweetstream example of:
>>> stream = tweetstream.SampleStream("username", "password")
>>> for tweet in stream:
... print tweet
which outputs like:
{u'user': {u'follow_request_sent': None,
u'profile_use_background_image': True,
u'profile_background_image_url_https': u'https://si0.twimg.com/
profile_background_images/
181013334/25957_1367646636642_1395984493_31038644_61586_n.jpg',
u'verified': False, u'profile_image_url_https': u'https://
si0.twimg.com/profile_images/1820265868/rosajennifer_normal.jpg',
u'profile_sidebar_fill_color': u'DDEEF6', u'id': 46478005,
u'profile_text_color': u'333333', u'followers_count': 505,
u'protected': False, u'location': u'', u'default_profile_image':
False, u'listed_count': 4, u'utc_offset': -21600, u'statuses_count':
35923, u'description': u'Take me as I am or watch me as I go. . .\n
\n', u'friends_count': 315, u'profile_link_color': u'0084B4',
u'profile_image_url': u'http://a1.twimg.com/profile_images/1820265868/
rosajennifer_normal.jpg', u'notifications': None,
u'show_all_inline_media': True, u'geo_enabled': False,
u'profile_background_color': u'C0DEED', u'id_str': u'46478005',
u'profile_background_image_url': u'http://a2.twimg.com/
profile_background_images/
181013334/25957_1367646636642_1395984493_31038644_61586_n.jpg',
u'name': u'rosa jennifer', u'lang': u'en', u'following': None,
u'profile_background_tile': True, u'favourites_count': 82,
u'screen_name': u'rosajennifer', u'url': u'http://www.facebook.com/
profile.php?ref=profile&id=1329240058', u'created_at': u'Thu Jun 11
20:11:28 +0000 2009', u'contributors_enabled': False, u'time_zone':
u'Central Time (US & Canada)', u'profile_sidebar_border_color':
u'C0DEED', u'default_profile': False, u'is_translator': False},
u'favorited': False, u'contributors': None, u'entities':
{u'user_mentions': [{u'indices': [1, 14], u'id': 90939650, u'id_str':
u'90939650', u'name': u'Dajuan(Dae-John)', u'screen_name':
u'Juan_Ton5oup'}], u'hashtags': [], u'urls': []}, u'text':
u'\u201c@Juan_Ton5oup: Spanish girls love jeans with animals outlined
on the back pockets.\u201dfoh lmao', u'created_at': u'Tue Feb 14
00:27:32 +0000 2012', u'truncated': False, u'retweeted': False,
u'in_reply_to_status_id': None, u'coordinates': None, u'id':
169216166617817088, u'source': u'<a href="http://twitter.com/#!/
download/ipad" rel="nofollow">Twitter for iPad</a>',
u'in_reply_to_status_id_str': None, u'in_reply_to_screen_name': None,
u'id_str': u'169216166617817088', u'place': None, u'retweet_count': 0,
u'geo': None, u'in_reply_to_user_id_str': None,
u'in_reply_to_user_id': None}
I have a single file of ~1000 of these, each on a separate line. I've tried
mongoimport as well as a dozen other methods, but I can't seem to get this data
imported. Mongoimport passes back this error:
Sat Mar 10 12:51:00 Assertion: 10340:Failure parsing JSON string near:
u'user': {
0x581762 0x528994 0xaa29f3 0xaa4ca3 0xa9b7dd 0xa9f772 0x34df82169d
0x4fe679
mongoimport(_ZN5mongo11msgassertedEiPKc+0x112) [0x581762]
mongoimport(_ZN5mongo8fromjsonEPKcPi+0x444) [0x528994]
mongoimport(_ZN6Import8parseRowEPSiRN5mongo7BSONObjERi+0x8b3)
[0xaa29f3]
mongoimport(_ZN6Import3runEv+0x16e3) [0xaa4ca3]
mongoimport(_ZN5mongo4Tool4mainEiPPc+0x169d) [0xa9b7dd]
mongoimport(main+0x32) [0xa9f772]
/lib64/libc.so.6(__libc_start_main+0xed) [0x34df82169d]
mongoimport(__gxx_personality_v0+0x3d1) [0x4fe679]
exception:Failure parsing JSON string near: u'user': {
I assume this is because the string is not actual JSON; it's some sort of
JSON-like format.
Can anyone help?
Answer: The first problem, as you noted: the following is not valid JSON, it's a Python
dictionary repr: `{u'indices':`.
Second problem, why are you trying to use `mongoimport`? In python you can
just save the dictionary to the database. This is basically the first example
of how to use MongoDB.
>>> from pymongo import Connection
>>> connection = Connection('localhost', 27017)
>>> db = connection.test_database
>>> posts = db.posts
>>> stream = tweetstream.SampleStream("username", "password")
>>> for tweet in stream:
...     posts.insert(tweet)
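If you still need to load the ~1000 lines you already captured, one option (a sketch assuming each line of the file is a printed Python dict like the one above; the file name is hypothetical) is to parse each line with `ast.literal_eval` and insert it the same way:
import ast
from pymongo import Connection

posts = Connection('localhost', 27017).test_database.posts
with open('captured_tweets.txt') as f:  # hypothetical file name
    for line in f:
        line = line.strip()
        if line:
            # literal_eval safely parses the printed dict repr
            posts.insert(ast.literal_eval(line))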
|
argparse: usage reporting for individual choices of a positional argument?
Question: I'm trying to refactor a program to use the new standard argparse module.
Given the structure below, should I be using
[choices](http://docs.python.org/library/argparse.html#choices) for the
primary positional argument,
[subcommands](http://docs.python.org/library/argparse.html#sub-commands), or
some other method?
foo [OPTION]... COMMAND [PACKAGE(S)]...
Commands:
available - show packages available to be installed
install - download and install packages, including dependencies
remove - uninstall packages
...snip
version - display installed version of package
Options:
-d,--download download only
-i,--ini=FILE use alternate setup.ini
...snip
Choices example, concise & clear, but no usage help for each command:
parser = argparse.ArgumentParser()
commands = "available install remove ... version".split()
parser.add_argument('command', choices=commands,
help="generic help for all 12 commands goes here")
Choices is logically more in tune with the program, as what comes after the
command on the command line is largely the same for all of them, while
subcommands seems to want me to treat each command as a world mostly unto
itself and involves three times the code. However, there doesn't seem to be a
way to display usage for each choice in choices(?). Perhaps there's a third
route I'm not seeing?
I'm using python 2.7.
Answer: I'd go with the [sub-
commands](http://docs.python.org/library/argparse.html#sub-commands) option.
You won't necessarily need to code more: build command classes, each one
carrying its own settings (using inheritance where needed); in your `Main` CLI
class, instantiate the main parser and then hand it over to the commands, each
of which adds and configures its own
[subparser](http://docs.python.org/library/argparse.html#argparse.ArgumentParser.add_subparsers).
The answer is short, but if it's not clear, tell me and I'll add a code sample.
* * *
### Rough Example
You should build your commands as classes:
class Install(BaseCommand):
    help = "download and install packages"
    @classmethod
    def interface(cls, cmd_parser):
        cmd_parser.add_argument('--foo')
        cmd_parser.set_defaults(cmd=cls)  # this line is very important
    def start(self, foo=None):
        # command execution goes here
        pass
And your command line interface should be a class too:
class Main(BaseCli):
    def __init__(self):
        self.commands = [Install]  # just the command classes
        self._parser = argparse.ArgumentParser()
        self._subparsers = self._parser.add_subparsers()
    def load_interface(self):
        for cmd in self.commands:
            # add_parser() needs the sub-command name as its first argument
            cmd_parser = self.add_command_parser(cmd.__name__.lower(), help=cmd.help)
            cmd.interface(cmd_parser)
    def add_command_parser(self, *args, **kwargs):
        return self._subparsers.add_parser(*args, **kwargs)
    def parse_args(self, args=None, namespace=None):
        return self._parser.parse_args(args, namespace)
    def start_session(self, namespace):
        kwargs = dict(namespace.__dict__)
        # this will instantiate the appropriate command class
        cmd = kwargs.pop('cmd')()
        # and call its start method with the user args
        cmd.start(**kwargs)
To be used like this:
cli = Main()
cli.load_interface()
args = cli.parse_args()
cli.start_session(args)
Notice that with this approach you'll have all the power of inheritance by
your side! :)
|
Deferred_output for cyclical iteration in Python 2.x
Question: [Here](http://code.activestate.com/recipes/576961-technique-for-cyclical-
iteration/) I found a very good recipe for calculating a sequence of
numbers using a technique with **deferred output**.
I decided to use the same technique to solve the second problem from [Project
Euler](http://projecteuler.net/problems).
The code:
#Answer: 4613732
from itertools import *
import operator
seeds = [1,2]
def deferred_output():
for i in output:
yield i
result,a1,a2 = tee(deferred_output(),3)
paired = map(operator.add,a1,islice(a2,1,None))
output = chain(seeds,paired)
cropped = takewhile((4000000).__gt__,result)
evened = filter(lambda x: x%2==0,cropped)
print(sum(evened))
The code worked perfectly in Python 3.x
But when I tried to run it in Python 2.x, the following error arose:
Traceback (most recent call last):
File "C:\Documents and Settings\Oleg\Мои документы\_Мои документы\_SyncedWithFlashDrive\Программирование\Project Euler\2\1.py", line 14, in <module>
paired = map(operator.add,a1,islice(a2,1,None))
File "C:\Documents and Settings\Oleg\Мои документы\_Мои документы\_SyncedWithFlashDrive\Программирование\Project Euler\2\1.py", line 9, in deferred_output
for i in output:
NameError: global name 'output' is not defined
It means that **deferred output** doesn't work in Python 2.x
Why is that?
Answer: The following code works:
from itertools import *
import operator
seeds = [1,2]
def deffered_output():
for i in output:
yield i
result,a1,a2 = tee(deffered_output(),3)
paired = imap(operator.add,a1,islice(a2,1,None)) # change 2 (imap)
output = chain(seeds,paired)
cropped = takewhile(lambda x: x <= 4000000,result) # change 1 (lambda)
evened = filter(lambda x: x%2==0,cropped)
print(sum(evened))
and I needed to make two changes:
First, the argument to takewhile needs to be a lambda, because integers in 2.7
don't have methods like `__gt__`.
Second, and more importantly, `map()` in Python 3 is lazy - it returns a
generator that does the work later. In contrast, in Python 2.7, it is eager -
it does the work straight away and returns a list.
So, in Python 2.7, the `map()` triggers evaluation of the code, which calls
back through the various generators until it evaluates the `deffered_output()`
function. And this all occurs _before_ the line where `output` is defined. So
there is an error, because `output` is undefined.
However, in Python 3 (or when using `imap()` in Python 2.7) that line creates
another generator, which doesn't actually do the work until things are
evaluated in the `sum()` (and by that point, `output` is defined, so it's OK
for `deffered_output` to be evaluated).
If that's not clear, then you need to learn more about
[generators](http://wiki.python.org/moin/Generators) in Python.
PS: not important, but it's driving me crazy to look at it: it's "deferred",
not "deffered"!
|
Python importing a module: how to track its orgin PYTHONPATH, sys, os
Question: The module nosetests runs everywhere on my computer (it shouldn't; it should
only run in a few specified places). I guess this is because I have
accidentally added the nosetests module to the PYTHONPATH, either by putting it
directly in dist-packages or site-packages, or by telling Python to
look for it permanently every time.
I'm familiar with a few commands like find, import os, import sys and
PYTHONPATH, but I can't seem to find a way to track down the culprit directory
that's allowing this to happen.
Something like:
>>> find . -name "*nosetests*" -print
Any help would be great.
Answer: Let's look at this example:
>>> import itertools
>>> print itertools.__file__
/usr/lib/python2.7/lib-dynload/itertools.so
>>> import string
>>> print string.__file__
/usr/lib/python2.7/string.pyc
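Applying the same idea to the module in question (nosetests is provided by the nose package), something like this should point at the culprit directory, and `sys.path` shows every place Python searches:
>>> import nose
>>> print nose.__file__
/usr/lib/python2.7/dist-packages/nose/__init__.pyc   # example output
>>> import sys
>>> print sys.path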
|
Google App Engine e-mail and attachment extensions
Question: In Google App Engine for python can I send e-mails with attachments that have
no extension? What are the allowed extensions? Can I send zip files as
attachments?
Answer: No, you cannot send attachments with no extension. Last time I checked (SDK
1.6.3) all the extensions are allowed except the ones blacklisted in
from google.appengine.api import mail
mail.EXTENSION_BLACKLIST
I have also found out in practice that .zip files are not allowed although
.zip is not in that list as of 1.6.3.
This was first answered
[here](https://groups.google.com/forum/?fromgroups#!topic/google-appengine-
python/emfJkCuAzu4).
|
How to call a web-service using JavaEE?
Question: I've been using rpclib to auto-generate a WSDL and implement it in Python.
Then I wanted to call a web-service* that has this WSDL using JavaEE, so I
simply used the **Web Service from WSDL** option in the creation wizard in
Eclipse (Indigo 3.7.1 with OEPE), but then the Ant build failed with the
exception (in short):
weblogic.wsee.tools.WsBuildException Error running JAX-WS wsdlc
Caused by java.lang.NoSuchMethodException: javax.xml.bind.annotation.XmlElementRef.required()
What should I do? How can I call the web-service using JavaEE?
* The web service is configured with: Apache HTTP Server 2.2.2 + mod_wsgi 3.3 + Python 2.6.5 + rpclib 2.6.1.
Answer: OK, I stumbled upon your post a second time, so I'll elaborate on the comment
I gave before :).
First I recapitulate your set-up:
* You have a working webservice and a URL pointing to the corresponding WSDL
* You're trying to invoke the WS methods from a different Java EE project on a different machine
General options for invoking a WS:
1. Use [Dependency Injection](http://www.theserverside.com/news/1321158/A-beginners-guide-to-Dependency-Injection) to inject the WS reference
2. Create your own WS stubs
The first option won't work in your set-up because DI will only work in a
container-managed environment (see my comment). That means that the WS class
and the executing class have to be in the same container (e.g. the same
server).
So what is left is to generate your WS stubs manually. Therefore you can use
the `wsimport` tool mentioned in your own answer. There are several different
ways to use this tool. Lets have a look in the CLI use:
1. navigate to the project folder of the WS client used by your IDE: `%IDE_WORKSPACE%/your project/src`
2. create a new folder, e.g. `stub`
3. open a command window in this directory
4. execute the following command: `wsimport -keep <http://yourwsdl?wsdl>`
5. After a refresh you should see several created files
Back in your IDE:
Now you're able to use your generated stub-files to connect to the WS by
getting a `port` from the generated `service`-file
public class WsClient {
public static void main(String[] args) {
//Create Service
'GeneratedFile'Service service = new 'GeneratedFile'Service();
//create proxy
'GeneratedFile' proxy = service.get'GeneratedFile'Port();
//invoke
System.out.println(proxy.yourMethod(yourParam));
}
}
Last hints:
* For portability purposes, check the generated files. In their annotations the WSDL file is sometimes linked to a local copy. Just change this back to your WSDL-URL.
AFAIK there is an [option](http://jax-ws.java.net/2.1.5/docs/wsimport.html) in
the `wsimport` tool to set this directly in the import routine.
* There is a plugin for Eclipse called [soapUI](http://www.soapui.org/) which allows you to use the `wsimport` tool in a GUI out of Eclipse. Once set up it should accelerate your work.
* I've also found a [quick start guide](http://metro.java.net/2.1.1/guide/Developing_with_Eclipse.html) in developing WS clients with eclipse.
Hope this helped, have Fun!
**EDIT:** Just to clarify:

After you used the `wsimport` tool you should have a directory containing
files like shown in the image. To make this example clear you'll need to get a
_Service_ from the `RequestFileService` (this is my WS operation) like
`RequestFileService service = new RequestFileService();` and after this you'll
need a _Port_ on this service like `RequestFile proxy =
service.getRequestFilePort();`.
After this you can invoke your method calls by using the port
`proxy.yourMethod(yourParam);`
|
Need a good beginner's WSGI guide
Question: I have an almost complete simple web app written as a Python CGI script. I
would like to change it to use WSGI, but I can't find documentation that helps
me make sense of what WSGI actually is (one only repeatedly finds calls with
start_response etc. but there doesn't seem to be much explanation fo rwhat
these calls actually do). Can someone point me to a good explanation, plus a
how-to on using WSGI?
**Edit** : Should add that I've seen [this
question](http://stackoverflow.com/questions/219110/how-python-web-frameworks-
wsgi-and-cgi-fit-together) but the answers still don't seem to tell one how to
use WSGI in a direct script (as opposed to in a framework).
Answer: WSGI is [PEP 333](http://www.python.org/dev/peps/pep-0333/) (and
[PEP3333](http://www.python.org/dev/peps/pep-3333/) for Python 3), a.k.a. Web
Server Gateway Interface. It has three parts, but the part you're interested
in is how you write a WSGI application. A WSGI app is a callable object that
takes two arguments and returns an iterable object (or is a generator).
# this is my_app module
def app(environ, start_response):
# environ is dict-like object containing the WSGI environment
# refer to the PEP for details
# start_response is a callable that, well, starts the response
headers = [('Content-Type', 'text/plain; charset=utf-8')]
start_response('200 OK', headers)
return ["I'm a WSGI application.\n"]
To run the application, you need another part of WSGI, which is gateway. In
the standard library you'll find `wsgiref` package. It contains a CGI gateway:
#!/usr/bin/python
# this is a CGI script that runs a WSGI application inside CGI handler
from wsgiref.handlers import CGIHandler
from my_app import app
CGIHandler().run(app)
And also a simple HTTP server for development:
from wsgiref.simple_server import make_server
from my_app import app
httpd = make_server('localhost', 8000, app)
httpd.serve_forever()
As you can see, WSGI allows you to reuse your application in different
environments — CGI, SCGI, FastCGI, mod_wsgi, mod_python, etc., without
actually rewriting it.
The last part of WSGI is middleware — basically, it's a concept that allows
you to combine different WSGI applications. It forms sort of a sandwich —
request flows from the top (the gateway) to the bottom (which is usually your
application), with some intermediate layers in between, that might implement
stuff like database connection pooling or sessions. `wsgiref` contains one
such middleware — `wsgiref.validate.validator`, which checks whether layers
below and above it conforms to the rules of WSGI spec.
And that's basically it. Now go use a framework.
|
How to set SQLite PRAGMA statements with SQLAlchemy
Question: I would like SQLAlchemy to put the SQLite .journal file in-memory to speed up
performance. I have tried this:
sqlite_db_engine = create_engine('sqlite:///%s' % str(dbname), connect_args = {'PRAGMA journal_mode':'MEMORY', 'PRAGMA synchronous':'OFF', 'PRAGMA temp_store':'MEMORY', 'PRAGMA cache_size':'5000000'})
db = sqlite_db_engine.connect()
and this:
sqlite_db_engine = create_engine('sqlite:///%s' % str(dbname))
db = sqlite_db_engine.connect()
db.execute("PRAGMA journal_mode = MEMORY")
db.execute("PRAGMA synchronous = OFF")
db.execute("PRAGMA temp_store = MEMORY")
db.execute("PRAGMA cache_size = 500000")
With no luck. For long transactions I can still see the .journal file being
created on the disk. Is there another way to set this?
*note I have no problem doing it with the built-in python sqlite module
Answer: Basically you should be able to rewrite the examples about foreignkey to
achieve what you want. Take a look at
<http://stackoverflow.com/a/7831210/1890086>
engine = create_engine(database_url)

def _fk_pragma_on_connect(dbapi_con, con_record):
    dbapi_con.execute('PRAGMA journal_mode = MEMORY')
    dbapi_con.execute('PRAGMA synchronous = OFF')
    dbapi_con.execute('PRAGMA temp_store = MEMORY')
    dbapi_con.execute('PRAGMA cache_size = 500000')

from sqlalchemy import event
event.listen(engine, 'connect', _fk_pragma_on_connect)
|
Ignore unicode in xml with python and lxml?
Question: I'm looking to either ignore the Unicode within my XML, or somehow
change it in the processing of the output.
My Python:
import urllib2, os, zipfile
from lxml import etree
doc = etree.XML(item)
docID = "-".join(doc.xpath('//publication-reference/document-id/*/text()'))
target = doc.xpath('//references-cited/citation/nplcit/*/text()')
#target = '-'.join(target).replace('\n-','')
print "docID: {0}\nCitation: {1}\n".format(docID,target)
outFile.write(str(docID) +"|"+ str(target) +"\n")
Creates an output of:
docID: US-D0607176-S1-20100105
Citation: [u"\u201cThe birth of Lee Min Ho's donuts.\u201d Feb. 25, 2009. Jazzholic. Apr. 22, 2009 <http://www
But if I try to add back in the `'-'join(target).replace('\n-','')` I get this
error for both `print` and `outFile.write`:
Traceback (most recent call last):
File "C:\Documents and Settings\mine\Desktop\test_lxml.py", line 77, in <module>
print "docID: {0}\nCitation: {1}\n".format(docID,target)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 0: ordinal not in range(128)
How can I ignore the Unicode so I can write `target` out as a string with
`outFile.write`?
Answer: You are getting this error because you have a string with Unicode characters
that you are trying to output using the ASCII character set. When printing the
list, you are getting the 'repr' of the list, and the strings inside it,
avoiding the problem.
You need to either encode to a different character set (UTF-8 for instance), or
strip out or replace invalid characters when encoding.
I recommend reading Joels [The Absolute Minimum Every Software Developer
Absolutely, Positively Must Know About Unicode and Character Sets (No
Excuses!)](http://www.joelonsoftware.com/articles/Unicode.html), followed by
the relevant chapters on encoding and decoding strings in [the Python
docs](http://docs.python.org/library/stdtypes.html#str.encode).
Here's a small hint to get you started:
print "docID: {0}\nCitation: {1}\n".format(docID.encode("UTF-8"),
target.encode("UTF-8"))
|
tastypie -- OAuthAuthentication -- python-oauth2 import issue
Question: I am wishing to use OAuthAuthentication in tastypie. In my ModelResource, I
do: (showing only the relevant portion)
from tastypie.authentication import OAuthAuthentication

class FooResource(ModelResource):
    class Meta:
        authentication = OAuthAuthentication()
And I get an error saying: The 'python-oauth2' package could not be imported.
It is required for use with the 'OAuthAuthentication' class.
Before this, I did a pip install of python-oauth (below), but it still
raises the complaint above. Do I need to do anything else in my resource or
anywhere else to explicitly import this?
pip install -r http://code.daaku.org/python-oauth/reqs
Obtaining urlencoding from git+git://github.com/nshah/python-urlencoding.git#egg=urlencoding (from -r http://code.daaku.org/python-oauth/reqs (line 1))
Cloning git://github.com/nshah/python-urlencoding.git to ./src/urlencoding
Running setup.py egg_info for package urlencoding
Obtaining oauth from git+git://github.com/nshah/python-oauth.git#egg=oauth (from -r http://code.daaku.org/python-oauth/reqs (line 2))
Cloning git://github.com/nshah/python-oauth.git to ./src/oauth
Running setup.py egg_info for package oauth
Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/lib/python2.6/site-packages (from urlencoding->-r http://code.daaku.org/python-oauth/reqs (line 1))
Downloading/unpacking setuptools-git (from urlencoding->-r http://code.daaku.org/python-oauth/reqs (line 1))
Downloading setuptools-git-0.4.2.tar.gz
Running setup.py egg_info for package setuptools-git
Installing collected packages: urlencoding, oauth, setuptools-git
Running setup.py develop for urlencoding
Creating /usr/lib/python2.6/site-packages/urlencoding.egg-link (link to .)
Adding urlencoding 0.0.1 to easy-install.pth file
Installed /usr/lib/python2.6/site-packages/tastypie/src/urlencoding
Running setup.py develop for oauth
Creating /usr/lib/python2.6/site-packages/oauth.egg-link (link to .)
Adding oauth 0.0.1 to easy-install.pth file
Installed /usr/lib/python2.6/site-packages/tastypie/src/oauth
Running setup.py install for setuptools-git
Successfully installed urlencoding oauth setuptools-git
Cleaning up...
Answer: It seems that you're installing "python-oauth" instead of "python-oauth2". The
error clearly states that. To solve this issue, you should simply install
"python-oauth2" which is a different library from "python-oauth".
Try this:
pip install oauth2
|
Some questions on dendrogram - python (Scipy)
Question: I am new to scipy, but I managed to get the expected dendrogram. I have some more
questions:
1. In the dendrogram, the distance between some points is `0`, but it's not visible due to the image border. How can I remove the border and set the lower limit of the y-axis to `-1`, so that it is clearly visible? E.g. the distance between these points is `0`: (13,17), (2,10), (4,8,19)
2. How can I prune/truncate at a particular distance, e.g. prune at `0.4`?
3. How can I write these clusters (after pruning) to a file?
My python code:
import scipy
import pylab
import scipy.cluster.hierarchy as sch
import numpy as np
D = np.genfromtxt('LtoR.txt', dtype=None)
def llf(id):
return str(id)
fig = pylab.figure(figsize=(10,10))
Y = sch.linkage(D, method='single')
Z1 = sch.dendrogram(Y,leaf_label_func=llf,leaf_rotation=90)
fig.show()
fig.savefig('dendrogram.png')
Dendrogram: 
thank you.
Answer: 1. `fig.gca().set_ylim(-0.4, 1.2)`. Here `gca()` returns the current `Axes`
object, so you can give it a name:
ax=fig.gca()
ax.set_ylim(-0.4,ax.get_ylim()[1])
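For questions 2 and 3, which the original answer does not cover, a possible approach (a sketch using `scipy.cluster.hierarchy.fcluster`; the output file name is a placeholder) is to cut the linkage at a distance of 0.4 and write the resulting flat cluster labels to a file:
# cut the linkage matrix Y at distance 0.4; every leaf gets a cluster id
labels = sch.fcluster(Y, t=0.4, criterion='distance')

with open('clusters.txt', 'w') as out:
    for leaf_id, cluster_id in enumerate(labels):
        out.write("%d\t%d\n" % (leaf_id, cluster_id))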
|
OpenCv CreateImage Function isn't working
Question: I'm trying to create an image using OpenCV v2.1, but I get this error:
image=cv.CreateImage((w,h),no_of_bits,channels) AttributeError: 'module'
object has no attribute 'CreateImage'
The code is
#!/usr/bin/python
import cv
from opencv import *
from opencv.cv import *
from opencv.highgui import *
import sys
import PIL
w=500
h=500
no_of_bits=8
channels=3
image=cv.CreateImage((w,h),no_of_bits,channels)
cv.ShowImage('WindowName',image)
cvWaitKey()
Answer: Since there aren't that many good examples of how to create a new blank image
filled with a color using cv2, here's one:
**Create OpenCV image of certain (R, G, B) color:**
import cv2
import numpy as np
def create_blank(width, height, rgb_color=(0, 0, 0)):
"""Create new image(numpy array) filled with certain color in RGB"""
# Create black blank image
image = np.zeros((height, width, 3), np.uint8)
# Since OpenCV uses BGR, convert the color first
color = tuple(reversed(rgb_color))
# Fill image with color
image[:] = color
return image
# Create new blank 300x300 red image
width, height = 300, 300
red = (255, 0, 0)
image = create_blank(width, height, rgb_color=red)
cv2.imwrite('red.jpg', image)
|
ImportError: cannot import name cached_property
Question: I am trying to set up Tipfy extensions on a Python/Google App Engine project
that I've been brought in on. I have followed all the instructions for running
bootstrap.py and then bin/buildout. Everything seems to have worked but now
when I try to import tipfy.ext.auth, I get this error:
File "C:\{project}\app\lib\dist\tipfy\ext\auth\__init__.py", line 15, in <module>
from tipfy import abort, cached_property, import_string, redirect
ImportError: cannot import name cached_property
I'm a bit lost now and don't really know where to even look for the solution
to this as I am, admittedly, very new to both Python and GAE. Any help or
direction on how to get this extension working would be greatly appreciated.
Answer: Potential solution here:
<http://tipfy.googlecode.com/hg/docs/source/guide/migrating.rst>
tl;dr: Remove that import and import from werkzeug instead.
|
Python: function to fetch data always returns None
Question: I have this list of dictionaries:
cust = [
{"id": 1, "name": u"name 1", "bill_amount": 1000},
{"id": 2, "name": u"name 2", "bill_amount": 5000},
{"id": 3, "name": u"name 3", "bill_amount": 7600},
{"id": 4, "name": u"name 4", "bill_amount": 30}
]
And I want to get a list of just the names.
Trying this:
def getName(x): x["name"]
print map(getName, cust)
Returns this:
[None, None, None, None]
Why? Am I missing something obvious?
Answer: You could also use
[`operator.itemgetter()`](http://docs.python.org/library/operator.html#operator.itemgetter)
instead of defining your own function for this:
>>> import operator
>>> map(operator.itemgetter("name"), cust)
[u'name 1', u'name 2', u'name 3', u'name 4']
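As for the "why": `getName` never returns anything, so it implicitly returns `None` for every element. Adding a `return` fixes the original approach:
def getName(x):
    return x["name"]

print map(getName, cust)
# [u'name 1', u'name 2', u'name 3', u'name 4']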
|
Newbie Django: Creating a project with several apps or all in one
Question: I am a beginner in the Django world. I have developed some "information sites"
(nothing complicated), but this week my boss ordered me to migrate a
big piece of software that has 7 modules.
So I went to read the documentation and searched Google for how I could
design this software using Django. I know that every "module" can be named an
"app", so I created a new project and one app for every module (I don't know if
it was right, because the modules will not be public).
All my apps can share data (every app has its owns models but sometimes one
app has a model that was related to the models in other apps)?
Where do I write the code for the login process (I create a manageUsers app
that was thinked to handle the registration, edit, share and validate profile
of the current or new user ) and we can be able to share this session data
accross the apps?
I need one more app for put the website information (like contact, about,
pricing ...)? I use Python 2.7, Django 1.3, Memcached and Mysql 5.
If someone can help me or tell me where it may clarify these questions because
most explains how to develop using only one app and in the IRC got no reply or
else I must be write all the code in one app?
Best Regards
Answer: A good place to start (dated, but worth reading; look at user comment bubbles
too): <http://www.djangobook.com/en/2.0/> . Chapter 1 - 10 are essential
reading. You can pick-and-choose to read the remaining chapters, if desired.
Yes, all Django Apps can share data with one another. You make multiple Django
Application's, housed under a single Django Project. The Project sets up a
common database to use, and each Application creates Models which use said
database. App1 can talk to App2 and vice-versa.
Django Project (one) <----->> (many) Django Application
Typically you separate Apps based on common function. User accounts get their
own app (see Auth below). Blog postings get another. A Google Maps interface
will get another. User subscriptions, another.
For user accounts and login, Django provides the [Auth
Module](https://docs.djangoproject.com/en/1.3/topics/auth/). You can have user
accounts stored directly in Django, or configure it to talk to something else,
like Active Directory. Auth works "pretty good" out of the box, though I
personally customized mine a bit to allow 255-character email addresses as
usernames (by default, it limits to 40 characters). [Chapter
14](http://www.djangobook.com/en/2.0/chapter14/) in the Django book might be a
little easier to read than the official Auth docs. If you do use Auth, you
don't have to make your own Django Application, since Auth already is one! You
just install it in settings.py and you're golden.
Your Django structure will likely look something like this:
/Project/
__init__.py
manage.py
settings.py
urls.py
App1/
__init__.py
forms.py
models.py
views.py
templates/App1/
template1.html
template2.html
App2/
...
App2 can access the data-models of App1 by doing: `from Project.App1.models
import someModel`
|
More elegant way to create a list of 2D points in Python
Question: I need to create a list of 2D points (x,y) in python. This will do it
l = []
for x in range (30,50,5):
for y in range (1,10,3):
l.append((x,y))
So: `print l` will produce:
[(30, 1), (30, 4), (30, 7), (35, 1), (35, 4), (35, 7), (40, 1), (40, 4), (40, 7), (45, 1), (45, 4), (45, 7)]
Is there a more elegant way of doing this?
Answer: Use
[`itertools.product`](http://docs.python.org/library/itertools.html#itertools.product):
from itertools import product
l = list(product(range(30,50,5), range(1,10,3)))
It scales better and should be faster than a generator expression, list
comprehension, or explicit loops.
|
python pycurl giving for soap error of gnutls
Question: While trying pycurl for some URL:
def call_soap_curl(ncServerURL, xml, action):
c = pycurl.Curl()
c.setopt(pycurl.URL, ncServerURL)
c.setopt(pycurl.POST, 1)
c.setopt(pycurl.SSL_VERIFYPEER, 0)
c.setopt(pycurl.SSL_VERIFYHOST, 0)
header=["Content-type: text/xml","SOAPAction:"+action,'Content-Type: text/xml; charset=utf-8','Content-Length: '+str(len(xml))]
print header
c.setopt(pycurl.HTTPHEADER, header)
c.setopt(pycurl.POSTFIELDS, str(xml))
import StringIO
b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.perform()
ncServerData = b.getvalue()
return ncServerData
The error I'm getting is:
(56, 'GnuTLS recv error (-9): A TLS packet with unexpected length was received.')
at `c.perform()`.
Please suggest what the problem might be and how I can solve it. **I'm using Ubuntu, and
the same URL works with curl in PHP.**
This is my
pycurl.version_info()
(3, '7.21.6', 464134, 'x86_64-pc-linux-gnu', 17981, 'GnuTLS/2.10.5', 0, '1.2.3.4', ('dict', 'file', 'ftp', 'ftps', 'gopher', 'http', 'https', 'imap', 'imaps', 'ldap', 'pop3', 'pop3s', 'rtmp', 'rtsp', 'smtp', 'smtps', 'telnet', 'tftp'), None, 0, '1.22')
Answer: This seems to be a problem with libcurl compiled with GnuTLS.
Elsewhere I read:
_<https://bugs.launchpad.net/gwibber/+bug/626023/comments/28> was also
suggested, "The only way i managed to get rid of that error, was by
recompiling libcurl with OpenSSL instead of GnuTLS and then recompiling
python-pycurl against the new cURL library_
I recompiled libcurl with OpenSSL as suggested and the problem was solved for
me. I've included my recipe here for inspiration, though it's Ubuntu specific.
#!/bin/bash
sudo apt-get install libcurl4-openssl-dev
wget http://curl.haxx.se/download/curl-7.26.0.tar.gz
wget http://pycurl.sourceforge.net/download/pycurl-7.19.0.tar.gz
tar -xzvf pycurl-7.19.0.tar.gz
tar -xzvf curl-7.26.0.tar.gz
cd curl-7.26.0/
./configure --with-ssl
make
sudo make install
cd ../pycurl-7.19.0/
sudo python setup.py install
sudo ldconfig
|
Do I have to do StringIO.close()?
Question: Some code:
import cStringIO
def f():
buffer = cStringIO.StringIO()
buffer.write('something')
return buffer.getvalue()
The
[documentation](http://docs.python.org/library/stringio.html#StringIO.StringIO.close)
says:
> `StringIO.close()`: Free the memory buffer. Attempting to do further
> operations with a closed StringIO object will raise a ValueError.
Do I have to do `buffer.close()`, or it will happen automatically when buffer
goes out of scope and is garbage collected?
**UPDATE:**
I did a test:
import StringIO, weakref
def handler(ref):
print 'Buffer died!'
def f():
buffer = StringIO.StringIO()
ref = weakref.ref(buffer, handler)
buffer.write('something')
return buffer.getvalue()
print 'before f()'
f()
print 'after f()'
Result:
vic@wic:~/projects$ python test.py
before f()
Buffer died!
after f()
vic@wic:~/projects$
Answer: Generally it's still better to call `close()` or use the `with` statement,
because there may be some unexpected behaviour in special circumstances. For
example, the expat `IncrementalParser` seems to expect a file to be closed, or
it won't return the last tidbit of parsed XML until a timeout occurs in some
rare circumstances.
But for the `with` statement, which handles the closing for you, you have to
use the `StringIO` class from the `io` module, as stated in Ivc's comment.
This was a major headache in a legacy SAX-parser script, which we solved by
closing the StringIO manually.
The "out-of-scope" close didn't work. It just waited for the timeout limit.
|
Python function translate relative path ../../ into full path
Question: Is there a function that translates a relative path like ../test.txt into a full path?
example:
the full path is:
/Users/paganotti/Documents/Project/simple/work/paga/
the relative path is:
../test.txt
I want build this result:
/Users/paganotti/Documents/Project/simple/work/test.txt
As you can see, the ../ is resolved against the "work" folder to build the entire path.
Is there a general function that accomplishes this task?
Answer: use this:
import os
os.path.abspath('../test.txt')
[Documentation](http://docs.python.org/library/os.path.html)
|
Numpy array assignment
Question: I've written some very simple Python numpy code. It has a strange behavior...
from numpy import *
# generate 2 array with 15 random int between 1 and 50
pile = random.randint(1, 50, 15)
pile2 = copy(pile)
print("*** pile2",type(pile2),pile2)
print("tab with fixed values ")
tmp2=array([155,156,157,158,159])
print("tmp2",type(tmp2),tmp2)
pile2[:5]=tmp2
print("pile2",type(pile2),pile2)
print("*** pile",type(pile),pile)
print("flip a part of pile and put in an array")
tmp=pile[4::-1]
print("tmp",type(tmp),tmp)
pile[:5]=tmp
print("pile",type(pile),pile)
When I run this script, it return :
*** pile2 <class 'numpy.ndarray'> [20 23 29 31 8 29 2 44 46 17 11 47 29 43 10]
tab with fixed values
tmp2 <class 'numpy.ndarray'> [155 156 157 158 159]
pile2 <class 'numpy.ndarray'> [155 156 157 158 159 29 2 44 46 17 11 47 29 43 10]
OK! pile2 becomes something like "tmp2[] and pile2[6::]", but for the second...
*** pile <class 'numpy.ndarray'> [20 23 29 31 8 29 2 44 46 17 11 47 29 43 10]
flip a part of pile and put in an array
tmp <class 'numpy.ndarray'> [ 8 31 29 23 20]
pile <class 'numpy.ndarray'> [ 8 31 29 31 8 29 2 44 46 17 11 47 29 43 10]
tmp [ **8 31 29 23 20**]
pile [ **8 31 29** 31 8 29 2 44 46 17 11 47 29 43 10]
Oh! There is a problem with the assignment! What happened?
Answer: I can confirm the behaviour with numpy 1.3.0. I guess this is indeed an old
bug. And this:
pile[:5]=tmp.copy()
solves the issue.
|
Python -- how to loop over range starting at random point
Question: I have a list of categories (1-4) and I want a loop to iterate over all of
them. But I need to be able to start with a random category and wrap around to
the beginning of the list if my starting point was ahead of list[0].
I was able to do it in a rather verbose way, but I'm wondering if there's a
faster/more elegant way. Here's what I did (and it works):
def categorize(self, cat):
cats = [1,2,3,4]
if cat > 1:
ncats = cats[:(cat-1)]
cats = cats[(cat-1):]
cats.extend(ncats)
for c in cats:
pass
Answer:
from random import randrange
cats = [1,2,3,4]
i = randrange(len(cats))
for c in cats[i:]+cats[:i]:
pass
(Changed `choice` to `randrange` as per suggestion)
|
Using wxOverlay ontop of widgets?
Question: I have created a button in this rubber-bands example. The rubber bands do not
appear over the button; I would like the rubber bands to appear over the
button.
import wx
print wx.version()
class TestPanel(wx.Panel):
def __init__(self, *args, **kw):
wx.Panel.__init__(self, *args, **kw)
self.Bind(wx.EVT_PAINT, self.OnPaint)
self.Bind(wx.EVT_LEFT_DOWN, self.OnLeftDown)
self.Bind(wx.EVT_LEFT_UP, self.OnLeftUp)
self.Bind(wx.EVT_MOTION, self.OnMouseMove)
self.startPos = None
self.overlay = wx.Overlay()
self.b=wx.Button(self)
def OnPaint(self, evt):
# Just some simple stuff to paint in the window for an example
dc = wx.PaintDC(self)
coords = ((40,40),(200,220),(210,120),(120,300))
dc.SetBackground(wx.Brush("sky blue"))
dc.Clear()
dc.SetPen(wx.Pen("red", 2))
dc.SetBrush(wx.CYAN_BRUSH)
dc.DrawPolygon(coords)
dc.DrawLabel("Drag the mouse across this window to see \n"
"a rubber-band effect using wx.Overlay",
(140, 50, -1, -1))
def OnLeftDown(self, evt):
# Capture the mouse and save the starting posiiton for the
# rubber-band
self.CaptureMouse()
self.startPos = evt.GetPosition()
def OnMouseMove(self, evt):
if evt.Dragging() and evt.LeftIsDown():
rect = wx.RectPP(self.startPos, evt.GetPosition())
# Draw the rubber-band rectangle using an overlay so it
# will manage keeping the rectangle and the former window
# contents separate.
dc = wx.ClientDC(self)
#*** This won't work because wx.GCDC is not a wx.WindowDC
#dc = wx.GCDC(dc)
odc = wx.DCOverlay(self.overlay, dc)
odc.Clear()
#*** This crashes on wxMac
#dc = wx.GCDC(dc)
dc.SetPen(wx.Pen("black", 2))
if 'wxMac' in wx.PlatformInfo:
dc.SetBrush(wx.Brush(wx.Colour(0xC0, 0xC0, 0xC0, 0x80)))
else:
dc.SetBrush(wx.TRANSPARENT_BRUSH)
dc.DrawRectangleRect(rect)
del odc # work around a bug in the Python wrappers to make
# sure the odc is destroyed before the dc is.
def OnLeftUp(self, evt):
if self.HasCapture():
self.ReleaseMouse()
self.startPos = None
# When the mouse is released we reset the overlay and it
# restores the former content to the window.
dc = wx.ClientDC(self)
odc = wx.DCOverlay(self.overlay, dc)
odc.Clear()
del odc
self.overlay.Reset()
app = wx.App(redirect=False)
frm = wx.Frame(None, title="wx.Overlay Test", size=(450,450))
pnl = TestPanel(frm)
frm.Show()
app.MainLoop()
Answer: I think that currently it can only be done on OSX where the wx.Overlay is a
true overlay object. On the other platforms they are simply emulated using a
wx.ClientDC, which means that the drawing done to the overlay can only be sent
to the window the DC was created for.
|
Python regex, searching for prefixes inside a target string
Question: I need to find a **list of prefixes** of words inside a **target string** (I
would like to have the list of matching indexes in the target string handled
as an array).
* I think using **regex** should be the cleanest way.
* Given that I am looking for the pattern "foo", I would like to retrieve in the target string words like "foo", "Foo", "fooing", "Fooing"
* Given that I am looking for the pattern "foo bar", I would like to retrieve in the target string patterns like "foo bar", "Foo bar", "foo Bar", "foo baring" (they are still all handled as prefixes, am I right?)
At the moment, after running it in different scenarios, my Python code still
does not work.
* I am assuming I have to use **^** to match the **beginning of a word** in a target string (i.e. a prefix).
* I am assuming I have to use something like **^[fF]** to be **case insensitive with the first letter of my prefix**.
* I am assuming I should use something like **".*"** to let the regexp behave like a **prefix**.
* I am assuming I should use the **\prefix1|prefix2|prefix3** to put in **logic OR many different prefixes** in the pattern to search.
The following source code does not work because I am wrongly setting the
`txt_pattern`.
import re
# ' ' ' ' ' ' '
txt_str = "edb foooooo jkds Fooooooo kj fooing jdcnj Fooing ujndn ggng sxk foo baring sh foo Bar djw Foo";
txt_pattern = ''#???
out_obj = re.match(txt_pattern,txt_str)
if out_obj:
print "match!"
else:
print "No match!"
1. What am I missing?
2. How should I set the `txt_pattern`?
3. Can you please suggest me a good **tutorial** with **minimum working examples**? At the moment the standard tutorials from the first page of a Google search are very long and detailed, and not so simple to understand.
Thanks!
Answer: > I am assuming I have to use ^ to match the beginning of a word in a target
> string (i.e. a prefix).
No, the `^` is an anchor that only matches the start of the string. You can
use `\b` instead, meaning a word boundary (but remember to escape the
backslash inside a string literal, or use a raw string literal).
You will also have to use `re.search` instead of `re.match` because the latter
only checks the start of the string, whereas the former searches for matches
anywhere in the string.
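A minimal sketch for the single-word case, using the sample string from the question (the two-word patterns like "foo bar" would need a separate alternative in the pattern):
import re

txt_str = ("edb foooooo jkds Fooooooo kj fooing jdcnj Fooing ujndn "
           "ggng sxk foo baring sh foo Bar djw Foo")
# \b anchors at a word boundary, [fF] covers both capitalisations,
# \w* lets the word continue so "foo" behaves like a prefix.
pattern = r"\b[fF]oo\w*"
matches = [(m.start(), m.group()) for m in re.finditer(pattern, txt_str)]
print(matches)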
|
TypeError: cannot concatenate 'str' and 'builtin_function_or_method' objects in python
Question: I am a beginner at Python coding. This is my code, and I am not able to
resolve the error. Can someone suggest what the problem in the code actually is?
Thanks in advance.
import urllib2
username = '<YOUR USERNAME>'
password = '<YOUR PASSWORD>'
format = 'json' # json or xml
filename = 'archive.json' # filename of the archive
tweets = 164 # number of tweets
pages = (int(float(tweets)/float(80)))+1
auth = urllib2.HTTPPasswordMgrWithDefaultRealm()
auth.add_password(None, 'http://twitter.com/account/', username, password)
authHandler = urllib2.HTTPBasicAuthHandler(auth)
opener = urllib2.build_opener(authHandler)
urllib2.install_opener(opener)
i = 1
response = ''
print 'Downloading tweets. Note that this may take some time'
while i <= pages:
request = urllib2.Request('http://twitter.com/statuses/user_timeline/account.' \
+ format + '?page=' + str(i))
response = response + urllib2.urlopen(request).read()
i = i + 1
handle = open(filename,"w")
handle.write(response)
handle.close()
print 'Archived ' + str(tweets) + ' of ' + username + \
'\'s tweets to ' + filename
ERROR is like this below:
**Traceback (most recent call last):
File "<pyshell#14>", line 3, in <module>
+ format + '?page=' + str(i))
TypeError: cannot concatenate 'str' and 'builtin_function_or_method' objects**
Answer: `format` is a built-in function. The error you quote is exactly what you get
if you try to use `+` with the built-in `format` and a string.
Your earlier assignment `format = 'json'` should have shadowed the built-in
function. But your error trace indicates that you're running this from some
sort of shell, and not actually executing the code as you have posted it. So
without knowing what exactly is executing, my guess is your assignment to
`format` isn't in effect, for whatever reason.
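A simple way to sidestep the problem, whatever the shell is doing, is to avoid the built-in name entirely; a small sketch:
fmt = 'json'  # renamed so the built-in format() can never be picked up instead
request_url = ('http://twitter.com/statuses/user_timeline/account.'
               + fmt + '?page=' + str(1))
print(request_url)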
|
Import Python file from within executing script
Question: I am attempting to import a Python file (called test.py, residing in the
parent directory) from within the currently executing Python file (I'll call it
a.py). All of the directories involved contain a file called __init__.py.
**My Problem:** When I attempt to import the desired file I get the following
error
> Attempted relative import in non-package
My code inside a.py:
try:
from .linkIO can_follow # error occurs here
except Exception,e:
print e
print success
Note: I know that if I were to create a file called b.py and import a.py (which in
itself imports the desired Python file) it all works, so what's going wrong?
For eg:
b.py:
import a
print "success 2"
Answer: As stated in [PEP 328](http://www.python.org/dev/peps/pep-0328/) all imports
must be absolute to prevent modules from masking each other. Absolute means the
module/package must be on the module path `sys.path`. Relative imports (that's
what the dot is for) are only allowed within a package, i.e. when modules from
the same package want to import each other.
So this leave you with following possibilities:
1. You make a package (which you seem to have made already) and add the package path to sys.path
2. you just adjust sys.path for each module
3. you put all your custom modules into the same directory as the start-script/main-application
for 1. and 2. you may add a package/module to sys.path like this:
import sys
from os.path import dirname, join
sys.path.append(dirname(__file__)) #package-root-directory
or
module_dir = 'mymodules'
sys.path.append(join(dirname(__file__), module_dir)) # in the main-file
BTW:
from .linkIO can_follow
can't work! The `import` statement is missing!
As a reminder: if using relative imports you MUST use the from-version: `from
.relmodule import xyz`. An `import .XYZ` without the `from` isn't allowed!
|
python not detecting changes if importing from root
Question: For some reason, my changes aren't reflected if I import a class relative to
the root. Here's an example:
root/__init__.py
subdir/__init__.py
bar.py
If I cd to subdir and do:
>>> from bar import baz
>>> dir(baz)
This reflects my changes and shows the method I added to baz
However, if I do:
>>> from subdir.bar import baz
>>> dir(baz)
This does NOT reflect my changes
I've deleted all .pyc files in this project. This is driving me nuts!!
Answer: What Andreas said in the comments fixed it:
"Have you checked your PYTHONPATH? Maybe there is somewhere an old version
hanging around..."
|
Glade catalog picks wrong version of glib module
Question: On Fedora 16, I have a catalog library of widgets that I wish to load into
glade. Normally, this should be easy but since I have different versions of
glib and gobject installed, the following error occurs:
; GLADE_CATALOG_PATH=./Components GLADE_MODULE_PATH=. glade fubar.glade
(glade:25069): GladeUI-PYTHON-WARNING **: Error initializing Python interpreter: could not import pygobject
(glade:25069): GladeUI-PYTHON-WARNING **: Unable to load pygobject module >= 2.90.0, please make sure it is in python's path (sys.path). (use PYTHONPATH env variable to specify non default paths)
could not import gobject (version mismatch, 2.90.0 is required, found 3.0.3)
zsh: segmentation fault (core dumped) GLADE_CATALOG_PATH=./Components GLADE_MODULE_PATH=. glade
Is there a way to force a version of gobject? Currently, I have this
installed:
; yum list installed | grep pygobject
pygobject2.x86_64 2.28.6-2.fc16 @anaconda-0
pygobject2-codegen.x86_64 2.28.6-2.fc16 @fedora
pygobject2-devel.x86_64 2.28.6-2.fc16 @fedora
pygobject2-doc.x86_64 2.28.6-2.fc16 @fedora
pygobject3.x86_64 3.0.3-1.fc16 @updates
Answer: I ran into this issue as well. The problem is that the version check is wrong;
_pygobject3_ is just fine for the glade Python plugin. Patch is here:
<https://bugzilla.gnome.org/show_bug.cgi?id=706304>. I don't know any fix
other than recompiling glade, though.
|
Setting a timeout for mechanize.Browser
Question: I was perusing the question posted here:
[What should I do if socket.setdefaulttimeout() is not
working?](http://stackoverflow.com/questions/8464391/what-should-i-do-if-
socket-setdefaulttimeout-is-not-working)
to try and come up with a solution to kill requests when my
`mechanize.Browser` object is taking too long, and I have been experimenting
with the first solution in tomasz's edit (reposted here for clarity):
import signal, time
def request(arg):
"""Your http request"""
time.sleep(2)
return arg
class Timeout():
"""Timeout class using ALARM signal"""
class Timeout(Exception): pass
def __init__(self, sec):
self.sec = sec
def __enter__(self):
signal.signal(signal.SIGALRM, self.raise_timeout)
signal.alarm(self.sec)
def __exit__(self, *args):
signal.alarm(0) # disable alarm
def raise_timeout(self, *args):
raise Timeout.Timeout()
# Run block of code with timeouts
try:
with Timeout(3):
print request("Request 1")
with Timeout(1):
print request("Request 2")
except Timeout.Timeout:
print "Timeout"
# Prints "Request 1" and "Timeout"
When I run this from my terminal using `python timeout.py` (the version is `Python
2.7.2+` and I am on Ubuntu 11.10 Oneiric Ocelot), there is no exception
thrown; instead it simply prints
Request 1
Request 2
Could someone please explain how to fix this? An explanation of what's going
on with those `signal.alarm` and `signal.signal` calls would be awesome too.
Thank you very much for your time!
EDIT:
Running `strace -f python timeout.py` yields:
alarm(3) = 0
select(0, NULL, NULL, NULL, {2, 0}) = 0 (Timeout)
fstat64(1, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb740c000
alarm(0) = 1
rt_sigaction(SIGALRM, {0x812f450, [], 0}, {0x812f450, [], 0}, 8) = 0
alarm(1) = 0
select(0, NULL, NULL, NULL, {2, 0}) = 0 (Timeout)
alarm(0) = 0
rt_sigaction(SIGINT, {SIG_DFL, [], 0}, {0x812f450, [], 0}, 8) = 0
rt_sigaction(SIGALRM, {SIG_DFL, [], 0}, {0x812f450, [], 0}, 8) = 0
write(1, "Request 1\nRequest 2\n", 20) = 20
exit_group(0) = ?
Answer: If you want to know what happens here, please try:
$ strace -f python timeout.py
For me (running Debian 6 with python 2.6) this works. strace output of the
important part:
alarm(3) = 0
select(0, NULL, NULL, NULL, {2, 0}) = 0 (Timeout)
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 15), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0fbbe06000
write(1, "Request 1\n", 10Request 1) = 10
alarm(0) = 1
rt_sigaction(SIGALRM, {0x4d0a90, [], SA_RESTORER, 0x7f0fbb9deff0}, {0x4d0a90, [], SA_RESTORER, 0x7f0fbb9deff0}, 8) = 0
alarm(1) = 0
select(0, NULL, NULL, NULL, {2, 0}) = ? ERESTARTNOHAND (To be restarted)
--- SIGALRM (Alarm clock) @ 0 (0) ---
rt_sigreturn(0xffffffff) = -1 EINTR (Interrupted system call)
alarm(0) = 0
write(1, "Timeout\n", 8Timeout
) = 8
alarm(3) is called; Request 1 comes through; alarm(1) is called which gives a
Timeout.
|
NameError when using "self" in class?
Question: I have the following Python (3.2) code:
from pygame import *
class Application:
def __init__(self):
self.running = True
self.display_surface = None
self.size = self.width, self.height = 640, 480
self.old_ticks = pygame.time.get_ticks
self.new_ticks = None
pygame.init()
self.display_surface = pygame.display.set_mode(self.size, pygame.HWSURFACE | pygame.DOUBLEBUF)
def on_event(self, event):
if event.type == pygame.QUIT:
self.running = False
def on_loop(self):
pass
def on_render(self):
pass
def on_cleanup(self):
pygame.quit()
def regulate_time(self):
self.new_ticks = pygame.time.get_ticks
while (self.new_ticks < self.old_ticks + 1000):
pass
self.old_ticks = self.new_ticks
def load_images(self):
pass
while(self.running == True):
for event in pygame.event.get():
self.on_event(event)
self.regulate_time
self.on_loop()
self.on_render()
self.on_cleanup()
test = Application
I'm having a problem with the following line:
while(self.running == True):
which throws me the error : `NameError: Name "self" is not defined.`
I am pretty much a python newbie and was hoping to use this time to get
started on learning python and pygame which I could then use for my college
project (two birds with one stone) and I cannot figure out why this error is
being thrown at me.
Answer: Well, the `while(self.running == True)` is _not_ in any method (`def`) so
there is no such _variable_ called `self` in scope (which is what the
`NameError` says)...
...perhaps there is some missing indentation? Although the previous `pass`
makes it look like more than this is missing: maybe `def run(self):`?
Remember, `self` is just the conventional (and proper) name given to the
_first_ parameter for a method by which the "current instance" is implicitly
passed.
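For instance, a minimal sketch of how the tail of the class might be restructured (the method name run and the explicit instance call are assumptions, not the original code):
    def run(self):
        while self.running:
            for event in pygame.event.get():
                self.on_event(event)
            self.regulate_time()   # note the parentheses: actually call the method
            self.on_loop()
            self.on_render()
        self.on_cleanup()

test = Application()   # parentheses here too, to actually create an instance
test.run()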
Happy coding.
|
Ye olde UnicodeEncodeError printing results from a query on MS SQL with adodbapi
Question: Python novice here.
I am using python2.7.2 on Windows7.
I have installed the PyWin32 extensions (build 217).
I have adodbapi installed in `c:\Python27\Lib\site-packages\adodbapi`
I have a very simple module that queries the AdventureWorks2008LT database in
MS SQL Server.
import adodbapi
connStr='Provider=SQLOLEDB.1;' \
'Integrated Security=SSPI;' \
'Persist Security Info=False;' \
'Initial Catalog=AVWKS2008LT;' \
'Data Source=.\\SQLEXPRESS'
conn = adodbapi.connect(connStr)
tablename = "[salesLT].[Customer]"
# create a cursor
cur = conn.cursor()
# extract all the data
sql = "select * from %s" % tablename
cur.execute(sql)
# show the result
result = cur.fetchall()
for item in result:
print item
# close the cursor and connection
cur.close()
conn.close()
The AdventureWorks2008LT sample database has customer, product, address, and
order tables (etc). Some of the string data in these tables is unicode.
The query works, _for the first couple rows_. I see the expected output. But
then, the script fails with this message:
Traceback (most recent call last):
File "C:\dev\python\query-1.py", line 24, in <module>
print item
File "C:\Python27\lib\site-packages\adodbapi\adodbapi.py", line 651, in __str__
return str(tuple([str(self._getValue(i)) for i in range(len(self.rows.converters))]))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 19: ordinal not in range(128)
...which is very much not helpful. To me.
I gather that adodbapi is trying to encode a u'\xe9' character into ASCII. I
understand why that will fail. I suppose it's trying to do that as part of the
`print` statement.
Why is it trying to encode the character into ASCII?
How can I tell it to just use UTF-8?
ps: I am running the script from the cmd.exe prompt in Windows. Does this mean
stdout is always ASCII?
eg, `\python27\python.exe -c "import sys; print(sys.stdout.encoding)"`
gives me 'cp437'
Answer: I was able to get the script to run to completion, printing all retrieved
rows, by modifying the output portion to do this:
# show the result
result = cur.fetchall()
for item in result:
print repr(item)
instead of this:
# show the result
result = cur.fetchall()
for item in result:
print item
So the problem is in fact the use of `str` within adodbapi as Borealid said in
a comment. But that is not necessarily a blocking problem. Normally when
retrieving rows from a database query, people don't simply want a string
representation of a row; they want to retrieve the values in the individual
columns. My conclusion is that this problem is sort of an artificial problem,
due to the way I was building a test app.
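If you do want readable console output rather than repr(), another option is to pull out the individual column values and encode them explicitly for the terminal; a sketch, assuming Python 2 and UTF-8 output as an example:
# show the result, encoding each value explicitly instead of relying on str(row)
result = cur.fetchall()
for item in result:
    # 'replace' avoids crashes even if the console encoding cannot show a character
    print(u', '.join(unicode(v) for v in item).encode('utf-8', 'replace'))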
|
Nested for loop in python
Question: I'm trying to learn Python scripting. Being somebody who started coding
in C/C++ or Java, I'm finding it extremely hard to work out how to write loops in
Python, especially **for** loops with conditions.
I have a list A of strings, and I need to do a specific operation on these strings a pair
at a time, say XOR of the strings. Also xor(a,b) = xor(b,a), hence I need to skip
the redundant pairs while looping.
In traditional lang i would do something like
for(i=0;i<len;i++){
for(j=i+1;j<len;j++){
res[count]=xor(a[i],a[j])
count++;
}
}
So how do I implement the same in Python? I could think of iterators, but is
there a more efficient way, something very obvious eluding my eyes?
Answer: Python comes with [batteries
included](http://docs.python.org/tutorial/stdlib.html#batteries-included),
that is, most of the stuff like this is already written for you. If you want
combinations of strings, there is a [dedicated
function](http://docs.python.org/library/itertools.html#itertools.combinations)
for that:
import itertools
result = []
for pair in itertools.combinations(a, 2):
result.append(xor(pair[0], pair[1]))
or simply:
result = [xor(*p) for p in itertools.combinations(a, 2)]
|
Should I get results in real time while my webserver is running if I haven't done any html for a google app engine web app?
Question: When I run it against localhost, it shouldn't show anything if I haven't made
any HTML, right? Or is it because of some kind of database issue? I checked the
page source and there is nothing there, like it should be, but I'm not entirely
certain about the specification of GET requests and how they work in this case,
despite the fact that the handler calls response.write, etc. I'm thinking that it
shouldn't show anything on localhost because it's a web app now utilizing the
webapp framework, right? By the way, I auto-generated the beginning part of the
project using the PyCharm IDE.
I realize this may be a dumb question, but I'm a newb. This is my main.py
file:
file:
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class MainPage(webapp.RequestHandler):
def get(self):
self.response.headers['Content-Type'] = 'text/plain'
self.response.out.write('Hello, webapp World!')
application = webapp.WSGIApplication(
[('/', MainPage)],
debug=True)
def main():
run_wsgi_app(application)
if __name__ == "__main__":
main()
Here is my app.yaml file:
application: helloworld
version: 1
runtime: python
api_version: 1
handlers:
- url: /.*
script: helloworld.py
Answer: The intent of the code is to show `'Hello, webapp World!'` then you request
`/` url on the server. So if the code doesn't contain any errors then you
should get results both on localhost and on appengine (if you deploy it).
|
Python for ios interpreter
Question: > **Possible Duplicate:**
> [Python or Ruby Interpreter on
> iOS](http://stackoverflow.com/questions/4772591/python-or-ruby-interpreter-
> on-ios)
I just discovered these apps:
[pypad](http://users.on.net/~jon.dowdall/pypad/index.html) and [Python for
iOS](http://pythonforios.com/)
They each have an interpreter and an editor.
So which app would you recommend?
But most importantly, how does this interpreter work, and where can I see an
example of how the Objective-C and Python get to work together?
Thanks!
Answer: I am the sole creator of [Python for iOS](http://pythonforios.com) so that is
of course what I would recommend, but a good indicator for your personal
decision is the reviews & ratings of each App. It took me weeks to figure out
how to properly integrate python into Objective-c for this App but here is the
best resource to get you started (keep in mind that ObjC is just a superset of
C):
<http://docs.python.org/c-api/>
* * *
Also, here is an example of calling a function defined in `myModule`. The
equivalent Python would be:
import myModule
pValue = myModule.doSomething()
print pValue
In Objective-c:
#include <Python.h>
- (void)example {
PyObject *pName, *pModule, *pDict, *pFunc, *pArgs, *pValue;
NSString *nsString;
// Initialize the Python Interpreter
Py_Initialize();
// Build the name object
pName = PyString_FromString("myModule");
// Load the module object
pModule = PyImport_Import(pName);
// pDict is a borrowed reference
pDict = PyModule_GetDict(pModule);
// pFunc is also a borrowed reference
pFunc = PyDict_GetItemString(pDict, "doSomething");
if (PyCallable_Check(pFunc)) {
pValue = PyObject_CallObject(pFunc, NULL);
if (pValue != NULL) {
if (PyObject_IsInstance(pValue, (PyObject *)&PyUnicode_Type)) {
nsString = [NSString stringWithCharacters:((PyUnicodeObject *)pValue)->str length:((PyUnicodeObject *) pValue)->length];
} else if (PyObject_IsInstance(pValue, (PyObject *)&PyBytes_Type)) {
nsString = [NSString stringWithUTF8String:((PyBytesObject *)pValue)->ob_sval];
} else {
/* Handle a return value that is neither a PyUnicode_Type nor a PyBytes_Type */
}
Py_XDECREF(pValue);
} else {
PyErr_Print();
}
} else {
PyErr_Print();
}
// Clean up
Py_XDECREF(pModule);
Py_XDECREF(pName);
// Finish the Python Interpreter
Py_Finalize();
NSLog(@"%@", nsString);
}
For much more documentation check out: [Extending and Embedding the Python
Interpreter](http://docs.python.org/extending/)
|
How to convert integer into date object python?
Question: I am creating a module in python, in which I am receiving the date in integer
format like `20120213`, which signifies the 13th of Feb, 2012. Now, I want to
convert this integer formatted date into a python date object.
Also, is there any means by which I can subtract/add a number of days to such an
integer-formatted date and receive the date value in the same format? Like
subtracting 30 days from `20120213` and receiving the answer as `20120114`?
Answer: This question is already answered, but for the benefit of others looking at
this question I'd like to add the following suggestion: Instead of doing the
slicing yourself as suggested above you might also use strptime() which is
(IMHO) easier to read and perhaps the preferred way to do this conversion.
import datetime
s = "20120213"
s_datetime = datetime.datetime.strptime(s, '%Y%m%d')
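For the second part of the question (adding or subtracting days and getting the result back in the same integer format), a minimal sketch:
import datetime

d = datetime.datetime.strptime(str(20120213), '%Y%m%d')
earlier = d - datetime.timedelta(days=30)
print(int(earlier.strftime('%Y%m%d')))   # 20120114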
|
Encoding a 128-bit integer in Python?
Question: Inspired by the "encoding scheme" of the answer to [this
question](http://stackoverflow.com/questions/742013/how-to-code-a-url-
shortener), I implemented my own encoding algorithm in Python.
Here is what it looks like:
import random
from math import pow
from string import ascii_letters, digits
# RFC 2396 unreserved URI characters
unreserved = '-_.!~*\'()'
characters = ascii_letters + digits + unreserved
size = len(characters)
seq = range(0,size)
# Seed random generator with same randomly generated number
random.seed(914576904)
random.shuffle(seq)
dictionary = dict(zip(seq, characters))
reverse_dictionary = dict((v,k) for k,v in dictionary.iteritems())
def encode(n):
d = []
n = n
while n > 0:
qr = divmod(n, size)
n = qr[0]
d.append(qr[1])
chars = ''
for i in d:
chars += dictionary[i]
return chars
def decode(str):
d = []
for c in str:
d.append(reverse_dictionary[c])
value = 0
for i in range(0, len(d)):
value += d[i] * pow(size, i)
return value
The issue I'm running into is encoding and decoding very large integers. For
example, this is how a large number is currently encoded and decoded:
s = encode(88291326719355847026813766449910520462)
# print s -> "3_r(AUqqMvPRkf~JXaWj8"
i = decode(s)
# print i -> "8.82913267194e+37"
# print long(i) -> "88291326719355843047833376688611262464"
The highest 16 places match up perfectly, but after those the number deviates
from its original.
I assume this is a problem with the precision of extremely large integers when
dividing in Python. Is there any way to circumvent this problem? Or is there
another issue that I'm not aware of?
Answer: The problem lies within this line:
value += d[i] * pow(size, i)
It seems like you're using `math.pow` here instead of the built-in `pow`
method. It returns a floating point number, so you lose accuracy for your
large numbers. You should use the built-in `pow` or the `**` operator or, even
better, keep the current power of the base in an integer variable:
def decode(s):
d = [reverse_dictionary[c] for c in s]
result, power = 0, 1
for x in d:
result += x * power
power *= size
return result
It gives me the following result now:
print decode(encode(88291326719355847026813766449910520462))
# => 88291326719355847026813766449910520462
|
Python Parameters (beginner)
Question: I need help with parameters. Do both of these function definitions do the
exact same thing for print_twice?
def print_twice(lol):
print lol
print lol
def print_twice(michael):
print michael
print michael
If yes, then I'm guessing the word used for the parameter doesn't matter,
correct?
Answer: The word we use for the parameter **does matter**. It is important that the
word you use:
* is meaningful and clearly explains what the argument is for,
* does not override some variable name from the external scope.
## Importance of meaningful arguments' names
The name you use for argument is important, because the names of the
arguments, their default values and the function name are the things
developers using your function first see, even without the need to look into
function documentation (eg. by using `help(your_function)`). Just use IDLE to
define your function and then try to use it - when writing it, the IDLE will
show you possible arguments.
So please, give them meaningful names that will make using your function
easier and will not require looking into the documentation.
## Overriding variables from outer scopes
When it comes to the second point, just look at this example:
def show_elements(elements):
"""Shows elements of the passed argument
"""
for element in elements:
print element
which works ok, but if you replace `elements` with eg. `list`, you will
override `list` within this specific scope:
def show_elements(list):
"""Shows elements of the passed argument
"""
for element in list:
print element
and then if you would like to use `list` eg. for building a list, or
converting from other type into list, then you will have problems. `list` is a
builtin and you should not override it. Similar is true also about the other
variables from the scopes surrounding the function.
Historically, when Python resolved variable names by looking first into the local
scope, then the global and builtin scopes, skipping all nonlocal ones (e.g. the
scope of the function in which our function was defined), enclosing-scope
variables were passed this way:
def funca():
local_val1 = 'some value1'
local_val2 = 'some value2'
def funcb(local_val1=local_val1):
# local_val1 is accessible here, even though local_val2 is not
...
...
But since the above is no longer true, you will need to take surrounding
scopes into account, thus **using non-conflicting name is important**.
|
How to fix default Python path after installing Virtualenv
Question: I installed virtualenv via `sudo pip install virtualenv` and created some
environments. But now I'm finding my default "global" Python path is
completely gone.
In a fresh terminal, I see output like:
user@localhost:~$ sudo pip install django
Requirement already satisfied (use --upgrade to upgrade): django in /usr/local/lib/python2.7/dist-packages
Cleaning up...
user@localhost:~$ ls /usr/local/lib/python2.7/dist-packages/django
bin contrib db forms __init__.py middleware template test views
conf core dispatch http __init__.pyc shortcuts templatetags utils
user@localhost:~$ python
Python 2.7.2+ (default, Oct 4 2011, 20:06:09)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named django
>>>
What's going on here? How do I fix my global Python installation so that it
can see its installed packages?
Answer: This problem was my own fault. I had accidentally run `virtualenv --no-site-
packages .` in my home directory, creating folders like ~/bin, ~/local, ~/lib,
and Python was looking for its packages there, where there were none. Deleting
these directories fixed the problem.
|
Python Project Issue: Import from parent folder or change project architecture
Question: I have a problem with my Python project. My project parses different recipe
websites for each recipe and writes the recipe data to a text file.
**My Problem:** I cannot import a python file/module that resides in the
parent directory. This file contains generic functions that parse webpages. I
get an error when I do so: "ValueError: Attempted relative import in non-
package"
**My solution:** I don't know what my solution is. I am thinking I either need
to find a way to import a file from the parent directory or to change my
project architecture so every domain parsing script sits in the same directory.
**Project Architecture:**
Test Folder:
__init__.py
UtilityFunctions.py # the file I am importing
PerformAllIndexing.py
all_recipes:
__init__.py
index.py # imports ../UtilityFunctions.py
simply_recipes:
__init__.py
index.py # imports ../UtilityFunctions.py
Each subdirectory contains a file called _index.py_ that can extract recipe
name, ingredients and directions for a specific website and writes it to a
text file.
I encourage you to download my project to get a detailed understanding of my
problem: <http://www.mediafire.com/?ynup22oe8ofam21>
_Can you tell me what I can do to import a Python module from a parent
directory, or how I can change my project architecture to make this work
without repeating code (the code residing in UtilityFunctions.py)?_
Answer: Just add the Test Folder to the python sys.path so you can import it with no
problems.
import sys
# Add the Test Folder path to the sys.path list
sys.path.append('/path/to/Test Folder/')
# Now the module can be imported directly
import UtilityFunctions
|
Python: Read HTML source from URL and get date into program
Question: I'm a beginner at Python and I want to read info from a site and get some of
the data as output in my textbox (I use EasyGUI). I have found this to get the
HTML source of a URL but now I want to work with the HTML output, I know how
to work with XML and I guess it's a bit the same for HTML. Is there any way to
work with the elements and attributes?
filehandle = urllib.urlopen('URL')
for lines in filehandle.readlines():
print lines
filehandle.close()
thanks in advance
Answer: As suggested, Beautiful soup is a library that can help you.
<http://www.crummy.com/software/BeautifulSoup/bs3/download/2.x/documentation.html>,
shows a straightforward example.
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(filehandle.read())
titleTag = soup.html.head.title
Python has a built in parser too.
<http://docs.python.org/library/htmlparser.html>
BeautifulSoup is very good at handling broken html though.
|
In Python, when a function doesn't need an object prefix?
Question: In `sorted(list(mydict.keys()))`, `sorted` and `list` don't need an object
prefix `someobject.`, but `keys()` needs `mydict.`. When, or for what
functions, is the prefix necessary?
Answer: _Methods_ need to be called on a specific object. _Functions_ don't.
The functions that are available at any time are the built-in ones, such as
`sorted` and `list`, plus any functions that are in modules that you've
imported or that you've defined yourself. The methods that are available on a
particular object are the ones that are defined on that object's type.
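A small illustration (a sketch):
numbers = [3, 1, 2]
print(sorted(numbers))         # built-in function: no object prefix needed
print(numbers.index(2))        # list method: must be called on a list instance
print(list.index(numbers, 2))  # same method, looked up on the class explicitly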
|
How to get a list of variables in specific Python module?
Question: Let's assume I have the following file structure:
**data.py**
foo = []
bar = []
abc = "def"
**core.py**
import data
# do something here #
# a = ...
print a
# ['foo', 'bar', 'abc']
I need to get all the variables defined in data.py file. How can I achieve
that? I could use `dir()`, but it returns all the attributes of the module
including `__name__` and so on.
Answer:
print [item for item in dir(data) if not item.startswith("__")]
is usually the recipe for doing this, but it begs the question:
# Why?
|
How to check if an element of a list is a list (in Python)?
Question: If we have the following list:
list = ['UMM', 'Uma', ['Ulaster','Ulter']]
If I need to find out if an element in the list is itself a list, what can I
replace _aValidList_ in the following code with?
for e in list:
if e == aValidList:
return True
Is there a special import to use? Is there a best way of checking if a
variable/element is a list?
Answer: Use [`isinstance`](http://docs.python.org/library/functions.html#isinstance):
if isinstance(e, list):
If you want to check that an object is a list or a tuple, pass several classes
to `isinstance`:
if isinstance(e, (list, tuple)):
|
time.sleep requires integers?
Question: I'm writing a macro that will click certain spots on the screen when I press a
key.
The first time I press a key, everything runs fine.
However, any other key press results in the error:
time.sleep(0.1)
TypeError: an integer is required
Here is the code:
import win32api
import win32con
import time
import pythoncom
import pyHook
import os
def Click(x,y):
win32api.SetCursorPos((x,y))
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0)
def DeleteRun(event):
Click(1250, 741)
time.sleep(0.1)
Click(649,261)
time.sleep(0.1)
Click(651, 348)
time.sleep(0.1)
Click(800, 442)
time.sleep(0.1)
Click(865, 612)
Click(20,20)
KeyGrabber = pyHook.HookManager()
KeyGrabber.KeyDown = DeleteRun
KeyGrabber.HookKeyboard()
pythoncom.PumpMessages()
It seems that the first time the `DeleteRun` function is run by `pyHook`,
`time.sleep()` accepts floats.
On any following function calls, it seems it only accepts integers.
What is causing this?
I can't wait 5 seconds for the mouse arrangement! It's supposed to save time!
**Specs:**
* python 2.7.2
* Windows 7 (32)
Answer: Okay, how about this? Add a return True to DeleteRun:
def DeleteRun(event):
Click(1250, 741)
time.sleep(0.1)
[...]
return True
I should probably confess that this was little more than google-fu: read the
answer to [this question](http://stackoverflow.com/questions/3049068/help-
with-pyhook-error).
|
What does the PEP's status and structure mean?
Question: What do a PEP's status and structure mean?
These days, I am trying to write a Python 2.7 interpreter.
But I don't know why the comparison operator <> appears in Python 2.7 (it is not
in Python 3.0). I found it mentioned in PEP 401. The official declaration is that it
is for future compatibility (importing from `__future__` or some such).
Something confusing me is that the status was `April Fool!`
April Fool?
Ref:
1. <http://www.python.org/dev/peps/pep-0401/>
2. <http://mail.python.org/pipermail/python-list/2009-April/1202030.html>
Answer: The linked PEP is, as the status suggests, an April Fool's joke; it is not a
real PEP.
There is no distinct `<>` operator; however, in Python 2, the interpreter will
read `<>` as a synonym for `!=`. In Python 3, `<>` is a syntax error.
|
Loading large amount of data into Postgres Hstore
Question: The hstore documentation only talks about using "insert" into hstore one row
at a time. Is there any way to do a bulk upload of several hundred thousand rows,
which could be megabytes or gigabytes, into a Postgres hstore?
The COPY command seems to work only for uploading CSV file columns.
Could someone post an example? Preferably a solution that works with
Python/psycopg.
Answer: The above answers seem incomplete in that if you try to copy in multiple
columns, including a column with an hstore type, and use a comma delimiter, COPY
gets confused, like:
$ cat test
1,a=>1,b=>2,a
2,c=>3,d=>4,b
3,e=>5,f=>6,c
create table b(a int4, h hstore, c varchar(10));
CREATE TABLE;
copy b(a,h,c) from 'test' CSV;
ERROR: extra data after last expected column
CONTEXT: COPY b, line 1: "1,a=>1,b=>2,a"
Similarly:
copy b(a,h,c) from 'test' DELIMITER ',';
ERROR: extra data after last expected column
CONTEXT: COPY b, line 1: "1,a=>1,b=>2,a"
This can be fixed, though, by importing as a CSV and quoting the field to be
imported into hstore:
$ cat test
1,"a=>1,b=>2",a
2,"c=>3,d=>4",b
3,"e=>5,f=>6",c
copy b(a,h,c) from 'test' CSV;
COPY 3
select h from b;
h
--------------------
"a"=>"1", "b"=>"2"
"c"=>"3", "d"=>"4"
"e"=>"5", "f"=>"6"
(3 rows)
Quoting is only allowed in CSV format, so importing as a CSV is required, but
you can explicitly set the field delimiter and quote character to non ',' and
'"' values using the DELIMITER and QUOTE arguments for COPY.
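Since the question asked for Python/psycopg: the same quoted-CSV data can be streamed in from Python with psycopg2's copy_expert; a sketch (the connection string and file name are assumptions):
import psycopg2

conn = psycopg2.connect("dbname=mydb")   # assumed connection string
cur = conn.cursor()
with open('test.csv') as f:              # the quoted CSV file shown above
    cur.copy_expert("COPY b(a, h, c) FROM STDIN WITH CSV", f)
conn.commit()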
|
Import web2py's DAL to be used with Google Cloud SQL on App Engine
Question: I want to build an app on App Engine which uses Cloud SQL as backend database
instead of App engine's own datastore facility (which doesn't support common
SQL operations such as JOIN).
Cloud SQL has a DB-API and hence I was looking for a lightweight Data
Abstraction Layer (DAL) to help easily manipulate the cloud databases. A
little research revealed that web2py has a pretty neat DAL which is compatible
with Cloud SQL.
Since I don't actually need the whole full-stack web2py framework, I copied
the dal.py file out from the /gluon folder into a simple testing app's main
directory and included this line in my app:
from dal import DAL, Field
db=DAL('google:sql://myproject:myinstance/mydatabase')
However, this generated an error after I deployed the app and tried to run it.
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 701, in __call__
handler.get(*groups)
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/helloworld2.py", line 13, in get
db=DAL('google:sql://serangoon213home:rainman001/guestbook')
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 5969, in __init__
raise RuntimeError, "Failure to connect, tried %d times:\n%s" % (attempts, tb)
RuntimeError: Failure to connect, tried 5 times:
Traceback (most recent call last):
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 5956, in __init__
self._adapter = ADAPTERS[self._dbname](*args)
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 3310, in __init__
self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
File "/base/python_runtime/python_dist/lib/python2.5/_threading_local.py", line 199, in __getattribute__
return object.__getattribute__(self, name)
AttributeError: 'local' object has no attribute 'folder'
It looks like that it was due to an error with the 'folder' attribute which
was assigned by the statement
self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
Does anyone know what this attribute does and how can I resolve this problem?
Answer: folder is a parameter in the DAL constructor. It points to the folder where you
store DBs (SQLite). Thus, I don't think that's the problem in your case. I
would check the connection string again.
From the web2py docs:
The DAL can be used from any Python program simply by doing this:
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite',folder='path/to/app/databases')
i.e. import the DAL, Field, connect and specify the folder which contains the .table files (the app/databases folder).
To access the data and its attributes we still have to define all the tables we are going to access with db.define_tables(...).
If we just need access to the data but not to the web2py table attributes, we get away without re-defining the tables but simply asking web2py to read the necessary info from the metadata in the .table files:
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite',folder='path/to/app/databases',
auto_import=True))
This allows us to access any db.table without need to re-define it.
|
How to create an SQL View with SQLAlchemy?
Question: Everything is in the title. Is there a "Pythonic" way (I mean, no "pure SQL"
query) to define an SQL view with SQLAlchemy ?
Thanks for your help,
Answer: **Update:** See also the SQLAlchemy usage recipe
[here](https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/Views)
Creating a (read-only non-materialized) view is not supported out of the box
as far as I know. But adding this functionality in SQLAlchemy 0.7 is
straightforward (similar to the example I gave
[here](http://stackoverflow.com/a/9597404/92092)). You just have to write a
[compiler extension](http://docs.sqlalchemy.org/en/latest/core/compiler.html)
`CreateView`. With this extension, you can then write (assuming that `t` is a
table object with a column `id`)
createview = CreateView('viewname', t.select().where(t.c.id>5))
engine.execute(createview)
v = Table('viewname', metadata, autoload=True)
for r in engine.execute(v.select()):
print r
Here is a working example:
from sqlalchemy import Table
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import Executable, ClauseElement
class CreateView(Executable, ClauseElement):
def __init__(self, name, select):
self.name = name
self.select = select
@compiles(CreateView)
def visit_create_view(element, compiler, **kw):
return "CREATE VIEW %s AS %s" % (
element.name,
compiler.process(element.select, literal_binds=True)
)
# test data
from sqlalchemy import MetaData, Column, Integer
from sqlalchemy.engine import create_engine
engine = create_engine('sqlite://')
metadata = MetaData(engine)
t = Table('t',
metadata,
Column('id', Integer, primary_key=True),
Column('number', Integer))
t.create()
engine.execute(t.insert().values(id=1, number=3))
engine.execute(t.insert().values(id=9, number=-3))
# create view
createview = CreateView('viewname', t.select().where(t.c.id>5))
engine.execute(createview)
# reflect view and print result
v = Table('viewname', metadata, autoload=True)
for r in engine.execute(v.select()):
print r
If you want, you can also specialize for a dialect, e.g.
@compiles(CreateView, 'sqlite')
def visit_create_view(element, compiler, **kw):
return "CREATE VIEW IF NOT EXISTS %s AS %s" % (
element.name,
compiler.process(element.select, literal_binds=True)
)
|
How do a load a python package resource from the current distribution using pkg_resources?
Question: I have a Python package with some css stylesheets which I have included as
resources like so:
from setuptools import setup
setup(
package_data={
'my.package.name': ['*.css']
}
# ...
)
I would now like to load one of these included resources as a string. What is
the best way to load a resource from the current package?
I see that the
[`pkg_resources.Distribution`](http://packages.python.org/distribute/pkg_resources.html#distribution-
methods) object has a `get_resource_string()` method, but I am stuck on how to
use this: How do I get a `Distribution` object for the current package?
Answer: There is a convenience method at the top level of `pkg_resources` for this:
import pkg_resources
my_data = pkg_resources.resource_string(__name__, "my_style.css")
|
Put python doctest at the end of the code file?
Question: I can put python doctests in the bodies of each function, which I sometimes
like for small libraries, because they are in the same file as the function.
Or I can put them all together into a separate file and execute the separate
file, which is nice in case I do not want the doctest in between the
functions. Sometimes I find the code is easier to work on if the docstrings
are small.
Is there also a way to keep the python doctests in the same file, but put them
all together at the end of the file?
* * *
EDIT: A solution, based on the accepted answer below:
def hello_world():
return u'Hello World'
def hello(name):
return u'Hello %s' % name
def doctest_container():
"""
>>> hello_world()
u'Hello World'
>>> hello(u'Guido')
u'Hello Guido'
"""
pass
if __name__ == "__main__":
import doctest
doctest.testmod()
In fact it is simple, a dummy function is created as the last function that
contains all the doctests in one docstring.
Answer: You can append the doctests to the docstring at the end of file like that:
def myfunc():
"""This is a docstring without a doctest
"""
pass
# ... some other code here
# Add docstrings for doctest:
myfunc.__doc__ += """
>>> myfunc()
>>> repr(myfunc())
None
"""
|
Show more levels of exception in Python traceback
Question: I'm catching exceptions in context managers; however, I don't see all levels of
re-raised exceptions. Does anyone know how to improve this?
import traceback
def f():
try:
raise Exception("Interesting")
except Exception as e:
raise Exception("Exc {} raised".format(e))
class Try():
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
print("Exception {} raised".format(exc_val))
print("".join(traceback.format_tb(exc_tb, 100)))
return True
with Try():
f()
Here I'd like to also see the code line of the "Interesting" exception (line
5) in the traceback, however I get
Exception Exc Interesting raised raised
File "try_test.py", line 19, in <module>
f()
File "try_test.py", line 7, in f
raise Exception("Exc {} raised".format(e))
Answer: Use `traceback.format_exception` instead of `traceback.format_tb`.
See the [`traceback`](http://docs.python.org/py3k/library/traceback.html)
documentation.
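A minimal sketch of the same context manager using format_exception, which on Python 3 also renders the chained "Interesting" exception via the implicit __context__:
import traceback

class Try(object):
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # unlike format_tb, format_exception includes the exception itself
        # and, on Python 3, any chained exceptions (__context__/__cause__)
        print("".join(traceback.format_exception(exc_type, exc_val, exc_tb)))
        return True

with Try():
    try:
        raise Exception("Interesting")
    except Exception as e:
        raise Exception("Exc {} raised".format(e))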
|
Why are references to instance methods stored in each instance object rather than in the class object?
Question: From what I understand, each instance of a class stores references to the
instance's methods.
I thought, in concept, all instances of a class have the same instance
methods. If so, both memory savings and logical clarity seem to suggest that
instance methods should be stored in the class object rather than the instance
object (with the instance object looking them up through the class object; of
course, each instance has a reference to its class). Why is this not done?
A secondary question. Why are instance methods not accessible in a way similar
to instance attributes, i.e., through `__dict__`, or through some other system
attribute? Is there any way to look at (and perhaps change) the names and the
references to instance methods?
EDIT:
Oops, sorry. I was totally wrong. I saw the following Python 2 code, and
incorrectly concluded from it that instance methods are stored in the
instances. I am not sure what it does, since I don't use Python 2, and `new`
is gone from Python 3.
import new
class X(object):
def f(self):
print 'f'
a = X()
b = X()
def g(self):
print 'g'
# I thought this modified instance method just in a, not in b
X.f = new.instancemethod(g, a, X)
Answer: > From what I understand, each instance of a class stores references to the
> instance's methods.
I don't know where you got this from, but it's wrong. They don't.
> Why are instance methods not accessible in a way similar to instance
> attributes, i.e., through `__dict__`, or through some other system
> attribute?
Well, because they are not stored on the instance.
> Is there any way to look at (and perhaps change) the names and the
> references to instance methods?
Since these references don't exist, you cannot change them. You can of course
create any attribute you want by normal assignments, but note that functions
stored on the instance are not treated like ordinary methods -- the mechanism
that implicitly passes the `self` parameter does not apply for them.
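A quick way to see this for yourself (a sketch, reusing the class from the question's edit):
class X(object):
    def f(self):
        print('f')

a = X()
print('f' in a.__dict__)                 # False: nothing stored on the instance
print('f' in X.__dict__)                 # True: the function lives on the class
print(a.f.__func__ is X.__dict__['f'])   # True: a.f is a bound wrapper built on access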
|
Issue in generating the xml file from python minidom
Question: Here is the code:
from xml.dom.minidom import Document
doc = Document()
root = doc.createElement('root')
doc.appendChild(root)
for i in range(1,3):
main = doc.createElement('item class:=memory')
root.appendChild(main)
for j in range(1,3):
text = doc.createTextNode('DIMM Size'+str(j))
main.appendChild(text)
print (doc.toprettyxml(indent='\t'))
Here is the output:
<?xml version="1.0" ?>
<root>
<item class:=memory>
DIMM Size1
DIMM Size2
</item class:=memory>
<item class:=memory>
DIMM Size1
DIMM Size2
</item class:=memory>
</root>
I am trying to generate the file with the code above. Is there a way to
generate the following output instead:
<root>
<item class:=memory>
<p> DIMM Size1 </p>
<p>DIMM Size2 </p>
</item>
<item class:=memory>
<p>DIMM Size1</p>
<p>DIMM Size2</p>
</item>
</root>
Answer: You need two quick changes
1. Create a `p` element e.g. `doc.createElement('p')`
2. Don't set attributes manually; instead use node.attributes, e.g. `main.attributes['class']='memory'`
so your code should look like this
from xml.dom.minidom import Document
doc = Document()
root = doc.createElement('root')
doc.appendChild(root)
for i in range(1,3):
main = doc.createElement('item')
main.attributes['class']='memory'
root.appendChild(main)
for j in range(1,3):
p = doc.createElement('p')
text = doc.createTextNode('DIMM Size'+str(j))
p.appendChild(text)
main.appendChild(p)
print (doc.toprettyxml(indent='\t'))
A longer-term change would be to use
[ElementTree](http://docs.python.org/library/xml.etree.elementtree.html), which
has a more intuitive interface and is easier to use, even more so when reading XML.
For example, your example in ElementTree:
from xml.etree import cElementTree as etree
root = etree.Element('root')
for i in range(1,3):
item = etree.SubElement(root, 'item')
item.attrib['class']='memory'
for j in range(1,3):
p = etree.SubElement(item, 'p')
p.text = 'DIMM Size %s'%j
print etree.tostring(root)
|
Connecting to LibreOffice with named pipes
Question: I can connect with sockets just fine, but I heard that using pipes is faster
when everything is local, so I wanted to try it out, but I can't get a
connection.
I start Libre with
> soffice --headless --invisible --norestore --nodefault --nolockcheck --nofirstwizard --accept='pipe,name=ooo_pipe;urp;'
And the bare minimum python script that should work but doesn't is
import uno
from com.sun.star.connection import NoConnectException
pipe = 'ooo_pipe'
localContext = uno.getComponentContext()
resolver = localContext.ServiceManager.createInstanceWithContext("com.sun.star.bridge.UnoUrlResolver", localContext)
context = resolver.resolve("uno:pipe,name=%s;urp;StarOffice.ComponentContext" % pipe)
Answer: I've used socket mode so far. Just tested pipe on my machine by the cmd:
/usr/lib/openoffice/program/soffice.bin -accept='pipe,name=foo;urp;StarOffice.ServiceManager' -nologo -headless -nofirststartwizard -invisible
$ lsof -c soffice|egrep "pipe|foo"
soffice.b 6698 user 3r FIFO 0,8 0t0 15766935 pipe
soffice.b 6698 user 4w FIFO 0,8 0t0 15766935 pipe
soffice.b 6698 user 15u unix 0xffff88009773ed00 0t0 15767001 /tmp/OSL_PIPE_1000_foo
lsof shows that there is a named socket foo, and it's OK to get the connection
in Python. At the start of the experiment, there were occasions where no foo was
generated and hence com.sun.star.connection.NoConnectException was raised. But
I can't repeat this error after that.
We've used socket-mode headless soffice in production for several years and
it's very stable and fast enough. It seems pipe mode here still relies on a Unix
socket, so I suggest using socket mode.
|
writing to csv, Python, different data types
Question: I'm new to Python and would like to write data of different types to the
columns of a csv file.
I have two lists and one ndarray. I would like to have these as the three
columns with the first row being the variable names.
Is there a way to do this in one line, or does one have to first convert to
arrays?
len(all_docs.data)
Out[34]: 19916
In [35]: type(all_docs.data)
Out[35]: list
In [36]: len(all_docs.target)
Out[36]: 19916
In [37]: type(all_docs.target)
Out[37]: numpy.ndarray
In [38]: id = range(len(all_docs.target))
Answer: You could convert it all over to a numpy array and save it with `savetxt`, but
why not just do it directly?
You can iterate through the array just like you'd iterate through a list. Just
`zip` them together.
with open('output.csv', 'w') as outfile:
    outfile.write('Col1name, Col2name, Col3name\n')
    for a, b, c in zip(col1, col2, col3):
        outfile.write('{}, {}, {}\n'.format(a, b, c))
Or, if you'd prefer, you can use the `csv` module. If you have to worry about
escaping `,`'s, it's quite useful.
import csv
with open('output.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(['Col1name', 'Col2name', 'Col3name'])
    for row in zip(col1, col2, col3):
        writer.writerow(row)
|
python mechanize session not saved
Question: I'm trying to use Python mechanize to retrieve the list of apps on iTunes
Connect. Once this list is retrieved, further work will be done with those
links.
Logging in succeeds but then when i follow the "Manage Your Applications" link
I get redirected back to the login page. It is as if the session gets lost.
import mechanize
import cookielib
from BeautifulSoup import BeautifulSoup
import html2text
filename = 'itunes.html'
br = mechanize.Browser()
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
br.open('https://itunesconnect.apple.com/WebObjects/iTunesConnect.woa')
br.select_form(name='appleConnectForm')
br.form['theAccountName'] = username
br.form['theAccountPW'] = password
br.submit()
apps_link = br.find_link(text='Manage Your Applications')
print "Manage Your Apps link = ", apps_link
req = br.follow_link(text='Manage Your Applications')
for app_link in br.links():
print "link is ", app_link
Any ideas what could be wrong?
Answer: You need to
[save/load](http://docs.python.org/library/cookielib.html#cookielib.FileCookieJar.save)
the cookiejar
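A minimal sketch of persisting the jar between runs (the filename is an assumption; br is the mechanize.Browser from the question):
import cookielib

cj = cookielib.LWPCookieJar('cookies.txt')
try:
    cj.load(ignore_discard=True)      # reuse cookies from a previous session
except IOError:
    pass                              # first run: no cookie file yet
br.set_cookiejar(cj)

# ... log in and browse as before ...

cj.save(ignore_discard=True)          # keep session cookies for the next run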
|
Importing everything to the current namespace from a module in ipython
Question: I want to import everything in a module to the global namespace in an IPython
session. So I tried `from <module> import *`, but that doesn't work. Although
this works as expected in a python session. How can I do this in IPython?
I realise this is bad practice, but I just want to do this for an interactive
session for a frequently used module.
Answer: `from ROOT import *` is not supported by PyROOT under IPython. Here is the
warning from ROOT 5.34/09:
UserWarning: "from ROOT import *" is not supported under IPython
|
Python list to insert to other table or list
Question: Hi Pythonistas, I am a newbie here. I have a table with duplicate dates, and
field2 is the id of the sale:
table_sale
field1 field2 field3 field4 field5
3/16/2012 a KONDRA I KOMANG 1 TERAPI OZON 60 MENIT
3/16/2012 b WARTI NI WAYAN 1 TERAPI OZON 60 MENIT
3/16/2012 c MARDIKA I GUSTI PUTU 1 TERAPI OZON 60 MENIT
3/16/2012 d DARMIASIH NI KOMANG 1 TERAPI OZON 60 MENIT
3/19/2012 e DARMIASIH NI KOMANG 0.5 Orbitalized 240T
3/19/2012 e DARMIASIH NI KOMANG 0.5 Octogenarian 240T
3/19/2012 e DARMIASIH NI KOMANG 1 TERAPI AKUPUNKTUR
3/29/2012 f ARNI NI MADE 3 Lingzhi 60C
3/29/2012 f ARNI NI MADE 1 Octogenarian 240T
How do I get or print a result like this, and do it in the Python way? Thanks.
table_log
field1 field2
KONDRA I KOMANG ;3/16/2012 (1 TERAPI OZON 60 MENIT)
WARTI NI WAYAN ;3/16/2012 (1 TERAPI OZON 60 MENIT)
MARDIKA I GUSTI PUTU ;3/16/2012 (1 TERAPI OZON 60 MENIT)
DARMIASIH NI KOMANG ;3/16/2012 (1 TERAPI OZON 60 MENIT) ;3/19/2012 (0.5 Orbitalized 240T + 0.5 Octogenarian 240T + 1 TERAPI AKUPUNKTUR)
ARNI NI MADE ;3/29/2012 (3 Lingzhi 60C + 1 Octogenarian 240T)
Answer: This will parse your example table:
from itertools import groupby
from operator import itemgetter
table = '''\
field1 field2 field3 field4 field5
3/16/2012 a KONDRA I KOMANG 1 TERAPI OZON 60 MENIT
3/16/2012 b WARTI NI WAYAN 1 TERAPI OZON 60 MENIT
3/16/2012 c MARDIKA I GUSTI PUTU 1 TERAPI OZON 60 MENIT
3/16/2012 d DARMIASIH NI KOMANG 1 TERAPI OZON 60 MENIT
3/19/2012 e DARMIASIH NI KOMANG 0.5 Orbitalized 240T
3/19/2012 e DARMIASIH NI KOMANG 0.5 Octogenarian 240T
3/19/2012 e DARMIASIH NI KOMANG 1 TERAPI AKUPUNKTUR
3/29/2012 f ARNI NI MADE 3 Lingzhi 60C
3/29/2012 f ARNI NI MADE 1 Octogenarian 240T
'''
# Setup bounds for fields in fixed width table.
fields = (0,12,20,44,52,None)
bounds = zip(fields[:-1],fields[1:])
# Parse the data
data = []
for line in table.splitlines():
data.append([line[a:b].rstrip() for a,b in bounds])
# Sort the data by field3,field2 for groupby.
data.pop(0) # remove header line
data.sort(key=itemgetter(2,1))
# Output result header
print '{:24}{}'.format('field1','field2')
# Output result lines
for field3,items in groupby(data,key=itemgetter(2)):
result = []
for field1,field1_items in groupby(items,key=itemgetter(0)):
result.append(';{} ({})'.format(field1,' + '.join(' '.join(item[3:]) for item in field1_items)))
print '{:24}{}'.format(field3,' '.join(result))
### Output
field1 field2
ARNI NI MADE ;3/29/2012 (3 Lingzhi 60C + 1 Octogenarian 240T)
DARMIASIH NI KOMANG ;3/16/2012 (1 TERAPI OZON 60 MENIT) ;3/19/2012 (0.5 Orbitalized 240T + 0.5 Octogenarian 240T + 1 TERAPI AKUPUNKTUR)
KONDRA I KOMANG ;3/16/2012 (1 TERAPI OZON 60 MENIT)
MARDIKA I GUSTI PUTU ;3/16/2012 (1 TERAPI OZON 60 MENIT)
WARTI NI WAYAN ;3/16/2012 (1 TERAPI OZON 60 MENIT)
|
Change count into an array of letters, python
Question: I am struggling with an idea I want to put into a python script. I'm not even
sure how to ask the most appropriate question, as I've been surfing through the
net trying to find what I want with no luck.
Basically, I have a script that does a simple calculation:
divider = int(math.ceil(df.scale / 3000))
This is because I want ' divider' to return the value divided by 3000 and
always rounded up. I want to use that value to help me return letters.
So, it goes like this:
if 1 is returned, then I want to create 'A'; if 2 is returned, then I want to create
'A', 'B'; if 3 is returned, then I want to create 'A', 'B', 'C'; and so on....
My end result is that I want to save some files. 'divider' will determine how
many files I want to save and then each file will recursively be named with
the letter in it (i.e. FileA, FileB, FileC...)
Ok, I know my question isn't exactly well put together, but I'm struggling
with the logic, so if you need some clarity, please let me know.
Answer: Do you mean something like:
for i in range(int(math.ceil(df.scale / 3000))):
# i will contain 0, 1, 2, ...
# letter will contain 'A', 'B', 'C', ...
letter = chr(ord('A') + i)
Or if you need the actual list:
[chr(ord('A') + i) for i in range(int(math.ceil(df.scale / 3000)))]
You can also use
[`string.ascii_uppercase`](http://docs.python.org/library/string.html#string.ascii_uppercase)
for a list of uppercase letters and slice it as you need:
from string import ascii_uppercase
print list(ascii_uppercase)[:int(math.ceil(df.scale / 3000))]
|
R/XLL: Interface to call XLL method in R
Question: I am trying to call the methods defined in an XLL add-in (for Excel) from R,
something similar to this Python code:
import os
from win32com.client import Dispatch
Path = 'myxll.xll'
xlApp = Dispatch("Excel.Application")
xlApp.RegisterXLL(Path)
# function call from excel
# =xllfunction("param1","param2",...)
result = xlApp.run('xllfunction', "param1","param2",...)
Is there any library in R that does the XLL interface? Thanks for your help.
Answer: rcom + statconnDCOM is what you need. rcom is on CRAN so you can do
install.packages("rcom")
statconnDCOM is available here: <http://rcom.univie.ac.at/>
|
Why does Python's math.factorial not play nice with threads?
Question: Why does math.factorial act so weird in a thread?
Here is an example, it creates three threads:
* thread that just sleeps for a while
* thread that increments an int for a while
* thread that does math.factorial on a large number.
It calls `start` on the threads, then `join` with a timeout
The sleep and spin threads work as expected and return from `start` right
away, and then sit in the `join` for the timeout.
The factorial thread on the other hand does not return from `start` until it
runs to the end!
import sys
from threading import Thread
from time import sleep, time
from math import factorial
# Helper class that stores a start time to compare to
class timed_thread(Thread):
def __init__(self, time_start):
Thread.__init__(self)
self.time_start = time_start
# Thread that just executes sleep()
class sleep_thread(timed_thread):
def run(self):
sleep(15)
print "st DONE:\t%f" % (time() - time_start)
# Thread that increments a number for a while
class spin_thread(timed_thread):
def run(self):
x = 1
while x < 120000000:
x += 1
print "sp DONE:\t%f" % (time() - time_start)
# Thread that calls math.factorial with a large number
class factorial_thread(timed_thread):
def run(self):
factorial(50000)
print "ft DONE:\t%f" % (time() - time_start)
# the tests
print
print "sleep_thread test"
time_start = time()
st = sleep_thread(time_start)
st.start()
print "st.start:\t%f" % (time() - time_start)
st.join(2)
print "st.join:\t%f" % (time() - time_start)
print "sleep alive:\t%r" % st.isAlive()
print
print "spin_thread test"
time_start = time()
sp = spin_thread(time_start)
sp.start()
print "sp.start:\t%f" % (time() - time_start)
sp.join(2)
print "sp.join:\t%f" % (time() - time_start)
print "sp alive:\t%r" % sp.isAlive()
print
print "factorial_thread test"
time_start = time()
ft = factorial_thread(time_start)
ft.start()
print "ft.start:\t%f" % (time() - time_start)
ft.join(2)
print "ft.join:\t%f" % (time() - time_start)
print "ft alive:\t%r" % ft.isAlive()
And here is the output on Python 2.6.5 on CentOS x64:
sleep_thread test
st.start: 0.000675
st.join: 2.006963
sleep alive: True
spin_thread test
sp.start: 0.000595
sp.join: 2.010066
sp alive: True
factorial_thread test
ft DONE: 4.475453
ft.start: 4.475589
ft.join: 4.475615
ft alive: False
st DONE: 10.994519
sp DONE: 12.054668
I've tried this on python 2.6.5 on CentOS x64, 2.7.2 on Windows x86 and the
factorial thread does not return from start on either of them until the thread
is done executing.
I've also tried this with PyPy 1.8.0 on Windows x86, and the result is
slightly different. The start _does_ return immediately, but then the join
doesn't time out!
sleep_thread test
st.start: 0.001000
st.join: 2.001000
sleep alive: True
spin_thread test
sp.start: 0.000000
sp DONE: 0.197000
sp.join: 0.236000
sp alive: False
factorial_thread test
ft.start: 0.032000
ft DONE: 9.011000
ft.join: 9.012000
ft alive: False
st DONE: 12.763000
I tried IronPython 2.7.1 too; it produces the expected result.
sleep_thread test
st.start: 0.023003
st.join: 2.028122
sleep alive: True
spin_thread test
sp.start: 0.003014
sp.join: 2.003128
sp alive: True
factorial_thread test
ft.start: 0.002991
ft.join: 2.004105
ft alive: True
ft DONE: 5.199295
sp DONE: 5.734322
st DONE: 10.998619
Answer: Threads often only allow different things to be interleaved in Python, not
different things to happen at the same time, because of the [Global
Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock).
If you look at the Python bytecode:
from math import factorial
def fac_test(x):
factorial(x)
import dis
dis.dis(fac_test)
you get:
4 0 LOAD_GLOBAL 0 (factorial)
3 LOAD_FAST 0 (x)
6 CALL_FUNCTION 1
9 POP_TOP
10 LOAD_CONST 0 (None)
13 RETURN_VALUE
As you can see, the call to `math.factorial` is a single operation at the
Python bytecode level (`6 CALL_FUNCTION`) -- it's implemented in C.
`factorial` doesn't release the GIL because of the type of work it does (see
the comments on my answer), so Python doesn't switch to other threads while
it's running, and you get the result you've observed.
|
shared axes in scatter plots in matplotlib
Question: I am trying to put two scatter plots next to each other with a shared y axis,
but the axis seems to get an odd scale. Without the shared axis the two plots
look fine. I also noticed that the problem does not occur when using "plot"
instead of "scatter". Images are included below. Here is the code I am using.
#!/usr/bin/python
import matplotlib.pyplot as plt
fig = plt.figure(1)
for i in range(1,3):
if i == 1:
ax = fig.add_subplot(1,2,i)
else:
fig.add_subplot(1,2,i, sharey=ax)
#plt.plot([5.0], [1],marker="*",color='tomato')
plt.scatter([5.0], [1], s=20, color='tomato')
plt.show()
[I would include images but the site won't let me as a newbie.] When I run the
code above I see plots with a y axis that runs from 0.0000 to 0.0008 with a
single point plotted at 0.0004. Without shared axes the y axis goes from 0.94
to 1.06 with a single point plotted at 1.00, as expected.
Can anyone tell me why? Is this a bug or a feature?
matplotlib: 0.99.1.2-3ubuntu on Ubuntu 10.04 LTS - the Lucid Lynx
Answer: I've no answer to the why question, but here's how to get rid of it: In your
code snippet, giving `scatter` three points to draw
plt.scatter([1.0,2,3], [1.1,2.2,2.9], s=20, color='tomato')
works for me (matplotlib 1.1.0 on Lucid).
I can only guess that `scatter` tries to be a bit smarter than `plot` with the
axes limits, but whatever it's doing, it goes nuts for just a single point.
|
App Engine - AttributeError: 'function' object has no attribute 'id'
Question: I'm using App Engine, SDK 1.6.3 with Python 2.7.
I've created a model like this:
class MyModel(db.Model):
name = db.StringProperty()
website = db.StringProperty()
I can iterate and see everything except the Key id's. For example, in the
interactive shell I can run this:
from models import *
list = MyModel.all()
for p in list:
print(p.name)
and it prints the name of every Entity. But when I run this:
from models import *
list = MyModel.all()
for p in list:
print(p.key.id) [or p.key.name or p.key.app]
I get an AttributeError:
Traceback (most recent call last):
File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/admin/__init__.py", line 317, in post
exec(compiled_code, globals())
File "<string>", line 4, in <module>
AttributeError: 'function' object has no attribute 'id'
Can anyone please help me??
Answer: key() and id() are instance methods. Try with parenthesis:
for p in list:
print(p.key().id())
See the
[documentation](http://code.google.com/appengine/docs/python/datastore/keyclass.html#Key).
|
Running 'top' in thread produces SIGTTOU
Question: For reasons I won't go into, I need to run a variant of 'top -m io -d 2 10'
within a subprocess from a Python thread on FreeBSD 8.1. The trouble is, it
seems that sometimes SIGTTOU gets produced (under certain code-dependent
conditions that I haven't yet deciphered), halting top and the thread
entirely. Other times, it seems that SIGTTOU is not produced, but top or the
thread get stuck anyway.
The output from top should produce two sets of IO stats for the top 10
processes on the system, where the first set is "absolute" numbers and the
second set is the incremental difference of the stats since the last set, one
second earlier. Running this command on the terminal or within a shell script,
whether redirecting the output or not, works fine.
When the problem occurs, it seems that 'top' writes the first set of outputs,
but then hangs/receives SIGTTOU before it can output the second set. In the
sample code below, only one set of process stats is written to the output
file.
I discovered the SIGTTOU signal running the python script under 'truss', but
it seems that interactions between 'truss' and 'top' themselves may be a
confounding matter, since simply running `truss top -d 2` produces the signal
and hangs, as below:
...
ioctl(1,TIOCGETA,0xffffe460) = 0 (0x0)
ioctl(1,TIOCGETA,0xc6b138) = 0 (0x0)
ioctl(1,TIOCGETA,0xffffe410) = 0 (0x0)
ioctl(1,TIOCGWINSZ,0xffffe460) = 0 (0x0)
ioctl(1,TIOCGWINSZ,0xffffe930) = 0 (0x0)
ioctl(1,TIOCGETA,0x50e560) = 0 (0x0)
sigprocmask(SIG_BLOCK,SIGINT|SIGQUIT|SIGTSTP,0x0) = 0 (0x0)
ioctl(1,TIOCGETA,0x50e560) = 0 (0x0)
SIGNAL 22 (SIGTTOU)
Here's a sample Python script that reproduces the hang and/or SIGTTOU:
import subprocess
from threading import Thread
def run():
with open("top.log", "wb") as f:
subprocess.Popen(("/usr/bin/top", "-m", "io", "-d", "2", "10"), stdout=f, stderr=f, stdin=subprocess.PIPE).communicate()
if __name__ == "__main__":
th = Thread(target=run)
print "Starting"
th.start()
th.join()
On my last run through, this sample program did not produce SIGTTOU, but top
did hang. Truss shows:
....
open("/usr/local/lib/python2.7/lib-tk/_heapq.pyc",O_RDONLY,0666) ERR#2 'No such file or directory'
stat("/usr/local/lib/python2.7/lib-dynload/_heapq",0x7fffffffa500) ERR#2 'No such file or directory'
open("/usr/local/lib/python2.7/lib-dynload/_heapq.so",O_RDONLY,0666) = 5 (0x5)
fstat(5,{ mode=-rwxr-xr-x ,inode=238187,size=22293,blksize=16384 }) = 0 (0x0)
sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGKILL|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
open("/usr/local/lib/python2.7/lib-dynload/_heapq.so",O_RDONLY,057) = 6 (0x6)
fstat(6,{ mode=-rwxr-xr-x ,inode=238187,size=22293,blksize=16384 }) = 0 (0x0)
pread(0x6,0x80074c2e0,0x1000,0x0,0xffff800800653120,0x8080808080808080) = 4096 (0x1000)
mmap(0x0,1069056,PROT_NONE,MAP_PRIVATE|MAP_ANON|MAP_NOCORE,-1,0x0) = 34389442560 (0x801c54000)
mmap(0x801c54000,12288,PROT_READ|PROT_EXEC,MAP_PRIVATE|MAP_FIXED|MAP_NOCORE,6,0x0) = 34389442560 (0x801c54000)
mmap(0x801d56000,12288,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_FIXED,6,0x2000) = 34390499328 (0x801d56000)
mmap(0x0,36864,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON,-1,0x0) = 34366377984 (0x800655000)
close(6) = 0 (0x0)
mmap(0x0,832,PROT_READ|PROT_WRITE,MAP_ANON,-1,0x0) = 34366414848 (0x80065e000)
munmap(0x80065e000,832) = 0 (0x0)
sigprocmask(SIG_SETMASK,0x0,0x0) = 0 (0x0)
sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGKILL|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
sigprocmask(SIG_SETMASK,0x0,0x0) = 0 (0x0)
close(5) = 0 (0x0)
close(4) = 0 (0x0)
close(3) = 0 (0x0)
close(2) = 0 (0x0)
fstat(1,{ mode=crw------- ,inode=102,size=0,blksize=4096 }) = 0 (0x0)
ioctl(1,TIOCGETA,0xffffe400) = 0 (0x0)
Starting
write(1,"Starting\n",9) = 9 (0x9)
sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGKILL|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
_umtx_op(0x7fffffffe1d8,0x3,0x1,0x0,0x0,0x0) = 0 (0x0)
sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGABRT|SIGEMT|SIGKILL|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2) = 0 (0x0)
sigprocmask(SIG_SETMASK,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGABRT|SIGEMT|SIGKILL|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2) = 0 (0x0)
sigprocmask(SIG_SETMASK,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
mmap(0x7fffffbde000,135168,PROT_READ|PROT_WRITE,MAP_STACK,-1,0x0) = 140737484021760 (0x7fffffbde000)
mprotect(0x7fffffbde000,4096,PROT_NONE) = 0 (0x0)
thr_new(0x7fffffffe220,0x68,0x800a9f4c0,0x186fc,0xffffffff,0x0) = 0 (0x0)
sigprocmask(SIG_SETMASK,0x0,0x0) = 0 (0x0)
mmap(0x0,2097152,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON,-1,0x0) = 34390511616 (0x801d59000)
mmap(0x801f59000,684032,PROT_READ|PROT_WRITE,MAP_PRIVATE|MAP_ANON,-1,0x0) = 34392608768 (0x801f59000)
munmap(0x801d59000,684032) = 0 (0x0)
_umtx_op(0x8010127f8,0x10,0x1,0x0,0x0,0x0) = 0 (0x0)
_umtx_op(0x800e0b438,0xf,0x0,0x0,0x0,0x0) = 0 (0x0)
_umtx_op(0x800e0b438,0x10,0x1,0x0,0x0,0x0) = 0 (0x0)
_umtx_op(0x800e0b438,0x10,0x1,0x0,0x0,0x0) = 0 (0x0)
_umtx_op(0x800e0b438,0x10,0x1,0x0,0x0,0x8080808080808080) = 0 (0x0)
open("top.log",O_WRONLY|O_CREAT|O_TRUNC,0666) = 2 (0x2)
fstat(2,{ mode=-rw-r--r-- ,inode=70860,size=0,blksize=16384 }) = 0 (0x0)
pipe(0x7fffffbfd910) = 0 (0x0)
pipe(0x7fffffbfd870) = 0 (0x0)
fcntl(6,F_GETFD,) = 0 (0x0)
fcntl(6,F_SETFD,FD_CLOEXEC) = 0 (0x0)
sigprocmask(SIG_BLOCK,SIGHUP|SIGINT|SIGQUIT|SIGABRT|SIGEMT|SIGKILL|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGSTOP|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2) = 0 (0x0)
fork() = 21503 (0x53ff)
sigprocmask(SIG_SETMASK,SIGHUP|SIGINT|SIGQUIT|SIGILL|SIGTRAP|SIGABRT|SIGEMT|SIGFPE|SIGBUS|SIGSEGV|SIGSYS|SIGPIPE|SIGALRM|SIGTERM|SIGURG|SIGTSTP|SIGCONT|SIGCHLD|SIGTTIN|SIGTTOU|SIGIO|SIGXCPU|SIGXFSZ|SIGVTALRM|SIGPROF|SIGWINCH|SIGINFO|SIGUSR1|SIGUSR2,0x0) = 0 (0x0)
close(6) = 0 (0x0)
close(3) = 0 (0x0)
read(5,0x801e31024,1048576) = 0 (0x0)
close(5) = 0 (0x0)
fcntl(4,F_GETFL,) = 2 (0x2)
fstat(4,{ mode=p--------- ,inode=0,size=0,blksize=4096 }) = 0 (0x0)
close(4) = 0 (0x0)
I've looked into SIGTTOU and found references to the TOSTOP termios flag, and
I've fiddled with it in the main thread, in the child thread, and in the
environment invoking Python, all to no avail. It's been an educational
process, but I'm not there yet.
I've run tests to make sure that the top process is created in and appears to
stay in the process group of the Python process (based on the SIGTTOU
documentation, if it weren't, this would be the reason for SIGTTOU), and that
seems fine: the PGRP ends up being the same as the Python PID/PGRP.
I've tried running 'top' with subprocess.check_output and with .Popen() using
shell=True, shell=False, and redirecting std{out,err,in} all over the place,
none of which seems to change this end result. I've tried running 'top' using
a '/bin/sh -c' command executed through subprocess, also to no avail.
Without doing something semi-weird like running 'top' within a shell script
which my Python thread invokes, or resorting to os.fork() instead of using
threading, how can I get around this issue, and what's the root cause?
Answer: I realize that this question is a bit old, but if you're still running into
errors, I'd love to debug this into the dirt.
**Root cause** : Your SIGTTOU is occurring because your Python interpreter is
forking to create the background thread when you call `th =
Thread(target=run)` and `top` hasn't been told/doesn't know it shouldn't be
using the terminal. You are seeing signals because `top` is getting frisky and
trying to write to the terminal (or change its emulation mode) as a
_background_ process when you have disallowed this behavior from occurring in
your TTY settings.
`man stty` explains this more succinctly than I would:
tostop (-tostop)
Send (do not send) SIGTTOU for background output. This causes back-
ground jobs to stop if they attempt terminal output.
**Workaround** : Allow background threads to throw output onto the terminal
during the run of your script (`stty -tostop; python my_script.py; stty
tostop`) or add the (`'-n'`) flag to your subprocess call of `top`.
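For example, the second option is just a one-line change to the `Popen` call in the question's `run()` function (an untested sketch; everything else stays the same):

    subprocess.Popen(("/usr/bin/top", "-n", "-m", "io", "-d", "2", "10"),
                     stdout=f, stderr=f, stdin=subprocess.PIPE).communicate()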
* * *
Elaboration: _Only one_ process per group can be in the foreground and the
rest remain in the background -- the _foreground_ process handles I/O from a
tty and the rest must remain as _background_ processes or you'll see job
control signals start getting thrown (e.g. SIGTTIN/SIGTTOU).
During the execution of your Python script, I believe the following occurs:
$SHELL #(controls TTY)
$ python my_script.py #(tcsetpgrp() is called to hand off control of TTY)
~~~ heck yeah, snake party ~~~
th = Thread(target=run) #(run target=proc in background)
print "Starting" #(still okay -- this gets handed up to the foreground interpreter)
th.start()
#(here be dragons, std i/o in background fork)
subprocess.Popen(("/usr/bin/top", "-m", "io", "-d", "2", "10").communicate()
I checked out the [FreeBSD manual for its top
implementation](https://www.freebsd.org/cgi/man.cgi?top) and I found the
following smoking gun:
DESCRIPTION
Top displays the top processes on the system and periodically updates
this information...
Top makes a distinction between terminals that support advanced capa-
bilities and those that do not...If the output of top is redi-
rected to a file, it acts as if it were being run on a dumb terminal.
...
OPTIONS
-i Use "interactive" mode. In this mode, any input is immediately
read for processing. See the section on "Interactive Mode" for
an explanation of which keys perform what functions. After the
command is processed, the screen will immediately be updated,
even if the command was not understood. This mode is the
default when standard output is an intelligent terminal.
...
-n Use "non-interactive" mode. This is identical to "batch" mode.
Whereas `top` doesn't know that it's being run in a background process (the
file handling is being done with your Python context manager) and you didn't
specify non-interactive mode, it's assuming that it's free to use the tty --
meaning that you'll probably see SIGTTIN signals if `top` gets ahold of any
STDIN and SIGTTOU signals when commands are processed and it tries to update
the screen.
Of particular interest from FreeBSD's top implementation, the difference in
what happens when called interactively or not:
* [Flags being parsed](https://github.com/freebsd/freebsd/blob/master/contrib/top/top.c#L329-L332)
* [See differences in function calls for non-interactive](https://github.com/freebsd/freebsd/blob/master/contrib/top/top.c#L741-L748)
Your idea to add `shell=True` verifies this theory as it [sets the child
process of 'top' to the PID of the shell that `subprocess.Popen(..)`
spawns](https://docs.python.org/2/library/subprocess.html#subprocess.Popen.pid),
which is still in a background Python thread.
(n.b. apologies: I don't have access to a FreeBSD 8.1 host to verify behavior
on your host OS right now.)
|
pipe "less" content out, when "less" get content from stdin
Question: In "ipython", we use `some_obj??` to get documentation, which uses "less" to
show the docs. How can we get the document out of "less", into a text editor?
The content is fed from stdin, so pressing "v" gives the error: "can not edit
standard input"
Answer: `some_obj??` is equivalent to the built-in `help` function. This is a wrapper
around pydoc.help.
import pydoc
doc = pydoc.text.document(some_obj)
print doc
You may save the documentation to a file. Then open in a text editor.
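For example (the filename and editor below are just placeholders):

    import pydoc
    import subprocess

    doc = pydoc.text.document(some_obj)
    with open('/tmp/doc.txt', 'w') as f:
        f.write(doc)
    subprocess.call(['vim', '/tmp/doc.txt'])  # or whichever editor you prefer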
|
How to determine if Python script was run via command line?
Question: ## Background
I would like my Python script to pause before exiting using something similar
to:
`raw_input("Press enter to close.")`
but only if it is NOT run via command line. Command line programs shouldn't
behave this way.
## Question
Is there a way to determine if my Python script was invoked from the command
line:
`$ python myscript.py`
versus double-clicking `myscript.py` to open it with the default interpreter
in the OS?
Answer: If you're running it without a terminal, as when you click on "Run" in
Nautilus, you can just check if it's attached to a tty:
import sys
if sys.stdin.isatty():
# running interactively
print "running interactively"
else:
with open('output','w') as f:
f.write("running in the background!\n")
But, as ThomasK points out, you seem to be referring to running it in a
terminal that closes just after the program finishes. I think there's no way
to do what you want without a workaround; the program is running in a regular
shell and attached to a terminal. The decision of exiting immediately is done
just after it finishes with information it doesn't have readily available (the
parameters passed to the executing shell or terminal).
You could go about [examining the parent process
information](http://code.google.com/p/procpy/) and detecting differences
between the two kinds of invocations, but it's probably not worth it in most
cases. Have you considered adding a command line parameter to your script
(think `--interactive`)?
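A minimal sketch of that idea, using the flag name suggested above:

    import sys

    interactive = '--interactive' in sys.argv[1:]

    # ... rest of the script ...

    if interactive:
        raw_input("Press enter to close.")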
|
Python: import a lib with a non-py extension
Question: I have to import a library which is called `functions.sage`. How can I do it? I
tried:
__import__('functions.sage')
and also this:
import imp
imp.load_source('fun', 'functions.sage')
**Edit** :
Actually, I want to import a [sage](http://en.wikipedia.org/wiki/Sagemath) library
into Sage, and that library contains Sage-specific code. I tried the above variants in
the Sage interpreter, and both gave me 'no functions module' or something like
this.
Answer: This isn't a good idea. (This is right at the borderline between an answer and
a comment, but I wanted to give examples hard to cram into a comment.)
The .sage file either contains Sage-specific syntax and behaviour or it
doesn't. If it doesn't, you can simply rename it to .py, or make a symbolic
link, or whatever. But if it does, then you're going to have to preparse it
anyway before it'll work in Python.
For example, if the "functions.sage" file writes:
x = 2/3
if you load the file into sage, you get an element of QQ:
sage: x
2/3
sage: parent(x)
Rational Field
but in Python 2, you'd simply get int(0).
It might use Sage-style ranges:
sage: [1,3,..,11]
[1, 3, 5, 7, 9, 11]
or other Sage features:
sage: F.<x,y> = GF(2)[]
sage: F
Multivariate Polynomial Ring in x, y over Finite Field of size 2
and all of these are dealt with by the Sage preparser, not by Python. Behind
the scenes, it's doing this:
sage: preparse("F.<x,y> = GF(2)[]")
"F = GF(Integer(2))['x, y']; (x, y,) = F._first_ngens(2)"
UPDATE: Apparently I didn't make the problem clear enough.
sage: import imp
sage: !cat functions.sage
x = 2/3
sage: functions = imp.new_module("functions")
sage: execfile("functions.sage", vars(functions))
sage: dir(functions)
['__builtins__', '__doc__', '__name__', '__package__', 'x']
sage: functions.x
0
sage: type(functions.x)
<type 'int'>
One way or another, you're going to have to pass functions.sage through the
preparser.
|
What's the fastest way to iterate over a CvMat in Python using OpenCV?
Question: I'm using OpenCV with Python to process a video stream. I'd like to implement
my own algorithm, so I need to iterate over each frame.
What I have so far works, but it is way too slow to be real-time. I know that Python
isn't the most efficient programming language, but I believe it can do much
better than this, considering that the built-in image transformation
functions are very fast. Numpy may be the way to go, but I'm not yet familiar
with it.
import cv, numpy
vidFile = cv.CaptureFromFile( 'sample.avi' )
nFrames = int( cv.GetCaptureProperty( vidFile, cv.CV_CAP_PROP_FRAME_COUNT ) )
for f in xrange( nFrames ):
frameImg = cv.QueryFrame( vidFile )
frameMat=cv.GetMat(frameImg)
print "mat ", mat[3,1]
for x in xrange(frameMat.cols):
for y in xrange(frameMat.rows):
# just an example, multiply all 3 components by 0.5
frameMat[y, x] = tuple(c*0.5 for c in frameMat[y, x])
cv.ShowImage( "My Video Window", frameMat )
if cv.WaitKey( waitPerFrameInMillisec ) == 27:
break
How can I speed up the process? Thanks, b_m
Answer: OpenCV has pretty good python documentation
[here](http://opencv.willowgarage.com/documentation/python/index.html).
Basically you should always try to do operations on video frames using these
builtin opencv functions, or numpy. For frame processing take a look at
[operations on
arrays](http://opencv.willowgarage.com/documentation/python/core_operations_on_arrays.html),
using this you can replace your entire pixel by pixel processing loop, which
is absurdly slow:
frameMat=cv.GetMat(frameImg)
print "mat ", mat[3,1]
for x in xrange(frameMat.cols):
for y in xrange(frameMat.rows):
# just an example, multiply all 3 components by 0.5
frameMat[y, x] = tuple(c*0.5 for c in frameMat[y, x])
cv.ShowImage( "My Video Window", frameMat )
with:
cv.ConvertScale(frameImg, frameImg, scale=0.5)
cv.ShowImage( "My Video Window", frameImg )
and easily play it in real time, there are loads of cool functions allowing
you to merge videos etc.
|
Importing module in django installed with easy_install
Question: I just installed django-stdimage with easy_install on my server.
It told me it installed successfully at
/home/myuser/lib/python2.4/django_stdimage-0.2.2-py2.4.egg
How do I import stdimage with django to start using it?
Answer: Found the answer. Easy_install's .egg files are just .zip files. I unzipped
the file where it was installed and then imported the module:
from stdimage import StdImageField
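Alternatively, since eggs are zip-importable, you can usually leave the .egg file intact and just put it on `sys.path` (the path below is the one from the question):

    import sys
    sys.path.append('/home/myuser/lib/python2.4/django_stdimage-0.2.2-py2.4.egg')

    from stdimage import StdImageField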
|
py2app picking up .git subdir of a package during build
Question: We use py2app extensively at our facility to produce self contained .app
packages for easy internal deployment without dependency issues. Something I
noticed recently, and have no idea how it began, is that when building an
.app, py2app started including the .git directory of our main library.
commonLib, for instance, is our root python library package, which is a git
repo. Under this package are the various subpackages such as database,
utility, etc.
commonLib/
|- .git/ # because commonLib is a git repo
|- __init__.py
|- database/
|- __init__.py
|- utility/
|- __init__.py
# ... etc
In a given project, say Foo, we will do imports like `from commonLib import
xyz` to use our common packages. Building via py2app looks something like:
`python setup.py py2app`
So the recent issue I am seeing is that when building an app for project Foo,
I will see it include everything in commonLib/.git/ into the app, which is
extra bloat. py2app has an excludes option but that only seems to be for
python modules. I cant quite figure out what it would take to exclude the .git
subdir, or in fact, what is causing it to be included in the first place.
Has anyone experienced this when using a python package import that is a git
repo? Nothing has changed in our setup.py files for each project, and
commonLib has always been a git repo. So the only thing I can think of being a
variable is the version of py2app and its deps which have obviously been
upgraded over time.
_Edit_
I'm using the latest py2app 0.6.4 as of right now. Also, my setup.py was first
generated from py2applet a while back, but has been hand configured since and
copied over as a template for every new project. I am using PyQt4/sip for
every single one of these projects, so it also makes me wonder if its an issue
with one of the recipes?
### Update
From the first answer, I tried to fix this using various combinations of
`exclude_package_data` settings. Nothing seems to force the .git directory to
become excluded. Here is a sample of what my setup.py files generally look
like:
from setuptools import setup
from myApp import VERSION
appname = 'MyApp'
APP = ['myApp.py']
DATA_FILES = []
OPTIONS = {
'includes': 'atexit, sip, PyQt4.QtCore, PyQt4.QtGui',
'strip': True,
'iconfile':'ui/myApp.icns',
'resources':['src/myApp.png'],
'plist':{
'CFBundleIconFile':'ui/myApp.icns',
'CFBundleIdentifier':'com.company.myApp',
'CFBundleGetInfoString': appname,
'CFBundleVersion' : VERSION,
'CFBundleShortVersionString' : VERSION
}
}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app'],
)
I have tried things like:
setup(
...
exclude_package_data = { 'commonLib': ['.git'] },
#exclude_package_data = { '': ['.git'] },
#exclude_package_data = { 'commonLib/.git/': ['*'] },
#exclude_package_data = { '.git': ['*'] },
...
)
### Update #2
I have posted my own answer which does a monkeypatch on distutils. Its ugly
and not preferred, but until someone can offer me a better solution, I guess
this is what I have.
Answer: I am adding an answer to my own question, to document the only thing I have
found to work thus far. My approach was to monkeypatch distutils to ignore
certain patterns when creating a directory or copying a file. This is really
not what I wanted to do, but like I said, it's the only thing that works so
far.
## setup.py ##
import re
# file_util has to come first because dir_util uses it
from distutils import file_util, dir_util
def wrapper(fn):
def wrapped(src, *args, **kwargs):
if not re.search(r'/\.git/?', src):
fn(src, *args, **kwargs)
return wrapped
file_util.copy_file = wrapper(file_util.copy_file)
dir_util.mkpath = wrapper(dir_util.mkpath)
# now import setuptools so it uses the monkeypatched methods
from setuptools import setup
Hopefully someone will comment on this and tell me a higher level approach to
avoid doing this. But as of now, I will probably wrap this into a utility
method like `exclude_data_patterns(re_pattern)` to be reused in my projects.
|
How to have an 'if' within a loop?
Question: I'm trying to solve a problem, but I feel the loop I've got has a
missing link somewhere...
The brief I've been given: "Now write a program that calculates the minimum
fixed monthly payment needed in order to pay off a credit card balance within 12
months."
In essence, what I've done so far is code up something that will take a base
value (10 for example), take it away from the credit card balance (taking into
account the interest), and if the total number of months it takes to put the
balance into a negative (e.g. paid off) is above 12, it adds to the 'minmonth'
(amount to be paid off per month) until the number of months is equal to or
below 12.
Apologies in advance, I've literally only been learning Python for 2 days!
Where am I going wrong?
balance = float(raw_input('Enter the outstanding balance on your creditcard: '))
interest = float(raw_input('Enter the annual credit card interest rate as a decimal: '))
minmonth = 10
months = 0
monthlyinterest = interest / 12
while(balance > 0):
balance = balance * (1 + monthlyinterest) - minmonth
months = months + 1
if(months > 12):
months = 0
minmonth = minmonth + 10
else:
print 'RESULT!'
print 'Total amount to pay per month would be'
print minmonth
print 'Total amounts to pay'
print months
Answer: Indentation is important in Python. You would want to bring the `else` in line
with the `if` statement in your code.
|
how does a python program tell if it's being run within emacs?
Question: I usually write python in emacs.
I'll often want to re-evaluate my file, which I can do with Ctrl-C Ctrl-C,
which causes the interpreter to reload the entire file and then I can carry on
playing.
so if I'm writing a program that takes input, I'll usually find myself with
two lines:
lines = open("/home/jla/inputfile").readlines()
#lines = fileinput.input()
The first line is 'what to do while developing' (read from a known example
input file); the second is 'what to do when run from the command line' (read
from stdin, or a provided file name).
Obviously this is bad, so I am thinking:
if in_emacs():
lines = open("/home/jla/inputfile").readlines()
if run_from_shell():
lines = fileinput.input()
else:
oops()
And I know how to write `oops()`, but I am a bit stuck with `in_emacs()` and
`run_from_shell()`, and I wonder if you wise ones can help.
Answer: Shells opened by emacs should have the environment variable `EMACS=t`. At
least this works on my emacs, YMMV.
If that doesn't fly for you, here's how to find out what emacs-dependent
environment variables python can see. Run from the shell and under emacs, and
compare the outputs.
import os
for e in os.environ:
if 'EMACS' in e:
print e, os.environ[e]
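With that, the two helper functions from the question could be as simple as the following sketch (it assumes only the `EMACS` environment variable convention mentioned above):

    import os

    def in_emacs():
        # Emacs exports EMACS (e.g. EMACS=t) in shells and processes it spawns
        return bool(os.environ.get('EMACS'))

    def run_from_shell():
        return not in_emacs()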
|
How Can a LAMP Guy Easily Implement WebSockets?
Question: I've always worked with Apache, MySQL, and PHP. I'd like to eventually branch
out to Python/Django or Ruby/Ruby on Rails, but that's another discussion. Two
great things about Apache, MySQL, and PHP are all three are ubiquitous and
it's very easy to launch a website. Just set up an Apache virtual host, import
the database into MySQL, and copy the PHP files onto the server. That's it.
This is all I've ever done and all I've ever known. Please keep this in mind.
These days, it's becoming increasingly important for websites to be able to
deliver data in real-time to the users. Users expect this too due to the live
nature of Facebook and Gmail. This effect can be faked with Ajax polling, but
that has a lot of overhead, as explained
[here](https://www.youtube.com/watch?v=cdDug2U29wk#t=6m). I'd like to use
WebSockets. Now remember that I've always been a LAMP guy. I've only ever
launched websites using the method I described earlier. So if I have, say, a
CakePHP site, how can I "add on" the feature of WebSockets? Do I need to
install some other server or something or can I get it to work smoothly with
Apache? Will it require Apache 2.4? Please explain the process to me keeping
in mind that I only know about LAMP. Thanks!
Answer: One key thing to keep in mind, is that a realtime websockets server needs to
be "long running", so that it can push stuff to clients. In the classic LAMP
setup, Apache spawns a PHP interpreter on each request. Between requests the
PHP interpreter is not running, and the only protocol state kept between
requests is sessions.
One nice property of the LAMP way, is that memory management is easy. You just
implicitly allocate whatever memory you need, and it is automatically
reclaimed when the request is done, and the PHP process exits. As soon as you
want the server to keep running, you need to consider memory management. In
some languages, like C++, you manage allocation and deallocation explicitly. In
other languages, like Java or Javascript, you have garbage collection. In PHP
you throw everything away, and start with a fresh slate on each request.
I think you will have a hard time making long running servers with something
like Cake or any other classic PHP framework. Those frameworks work by
basically taking an HTTP request and turning it into an HTTP response.
My advice is that you should look into something like Node.JS and SocketIO. If
you know Javascript, or don't mind learning, these technologies allow you to
easily implement real-time servers and clients. If necessary you could run a
reverse proxy like nginx, so that your existing LAMP stack would get some
requests, and one or more NodeJS servers would get some.
This answer came out a bit fluffy, but I hope that it helps a little.. :-)
|
Encode Decode of strings python
Question: I have a list of html pages which may contain certain encoded characters. Some
examples are as below -
<a href="mailto:lad%20at%20maestro%20dot%20com">
<em>ada@graphics.maestro.com</em>
<em>mel@graphics.maestro.com</em>
I would like to decode (escape, I'm unsure of the current terminology) these
strings to -
<a href="mailto:lad at maestro dot com">
<em>[email protected]</em>
<em>[email protected]</em>
Note, the HTML pages are in a string format. Also, I DO NOT want to use any
external library like BeautifulSoup or lxml; only native python libraries
are ok.
**Edit** -
The below solution isn't perfect. HTML Parser unescaping with urllib2 throws a
UnicodeDecodeError: 'ascii' codec can't decode byte 0x94 in position 31: ordinal not in range(128)
error in some cases.
Answer: You need to unescape HTML entities, and URL-unquote.
The standard library has
[`HTMLParser`](http://docs.python.org/library/htmlparser.html?highlight=htmlparser#HTMLParser)
and
[`urllib2`](http://docs.python.org/library/urllib2.html?highlight=urllib2#urllib2)
to help with those tasks.
import HTMLParser, urllib2
markup = '''<a href="mailto:lad%20at%20maestro%20dot%20com">
<em>ada@graphics.maestro.com</em>
<em>mel@graphics.maestro.com</em>'''
result = HTMLParser.HTMLParser().unescape(urllib2.unquote(markup))
for line in result.split("\n"):
print(line)
Result:
<a href="mailto:lad at maestro dot com">
<em>[email protected]</em>
<em>[email protected]</em>
* * *
Edit:
If your pages can contain non-ASCII characters, you'll need to take care to
decode on input and encode on output.
The sample file you uploaded has charset set to `cp-1252`, so let's try
decoding from that to Unicode:
import codecs
with codecs.open(filename, encoding="cp1252") as fin:
decoded = fin.read()
result = HTMLParser.HTMLParser().unescape(urllib2.unquote(decoded))
with codecs.open('/output/file.html', 'w', encoding='cp1252') as fou:
fou.write(result)
* * *
Edit2:
If you don't care about the non-ASCII characters you can simplify a bit:
with open(filename) as fin:
decoded = fin.read().decode('ascii','ignore')
...
|
python overwrite previous line
Question: how do you overwrite the previous print in python 2.7? I am making a simple
program to calculate pi. here is the code:
o = 0
hpi = 1.0
i = 1
print "pi calculator"
acc= int(raw_input("enter accuracy:"))
if(acc>999999):
print "WARNING: this might take a VERY long time. to terminate, press CTRL+Z"
print "precision: " + str(acc)
while i < acc:
if(o==0):
hpi *= (1.0+i)/i
o = 1
elif(o==1):
hpi *= i/(1.0+i)
o = 0
else:
print "loop error."
i += 1
if i % 100000 == 0:
            print str(hpi*2)
    print str(hpi*2)
It basically outputs the current pi after every 100000 calculations. How can I make it
overwrite the previous calculation?
Answer: Prefix your output with carriage return symbol `'\r'` and do not end it with
line feed symbol `'\n'`. This will place cursor at the beginning of the
current line, so output will overwrite previous its content. Pad it with some
trailing blank space to guarantee overwrite. E.g.
sys.stdout.write('\r' + str(hpi) + ' ' * 20)
sys.stdout.flush() # important
Output the final value as usual with `print`.
I believe this should work both in most *nix terminal emulators and Windows
console. YMMV, but this is the simplest way.
|
ChromeDriver under FreeBSD
Question: With FreeBSD 7.3 amd64 and Python 2.6 I cannot get ChromeDriver to run.
The ChromeDriver binary is added to the path and Chrome works fine in the
appropriate display, but I keep getting errors related to binary
incompatibility or something similar (_ELF binary type "0" not known._):
>>> from selenium import webdriver
>>> d = webdriver.Chrome()
ELF binary type "0" not known.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 60, in __init__
self.service.start()
File "/usr/local/lib/python2.6/site-packages/selenium/webdriver/chrome/service.py", line 58, in start
and read up at http://code.google.com/p/selenium/wiki/ChromeDriver")
selenium.common.exceptions.WebDriverException: Message: 'ChromeDriver executable needs to be available in the path. Please download from http://code.google.com/p/selenium/downloads/list and read up at http://code.google.com/p/selenium/wiki/ChromeDriver'
>>>
The same happens when I try to execute the binary directly: _ELF binary type "0" not known._
I've tried both chromedriver_linux64_19.0.1068.0.zip and the version 18 build, with
no luck. Any advice?
Answer: This is a Linux binary. To run that under FreeBSD you need to install at least
the Linux emulator base port, `/usr/ports/emulators/linux_base-f10`. And
probably the Linux versions of a host of other libraries.
The Chromium browser is available as a native FreeBSD binary with the port
`/usr/ports/www/chromium`. But this doesn't build the chromedriver by default.
You could ask the port maintainer to add it? Or build it, go into the work
directory and use `gmake chromedriver`. If that works, put the binary
somewhere in your path.
|
Python simple linear plotting
Question: So I'm trying to plot 2 different arrays of the same dimensions using python's
matplotlib. This is the code I currently have:
from numpy import *
from pylab import *
import matplotlib.pyplot as plt
p, pdot, s400, dist=loadtxt("cc45list.txt", usecols=(1,2,3,4), unpack=True)
for i in arange(0,45,1):
k = (s400*(dist**2))/((p**1)*(pdot**0.5))
kbar=sum(k)
var=abs(k-kbar)
x=((p**1)*(pdot**0.5))
y=s400*(dist**2)
kbararray=ones((1,45))*kbar
I'm trying to plot a simple line with the xaxis being `x` and the yaxis being
`kbararray` (which is just an array of the same value calculated in the loop).
I tried using this bit of matplotlib which has worked in the past for my other
plots:
plot(x,kbararray)
But I keep receiving this error message:
Traceback (most recent call last):
File "C:\PortablePython_1.1_py2.6.1\App\April_2010\graphing lines.py", line 3, in <module>
plot ( range(0,11),[9,4,5,2,3,5,7,12,2,3],'.-',label='sample1' )
File "C:\PortablePython_1.1_py2.6.1\App\Lib\site-packages\matplotlib\pyplot.py", line 2141, in plot
ret = ax.plot(*args, **kwargs)
File "C:\PortablePython_1.1_py2.6.1\App\Lib\site-packages\matplotlib\axes.py", line 3432, in plot
for line in self._get_lines(*args, **kwargs):
File "C:\PortablePython_1.1_py2.6.1\App\Lib\site-packages\matplotlib\axes.py", line 311, in _grab_next_args
for seg in self._plot_args(remaining, kwargs):
File "C:\PortablePython_1.1_py2.6.1\App\Lib\site-packages\matplotlib\axes.py", line 288, in _plot_args
x, y = self._xy_from_xy(x, y)
File "C:\PortablePython_1.1_py2.6.1\App\Lib\site-packages\matplotlib\axes.py", line 228, in _xy_from_xy
raise ValueError("x and y must have same first dimension")
ValueError: x and y must have same first dimension
From what I've researched this error usually comes up if you try to plot 2
arrays of different dimensions but I'm sure mine are of the same dimension,
right? So why would I be getting this error? Sorry if this is a basic
question/answered elsewhere but I couldn't find anything. Thanks.
Answer: You can verify that they are the same shape by printing the shape of `x` and
`kbararray` at runtime. Right before you call `plot`, add these lines:
print 'Shape of x:', x.shape
print 'Shape of kbararray:', kbararray.shape
If the shapes are different, you have a problem and should check that you are
actually plotting what you think you're plotting.
|
Perl - regex How to write this in python?
Question: I wrote the following `regex` in `perl` and now i want to write it in
`python`.
if ($textline =~ /^([A-Z0-9]+:)(.*)/)
Answer:
>>> import re
>>> textline = "AZERTY:qwerty"
>>> if re.search('^([A-Z0-9]+:)(.*)', textline):
... print("match !")
<http://docs.python.org/library/re.html>
If you have problems using the interactive python interpreter, copy/paste this
in your terminal :
cd /tmp
wget http://sputnick-area.net/ftp/downloads/re.py
chmod +x re.py
./re.py
**Edit** : this is an example that should fit your needs :
#!/usr/bin/python2
# -*- coding: utf8 -*-
# $ cat /tmp/dmesg
# AZERTY:qwerty01
# AZERTY:qwerty02
# zzzzzzzzzzzzzzz
# AZERTY:qwerty03
import re
f = open("/tmp/dmesg", "r")
for textline in f.readlines():
if re.search('^([A-Z0-9]+:)(.*)', textline):
print "match "+textline.rstrip('\n')
f.close()
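If you also need the two captured groups (Perl's `$1` and `$2`), keep the match object that `re.search` returns:

    m = re.search('^([A-Z0-9]+:)(.*)', textline)
    if m:
        print m.group(1)  # like Perl's $1
        print m.group(2)  # like Perl's $2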
|
Django Tweepy File
Question: I’m confused about this; instead of writing tweepy code like this:
auth=tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api=tweepy.API(auth)
in a new python file (IDLE), can’t we write it in our views.py in django? Or
should we create a new file in the django app for this code?
Answer: I added a twitter_status.py file to my project and from views.py I call the
update_twitter_status() method
twitter_status.py:
"""
Adds a tweet to the twitter account in settings.
Login to dev.twitter.com and add a desktop application
Add the keys and secrets for the added application to the settings file
Requires tweepy to be installed
https://github.com/joshthecoder/tweepy
"""
from django.conf import settings
from tweepy import *
class TwitterManager:
def __get_api_handle(self):
#Create OAuth object
auth = OAuthHandler(settings.TWITTER_CONSUMER_KEY, settings.TWITTER_CONSUMER_SECRET)
#Set access tokens
auth.set_access_token(settings.TWITTER_ACCESS_TOKEN, settings.TWITTER_ACCESS_TOKEN_SECRET)
#Create API handle
api = API(auth)
return api
def update_twitter_status(self, message):
api = self.__get_api_handle()
#Send update
api.update_status(message)
Then in my views.py I just call the update_twitter_status(message) method
views.py:
from myproject.twitter_status import TwitterManager
def __update_twitter(message):
twit_mgr = TwitterManager()
twit_mgr.update_twitter_status(message)
Then whenever I want to tweet from my views.py I add this line
__update_twitter('I am tweeting')
If someone disagrees with how I have implemented the class or methods, I
would be glad to get your feedback.
|
Python hash function as Twisted xmlrpc class issueing same has for every file?
Question: I'm new to most of this, so forgive me if I'm doing something really dumb. The
following is a simple Twisted xmlrpc server which is supposed to return file
info. It works fine except that the `xmlrpc_hash` function gives the same
result for every file. Example output is below the code. Any help would be great!
from twisted.web import xmlrpc, server
import os
class rfi(xmlrpc.XMLRPC):
"""
rfi - Remote File Info server
"""
def xmlrpc_echo(self, x):
"""
Return all passed args as a test
"""
return x
def xmlrpc_location(self):
"""
Return current directory name
"""
return os.getcwd()
def xmlrpc_ls(self, path):
"""
Run ls on the path
"""
result = []
listing = os.listdir(path)
for l in listing:
result.append(l)
return result
def xmlrpc_stat(self, path):
"""
Stat the path
"""
result = str(os.stat(path))
return result
def xmlrpc_hash(self, path):
"""
Hash the path
"""
from hashlib import sha1
if os.path.isfile(path):
f = open(path,'rb')
h = sha1()
block_size = 2**20
f.close()
return h.hexdigest()
else:
return 'Not a file'
if __name__ == '__main__':
from twisted.internet import reactor
r = rfi()
reactor.listenTCP(7081, server.Site(r))
reactor.run()
Example output:
import xmlrpclib
s = xmlrpclib.Server('http://localhost:7081/')
s.hash('file_1.txt')
'da39a3ee5e6b4b0d3255bfef95601890afd80709'
s.hash('file_2.txt')
'da39a3ee5e6b4b0d3255bfef95601890afd80709'
Answer: This is because you're never actually updating the hash object:
from hashlib import sha1
if os.path.isfile(path):
f = open(path,'rb')
h = sha1()
h.update(f.read()) # You're missing this line
f.close()
return h.hexdigest()
else:
return 'Not a file'
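If the files can be large, you may prefer to feed the hash in chunks, using the `block_size` the question already defines; a sketch of the same method body:

    from hashlib import sha1
    if os.path.isfile(path):
        h = sha1()
        block_size = 2**20
        f = open(path, 'rb')
        while True:
            data = f.read(block_size)
            if not data:
                break
            h.update(data)
        f.close()
        return h.hexdigest()
    else:
        return 'Not a file'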
|
How to tell pythons numpy where to find liblapack.so.3?
Question: So I have to run some python scripts on a cluster with machines for which I
have no admin rights. Since numpy was missing on some of the machines, I
created a virtual environment and installed numpy there. I connected to a
machine which I knew had no python and started the virtualenv python
interpreter with `~my_env/bin/python` to check `import numpy`.
I got this error:
# Some trace...
ImportError: liblapack.so.3: File was not found # or something similar.
So I did some research on the internet and somebody came up with
`LD_LIBRARY_PATH`. So I added
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/me/lib; export LD_LIBRARY_PATH
to my `.bashrc` and sourced it. And I put `liblapack.so.3` into
`/home/me/lib`. Now when I try to import numpy I get the error message:
ImportError: liblapack.so.3: wrong ELF class: ELFCLASS32
Now I am stuck. Any ideas what could have gone wrong?
The cluster machines have Suse installed on them.
Answer: The error means that the format of the `liblapack.so.3` C library differs from
the binary format of Python itself. The usual cause for this particular error
is having a 64-bit Python but a C library that is 32-bit. You need to have
Python and the C library be compiled in the same way.
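To see which one your Python is, you can ask the interpreter itself:

    import platform, struct
    print platform.architecture()[0]  # e.g. '64bit'
    print struct.calcsize("P") * 8    # pointer size in bits: 32 or 64

A 64-bit Python needs a 64-bit `liblapack.so.3` on the `LD_LIBRARY_PATH`, and vice versa.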
|
Plot a sequence of images with matplotlib Python
Question: I'm implementing the kmeans clustering algorithm in Python. I would like to
plot at each iteration the status (an image) of the clusters' quality. So,
basically I have a loop which plots an image at each iteration, and I want to
animate this. I don't know if I made that clear. At the moment I just use the
show() command, which plots the image, but then I have to close it in order to
continue the iteration.
So, is there some way to animate the sequence of images computed at each step?
Answer: I tried the `ion()` method and it works fine for small amounts of data, but if
you have large images or images streaming in relatively quickly, this method
is horrendously slow. From what I understand, `ion()` will redraw everything
each time you make a change to your figure, including axes and labels, etc.
Which might not be what you want.
[This thread](http://matplotlib.1069221.n5.nabble.com/viewing-an-image-cube-
sequence-with-imshow-td9873.html) shows a much nicer way of doing things
Here's a simple example that I made showing how to do this:
import time
import numpy
import matplotlib.pyplot as plt
fig = plt.figure( 1 )
ax = fig.add_subplot( 111 )
ax.set_title("My Title")
im = ax.imshow( numpy.zeros( ( 256, 256, 3 ) ) ) # Blank starting image
fig.show()
im.axes.figure.canvas.draw()
tstart = time.time()
for a in xrange( 100 ):
data = numpy.random.random( ( 256, 256, 3 ) ) # Random image to display
ax.set_title( str( a ) )
im.set_data( data )
im.axes.figure.canvas.draw()
print ( 'FPS:', 100 / ( time.time() - tstart ) )
I get about 30 FPS on my machine with the above code. When I run the same
thing with `plt.ion()` and `ax.imshow( data )` instead of
`im.axes.figure.canvas.draw()` and `im.set_data( data )`, I get around 1 FPS
|
double click a file to run python script. how to get that file as an input?
Question: I want to be able to create a program that can save text files with my own
extension at the end. Later, the user should be able to double-click on that
file to run the program and open that file.
I need to know how to make the python program the default program a file opens with
whenever the user double-clicks on it, and also how to get that file when the
program starts running.
Python 2.7, Mac OS X 10.6 and Windows 7.
Edit: say, as an example, I was making a paint program. The user wants to save
the file he was working on. My program will save it as untitled.paint; later
the user double-clicks on untitled.paint and expects my program to open up that
file.
Is there a way for me to tell the operating system to open all files ending
with .paint with my paint program?
I can't save it as a .jpg because that won't save the layers or anything else.
Answer: Don't know about OSX but in Windows you can do it as follows:
1. Create a batch file
2. Select a file of the type you want to open "automatically", and use "Open with..." in the context menu to select the batch file as the default program to use.
3. The batch file will get the "clicked" file passed argument 1, which you can then pass to your Python script as an argument - it then receives it as sys.argv[1].
An example using an extension "`.paint`" which is opened using a hypothetical
python script `pypaint.py` may clarify things:
run_paint.bat:
@echo off
echo 'clicked file is' %1
python path_to_pypaint.py %1
pypaint.py:
import sys
print 'opening', sys.argv[1]
|
Python Command Line Arguments (Windows)
Question: I am running 32-bit Windows 7 and Python 2.7.
I am trying to write a command line Python script that can run from CMD. I am
trying to assign a value to sys.argv[1]. The aim of my script is to calculate
the MD5 hash value of a file. This file will be inputted when the script is
invoked in the command line and so, sys.argv[1] should represent the file to
be hashed.
Here's my code below:
import sys
import hashlib
filename = sys.argv[1]
def md5Checksum(filePath):
fh = open(filePath, 'rb')
m = hashlib.md5()
while True:
data = fh.read(8192)
if not data:
break
m.update(data)
return m.hexdigest()
# print len(sys.argv)
print 'The MD5 checksum of text.txt is', md5Checksum(filename)
Whenver I run this script, I receive an error:
filename = sys.argv[1]
IndexError: list index out of range
To call my script, I have been writing "script.py test.txt" for example. Both
the script and the source file are in the same directory. I have tested
len(sys.argv) and it only comes back as containing one value, that being the
python script name.
Any suggestions? I can only assume it is how I am invoking the code through
CMD
Answer: You should check that in your registry the way you have associated the files
is correct, for example:
[HKEY_CLASSES_ROOT\Applications\python.exe\shell\open\command]
@="\"C:\\Python27\\python.exe\" \"%1\" %*"
|
Find Out If a Function has been Called
Question: I am programming in Python, and I am wondering if I can test if a function has
been called in my code
def example():
pass
example()
#Pseudocode:
if example.has_been_called:
print("foo bar")
How would I do this?
Answer: If it's OK for the function to know its own name, you can use a function
attribute:
def example():
example.has_been_called = True
pass
example.has_been_called = False
example()
#Actual Code!:
if example.has_been_called:
print("foo bar")
You could also use a decorator to set the attribute:
import functools
def trackcalls(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
wrapper.has_been_called = True
return func(*args, **kwargs)
wrapper.has_been_called = False
return wrapper
@trackcalls
def example():
pass
example()
#Actual Code!:
if example.has_been_called:
print("foo bar")
|
i want python to open a wesite and then refresh it or reopen it a sertain amount of times
Question: here is what i have so far:
loop = 1
print ("Copy your Youtube link")
Website = input (">>>")
print ("How manmy views would you like to get?")
Views = input (">>>")
print (Website)
print (Views)
import time
import webbrowser
Websites = webbrowser.open(Website)  # THIS IS WHERE I WANT THE WEBSITE TO REFRESH OR OPEN MORE, THE (Views) AMOUNT
print ("There you go")
Answer: [Use a loop](http://wiki.python.org/moin/ForLoop) to perform actions
repeatedly. The loop can either repeat _n_ times (`for` loop with `xrange()`),
step over elements in a list (`for item in list:`) or repeat until some
condition is satisfied (`while condition_is_true:`).
Be aware that calling `webbrowser.open()` repeatedly may not be a good idea.
(Hint: What happens when you open a thousand browser windows all at the same
time?)
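A minimal sketch of the loop described above (assuming `Views` holds a small number the user typed in); keep the warning about opening many browser windows in mind:
    import time
    import webbrowser

    for i in range(int(Views)):
        webbrowser.open(Website)
        time.sleep(5)   # give the browser a moment between openings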
* * *
If the _intent_ of this script is to inflate the view count of a YouTube video
(therefore propelling you to stardom), be aware that this will not work.
YouTube almost certainly deduplicates views from the same computer, so that
loading a video 1000 times on the same computer will not increase the view
count by 1000. Instead, the view count will increase by 1.
Additionally, repeatedly hitting their site with an automated bot for the
purpose of inflating page views is both impolite and almost certainly against
their terms of service.
Also, I'm being nitpicky, but please make some effort to spellcheck your
English. 'i' should be 'I', and 'sertain' should be 'certain'.
|
Why Twisted resource.Resource execute render() twice?
Question: I'm new to Twisted. Why is it printing "render()" twice? I know that if I return
server.NOT_DONE_YET, it will only print once, but I wish to return a string/JSON
instead. Any help?
Code:
from twisted.web import resource, server
from twisted.internet import reactor
import simplejson
class WResource(resource.Resource):
isLeaf=True
def __init__(self):
print "resource started"
def render(self, request):
print "render()"
request.setHeader('Content-Type', 'application/json')
return simplejson.dumps(dict(through_port=8080, subdomain='hello'))
reactor.listenTCP(9000, server.Site(WResource()))
reactor.run()
Output:
> python server.py
resource started
render()
render()
Answer: Because your web browser is requesting `favicon.ico`. If you print
`request.postpath` in your `render` method, you'll see that only one of the
requests is hitting the page that you expect.
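For example, a sketch of the same resource that skips the favicon request (the JSON body is unchanged; `request.path` is the standard attribute on a Twisted request):
    def render(self, request):
        if request.path == '/favicon.ico':
            # the browser's automatic favicon request; don't treat it as a real hit
            request.setResponseCode(404)
            return ''
        print "render() for", request.path
        request.setHeader('Content-Type', 'application/json')
        return simplejson.dumps(dict(through_port=8080, subdomain='hello'))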
|
Setting output of a program to pipe in Python 2.6.6
Question: I was wondering how I would redirect the output of a program I run with
subprocess.Popen back to the program I am using. For example, if I'm executing a
script from Python as `ImageProcessing.py in.jpeg out.jpeg`, I would like to
instead do some extra processing in the higher-level script; meaning that I
would like the Python code to replace out.jpeg with a pipe or some other sort
of communication. (Also, yes, I know it would be better to import in most
cases; in this case I'd rather not.)
Thank you in advance!
Answer: You can use the `stdout=subprocess.PIPE` argument to `Popen`. The stdout of
the subprocess is then available as a file-like object on the Popen instance. Example:
import subprocess
p = subprocess.Popen(["/bin/cat", "hello.txt"], stdout=subprocess.PIPE)
for line in p.stdout:
print("PIPE OUT [%s]" % line)
|
fastest packing of data in Python (and Java)
Question: ([Sometimes](http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-
micro-optimization-theater.html) our host is wrong; nanoseconds matter ;)
I have a Python Twisted server that talks to some Java servers and profiling
shows spending ~30% of its runtime in the JSON encoder/decoder; its job is
handling thousands of messages per second.
[This talk](http://highscalability.com/blog/2012/3/26/7-years-of-youtube-
scalability-lessons-in-30-minutes.html) by youtube raises interesting
applicable points:
* Serialization formats - no matter which one you use, they are all expensive. Measure. Don’t use pickle. Not a good choice. Found protocol buffers slow. They wrote their own BSON implementation which is 10-15 times faster than the one you can download.
* You have to measure. Vitess swapped out one of its protocols for an HTTP implementation. Even though it was in C it was slow. So they ripped out HTTP and did a direct socket call using Python and that was 8% cheaper on global CPU. The enveloping for HTTP is really expensive.
* Measurement. In Python measurement is like reading tea leaves. There are a lot of things in Python that are counter-intuitive, like the cost of garbage collection. Most chunks of their apps spend their time serializing. Profiling serialization depends heavily on what you are putting in. Serializing ints is very different from serializing big blobs.
Anyway, I control both the Python and Java ends of my message-passing API and
can pick a different serialisation than JSON.
My messages look like:
* a variable number of longs; anywhere between 1 and 10K of them
* and two already-UTF8 text strings; both between 1 and 3KB
Because I am reading them from a socket, I want libraries that can cope
gracefully with streams - it's irritating if a library doesn't tell me how much of a
buffer it consumed, for example.
The other end of this stream is a Java server, of course; I don't want to pick
something that is great for the Python end but moves problems to the Java end,
e.g. poor performance or a torturous or flaky API.
I will obviously be doing my own profiling. I ask here in the hope you
describe approaches I wouldn't think of e.g. using
[`struct`](http://docs.python.org/library/struct.html) and what the fastest
kind of strings/buffers are.
Some simple test code gives surprising results:
import time, random, struct, json, sys, pickle, cPickle, marshal, array
def encode_json_1(*args):
return json.dumps(args)
def encode_json_2(longs,str1,str2):
return json.dumps({"longs":longs,"str1":str1,"str2":str2})
def encode_pickle(*args):
return pickle.dumps(args)
def encode_cPickle(*args):
return cPickle.dumps(args)
def encode_marshal(*args):
return marshal.dumps(args)
def encode_struct_1(longs,str1,str2):
return struct.pack(">iii%dq"%len(longs),len(longs),len(str1),len(str2),*longs)+str1+str2
def decode_struct_1(s):
i, j, k = struct.unpack(">iii",s[:12])
assert len(s) == 3*4 + 8*i + j + k, (len(s),3*4 + 8*i + j + k)
longs = struct.unpack(">%dq"%i,s[12:12+i*8])
str1 = s[12+i*8:12+i*8+j]
str2 = s[12+i*8+j:]
return (longs,str1,str2)
struct_header_2 = struct.Struct(">iii")
def encode_struct_2(longs,str1,str2):
return "".join((
struct_header_2.pack(len(longs),len(str1),len(str2)),
array.array("L",longs).tostring(),
str1,
str2))
def decode_struct_2(s):
i, j, k = struct_header_2.unpack(s[:12])
assert len(s) == 3*4 + 8*i + j + k, (len(s),3*4 + 8*i + j + k)
longs = array.array("L")
longs.fromstring(s[12:12+i*8])
str1 = s[12+i*8:12+i*8+j]
str2 = s[12+i*8+j:]
return (longs,str1,str2)
def encode_ujson(*args):
return ujson.dumps(args)
def encode_msgpack(*args):
return msgpacker.pack(args)
def decode_msgpack(s):
msgunpacker.feed(s)
return msgunpacker.unpack()
def encode_bson(longs,str1,str2):
return bson.dumps({"longs":longs,"str1":str1,"str2":str2})
def from_dict(d):
return [d["longs"],d["str1"],d["str2"]]
tests = [ #(encode,decode,massage_for_check)
(encode_struct_1,decode_struct_1,None),
(encode_struct_2,decode_struct_2,None),
(encode_json_1,json.loads,None),
(encode_json_2,json.loads,from_dict),
(encode_pickle,pickle.loads,None),
(encode_cPickle,cPickle.loads,None),
(encode_marshal,marshal.loads,None)]
try:
import ujson
tests.append((encode_ujson,ujson.loads,None))
except ImportError:
print "no ujson support installed"
try:
import msgpack
msgpacker = msgpack.Packer()
msgunpacker = msgpack.Unpacker()
tests.append((encode_msgpack,decode_msgpack,None))
except ImportError:
print "no msgpack support installed"
try:
import bson
tests.append((encode_bson,bson.loads,from_dict))
except ImportError:
print "no BSON support installed"
longs = [i for i in xrange(10000)]
str1 = "1"*5000
str2 = "2"*5000
random.seed(1)
encode_data = [[
longs[:random.randint(2,len(longs))],
str1[:random.randint(2,len(str1))],
str2[:random.randint(2,len(str2))]] for i in xrange(1000)]
for encoder,decoder,massage_before_check in tests:
# do the encoding
start = time.time()
encoded = [encoder(i,j,k) for i,j,k in encode_data]
encoding = time.time()
print encoder.__name__, "encoding took %0.4f,"%(encoding-start),
sys.stdout.flush()
# do the decoding
decoded = [decoder(e) for e in encoded]
decoding = time.time()
print "decoding %0.4f"%(decoding-encoding)
sys.stdout.flush()
# check it
if massage_before_check:
decoded = [massage_before_check(d) for d in decoded]
for i,((longs_a,str1_a,str2_a),(longs_b,str1_b,str2_b)) in enumerate(zip(encode_data,decoded)):
assert longs_a == list(longs_b), (i,longs_a,longs_b)
assert str1_a == str1_b, (i,str1_a,str1_b)
assert str2_a == str2_b, (i,str2_a,str2_b)
gives:
encode_struct_1 encoding took 0.4486, decoding 0.3313
encode_struct_2 encoding took 0.3202, decoding 0.1082
encode_json_1 encoding took 0.6333, decoding 0.6718
encode_json_2 encoding took 0.5740, decoding 0.8362
encode_pickle encoding took 8.1587, decoding 9.5980
encode_cPickle encoding took 1.1246, decoding 1.4436
encode_marshal encoding took 0.1144, decoding 0.3541
encode_ujson encoding took 0.2768, decoding 0.4773
encode_msgpack encoding took 0.1386, decoding 0.2374
encode_bson encoding took 55.5861, decoding 29.3953
[bson](http://pypi.python.org/pypi/bson/0.3.3), [msgpack](http://msgpack.org/)
and [ujson](http://pypi.python.org/pypi/ujson/) all installed via easy_install
I would _love_ to be shown I'm doing it wrong; that I should be using
cStringIO interfaces or however else you speed it all up!
There must be a way to serialise this data that is an order of magnitude
faster surely?
Answer: While JSON is flexible, it is one of the slowest serialization formats in Java
(and possibly Python as well). When nanoseconds matter I would use a binary format
in native byte order (likely to be little endian).
Here is a library where I do exactly that:
[AbstractExcerpt](https://github.com/peter-lawrey/Java-
Chronicle/blob/master/src/main/java/vanilla/java/chronicle/impl/AbstractExcerpt.java)
and [UnsafeExcerpt](https://github.com/peter-lawrey/Java-
Chronicle/blob/master/src/main/java/vanilla/java/chronicle/impl/UnsafeExcerpt.java)
A typical message takes 50 to 200 ns to serialize and send or read and
deserialize.
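On the Python side, a sketch (not from the original answer's Java code) of the same length-prefixed layout as `encode_struct_2`, but in native byte order with standard sizes (`=` format); the Java end could then read the same layout with a ByteBuffer set to little-endian order on typical x86 hardware:
    import struct

    def encode_struct_native(longs, str1, str2):
        # 12-byte header: counts of longs and the two string lengths
        header = struct.pack("=iii", len(longs), len(str1), len(str2))
        body = struct.pack("=%dq" % len(longs), *longs)
        return header + body + str1 + str2

    def decode_struct_native(s):
        i, j, k = struct.unpack("=iii", s[:12])
        longs = struct.unpack("=%dq" % i, s[12:12 + 8 * i])
        str1 = s[12 + 8 * i:12 + 8 * i + j]
        str2 = s[12 + 8 * i + j:]
        return longs, str1, str2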
|
Fetching href of a link
Question: I am using lxml and python. I want to fetch the **href** for the link that
reads **More reviews (40)** on this
[page](http://maps.google.com/maps/place?cid=2860002122405830765). I am
basically scraping this site and want to get the reviews.
Would appreciate the help. Thanks
Answer: The link is added using client-side javascript. So you cannot get the href
using normal HTML parsing. You can however look at the javascript code and get
the link from there:
>>> import re
>>> import urllib2
>>> import lxml.html
>>> page = urllib2.urlopen("http://maps.google.com/maps/place?cid=2860002122405830765").read()
# have to search the page source since the link is added in javascript
>>> mo = re.search(r'<div class="pp-more-reviews">.*?</div>', page)
>>> div = lxml.html.fromstring(mo.group(0))
>>> href = div.find("a").attrib["href"]
Other options are:
1. Use [selenium](http://seleniumhq.org/) to control a real browser.
2. Use the [phantomJS](http://www.phantomjs.org/) headless browser
|
Broadcast error when adding ndarray and sparse matrix converted to dense in python
Question: I am converting from a scipy sparse matrix to a dense matrix and adding that
to an ndarray using a += operator and I am getting a broadcast error. The
ndarray has a shape (M,) while the dense matrix has a shape (M,1). When I try
and convert the numpy.matrix to an ndarray the shape becomes (1,M). I also
tried using reshape(M,) after the conversion and the shape becomes (M,M).
Could someone please explain where I am messing up.
M = 100
N = 1000
K = 4
a= np.zeros((M,K))
b = sp.csr_matrix(np.ones((N,1)))
d = sp.csr_matrix(np.ones((M,N)))
c = np.ones(())
for k in range(K):
a[:,k] += d.dot(b).todense()
P.S: I am a newbie to both python and stackoverflow. Apologies if this is a
wrongly framed question or if this question was already asked earlier (I
couldn't find it)
Answer: Side note to future googlers, OP's code has the following packages imported:
> import numpy as np
> import scipy.sparse as sp
Anyway, let's begin with our explanation.
* * *
You can check your matrix size using:
> np.shape(your_matrix_here) #equals to MATLAB: > size(your_matrix_here)
As you call todense(), you got:
matrix([[ 1000.], #shape: (100, 1)
[ 1000.],
[ 1000.],
...
while a[:,1] looks like this:
array([ 0., 0., ... , 0., 0.]) #shape: (100,)
So you may want to transpose your matrix using transpose().
* * *
But transposing the matrix still returns a matrix nested in an outer pair of brackets:
matrix([[ 1000., 1000., ... , 1000., 1000.]]) #shape: (1, 100)
and `d.dot(b).todense().transpose()[0]` doesn't return the
first element of the matrix - indexing a numpy matrix with [0] returns the first row, which is still a (1, 100) matrix:
matrix([[ 1000., 1000., ... , 1000., 1000.]]) #still the same!
* * *
This can be fixed using:
> np.array(d.dot(b).todense().transpose())[0]
thus returning:
array([ 1000., 1000., ... 1000., 1000.])
* * *
Now two of them got the same shape, allowing them to perform matrix operation:
> np.shape(np.array(d.dot(b).todense().transpose())[0]) #(100,)
> np.shape(a[:,1]) #(100,)
* * *
**In conclusion,** you want to change this line:
a[:,k] += d.dot(b).todense()
to:
a[:,k] += np.array(d.dot(b).todense().transpose())[0]
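For completeness, a slightly shorter variant (a sketch, not part of the original answer) that flattens the (M, 1) dense result directly:
    # flatten the (M, 1) dense result to shape (M,) before adding;
    # d.dot(b).toarray().ravel() would also work and skips the matrix type entirely
    a[:, k] += np.asarray(d.dot(b).todense()).ravel()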
|
PyCrypto: Decrypt only with public key in file (no private+public key)
Question: Hello everyone.
I am trying to play a bit with RSA public and private keys and
encryption/decryption with _PyCrypto_ and I have encountered an issue that
seems kind of strange to me (it probably makes a lot of sense the way it's
working now, but I don't know much about RSA asymmetric encryption and that's
why it's puzzling me). It is the inability I have encountered to decrypt
something having only the public key.
Here's the thing: I have a server and a client. I want the server to
"recognize" and register the client and show it in a list of "known devices".
The client will have the public key of the server and the server will have the
public key of the client, so when the client communicates with the server, it
will encrypt its data with its own private key and with the server's
public key. By doing this, only the proper server will be able to open the
data (with its private key) and will be able to verify that the sender is
actually the client that claims to be... well... or at least, that's what I
think, because I'm pretty newbie in this asymmetric encryption. The idea is
that when one of those clients wakes up, it will send its public key
(encrypted with the server's public key, of course, but that's probably not
relevant at this point... yet) saying "_Hey, I'm a new client and this is my
public key. Register that key with my UUID_ " and the server will obey,
associating that public key with the client's UUID and use that key to decrypt
data coming from that client. I just want to transmit the client's public key,
keeping its private key secret, secret, secret (it's private, right?)
I am doing some tests with openssl and very simple Python scripts that use
_PyCrypto_ (actually, not even in a server/client architecture or anything...
just trying to encrypt something with a private key and decrypt it with the
public key)
First of all, I have created a public/private key set with:
openssl genrsa -out ~/myTestKey.pem -passout pass:"f00bar" -des3 2048
Ok, first thing that puzzles me a bit... It generates only one file, with both
the private and the public keys... Well... O'right... whatever. I can extract
the public key with:
openssl rsa -pubout -in ~/myTestKey.pem -passin pass:"f00bar" -out ~/myTestKey.pub
So I thought I had my couple of private (_private+public, actually_) and
public keys in `~/myTestKey.pem` and `~/myTestKey.pub` respectively. Well...
apparently I'm doing something wrong, because _PyCrypto_ doesn't like this
assembly. And I don't know why.
I have two very simple test scripts, "`encryptor.py`" and "`decryptor.py`".
The "`encryptor.py`" should encrypt something with the private key, and
"`decryptor.py`", decrypt it with the public key. I know... I'm a parangon of
originality...
So, I encrypt the string "_Loren ipsum_ " with my "`encryptor.py`" (with
private key):
----------- encryptor.py ----------------
#!/usr/bin/python
from Crypto.PublicKey import RSA
def encrypt(message):
externKey="/home/borrajax/myTestKey.pem"
privatekey = open(externKey, "r")
encryptor = RSA.importKey(privatekey, passphrase="f00bar")
encriptedData=encryptor.encrypt(message, 0)
file = open("/tmp/cryptThingy.txt", "wb")
file.write(encriptedData[0])
file.close()
if __name__ == "__main__":
encryptedThingy=encrypt("Loren ipsum")
* * *
And it works fine. Well... I suppose so, because in the file
"`/tmp/cryptThingy.txt`" I get a lot of gibberish. It looks really, really
encrypted to me.
But when I try to decrypt it using just the file that contains just the public
key...
----------- decryptor.py ---------------
#!/usr/bin/python
from Crypto.PublicKey import RSA
def decrypt():
externKey="/home/borrajax/myTestKey.pub"
publickey = open(externKey, "r")
decryptor = RSA.importKey(publickey, passphrase="f00bar")
retval=None
file = open("/tmp/cryptThingy.txt", "rb")
retval = decryptor.decrypt(file.read())
file.close()
return retval
if __name__ == "__main__":
decryptedThingy=decrypt()
print "Decrypted: %s" % decryptedThingy
* * *
... _PyCrypto_ yells at me with a:
File "/usr/local/lib/python2.7/dist-packages/pycrypto-2.5-py2.7-linux-i686.egg/Crypto/PublicKey/RSA.py", line 107, in _decrypt
mp = self.key._decrypt(cp)
TypeError: Private key not available in this object
Yeah, of course it's not available! I extracted the public key! It took me 2
hours finding how to do it properly!!
What am I missing? As I said, I'm pretty newbie in this public/private
asymmetric key encryption so I might have a core "conceptual error"... Any
hint will be appreciated.
¡Thank you in advance!
Answer: You have it the wrong way round, you encrypt with the public key, and decrypt
with the private key.
> The publicly available encrypting-key is widely distributed, while the
> private decrypting-key is known only to the recipient. Messages are
> encrypted with the recipient's public key and can be decrypted only with the
> corresponding private key. [Source](https://en.wikipedia.org/wiki/Public-
> key_cryptography#How_it_works)
The idea is that you give the sending side the public key (which anyone can
have, so you can distribute it in the open) then you encrypt the data with it,
then decrypt it on your end with your private key (which only you have). This
way the data stays secure.
You can encrypt something with the private key as the private key contains the
information required to make the public key, but it would be unusual to do so,
as normally the person encrypting the data does _not_ have the private key.
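For reference, a minimal sketch of the corrected direction, reusing the key files from the question (raw RSA with no padding, exactly as in the question's own code - fine for experimenting, not something to rely on in production). If the goal is to prove who sent a message, the usual tool is a signature, made with the sender's private key and verified with the public one:
    from Crypto.PublicKey import RSA

    # encrypt with the recipient's *public* key...
    public_key = RSA.importKey(open("/home/borrajax/myTestKey.pub").read())
    ciphertext = public_key.encrypt("Loren ipsum", 0)[0]

    # ...and decrypt with the matching *private* key
    private_key = RSA.importKey(open("/home/borrajax/myTestKey.pem").read(),
                                passphrase="f00bar")
    print private_key.decrypt(ciphertext)   # -> "Loren ipsum"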
|
Python Matplotlib Colormap
Question: I use the colormap "jet" to plot my graphics. But I would like to have the
lowest values in white, while this colormap goes from blue to red. I
also don't want to use another colormap because I need this range of colors...
I tried to make my own colormap that is the same as "jet" but with a range of values in
white, but this is too difficult. Could someone help me please? Thank you
Answer: There is probably an easier solution, but the way I figured it out is by
creating your own matplotlib.colors.LinearSegmentedColormap, based on the
"jet" one.
(The lowest level of your colormap is defined in the first line of each tuple
of red, green, and blue, so that's where you start editing. I add one extra
tuple to have a clearly white spot in the lower part. ...For each color, the
first element of the tuple indicates the position in your colorbar (from 0
to 1), and the second and third give the color itself.)
from matplotlib.pyplot import *
import matplotlib
import numpy as np
cdict = {'red': ((0., 1, 1),
(0.05, 1, 1),
(0.11, 0, 0),
(0.66, 1, 1),
(0.89, 1, 1),
(1, 0.5, 0.5)),
'green': ((0., 1, 1),
(0.05, 1, 1),
(0.11, 0, 0),
(0.375, 1, 1),
(0.64, 1, 1),
(0.91, 0, 0),
(1, 0, 0)),
'blue': ((0., 1, 1),
(0.05, 1, 1),
(0.11, 1, 1),
(0.34, 1, 1),
(0.65, 0, 0),
(1, 0, 0))}
my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap',cdict,256)
pcolor(np.random.rand(10,10),cmap=my_cmap)
colorbar()
show()
You'll get the following (resulting plot image not included):
|
Python tkinter .pack/.pack_forget memory issue
Question: I've been teaching myself Python for a few months now and have moved on to
learning some GUI techniques.
I wrote this simple script based off a pack_remove example I found within a
book. My script simply displays local and UTC time every second. Granted the
only difference is the hour, I would still like to redisplay every second.
The script works, yet its RAM usage is consistently increasing with every time
display. I start out with around 4 MB, then after 2 hours or so the script uses
25 MB. This makes some sense to me, but I was curious if there was a way to
display new times every second, but reduce the memory usage of such a simple
clock display.
Or am I using an inefficient technique to re-display data in a GUI at a high
frequency?
Here is my code:
from tkinter import *
import time
class TimeDisplay(Frame):
def __init__(self,msecs = 1000):
Frame.__init__(self)
self.msecs = msecs
self.pack()
utc_time = Label(self, text='')
utc_time.pack()
cst_time = Label(self, text='')
cst_time.pack()
self.utc_time = utc_time
self.cst_time = cst_time
self.repeater()
def repeater(self):
self.utc_time.pack_forget()
self.cst_time.pack_forget()
self.utc_time = Label(self, text= 'UTC: ' + time.strftime('%Y/%m/%d %H:%M:%S',time.gmtime()))
self.utc_time.pack()
self.utc_time.config(bg='navy',fg='white')
self.cst_time = Label(self, text= 'CST: ' + time.strftime('%Y/%m/%d %H:%M:%S',time.localtime()))
self.cst_time.pack()
self.cst_time.config(bg='navy',fg='white')
self.after(self.msecs, self.repeater)
if __name__ == '__main__': TimeDisplay(msecs=1000).mainloop()
Thanks in advance
Answer: `pack_forget` doesn't destroy anything, it just makes it non-visible. This is
a GUI version of a memory leak -- you keep creating objects without ever
destroying them.
So, the first lesson to learn is that you should destroy a widget when you are
done with it.
The more important lesson to learn is that you don't have to keep destroying
and recreating the same widget over and over. You can change the text that is
displayed with the `configure` method. For example:
self.utc_time.configure(text="...")
This will make your program not use any extra memory, and even use
(imperceptibly) less CPU.
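A sketch of `repeater` rewritten this way (the two labels created in `__init__` are kept and only their text changes):
    def repeater(self):
        # update the existing labels in place instead of recreating them;
        # the colours can be set once in __init__
        self.utc_time.configure(text='UTC: ' + time.strftime('%Y/%m/%d %H:%M:%S', time.gmtime()))
        self.cst_time.configure(text='CST: ' + time.strftime('%Y/%m/%d %H:%M:%S', time.localtime()))
        self.after(self.msecs, self.repeater)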
|