Do I need a lock when using python multi-thread?
Question: Think about this code:
#!/usr/bin/env python
from threading import Thread
count = 0
def test():
global count
for i in range(10):
count = count + 1
if __name__ == '__main__':
for i in range(1000):
Thread(target = test).start()
print count
I use multiple threads, but the result is always correct. Does that mean I
could use python threads without a lock when implementing something like a
visitor counter ?
Answer: You do need one. Although multithreading works differently in Python because of the
[Global Interpreter Lock](http://wiki.python.org/moin/GlobalInterpreterLock),
any operation that is not atomic at the bytecode level still needs locking.
In your case, you can inspect the bytecode of your function `test` with
`dis.dis(test)`:
3 0 SETUP_LOOP 30 (to 33)
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (1000)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 16 (to 32)
16 STORE_FAST 0 (i)
4 19 LOAD_GLOBAL 1 (count) # start of increment
22 LOAD_CONST 2 (1)
25 BINARY_ADD
26 STORE_GLOBAL 1 (count) # end of increment
29 JUMP_ABSOLUTE 13
>> 32 POP_BLOCK
>> 33 LOAD_CONST 0 (None)
36 RETURN_VALUE
As you can see, the increment is a load/load/add/store sequence at the bytecode level, so
this won't work reliably: the increment is actually four separate operations, which you
must protect so they cannot be interrupted by another thread.
In your example the problem remains even if you use `count += 1`, as the
bytecode shows:
4 19 LOAD_GLOBAL 1 (count)
22 LOAD_CONST 2 (1)
25 INPLACE_ADD
26 STORE_GLOBAL 1 (count)
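As an illustration, here is a minimal sketch of the locked version (same counter as in your example; joining the threads is added so the printed total is complete, and `print(count)` works in both Python 2 and 3 for a single argument):
from threading import Thread, Lock

count = 0
lock = Lock()

def test():
    global count
    for i in range(10):
        with lock:              # serializes the load/add/store sequence
            count = count + 1

if __name__ == '__main__':
    threads = [Thread(target=test) for _ in range(1000)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                # wait for all workers before reading the total
    print(count)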
|
Is there a way to view cPickle or Pickle file contents without loading Python in Windows?
Question: I use cPickle to save data sets from each run of a program. Since I sometimes
need to see the outline of the data without running the code, I would like an
easy way to quickly view the contents by just double-clicking on the file. I
am trying to avoid having to load a terminal and pointing python to a file
each time, just to run some `print` script.
I looked for Notepad++ plugins but couldn't find anything.
Is there some easy way to do this? Does anyone have any suggestions?
Note: I run Windows 7.
Answer: I _REALLY_ doubt that there's any way to do this since with `pickle`, you can
pack in pretty much anything. When unpickling, you need to be able to load the
modules etc. that were loaded when the object was pickled. In other words, in
general, to be able to `unpickle` something, python needs to be able to
reproduce the "environment" of the program (or at least a close enough
approximation) -- loaded modules, classes in the global namespace, etc ... In
general, this isn't possible without some help from the user. Consider:
import pickle

class Foo(object): pass

a = Foo()
with open('data.pickle','wb') as f:
    pickle.dump(a,f)
Now if you try to restore this in a separate script, python has no way of
knowing what a `Foo` looks like and so it can't restore the object (unless you
define a suitable `Foo` class in that script). This isn't really a process
that can be done without some human intervention.
Of course, an arguably useful special case where you're just pickling builtin
objects and things from the standard library might be able to be attempted ...
but I don't think you could write a general unpickler extension.
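That said, if you only need a rough outline rather than the live objects, one thing you could try (my suggestion, not a complete solution) is the standard-library pickletools module, which disassembles the pickle stream without importing the classes it refers to, so you can at least see the structure and any embedded strings and numbers:
import pickletools

with open('data.pickle', 'rb') as f:
    pickletools.dis(f.read())   # prints the pickle opcodes and their arguments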
|
Django: url and content creation results in 500 and no apache error log entry
Question: If I try to open a URL I created, I get a 500 error.
My procedure was:
First `python manage.py startapp black`
I added in `project/settings.py` under `INSTALLED_APPS` `'black',`
I added in `project/urls.py` `url(r'^test/', include('black.urls')),`
Content of `black/urls.py` is:
from django.conf.urls import patterns, url
from black import views
urlpatterns = patterns('',
    url(r'^$', views.index, name='index')
)
And content of black/views.py:
from django.http import HttpResponse

def index(request):
    return HttpResponse("SHOW ME: BLACK")
After all that, I synced the database.
I can't see any error in the Apache error log, and I can't spot a mistake in the Django
files I posted. What could cause this?
Answer: I suggest you take a look at the several handy ways to [log
errors](https://docs.djangoproject.com/en/dev/topics/logging/) with django.
For production, you will want to configure your logging behavior.
You can find an example
[here](https://docs.djangoproject.com/en/dev/topics/logging/#an-example) in
the same docs.
What I personally do in production is enabling email logging, it sends me an
email each time there is a fatal error.
Let's see what it would look like for logging both by mail and in a file:
**settings.py**
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler'
        },
        'logfile': {
            'class': 'logging.handlers.WatchedFileHandler',
            'filename': '/var/log/django/your_application.log'
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
        'django': {
            'handlers': ['logfile'],
            'level': 'ERROR',
            'propagate': False,
        },
    }
}
|
gtk python expanding content to fit window
Question: I have this program and, as much as I try, I cannot make the elements expand
to occupy the area defined by the entire size of the application window.
Code:
import smtpClass
import xmlHostes
import pygtk
pygtk.require('2.0')
import gtk

class Base:
    def get_main_menu(self, window):
        accel_group = gtk.AccelGroup()
        item_factory = gtk.ItemFactory(gtk.MenuBar, "<main>", accel_group)
        item_factory.create_items(self.menu_items)
        window.add_accel_group(accel_group)
        self.item_factory = item_factory
        return item_factory.get_widget("<main>")

    def __init__(self):
        ## MENU CONTENT
        self.menu_items = (  # MENU TREE        # CONTROL KEY   # FUNCTION
            ( "/_File",         None,           None, 0, "<Branch>" ),
            ( "/File/_Setings", "<control>S",   None, 0, None ),
            ( "/File/Quit",     "<control>Q",   gtk.main_quit, 0, None ),
            ( "/_Help",         None,           None, 0, "<LastBranch>" ),
            ( "/_Help/About",   None,           None, 0, None ),
        )
        # WINDOW WIDGET
        self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
        self.window.set_title("pyAnonMail")
        self.window.set_size_request(500, 500)
        # CONTEINER DATA
        main_vbox = gtk.VBox(False, 5)
        #main_vbox.set_border_width(1)
        self.window.add(main_vbox)
        main_vbox.show()
        self.contentTable = gtk.Table(1, 3, True)
        main_vbox.pack_start(self.contentTable, True, True, 10)
        # ADD MENU TO CONTEINER
        self.menubar = self.get_main_menu(self.window)
        main_vbox.add(self.menubar)
        self.menubar.show()
        # FRAME 1 ##########################################################
        self.frame1 = gtk.Frame()
        self.contentTable.attach(self.frame1, 0, 1, 0, 1,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        self.frame1.set_label_align(0.0, 0.0)
        self.frame1.set_label("Enter message data")
        self.frame1.set_shadow_type(gtk.SHADOW_ETCHED_OUT)
        self.frame1.show()
        # DATA TABLE
        dataTable = gtk.Table(2, 6, False)
        self.frame1.add(dataTable)
        dataTable.show()
        senderNameFrame = gtk.Label("Sender Name:")
        dataTable.attach(senderNameFrame, 0, 1, 0, 1,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        senderNameFrame.show()
        senderNameEntry = gtk.Entry()
        dataTable.attach(senderNameEntry, 0, 1, 1, 2,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        senderNameEntry.show()
        senderEmailFrame = gtk.Label("Sender Email:")
        dataTable.attach(senderEmailFrame, 1, 2, 0, 1,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        senderEmailFrame.show()
        senderEmailEntry = gtk.Entry()
        dataTable.attach(senderEmailEntry, 1, 2, 1, 2,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        senderEmailEntry.show()
        recipientNameFrame = gtk.Label("Recipient Name:")
        dataTable.attach(recipientNameFrame, 0, 1, 2, 3,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        recipientNameFrame.show()
        recipientNameEntry = gtk.Entry()
        dataTable.attach(recipientNameEntry, 0, 1, 3, 4,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        recipientNameEntry.show()
        recipientEmailFrame = gtk.Label("Recipient Email:")
        dataTable.attach(recipientEmailFrame, 1, 2, 2, 3,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        recipientEmailFrame.show()
        recipientEmailEntry = gtk.Entry()
        dataTable.attach(recipientEmailEntry, 1, 2, 3, 4,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        recipientEmailEntry.show()
        dataAndTimeFrame = gtk.Label("Data and Time:")
        dataTable.attach(dataAndTimeFrame, 0, 1, 4, 5,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        dataAndTimeFrame.show()
        dataAndTimeEntry = gtk.Entry()
        dataTable.attach(dataAndTimeEntry, 1, 2, 4, 5,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        dataAndTimeEntry.show()
        subjectFrame = gtk.Label("Subject:")
        dataTable.attach(subjectFrame, 0, 1, 5, 6,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        subjectFrame.show()
        subjectEntry = gtk.Entry()
        dataTable.attach(subjectEntry, 1, 2, 5, 6,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        subjectEntry.show()
        ####################################################################
        # FRAME 2 ##########################################################
        self.frame2 = gtk.Frame()
        self.contentTable.attach(self.frame2, 0, 1, 1, 2,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        self.frame2.set_label_align(0.0, 0.0)
        self.frame2.set_label("Enter message body")
        self.frame2.set_shadow_type(gtk.SHADOW_ETCHED_OUT)
        self.frame2.show()
        table = gtk.ScrolledWindow()
        table.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
        text = gtk.TextView()
        text.set_editable(True)
        textbuffer = text.get_buffer()
        self.frame2.add(table)
        table.show()
        table.add(text)
        text.show()
        ####################################################################
        # FRAME 3 ##########################################################
        self.frame3 = gtk.Frame()
        self.contentTable.attach(self.frame3, 0, 1, 2, 3,
            gtk.FILL, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
        self.frame3.set_label_align(0.0, 0.0)
        self.frame3.set_label("Select attached files")
        self.frame3.set_shadow_type(gtk.SHADOW_ETCHED_OUT)
        self.frame3.show()
        ####################################################################
        self.contentTable.show()
        self.window.show()

    def main(self):
        gtk.main()

    def destroy(self, widget, data=None):
        gtk.main_quit()
I thought the problem was in the gtk.Box parameters, or in the parameters of the
gtk.Table calls. I've tried everything and nothing changes. In the examples I find
on the net about gtk and python, everything looks the same as in my code, but the
result is always different from mine.
Answer: You have to add gtk.EXPAND to the xoptions parameter when calling
[**contentTable.attach**](http://www.pygtk.org/docs/pygtk/class-
gtktable.html#method-gtktable--attach):
# FRAME 1 ##########################################################
self.frame1 = gtk.Frame()
self.contentTable.attach(self.frame1, 0, 1, 0, 1,
    gtk.FILL | gtk.EXPAND, gtk.EXPAND | gtk.SHRINK | gtk.FILL, 0, 0)
|
Index similar entries in Python
Question: I have a column of data (easily imported from Google Docs thanks to
[gspread](http://burnash.github.com/gspread/userguide.html#getting-the-entire-
row-or-column)) that I'd like to intelligently align. I ingest entries into a
dictionary. Input can include email, twitter handle or a blog URL. For
example:
`[email protected]
@mikej45
[email protected]
_http://tumblr.com/mikej45`
Right now, the "dumb" version is:
def NomineeCount(spreadsheet):
    worksheet = spreadsheet.sheet1
    nominees = worksheet.col_values(6) # F = 6
    unique_nominees = {}
    for c in nominees:
        pattern = re.compile(r'\s+')
        c = re.sub(pattern, '', c)
        if unique_nominees.has_key(c) == True: # If we already have the name
            unique_nominees[c] += 1
        else:
            unique_nominees[c] = 1
    # Print out the alphabetical list of nominees with leading vote count
    for w in sorted(unique_nominees.keys()):
        print string.rjust(str(unique_nominees[w]), 2)+ " " + w
    return nominees
What's an efficient(-ish) way to add in some smarts during the if process?
Answer: You can try with defaultdict:
from collections import defaultdict
unique_nominees = defaultdict(lambda: 0)
unique_nominees[c] += 1
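For illustration, a minimal sketch of the counting loop rewritten with defaultdict (the whitespace normalization is kept from the question; how you wire this into your spreadsheet code is an assumption on my part):
import re
from collections import defaultdict

unique_nominees = defaultdict(lambda: 0)
pattern = re.compile(r'\s+')
for c in nominees:                 # nominees as in the question
    c = re.sub(pattern, '', c)     # strip whitespace before counting
    unique_nominees[c] += 1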
|
App Engine Python Authentication Custom Redirect
Question:
handlers:
- url: /secure_api/.*
  script: _go_app
  login: required
  auth_fail_action: unauthorized
This code only brings me to a page saying "Login required to view page." Is
there a way to instead redirect to my home page?
Answer: When you specify `auth_fail_action: unauthorized`, you get the page you are
seeing (see
[here](https://developers.google.com/appengine/docs/python/config/appconfig#Requiring_Login_or_Administrator_Status)
for the details). Changing `unauthorized` to `redirect` will take them to the
login screen, but if you want to do more granular handling of users based on
their logged-in status, your best bet is to do it inside of your code via the
`Users` API. For instance (this is adapted from the
[docs](https://developers.google.com/appengine/docs/python/users/)), here is a
simple example that would redirect a non-logged-in user to `/`:
from google.appengine.api import users
import webapp2

class MyHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user:
            pass  # Do stuff here for logged-in users
        else:
            # Redirect if user is None, which is the case for non-logged-in users
            self.redirect('/')
|
Python Bar Plot pylab not reading in all data from csv
Question: I am attempting to create a bar plot from a large text file of data, stored
in a csv file, that looks like the following:
#DowntonPBS, 23
#DowntonAbbey, 12
#Download, 8
#Download:, 2
#Downloads, 2
#DownstairsMixtape, 1
#DownWithAssad, 1
#DownYoTLParty, 1
#DowntonAbbey?, 1
#Downtonabbey, 1
#DowntownAbbey, 1
The following code is where I'm at, and while this method has worked in the
past for different plotting scripts, I've done something wrong here that I
just can't seem to find. Instead of plotting all of the data, I only seem to
be getting three records.
import pylab as p
import sys
from matplotlib.mlab import csv2rec
y = []
fig = p.figure()
ax = fig.add_subplot(1,1,1)
input = open(sys.argv[1], 'r')
data = csv2rec(input, names=['tag', 'count'])
for item in data['count']:
    y.append(item)
N = len(y)
ind = range(N)
ax.bar(ind, y, align='center')
ax.set_ylabel('Counts')
ax.set_title('HashTag Diversity')
ax.set_xticks(ind)
group_labels = data['tag']
ax.set_xticklabels(group_labels)
fig.autofmt_xdate()
p.show()
If I add print statements for y and N and run the script against my larger
dataset, I end up with:
[45, 37, 36]
3
y should be a very large array of about 1000 values, and the length (N)
should be 1000. I'm not sure what's going on here.
Answer: `csv2rec()` ignores lines starting with "#" by default; you can change this
with the `comments` argument:
csv2rec(input, names=['tag', 'count'], comments="something")
|
How do I read a text file containing an array and place it in an array? (Python)
Question:
from numpy import *

f = open('matrix.txt', 'r')
x = []
for line in f.readlines():
    y = [value for value in line.split()]
    x.append(y)
f.close()
x = map(int, x)
a = array([x])
Basically, my code is suppose to open the text file and put it into the list
x. I then change those values in x into an integer and place that in an array
a. Is there a faster way to do this? Btw my code doesn't work.
Answer: You'll probably do a little better if you use
[`np.loadtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html).
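A minimal sketch of what that could look like, assuming the file contains whitespace-separated integers as described in the question:
import numpy as np

a = np.loadtxt('matrix.txt', dtype=int)   # 2-D array, one row per line of the file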
|
How to define a Django model with a reference to itself?
Question: I'm new to Django and currently write an application, where users can enter
their opinions on some topic.
Every opinion A may have zero or more opinions, which support A and several
(zero or more) opinions, which refute A.
I tried to create a model for this and wrote a `models.py` file like this:
from django.db import models

# Create your models here.
class Opinion(models.Model):
    id = models.AutoField(primary_key=True)
    contents = models.CharField(max_length=256)
    source = models.CharField(max_length=256)
    proArguments = models.ManyToManyField(Opinion, verbose_name="Pro arguments")
    contraArguments = models.ManyToManyField(Opinion, verbose_name="Contra arguments")
When I run `python manage sqlall`, I get following error:
File "D:\dev\history-site\history_site\opinions\models.py", line 4, in <module>
class Opinion(models.Model):
File "D:\dev\history-site\history_site\opinions\models.py", line 8, in Opinion
proArguments = models.ManyToManyField(Opinion, verbose_name="Pro arguments")
NameError: name 'Opinion' is not defined
How can I fix this error?
Answer: From [docs of
`ManyToManyField`](https://docs.djangoproject.com/en/dev/ref/models/fields/#ref-
manytomany):
> Requires a positional argument: the class to which the model is related.
> This works exactly the same as it does for `ForeignKey`, including all the
> options regarding recursive and lazy relationships.
Which
[says](https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey):
> To create a recursive relationship -- an object that has a many-to-one
> relationship with itself -- use `models.ForeignKey('self')`.
So:
proArguments = models.ManyToManyField("self", verbose_name="Pro arguments")
contraArguments = models.ManyToManyField("self", verbose_name="Contra arguments")
I'm slightly wondering about your data model if arguments are considered
opinions as well, but that's another matter.
|
Using Python 3.3 to access blocked webpages
Question: I'm trying to download webpages off the internet. I'm able to steal the HTML
(with URLlib), but I can't download images correctly. There's already a
question for that though. My question is, **is there any way I can use python
to bypass a firewall to access 'blocked' webpages?**
Ideally it would be using some obscure code or module, but if it's impossible,
could someone tell me a good workaround using a different method (like a
proxy)?
Answer: If you want to extract images from an HTML page, you can parse it with the
**re** module
import re
and use a regex to extract only the _img src_ tags. You can also use a parser that is
already written, for example BeautifulSoup:
<http://www.crummy.com/software/BeautifulSoup/>
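For the image-extraction part, a rough sketch with BeautifulSoup might look like this (the URL is a placeholder, and it only works for pages your network actually lets you reach):
from urllib.request import urlopen      # Python 3.3, as in the question
from bs4 import BeautifulSoup

html = urlopen('http://example.com').read()
soup = BeautifulSoup(html)
img_urls = [img.get('src') for img in soup.find_all('img')]   # src of every <img> tag
print(img_urls)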
A firewall is a perimeter-defense component of a computer network; it sits at the
contact points between two or more sections of the network and enforces the security
policy of the network itself. So you have to deal with it at the network level,
not from within your programming language.
|
Get 10 numbers from the end of the string with Python regex?
Question: I have some strings like -
1. "07870 622103"
2. "(0) 07543 876545"
3. "07321 786543 - not working"
I want to get the last 10 digits of these strings. like -
1. "07870622103"
2. "07543876545"
3. "07321786543"
So far I have tried-
a = re.findall(r"\d+${10}", mobilePhone)
Please help.
Answer: It'll be easier just to filter your string for digits and picking out the last
10:
''.join([c for c in mobilePhone if c.isdigit()][-10:])
Result:
>>> mobilePhone = "07870 622103"
>>> ''.join([c for c in mobilePhone if c.isdigit()][-10:])
'7870622103'
>>> mobilePhone = "(0) 07543 876545"
>>> ''.join([c for c in mobilePhone if c.isdigit()][-10:])
'7543876545'
>>> mobilePhone = "07321 786543 - not working"
>>> ''.join([c for c in mobilePhone if c.isdigit()][-10:])
'7321786543'
The regular-expression approach (filtering out everything that is not a digit) is
faster, though:
$ python -m timeit -s "mobilenum='07321 786543 - not working'" "''.join([c for c in mobilenum if c.isdigit()][-10:])"
100000 loops, best of 3: 6.68 usec per loop
$ python -m timeit -s "import re; notnum=re.compile(r'\D'); mobilenum='07321 786543 - not working'" "notnum.sub(mobilenum, '')[-10:]"
1000000 loops, best of 3: 0.472 usec per loop
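Written out plainly, the same idea as the timeit one-liner above: strip everything that is not a digit, then keep the last ten characters:
import re

digits = re.sub(r'\D', '', mobilePhone)   # mobilePhone as in the question
last_ten = digits[-10:]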
|
unicodedata.unidata_version prints wrong unicode version?
Question:
>>> import sys
>>> sys.version_info
sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0)
>>> import unicodedata
>>> unicodedata.unidata_version
'5.2.0'
Which means my Python version should have Unicode 5.2.0.
But when I go to the [list of newly added unicode chars in version
5.2.0](http://www.fileformat.info/info/unicode/version/5.2/index.htm) and
print such a character, it is not recognised:
>>> print u"\u0803"
ࠃ
[Chars from
5.1.0](http://www.fileformat.info/info/unicode/version/5.1/index.htm) are
recognised however:
>>> print u"\u03CF"
Ϗ
So should I always count one version below the one that is actually reported by
`unicodedata.unidata_version`, or am I misunderstanding something?
Answer: You are confusing what your terminal can print with what Python knows about
unicode characters.
Your _terminal font_ doesn't recognize those code points. Python can handle
them just fine:
>>> import unicodedata
>>> unicodedata.category(u'\u0803')
'Lo'
>>> unicodedata.name(u'\u0803')
'SAMARITAN LETTER DALAT'
>>> unicodedata.category(u'\u03CF')
'Lu'
>>> unicodedata.name(u'\u03CF')
'GREEK CAPITAL KAI SYMBOL'
Ironically enough, the font used by my browser doesn't define an image for
either codepoint. Your post shows two placeholder characters for me:

|
Inputting data from a text file with multiple numbers on each line - Python
Question: Lets say I have a data file that has this inside it:
23 33 45
91 81 414
28 0 4
7 9 14
8 9 17
1 1 3
38 19 84
How can I import it into a list so that each individual number is its own
item?
Answer: You can use numpy's loadtxt to read the data file into a numpy array:
from numpy import loadtxt
a,b,c = loadtxt("filename", usecols=(0,1,2), unpack=True)
Output
a = array([23, 91, 28, 7, 8, 1, 38])
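If you literally want one flat Python list with each number as its own item, a minimal pure-Python sketch (my own variation, not part of the loadtxt answer) would be:
with open("filename") as f:
    numbers = [int(tok) for line in f for tok in line.split()]
# numbers == [23, 33, 45, 91, 81, 414, 28, 0, 4, ...]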
|
What can I do if celery forgets about tasks on a (apparently) random basis?
Question: I have a flask app with sqlalchemy and a celery worker up and running. I use
redis as my broker. Every time someone submits a new message in a
conversation, a worker is started and is supposed to send notification mails
to all people participating in the conversation. Thus it connects to the
database and gets all relevant email addresses.
Unfortunately there seems to be a random factor deciding whether celery knows
about the task that sends mail or not. After some starts it works perfectly,
after other starts it does not work at all. The error I get when it does not
work is:
[2012-11-28 21:42:58,751: ERROR/MainProcess] Received unregistered task of type 'messages.sendnotifies'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': 'messages.sendnotifies', 'eta': None, 'args': [41L], 'expires': None, 'callbacks': None, 'errbacks': None, 'kwargs': {}, 'id': '47da3ba7-ec91-4056-bb4f-a6afec2f960f', 'utc': True} (183b)
Traceback (most recent call last):
File "/var/www/virtual/venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 410, in on_task_received
connection = self.connection
KeyError: 'messages.sendnotifies'
When i run celery with `--loglevel=DEBUG` it lists the task in the tasklist
though:
[Tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. event.notfiy
. messages.sendnotifies
. money.checktransaction
. money.deploypayment
. money.invoicepromotion
. protocols.plotweight
. questionnaire.deploy
. questionnaire.suitability
. registration.notify
. tracking.track
. user.checkaccount
. user.checkaccounts
. user.fixpermissions
. user.genpassreset
I have not been able to find a pattern for when it works and when it does not. I have
upgraded all relevant packages to the latest versions available today and it
still does not work.
I am hoping for any ideas on why this might not work and how I might be able
to fix it. Every feedback is very much appreciated, as I am kind of desperate!
Answer: This could be a bug that is already fixed in the current 3.0 development branch;
you could help test it by installing:
pip install https://github.com/celery/celery/zipball/3.0
|
import a file from different directory
Question: I'm using python 2.7. I have written a script, and I need to import a function
from another file that is in a different folder. My script is at the
path
C:\python\xyz\xls.py
The path of the file containing the function I need to call is
C:\python\abc.py
I tried
from python.abc import *
but it is not working. Is there any other way to call the function, or do I need
to move the files into the same directory? Please help. Thank you.
Answer: You can dynamically load a module from a file:
import imp
modl = imp.load_source('modulename', '/path/to/module.py')
The [imp module docs](http://docs.python.org/2/library/imp.html) will give you
more details.
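For your concrete paths, a usage sketch could look like this (`some_function` is a placeholder for whatever abc.py actually defines):
import imp

abc_module = imp.load_source('abc_module', r'C:\python\abc.py')
abc_module.some_function()   # placeholder name; call whatever abc.py defines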
|
How do I implement a sliding window in Python?
Question: I have a matrix, for instance
a=[12,2,4,67,8,9,23]
and I would like code that appends a value, say 45, to it and removes the
first value '12', so in essence I want to make
a = [2,4,67,8,9,23,45]
I want to work with regular matrices, not numpy matrices, so I can't use hstack
or vstack. How do I do this in python? Any help would be appreciated, thanks.
Answer: Use a deque.
<http://docs.python.org/2/library/collections.html#collections.deque>
>>> import collections
>>> d = collections.deque(maxlen=7)
>>> d.extend([12,2,4,67,8,9,23])
>>> d.append(45)
>>> print d
deque([2, 4, 67, 8, 9, 23, 45], maxlen=7)
|
AttributeError when using numpy.logical_and with Pandas Series object
Question: I'm confused about the difference between Pandas Series objects when using
`reindex_like` and related features. For example, consider the following
Series objects:
>>> import numpy
>>> import pandas
>>> series = pandas.Series([1, 2, 3])
>>> x = pandas.Series([True]).reindex_like(series).fillna(True)
>>> y = pandas.Series(True, index=series.index)
>>> x
0 True
1 True
2 True
>>> y
0 True
1 True
2 True
On the surface `x` and `y` appear to be identical in their contents and
indexing. However, they must be different in some way because one of them
causes an error when using `numpy.logical_and()` and the other does not.
>>> numpy.logical_and(series, y)
0 True
1 True
2 True
>>> numpy.logical_and(series, x)
Traceback (most recent call last):
File "<ipython-input-10-e2050a2015bf>", line 1, in <module>
numpy.logical_and(series, x)
AttributeError: logical_and
What is `numpy.logical_and()` complaining about here? I don't see the
difference between the two series, `x` and `y`. However, there must be some
subtle difference.
The Pandas documentation says the Series object is a valid argument to "most
NumPy functions." Clearly that is only somewhat true in this case. Apparently the
creation mechanism makes `x` unusable with this particular numpy function.
As a side-note, which of the two creation mechanisms, `reindex_like()` and the
`index` argument are more efficient and idiomatic for this scenario? Maybe
there is another/better way I haven't considered also.
Answer: It looks like this is not a bug and the subtle difference is due to the usage
of the `reindex_like()` method. The call to `reindex_like()` inserts some NaN
data into the series so the `dtype` of that series changes from `bool` to
`object`.
>>> series = pandas.Series([1, 2, 3])
>>> x = pandas.Series([True])
>>> x.dtype
dtype('bool')
>>> x = pandas.Series([True]).reindex_like(series)
>>> x.dtype
dtype('object')
I posted an [issue](https://github.com/pydata/pandas/issues/2388) about this
anomaly on the Pandas github page.
The full explanation/discussion is
[here](https://github.com/pydata/pandas/issues/2388#issuecomment-10858926). It
looks like this behavior could potentially change in the future so watch the
[issue](https://github.com/pydata/pandas/issues/2388) for more on-going
details.
|
Combine items in list
Question: Python3:
dct = {'Mazda': [['Ford', 95], ['Toyota', 20], ['Chrysler', 52], ['Toyota', 5], ['Toyota', 26]]}
I have the above dictionary, where the value is a list of lists. What I
would like to do is combine the inner lists that name the same car and add
their integers together.
E.g., since Toyota is in there three times, combine all of its numbers to give
me another list:
[Toyota, 51]
The final result should be (the order does not matter):
dct = {'Mazda': [['Ford', 95], ['Toyota', 51], ['Chrysler', 52]]}
Answer: For the input in the question:
dct = {'Mazda': [['Ford', 95], ['Toyota', 20], ['Chrysler', 52],
['Toyota', 5], ['Toyota', 26]]}
Try this:
from collections import defaultdict

for k, v in dct.items():
    aux = defaultdict(int)
    for car, num in v:
        aux[car] += num
    dct[k] = map(list, aux.items())
Now `dct` contains the expected result:
dct
=> {'Mazda': [['Ford', 95], ['Toyota', 51], ['Chrysler', 52]]}
|
"object has no attribute" in custom Django model field
Question: I am trying to create a Django model field that represents a duration with
days, hours, minutes and seconds text input fields in the HTML, and stores the
duration in the db using the ical format (RFC5545).
(this is related to my question on [How to create an ical duration field in
Django?](http://stackoverflow.com/questions/13597063/how-to-create-an-ical-
duration-field-in-django))
Here is my approach:
Thanks bakkal and Pol. Below is what I came up with.
from django.db import models
from icalendar.prop import vDuration
from django.forms.widgets import MultiWidget
from django.forms import TextInput, IntegerField
from django.forms.util import flatatt
from django.forms.fields import MultiValueField
from django.utils.encoding import force_unicode
from django.utils.safestring import mark_safe
from django.utils.text import capfirst
from django.utils.translation import ugettext_lazy as _
from django.core import validators
from datetime import timedelta

def is_int(s):
    try:
        int(s)
        return True
    except ValueError:
        return False

class Widget_LabelInputField(TextInput):
    """
    Input widget with label
    """
    input_type="numbers"

    def __init__(self, labelCaption, attrs=None):
        self.labelCaption = labelCaption
        super(Widget_LabelInputField, self).__init__(attrs)

    def _format_value(self, value):
        if is_int(value):
            return value
        return '0'

    def render(self, name, value, attrs=None):
        if value is None:
            value = '0'
        final_attrs = self.build_attrs(attrs, type=self.input_type, name=name)
        if value != '':
            # Only add the 'value' attribute if a value is non-empty.
            final_attrs['value'] = force_unicode(self._format_value(value))
        if (self.labelCaption):
            typeString = self.labelCaption + ': '
        else:
            typeString = ''
        return mark_safe(u'' + typeString + '<input%s style=\'width: 30px; margin-right: 20px\'/>' % flatatt(final_attrs))

class Widget_DurationField(MultiWidget):
    """
    A Widget that splits duration input into two <input type="text"> boxes.
    """
    def __init__(self, attrs=None):
        widgets = (Widget_LabelInputField(labelCaption='days', attrs=attrs),
                   Widget_LabelInputField(labelCaption='hours', attrs=attrs),
                   Widget_LabelInputField(labelCaption='minutes', attrs=attrs),
                   Widget_LabelInputField(labelCaption='seconds', attrs=attrs)
                   )
        super(Widget_DurationField, self).__init__(widgets, attrs)

    def decompress(self, value):
        if value:
            duration = vDuration.from_ical(value)
            return [str(duration.days), str(duration.seconds // 3600), str(duration.seconds % 3600 // 60), str(duration.seconds % 60)]
        return [None, None, None, None]

class Forms_DurationField(MultiValueField):
    widget = Widget_DurationField
    default_error_messages = {
        'invalid_day': _(u'Enter a valid day.'),
        'invalid_hour': _(u'Enter a valid hour.'),
        'invalid_minute': _(u'Enter a valid minute.'),
        'invalid_second': _(u'Enter a valid second.')
    }

    def __init__(self, *args, **kwargs):
        errors = self.default_error_messages.copy()
        if 'error_messages' in kwargs:
            errors.update(kwargs['error_messages'])
        fields = (
            IntegerField(min_value=-9999, max_value=9999,
                         error_messages={'invalid': errors['invalid_day']},),
            IntegerField(min_value=-9999, max_value=9999,
                         error_messages={'invalid': errors['invalid_hour']},),
            IntegerField(min_value=-9999, max_value=9999,
                         error_messages={'invalid': errors['invalid_minute']},),
            IntegerField(min_value=-9999, max_value=9999,
                         error_messages={'invalid': errors['invalid_second']},),
        )
        super(Forms_DurationField, self).__init__(fields, *args, **kwargs)

    def compress(self, data_list):
        if data_list:
            if data_list[0] in validators.EMPTY_VALUES:
                raise ValidationError(self.error_messages['invalid_day'])
            if data_list[1] in validators.EMPTY_VALUES:
                raise ValidationError(self.error_messages['invalid_hour'])
            if data_list[2] in validators.EMPTY_VALUES:
                raise ValidationError(self.error_messages['invalid_minute'])
            if data_list[3] in validators.EMPTY_VALUES:
                raise ValidationError(self.error_messages['invalid_second'])
            return vDuration(timedelta(days=data_list[0],hours=data_list[1],minutes=data_list[2],seconds=data_list[3]))
        return None

class Model_DurationField(models.Field):
    description = "Duration"

    def __init__(self, *args, **kwargs):
        super(Model_DurationField, self).__init__(*args, **kwargs)

    def db_type(self, connection):
        return 'varchar(255)'

    def get_internal_type(self):
        return "Model_DurationField"

    def to_python(self, value):
        if isinstance(value, vDuration) or value is None:
            return value
        return vDuration.from_ical(value)

    def get_prep_value(self, value):
        return value.to_ical()

    def formfield(self, **kwargs):
        defaults = {
            'form_class': Forms_DurationField,
            'required': not self.blank,
            'label': capfirst(self.verbose_name),
            'help_text': self.help_text}
        defaults.update(kwargs)
        return super(Model_DurationField, self).formfield(**defaults)
It works in the following Model:
class TestModel(models.Model):
    ID = models.CharField(max_length=255)
    start = models.DateTimeField(null=True)
    #duration = models.CharField(max_length=255,null=True) commented out
    otherDuration = duration.Model_DurationField(null=True)
but not in this one:
class TestModel(models.Model):
    ID = models.CharField(max_length=255)
    start = models.DateTimeField(null=True)
    duration = models.CharField(max_length=255,null=True) # not commented out
    otherDuration = duration.Model_DurationField(null=True)
I get the following error:
File "/somepath/models.py", line 5, in TestModel
otherDuration = duration.Model_DurationField(null=True)
AttributeError: 'CharField' object has no attribute 'Model_DurationField'
That puzzles me... it seems that python considers my field to be an attribute
of the previous field, but only if it is a CharField. Any ideas?
Answer: I was stupid. The problem was that I named the file where the model was
defined duration.py, so there was a naming conflict with the "duration" field.
I renamed the file and it worked.
|
How can I extract a wrapped C++ type from a Python type using boost::python?
Question: I've wrapped a C++ class using Py++ and everything is working great in Python.
I can instantiate the c++ class, call methods, etc.
I'm now trying to embed some Python into a C++ application. This is also
working fine for the most-part. I can call functions on a Python module, get
return values, etc.
The python code I'm calling returns one of the classes that I wrapped:
import _myextension as myext
def run_script(arg):
    my_cpp_class = myext.MyClass()
    return my_cpp_class
I'm calling this function from C++ like this:
// ... excluding error checking, ref counting, etc. for brevity ...
PyObject *pModule, *pFunc, *pArgs, *pReturnValue;
Py_Initialize();
pModule = PyImport_Import(PyString_FromString("cpp_interface"));
pFunc = PyObject_GetAttrString(pModule, "run_script");
pArgs = PyTuple_New(1); PyTuple_SetItem(pArgs, 0, PyString_FromString("an arg"));
pReturnValue = PyObject_CallObject(pFunc, pArgs);
bp::extract< MyClass& > extractor(pReturnValue); // PROBLEM IS HERE
if (extractor.check()) { // This check is always false
    MyClass& cls = extractor();
}
The problem is the extractor never actually extracts/converts the PyObject* to
MyClass (i.e. extractor.check() is always false).
According to [the
docs](http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/object.html#python.extracting_c___objects)
this is the correct way to extract a wrapped C++ class.
I've tried returning basic data types (ints/floats/dicts) from the Python
function and all of them are extracted properly.
Is there something I'm missing? Is there another way to get the data and cast
to MyClass?
Answer: I found the error. I wasn't linking my bindings in my main executable because
the bindings were compiled in a separate project that created the python
extension only.
I assumed that by loading the extension using `pModule =
PyImport_Import(PyString_FromString("cpp_interface"));` the bindings would be
loaded as well, but this is not the case.
**To fix the problem, I simply added the files that contain my boost::python
bindings (for me, just wrapper.cpp) to my main project and re-built.**
|
Transferring specific data from one excel file to another using python
Question: I just started learning Python and I need help with a script my internship
asked me to write.
I have a csv file (sheet1.csv) and I need to extract data from only two of the
columns which have the headers referenceID and PartNumber that correspond to
each other. I need to update a separate csv file called sheet2.csv which also
contains the two columns referenceID and PartNumber however many of the
PartNumber cells are empty.
Basically I need to fill in the “PartNumber” field with the values from
sheet1. From the research I’ve done, I’ve decided that using dictionaries is a
solid approach to writing this script (I think). So far I have been able to
read the files and create two dictionaries with the ReferenceIDs as the keys
and the PartNumbers as values. Here is what I have, followed by an example of
what the dictionaries look like.
import csv
a = open('sheet1.csv', 'rU')
b = open('sheet2.csv', 'rU')
csvReadera = csv.DictReader(a)
csvReaderb = csv.DictReader(b)
a_dict = {}
b_dict = {}
for line in csvReadera:
    a_dict[line["ReferenceID"]] = line["PartNumber"]
print(a_dict)
for line in csvReaderb:
    b_dict[line["ReferenceID"]] = line["PartNumber"]
print(b_dict)
a_dict = {'R150': 'PN000123', 'R331': 'PN000873', 'C774': 'PN000064', 'L7896': 'PN000447', 'R0640': 'PN000878', 'R454': 'PN000333'}
b_dict = {'C774': '', 'R331': '', 'R454': '', 'L7896': 'PN000000', 'R0640': '', 'R150': 'PN000333'}
How can I compare the two dictionaries and fill in/overwrite the missing
values for b-dict and then write to sheet2? Certainly, there must be more
efficient methods than what I have come up with, but I have never used Python
before so please forgive my pitiful attempt!
Answer: Have a look at the pandas library.
import pandas as pd

# this is how you read the files
dfa = pd.read_csv("sheet1.csv")
dfb = pd.read_csv("sheet2.csv")
Let's just take the dicts you defined as test data:
a_dict = {'R150': 'PN000123', 'R331': 'PN000873', 'C774': 'PN000064', 'L7896': 'PN000447', 'R0640': 'PN000878', 'R454': 'PN000333'}
b_dict = {'C774': '', 'R331': '', 'R454': '', 'L7896': 'PN000000', 'R0640': '', 'R150': 'PN000333'}
dfar = pd.DataFrame(a_dict.items(), columns = ['ReferenceID', 'PartNumber'])
dfbr = pd.DataFrame(b_dict.items(), columns = ['ReferenceID', 'PartNumber'])
dfa = dfar[['ReferenceID', 'PartNumber']]
dfa.columns = ['ReferenceIDA', 'PartNumberA']
dfb = dfbr[['ReferenceID', 'PartNumber']]
dfb.columns = ['ReferenceIDB', 'PartNumberB']
you get this
In [97]: dfa
Out[97]:
ReferenceIDA PartNumberA
0 R331 PN000873
1 R454 PN000333
2 L7896 PN000447
3 R150 PN000123
4 C774 PN000064
5 R0640 PN000878
In [98]: dfb
Out[98]:
ReferenceIDB PartNumberB
0 R331
1 R454
2 R0640
3 R150 PN000333
4 C774
5 L7896 PN000000
now
In [67]: cd = pd.concat([dfa,dfb], axis=1)
In [68]: cd
Out[68]:
ReferenceIDA PartNumberA ReferenceIDB PartNumberB
0 R331 PN000873 R331
1 R454 PN000333 R454
2 L7896 PN000447 R0640
3 R150 PN000123 R150 PN000333
4 C774 PN000064 C774
5 R0640 PN000878 L7896 PN000000
cd["res"] = cd.apply(lambda x : x["PartNumberB"] if x["PartNumberB"] else x["PartNumberA"], axis=1)
cd
Out[106]:
ReferenceIDA PartNumberA ReferenceIDB PartNumberB res
0 R331 PN000873 R331 PN000873
1 R454 PN000333 R454 PN000333
2 L7896 PN000447 R0640 PN000447
3 R150 PN000123 R150 PN000333 PN000333
4 C774 PN000064 C774 PN000064
5 R0640 PN000878 L7896 PN000000 PN000000
This is what you wanted. Just set
dfbr['PartNumber'] = cd['res']
and dump it to csv:
dfbr.to_csv('sheet2.csv')
|
Python OS PostreSQL and quotes
Question: I can find plenty here about quotes in PSQL but nothing that quite fits this
problem.
First it's a kludge. I know it's a kludge but I think I'm stuck with it (open
to other alternatives though)
I have a near black box third-party linux appliance to which I have limited
access I have bash, python and psql to work with. I don't have psycopg2 or any
other pg libraries.
The DB I have to work with uses case-sensitive table names that need to be
quoted (don't ask...)
So, at the moment I write OS shell commands to get data which I then fiddle
about with and convert to JSON for my needs
A simple example:
pg_str = "psql -U pword dbname -A -t -c "
sql = "SELECT * FROM \"Addresses\" WHERE id=999"
os_str = pg_str + "\'" + sql + "\'" + ";"
data = string.split(os.popen(os_str).read())
No problem with that. I'm not claiming it's pretty, but it works (remember I
can't import any db libraries...)
It all goes wrong when I have a where clause on a text field:
pg_str = "psql -U pword dbname -A -t -c "
sql = "SELECT * FROM \"Addresses\" WHERE town='london'"
os_str = pg_str + "\'" + sql + "\'" + ";"
data = string.split(os.popen(os_str).read())
Too many quote combinations to cope with...?
I've obviously tried lots of escape combinations and have been googling for
several hours, but every solution seems to require libraries that I haven't
got access to.
I'm no python or psql expert - this is about my current limit. I feel sure
that I'm going about it the wrong way but am currently beaten on figuring out
the right way...
Answer: 1. There's no need to `\` escape `'` characters inside of `"` strings.
2. You can use `"""` strings to avoid needing to escape `"` characters.
3. Use string.replace to quote `'` characters for the shell, by replacing them with `'\''`. The resulting string will need to be surrounded by unescaped `'` characters when passed to the shell.
Using these rules, the SQL string can be made easily readable and editable:
pg_str = "psql -U pword dbname -A -t -c "
sql = """SELECT * FROM "Addresses" WHERE town='london'"""
sql = sql.replace("'", "'\\''")
os_str = pg_str + "'" + sql + "'" + ";"
data = string.split(os.popen(os_str).read())
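As an alternative worth mentioning (my suggestion, not part of the answer above): if you pass psql its arguments as a list via subprocess, the shell never sees the SQL, so no quoting gymnastics are needed at all:
import subprocess

sql = """SELECT * FROM "Addresses" WHERE town='london'"""
# arguments go straight to psql; no shell is involved
out = subprocess.check_output(["psql", "-U", "pword", "dbname", "-A", "-t", "-c", sql])
data = out.split()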
|
Recurring function call with twisted
Question: I have a basic IRC bot that looks something like this (see below), what I
would like to do is use something like the `_5_mins` function to be called
every 5 mins with a `LoopingCall`
import sys
import re
from twisted.internet import reactor, task, defer, protocol
from twisted.python import log
from twisted.words.protocols import irc
from twisted.application import internet, service
import time
HOST, PORT = 'irc.freenode.net', 6667
class IrcProtocol(irc.IRCClient):
    nickname = 'BOTSNAME'
    password = 'NICKPASSWORD'
    timeout = 600.0

    def signedOn(self):
        pMess = "IDENTIFY %s" % self.password
        self.msg("NickServ",pMess)
        time.sleep(10)
        for channel in self.factory.channels:
            self.join(channel)

    def _5_mins(self):
        self.msg(self.factory.channels[0],"5 minutes have elapsed")

class IrcFactory(protocol.ReconnectingClientFactory):
    channels = ['#BOTCHANNEL']
    protocol = IrcProtocol

if __name__ == '__main__':
    reactor.connectTCP(HOST, PORT, IrcFactory())
    log.startLogging(sys.stdout)
    reactor.run()
elif __name__ == '__builtin__':
    application = service.Application('IrcBot')
    ircService = internet.TCPClient(HOST, PORT, IrcFactory())
    ircService.setServiceParent(application)
How do I alter the `signedOn` function to work with `task.LoopingCall`, or is
there a better way?
EDIT: I was really close to a solution; the following is what I have gone with:
def signedOn(self):
    pMess = "IDENTIFY %s" % self.password
    self.msg("NickServ",pMess)
    time.sleep(10)
    for channel in self.factory.channels:
        self.join(channel)
    lc = task.LoopingCall(self._5_mins)
    lc.start(self.timeout)
Answer:
def signedOn(self):
    pMess = "IDENTIFY %s" % self.password
    self.msg("NickServ",pMess)
    time.sleep(10)
    for channel in self.factory.channels:
        self.join(channel)
    lc = task.LoopingCall(self._5_mins)
    lc.start(self.timeout)
|
Python CSV module, add column to the side, not the bottom
Question: I am new to Python, and I need some help. I made a python script that takes
two columns from a file and copies them into a "new file". However, every now
and then I need to add columns to the "new file". I need to add the columns at
the side, not the bottom, but my script adds them to the bottom. Someone suggested
using the csv module, and I read about it, but I can't make it add
the new column to the side of the previous columns. Any help is highly
appreciated.
import sys
import re
filetoread = sys.argv[1]
filetowrite = sys.argv[2]
newfile = str(filetowrite) + ".txt"
openold = open(filetoread,"r")
opennew = open(newfile,"a")
rline = openold.readlines()
number = int(len(rline))
start = 0
for i in range (len(rline)) :
    if "2theta" in rline[i] :
        start = i
for line in rline[start + 1 : number] :
    words = line.split()
    word1 = words[1]
    word2 = words[2]
    opennew.write (word1 + " " + word2 + "\n")
openold.close()
opennew.close()
Here is the second code I wrote, using CSV:
import sys
import re
import csv
filetoread = sys.argv[1]
filetowrite = sys.argv[2]
newfile = str(filetowrite) + ".txt"
openold = open(filetoread,"r")
rline = openold.readlines()
number = int(len(rline))
start = 0
for i in range (len(rline)) :
    if "2theta" in rline[i] :
        start = i
words1 = []
words2 = []
for line in rline[start + 1 : number] :
    words = line.split()
    word1 = words[1]
    word2 = words[2]
    words1.append([word1])
    words2.append([word2])
with open(newfile, 'wb') as file:
    writer = csv.writer(file, delimiter= "\n")
    writer.writerow(words1)
    writer.writerow(words2)
These are some samples of input files:
* <https://dl.dropbox.com/u/63216126/file5.txt>
* <https://dl.dropbox.com/u/63216126/file6.txt>
My first script works "almost" great, except that it writes the new columns at
the bottom and I need them at the side of the previous columns.
Answer: The proper way to use `writerow` is to give it a single list that contains the
data for _all_ the columns.
words.append(word1)
words.append(word2)
writer.writerow(words)
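A related sketch (my own suggestion, under the assumption that you already have the two columns collected as flat lists of equal length, and with hypothetical sample data): zip the columns into rows so they end up side by side:
import csv

words1 = ['10', '20', '30']       # hypothetical column data
words2 = ['1.5', '2.5', '3.5']
with open('out.txt', 'wb') as f:
    writer = csv.writer(f, delimiter=' ')
    writer.writerows(zip(words1, words2))   # one line per row, columns side by side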
|
Python Class Inheritance : dict how to pickle(save, load)
Question: I'm playing around with class inheritance and I'm stuck on how to pickle data
in the class dictionary.
If I dump only the dictionary part of self, then when I load the dictionary back
into self, self takes on the dict type instead of my class. But if I pickle the
whole class instance, then I get an error.
### Error
pickle.PicklingError: Can't pickle <class 'main.model'>: it's not the same object as main.model
### Code
import os, pickle

class model(dict):
    def __init__( self ):
        pass

    def add( self, id, val ):
        self[id] = val

    def delete( self, id ):
        del self[id]

    def save( self ):
        print type(self)
        pickle.dump( dict(self), open( "model.dict", "wb" ) )

    def load( self ):
        print 'Before upacking model.dic, self ==',type(self)
        self = pickle.load( open( "model.dict", "rb" ) )
        print 'After upacking model.dic, self ==',type(self)

if __name__ == '__main__':
    model = model()
    #uncomment after first run
    #model.load()
    #comment after first run
    model.add( 'South Park', 'Comedy Central' )
    model.save()
Answer: If all you want to do is have a class called model that is a subclass of dict
and can be properly pickled and unpickled back to an object that is also of
type model then you don't have to do anything special. The methods you have
defined in your example to add and delete are unnecessary you can just do them
directly on model instances as you would do with any other dict. The save and
load methods can be done using the pickle module instead of on the class
itself.
**code**
import pickle

class model(dict):
    pass

a = model()
pickled = pickle.dumps(a)
b = pickle.loads(pickled)
print type(a), a
print type(b), b
**output**
<class '__main__.model'> {}
<class '__main__.model'> {}
below is another version which is maybe more in line with what you were trying
to achieve. But you should **NOT** do things this way. The load method is
weird so is the save. I put the code below to show it can be done but not
really something you want to do because it will end up being very confusing.
**another version (don't do this)**
import pickle

class model(dict):
    def save(self):
        with open("model.dict", "wb") as f:
            pickle.dump(self, f)

    def load(self):
        with open("model.dict") as f:
            return pickle.load(f)

#comment after first run
test = model()
test['South Park'] = 'Comedy Central'
test.save()
print type(test), test

#uncomment after first run
test2 = model().load()
print type(test2), test2
**Further Reading**
A great example of a subclass of dict that is is picklable is
[collections.OrderedDict](http://docs.python.org/2/library/collections.html#collections.OrderedDict).
It is part of the python standard library and is implemented in python so you
can have a peak at the source. The definition is 172 lines of code so it's not
too much code to look through. It also had to implement the `__reduce__`
method to achieve pickling because it has information about the order of items
that also needs to be pickled and unpickled. It's a good example of why you
might want to make your own subclass of dict, it adds the very useful feature
of respecting the order of values added to the dict.
|
How to process xml files in python
Question: I have a ~1GB XML file that has XML tags that I need to fetch data from. I
have the XML file in the following format (I'm only pasting sample data
because the actual file is about a gigabyte in size).
**report.xml**
<report>
<report-name name="ALL_TIME_KEYWORDS_PERFORMANCE_REPORT"/>
<date-range date="All Time"/>
<table>
<row campaignID="79057390" adGroupID="3451305670" keywordID="3000000" keyword="Content" avgPosition="1.55" cost="252910000" clicks="11" conv1PerClick="0" impressions="7395" day="2012-04-23" currency="INR" account="Virtual Voyage" timeZone="(GMT+05:30) India Standard Time" viewThroughConv="0"/>
<row campaignID="79057390" adGroupID="3451305670" keywordID="3000000" keyword="Content" avgPosition="1.16" cost="0" clicks="0" conv1PerClick="0" impressions="160" day="2012-04-23" currency="INR" account="Virtual Voyage" timeZone="(GMT+05:30) India Standard Time" viewThroughConv="0"/>
<row campaignID="79057390" adGroupID="3451305670" keywordID="3000000" keyword="Content" avgPosition="1.56" cost="0" clicks="0" conv1PerClick="0" impressions="34" day="2012-04-23" currency="INR" account="Virtual Voyage" timeZone="(GMT+05:30) India Standard Time" viewThroughConv="0"/>
</table>
</report>
1. What is the best way to parse/process XML files and fetch the data from xml tags in Python?
2. Are there any frameworks that can process XML files?
3. The method needs to be fast; it needs to finish in less than 100 seconds.
I've been using Hadoop with Python to process XML files and it usually takes
nearly 200 seconds just to process the data... So I'm looking for an
alternative solution that parses the above XML tags and fetches data from the
tags.
Here's the data from the tags in the sense:
campaignID="79057390" adGroupID="3451305670" keywordID="3000000" keyword="Content" avgPosition="1.16" cost="0" clicks="0" ...
After processing the XML file, I will store the data and values
(79057390,3451305670 ...) in a MySQL database. All I need is to be able to
process XML files about 1GB in size and save the processed data to a MySQL
database in less than 100 seconds.
Answer: I recently faced a similar problem. The way I solved it was to use the
iterparse function and lxml; in the end, it comes down to using a SAX-like
parser instead of a DOM-like one. Remember that DOM works in memory while SAX is
event-driven, so you will save a ton of memory using SAX (and that means time
too, because you will not need to wait for the whole document to load before you
can start parsing it).
I think you can use something like this:
import xml.etree.cElementTree as ET

file_path = "/path/to/your/test.xml"
context = ET.iterparse(file_path, events=("start", "end")) #Probably we could use only the start tag
# turn it into an iterator
context = iter(context)
on_members_tag = False
for event, elem in context:
    tag = elem.tag
    value = elem.text
    if value :
        value = value.encode('utf-8').strip()
    if event == 'start' :
        if tag == "row" :
            attribs = elem.attrib
            print "This is the campaignID %s and this is the adGroupID %s" % (attribs['campaignID'] , attribs['adGroupID'])
    elem.clear() #Save memory!
|
Python pygame continuos image loading FPS
Question: Using pygame on a linux machine, continuously loading new images and
displaying them slows down the program.
The input images (400x300) are in PPM format to keep the file size constant
(360K) - not to affect IO and avoid any decompression delays.
It starts off at 50 frames per second and then after around 2 minutes its
around 25 frames per second.
import pygame

pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((800, 600),pygame.FULLSCREEN)
frame=1
while 1:
    image = pygame.image.load(str(frame)+".ppm")
    screen.blit(image,(0,0))
    pygame.display.flip()
    clock.tick(240)
    frame=frame+1
    if(frame%10==0):
        print(clock.get_fps())
What can be done to keep the frame rate more consistent?
Most likely it has something to do with old references to images that need to
be garbage collected. Maybe not.
Is there anyway to load images continuously without creating new objects and
triggering the garbage collector or whatever is slowing the system down?
Answer: After many weeks of pondering, I think I finally figured out what your problem
is. For some reason, the computer must be remembering the old values of
`image`. After the line that blits, put
del image
I'm not entirely sure, but it might work.
|
ImportError: cannot import name BaseHTTPRequestHandler
Question: I'm trying to run the first example from the [python
wiki](http://wiki.python.org/moin/BaseHttpServer), and when I run it with:
$ python BaseHttpServer.py
I get an error **AttributeError: 'module' object has no attribute
'BaseHTTPRequestHandler'**.
I tested it on Python 2.7.3 on Linux Mageia 2 64-bit:
Traceback (most recent call last):
File "BaseHTTPServer.py", line 9, in <module>
import BaseHTTPServer
File "/home/vanveber/BaseHttpServer/BaseHTTPServer.py", line 14, in <module>
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
AttributeError: 'module' object has no attribute 'BaseHTTPRequestHandler'
And this on Python 2.7.3 on Windows 7 64-bit:
Traceback (most recent call last):
File "BaseHTTPServer.py", line 11, in <module>
from BaseHTTPServer import BaseHTTPRequestHandler
File "C:\BaseHttpServer\BaseHTTPServer.py", line 11, in <module>
from BaseHTTPServer import BaseHTTPRequestHandler
ImportError: cannot import name BaseHTTPRequestHandler
**BUT!**
1. BaseHTTPServer is from the **standard Python library**.
2. If I write and run this code from the **Python GUI on Windows**, it works correctly!
What is the problem, and why?!
Answer: **Solution: Rename the python file.**
**Explanation:** BaseHTTPServer is a module in the standard library. When you
have a python file called BaseHTTPServer.py in your local directory, you will
hide the standard library module, and you can no longer import it, because the
statement
import BaseHTTPServer
will not import the standard library module, but the local BaseHTTPServer.py
module.
|
How can I iterate over a data file without code duplication in python?
Question: I want to write a script to process some data files. The data files are just
ascii text with columns of data, here is a simple example...
The first column is an ID number, in this case from 1 to 3. The second column
is a value of interest. (The actual files I'm using have many more IDs and
values, but let's keep it simple here).
data.txt contents:
1 5
1 4
1 10
1 19
2 15
2 18
2 20
2 21
3 50
3 52
3 55
3 70
I want to iterate over the data and extract the values for each ID, and
process them, i.e. get all values for ID 1 and do something with them, then
get all values for ID 2 etc.
So I can write this in python.
#!/usr/bin/env python

def processValues(values):
    print "Will do something with data here: ", values

f = open('data.txt', 'r')
datalines = f.readlines()
f.close()

currentID = 0
first = True
for line in datalines:
    fields = line.split()
    # if we've moved onto a new ID,
    # then process the values we've collected so far
    if (fields[0] != currentID):
        # but if this is our first iteration, then
        # we just need to initialise our ID variable
        if (not first):
            processValues(values) # do something useful
        currentID = fields[0]
        values = []
        first = False
    values.append(fields[1])
processValues(values) # do something with the last values
The problem I have is that `processValues()` must be called again at the end.
So this requires code duplication, and means that I might one day write a
script like this and forget to put the extra `processValues()` at the end, and
therefore miss the last ID. It also requires storing whether it is our 'first'
iteration, which is annoying.
Is there any way to do this without having two function calls to
`processValues()` (one inside the loop for each new ID, one after the loop for
the last ID)?
The only way I can think of is by storing the line number and checking in the
loop if we're at the last line. But it seems that removes the point of the
'foreach' style processing where we store the line itself, and not the index
or the total number of lines. This would also apply to other scripting
languages like perl, where it would be common to iterate over lines with
`while(<FILE>)` and not have an idea of the number of lines remaining. Is it
always necessary to write the function call again at the end?
Answer: You want to look at
[itertools.groupby](http://docs.python.org/2/library/itertools.html#itertools.groupby)
if all occurrences of a key are contiguous - a basic example...
from itertools import groupby
from operator import itemgetter
with open('somefile.txt') as fin:
lines = ( line.split() for line in fin )
for key, values in groupby(lines, itemgetter(0)):
print 'Key', key, 'has values'
for value in values:
print value
Alternatively - you can also look at using a
[collections.defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict)
with a `list` as the default.
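A sketch of that `defaultdict` alternative; it also works when the lines for a given ID are not contiguous, at the cost of holding every value in memory:
    from collections import defaultdict
    groups = defaultdict(list)
    with open('somefile.txt') as fin:
        for line in fin:
            key, value = line.split()
            groups[key].append(value)   # missing keys get a fresh empty list automatically
    for key, values in groups.items():
        print 'Key', key, 'has values', values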
|
Alter each for-loop in a function to have error handling executed automatically after each failed iteration
Question: This question follows from [catch errors within generator and continue
afterwards](http://stackoverflow.com/questions/13645112/catch-errors-within-
generator-and-continue-afterwards)
I have about 50 similar (but different) functions which try to extract URLs
and such from websites. Because each website is different, each function is
different and because websites tend to change over time this code is messy and
cannot be trusted.
Here's a simplified sample, or look at the sample in the first question
def _get_units(self):
for list1 in self.get_list1():
for list2 in self.get_list2(list1):
for unit in list2:
yield unit
What I want to do with this function is essentially change the behavior to
match this:
def _get_units(self):
for list1 in self.get_list1():
try:
for list2 in self.get_list2(list1):
try:
for unit in list2:
try:
yield unit
except Exception as e:
log_exception(e)
except Exception as e:
log_exception(e)
except Exception as e:
log_exception(e)
In short, I want to turn this
for x in list:
do_stuff(x)
to this:
for x in list:
try:
do_stuff(x)
except Exception as e:
log_exception(e)
for each `for` in my functions.
But I want to do it in a pythonic way. I don't want `try:except` blocks
scattered all over the 50 functions I need to alter. Is this possible? If so,
how can I do it in the most DRY way, and can I do this with the error handling
in one place?
**UPDATE:** this question formerly included a `continue` statement along with
the logging, but as mgilson pointed out, this isn't necessary.
**UPDATE 2** with georgesl's answer the function becomes as follows:
from contextlib import contextmanager
@contextmanager
def ErrorManaged():
try:
yield
except Exception as e:
log_exception(e)
def _get_units(self):
for list1 in self.get_list1():
with ErrorManaged():
for list2 in self.get_list2(list1):
with ErrorManaged():
for unit in list2:
with ErrorManaged():
yield unit
which is a lot cleaner indeed. Though, a mere decorator would be even better.
Can anyone tell me if this is possible? If not, I'll accept georgesl's answer.
Answer: I might "decorate" the functions themselves. Presumably you have them stored
in a list or something if you're living by DRY principles:
def decorate_function(func):
def decorated(x):
try:
return func(x)
except Exception as e:
log_error(e)
return decorated
Now you can just decorate your functions with this and it will log your
errors. Note that this assumes that the `continue` statement above is
unnecessary. It looks like it isn't really used to me, but I might be missing
something.
If the functions really don't return something, then you could return `True`
or `False` depending on if you hit an exception. You could use that to write
your `continue` logic. Something like:
if not decorated_function(x): continue
|
What happens to file descriptors in Python 3 when .close() fails?
Question: The following Python 3 code exhibits some strange behavior (to me, at least)
when I run it through `strace`:
import os
import sys
if len(sys.argv) != 2:
print('Usage: ecpy <filename>')
sys.exit(1)
try:
print('my PID: %d' % os.getpid())
with open(sys.argv[1], 'w') as fp:
try:
fp.write('Hello Stack Overflow!')
except IOError as e:
print('### before close')
print(str(e))
sys.stdout.flush()
except IOError as e:
print('### after close')
print(str(e))
sys.stdout.flush()
print('### after exception block')
sys.stdout.flush()
Since I/O is buffered, if you run this code with `/dev/full`, it doesn't fail
until `fp` closes at the end of the `with` block. That's no surprise. In
Python 2.7.3rc2 (on my system), the code runs the exception handler after
actually closing the file descriptor corresponding to `fp`:
write(3, "Hello Stack Overflow!", 21) = -1 ENOSPC (No space left on device)
close(3) = 0
munmap(0x7f9de3f78000, 4096) = 0
write(1, "### after close\n", 16) = 16
write(1, "[Errno 28] No space left on devi"..., 35) = 35
write(1, "### after exception block\n", 26) = 26
However, in Python 3.2.3 (on my system), the file descriptor is still open
after the exception block runs:
write(3, "Hello Stack Overflow!", 21) = -1 ENOSPC (No space left on device)
write(1, "### after close\n", 16) = 16
write(1, "[Errno 28] No space left on devi"..., 35) = 35
write(1, "### after exception block\n", 26) = 26
...
write(3, "Hello Stack Overflow!", 21) = -1 ENOSPC (No space left on device)
write(3, "Hello Stack Overflow!", 21) = -1 ENOSPC (No space left on device)
close(3) = 0
The interpreter tries to write to the file a few more times and fails
silently. When does Python actually call `close()`? What is invoking it? This
behavior seems to leak a file descriptor.
Answer: I took the liberty of submitting an issue to bugs.python.org, let's wait and
see if it pans out.
<http://bugs.python.org/issue16597>
EDIT: Looks like it's a bug, good catch!
|
Is "cgi.escape" necessary in a Python application?
Question: Was looking over a developer's code. He did something that I have never seen
before in a Python application. His background is in PHP and is just learning
python, so I don't know if this is perhaps a holdover from the different
system architectures that he is used to working with.
He told me that the purpose of this code is to prevent the user from attacking
the application via code insertion. I'm pretty sure this is unnecessary for
our use case since we are never evaluating the data as code, but I just wanted
to make sure and ask the community.
# Import library
from cgi import escape
# Get information that the client submitted
fname = GET_request.get('fname', [''] )[0]
# Make sure client did not submit malicious code <- IS THIS NECESSARY?
if fname:
fname = escape(fname)
* Is this typically necessary in a Python application?
* In what situations is it necessary?
* In what situations is it not necessary?
Answer: If user input is going into a database, or anywhere else it might be executed,
then code injection could be a problem.
[This](http://stackoverflow.com/questions/60174/best-way-to-prevent-sql-
injection?lq=1) question asks about ways to prevent code injection in php, but
the principle is the same - SQL queries containing malicious code get
executed, potentially doing things like deleting all your data.
The [escape](http://docs.python.org/2/library/cgi.html#cgi.escape) function
converts <, > and & characters into html-safe sequences.
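For illustration, this is roughly what `escape` does (the expected output is shown in comments; note that quote characters are only escaped if you pass `quote=True`):
    from cgi import escape
    print escape('<script>alert("hi")</script>')
    # &lt;script&gt;alert("hi")&lt;/script&gt;
    print escape('Tom & "Jerry"', quote=True)
    # Tom &amp; &quot;Jerry&quot;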
From those two links it doesn't look like escape() is enough on its own, but
something does need to be done to stop malicious code. Of course this may well
be being taken care of elsewhere in your code.
|
How to add multiple files to py2app?
Question: I have a python script which makes a GUI. When a button 'Run' is pressed in
this GUI it runs a function from an imported package (which I made) like this
from predictmiP import predictor
class MiPFrame(wx.Frame):
[...]
def runmiP(self, event):
predictor.runPrediction(self.uploadProtInterestField.GetValue(), self.uploadAllProteinsField.GetValue(), self.uploadPfamTextField.GetValue(), \
self.edit_eval_all.Value, self.edit_eval_small.Value, self.saveOutputField)
When I run the GUI directly from python it all works well and the program
writes an output file. However, when I make it into an app, the GUI starts but
when I press the button nothing happens. predictmiP does get included in
build/bdist.macosx-10.3-fat/python2.7-standalone/app/collect/, like all the
other imports I'm using (although it is empty, but that's the same as all the
other imports I have).
How can I get multiple python files, or an imported package to work with
py2app?
my setup.py:
""" This is a setup.py script generated by py2applet
Usage: python setup.py py2app """
from setuptools import setup
APP = ['mip3.py']
DATA_FILES = []
OPTIONS = {'argv_emulation': True}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app'],
)
* * *
edit:
It looked like it worked, but it only works for a little. From my GUI I call
blast.makeBLASTdb(self.uploadAllProteinsField.GetValue(), 'allDB')
# to test if it's working
dlg = wx.MessageDialog( self, "werkt"+self.saveOutputField, "werkt", wx.OK)
dlg.ShowModal() # Show it
dlg.Destroy() # finally destroy it when finished.
blast.makeBLASTdb looks like this:
def makeBLASTdb(proteins_file, database_name):
subprocess.call(['/.'+os.path.realpath(__file__).rstrip(__file__.split('/')[-1])+'blast/makeblastdb', '-in', proteins_file, '-dbtype', 'prot', '-out', database_name])
This function gets called, makeblastdb which I call through subprocess does
output a file. However, the program does not continue,
dlg = wx.MessageDialog( self, "werkt"+self.saveOutputField, "werkt", wx.OK)
dlg.ShowModal() # Show it
in the next lines never gets executed.
Answer: Your setup.py does not tell py2app about your package; it should
resemble something like:
from setuptools import setup
OPTIONS = {'packages' : ['predictmiP']}
    setup(app=['someapp.py'], options={'py2app' : OPTIONS},
          setup_requires=['py2app'])
Or maybe you are looking for `OPTIONS['includes']` ? Or maybe
`OPTIONS['frameworks']` ?
|
How to test functions that throw exceptions
Question: > **Possible Duplicate:**
> [How do you test that a Python function throws an
> exception?](http://stackoverflow.com/questions/129507/how-do-you-test-that-
> a-python-function-throws-an-exception)
I have to do white-box and black-box testing, so I'm wondering how it is
possible to test a function that throws an exception, like this one:
class validator_client():
def validate_client(self,client):
erori=[]
if client.get_identitate()=="":
erori.append("Nu exista ID!")
if client.get_nume()=="":
erori.append("Nu exista nume!")
if (client.get_filme_inchiriate()!="da" ) and (client.get_filme_inchiriate()!="nu") :
erori.append("Campul 'Filme inchiriate' completat gresit!")
if len(erori)>0:
raise ValidatorException(erori)
I've read something about assertRaises(), but I cannot import the module with
this method. I found this on Stack Overflow:
from testcase import TestCase
import mymod
class MyTestCase(TestCase):
def test1(self):
self.assertRaisesWithMessage(SomeCoolException,
'expected message',
mymod.myfunc)
but I'm not able to make it work.
Answer: This is a working example that is a simplified version of what you will find
in the python docs. It checks that `random.shuffle` raises a `TypeError`
exception.
import random, unittest
class TestSequenceFunctions(unittest.TestCase):
def test_shuffle(self):
self.assertRaises(TypeError, random.shuffle, (1,2,3))
unittest.main()
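Applied to the validator from the question, a test could look like the sketch below. `FakeClient` is a hypothetical stub standing in for whatever client class the project really uses, and `validators` is a hypothetical module name; adjust both to match your code.
    import unittest
    from validators import validator_client, ValidatorException  # hypothetical module name
    class FakeClient(object):
        # hypothetical stub: returns values that should all fail validation
        def get_identitate(self): return ""
        def get_nume(self): return ""
        def get_filme_inchiriate(self): return "maybe"
    class ValidatorClientTest(unittest.TestCase):
        def test_invalid_client_raises(self):
            v = validator_client()
            self.assertRaises(ValidatorException, v.validate_client, FakeClient())
    unittest.main()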
|
How to change the layout of a Gtk application on fullscreen?
Question: I'm developing another image viewer using Python and Gtk and the viewer is
currently very simple: it consists of a `GtkWindow` with a `GtkTreeView` on
the left side displaying the list of my images, and a `GtkImage` on the right
side, displaying the actual image. So far, so good.
Now, I would like to go fullscreen, and display only the image with a black
background, etc.
I can see several ways of doing this:
  * I can hide() the window, and display a big `GtkImage` instead, but I'm losing all the stuff that I set up on the window before (signals for example), and I'm hiding the window which goes fullscreen, which is a bit weird;
* I can hide() the content of the window, remove the content of the window (which can have only one child, as far as I know?), and put a `GtkImage` inside instead (and do the reverse on exiting fullscreen);
* I can try to play with the containers inside my window, and hiding/showing their content when the window is going fullscreen/exiting fullscreen. More precisely, I was thinking of adding another `GtkHBox` as a direct child of my window, with two child, and displaying only the first one on "normal" mode, and only the second one on fullscreen.
That all seem a bit hacky, so I wonder what would be the recommended way to
handle this kind of situation. Thanks!
Answer: I think the simplest way to implement this is to have one layout with all your
widgets setup and the signals setup. Then when you toggle in and out off
fullscreen you have a set of widgets that you make visible and not visible.
Try out the demonstration below. It's a simple implementation that goes in and
out of fullscreen when you press F11. An HBox is used to make the layout,
which contains a label on the left and an image on the right. I've filled the
label with some dummy text so that it takes up a good amount of space. As you
toggle in and out of fullscreen it will toggle the visibility of the label and
thus make the image either take the full screen real estate or share it with
the label. I just used one of the stock images that comes with gtk for
demonstration purposes. Below are two screenshots showing the layout in and
out of fullscreen.
**Code**
import gtk
def keypress(win, event):
if event.keyval == gtk.keysyms.F11:
win.is_fullscreen = not getattr(win, 'is_fullscreen', False)
action = win.fullscreen if win.is_fullscreen else win.unfullscreen
action()
label.set_visible(not win.is_fullscreen)
win = gtk.Window()
win.connect("delete-event", gtk.main_quit)
win.connect('key-press-event', keypress)
image = gtk.image_new_from_stock(gtk.STOCK_ABOUT, gtk.ICON_SIZE_DIALOG)
label = gtk.Label(('test ' * 20 + '\n') * 20)
vbox = gtk.HBox()
vbox.add(label)
vbox.add(image)
win.add(vbox)
win.show_all()
gtk.main()
**Normal window**

**Full screen**

|
Why am I getting DistributionNotFound error when I try to run Pyramid project?
Question: **I installed in my new Windows 8 (x64):**
* python-2.7
* pywin32-218.win32-py2.7
* setuptools-0.6c11.win32-py2.7
* and pyramid (via easy_install)
I tried to run my pyramid project:
pserve I:\Projects\PyramidProject\development.ini
and pkg_resources.DistributionNotFound(req) was raised:

    I:\Projects\MyProject>pserve development.ini
    Traceback (most recent call last):
      File "C:\Python27\Scripts\pserve-script.py", line 9, in <module>
        load_entry_point('pyramid==1.4b1', 'console_scripts', 'pserve')()
      File "C:\Python27\lib\site-packages\pyramid-1.4b1-py2.7.egg\pyramid\scripts\pserve.py", line 50, in main
        return command.run()
      File "C:\Python27\lib\site-packages\pyramid-1.4b1-py2.7.egg\pyramid\scripts\pserve.py", line 301, in run
        relative_to=base, global_conf=vars)
      File "C:\Python27\lib\site-packages\pyramid-1.4b1-py2.7.egg\pyramid\scripts\pserve.py", line 332, in loadserver
        server_spec, name=name, relative_to=relative_to, **kw)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 255, in loadserver
        return loadobj(SERVER, uri, name=name, **kw)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 271, in loadobj
        global_conf=global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 296, in loadcontext
        global_conf=global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 320, in _loadconfig
        return loader.get_context(object_type, name, global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 454, in get_context
        section)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 476, in _context_from_use
        object_type, name=use, global_conf=global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 406, in get_context
        global_conf=global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 296, in loadcontext
        global_conf=global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 328, in _loadegg
        return loader.get_context(object_type, name, global_conf)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 620, in get_context
        object_type, name=name)
      File "C:\Python27\lib\site-packages\pastedeploy-1.5.0-py2.7.egg\paste\deploy\loadwsgi.py", line 640, in find_egg_entry_point
        pkg_resources.require(self.spec)
      File "C:\Python27\lib\site-packages\distribute-0.6.32-py2.7.egg\pkg_resources.py", line 690, in require
        needed = self.resolve(parse_requirements(requirements))
      File "C:\Python27\lib\site-packages\distribute-0.6.32-py2.7.egg\pkg_resources.py", line 588, in resolve
        raise DistributionNotFound(req)
    pkg_resources.DistributionNotFound: Paste
my development.ini:
[app:MyProject]
use = egg:MyProject
pyramid.reload_all = true
pyramid.reload_templates = true
pyramid.reload_assets = true
pyramid.debug_all = false
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.debug_templates = true
mako.directories = app:view
mako.module_directory = %(here)s/cache/templates/
#mako.cache_type = file
#mako.cache_enabled = False
#mako.cache_dir = %(here)s/cache/view/
#mako.cache_impl = beaker
#mako.cache_timeout = 60
mako.input_encoding = utf-8
mako.imports = from markupsafe import escape_silent
mako.default_filters = escape_silent
#mako.error_handler =
sqlalchemy.url = mysql://user:[email protected]/MyProject_dev?charset=utf8
sqlalchemy.pool_recycle = 3600
beaker.session.type = file
beaker.session.cookie_expires = True
beaker.session.cookie_domain = MyProject.pl
beaker.session.data_dir = %(here)s/data/sessions/data
beaker.session.lock_dir = %(here)s/data/sessions/lock
beaker.session.key = MyProject.pl
beaker.session.secret = 57b0d7ff4c665d87e3c3745c2abf519ca7d4082a
beaker.session.validate_key = 57b0d7ff
beaker.cache.enabled = True
beaker.cache.type = memory
beaker.cache.data_dir = %(here)s/cache/data
beaker.cache.lock_dir = %(here)s/cache/lock
beaker.cache.regions = default_term, second_term, minute_term, hour_term, day_term, month_term, short_term, middle_term, long_term,
beaker.cache.second_term.expire = 1
beaker.cache.minute_term.expire = 60
beaker.cache.hour_term.expire = 3600
beaker.cache.day_term.expire = 86400
beaker.cache.month_term.expire = 2678400
beaker.cache.default_term.expire = 60
beaker.cache.short_term.expire = 30
beaker.cache.middle_term.expire = 300
beaker.cache.long_term.expire = 86400
site.front.default_skin = MyProject
site.front.available_skins = default mobile MyProject
default_skin = MyProject
available_skins = default mobile MyProject
[filter:tm]
use = egg:repoze.tm2#tm
commit_veto = repoze.tm:default_commit_veto
[pipeline:main]
pipeline =
egg:WebError#evalerror
tm
MyProject
[server:main]
use = egg:Paste#http
host = 127.0.0.1
port = 5000
# Begin logging configuration
[loggers]
keys = root, MyProject, sqlalchemy
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = INFO
handlers = console
[logger_MyProject]
level = DEBUG
handlers =
qualname = MyProject
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
# End logging configuration
Have you any idea, what did I wrong?
Answer: You need to run `easy_install paste` and `python setup.py develop` in your
virtualenv.
On a side note, you are using a very old version of pyramid. I know this
because your project uses weberror, repoze.tm2 and paste. If you have the
time, I'd suggest you look into upgrading to some best practices.
|
Rotating an object around its center using VPython
Question: I'm making a moonlander game using VPython. The moonlander itself must be able
to rotate around its own center, but it rotates around the wrong axis and/or
around its original position instead. The axis and the point it rotates around
don't change when the moonlander's position changes, and I don't know how to
fix this.
The moonlander is made of curves, but I have a frame that contains the
moonlander. Here is the code (sorry about some of the code being in Dutch, I
hope you understand enough):
from visual import *
from math import *
from random import randint
maanoppervlak = curve(pos=[(-20,-2),(-18,-5)],color=color.red)
maanoppervlak.append(pos=(-18,-5))
maanoppervlak.append(pos=(-15,-4),color=color.red)
maanoppervlak.append(pos=(-15,-4))
maanoppervlak.append(pos=(-14,-4),color=color.red)
maanoppervlak.append(pos=(-14,-4))
maanoppervlak.append(pos=(-13,-15),color=color.red)
maanoppervlak.append(pos=(-13,-15), color=color.green)
maanoppervlak.append(pos=(-7,-15),color=color.green)
maanoppervlak.append(pos=(-7,-15),color=color.red)
maanoppervlak.append(pos=(-3,4),color=color.red)
maanoppervlak.append(pos=(-3,4))
maanoppervlak.append(pos=(-1,0),color=color.red)
maanoppervlak.append(pos=(-1,0))
maanoppervlak.append(pos=(3,-2),color=color.red)
maanoppervlak.append(pos=(3,-2),color=color.green)
maanoppervlak.append(pos=(8,-2),color=color.green)
maanoppervlak.append(pos=(8,-2),color=color.red)
maanoppervlak.append(pos=(13,-8),color=color.red)
maanoppervlak.append(pos=(13,-8))
maanoppervlak.append(pos=(16,-5),color=color.red)
maanoppervlak.append(pos=(16,-5))
maanoppervlak.append(pos=(19,-12),color=color.red)
maanoppervlak.append(pos=(19,-12))
maanoppervlak.append(pos=(20,-7), color=color.red)
print scene.width
    f = frame() # create the frame that contains the moonlander
f.axis = (1,0,0)
f.pos = vector(-4,-10,0)
maanlander = curve(frame=f, pos=[(1,1),(2,1),(1.5,1),(1.5,2),(2,2.5),(2,3.5),(2.75,3.5),
(2.25,4),(2.25,4.5),(2.75,5),(3.75,5),(4.25,4.5),
(4.25,4),(3.75,3.5),(4.5,3.5),(4.5,2.5),(3.5,2.5),(3.75,2),
(2.75,2),(3,2.5),(2,2.5),(4.5,2.5),(5,2),(5,1),(4.5,1),
(5.5,1)],color=color.blue)
    maanlander.accel = vector(0,0) # acceleration of the moonlander
    f.accel = vector(0,0) # acceleration of the frame (this one or the one above is probably redundant)
    f.velocity = vector(0,0) # velocity of the frame
deltat = 0.005
    state = true
    while state == true: # start of the loop
        rate(100)
        f.velocity = f.velocity + maanlander.accel*deltat # update the velocity using the acceleration
        maanlander.pos = maanlander.pos + f.velocity*deltat # update the position using the velocity
        if scene.kb.keys: # is there an event waiting to be processed?
            s = scene.kb.getkey() # obtain keyboard information
            if s == 'left': # the left key was pressed
                f.rotate(angle=pi/15., axis=(0,0,1), origin=(f.pos + vector(2.75,2.5,0))) # the object should rotate around its own center (does not work yet)
            elif s == 'right':
                f.rotate(angle=-pi/15., axis=(0,0,1), origin=vector(2.75,2.5,0)) # same as left, but in the opposite direction
            elif s == 'up': # 'boost'
                maanlander.accel = maanlander.accel + vector(0,10)
So, how can you rotate the moonlander, and also when it has a different
position than its original one?
Thanks in advance
Answer: Ignacio is saying translate your moonlander to the origin (the `center`) then
rotate everything about the center, then translate it back to where it was.
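A minimal sketch of that idea as a drop-in replacement for the key handler above. It assumes the lander's center sits at roughly (2.75, 2.5, 0) in the frame's local coordinates, and that your VPython version provides `frame.frame_to_world`, which converts a frame-local point to world coordinates so the pivot stays on the center even after earlier rotations. Note also that in the question's loop the lander is moved by shifting `maanlander.pos` inside the frame; if you instead move `f.pos`, the frame (and this local center) travels with the lander.
    # assumed frame-local center of the lander
    center_local = vector(2.75, 2.5, 0)
    if s == 'left':
        center_world = f.frame_to_world(center_local)   # "translate": find the center in world coordinates
        f.rotate(angle=pi/15., axis=(0, 0, 1), origin=center_world)   # rotate about that center
    elif s == 'right':
        f.rotate(angle=-pi/15., axis=(0, 0, 1), origin=f.frame_to_world(center_local))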
|
Python: Searching for Unicode string in HTML with index/find returns wrong position
Question: I am trying to parse the number of results from the HTML code returned from a
search query, however when I use find/index() it seems to return the wrong
position. The string I am searching for has an accent, so I try searching for
it in Unicode form.
A snippet of the HTML code being parsed:
<div id="WPaging_total">
Aproximádamente 37 resultados.
</div>
and I search for it like this:
str_start = html.index(u'Aproxim\xe1damente ')
str_end = html.find(' resultados', str_start + 16)#len('Aproxim\xe1damente ')==16
print html[str_start+16:str_end] #works by changing 16 to 24
The print statement returns:
damente 37
When the expected result is:
37
It seems str_start isn't starting at the beginning of the string I am
searching for, instead 8 positions back.
print html[str_start:str_start+5]
Outputs:
l">
The problem is hard to replicate though because it doesn't happen when using
the code snippet, only when searching inside the entire HTML string. I could
simply change str_start+16 to str_start+24 to get it working as intended,
however that doesn't help me understand the problem. Is it a Unicode issue?
Hopefully someone can shed some light on the issue.
Thank you.
**LINK:**
[http://guiasamarillas.com.mx/buscador/?actividad=Chedraui&localidad=&id_page=1](http://guiasamarillas.com.mx/buscador/?actividad=Chedraui&localidad=&id_page=1)
**SAMPLE CODE** :
from urllib2 import Request, urlopen
url = 'http://guiasamarillas.com.mx/buscador/?actividad=Chedraui&localidad=&id_page=1'
post = None
headers = {'User-Agent':'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2)'}
req = Request(url, post, headers)
conn = urlopen(req)
html = conn.read()
str_start = html.index(u'Aproxim\xe1damente ')
str_end = html.find(' resultados', str_start + 16)
print html[str_start+16:str_end]
Answer: Your problem ultimately boils down to the fact that in Python 2.x, the `str`
type represents a sequence of bytes while the `unicode` type represents a
sequence of characters. Because one character can be encoded by multiple
bytes, that means that the length of a `unicode`-type representation of a
string may differ from the length of a `str`-type representation of the same
string, and, in the same way, an index on a `unicode` representation of the
string may point to a different part of the text than the same index on the
`str` representation.
What's happening is that when you do `str_start =
html.index(u'Aproxim\xe1damente ')`, Python automatically decodes the `html`
variable, assuming that it is encoded in utf-8. (Well, actually, on my PC I
simply get a `UnicodeDecodeError` when I try to execute that line. Some of our
system settings relating to text encoding must be different.) Consequently, if
`str_start` is n then that means that `u'Aproxim\xe1damente '` appears at the
_nth character_ of the HTML. However, when you use it as a slice index later
to try and get content after the (n+16)th character, what you're actually
getting is stuff after the _(n+16)th byte_ , which in this case is not
equivalent because earlier content of the page featured accented characters
that take up 2 bytes when encoded in utf-8.
The best solution would be simply to convert the html to unicode when you
receive it. This small modification to your sample code will do what you want
with no errors or weird behaviour:
from urllib2 import Request, urlopen
url = 'http://guiasamarillas.com.mx/buscador/?actividad=Chedraui&localidad=&id_page=1'
post = None
headers = {'User-Agent':'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2)'}
req = Request(url, post, headers)
conn = urlopen(req)
html = conn.read().decode('utf-8')
str_start = html.index(u'Aproxim\xe1damente ')
str_end = html.find(' resultados', str_start + 16)
print html[str_start+16:str_end]
|
gtkSwitch in pygtk 2 on windows
Question: I'm creating a python program to switch some hardware devices on and off. The
program has to run on windows. I would really like to use the [gtkSwitch
widget](http://developer.gnome.org/gtk3/3.0/GtkSwitch.html) that was created
in gtk 3.0, because then it is immediately obvious that you want a device to
be switched on or off, but unfortunately gtk3 has not been properly ported to
windows for python. So is there any way to use the gtkswitch that comes with
gtk 3.0 without having to write the program in gtk3, or does anyone know of a
way to use python bindings for gtk 3.0 on windows?
thanks a lot!
Dirk Boonzajer
Answer: please see my code
<https://gist.github.com/shellexy/2f86cfa4a0448f7125e8>
I use gtk.ToggleButton to simulate the gtk.Switch button.
<http://i.stack.imgur.com/fLvq7.png>
gtkswitchbutton.py
#!/usr/bin/python
# -*- coding: UTF-8 -*-
# vim:set shiftwidth=4 tabstop=4 expandtab textwidth=79:
'''GtkSwitch for PyGtk2
@author: Shellexy
@license: LGPLv3+
@see:
'''
import gtk, gobject
try: import i18n
except: from gettext import gettext as _
class SwitchButton(gtk.Table):
__gtype_name__ = 'SwitchButton'
__gsignals__ = {
'toggled': (gobject.SIGNAL_RUN_LAST, None, ()),
}
def __init__(self):
gtk.Table.__init__(self)
self.label = _("ON\tOFF")
self.tbutton = gtk.ToggleButton()
self.tbutton.set_label(self.label)
self.cbutton = gtk.Button()
self.cbutton.set_sensitive(True)
self.cbutton.set_can_focus(False)
self.cbutton.set_can_default(False)
self.attach(self.cbutton, 0, 2, 0, 1)
self.attach(self.tbutton, 0, 4, 0, 1)
self.tbutton.connect('toggled', self.do_switch)
self.cbutton.connect('clicked', lambda *args: self.clicked)
#self.connect('realize', self.do_switch)
self.set_active(False)
self.show_all()
pass
def toggled(self, *args):
return self.clicked()
def clicked(self, *args):
return self.tbutton.clicked()
def set_inconsistent(self, setting):
return self.tbutton(setting)
def set_active(self, is_active):
return gobject.idle_add(self.tbutton.set_active, not is_active)
def get_active(self):
return not self.tbutton.get_active()
def do_switch(self, *args):
return gobject.idle_add(self._do_switch, *args)
def _do_switch(self, *args):
t = self
tb = self.tbutton
b = self.cbutton
l = tb.get_child()
l.set_justify(gtk.JUSTIFY_FILL)
bs = tb.get_style().copy()
ls = l.get_style().copy()
bs.bg[gtk.STATE_NORMAL] = ls.bg[gtk.STATE_SELECTED]
ls.fg[gtk.STATE_NORMAL] = ls.text[gtk.STATE_SELECTED]
if self.get_children():
t.remove(b); t.remove(tb)
pass
if self.get_active():
t.attach(b, 2, 4, 0, 1) ; t.attach(tb, 0, 4, 0, 1)
bs.bg[gtk.STATE_PRELIGHT] = ls.bg[gtk.STATE_SELECTED]
ls.fg[gtk.STATE_PRELIGHT] = ls.text[gtk.STATE_SELECTED]
pass
else:
t.attach(b, 0, 2, 0, 1) ; t.attach(tb, 0, 4, 0, 1)
bs.bg[gtk.STATE_PRELIGHT] = ls.bg[gtk.STATE_INSENSITIVE]
ls.fg[gtk.STATE_PRELIGHT] = ls.text[gtk.STATE_NORMAL]
pass
tb.set_style(bs)
l.set_style(ls)
tb.do_focus(tb, 1)
self.emit('toggled')
pass
def main():
window = gtk.Window()
window.set_title('PyGtk2 SwitchButton')
options = [
['Connect: ', False],
['Auto Connect: ', True],
['Auto Proxy: ', True],
]
vbox = gtk.VBox()
vbox.set_spacing(5)
def foo(*args):
print args
for k, v in options:
s = SwitchButton()
s.set_active(v)
s.connect('toggled', foo, k)
hbox = gtk.HBox()
label = gtk.Label(k)
label.set_alignment(0, 0.5)
hbox.pack_start(label, 1, 1, 0)
hbox.pack_start(s, 0, 0, 0)
vbox.pack_start(hbox, 0, 0, 0)
pass
window.add(vbox)
window.show_all()
window.connect('destroy', lambda *args: gtk.main_quit())
gtk.main()
pass
if __name__=="__main__":
main()
|
Finding the optimal set of pairwise alignments from a similarity matrix, with some wrinkles, in Python
Question: I am trying to find the best alignment of two sequences from a similarity
matrix. Higher values indicate better alignments.
import numpy as np
a = np.array([
[0,5,5,5,5,5,5,5],
[3,10,0,0,0,9,0,0],
[3,0,10,0,0,0,1,0],
[3,0,0,9,0,0,0,0],
[3,0,0,0,0,0,0,10],
])
Each row/column must be aligned to exactly one column/row, except for row 0
and column 0, which may be used in 0 or more alignments.
That is, the best alignment for these sequences is:
(0,0)
(1,1)
(2,2)
(3,3)
(0,4)
(0,5)
(0,6)
(4,7)
`(1,5)` is not an aligned pair, because `(1,1)` is a better alignment, and
rows and columns > 0 can only participate in one alignment.
Any suggestions are appreciated.
Answer: I am changing the problem slightly. Now each column must be aligned to exactly
one row, and each row must be aligned to exactly one column. To make the
matrix square, I am adding rows/columns that are equivalent to row/column 0,
which previously could participate in multiple alignments. With a square
matrix and a requirement that each row/column participate in only one
alignment, the problem can be treated as an instance of the stable marriage
problem.
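As a sketch of the padding idea, here is one way to solve the squared-off matrix with scipy's assignment-problem solver (`linear_sum_assignment`) rather than a stable-marriage solver; this is an assumption about how the approach might be implemented, not the author's code. Padded rows are copies of row 0, so they behave like the "free" row that may absorb leftover columns.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    a = np.array([
        [0, 5, 5, 5, 5, 5, 5, 5],
        [3,10, 0, 0, 0, 9, 0, 0],
        [3, 0,10, 0, 0, 0, 1, 0],
        [3, 0, 0, 9, 0, 0, 0, 0],
        [3, 0, 0, 0, 0, 0, 0,10],
    ])
    rows, cols = a.shape
    padded = np.vstack([a] + [a[0:1]] * (cols - rows))   # make the matrix square
    row_ind, col_ind = linear_sum_assignment(-padded)    # negate to maximize similarity
    for r, c in zip(row_ind, col_ind):
        orig_row = r if r < rows else 0                  # padded rows map back to row 0
        print((orig_row, c))
On the example matrix this reproduces the alignment listed in the question, including (0,4), (0,5) and (0,6) for the "free" row.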
|
application error on google app engine launcher
Question: I am getting this error while running an application on the Google App Engine
Launcher. This is the error I am getting. I have tried reinstalling, but it
didn't solve the error. Please tell me where I am going wrong.
<type 'exceptions.ImportError'> Python 2.7.3: C:\Python27\pythonw.exe
Sun Dec 02 11:43:06 2012
A problem occurred in a Python script. Here is the sequence of function calls leading up to the error, in the order they occurred.
C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py in _HandleRequest(self=<google.appengine.tools.dev_appserver.DevAppServerRequestHandler instance>)
2988 outfile = cStringIO.StringIO()
2989 try:
=> 2990 self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
2991 finally:
2992 self.module_manager.UpdateModuleFileModificationTimes()
self = <google.appengine.tools.dev_appserver.DevAppServerRequestHandler instance>, self._Dispatch = <bound method DevAppServerRequestHandler._Dispat...v_appserver.DevAppServerRequestHandler instance>>, dispatcher = <google.appengine.tools.dev_appserver.MatcherDispatcher object>, self.rfile = <socket._fileobject object>, outfile = <cStringIO.StringO object>, env_dict = {'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'REMOTE_ADDR': '127.0.0.1', 'REQUEST_ID_HASH': 'B6589FC6', 'REQUEST_METHOD': 'GET', 'SDK_VERSION': '1.7.3', 'SERVER_NAME': 'localhost', 'SERVER_PORT': '8081', ...}
C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py in _Dispatch(self=<google.appengine.tools.dev_appserver.DevAppServerRequestHandler instance>, dispatcher=<google.appengine.tools.dev_appserver.MatcherDispatcher object>, socket_infile=<socket._fileobject object>, outfile=<cStringIO.StringO object>, env_dict={'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'REMOTE_ADDR': '127.0.0.1', 'REQUEST_ID_HASH': 'B6589FC6', 'REQUEST_METHOD': 'GET', 'SDK_VERSION': '1.7.3', 'SERVER_NAME': 'localhost', 'SERVER_PORT': '8081', ...})
2857 dispatcher.Dispatch(app_server_request,
2858 outfile,
=> 2859 base_env_dict=env_dict)
2860 finally:
2861 request_file.close()
base_env_dict undefined, env_dict = {'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'REMOTE_ADDR': '127.0.0.1', 'REQUEST_ID_HASH': 'B6589FC6', 'REQUEST_METHOD': 'GET', 'SDK_VERSION': '1.7.3', 'SERVER_NAME': 'localhost', 'SERVER_PORT': '8081', ...}
C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py in Dispatch(self=<google.appengine.tools.dev_appserver.MatcherDispatcher object>, request=<AppServerRequest relative_url: / path: main.py ...mp', mode 'rb' at 0x0342AE38> force_admin: False>, outfile=<cStringIO.StringO object>, base_env_dict={'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'REMOTE_ADDR': '127.0.0.1', 'REQUEST_ID_HASH': 'B6589FC6', 'REQUEST_METHOD': 'GET', 'SDK_VERSION': '1.7.3', 'SERVER_NAME': 'localhost', 'SERVER_PORT': '8081', ...})
714 forward_request = dispatcher.Dispatch(request,
715 outfile,
=> 716 base_env_dict=base_env_dict)
717
718 while forward_request:
base_env_dict = {'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'REMOTE_ADDR': '127.0.0.1', 'REQUEST_ID_HASH': 'B6589FC6', 'REQUEST_METHOD': 'GET', 'SDK_VERSION': '1.7.3', 'SERVER_NAME': 'localhost', 'SERVER_PORT': '8081', ...}
C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py in Dispatch(self=<google.appengine.tools.dev_appserver.CGIDispatcher object>, request=<AppServerRequest relative_url: / path: main.py ...mp', mode 'rb' at 0x0342AE38> force_admin: False>, outfile=<cStringIO.StringO object>, base_env_dict={'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'REMOTE_ADDR': '127.0.0.1', 'REQUEST_ID_HASH': 'B6589FC6', 'REQUEST_METHOD': 'GET', 'SDK_VERSION': '1.7.3', 'SERVER_NAME': 'localhost', 'SERVER_PORT': '8081', ...})
1791 memory_file,
1792 outfile,
=> 1793 self._module_dict)
1794 finally:
1795 logging.root.level = before_level
self = <google.appengine.tools.dev_appserver.CGIDispatcher object>, self._module_dict = {'Cookie': <module 'Cookie' from 'C:\Python27\lib\Cookie.pyc'>, 'StringIO': <module 'StringIO' from 'C:\Python27\lib\StringIO.pyc'>, 'UserDict': <module 'UserDict' from 'C:\Python27\lib\UserDict.py'>, '__builtin__': <module '__builtin__' (built-in)>, '__future__': <module '__future__' from 'C:\Python27\lib\__future__.pyc'>, '__main__': <module 'main' from 'C:\Users\RoHiT\Desktop\rprp\main.py'>, '_abcoll': <module '_abcoll' from 'C:\Python27\lib\_abcoll.py'>, '_bisect': <module '_bisect' (built-in)>, '_collections': <module '_collections' (built-in)>, '_functools': <module '_functools' (built-in)>, ...}
C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py in ExecuteCGI(config=<AppInfoExternal version=1 source_lang...ne runtime=python api_config=None >, root_path=r'C:\Users\RoHiT\Desktop\rprp', handler_path='main.py', cgi_path=r'C:\Users\RoHiT\Desktop\rprp\main.py', env={'APPENGINE_RUNTIME': 'python', 'APPLICATION_ID': 'dev~rprpfind', 'AUTH_DOMAIN': 'gmail.com', 'CONTENT_LENGTH': '', 'CONTENT_TYPE': 'application/x-www-form-urlencoded', 'CURRENT_VERSION_ID': '1.1', 'DEFAULT_VERSION_HOSTNAME': 'localhost:8081', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3', ...}, infile=<cStringIO.StringO object>, outfile=<cStringIO.StringO object>, module_dict={'Cookie': <module 'Cookie' from 'C:\Python27\lib\Cookie.pyc'>, 'StringIO': <module 'StringIO' from 'C:\Python27\lib\StringIO.pyc'>, 'UserDict': <module 'UserDict' from 'C:\Python27\lib\UserDict.py'>, '__builtin__': <module '__builtin__' (built-in)>, '__future__': <module '__future__' from 'C:\Python27\lib\__future__.pyc'>, '__main__': <module 'main' from 'C:\Users\RoHiT\Desktop\rprp\main.py'>, '_abcoll': <module '_abcoll' from 'C:\Python27\lib\_abcoll.py'>, '_bisect': <module '_bisect' (built-in)>, '_collections': <module '_collections' (built-in)>, '_functools': <module '_functools' (built-in)>, ...}, exec_script=<function ExecuteOrImportScript>, exec_py27_handler=<function ExecutePy27Handler>)
1691 reset_modules = exec_py27_handler(config, handler_path, cgi_path, hook)
1692 else:
=> 1693 reset_modules = exec_script(config, handler_path, cgi_path, hook)
1694 except SystemExit, e:
1695 logging.debug('CGI exited with status: %s', e)
reset_modules = True, exec_script = <function ExecuteOrImportScript>, config = <AppInfoExternal version=1 source_lang...ne runtime=python api_config=None >, handler_path = 'main.py', cgi_path = r'C:\Users\RoHiT\Desktop\rprp\main.py', hook = <google.appengine.tools.dev_appserver_import_hook.HardenedModulesHook object>
C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py in ExecuteOrImportScript(config=<AppInfoExternal version=1 source_lang...ne runtime=python api_config=None >, handler_path='main.py', cgi_path=r'C:\Users\RoHiT\Desktop\rprp\main.py', import_hook=<google.appengine.tools.dev_appserver_import_hook.HardenedModulesHook object>)
1379
1380 if module_code:
=> 1381 exec module_code in script_module.__dict__
1382 else:
1383 script_module.main()
module_code = <code object <module> at 034ED4E8, file "C:\Users\RoHiT\Desktop\rprp\main.py", line 1>, script_module = <module 'main' from 'C:\Users\RoHiT\Desktop\rprp\main.py'>, script_module.__dict__ = {'Page': <class 'models.Page'>, 'PythonTerm': <class 'models.PythonTerm'>, 'SearchTerm': <class 'models.SearchTerm'>, 'Video': <class 'models.Video'>, '__builtins__': {'ArithmeticError': <type 'exceptions.ArithmeticError'>, 'AssertionError': <type 'exceptions.AssertionError'>, 'AttributeError': <type 'exceptions.AttributeError'>, 'BaseException': <type 'exceptions.BaseException'>, 'BufferError': <type 'exceptions.BufferError'>, 'BytesWarning': <type 'exceptions.BytesWarning'>, 'DeprecationWarning': <type 'exceptions.DeprecationWarning'>, 'EOFError': <type 'exceptions.EOFError'>, 'Ellipsis': Ellipsis, 'EnvironmentError': <type 'exceptions.EnvironmentError'>, ...}, '__doc__': None, '__file__': r'C:\Users\RoHiT\Desktop\rprp\main.py', '__loader__': <google.appengine.tools.dev_appserver_import_hook.HardenedModulesHook object>, '__name__': 'main', '__package__': None, ...}
C:\Users\RoHiT\Desktop\rprp\main.py in ()
4 from google.appengine.ext import db
5 from models import PythonTerm, Page, SearchTerm, Video
=> 6 from bs4 import BeautifulSoup
7 from operator import itemgetter
8 import urllib
bs4 undefined, BeautifulSoup undefined
<type 'exceptions.ImportError'>: No module named bs4
args = ('No module named bs4',)
message = 'No module named bs4'
Answer: The code is trying to import the BeautifulSoup library, but cannot find it.
Make sure you've installed
[`beautifulsoup4`](http://pypi.python.org/pypi/beautifulsoup4) in your local
project path. Download the `.tar.gz` tarball, unpack it and copy the `bs4`
directory to your project.
|
ImportError: cannot import name timezone
Question: After installing django-nonrel and running `python manage.py shell`, I get
this error:
>>> from django.utils import timezone
Traceback (most recent call last):
File "<console>", line 1, in <module>
ImportError: cannot import name timezone
using Ubuntu 12.04 LTS, Python 2.7.3, Django 1.4, and the latest versions of
django-nonrel, djangotoolbox and django-mongodb-engine.
It seems to be some kind of incompatibility problem. Should I use an earlier
version of django? If so, how can I specify the django version on the install
command?
Answer: You can't have both "Django 1.4" and "the latest version of django-nonrel".
Django-nonrel _replaces_ Django, and the latest version is built on Django
1.3, which doesn't have the `utils.timezone` module.
|
With python, how can I connect to and download files from, a WebDAV server?
Question: My school has a webdav file server that contains files that I frequently need
to download. For this server, I have a username and password that I can use to
connect to the server, and If I go to the URL in chrome I can view everything
just fine. Now my question is, how can I access and login to this WebDAV
server with python, and then download files from it. I have been unable to
find anything with google and apologize if there was a very simple solution
that I missed.
Answer: You can use [python-webdav-library](https://launchpad.net/python-webdav-lib)
from webdav import WebdavClient
url = 'https://somesite.net'
mydav = WebdavClient.CollectionStorer(url, validateResourceNames=False)
mydav.connection.addBasicAuthorization(<username>, <password>)
mydav.path = <path to file I want, ie '/a/b/c.txt'>
mydav.downloadFile(<local path ie. ~/Downloads/c.txt>)
|
Import error? (PYTHON 3.2)
Question: I have my own module named v_systems, and I'm trying to import it in another
Python file (which is saved in the same directory as v_systems). I need to
import it as `import v_systems as vs`, but whether I do that or simply
`import v_systems`, it gives me an error saying no module named v_systems
exists.
How may I fix this error?
Answer: It might not be on the system path. Do the following:
It needs to be in a directory listed in `sys.path`. What I did was create a
folder (it doesn't really matter where) called "Modules" in which I keep all
of the modules that I download or create. Say I put it in
`C:\Users\USER\Modules`. You can put this module in there as well.
You need to copy the path to the folder.
Then, go to Control Panel. Click System, then on the left panel there is an
option called "Advanced System Settings". Click that. From the bottom of the
window that pops up, click "Environment Variables". Look to see if you have a
variable created called `PYTHONPATH`. Most likely, you don't. So, create a
variable (in the second section) by pressing "NEW". Name it `PYTHONPATH` and
for the Variable value, put in the file path. (For my example, the file path
is `C:\Users\USER\Modules`). Hope this helps :)
I inserted a screenshot of how to get there once you get to the System
(Properties) location in Control Panel: 
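If you'd rather not touch environment variables, a quick per-script alternative is to append the folder to the path at runtime (the path below is just the example folder from above):
    import sys
    sys.path.append(r"C:\Users\USER\Modules")   # example folder; use your own path
    import v_systems as vs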
|
Python and displaying HTML
Question: I've gotten pretty comfortable with Python and now I'm looking to make a
rudimentary web application. I was somewhat scared of Django and the other
Python frameworks so I went caveman on it and decided to generate the HTML
myself using another Python script.
Maybe this is how you do it anyways - but I'm just figuring this stuff out.
I'm really looking for a tip-off on, well, what to do next.
My Python script PRINTS the HTML (is this even correct? I need it to be on a
webpage!), but now what?
Thanks for your continued support during my learning process. One day I will
post answers!
-Tyler
Here's my code:
from SearchPhone import SearchPhone
phones = ["Iphone 3", "Iphone 4", "Iphone 5","Galaxy s3", "Galaxy s2", "LG Lucid", "LG Esteem", "HTC One S", "Droid 4",
"Droid RAZR MAXX", "HTC EVO", "Galaxy Nexus", "LG Optimus 2", "LG Ignite",
"Galaxy Note", "HTC Amaze", "HTC Rezound", "HTC Vivid", "HTC Rhyme", "Motorola Photon",
"Motorola Milestone", "myTouch slide", "HTC Status", "Droid 3", "HTC Evo 3d", "HTC Wildfire",
"LG Optimus 3d", "HTC ThunderBolt", "Incredible 2", "Kyocera Echo", "Galaxy S 4g",
"HTC Inspire", "LG Optimus 2x", "Samsung Gem", "HTC Evo Shift", "Nexus S", "LG Axis", "Droid 2",
"G2", "Droid x", "Droid Incredible"
]
print """<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>table of phones</title>
</head>
<body>
</body>
</html>
"""
#table
print '<table width="100%" border="1">'
for x in phones:
y = SearchPhone(x)
print "\t<tr>"
print "\t\t<td>" + str(y[0]) + "</td>"
print "\t\t<td>" + str(y[1]) + "</td>"
print "\t\t<td>" + str(y[2]) + "</td>"
print "\t\t<td>" + str(y[3]) + "</td>"
print "\t\t<td>" + str(y[4]) + "</td>"
print "\t</tr>"
print "</table>"
Answer: Unfortunately, you are not quite on the right track.
For Python web development, as a starter you should pick a web framework
(Django, Flask, Bottle). If you are scared of Django, I suggest you look at
web.py or Bottle: they are very simple and easy to pick up, and you will learn
the basic web development skills, the workflow, how the pieces interact, etc.
Or, if you just want to write a static web page, just write a file with an
"htm"/"html" extension:
html_str = "<html><head>This is header</head><body>This is body</body></html>"
f = open("yourpage.html","w")
f.write(html_str)
f.close()
Then you can open "yourpage.html" to get your first web page, written
**statically** with Python.
|
Stream Parse Wiki Xml dump
Question: I'm not sure if this question has been asked before, but if it has been, I
couldn't find it, so please pardon my ignorance. I want to parse (stream parse)
Wikipedia XML dumps that are about 40 GB. I'm trying to use lxml's iterparse
(i.e. its stream parser) to do this job for me, but the code I wrote isn't
working for some reason. Before I explain what I want to do, let's consider
this XML format:
<root>
<page>
<title> A </title>
<text> ..........blah blah...... </text>
</page>
<page>
<title> B </title>
<text> This is a line of sample text in title B </text>
</page>
<page>
<title> C </title>
<text> ............blah blah........ </text>
</page>
</root>
In the above XML structure, which is very much what the wiki dump looks like,
what I want to do is read everything inside each "page" tag (which basically
separates the different articles in the dump) using a Python stream parser
such as lxml (without loading the entire XML tree into memory, which is
impossible) and run a regular expression over it. Precisely, what I want to do
is: look inside the "text" tag of the current "page" element for
"Category:Living People", and if such a match is found, write the entire
content of the "text" tag out to a text file. But before the regex part, I'm
having issues with the lxml parser itself. Sample code that I tried is below.
Thanks in advance :)
#LXML parser
from lxml import etree
def fast_iter(context, func):
for event, elem in context:
func(elem)
elem.clear()
while elem.getprevious() is not None:
del elem.getparent()[0]
del context
def process_element(elem):
print elem.xpath( 'description/text( )' )
context = etree.iterparse( MYFILE, tag='item' )
fast_iter(context,process_element)
Feel free to completely alter the code and give your own version, I really
wouldn't mind as long as I get my problem sorted!
Someone help please!
Answer:
from xml.sax import ContentHandler, parseString
from StringIO import StringIO
from lxml import etree
CONTENT = """
<root>
<page>
<title> A </title>
<text> ..........blah blah...... </text>
</page>
<page>
<title> B </title>
<text> This is a line of sample text in title B </text>
</page>
<page>
<title> C </title>
<text> ............blah blah........ </text>
</page>
</root>
"""
def fast_iter(context, func):
for action, elem in context:
func(elem)
elem.clear()
while elem.getprevious() is not None:
del elem.getparent()[0]
del context
def process_element(elem):
print elem.xpath( './text/text( )' )
class WikiContentHandler(ContentHandler):
def startDocument(self):
self.character_buffer = None
def startElement(self, name, attrs):
if name == 'text':
self.character_buffer = StringIO()
def endElement(self, name):
if name == 'text':
print self.character_buffer.getvalue()
self.character_buffer.close()
self.character_buffer = None
def characters(self, content):
if self.character_buffer != None:
self.character_buffer.write(content)
def parse_wiki():
parseString(CONTENT, WikiContentHandler())
if __name__ == '__main__':
parse_wiki()
context = etree.iterparse( StringIO(CONTENT), tag='page' )
fast_iter(context, process_element)
Above are two ways of parsing the XML, one using lxml, the other using the
Python standard library. Both of them print out all the information in the
`<text>` tags. Just look where the print statements are and do your additional
processing.
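For the regex step the question asks about, `process_element` could be extended along these lines. This is only a sketch: the output file name is made up, and the real Wikipedia dump puts its elements in an XML namespace, so `tag='page'` and the `text` lookup would need the namespaced tag names there.
    import re
    LIVING_PEOPLE = re.compile(r'Category:\s*Living People')
    def process_element(elem):
        text = elem.findtext('text') or ''          # text of the <text> child, if any
        if LIVING_PEOPLE.search(text):
            with open('living_people_articles.txt', 'a') as out:   # hypothetical output file
                out.write(text.encode('utf-8') + '\n\n')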
|
asynchronous subprocess with Popen
Question: Using Windows 7 + python 2.6, I am trying to run a simulation model in
parallel. I can launch multiple instances of the executable by double-clicking
on them in my file browser. However, asynchronous calls with `Popen` result in
each successive instance interrupting the previous one. For what it's worth,
the executable returns text to the console, but I don't need to collect
results interactively.
Here's where I am so far:
import multiprocessing, subprocess
def run(c):
exe = os.path.join("<location>","folder",str(c),"program.exe")
run = os.path.join("<location>","folder",str(c),"run.dat")
subprocess.Popen([exe,run],creationflags = subprocess.CREATE_NEW_CONSOLE)
def main():
pool = multiprocessing.Pool(3)
for c in range(10):
pool.apply_async(run,(str(c),))
pool.close()
pool.join()
if __name__ == '__main__':
main()
After scouring SO for a solution, I've learned that using multiprocessing may
be redundant, but I need some way to limit the number of cores working.
Answer: Enabled by @J.F. Sebastian's comment regarding the `cwd` argument.
import multiprocessing, subprocess
def run(c):
exe = os.path.join("<location>","folder",str(c),"program.exe")
run = os.path.join("<location>","folder",str(c),"run.dat")
subprocess.check_call([exe,run],cwd=os.path.join("<location>","folder"),creationflags = subprocess.CREATE_NEW_CONSOLE)
def main():
pool = multiprocessing.Pool(3)
for c in range(10):
pool.apply_async(run,(str(c),))
pool.close()
pool.join()
if __name__ == '__main__':
main()
|
Instantiating an object in a function - Python
Question: I'm relatively new to python but think I have a decent enough understanding,
except for (apparently) the correct way to use the "import" statement. I
assume that's the problem, but I don't know.
I have
from player import player
def initializeGame():
player1 = player()
player1.shuffleDeck()
player2 = player()
player2.shuffleDeck()
and
from deck import deck
class player(object):
def __init__(self):
self.hand = []
self.deck = deck()
def drawCard(self):
c = self.deck.cards
cardDrawn = c.pop(0)
self.hand.append(cardDrawn)
def shuffleDeck(self):
from random import shuffle
shuffle(self.deck.cards)
But when I try to initializeGame() it says "player1 has not been defined" and
I'm not really sure why. In that same file, if I just use "player1 = player()"
then it works perfectly fine, but it refuses to work inside of a function. Any
help?
EDIT: ADDING THINGS THAT WEREN'T INCLUDED BEFORE
class deck(object):
def __init__(self):
self.cards = []
def viewLibrary(self):
for x in self.cards:
print(x.name)
def viewNumberOfCards(self, cardsToView):
for x in self.cards[:cardsToView]:
print(x.name)
from deck import deck
class player(object):
def __init__(self):
self.hand = []
self.deck = deck()
def drawCard(self):
c = self.deck.cards
cardDrawn = c.pop(0)
self.hand.append(cardDrawn)
def shuffleDeck(self):
from random import shuffle
shuffle(self.deck.cards)
and the traceback error is
player1.deck.cards
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
player1.deck.cards
NameError: name 'player1' is not defined
Answer:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
player1.deck.cards
NameError: name 'player1' is not defined
This shows the line where the error is thrown: `player1.deck.cards`. Said line
is not in the code you gave us so we can only make assumptions on why you get
the exception.
However, it is very likely, that your script looks somehow like this:
initializeGame()
# and then do something with
player1.deck.cards
This however does not work, as `player1` and `player2` are only local
variables inside the `initializeGame` function. As soon as the function
returns no more references to them are left and they are most likely pending
for garbage collection.
So if you want to access those objects, you have to make sure that they stay
around. You could do this by having the variables globally, or you could
simply return them from your `initializeGame` function:
def initializeGame():
player1 = player()
player1.shuffleDeck()
player2 = player()
player2.shuffleDeck()
return player1, player2
Then you can just call it like this:
player1, player2 = initializeGame()
And have local references to the created objects.
Or even better, create an object that represents the whole game, where the
players are instance variables:
class Game:
def __init__ (self):
self.player1 = player()
self.player1.shuffleDeck()
self.player2 = player()
self.player2.shuffleDeck()
Then you can just create a `Game` instance and access the players using
`game.player1` or `game.player2`. And of course, having an object for the game
itself allows you to encapsulate a lot game related functions into the object
as well.
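Usage then looks something like this (the decks start out empty in the code above, so there is nothing to draw yet, but the attribute access the question attempted now works):
    game = Game()
    print game.player1.deck.cards   # []
    print game.player2.hand         # []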
|
Twitter Python program giving connection error
Question: I am working on a Twitter data mining application in Python, using the modules
twitter and redis, based upon the book "Mining the Social Web" by Matthew
A.Russell. Whenever I try to get the information of friends and followers
using the redis module, it gives me the following error:
raise ConnectionError(self._error_message(e))
ConnectionError: Error 10061 connecting localhost:6379. No connection could be made because the target machine actively refused it.
When I don't use Redis, I can get the ids of all friends and followers. But I
require Redis to perform certain analysis stuff on the data. My source code is
as follows:
import json
import locale
import redis
from prettytable import PrettyTable
# Pretty printing numbers
from twitter__util import pp
# These functions create consistent keys from
# screen names and user id values
from twitter__util import getRedisIdByScreenName
from twitter__util import getRedisIdByUserId
SCREEN_NAME = "timoreilly"
locale.setlocale(locale.LC_ALL, '')
def calculate():
r = redis.Redis() # Default connection settings on localhost
follower_ids = list(r.smembers(getRedisIdByScreenName(SCREEN_NAME,
'follower_ids')))
followers = r.mget([getRedisIdByUserId(follower_id, 'info.json')
for follower_id in follower_ids])
followers = [json.loads(f) for f in followers if f is not None]
freqs = {}
for f in followers:
cnt = f['followers_count']
if not freqs.has_key(cnt):
freqs[cnt] = []
freqs[cnt].append({'screen_name': f['screen_name'], 'user_id': f['id']})
# It could take a few minutes to calculate freqs, so store a snapshot for later use
r.set(getRedisIdByScreenName(SCREEN_NAME, 'follower_freqs'),
json.dumps(freqs))
keys = freqs.keys()
keys.sort()
print 'The top 10 followers from the sample:'
fields = ['Date', 'Count']
pt = PrettyTable(fields=fields)
[pt.set_field_align(f, 'l') for f in fields]
for (user, freq) in reversed([(user['screen_name'], k) for k in keys[-10:]
for user in freqs[k]]):
pt.add_row([user, pp(freq)])
pt.printt()
all_freqs = [k for k in keys for user in freqs[k]]
avg = reduce(lambda x, y: x + y, all_freqs) / len(all_freqs)
print "\nThe average number of followers for %s's followers: %s" \
% (SCREEN_NAME, pp(avg))
# psyco can only compile functions, so wrap code in a function
try:
import psyco
psyco.bind(calculate)
except ImportError, e:
pass # psyco not installed
calculate()
Any and all help would be really appreciated. Thanks!
Answer: Port 6379 is the default port that Redis listens on. If I were you, I'd
double-check that the Redis server is actually running on the correct port and
interface. To test that, ensure that you have a telnet client installed and do
the following:
$ telnet localhost 6379
If that fails, then the Redis server isn't actually running where the Redis
client library expects it to be by default, in which case, you'll have to
provide the client with the correct details in the constructor arguments.
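If the server runs somewhere else (or on a non-default port), a minimal sketch
of passing those details to the client looks like this; the host/port values
are just placeholders:
    import redis

    # adjust host/port/db to wherever your Redis server actually listens
    r = redis.Redis(host='localhost', port=6379, db=0)
    r.ping()  # raises ConnectionError if the server is unreachable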
|
Dictionary object to decision tree in Pydot
Question: I have a dictionary object as such:
menu = {'dinner':{'chicken':'good','beef':'average','vegetarian':{'tofu':'good','salad':{'caeser':'bad','italian':'average'}},'pork':'bad'}}
I'm trying to create a graph (decision tree) using pydot with the 'menu' data,
following the example in
[this](http://pythonhaven.wordpress.com/2009/12/09/generating_graphs_with_pydot).
'Dinner' would be the top node and its values (chicken, beef, etc.) are below
it. Referring to the link, the graph function takes two parameters; a source
and a node.
It would look something like
[this](http://pythonhaven.files.wordpress.com/2009/12/example1_graph.png):
Except 'king' would be 'dinner' and 'lord' would be 'chicken', 'beef', etc.
My question is: How do I access a key in a value? To create a tree from this
data I feel like I need to create a loop which checks if there is a value for
a specific key and plots it. I'm not sure how to call values for any
dictionary object (if it's not necessarily called 'dinner' or doesn't have as
many elements).
Any suggestions on how to graph it?
Answer: ## Using a recursive function
You might want to consider using a **recursive** function (like the `visit` in
my code below), so that you are able to process a general nested dictionary. In
this function, you want to pass a `parent` parameter to keep track of what your
incoming node is. Also note the use of `isinstance` to check whether the
dictionary value of a key is a dictionary of its own; in that case you need to
call `visit` recursively.
import pydot
menu = {'dinner':
{'chicken':'good',
'beef':'average',
'vegetarian':{
'tofu':'good',
'salad':{
'caeser':'bad',
'italian':'average'}
},
'pork':'bad'}
}
def draw(parent_name, child_name):
edge = pydot.Edge(parent_name, child_name)
graph.add_edge(edge)
def visit(node, parent=None):
for k,v in node.iteritems():
if isinstance(v, dict):
# We start with the root node whose parent is None
# we don't want to graph the None node
if parent:
draw(parent, k)
visit(v, k)
else:
draw(parent, k)
# drawing the label using a distinct name
draw(k, k+'_'+v)
graph = pydot.Dot(graph_type='graph')
visit(menu)
graph.write_png('example1_graph.png')
## Resulting tree structure

|
Downloading YouTube Captions with API in Python with utf-8 characters
Question: I'm using [Jeff's demo code](http://gdata-
samples.googlecode.com/svn/trunk/gdata/captions_demo.py) for using the YouTube
API and Python to interact with captions for my videos. And I have it working
great for my videos in English. Unfortunately, when I try to use it with my
videos that have automatic transcripts in Spanish, which contain characters
such as á, ¡, etc., I get an encoding error:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 25: ordinal not in range(128)
My Python script has `# -*- coding: utf-8 -*-` at the top and I've changed the
`CAPTIONS_LANGUAGE_CODE` to `'es'`, but it seems like the script is still
interpreting the .srt file it downloads as `ascii` rather than `utf-8`. The
line where it downloads the .srt file is:
if response_headers["status"] == "200":
self.srt_captions = SubRipFile.from_string(body)
How can I get Python to consider the srt file as `utf-8` so that it doesn't
throw an encoding error?
Thanks!
Answer: It looks like this isn't really a Youtube API issue at all, but a Python one.
Note that your error isn't an encoding error, but a decoding error; you've
stumbled upon the way that Python is designed to work (for better or for
worse). Many, many functions in Python will cast unicode data as 8-bit strings
rather than native unicode objects, using \x with a hex number to represent
characters greater than 127. (One such method is the "from_string" method of
the SubRipFile object you're using.) Thus the data is still unicode, but the
object is a string. Because of this, when you then are forcing a casting to a
unicode object (triggered by using the 'join' method of a unicode object in
the sample code you provided), Python will assume an ascii codec (the default
for 8-bit strings, regardless of data encoding) to deal with the data, which
then throws an error on those hex characters.
There are several solutions.
1) You could explicitly tell Python not to assume an ascii codec when you run
your join method, but I always struggle with getting that right (and doing it
in every case), so I won't attempt any sample code.
2) You could forego native unicode objects and just use 8-bit strings to work
with your unicode data; this would only require you changing this line:
body = u'\n'.join(lines[2:])
To this:
body = '\n'.join(lines[2:])
There are potential drawbacks to this approach, however -- again, you'd have
to make sure you're doing it in every case; you also wouldn't be leveraging
Python-native unicode objects (which may or may not be an issue for later in
your code).
3) You could use the low-level 'codecs' module to ensure that the data is cast
as a native unicode object from the get-go rather than messing around with
8-bit strings. Normally, you accomplish such a task in this manner:
import codecs
f=codecs.open('captions.srt',encoding='utf-8')
l=f.readlines()
f.close()
type(l[0]) # will be unicode object rather than string object
Of course, you have the complication of using a SubRipFile object which
returns a string, but you could get around that by either sending it through a
StringIO object (so the codecs module can treat the ripped data as a file),
using the codecs.encode() method, etc. The Python docs have pretty good
sections on all of this.
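For instance, a rough sketch of the "decode to unicode up front" idea, applied
to the join line from the demo code (this assumes `lines` holds the UTF-8
encoded byte strings from the download):
    lines = [line.decode('utf-8') for line in lines]  # now unicode objects
    body = u'\n'.join(lines[2:])                      # no implicit ascii decode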
Best of luck.
|
How to make a list of characters in Python
Question: My exam question asks me to take two letters as inputs, and display all
letters in the alphabet between the two inputs (inclusively). It also needs to
do this in the order the user input them (so GA produces `GFEDCBA` not
`ABCDEFG`). How would I approach this task?
Answer:
>>> import string
    >>> def letterList (start, end):
            # add a character at the beginning so str.index won't return 0 for `A`
            a = ' ' + string.ascii_uppercase
            # if start > end, then start from the back
            direction = 1 if start < end else -1
            # Get the substring of the alphabet:
            # The `+ direction` makes sure that the end character is inclusive; we
            # always need to go one *further*, so when starting from the back, we
            # need to subtract one. Here we also see the effect of the modified
            # alphabet. For `A` the normal alphabet would return `0` so we would
            # have `-1` making the range fail. So we add a blank character to make
            # sure that `A` yields `1-1=0` instead. As we use the indexes dynamically
            # it does not matter that we have changed the alphabet before.
            return a[a.index(start):a.index(end) + direction:direction]
>>> letterList('A', 'G')
'ABCDEFG'
>>> letterList('G', 'A')
'GFEDCBA'
>>> letterList('A', 'A')
'A'
Note that this solution allows any kind of alphabet. We could set `a = ' ' +
string.ascii_uppercase + string.ascii_lowercase` and get such results:
>>> letterList('m', 'N')
'mlkjihgfedcbaZYXWVUTSRQPON'
And who needs ASCII when you have full unicode support?
>>> a = ' あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわを'
>>> letterList('し', 'ろ')
'しすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろ'
|
how to include django templates in app installed via pip?
Question: I'm working on a django app ([django-
flux](https://github.com/deanmalmgren/django-flux)) and I'm trying to get it
to properly install with pip from
[pypi](http://pypi.python.org/pypi?name=django-
flux&version=0.1.0&%3aaction=display). From [this blog
post](http://blog.nyaruka.com/adding-a-django-app-to-pythons-cheese-shop-py)
and the [distutils
documentation](http://docs.python.org/2/distutils/setupscript.html#installing-
package-data), it seems like my
[setup.py](https://github.com/deanmalmgren/django-
flux/blob/e876e23917788b77a5954eef740112cf3dba7cbf/setup.py) and
[MANIFEST.in](https://github.com/deanmalmgren/django-
flux/blob/e876e23917788b77a5954eef740112cf3dba7cbf/MANIFEST.in) files should
be including the `flux/templates/flux/*.html` data files, for example, but
they are not included when I install the app via pip for some reason.
Any suggestion on what I am doing wrong? How can you install django templates
(among other non-python files)?
For reference, I have distutils 2.7.3.
Answer: You are missing `include_package_data=True` in your setup function.
For more information on this you could turn to Flask's excellent documentation
which covers writing a [Basic Setup
Script](http://flask.pocoo.org/docs/patterns/distribute/#basic-setup-script):
> include_package_data tells distribute to look for a MANIFEST.in file and
> install all the entries that match as package data.
Also, you are importing `find_packages` but aren't using it so far; pass it to
your setup call as `packages=find_packages()`.
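A minimal sketch of how the relevant part of `setup.py` could look, combining
both points (the metadata values are illustrative):
    from setuptools import setup, find_packages

    setup(
        name='django-flux',
        version='0.1.0',
        packages=find_packages(),       # pick up the flux package automatically
        include_package_data=True,      # install everything matched by MANIFEST.in
        zip_safe=False,
    )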
|
I'm trying to print this array to a text file but only boxes show up, why? Python
Question:
import numpy as np
with open('matrix.txt', 'r') as f:
x = []
for line in f:
x.append(map(int, line.split()))
f.close()
a = array(x)
l, v = eig(a)
L = identity(len(l))
for i in xrange(len(l)):
L[i][i] = array(exp(l))[i]
exponent = dot(dot(v,L),inv(v))
file = open('expA.txt', 'w')
file.writelines(exponent)
file.close()
The variable `exponent` is a matrix and I want to print it to a file
`expA.txt`, however, when I run the code, only brackets show up.
**matrix.txt**
2 2
16 6
In addition (if you happen to know), the `exp(a)` is supposed to equal VLV^-1
but when I print the values of `exp(a)` and compare it to VLV^-1, I get
different values. The problem is:
> 1. Read in a text file in which you've specified a square matrix A;
> 2. Find A's eigenvalues (a vector l) and eigenvectors (a matrix V);
> 3. Apply the Python function exp() to the components of l and form a
> diagonal matrix L with those values on the diagonal;
> 4. Multiply the resulting matrix L with the eigenvector matrix from both
> sides, in this way: V L V-1. This undoes the diagonlization. It turns out
> that the result is the exponential of the matrix A; that is, eA = V L V-1.
>
I did this in my code and I got different values. Did I interpret this wrong?
Answer: I think your problem may be the line `file.writelines(exponent)`, since
the `file.writelines` method expects an iterable of strings.
Could you try using this instead:
with open('expA.txt', 'w') as f:
print >> f, exponent
By the way, you should try using `a = np.loadtxt('matrix.txt')` as an easier
way to read your input file.
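As a side note, numpy can also handle the writing: `np.savetxt` writes any 2-D
array as plain text, so the computed matrix could be saved the same way. A
minimal sketch:
    import numpy as np

    a = np.loadtxt('matrix.txt')     # reads the square matrix as a 2-D array
    np.savetxt('expA.txt', a)        # writes a 2-D array as whitespace-separated text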
|
Check if image is black
Question: I'm running a Python script under Windows which deals with 480x800 `PNG`
images with 32-bit depth. I need to check if the given image is fully black or
not. After some searching I've found that ImageMagick could help me to achieve
this but unfortunately there's no manual for such task.
So a more general question is how to check if the image consists only of one
color?
**Edit:**
My apologies for not providing all the information about the environment from
beginning. The python script is executed using Android's
[`monkeyrunner`](http://developer.android.com/tools/help/monkeyrunner_concepts.html).
Since it uses it's own instance of Jython (version 2.5) it's not possible to
use any modules from external libraries. Inside there's a `MonkeyImage` class
to work with screenshots taken from the device. So I adopted @eumiro's answer
for it.
Answer:
import Image
    im = Image.open("image.png")
diff_colors = list(set(im.getdata()))
if len(diff_colors) == 1 and diff_colors[0] == (0, 0, 0):
print "all black"
**EDIT** as @JonClements proposes, this will be faster and stop as soon as
anything else than black is found:
import Image
    im = Image.open("image.png")
if all(rgb == (0,0,0) for rgb in im.getdata()):
print "all black"
|
PythonMagic: How do a composite with a dissolve value?
Question: I like to do this with PythonMagick:
composite -dissolve 40 over.png base.png new.png
I have tried:
import PythonMagick
base = PythonMagick.Image("base.png")
over = PythonMagick.Image("over.png")
base.composite(over,0,0,PythonMagick.CompositeOperator.DissolveCompositeOp)
base.write("some.png")
That works fine, but how can I specify the dissolve value?
Thanks in advance
Thomas
Answer: Based on your other
[question](http://stackoverflow.com/questions/13662184/python-pil-lighten-
transparent-image-and-paste-to-another-one), I would recommend staying with
PIL since PythonMagick seems to be
[inac](http://sxoop.wordpress.com/2006/11/29/whither-
pythonmagick/)[tive](http://stackoverflow.com/questions/2445985/where-can-i-
find-pythonmagick-documentation). I also looked into
[Magick++](http://www.imagemagick.org/Magick++/)'s documentation, but it fails
to mention how to specify extra options for `composite`. I would stay away
from these libraries, and instead use the direct CLI provided by ImageMagick.
|
Effective way to get part of string until token
Question: I'm parsing a very big csv (big = tens of gigabytes) file in python and I need
only the value of the first column of every line. I wrote this code, wondering
if there is a better way to do it:
delimiter = ','
f = open('big.csv','r')
for line in f:
pos = line.find(delimiter)
id = int(line[0:pos])
Is there a more effective way to get the part of the string before the first
delimiter?
**Edit:** I do know about the CSV module (and I have used it occasionally),
but I do not need to load every line of this file into memory - I only need the
first column. So let's focus on string parsing.
Answer:
>>> a = '123456'
>>> print a.split('2', 1)[0]
1
>>> print a.split('4', 1)[0]
123
>>>
But, if you're dealing with a CSV file, then:
import csv
with open('some.csv') as fin:
for row in csv.reader(fin):
print int(row[0])
And the csv module will handle quoted columns containing quotes etc...
|
python-dev installation error: ImportError: No module named apt_pkg
Question: I got stuck in a problem. I am Debian user, and I want to install python-dev,
but when I run the code in the shell as a root:
# aptitude install python-dev
I get the following error:
Traceback (most recent call last):
File "/usr/bin/apt-listchanges", line 28, in <module>
import apt_pkg
ImportError: No module named apt_pkg
What seems to be the problem and how can I resolve it?
Thanks in advance!
Answer: Make sure you have a working python-apt package. You could try removing
and reinstalling that package to fix the problem with apt_pkg.so not being
located.
apt-get install python-apt
|
Using exec vs import in python
Question: In django it's very common to use a `local_settings.py` file to supplement
`settings.py` so that each machine can have different settings. Usually people
do something like this.
try:
from local_settings import *
except ImportError:
print "No local settings found!"
But the settings file usually contains large lists such as `INSTALLED_APPS`.
If I want to add an app, I'd rather not copy the entire list into
`local_settings.py` and modify it (makes for less readable code, and updating
`settings.py` no longer affects any machine that re-writes `INSTALLED_APPS`).
So I figured I could do something like this:
try:
f = open('local_settings.py','r')
exec f.read()
except IOError:
print "No local settings found!"
Now `local_settings.py` no longer has to rewrite the entire `INSTALLED_APPS`
variable:
INSTALLED_APPS.append('debug_toolbar')
I was under the impression that anytime you're using `exec` or `eval` you're
probably doing something wrong. So my question is, is there anything wrong
with this and is there a better way to do it?
Answer: There is a much better way: import `INSTALLED_APPS` in `local_settings.py`,
copy it, then add to the copy:
import settings
INSTALLED_APPS = settings.INSTALLED_APPS[:] + ['debug_toolbar']
|
Get the runpath of a python console-script
Question: Short of it - I need to get a console_script to return to me its current run
path, so that I can override modules it is using at run time.
Here is my basic setup (summarized as best I could):
package_1:
caller_module.py
console_script_module:
main.py
dir_of_modules_to_use/
a.py
b.py
c.py
setup.py
setup.py contents:
setup(
entry_points = {
'console_scripts': [
'console-script-module = console_script_module.main:main'
]
}
)
Longer detail: So **caller_module.py** calls **console-script-module** by
issuing a subprocess call. In turn the modules seen in
**dir_of_modules_to_use** are run. I would like to provide my own version of
those modules by overriding them right before this happens through a separate
script. In order to do this I need to know the run path of where _console-
script-module_ has been installed as it is not consistent (changes in
virtualenv's for example).
I tried adding this in to **main.py** and using a separate command line
argument to call it:
def print_absoulute_dir():
print os.path.abspath('dir_of_modules_to_use')
Unfortunately this only returns the path of wherever I make the call to the
console script.
DISCLAIMER - this is hacky and awful I know, it's from code I inherited and I
just need something working in the short term. I unfortunately cannot change
the code within **caller_module.py** at this time, otherwise I would just
change how it is calling **console_script_module:main.py**.
Answer: I'm not sure if I really understand what you want, but every Python
module has the `__file__` attribute, which contains its location in the
filesystem, and you can use `sys.path` to modify the Python module search path:
import urllib
import sys
print urllib.__file__
print sys.path
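Applied to your `main.py`, a minimal sketch could look like this (assuming
`dir_of_modules_to_use` sits next to `main.py` in the installed package):
    import os

    def print_absolute_dir():
        # location of the installed console_script_module package,
        # independent of the directory the script is invoked from
        here = os.path.dirname(os.path.abspath(__file__))
        print os.path.join(here, 'dir_of_modules_to_use')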
|
Python : getsize not defined
Question: I am trying to sort the files based on their size and store the log in a file.
But I get an error that says 'getsize' not defined. Please help me fix this.
from ClientConfig import ClientConfig
import os
import os.path
class VerifyFileNSize:
def __init__(self):
self.config = ClientConfig()
self.parseInfo()
def parseInfo(self):
count = 0
size = 0
sort_file = []
originalPath = os.getcwd()
os.chdir(self.config.Root_Directory_Path())
log = open(self.config.File_Count(),'wb')
for root, dirs, files in os.walk("."):
for f in files:
sort_file.append(os.path.join(root, f))
sorted_file = sorted(sort_file, key=getsize)
for f in sorted_file:
log.write((str(os.path.getsize(f)) + " Bytes" + "|" + f + os.linesep).encode())
size += os.path.getsize(f)
count += 1
totalPrint = ("Client: Root:" + self.config.Root_Directory_Path() + " Total Files:" + str(count) + " Total Size in Bytes:" + str(size) + " Total Size in MB:" + str(round(size /1024/1024, 2))).encode()
print(totalPrint.decode())
log.write(totalPrint)
log.close()
os.chdir(originalPath)
if __name__ == "__main__":
VerifyFileNSize()
Answer: Try prepending `os.path`:
sorted_file = sorted(sort_file, key=os.path.getsize)
^^^^^^^^
Alternatively, you could just say `from os.path import getsize`.
|
Pig UDF running on AWS EMR with java.lang.NoClassDefFoundError: org/apache/pig/LoadFunc
Question: I am developing an application that try to read log file stored in S3 bucks
and parse it using Elastic MapReduce. Current the log file has following
format
-------------------------------
COLOR=Black
Date=1349719200
PID=23898
Program=Java
EOE
-------------------------------
COLOR=White
Date=1349719234
PID=23828
Program=Python
EOE
So I try to load the file into my Pig script, but the built-in Pig loader
doesn't seem to be able to load my data, so I have to create my own UDF. Since I
am pretty new to Pig and Hadoop, I want to try a script written by others before
I write my own, just to get a taste of how UDFs work. I found one here
<http://pig.apache.org/docs/r0.10.0/udf.html>; there is a SimpleTextLoader. In
order to compile this SimpleTextLoader, I have to add a few imports:
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.backend.executionengine.ExecException;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.data.DataByteArray;
import org.apache.pig.PigException;
import org.apache.pig.LoadFunc;
Then, I found out I need to compile this file. I have to install svn and check
out Pig, running:
sudo apt-get install subversion
svn co http://svn.apache.org/repos/asf/pig/trunk
ant
Now I have a pig.jar file, then I try to compile this file.
javac -cp ./trunk/pig.jar SimpleTextLoader.java
jar -cf SimpleTextLoader.jar SimpleTextLoader.class
It compiles successfully, and I start Pig to get a grunt shell; in grunt I try
to load the file, using
grunt> register file:/home/hadoop/myudfs.jar
grunt> raw = LOAD 's3://mys3bucket/samplelogs/applog.log' USING myudfs.SimpleTextLoader('=') AS (key:chararray, value:chararray);
2012-12-05 00:08:26,737 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. org/apache/pig/LoadFunc Details at logfile: /home/hadoop/pig_1354666051892.log
Inside the pig_1354666051892.log, it has
Pig Stack Trace
---------------
ERROR 2998: Unhandled internal error. org/apache/pig/LoadFunc
java.lang.NoClassDefFoundError: org/apache/pig/LoadFunc
I also try to use another UDF (UPPER.java) from
<http://wiki.apache.org/pig/UDFManual>, and I am still get the same error by
try to use UPPER method. Can you please help me out, what's the problem here?
Much thanks!
UPDATE: I did try EMR build-in Pig.jar at /home/hadoop/lib/pig/pig.jar, and
get the same problem.
Answer: Put the UDF jar in the /home/hadoop/lib/pig directory or copy the
pig-*-amzn.jar file to /home/hadoop/lib and it will work.
You would probably use a bootstrap action to do either of these.
|
How to parse this special config file in python?
Question: I have some config files and the data format can't be changed.
One of them looks like this:
root {
configuration {
field_a: "aaaa"
field_b: "bbbb"
}
child {
configuration {
field_a: "aaa1"
field_b: "bbb1"
}
}
child {
configuration {
field_a: "aaa2"
field_b: "bbb2"
}
}
}
What I need is to parse the file and save it as JSON objects:
{root:
{field_a:"aaaa",field_b:"bbbb"},
children: [{field_a:"aaa", field_b:"bbb"}, ... ]
}
Is there any way to make this possible?
Answer: Your data format is an informal JavaScript-like object notation. In
Python it's easy to turn it into a dict with a few lines of string manipulation,
then use simplejson to dump it to any JSON format you like.
s = '''
root {
configuration {
field_a: "aaaa"
field_b: "bbbb"
}
child {
configuration {
field_a: "aaa"
field_b: "bbb"
}
}
}
'''
s = s.replace("{", ":{").replace("}", "},")
s = "{%s}"%s
import re
s = re.sub(r'(\w+)"', r'\1",', s)
s = re.sub(r"(\w+)\s*\:", r'"\1":', s)
print "string:", s
d = eval(s)
print "python dict:", d
import simplejson as json
print "json:", json.dumps(d)
|
String segmentation with html tags in python
Question: I am trying to break a string into smaller segments using Python. The various
cases can be:
str1 = "Hello world. This is an ideal example string."
Result: Hello world.
This is an ideal example string.
str2 = "`<H1>`Hello world.`</H1><P>`This is an HTML example string.`<P>`"
Result:
`<H1>`Hello world.`</H1>`
`<P>`This is an HTML example string.`<P>`
str3 = "1. Hello World. 2. This is a string."
Result:
1\. Hello World.
2\. This is a string.
Here is my code. But I cannot seem to achieve the 2nd case:
import re
string = """<h1>This is a string.</h1><a href="www.abc.com"> This is another part. <P/>"""
segment_regex = re.compile(r"""
(
\r\n|
\\r\\n|
\n|
\\n|
\r|
\\r|
\t|
\\t|
(?:
(?<=[^\d][\.|\!|\?])
\s+
(?=[A-Z0-9])
)|
(?:
(?<=[\.|\!|\?])\s*(?=<.*?>)
)
)
""", re.VERBOSE)
seg = segment_regex.split(string)
segments = seg[::2]
separator = seg[1::2]
print("Segments are ---->>")
for s in segments:
print (s)
print("Separators are ---->>")
for p in separator:
print (p)
Answer: The regex may be trying to do too many things at once. A simpler and
more manageable way would be to first detect the string type (html, list or
ideal) and then invoke the appropriate processor for each. Something like:
import re
string = """<h1>This is a string.</h1><a href="www.abc.com"> This is another part. <P/>"""
if re.search('<.*?>', string):
split_html(string)
elif re.search('\\d\\.', string):
split_list(string)
else:
split_ideal(string)
Also, while this may work for the cases mentioned, a generic "splitter" will be
far more complex, and I don't claim that this approach will work for all inputs.
|
Opening/running Excel file from python
Question: I need to start excel and open a file directly from python. Currently I am
using:
import os
os.system('start excel.exe file.xls')
However, I do not get the desired result. I want to open a file from a local
destination (the file is in the same folder as the program), but this code opens
the file with the same name in my home (user) directory and not from my program
directory.
Answer: The problem is that the directory where the program is located is not used.
The current working directory is. So you have to find out which directory your
program is located in, which python conveniently prepared for you in:
sys.path[0]
and either change directory to it:
os.chdir(sys.path[0])
or give full path for the file you want to open
os.system('start excel.exe "%s\\file.xls"' % (sys.path[0], ))
Note that while Windows generally accepts a forward slash as directory
separator, the command shell (`cmd.exe`) does not, so a backslash has to be used
here. `start` is Windows-specific, so it's not a big problem to hardcode it
here. More importantly, note that Windows doesn't allow `"` in file names, so the
quoting here is actually going to work (quoting is needed because the path is
quite likely to contain a space on Windows), **but it's a bad idea to quote like
this in general!**
|
Way too slow wxPython application getting data from Google Spreadsheet and User input needs speed up solution
Question: I want to make an application that reads out city names, road names, and
distances between them from a Google Spreadsheet with several worksheets.
So far I have the code below working right. It reads from the spreadsheet, and
receives input from a user who wants to find out the road name (like Road 60)
between two cities and also the distance between them. However, when I run the
application it is incredibly SLOW.
I think I have a server-side vs. client-side issue, but after reading tons of
documentation I'm very confused. Maybe I should consider an entirely different
approach. Maybe I need to read out the whole spreadsheet with gspread and work
only on the client side. Anyway. Now it's slow, and I want to have thousands of
cities in my spreadsheet later on, and I will probably add some more data about
them, like whether it is a country road, a highway, a national road, etc.; it
will take ages until it returns the result with my current code.
Please help and please mind that I'm new to python, wxPython, Google API's or
to IGraph if you suggest that I should do these things with graphs. Today I
did set up IGraph as well for my Python 2.7. Maybe it's the key to my problem?
Please at least give me the right way, the right tutorials. I'm not expecting
anyone to do the dirty job for me. Thank you in advance!!!
import gdata.spreadsheet.service
import gdata.service
import atom.service
import gdata.spreadsheet
import atom
import gspread
import wx
import gdata.docs
import gdata.docs.service
import re
import os
import csv
import math
class laci(wx.Frame):
def __init__(self,parent,id):
wx.Frame.__init__(self,parent,id,'distance calculator',size=(500,500))
panel=wx.Panel(self)
test=wx.TextEntryDialog(None,"Beginning point: ",'Name of beginning point','...')
if test.ShowModal()==wx.ID_OK:
all1=test.GetValue()
test2=wx.TextEntryDialog(None,"Finishing point: ",'Name of finish','...')
if test2.ShowModal()==wx.ID_OK:
all2=test2.GetValue()
c = gspread.Client(auth=('[email protected]','.............'))
c.login()
# open spreadsheet
sht=c.open_by_key('.................................')
worksheet = sht.get_worksheet(0)
print worksheet
i = 1
j = 1
What = None
first_col = worksheet.col_values(1)
print first_col
stopper = 0
n = 3
m = 3
while worksheet.cell(i,1).value != None and stopper != 1:
if worksheet.cell(i,1).value == all1:
print all1
stopper = 1
else:
i = i+1
print i
if worksheet.cell(i,1).value == None:
boxy=wx.MessageDialog(None,'Wrong start point. You wanna see correct start points list?','Mistake?',wx.YES_NO)
answer=boxy.ShowModal()
boxy.Destroy
if answer == 5103:
boxl=wx.SingleChoiceDialog(None,'Accepted Starting point names:','In case of mistake',
['City1','City2','City3','City4','City5','City6','City7','City8'])
if boxl.ShowModal()==wx.ID_OK:
all1=boxl.GetStringSelection()
stopper = 0 # figyelj
i = 1
print all1
boxl.Destroy
else:
print 'how unfortunate'
if stopper == 1:
sline = []
while worksheet.cell(i,n).value != None:
line = worksheet.cell(i,n).value
sline.append(line)
n = n + 1
print sline
slinestr = str(sline)
stopper2 = 0
print sline
while worksheet.cell(j,1).value != None and stopper2 != 1:
if worksheet.cell(j,1).value == all2:
print all2
stopper2 = 1
else:
j = j+1
print j
if worksheet.cell(j,1).value == None:
boxz=wx.MessageDialog(None,'Wrong Finish point? Wanna see correct choices?','Mistake?',wx.YES_NO)
answer2=boxz.ShowModal()
boxz.Destroy
if answer2 == 5103:
boxl2=wx.SingleChoiceDialog(None,'Accepted Finishing point names:','In case of mistake',
['City1','City2','City3','City4','City5','City6','City7','City8'])
if boxl2.ShowModal()==wx.ID_OK:
all2=boxl2.GetStringSelection()
print all2
boxl2.Destroy
else:
print 'how unfortunate'
if stopper2 == 1:
sline2 = []
while worksheet.cell(j,m).value != None:
line2 = worksheet.cell(j,m).value
sline2.append(line2)
m = m + 1
print sline2
slinestr2 = str(sline2)
print sline
print sline2
t = list(set(sline) & set(sline2))
print t
t = t[0]
t = str(t)
worksheet2 = sht.worksheet(t)
print worksheet2
print worksheet2.cell(2,2)
i = 2
j = 2
iszam = 1
iszam2 = 1
stopi = 0
stopi2 = 0
km = 0
while worksheet2.cell(i,2).value != None and stopi != 1:
if worksheet2.cell(i,2).value == all1:
iszam = i
print iszam
print worksheet2.cell(i,3)
stopi = 1
i = i + 1
print i
while worksheet2.cell(j,2).value != None and stopi2 != 1:
if worksheet2.cell(j,2).value == all2:
iszam2 = j
print iszam2
print worksheet2.cell(j,3)
stopi2 = 1
j = j + 1
print j
if iszam2 < iszam:
while iszam2 != iszam:
km = km + int(worksheet2.cell(iszam2+1,3).value)
iszam2 = iszam2 + 1
print km
elif iszam2 > iszam:
while iszam != iszam2:
km = km + int(worksheet2.cell(iszam+1,3).value)
iszam = iszam + 1
print km
else:
km = 0
print km
km = str(km)
wx.StaticText(panel, -1, all1, (20,30))
wx.StaticText(panel, -1, slinestr, (80,30))
wx.StaticText(panel, -1, all2, (20,60))
wx.StaticText(panel, -1, slinestr2, (80,60))
wx.StaticText(panel, -1, 'Path =', (20,90))
wx.StaticText(panel, -1, t, (80,90))
wx.StaticText(panel, -1, 'Distance =', (20,120))
wx.StaticText(panel, -1, km, (80,120))
if __name__=='__main__':
app=wx.PySimpleApp() #runs it
frame=laci(parent=None,id=-1) #face of programme
frame.Show()
app.MainLoop()
Answer: The first place to start IMHO is to find **where** the delay is.
I'd start by moving the code into different functions/methods - that way
you'll be able to profile the code and see where the slowness is.
e.g. (you probably want it broken down further than this):
* in the constructor just set up the wx objects
* then have a function that communicates with Google
* and another to write the data to wx.
After that you'll be able to do some profiling either yourself or using the
python profiler: (<http://docs.python.org/2/library/profile.html>). The most
important thing is finding what is taking the time and then you will know what
needs improving.
My _guess_ would be that you don't want to be doing it all remotely, but should
grab as much data as you think you'll need straight away and then do the
searching locally, as I wouldn't be surprised if each function call on the
spreadsheet results in a round trip to the server. But I've not used the Google
Spreadsheet API, so that is **_just a hunch_**.
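For example, a minimal sketch of the "grab it all at once, then search locally"
idea, using the `worksheet` and `all1` variables from your code and assuming
gspread's `get_all_values()` method (one request per worksheet instead of one
per cell):
    rows = worksheet.get_all_values()      # list of rows, each a list of cell strings
    first_col = [row[0] for row in rows]   # all of column 1, now available locally
    if all1 in first_col:
        i = first_col.index(all1) + 1      # same 1-based row index your loops expect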
|
Arduino, python, pyserial and socket
Question: I am trying to write a simple webserver on an Arduino to test a few things,
but I couldn't find my Arduino with Ethernet on it.
"No worries" I thought, "I'll just write a socket server in python that acts
as a proxy for the serial connection".
import socket
import serial
host = ''
port = 8001
buffSize= 1024
serverSocket = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
serverSocket.bind((host, port))
serverSocket.listen(1)
ser = serial.Serial('COM3', 115200, timeout=None, dsrdtr =False,rtscts =False,xonxoff =False)
print "Listening..."
send = ""
while 1:
conn, remoteAddr = serverSocket.accept()
print "Connection...."
data = conn.recv(buffSize)
print "Recieved"
ser.write("%s\n"%data)
print "Sent"
print "Attempting to get reply"
while ser.inWaiting()>0:
conn.send( ser.read())
conn.close()
serverSocket.close()
However, whatever I try, it seems that the connection made by the browser
resets randomly and I'd get multiple rows of data. And the script resets the
Arduino each time it connects or disconnects from the serial port. I tried
using RealTerm and I got a proper answer, but python and serialness is just a
mess.
Can anyone help me?
Answer: Use the
[tcp_serial_redirect.py](http://pyserial.svn.sourceforge.net/viewvc/%2acheckout%2a/pyserial/trunk/pyserial/examples/tcp_serial_redirect.py)
script in the PySerial documentation. It's all you need.
|
skip excel lines with python
Question: I am making a Python script that parses an Excel file using the `xlrd`
library. What I would like is to do calculations on different columns `if` the
cells contain a certain value. Otherwise, skip those values. Then store the
output in a dictionary. Here's what I tried to do :
import xlrd
workbook = xlrd.open_workbook('filter_data.xlsx')
worksheet = workbook.sheet_by_name('filter_data')
num_rows = worksheet.nrows -1
num_cells = worksheet.ncols - 1
first_col = 0
scnd_col = 1
third_col = 2
# Read Data into double level dictionary
celldict = dict()
for curr_row in range(num_rows) :
cell0_val = int(worksheet.cell_value(curr_row+1,first_col))
cell1_val = worksheet.cell_value(curr_row,scnd_col)
cell2_val = worksheet.cell_value(curr_row,third_col)
if cell1_val[:3] == 'BL1' :
if cell2_val=='toSkip' :
continue
elif cell1_val[:3] == 'OUT' :
if cell2_val == 'toSkip' :
continue
if not cell0_val in celldict :
celldict[cell0_val] = dict()
# if the entry isn't in the second level dictionary then add it, with count 1
if not cell1_val in celldict[cell0_val] :
celldict[cell0_val][cell1_val] = 1
# Otherwise increase the count
else :
celldict[cell0_val][cell1_val] += 1
So here as you can see, I count the number of "cell1_val" values for each
"cell0_val". But I would like to skip those values which have "toSkip" in the
adjacent column's cell before doing the sum and storing it in the dict. I am
doing something wrong here, and I feel like the solution is much more simple.
Any help would be appreciated. Thanks.
Here's an example of my sheet :
cell0 cell1 cell2
12 BL1 toSkip
12 BL1 doNotSkip
12 OUT3 doNotSkip
12 OUT3 toSkip
13 BL1 doNotSkip
13 BL1 toSkip
13 OUT3 doNotSkip
Answer: Use
[`collections.defaultdict`](http://docs.python.org/2/library/collections.html#collections.defaultdict)
with
[`collections.Counter`](http://docs.python.org/2/library/collections.html#counter-
objects) for your nested dictionary.
Here it is in action:
>>> from collections import defaultdict, Counter
>>> d = defaultdict(Counter)
>>> d['red']['blue'] += 1
>>> d['green']['brown'] += 1
>>> d['red']['blue'] += 1
>>> pprint.pprint(d)
{'green': Counter({'brown': 1}),
'red': Counter({'blue': 2})}
Here it is integrated into your code:
from collections import defaultdict, Counter
import xlrd
workbook = xlrd.open_workbook('filter_data.xlsx')
worksheet = workbook.sheet_by_name('filter_data')
first_col = 0
scnd_col = 1
third_col = 2
celldict = defaultdict(Counter)
for curr_row in range(1, worksheet.nrows): # start at 1 skips header row
cell0_val = int(worksheet.cell_value(curr_row, first_col))
cell1_val = worksheet.cell_value(curr_row, scnd_col)
cell2_val = worksheet.cell_value(curr_row, third_col)
if cell2_val == 'toSkip' and cell1_val[:3] in ('BL1', 'OUT'):
continue
celldict[cell0_val][cell1_val] += 1
I also combined your if-statments and changed the calculation of `curr_row` to
be simpler.
|
python dictmixin object with property decorator
Question: I was instructed to use the more Pythonic way of writing setters and
getters, the @property decorator. So we have something like this:
from UserDict import DictMixin
class A(dict):
def __init__(self, a, b):
self.a = a
self.b = b
@property
def a(self):
return self._a
@a.setter
def a(self, value):
self._a = value
def __getitem__(self, key):
return getattr(self, key)
def __setitem__(self, key, value):
setattr(self, key, value)
def keys(self):
return [k for k in self.__dict__.keys() if not k.startswith('_')]
def do_whatever(self):
pass
a = A(1,2)
print a.keys()
output is ['b'] and at first I wasn't expecting that, but it actually makes
sense.
Question is how to get all properties names but not names of methods. Any
ideas?
Answer: Properties are implemented as
[descriptors](http://docs.python.org/2/reference/datamodel.html#descriptors),
and so belong to the _class_ , not the _instance_ :
>>> class Question(object):
... @property
... def answer(self):
... return 7 * 6
...
>>> q = Question()
>>> q.answer
42
>>> q.__dict__
{}
>>> for key, value in Question.__dict__.iteritems():
... if isinstance(value, property):
... print key
...
answer
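Applied to your class, `keys()` could therefore be based on the class dictionary
instead of `self.__dict__`; a rough sketch:
    def keys(self):
        cls = type(self)
        return [name for name, value in cls.__dict__.iteritems()
                if isinstance(value, property)]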
|
How do I open a .py file in python from a .py file in pypy?
Question: My program currently consists of 2 .py files.
I run the main part of the code in pypy (which is much faster) and then I open
a second file in python that plots my data using `matplotlib.pyplot`.
I have managed to open it using:
subprocess.Popen(['C:\\python26\\python.exe ','main_plot.py',])
which opens my second file...
import matplotlib.pyplot as pyplot
def plot_function(NUMBER):
'''some code that uses the argument NUMBER'''
pyplot.figure()
---plot some data---
pyplot.show()
However, I would like to be able to pass arguments to the `plot_function` that
opens in python. Is that possible?
Answer: Yes, the Popen constructor takes a list of any length. [See the note
here.](http://docs.python.org/2/library/subprocess.html#popen-constructor) So
just add the arguments for main_plot.py to your list (note that every item must
be a string):
    subprocess.Popen(['C:\\python26\\python.exe', 'main_plot.py', '-n', '1234'])
EDIT (to respond to your edit):
You need to modify main_plot.py to accept a command line argument to call your
function. This will do it:
import matplotlib.pyplot as pyplot
def plot_function(NUMBER):
'''some code that uses the argument NUMBER'''
pyplot.figure()
---plot some data---
pyplot.show()
import argparse
if __name__=="__main__":
argp=argparse.ArgumentParser("plot my function")
argp.add_argument("-n","--number",type=int,default=0,required=True,help="some argument NUMBER, change type and default accordingly")
args=argp.parse_args()
plot_function(args.number)
|
best way to visualize google map in python
Question: I am wondering what the best way is to visualize Google Maps using
Python and PyGTK; for now I have tried this:
import gtk
import webkit
win = gtk.Window()
win.set_default_size(600, 400)
web = webkit.WebView()
web.open('https://google-developers.appspot.com/maps/documentation/javascript/examples/map-simple')
map=gtk.Frame('Maps')
map.add(web)
win.add(map)
win.show_all()
gtk.main()
but there are some problems; for example, you cannot resize the window
properly, and many exceptions are thrown when you try to use Street View. Does
there exist a correct way to do such a thing?
Answer: I have found a solution:
import gtk
import webkit
win = gtk.Window()
win.set_default_size(600, 400)
web = webkit.WebView()
web.open('https://google-developers.appspot.com/maps/documentation/javascript/examples/map-simple')
map=gtk.Frame('Maps')
scroll = gtk.ScrolledWindow()
scroll.add(web)
map.add(scroll)
win.add(map)
win.show_all()
gtk.main()
You just have to add a scrolled view.
|
ctypes/C++ segfault accessing member variables
Question: I am no stranger to the python ctypes module, but this is my first attempt at
combining C++, C and Python all in one code. My problem seems to be very
similar to [Seg fault when using ctypes with Python and
C++](http://stackoverflow.com/questions/12142766/seg-fault-when-using-ctypes-
with-python-and-c?lq=1), however I could not seem to solve the problem in the
same way.
I have a simple C++ file called `Header.cpp`:
#include <iostream>
class Foo{
public:
int nbits;
Foo(int nb){nbits = nb;}
void bar(){ std::cout << nbits << std::endl; }
};
extern "C" {
Foo *Foo_new(int nbits){ return new Foo(nbits); }
void Foo_bar(Foo *foo){ foo->bar(); }
}
which I compile to a shared library using:
g++ -c Header.cpp -fPIC -o Header.o
g++ -shared -fPIC -o libHeader.so Header.o
and a simple Python wrapper called `test.py`:
import ctypes as C
lib = C.CDLL('./libHeader.so')
class Foo(object):
def __init__(self,nbits):
self.nbits = C.c_int(nbits)
self.obj = lib.Foo_new(self.nbits)
def bar(self):
lib.Foo_bar(self.obj)
def main():
f = Foo(32)
f.bar()
if __name__ == "__main__":
main()
I would expect that when I call test.py, I should get the number 32 printed to
screen. However, all I get is a segmentation fault. If I change the
constructor to return the class instance on the stack (i.e. without the `new`
call) and then pass around the object, the program performs as expected. Also,
if I change the `bar` method in the `Foo` class such that it does not use the
`nbits` member, the program does not seg fault.
I have a limited understanding of C++, but the fact that I can make this
function as expected in C and in C++ but not in Python is a little confusing.
Any help would be greatly appreciated.
**Update** : Thanks to one of the comments below, the problem has been solved.
In this case, an explicit declaration of both restype and argtypes for the C
functions was required, i.e. the following was added to the code:
lib.Foo_new.restype = C.c_void_p;
lib.Foo_new.argtypes = [C.c_int32];
lib.Foo_bar.restype = None;
lib.Foo_bar.argtypes = [C.c_void_p];
Answer: I would try the following:
extern "C"
{
Foo *Foo_new(int nbits)
{
Foo *foo = new Foo(nbits);
printf("Foo_new(%d) => foo=%p\n", nbits, foo);
return foo;
}
void Foo_bar(Foo *foo)
{
printf("Foo_bar => foo=%p\n", foo);
foo->bar();
}
}
to see if the values of `foo` match.
Also, you might want to look at Boost.Python to simplify creating Python
bindings of C++ objects.
|
'==' operator not working correctly in python
Question: The following piece of code is behaving incorrectly in my script:
from ctypes import *
base_addr = c_uint64(0)
base_addr_temp = c_uint64(0)
print base_addr
print base_addr_temp
if(base_addr == base_addr_temp):
print "val"
Output that I get:
> c_ulong(0L)
>
> c_ulong(0L)
I am using python 2.7.3 version.
Answer: I think that because these are objects, you'd have to compare them by value:
base_addr.value == base_addr_temp.value
It's been a while since I've done any Python, but in many languages, two
objects are only considered "equal" if they actually refer to the same object
(i.e. reference the same location in memory).
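Applied to your snippet, the check becomes:
    if base_addr.value == base_addr_temp.value:
        print "val"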
|
How to select a specific Test with python unittest testsuites
Question: I have python unittest code organized as follows:
Maindir
|
|--Dir1
| |
| |-- test_A.py
| |-- test_B.py
| |-- test_C.py
|
|--Dir2
| ...
I assume you get the picture. In each of the `Dirx` directories I have a file
named `suite.py` which puts together a suite of tests from the tests in the
given directory (so you can choose specific tests, omit other tests, etc.).
These files look e.g. like the following (in this case they choose all tests,
but they might also select only a subset of tests) [also consider test <->
unittest]:
import test_A
import test_B
import test_C
suite1 = test.TestSuite()
suite1.addTests(test.TestLoader().loadTestsFromTestCase(test_A.MyTest))
suite1.addTests(test.TestLoader().loadTestsFromTestCase(test_B.MyTest))
suite1.addTests(test.TestLoader().loadTestsFromTestCase(test_C.MyTest))
The main Runner `execall.py` in the `Maindir` directory looks like this
from Dir1.suite import suite1
from Dir2.suite import suite2
suite_all = test.TestSuite([
suite1,
suite2])
if __name__ == '__main__':
test.main(defaultTest='suite_all')
Now I can do the following:
* Run all tests: 'execall.py' (as documented)
* Run a specific suite: `execall.py suite1` (as documented)
But how do I run only a specific single test? And how do I run all tests of a
specific file? I tried the following without success, always with the same error:
`'TestSuite' object has no attribute 'xxx'`
execall.py suite1.test_A
execall.py suite1.test_A.test1
execall.py test_A
execall.py test_A.test1
`execall.py -h` gives very specific examples of how to run single tests or
tests in test cases, but in my case this does not seem to work. Any ideas are
appreciated.
Answer: One way to do it would be writing your own tests loader. I highly recommend
adopting the mechanism found in [Flask's testsuite
module](https://github.com/mitsuhiko/flask/blob/master/flask/testsuite/__init__.py).
The basic idea is:
1. Implement a routine which returns a `unittest.TestSuite()` object with all the Python modules which contain the required tests. This could be done e.g. by scanning the directory for `test_XXX.py` files (simply checking them by `startswith('test')` or regexp etc.).
2. Subclass `unittest.TestLoader` and override `loadTestsFromName(self, name, module)` which would use the testsuite generated in Step1. For example, in Flask:
for testcase, testname in find_all_tests(suite):
if testname == name or \
testname.endswith('.' + name) or \
('.' + name + '.') in testname or \
testname.startswith(name + '.'):
all_tests.append(testcase)
This allows loading tests by Python module name, by test suite (test class)
name or just by test case name; a rough sketch of such a loader is given below.
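A rough sketch of such a loader (the names are illustrative, it is not the
exact Flask implementation, and it assumes the `suite_all` object from your
`execall.py`):
    import unittest

    def iter_tests(suite):
        """Flatten a TestSuite into its individual test cases."""
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                for test in iter_tests(item):
                    yield test
            else:
                yield item

    class NameFilteringLoader(unittest.TestLoader):
        """Load tests whose dotted id() contains the requested name."""

        def __init__(self, suite):
            self.suite = suite

        def loadTestsFromName(self, name, module=None):
            # simple substring match; Flask's version is a bit smarter
            matched = [t for t in iter_tests(self.suite) if name in t.id()]
            if matched:
                return unittest.TestSuite(matched)
            # fall back to the standard lookup (e.g. for 'suite_all' itself)
            return unittest.TestLoader.loadTestsFromName(self, name, module)

    if __name__ == '__main__':
        unittest.main(defaultTest='suite_all',
                      testLoader=NameFilteringLoader(suite_all))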
|
Estimating an Affine Transform between Two Images
Question: I have a sample image:

I apply the affine transform with the following warp matrix:
[[ 1.25 0. -128 ]
[ 0. 2. -192 ]]
and crop a 128x128 part from the result to get an output image:

Now, I want to estimate the warp matrix and crop size/location from just
comparing the sample and output image. I detect feature points using SURF, and
match them by brute force:

There are many matches, of which I'm keeping the best three (by distance),
since that is the number required to estimate the affine transform. I then use
those 3 keypoints to estimate the affine transform using getAffineTransform.
However, the transform it returns is completely wrong:
-0.00 1.87 -6959230028596648489132997794229911552.00
0.00 -1.76 -0.00
What am I doing wrong? Source code is below.
Perform affine transform (Python):
"""Apply an affine transform to an image."""
import cv
import sys
import numpy as np
if len(sys.argv) != 10:
print "usage: %s in.png out.png x1 y1 width height sx sy flip" % __file__
sys.exit(-1)
source = cv.LoadImage(sys.argv[1])
x1, y1, width, height, sx, sy, flip = map(float, sys.argv[3:])
X, Y = cv.GetSize(source)
Xn, Yn = int(sx*(X-1)), int(sy*(Y-1))
if flip:
arr = np.array([[-sx, 0, sx*(X-1)-x1], [0, sy, -y1]])
else:
arr = np.array([[sx, 0, -x1], [0, sy, -y1]])
print arr
warp = cv.fromarray(arr)
cv.ShowImage("source", source)
dest = cv.CreateImage((Xn, Yn), source.depth, source.nChannels)
cv.WarpAffine(source, dest, warp)
cv.SetImageROI(dest, (0, 0, int(width), int(height)))
cv.ShowImage("dest", dest)
cv.SaveImage(sys.argv[2], dest)
cv.WaitKey(0)
Estimate affine transform from two images (C++):
#include <stdio.h>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>
using namespace cv;
void readme();
bool cmpfun(DMatch a, DMatch b) { return a.distance < b.distance; }
/** @function main */
int main( int argc, char** argv )
{
if( argc != 3 )
{
return -1;
}
Mat img_1 = imread( argv[1], CV_LOAD_IMAGE_GRAYSCALE );
Mat img_2 = imread( argv[2], CV_LOAD_IMAGE_GRAYSCALE );
if( !img_1.data || !img_2.data )
{
return -1;
}
//-- Step 1: Detect the keypoints using SURF Detector
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect( img_1, keypoints_1 );
detector.detect( img_2, keypoints_2 );
//-- Step 2: Calculate descriptors (feature vectors)
SurfDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute( img_1, keypoints_1, descriptors_1 );
extractor.compute( img_2, keypoints_2, descriptors_2 );
//-- Step 3: Matching descriptor vectors with a brute force matcher
BFMatcher matcher(NORM_L2, false);
std::vector< DMatch > matches;
matcher.match( descriptors_1, descriptors_2, matches );
double max_dist = 0;
double min_dist = 100;
//-- Quick calculation of max and min distances between keypoints
for( int i = 0; i < descriptors_1.rows; i++ )
{ double dist = matches[i].distance;
if( dist < min_dist ) min_dist = dist;
if( dist > max_dist ) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist );
printf("-- Min dist : %f \n", min_dist );
//-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
//-- PS.- radiusMatch can also be used here.
sort(matches.begin(), matches.end(), cmpfun);
std::vector< DMatch > good_matches;
vector<Point2f> match1, match2;
for (int i = 0; i < 3; ++i)
{
good_matches.push_back( matches[i]);
Point2f pt1 = keypoints_1[matches[i].queryIdx].pt;
Point2f pt2 = keypoints_2[matches[i].trainIdx].pt;
match1.push_back(pt1);
match2.push_back(pt2);
printf("%3d pt1: (%.2f, %.2f) pt2: (%.2f, %.2f)\n", i, pt1.x, pt1.y, pt2.x, pt2.y);
}
//-- Draw matches
Mat img_matches;
drawMatches( img_1, keypoints_1, img_2, keypoints_2, good_matches, img_matches,
Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
//-- Show detected matches
imshow("Matches", img_matches );
imwrite("matches.png", img_matches);
waitKey(0);
Mat fun = getAffineTransform(match1, match2);
for (int i = 0; i < fun.rows; ++i)
{
for (int j = 0; j < fun.cols; j++)
{
printf("%.2f ", fun.at<float>(i,j));
}
printf("\n");
}
return 0;
}
/** @function readme */
void readme()
{
std::cout << " Usage: ./SURF_descriptor <img1> <img2>" << std::endl;
}
Answer: The cv::Mat that getAffineTransform returns is made of doubles, not of
floats. The matrix you get is probably fine; you just have to change the printf
command in your loops to
printf("%.2f ", fun.at<double>(i,j));
or even easier: Replace this manual output with
std::cout << fun << std::endl;
It's shorter and you don't have to care about data types yourself.
|
Easiest way to draw tree diagrams?
Question: I want to have a program which generates diagrams for trees, ones looking
pretty much like [this](http://i.imgur.com/P11MX.jpg)
It is to be a part of a project I am working on using C#, but if there is a way
to make Python or JavaScript do it, that would be okay as well. Maybe some C#
library, or a JavaScript/Python library with parameters I can provide to it?
The most important thing, regardless of the programming language, is that it be
easy to use.
Answer: You may want to use [pydot](http://code.google.com/p/pydot/), which is an
interface to the [Graphviz](http://www.graphviz.org/) .DOT format
visualization software. As outlined in the Graphviz
[guide](http://www.graphviz.org/pdf/dotguide.pdf), the .DOT format lets you
design graphs similar to the one you posted as well as more much more
sophisticated ones.
Here's an example from the `pydot` doc:
import pydot
graph = pydot.Dot('graphname', graph_type='digraph')
subg = pydot.Subgraph('', rank='same')
subg.add_node(pydot.Node('a'))
graph.add_subgraph(subg)
subg.add_node(pydot.Node('b'))
subg.add_node(pydot.Node('c'))
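Since the question is specifically about tree diagrams, here is a small sketch
that adds edges (the node names are made up) and renders the result, using the
same pydot calls:
    import pydot

    graph = pydot.Dot('tree', graph_type='graph')
    # the parent/child pairs below are purely illustrative
    for parent, child in [('root', 'left'), ('root', 'right'), ('left', 'leaf')]:
        graph.add_edge(pydot.Edge(parent, child))
    graph.write_png('tree.png')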
If you're looking at Javascript instead,
[canviz](http://code.google.com/p/canviz/) is a well-respected library that
allows you to draw .DOT graphs to browser canvases.
|
What is the naming convention for Python class references
Question: What is the naming convention for a variable referencing a class in Python?
class MyClass(object):
pass
# which one is correct?
reference_to_class = MyClass
# or
ReferenceToClass = MyClass
Here is another example that resembles my situation:
# cars.py
class Car(object):
pass
class Sedan(Car):
pass
class Coupe(Car):
pass
class StatonWagon(Car):
pass
class Van(Car):
pass
def get_car_class(slug, config):
return config.get(slug)
# config.py
CONFIG = {
'ford-mustang': Coupe,
'buick-riviera': Coupe,
'chevrolet-caprice': Sedan,
        'chevy-van': Van,
'ford-econoline': Van
}
# main.py
    from config import CONFIG
from cars import get_car_class
    MyCarClass = get_car_class('buick-riviera', CONFIG)
my_car = MyCarClass()
I would prefer ReferenceToClass, so that everybody new to the code knows it's a
class and not an instance. But as @poplitea wrote, a literature reference would
be great.
Answer: On module level the second. As a function argument, the first.
|
Selecting conserved residues in protein using PyMol script by importing residue list from a text file
Question: I wanted to select certain highly conserved residues in a protein (computed by
a scoring mechanism and listed in a text file - each residue in a single line)
using a PyMol script. The PyMol script below that I am using is not working. I
will be very grateful to you if someone could help me out.
Parts of the script work perfectly fine when run separately: the PyMOL part
works when a residue number is written directly into the script without
importing the list from the text file, and the plain Python part that loads
numbers from a file into an array also works fine on its own. But the problem
appears when they are combined as in my script below, where the residue number
needs to be taken from the array as i after importing the list from the text
file. Any help is appreciated. Thanks!
#!/usr/bin/python
from pymol import cmd
import string
cmd.load("/home/xyz/proteinA.pdb", 'protein')
f=open('/home/xyz/residuedata.txt','r')
array = []
filecontents = f.read().split()
for val in filecontents:
array.append(int(val))
f.close()
for i in array:
cmd.select("residuedata", "resi i")
cmd.show('sphere', "residuedata")
cmd.color ('red', "residuedata")
Answer: The `"resi i"` string does not know about your `i` variable.
Did you intend to do this?
for i in array:
cmd.select("residuedata", "resi {}".format(i))
|
python script to open excel workbook, and change the name of the first worksheet and execute stored proc
Question: I have this Python script that's supposed to open an Excel workbook from a
directory, change the name of the first worksheet, and then run a stored
procedure on the first worksheet and the subsequent worksheet. When I run the
script it gives me an error: 'str' object attribute 'title' is read-only. Any
help or suggestions are appreciated.
import os
import pyodbc
import openpyxl
from openpyxl import load_workbook
dirList = os.listdir("""\\Raw_Data\\HSRx""")
#database connection
#loop through excel workbooks
#loop through excel sheets in each workbook
#run stored proc on each worksheet
#close files and disconnect to sql server
conn = pyodbc.connect('DRIVER={SQLServer};SERVER=localhost;DATABASE=mydb;UID=myname;PWD=mypwd')
cursor = conn.cursor()
for n in dirList:
path =os.path.join("""\\Raw_Data\\HSRx""",n)
workbook = openpyxl.reader.excel.load_workbook(path)
worksheets = workbook.get_sheet_names()
sheet = ['None','None']
i = 0
print n
for worksheet in worksheets:
worksheet.title = "Sheetone"
sheet[i] = "Sheetone"
i += 1
print worksheet
cursor.execute("""exec SP_IMPORT_HSRx_Ins ?, ?, ?""",n,sheet[0],sheet[1])
cursor.commit()
conn.close()
Here is the traceback:
    CS NOV20 12.xlsx
    Traceback (most recent call last):
      File "HSRXRecons.py", line 25, in <module>
        worksheet.title = "Sheetone"
    AttributeError: 'str' object attribute 'title' is read-only
Answer: `worksheet` is a string, because `get_sheet_names()` returns a list of
strings. You cannot assign attributes to strings.
>>> 'Sheet1'.title = 'Sheetone'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'str' object attribute 'title' is read-only
To change the title pass the sheet name to `get_sheet_by_name`:
workbook.get_sheet_by_name(worksheet).title = "Sheetone"
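Since only the first worksheet actually needs renaming, a minimal sketch of that part (reusing the `workbook` loaded in the question) could be:
    sheet_names = workbook.get_sheet_names()
    workbook.get_sheet_by_name(sheet_names[0]).title = "Sheetone"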
|
Changing str to int in Python
Question: I am new to Python. I am trying to make a small elevator program.
The code:
import time
elavator = True
# 0 is B floor // basement
elavatorFloors = [0,1,2,3,4,5]
while elavator == True:
getLevel = input ("Which floor would you like to visit? ( 1 - 5) ")
time.sleep(2)
print "you're now at floor, " +int.getLevel
I get this error: `AttributeError: type object 'int' has no attribute
'getLevel'`. Aside from changing the str to an int, am I using any bad technique? I
really want to improve my programming thinking and code writing. Thank you :)
Answer: `int()` is a type, so to instantiate it, you need to do `int(getLevel)`, not
`int.getLevel`.
Check [the docs](http://docs.python.org/3/library/functions.html#int) for
examples and more.
Also note that `getLevel` is an odd name for the variable, as it implies it is
a function that gives the level. A better name would be just `level`.
It's also worth noting that `input()` evaluates the input as Python code (in
Python 2.x; in 3.x it acts as `raw_input()` does in older versions), so
`getLevel` will already be a number, although I would recommend using
`raw_input()` instead and keeping your conversion from a string to an
integer, as it won't allow arbitrary execution of code, which is much better.
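Putting those pieces together, a short sketch of the prompt (the variable name is only a suggestion):
    import time
    level = int(raw_input("Which floor would you like to visit? (1 - 5) "))
    time.sleep(2)
    print "You're now at floor", level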
|
Python MultiThreading Queue suddenly stops doing anything
Question: I have a file that contains 600K+ lines of stuff I want to process.
So I use multithreading to speed up the process.
But the problem is that if, for example, I use 50 as the number of threads,
then after processing 50 lines the script just does nothing else. It doesn't
terminate, nor does it show anything else.
This is my code for reference:
#!/usr/bin/env python
from __future__ import print_function
import re
import sys
from Queue import *
from threading import Thread, Lock
#struct parameters
if len(sys.argv) != 3: # the program name and the two arguments
# stop the program and print an error message
sys.exit("Usage: python " + sys.argv[0] + " filename maxthreads")
accountlist = sys.argv[1]
maxthreads = int(sys.argv[2])
def dojob(email, password):
#here is some job to process all my users data
#end dojob
#this function will process the items in the queue, in serial
def processor():
if queue.empty() == True:
print ("the Queue is empty!")
sys.exit(1)
try:
job = queue.get()
job = job.strip('\r\n')
newdata = job.split(':')
email = newdata[0]
password = newdata[1]
#pass to dojob and process
print("Processing:", email)
dojob(email, password)
queue.task_done()
except:
print ("Failed to operate on job")
#set variables
queue = Queue()
threads = maxthreads
#a list of job items. you would want this to be more advanced,like reading from a file or database
jobs = open(accountlist)
#iterate over jobs and put each into the queue in sequence
for job in jobs:
print ("inserting job into the queue:", job)
queue.put(job)
#start some threads, each one will process one job from the queue
for i in range(threads):
th = Thread(target=processor)
th.setDaemon(True)
th.start()
#wait until all jobs are processed before quitting
queue.join()
Any ideas why it just stops processing?
Sample output:
#for example thread is 2
inserting job into queue: [email protected]
inserting job into queue: [email protected]
inserting job into queue: [email protected]
(...until the end of the file...)
    #once everything was added to the queue, it starts processing.
    processing: [email protected]
    processing: [email protected]
    #then here the problem occurs, it doesn't do anything else.
    #it doesn't continue to the next queued job.
Answer: It sounds like you need a loop inside `processor()`:
def processor():
while not queue.empty():
try:
job = queue.get()
...
Otherwise, every thread processes one job and stops.
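A slightly fuller sketch of that worker, reusing `queue` and `dojob` from the question and using a non-blocking `get` so each thread exits once the queue is drained (one possible shape, not the only one):
    from Queue import Empty
    def processor():
        while True:
            try:
                job = queue.get(block=False)
            except Empty:
                return  # nothing left, let this thread finish
            try:
                email, password = job.strip('\r\n').split(':')
                print("Processing:", email)
                dojob(email, password)
            finally:
                queue.task_done()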
> I use multithreading to speed up the process.
Depending on the nature of the processing, you may or may not get any speedup
from using multiple threads. This has to do with the [Global Interpreter Lock
(GIL)](http://en.wikipedia.org/wiki/Global_Interpreter_Lock). If you find that
you're not getting any speedup due to the GIL, you might want to consider
using the
[`multiprocessing`](http://docs.python.org/2/library/multiprocessing.html)
module.
|
Django 1.5b1: executing django-admin.py causes "No module named settings" error
Question: I've recently installed Django-1.5b1. My system configuration:
* OSX 10.8
* Python 2.7.1
* Virtualenv 1.7.2
When I call **django-admin.py** command I get the following error
(devel)ninja Django-1.5b1: django-admin.py
Usage: django-admin.py subcommand [options] [args]
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Print traceback on exception
--version show program's version number and exit
-h, --help show this help message and exit
Traceback (most recent call last):
File "/Users/sultan/.virtualenvs/devel/bin/django-admin.py", line 5, in <module>
management.execute_from_command_line()
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/core/management/__init__.py", line 452, in execute_from_command_line
utility.execute()
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/core/management/__init__.py", line 375, in execute
sys.stdout.write(self.main_help_text() + '\n')
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/core/management/__init__.py", line 241, in main_help_text
for name, app in six.iteritems(get_commands()):
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/core/management/__init__.py", line 108, in get_commands
apps = settings.INSTALLED_APPS
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/conf/__init__.py", line 52, in __getattr__
self._setup(name)
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/conf/__init__.py", line 47, in _setup
self._wrapped = Settings(settings_module)
File "/Users/sultan/.virtualenvs/devel/lib/python2.7/site-packages/django/conf/__init__.py", line 132, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'settings' (Is it on sys.path?): No module named settings
Did anyone have the same errors? Can anyone advise or help with it?
Thanks,
Sultan
Answer: I had the same issue when starting a new project. I solved the problem by
giving this command at the command prompt:
export DJANGO_SETTINGS_MODULE=
In this way I unset the variable that was pointing to a `settings` file (discovered using `env | grep DJANGO_SETTINGS_MODULE`) which I had set before starting to use _virtualenv_.
After unsetting the variable, the `django-admin.py` script worked like a charm!
|
Loop patchwork in python3
Question: I need to create a patchwork in Python 3. All I have left to do is create a
loop which makes the design border the graphics window. I know I need a for
loop; however, I am not sure how to do this.
This is what I have so far:
from graphics import *
def main():
height = eval(input("What is the height of the window"))
width = eval(input("What is the width of the window"))
colour = input("enter the colour of the patch")
win = GraphWin("Patch", 100*width, 100*height)
boat_x = 0
boat_y = 0
for x in range (4):
boat(win, boat_x, boat_y, colour)
boat_x = boat_x + 23
for i in range(height * 5):
boat(win, boat_x, boat_y, colour)
boat_x = boat_x + 24
for j in range(height * 5):
boat(win, boat_x, boat_y, colour)
boat_y = boat_y + 100
win.getMouse()
win.close()
def boat(win, x, y, colour):
body1 = Polygon(Point(1+x,95+y), Point(5+x,100+y),
Point(20+x,100+y), Point(24+x,95+y))
body1.draw(win)
line1 = Line(Point(13+x,95+y), Point(13+x,90+y))
line1.draw(win)
sail1 = Polygon(Point(1+x,90+y), Point(24+x,90+y), Point(13+x, 73+y))
sail1.setFill(colour)
sail1.draw(win)
body2 = Polygon(Point(1+x, 63), Point(5+x, 68),
Point(20+x,68), Point(24+x,63))
body2.draw(win)
line2 = Line(Point(13+x,63), Point(13+x,58))
line2.draw(win)
sail2 = Polygon(Point(1+x,58), Point(24+x, 58), Point(13+x,40))
sail2.setFill(colour)
sail2.draw(win)
body3 = Polygon(Point(1+x,28), Point(5+x,33),
Point(20+x,33), Point(24+x, 28))
body3.draw(win)
line3 = Polygon(Point(13+x,28), Point(13+x,23))
line3.draw(win)
sail3 = Polygon(Point(1+x,23), Point(24+x, 23), Point(13+x, 5))
sail3.setFill(colour)
sail3.draw(win)
main()
So far this creates the top border but nothing else. I am also aware that the
`boat` function isn't the most efficient way of drawing.
Answer: When you say that you need to "make the design border the graphics window" I
assume you want your `boat` design to be repeated several times along each
edge of the window (that is, the top, bottom, left and right).
This should be doable in two loops. One will draw the top and bottom edges,
the other two will draw the left and right edges. I'm not too sure how your
drawing code works, so I'm guessing at some offsets here:
top = 0
bottom = (height-1) * 100
for x in range(0, width*100, 25):
boat(x, top, colour)
boat(x, bottom, colour)
left = 0
right = width * 100 - 25
for y in range(100, (height-1)*100, 100):
boat(left, y, colour)
boat(right, y, colour)
This should call your `boat` subroutine every 25 pixels across the top and
bottom, and every 100 pixels along the left and right edges. Adjust the `top`,
`bottom`, `left` and `right` values and the parameters in the `range` calls in
the loops to make the spacing suit your needs (I just made it up). This code
avoids drawing the corner items twice, though depending on how the drawing
routine works that might not be necessary.
|
calling standard python functions from rubypython
Question: I am trying to use the rubypython gem. I am not sure how to call standard Python
functions like `len` and `set`. In the Python examples I see `len(text3)` and
`set(text3)`.
How do I call these in rubypython?
Here is the link to rubypython: <http://rubypython.rubyforge.org/>
Answer: Well, my Ruby knowledge is limited, and my knowledge of the `rubypython` gem
is non-existent. However, I do know that the standard functions you refer to are part
of the `__builtin__` module, which is automatically imported into the Python
namespace. Fortunately, there's nothing preventing you from importing it
explicitly again (which is perfectly safe in Python). You then _might_ be able
to do something like `__builtin__.set()`. No guarantees, though.
|
how do i write a program to click a particular link in Python
Question: My program takes a user input and searches for it on a particular webpage.
I then want it to click on a particular link and download the file present
there.
Example :
1. The webpage : <http://www.rcsb.org/pdb/home/home.do>
2. The search Word :"1AW0"
3. after you search the word on the website it takes you to : <http://www.rcsb.org/pdb/explore/explore.do?structureId=1AW0>
I want the program to go to the right-hand side of the webpage and download
the pdb file from the **DOWNLOAD FILES** option.
I have managed to write a program using the mechanize module to automatically
search the word; however, I am unable to find a way to click on a link.
My code:
import urllib2
import re
import mechanize
br = mechanize.Browser()
br.open("http://www.rcsb.org/pdb/home/home.do")
## name of the form that holds the search text area
br.select_form("headerQueryForm")
## "q" name of the teaxtarea in the html script
br["q"] = str("1AW0")
response = br.submit()
print response.read()
Any help or suggestions would be appreciated.
Btw, I am an intermediate programmer in Python and I am trying to learn the Jython
module to make this work.
Thanks in advance.
Answer: Here's how I would have done it:
'''
Created on Dec 9, 2012
@author: Daniel Ng
'''
import urllib
def fetch_structure(structureid, filetype='pdb'):
download_url = 'http://www.rcsb.org/pdb/download/downloadFile.do?fileFormat=%s&compression=NO&structureId=%s'
filetypes = ['pdb','cif','xml']
if (filetype not in filetypes):
print "Invalid filetype...", filetype
else:
try:
urllib.urlretrieve(download_url % (filetype,structureid), '%s.%s' % (structureid,filetype))
except Exception, e:
print "Download failed...", e
else:
print "Saved to", '%s.%s' % (structureid,filetype)
if __name__ == "__main__":
fetch_structure('1AW0')
fetch_structure('1AW0', filetype='xml')
fetch_structure('1AW0', filetype='png')
Which provides this output:
Saved to 1AW0.pdb
Saved to 1AW0.xml
Invalid filetype... png
Along with the 2 files `1AW0.pdb` and `1AW0.xml` which are saved to the script
directory (for this example).
<http://docs.python.org/2/library/urllib.html#urllib.urlretrieve>
|
How can I provide Sphinx documentation for a namedtuple (with autodoc)?
Question: I am trying to document a Python project with Sphinx, but I'm having trouble
combining the `autodoc` extension with a `namedtuple` generated class.
In one document, `gammatone.rst`, I have:
:mod:`gammatone` -- gammatone filterbank toolkit
================================================
.. automodule:: gammatone
:members:
.. automodule:: gammatone.coeffs
:members:
In my `gammatone/coeffs.py`, I have:
from collections import namedtuple
ERBFilterCoeffs = namedtuple(
'ERBFilterCoeffs', # Most parameters omitted
[
'A0',
'gain',
])
The code generated by `namedtuple` includes very generic docstrings that
Sphinx's `autodoc` module picks up and includes. I'd rather document the class
properly myself, without forgoing `autodoc` for the rest of the module.
I've tried putting something like this just before the class:
"""
.. class:: ERBFilterCoeffs(A0, gain)
:param A0: A0 coefficient
:param gain: Gain coefficient
Magic coefficients.
"""
...but it doesn't show up in the generated docs. Putting it after the class
results in it being nested underneath the generic class documentation, rather
than replacing it.
How do I simply tell Sphinx (and the `autodoc` extension) to use my
documentation for the `ERBFilterCoeffs` class instead of that generated by
`namedtuple`?
Answer: How about, after defining `ERBFilterCoeffs` with the namedtuple, assigning
that doc string to `ERBFilterCoeffs.__doc__`?
EDIT: Ok, how about this then:
class ERBFilterCoeffs(namedtuple('ERBFilterCoeffs','a b c')):
"""
this is the doc string for ERBFilterCoeffs
"""
|
List all files in an online directory with Python?
Question: Hello, I was just wondering: I'm trying to create a Python application that
downloads files from the internet, but at the moment it only downloads one file
whose name I already know... Is there any way that I can get a list of files in an
online directory and download them? I'll show you my code for downloading one
file at a time, just so you know a bit about what I want to do.
import urllib2
url = "http://cdn.primarygames.com/taxi.swf"
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 8192
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
print status,
f.close()
So what it does is download taxi.swf from this website, but what I want it
to do is download all of the .swf files from that directory "/" to the computer.
Is it possible? Thank you so much in advance. -Terrii-
Answer: Since you're trying to download a bunch of things at once, start by looking
for a site index or a webpage that neatly lists everything you want to
download. The mobile version of the website is usually lighter than the
desktop and is easier to scrape.
This website has exactly what you're looking for: [All
Games](http://www.primarygames.com/mobile/category/all/).
Now, it's really quite simple to do. Just extract all of the game page links.
I use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/bs4/doc/)
and [requests](http://python-requests.org/) to do this:
import requests
from bs4 import BeautifulSoup
games_url = 'http://www.primarygames.com/mobile/category/all/'
def get_all_games():
soup = BeautifulSoup(requests.get(games_url).text)
for a in soup.find('div', {'class': 'catlist'}).find_all('a'):
yield 'http://www.primarygames.com' + a['href']
def download_game(url):
# You have to do this stuff. I'm lazy and won't do it.
if __name__ == '__main__':
for game in get_all_games():
            download_game(game)
The rest is up to you. `download_game()` downloads a game given the game's
URL, so you have to figure out the location of the `<object>` tag in the DOM.
|
Python & lxml / xpath: Parsing XML
Question: I need to get the value from the FLVPath from this link :
<http://www.testpage.com/v2/videoConfigXmlCode.php?pg=video_29746_no_0_extsite>
from lxml import html
sub_r = requests.get("http://www.testpage.co/v2/videoConfigXmlCode.php?pg=video_%s_no_0_extsite" % list[6])
sub_root = lxml.html.fromstring(sub_r.content)
for sub_data in sub_root.xpath('//PLAYER_SETTINGS[@Name="FLVPath"]/@Value'):
print sub_data.text
But no data is returned.
Answer: You're using `lxml.html` to parse the document, which causes lxml to lowercase
all element and attribute names (since that doesn't matter in html), which
means you'll have to use:
sub_root.xpath('//player_settings[@name="FLVPath"]/@value')
Or, as you're parsing an XML file anyway, you could use `lxml.etree`.
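A sketch of that `lxml.etree` variant, which keeps the original element and attribute casing so the XPath from the question works unchanged (it reuses the question's URL and `list[6]`):
    import lxml.etree
    import requests
    sub_r = requests.get("http://www.testpage.co/v2/videoConfigXmlCode.php?pg=video_%s_no_0_extsite" % list[6])
    sub_root = lxml.etree.fromstring(sub_r.content)
    for value in sub_root.xpath('//PLAYER_SETTINGS[@Name="FLVPath"]/@Value'):
        print value  # the XPath returns attribute values as plain strings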
|
Python + Sqlite: Unable to determine cause of incorrect number of bindings supplied error
Question: I'll get right into it - I first created a local db for myself:
import sqlite3
conn = sqlite3.connect("tofire.db") #
cursor = conn.cursor()
# create a table
cursor.execute("""CREATE TABLE incidents
(Id INTEGER PRIMARY KEY, prime_street text, cross_street text, dispatch_time text,
incident_number text, incident_type text, alarm_level text, area text, dispatched_units text, date_added text)
""")
This went without a hitch - the next part is my function, and it uses
beautiful soup to scrape a table into a list of lists. I am then attempting to
write the information in each sublist to the sqlite database.
# Toronto Fire Calls
import urllib2
import sqlite3
import time
import csv
import threading
from bs4 import BeautifulSoup
# Beautiful Soup imports the URL as html
def getincidents ():
response = urllib2.urlopen('http://www.toronto.ca/fire/cadinfo/livecad.htm')
html = response.read()
# We give the html its own variable.
soup = BeautifulSoup(html)
# Find the table we want on the Toronto Fire Page
table = soup.find("table", class_="info")
# Find all the <td> tags in the table and assign them to variable.
cols = table.find_all('td')
# Find the length of rows, which is the number of <font> tags, and assign it to a variable num_cols.
num_cols = len(cols)
# Create an empty list to hold each of the <font> tags as an element
colslist = []
totalcols = 0
# For each <font> in cols, append it to colslist as an element.
for col in cols:
colslist.append(col.string)
totalcols = len(colslist)
# Now colslist has every td as an element from [0] to totalcols = len(colslist)
# The First 8 <font> entries are always the table headers i.e. Prime Street, Cross Street, etc.
headers = colslist[0:8]
# Prime Street
# Cross Street
# Dispatch Time
# Incident Number
# Incident Type
# Alarm Level
# Area
# Dispatched Units
# Get the indexes from 0 to the length of the original list, in steps of list_size, then create a sublist for each.
# lists = [original_list[i:i+list_size] for i in xrange(0, len(original_list), list_size)]
list_size = 8
i = 0
incidents = [colslist[i:i+list_size] for i in xrange(0, len(colslist), list_size)]
# Works!
num_inci = len(incidents) # Get the number of incidents
added = time.strftime("%Y-%m-%d %H:%M")
update = 'DB Updated @ ' + added
# SQL TIME, Connect to our db.
conn = sqlite3.connect("tofire.db")
cursor = conn.cursor()
lid = cursor.lastrowid
# Now we put each incident into our database.
for incident in incidents[1:num_inci]:
incident.append(added)
to_db = [(i[0:10]) for i in incident]
import ipdb; ipdb.set_trace()
cursor.executemany("INSERT INTO incidents (prime_street, cross_street, dispatch_time, incident_number, incident_type, alarm_level, area, dispatched_units, date_added) VALUES (?,?,?,?,?,?,?,?,?)", to_db)
conn.commit()
print update
print "The last Id of the inserted row is %d" % lid
threading.Timer(300, getincidents).start()
getincidents()
I always end up with the error message "Incorrect Number of Bindings Supplied" -
it claims that my statement uses 9 bindings while 10 are supplied.
I've tried to narrow down the cause of this, but have had no success.
Answer: As Ned Batchelder recently put it, "First Rule of Debugging: When in Doubt,
Print More Out." After you append `added` to `incident`, `incident` itself has
9 items in it:
print(incident)
# [u'NORFINCH DR, NY', u'FINCH AVE W / HEPC', u'2012-12-09 17:32:57', u'F12118758', u'Medical - Other', u'0', u'142', u'\r\nP142, \r\n\r\n', '2012-12-09 17:46']
So it looks like all you really need to do is use `incident` as the second
argument to `cursor.execute`. Or, if you want to get rid of some of that
whitespace around items like `u'\r\nP142, \r\n\r\n'`, you could use
to_db = [i.strip() for i in incident]
* * *
for incident in incidents[1:num_inci]:
incident.append(added)
to_db = [i.strip() for i in incident]
import ipdb; ipdb.set_trace()
cursor.execute(
"""INSERT INTO incidents
(prime_street, cross_street, dispatch_time, incident_number,
incident_type, alarm_level, area, dispatched_units, date_added)
VALUES (?,?,?,?,?,?,?,?,?)""", to_db)
lid = cursor.lastrowid
|
Pickling Django QuerySet
Question:
from myapp.models import MyModel
from cPickle import *
tmp = MyModel.objects.all()[:1]
    print(loads(dumps(tmp, -1)) == tmp)
#Output is "False"
In my case the pickled query result differs from the unpickled one. I already read here:
<https://docs.djangoproject.com/en/dev/ref/models/querysets/#pickling-
querysets> that such operations are actually allowed. So - what am I doing
wrong?
upd #1: Tried cPickle and regular Pickle - got 'False' from both
upd #2: Possible resolution - converting QuerySet to Python list with
`list()`. Found it while reading these:
<https://docs.djangoproject.com/en/dev/ref/models/querysets/#when-querysets-
are-evaluated>
Answer: The problem is that you are trying to compare two querysets, and querysets
don't have a `__cmp__` method defined.
So, you can compare a queryset with itself and you will get this:
>> tmp == tmp
True
This is because, as there are no `__cmp__` method, `==` evaluates `True` if
both objects have the same identity (the same memory address). You can read it
from [here](http://docs.python.org/2/reference/datamodel.html#object.__cmp__)
So, when you do this:
>> loads(dumps(tmp, -1)) == tmp
False
you will get `False` as a result because the objects have different memory
addresses. If you convert the querysets into a "comparable" object, you can get the
behaviour you want. Try this:
>> set(loads(dumps(tmp, -1))) == set(tmp)
True
Hope it helps!
|
UnicodeEncodeError Python when creating .csv file
Question: I am trying to create a .csv file with data that I have stored in a list
from the Twitter search API. I have saved the last 100 tweets containing a keyword that
I chose (in this case 'reddit') and I am trying to save each tweet into a cell
in a .csv file. My code is below and it returns the following error:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position
0: ordinal not in range(128)
If anyone knows what I can do to fix this it would be greatly appreciated!
import sys
import os
import urllib
import urllib2
import json
from pprint import pprint
import csv
import sentiment_analyzer
import codecs
class Twitter:
def __init__(self):
self.api_url = {}
self.api_url['search'] = 'http://search.twitter.com/search.json?'
def search(self, params):
url = self.make_url(params, apitype='search')
data = json.loads(urllib2.urlopen(url).read().decode('utf-8').encode('ascii', 'ignore'))
txt = []
for obj in data['results']:
txt.append(obj['text'])
return '\n'.join(txt)
def make_url(self, params, apitype='search'):
baseurl = self.api_url[apitype]
return baseurl + urllib.urlencode(params)
if __name__ == '__main__':
try:
query = sys.argv[1]
except IndexError:
query = 'reddit'
t = Twitter()
s = sentiment_analyzer.SentimentAnalyzer()
params = {'q': query, 'result_type': 'recent', 'rpp': 100}
urlName = t.make_url(params)
print urlName
txt = t.search(params)
print s.analyze_text(txt)
myfile = open('reddit.csv', 'wb')
wr = csv.writer(myfile, quoting=csv.QUOTE_MINIMAL)
wr.writerow(txt)
Answer: Read the documentaton on the
[`csv`](http://docs.python.org/2/library/csv.html?highlight=csv#csv) module:
> **Note**
>
> This version of the csv module doesn’t support Unicode input. Also, there
> are currently some issues regarding ASCII NUL characters. Accordingly, all
> input should be UTF-8 or printable ASCII to be safe; see the examples in
> section
> [Examples](http://docs.python.org/2/library/csv.html?highlight=csv#csv-
> examples).
That said, you can probably parse the `.csv` file yourself without too much
difficulty using Python's built-in Unicode string support -- there's also this
[answer](http://stackoverflow.com/questions/5020446/python-a-resilient-
actually-working-csv-implementation-for-non-ascii).
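One common workaround with the Python 2 `csv` module is to encode each value to UTF-8 right before writing it. A sketch, assuming the tweets are kept as a list of unicode strings rather than joined into one big string as in the question:
    tweets = [obj['text'] for obj in data['results']]      # unicode strings
    encoded = [tweet.encode('utf-8') for tweet in tweets]  # byte strings the csv module accepts
    wr.writerow(encoded)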
|
The truth value of an array with more than one element is ambiguous error? python
Question:
from numpy import *
from pylab import *
from math import *
def TentMap(a,x):
if x>= 0 and x<0.5:
return 2.*a*x
elif x>=0.5 and x<=1.:
return 2.*a*(1.-x)
# We set a = 0.98, a typical chaotic value
a = 0.98
N = 1.0
xaxis = arange(0.0,N,0.01)
Func = TentMap
subplot(211)
title(str(Func.func_name) + ' at a=%g and its second iterate' %a)
ylabel('X(n+1)') # set y-axis label
plot(xaxis,Func(a,xaxis), 'g', antialiased=True)
subplot(212)
ylabel('X(n+1)') # set y-axis label
xlabel('X(n)') # set x-axis label
plot(xaxis,Func(a,Func(a,xaxis)), 'bo', antialiased=True)
My TentMap function isn't working properly. I keep getting the error "The
truth value of an array with more than one element is ambiguous. Use a.any()
or a.all()" I don't understand how I'm supposed to use those. Basically, the
TentMap function takes a value X and returns a certain value depending on what
X is. So if 0<=x<0.5 then it returns 2*a*x and if 0.5<=x<=1 then it returns
2*a*(1-x).
Answer: If you compare a numpy array with a number, you'll get another array:
>>> from numpy import arange
>>> xaxis = arange(0.0, 0.04, 0.01)
>>> xaxis
array([ 0. , 0.01, 0.02, 0.03])
>>> xaxis <= .02
array([ True, True, True, False], dtype=bool)
The problem starts when you want to `and` this with something else, or use it
in a boolean context:
>>> xaxis <= .02 and True
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
>>> bool(xaxis <= .02)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
And that's what you're trying to do inside your `TentMap` function. Are
you sure you don't need to use `a` where you're using `x`?
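If the goal is to apply the tent map to the whole `xaxis` array at once, one option (a sketch, not the only fix) is to express the branch with `numpy.where` instead of `if`/`elif`:
    import numpy as np
    def TentMap(a, x):
        x = np.asarray(x)
        # 2*a*x on [0, 0.5), 2*a*(1-x) on [0.5, 1]
        return np.where(x < 0.5, 2.0 * a * x, 2.0 * a * (1.0 - x))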
|
How do I plot 3 subplots in the same display window? python
Question:
# Import plotting routines
from pylab import *
# 1D ODE that has a pitchfork bifurcation
# x_dot = r * x - x * x * x
def PitchforkODE(r,x):
return r * x - x * x * x
# 1D Euler
def OneDEuler(r,x,f,dt):
return x + dt * f(r,x)
# Improved 1D Euler
def ImprovedOneDEuler(r,x,f,dt):
xtemp = x + dt * f(r,x)
return x + dt * ( f(r,x) + f(r,xtemp) ) / 2.0
# 4th Order Runge-Kutta Euler Method
def RKOneD(r,x,f,dt):
k1 = dt * f(r,x)
k2 = dt * f(r,x + k1/2.0)
k3 = dt * f(r,x + k2/2.0)
k4 = dt * f(r,x + k3)
return x + ( k1 + 2.0 * k2 + 2.0 * k3 + k4 ) / 6.0
# Integrator function that calls one of the three functions
# Fills up array
def Integrator(x1,x2,x3,x4,t,N,Func,dt):
for n in xrange(0,N):
x1.append( Func(r,x1[n],PitchforkODE,dt) )
x2.append( Func(r,x2[n],PitchforkODE,dt) )
x3.append( Func(r,x3[n],PitchforkODE,dt) )
x4.append( Func(r,x4[n],PitchforkODE,dt) )
t.append( t[n] + dt )
# Simulation parameters
# Integration time step
dt = 0.2
# Control parameter of the pitchfork ODE:
r = 1.0
# Set up arrays of iterates for four different initital conditions
x1 = [ 0.1]
x2 = [-0.1]
x3 = [ 2.1]
x4 = [-2.1]
x5 = [ 0.1]
x6 = [-0.1]
x7 = [ 2.1]
x8 = [-2.1]
x9 = [ 0.1]
x10 = [-0.1]
x11 = [ 2.1]
x12 = [-2.1]
# Time
t = [ 0.0]
# The number of time steps to integrate over
N = 50
#The different functions
a = OneDEuler
b = ImprovedOneDEuler
c = RKOneD
# Setup the plot
subplot(3,1,1)
Func = a
Integrator(x1,x2,x3,x4,t,N,Func,dt)
ylabel('x(t)') # set y-axis label
title(str(Func.func_name) + ': Pitchfork ODE at r= ' + str(r)) # set plot title
axis([0.0,dt*(N+1),-2.0,2.0])
# Plot the time series
plot(t,x1,'b')
plot(t,x2,'r')
plot(t,x3,'g')
plot(t,x4,'m')
subplot(212)
Func = b
Integrator(x5,x6,x7,x8,t,N,Func,dt)
ylabel('x(t)') # set y-axis label
title(str(Func.func_name) + ': Pitchfork ODE at r= ' + str(r)) # set plot title
axis([0.0,dt*(N+1),-2.0,2.0])
# Plot the time series
plot(t,x5,'b')
plot(t,x6,'r')
plot(t,x7,'g')
plot(t,x8,'m')
subplot(3,1,3)
Func = c
Integrator(x9,x10,x11,x12,t,N,Func,dt)
xlabel('Time t') # set x-axis label
ylabel('x(t)') # set y-axis label
title(str(Func.func_name) + ': Pitchfork ODE at r= ' + str(r)) # set plot title
axis([0.0,dt*(N+1),-2.0,2.0])
# Plot the time series
plot(t,x9,'b')
plot(t,x10,'r')
plot(t,x11,'g')
plot(t,x12,'m')
I'm trying to plot 3 different subplots on the same display window. One on top
of the other. So basically, 3 rows 1 column. Each plot represents a different
function a, b or c. Each plot should have 4 different lines.
Answer: Well... It looks like you are doing the plotting part correctly. The code below
gives you the figure shown further below.
from pylab import *
subplot(3,1,1)
plot(arange(33))
subplot(3,1,2)
plot(arange(44))
subplot(3,1,3)
plot(arange(55),'r')

Your code does have some problems though; the first thing I found was that
your `t` and `x` vectors aren't the same size.
|
bdate_range AttributeError
Question: The following lines give AttributeError: `'module' object has no attribute
'bdate_range'`.
I think this might have something to do with a circular reference, but I
don't know where.
import pandas as pd
times = pd.bdate_range(start=pd.datetime(2012,11,14,0,0,0),
end=pd.datetime(2012,11,17,0,0,0),
freq='10T')
This is the traceback:
AttributeError Traceback (most recent call last)
<ipython-input-3-1eb62db1246d> in <module>()
4
5
----> 6 times = pd.bdate_range(start=pd.datetime(2012,11,14,0,0,0),end=pd.datetime(2012,11,17,0,0,0),
freq='10T')
7 filtered_times = [x for x in times if x.time() >= time(9,30) and x.time() <= time(16,20)]
8 prices = randn(len(filtered_times))
AttributeError: 'module' object has no attribute 'bdate_range'
Answer: The `bdate_range` function was [**introduced in pandas version
0.8.0**](http://pandas.pydata.org/pandas-
docs/version/0.8.0/whatsnew.html?highlight=bdate_range). _So this ought to
work fine if you upgrade to pandas >= 0.8.0 (and I would recommend using the
latest stable release)._
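A quick way to check which version is currently installed (a sketch):
    import pandas as pd
    print pd.__version__  # bdate_range needs pandas >= 0.8.0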
_Note: The[pandas website](http://pandas.pydata.org/) allows you to search the
docs by version number (select your versions's docs on the right-hand side of
the [main page](http://pandas.pydata.org/)). In version 0.7.3 there are [**no
search results** for `bdate_range`](http://pandas.pydata.org/pandas-
docs/version/0.7.3/search.html?q=bdate_range&check_keywords=yes&area=default)._
### For the latest features and bug-fixes, keep your favourite Data Analysis
library up-to-date!
|
Is there a way to write a Python script that creates and executes code?
Question: Is there a way in Python to create Python code inside the Python script and
then execute/test it?
My function has the following type of form (as an example)
def f(n):
if n<=3: return [0, 0, 6, 12][n]
return 2*f(n-1) - 4*f(n-2) - 5*f(n-3) + 15*f(n-4)
But I want to be able to create these kinds of functions dynamically (or any
arbitrary function for that matter) and then test their outputs during runtime
(as opposed to copying/pasting this function into the script and then manually
testing it).
Not sure if this makes sense, please ask for elaboration if needed. I've
already looked into eval and exec but couldn't get them to work with entire
function definitions, just basic statements like 1+2, etc.
Answer: There are a number of ways to do this kind of thing.
If the function can be described without "stepping outside the language", you
can just define a local function and return it, as in Blender's answer. This
is usually what you want when you think you need to define new functions
(borrowing Blender's example):
def make_func(a, b):
def f(n):
return n**a + b
return f
Sometimes, you can do even better, and represent the functions as data. For
example, how do you create an arbitrary polynomial function? Well, you don't
have to; you can have a _generic_ polynomial function that takes a list of
coefficients and a value and evaluates it; then all you need to do is create
coefficient lists.
In fact, I think this is the one you want here. As you say:
> It may be return 2*f(n-1) - 4*f(n-2) - 5*f(n-3) + 15*f(n-4) one minute, or
> return f(n-1) + 3*f(n-2) another, or f(n-1)+f(n-2)+f(n-3)+f(n-4)+5*f(n-5)
> depending on what I need it to be.
This can definitely be represented as a list of coefficients:
def make_recursive_func(coefficients, baseval):
def f(n):
if n < len(coefficients): return baseval[n]
return sum(coefficient * f(n-i-1) for i, coefficient in enumerate(coefficients))
return f
But it's probably even simpler to write a single
`eval_recursive_func(coefficients, baseval)`, if all you're ever going to do
with the returned function is call it immediately and then forget it.
Sometimes—rarely, but not never—you really do need to execute code on the fly.
As Himanshu says, `eval` and `exec` and friends are the way to do this. For
example:
newcode = '''
def f(n):
if n<=3: return [0, 0, 6, 12][n]
return 2*f(n-1) - 4*f(n-2) - 5*f(n-3) + 15*f(n-4)
'''
exec(newcode)
Now the `f` function has been defined, exactly as if you'd just done this:
def f(n):
if n<=3: return [0, 0, 6, 12][n]
return 2*f(n-1) - 4*f(n-2) - 5*f(n-3) + 15*f(n-4)
It's a bit different in Py3 than Py2, and there are variations depending on
what context you want things executed in, or whether you just want it executed
or evaluated or compiled or treated like an import, etc. But this is the basic
idea.
If you can't think of why you'd want to write the first instead of the second,
then you don't need this.
And if you can't figure out how to generate the right string on the fly, you
shouldn't be doing this.
And, as Ignacio Vazquez-Abrams points out, if these functions can be built out
of user input, you need to do something to validate that they're safe, usually
by compiling iteratively and walking the AST.
Finally, even more rarely, you need to use the `new` module (and/or `inspect`)
to create a new function object on the fly out of bits of other function
objects (or even out of hand-crafted bytecode). But if you need to know how to
do that, you probably already know how.
|
Plotting a differentiated graph from nested lists in Python
Question: I have made a script which produces a graph from data in a nested list
(temperature and changes).
#!/usr/bin/python
import matplotlib.pyplot as plt
temperature = [['65', '65.5', '66', '66.5', '67', '67.5', '68', '68.5', '69', '69.5', '70', '70.5', '71', '71.5', '72', '72.5', '73', '73.5', '74', '74.5', '75', '75.5', '76', '76.5', '77', '77.5', '78', '78.5', '79', '79.5', '80', '80.5', '81', '81.5', '82', '82.5', '83', '83.5', '84', '84.5', '85', '85.5', '86', '86.5', '87', '87.5', '88', '88.5', '89', '89.5', '90', '90.5', '91', '91.5', '92', '92.5', '93', '93.5', '94', '94.5', '95'], ['65', '65.5', '66', '66.5', '67', '67.5', '68', '68.5', '69', '69.5', '70', '70.5', '71', '71.5', '72', '72.5', '73', '73.5', '74', '74.5', '75', '75.5', '76', '76.5', '77', '77.5', '78', '78.5', '79', '79.5', '80', '80.5', '81', '81.5', '82', '82.5', '83', '83.5', '84', '84.5', '85', '85.5', '86', '86.5', '87', '87.5', '88', '88.5', '89', '89.5', '90', '90.5', '91', '91.5', '92', '92.5', '93', '93.5', '94', '94.5', '95'], ['65', '65.5', '66', '66.5', '67', '67.5', '68', '68.5', '69', '69.5', '70', '70.5', '71', '71.5', '72', '72.5', '73', '73.5', '74', '74.5', '75', '75.5', '76', '76.5', '77', '77.5', '78', '78.5', '79', '79.5', '80', '80.5', '81', '81.5', '82', '82.5', '83', '83.5', '84', '84.5', '85', '85.5', '86', '86.5', '87', '87.5', '88', '88.5', '89', '89.5', '90', '90.5', '91', '91.5', '92', '92.5', '93', '93.5', '94', '94.5', '95']]
changes = [['94.566', '94.210', '93.836', '93.443', '93.030', '92.597', '92.145', '91.673', '91.181', '90.669', '90.137', '89.585', '89.011', '88.409', '87.760', '87.019', '86.063', '84.577', '81.806', '76.130', '65.071', '47.659', '28.454', '14.158', '6.305', '2.678', '1.128', '0.480', '0.210', '0.095', '0.045', '0.022', '0.012', '0.006', '0.004', '0.002', '0.002', '0.001', '0.001', '0.001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'], ['94.566', '94.210', '93.836', '93.443', '93.030', '92.597', '92.145', '91.673', '91.181', '90.669', '90.138', '89.588', '89.016', '88.420', '87.788', '87.088', '86.239', '85.028', '82.929', '78.744', '70.282', '55.446', '36.209', '19.361', '8.976', '3.874', '1.634', '0.691', '0.298', '0.132', '0.060', '0.029', '0.015', '0.008', '0.004', '0.003', '0.002', '0.001', '0.001', '0.001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'], ['94.566', '94.210', '93.836', '93.443', '93.030', '92.597', '92.145', '91.673', '91.181', '90.669', '90.138', '89.588', '89.016', '88.421', '87.790', '87.093', '86.255', '85.072', '83.059', '79.131', '71.434', '58.441', '41.784', '25.977', '14.170', '6.919', '3.146', '1.386', '0.608', '0.270', '0.122', '0.057', '0.028', '0.014', '0.007', '0.004', '0.003', '0.001', '0.001', '0.001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000']]
products=[]
for t,c in zip(temperature,changes):
products.append(t)
products.append(c)
plt.plot(*products)
plt.xlabel('temperature')
plt.ylabel('changes')
plt.show()
And the graph looks like:

Just wondering how to go about making a differentiated version of the graph as
well (e.g. rate of change over temperature)?
The examples for plotting derivatives I have tried working with all deal with
two or three numbers, and nothing this big and nested, hence I had a hard time
trying to apply them to my dataset. (The data generated and fed into this
script is always in the form of a nested list).
Answer: You can easily approximate derivatives by calculating finite differences.
While you can do this with nested lists, you are much better off using numpy
arrays. See below:
import matplotlib.pyplot as plt
import numpy as np
temperature = [['65', '65.5', '66', '66.5', '67', '67.5', '68', '68.5', '69', '69.5', '70', '70.5', '71', '71.5', '72', '72.5', '73', '73.5', '74', '74.5', '75', '75.5', '76', '76.5', '77', '77.5', '78', '78.5', '79', '79.5', '80', '80.5', '81', '81.5', '82', '82.5', '83', '83.5', '84', '84.5', '85', '85.5', '86', '86.5', '87', '87.5', '88', '88.5', '89', '89.5', '90', '90.5', '91', '91.5', '92', '92.5', '93', '93.5', '94', '94.5', '95'], ['65', '65.5', '66', '66.5', '67', '67.5', '68', '68.5', '69', '69.5', '70', '70.5', '71', '71.5', '72', '72.5', '73', '73.5', '74', '74.5', '75', '75.5', '76', '76.5', '77', '77.5', '78', '78.5', '79', '79.5', '80', '80.5', '81', '81.5', '82', '82.5', '83', '83.5', '84', '84.5', '85', '85.5', '86', '86.5', '87', '87.5', '88', '88.5', '89', '89.5', '90', '90.5', '91', '91.5', '92', '92.5', '93', '93.5', '94', '94.5', '95'], ['65', '65.5', '66', '66.5', '67', '67.5', '68', '68.5', '69', '69.5', '70', '70.5', '71', '71.5', '72', '72.5', '73', '73.5', '74', '74.5', '75', '75.5', '76', '76.5', '77', '77.5', '78', '78.5', '79', '79.5', '80', '80.5', '81', '81.5', '82', '82.5', '83', '83.5', '84', '84.5', '85', '85.5', '86', '86.5', '87', '87.5', '88', '88.5', '89', '89.5', '90', '90.5', '91', '91.5', '92', '92.5', '93', '93.5', '94', '94.5', '95']]
changes = [['94.566', '94.210', '93.836', '93.443', '93.030', '92.597', '92.145', '91.673', '91.181', '90.669', '90.137', '89.585', '89.011', '88.409', '87.760', '87.019', '86.063', '84.577', '81.806', '76.130', '65.071', '47.659', '28.454', '14.158', '6.305', '2.678', '1.128', '0.480', '0.210', '0.095', '0.045', '0.022', '0.012', '0.006', '0.004', '0.002', '0.002', '0.001', '0.001', '0.001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'], ['94.566', '94.210', '93.836', '93.443', '93.030', '92.597', '92.145', '91.673', '91.181', '90.669', '90.138', '89.588', '89.016', '88.420', '87.788', '87.088', '86.239', '85.028', '82.929', '78.744', '70.282', '55.446', '36.209', '19.361', '8.976', '3.874', '1.634', '0.691', '0.298', '0.132', '0.060', '0.029', '0.015', '0.008', '0.004', '0.003', '0.002', '0.001', '0.001', '0.001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000'], ['94.566', '94.210', '93.836', '93.443', '93.030', '92.597', '92.145', '91.673', '91.181', '90.669', '90.138', '89.588', '89.016', '88.421', '87.790', '87.093', '86.255', '85.072', '83.059', '79.131', '71.434', '58.441', '41.784', '25.977', '14.170', '6.919', '3.146', '1.386', '0.608', '0.270', '0.122', '0.057', '0.028', '0.014', '0.007', '0.004', '0.003', '0.001', '0.001', '0.001', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000', '0.000']]
temperature = np.array(temperature, dtype=np.float32).transpose()
changes = np.array(changes, dtype=np.float32).transpose()
plt.figure()
plt.plot(temperature, changes)
plt.xlabel('temperature')
plt.ylabel('changes')
plt.show()
delta_t = temperature[1:]-temperature[:-1]
t_av = 0.5*(temperature[1:]+temperature[:-1])
    dc_dt = (changes[1:]-changes[:-1]) / delta_t
plt.figure()
plt.plot(t_av, dc_dt)
plt.xlabel('temperature')
plt.ylabel('dc/dt')
plt.show()
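As a side note, numpy also ships a ready-made finite-difference helper; a sketch that applies it column by column to the arrays built above (0.5 is the temperature spacing in this data):
    dc_dt_cols = [np.gradient(changes[:, j], 0.5) for j in range(changes.shape[1])]
    dc_dt = np.column_stack(dc_dt_cols)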
|
about python, I can't understand the example provided in a book
Question: I'm new to Python and I ran into some problems when I tried to
reproduce an example provided in a guide book. The example is about a
recommendation algorithm. It tries to build an item list which stores the
users who have rated each particular item. This is the code (Python 2.7):
def UserSimilarity(train):
#build inverse table for item_users
item_users=dict()
for u,items in train.items():
for i in items.keys():
if i not in item_users:
item_users[i]=set()
item_users[i].add(u)
#calculate co-rated items between users
C=dict()
N=dict()
for i, users in item_users.items():
print i,users
#print N[u]
for u in users:
N[u]=N[u]+1
print N[u]
for v in users:
print C[u][v]
if u==v:
continue
C[u][v]=C[u][v]+1
#calculate finial similarity matrix W
W=dict()
for u, related_users in C.items():
for v, cuv in related_users.items():
W[u][v]=cuv/math.sqrt(N[u]*N[v])
return W
PS: the data format of 'train' is a dictionary like
`{UserId1: {ItemId1: Rating1, ItemId2: Rating2, ...}, ...}`
The problem I ran into is:
Traceback (most recent call last):
File "D:\Users\Administrator\workspace\GroupLens\src\test3.py", line 82, in <module>
UserSimilarity(train_dic)
File "D:\Users\Administrator\workspace\GroupLens\src\test3.py", line 66, in UserSimilarity
N[u]=N[u]+1
KeyError: '3'
I don't know how to fix it and hope someone can help me! Thanks a lot!!
Answer: The main issue is that you are defining a new dictionary (`N = dict()`), and
then iterating through your `users`, trying to create a dictionary key based
on a given user. That part is fine, but the issue arises when you do this:
N[u]=N[u]+1
Assigning a value to the dictionary is fine, but look at the right side - you
are trying to assign to `N[u]` the value of `N[u] + 1`, when `N[u]` doesn't
exist yet (hence the error). I'm not 100% sure what the overall goal is (so
this may be misguided), but if your aim is to increment a number based on how
many times a user occurs, you could use a `defaultdict`, which is created with
a type as an argument (here an `int`). This means that if the key is not found
(as in your error above), the default value is based on the type you declared
(here `0`):
In [1]: from collections import defaultdict
In [2]: N = defaultdict(int)
In [3]: users = [1, 2, 3, 2, 1, 2]
In [4]: for u in users:
...: N[u] += 1
...:
...:
In [5]: N
Out[5]: defaultdict(<type 'int'>, {1: 2, 2: 3, 3: 1})
Alternatively, you could use a normal dictionary but with the `get` method,
which returns a value if it is found but returns a default if not (a default
that you can specify yourself):
In [1]: N = dict()
In [2]: users = [1, 2, 3, 2, 1, 2]
In [3]: for u in users:
...: N[u] = N.get(u, 0) + 1
...:
...:
In [4]: N
Out[4]: {1: 2, 2: 3, 3: 1}
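Applied back to the code in the question, both `N` and the nested `C` could be defaultdicts so that neither lookup raises a `KeyError`; a sketch of just the counting part:
    from collections import defaultdict
    N = defaultdict(int)
    C = defaultdict(lambda: defaultdict(int))
    for i, users in item_users.items():
        for u in users:
            N[u] += 1
            for v in users:
                if u == v:
                    continue
                C[u][v] += 1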
|
Scale imread matrix in python
Question: I am looking for a way to rescale the matrix given by reading in a png file
using the matplotlib routine imread, e.g.
from pylab import imread, imshow, gray, mean
from matplotlib.pyplot import show
a = imread('spiral.png')
#generates a RGB image, so do
show()
but actually I want to manually specify the dimensions of `a`, say 200x200
entries, so I need some magic command (which I assume exists but which I
cannot find myself) to interpolate the matrix.
Thanks for any useful comments : )
Cheers
Answer: You could try using the PIL (Image) module instead, together with numpy. Open
and resize the image using Image then convert to array using numpy. Then
display the image using pylab.
import pylab as pl
import numpy as np
import Image
path = r'\path\to\image\file.jpg'
img = Image.open(path)
    img = img.resize((200,200))
a = np.asarray(img)
pl.imshow(a)
pl.show()
Hope this helps. :)
|
What does "input() already active" mean in python fileinput module?
Question: I'm trying to read the 2nd line in a text.txt file:
import fileinput
x = 0
for line in fileinput.input([os.path.expandvars("$MYPATH/text.txt")]):
if x < 3:
x += 1
if x == 2:
mydate = line
fileinput.close()
print "mydate : ", mydate
But I get an error:
Traceback (most recent call last):
File "/tmp/tmpT8RvF_.py", line 4, in <module>
for line in fileinput.input([os.path.expandvars("$MYPATH/text.txt")]):
File "/usr/lib64/python2.6/fileinput.py", line 102, in input
raise RuntimeError, "input() already active"
RuntimeError: input() already active
What is wrong above?
Answer: To get the second line from the `fileinput.input()` iterable, just call
`.next()` twice:
finput = fileinput.input([os.path.expandvars("$MYPATH/text.txt")])
finput.next() # skip first line
mydate = finput.next() # store second line.
You can also use the [`itertools.islice()`
function](http://docs.python.org/2/library/itertools.html#itertools.islice) to
select just the second line:
import itertools
finput = fileinput.input([os.path.expandvars("$MYPATH/text.txt")])
    mydate = itertools.islice(finput, 1, 2).next() # store second line.
Both methods ensure that no more than two lines are ever read from the input.
The `.input()` function returns a global singleton object, that the other
functions operate on. You can only run _one_ `fileinput.input()` instance _at
a time_. Make sure you called `fileinput.close()` before you open a new
`input()` object.
You should use the `fileinput.FileInput()` class instead to create multiple
instances.
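If `fileinput` itself isn't a requirement, plain file iteration avoids the global-singleton issue entirely; a sketch:
    import itertools
    import os
    with open(os.path.expandvars("$MYPATH/text.txt")) as f:
        mydate = next(itertools.islice(f, 1, 2))  # second line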
|
text process in python
Question: I have data like this:
value1
something text
something text
And I want to replace the something text with the value. Example:
value1
value1
value1
Answer: One way:
import sys
file = open('file','r')
for line in file:
if line.startswith('value'):
pattern=line
sys.stdout.write(pattern)
Save the script to `script.py` and run it with `python script.py`
_(where `script` is something descriptive)_.
value1
value1
value1
value2
value2
value3
value3
value3
value3
value3
And redirect the output: `python script.py > new_file`
|