Splitting Code in different Files - Python / App Engine
Question: I have multiple classes and everything is in a single main.py file which is
getting really messy...
I'm new to Python; I've taken a couple of courses now and I'm getting
quite good, but none of the lessons covered how to do this...
The main problem I have is that I don't understand how splitting code works in
Python, or what I have to re-import in every file... So rather than an answer
I would love some documentation or preferably a video tutorial on this.
I'll give you some sample code:
class MainHandler(webapp2.RequestHandler, usersHandler, postsHandler):
class usersHandler(DdHandler):
class DdHandler():
I want to have each class in a separate file kinda like Java and C#
Answer: You are looking for how to divide your code into modules:
<http://docs.python.org/2/tutorial/modules.html>
You might not always want one file for each class, but rather a logical
grouping of classes. In your case you might have a file named db.py where you
put usersHandler and DbHandler
In your main file you would then do something like
import db
db_handler = db.DbHandler()
Also keep in mind that class names should start with an upper-case letter:
UsersHandler instead of usersHandler. See more conventions at
<http://www.python.org/dev/peps/pep-0008/>
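A minimal sketch of the layout described above (class names are borrowed from the question and answer; the grouping is just one reasonable choice):

```python
# db.py -- one file holding a logical grouping of related classes
class DbHandler(object):
    """Base handler for datastore access."""
    def get(self, key):
        return "value for %s" % key

class UsersHandler(DbHandler):
    """User-specific handler, inheriting the base behaviour."""

# main.py -- refer to the classes through the imported module:
#     import db
#     handler = db.UsersHandler()
#     print(handler.get("alice"))
```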
|
How do I get rid of spaces in the 'message' when sending SMS via Kannel
Question: I've setup Kannel in Ubuntu using a USB Modem and I can send SMS via the
browser using the URL as seen below
localhost:13013/cgi-bin/sendsms?username=kannel&password=kannel&to=+254781923855&text='Kid got swag'
In python, I have the following script which works only if the message to be
sent does not have spaces.
import urllib.request
def send_sms(mobile_no, message):
url="http://%s:%d/cgi-bin/sendsms?username=%s&password=%s&to=%s&text=%s" \
% ('localhost', 13013, 'kannel', 'kannel', str(mobile_no), message)
f = urllib.request.urlopen(url)
print("sms sent")
If I call the function with **NO** spaces in the message, it works and the
message is sent.
sms.send_sms('+254781923855', 'kid_got_swag')
If I have spaces in the message, it fails with the error below:
sms.send_sms('+254781923855', 'kid got swag')
Traceback (most recent call last):
File "/home/lukik/workspace/projx/src/short_message.py", line 24, in <module>
sms.send_sms('+254781923855', 'kid got swag')
File "/home/lukik/workspace/projx/src/short_message.py", line 18, in send_sms
f = urllib.request.urlopen(url)
File "/usr/lib/python3.2/urllib/request.py", line 139, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.2/urllib/request.py", line 376, in open
response = meth(req, response)
File "/usr/lib/python3.2/urllib/request.py", line 488, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python3.2/urllib/request.py", line 414, in error
return self._call_chain(*args)
File "/usr/lib/python3.2/urllib/request.py", line 348, in _call_chain
result = func(*args)
File "/usr/lib/python3.2/urllib/request.py", line 496, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
I've tried other variants of calling urllib but they all fail because of the
spaces in the message....
Answer: In your request you send via browser, the message is inside quotes -
&text='Kid got swag'
Try that in your request -
url="http://%s:%d/cgi-bin/sendsms?username=%s&password=%s&to=%s&text='%s'" \
% ('localhost', 13013, 'kannel', 'kannel', str(mobile_no), message)
Notice the single quotes at `&text='%s'`.
PS: I'd recommend using
[requests](http://docs.python-requests.org/en/latest/user/quickstart/#make-a-request)
for requests like this. You could construct your urls better that way, like this -
>>> payload = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.get("http://httpbin.org/get", params=payload)
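The quoting trick may work for Kannel, but the general fix for spaces (and for the `+` in the phone number) is percent-encoding the query string. A sketch with the standard library; `urllib.parse` is Python 3, which matches the question's `urllib.request` import:

```python
from urllib.parse import urlencode

params = {
    'username': 'kannel',
    'password': 'kannel',
    'to': '+254781923855',
    'text': 'kid got swag',   # spaces are fine: urlencode escapes them
}
url = "http://localhost:13013/cgi-bin/sendsms?" + urlencode(params)
print(url)
```

`urlencode` turns spaces into `+` and the leading `+` of the phone number into `%2B`, so the URL is always well-formed.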
|
Using signals concept to do events
Question: This is what I am doing to send mail once a record is updated in the
database. I have defined the receivers in a separate file called listeners.py
to receive the signals.
signals.py
import django.dispatch
send_email_to = django.dispatch.Signal()
listeners.py
@receiver(send_mail_to)
def send_update(sender, instance, created, **kwargs):
if instance.author_name:
message = "Book details has been updated"
subject = "Book updates"
send_mail(subject, message, settings.DEFAULT_FROM_EMAIL,[instance.email,])
post_save.connect(send_update, sender=Book)
views.py
def addbook(request):
form = BookForm
if request.POST:
form = BookForm(request.POST)
if form.is_valid():
cd = form.cleaned_data
form.save()
post_save.connect(send_update, sender=Book)
return redirect('/index/')
return render_to_response('addbook.html',{ 'form':form },context_instance=RequestContext(request))
I am getting an error message as shown below.
NameError at /addbook/
global name 'send_update' is not defined
Request Method: POST
Request URL: http://localhost:8000/addbook/
Django Version: 1.4.3
Exception Type: NameError
Exception Value:
global name 'send_update' is not defined
Exception Location: /root/Samples/DemoApp/DemoApp/views.py in addbook, line 50
Python Executable: /usr/bin/python
Python Version: 2.7.0
Python Path:
['/root/Samples/DemoApp',
'/usr/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg',
'/usr/lib/python27.zip',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/lib/python2.7/site-packages',
'/usr/lib/python2.7/site-packages/PIL',
'/usr/lib/python2.7/site-packages/gst-0.10',
'/usr/lib/python2.7/site-packages/gtk-2.0',
'/usr/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info',
'/usr/lib/python2.7/site-packages/webkit-1.0']
Server time: Sat, 23 Mar 2013 19:05:01 +0500
Can anyone see what the problem would be?
Thanks
Answer: The error occurs because `send_update` is never defined (or imported) in
your views module; that is why Python complains that the name is not defined.
But why do you put `post_save.connect(send_update, sender=Book)` in your
views at all? You should remove that line. The connection is already made in
listeners.py, so whatever save you make with the Book model will send a signal
to that receiver.
|
Finding minimum value of word from a lot of files in python?
Question: I have 1000 `.txt` files on which I run this code. What I have to do is find
the largest value of ENSG in the files and delete other values of ENSG which
are less than the largest one. Then I have to find the minimum value from the
same text file by looking at utr length and save it in my `output.txt` file. I
want the output of 1000 `.txt` files in 1 `output.txt` file. This code works
approximately fine, but it overwrites the result: it only shows the result of
the last file:
import glob
f2 = glob.glob("./*.txt")
all_text=""
for fpath in f2:
f = open(fpath,"r")
list_lines = f.readlines()
dic={}
sent="ENSG"
temp_list=[]
for line in list_lines:
all_text=all_text+line
name= line.rsplit()[0].strip()
score=line.rsplit()[1].strip()
dic[name]=score
for i in dic.keys():
if sent in i:
temp_list.append(dic[i])
hiegh_score=max(temp_list)
def check(index):
reverse_text=all_text[index+1::-1]
index2=reverse_text.find("\n")
if sent==reverse_text[:index2+1][::-1][1:len(sent)+1]:
return False
else:
return True
list_to_min=dic.values()
for i in temp_list:
if i!=hiegh_score:
index=all_text.find(str(i))
while check(index):
index=all_text.find(str(i),index+len(str(i)))
all_text=all_text[0:index]+all_text[index+len(str(i)):]
list_to_min.remove(str(i))
file2=open("my_try4.txt","w")
file2.write(all_text)
min_score= min(list_to_min)
for j in dic.keys():
if min_score==dic[j]:
k="min score is :"+str(min_score)+" for person "+j
file2.write(k)
print "%6d : %s" % (len(list_lines),fpath)
file2.close()
f.close()
I have text files like this `4.txt`:
ENSBTAG00000020679 197
ENSCAFG00000009872 2585
ENSG00000018236 1935
ENSG00000018236 230
ENSG00000018236 257
ENSG00000018236 338
ENSG00000018236 922
ENSG00000018236 922
ENSRNOG00000004438 14
ENSRNOG00000004438 14
Now it should select the ENSG with 1935 and delete all other ENSG values. The
text file should then look like this:
ENSBTAG00000020679 197
ENSCAFG00000009872 2585
ENSG00000018236 1935
ENSRNOG00000004438 14
ENSRNOG00000004438 14
And now, by looking at this text file, we find the smallest value and save it
in a text file (we do this on 1000 files and the output should be in one file).
output.txt
textfile4 14
Answer: It was easier to rewrite this than to figure out what was wrong with your
code:
import os.path
import glob
import re
import itertools
from collections import namedtuple, deque
from operator import attrgetter
R_PREFIX_VALUE = re.compile(r'^(?P<prefix>[A-Z]+)(?P<suffix>\d+)\s+(?P<value>\d+)\s*$')
getvalue = attrgetter('value')
def interleave(seq, val):
return itertools.chain.from_iterable(itertools.izip(seq, itertools.repeat(val)))
class Fileline(namedtuple('Fileline', 'filename prefix suffix value')):
@classmethod
def _fromstr(cls, s, filename=None, rematch=R_PREFIX_VALUE.match):
m = rematch(s)
if not m:
raise ValueError('No valid line found in %r' % s)
d = m.groupdict()
d['value'] = int(d['value'])
d['filename'] = filename
return cls(**d)
def _asstr(self):
return '{}{} {}'.format(self.prefix, self.suffix, self.value)
def max_value_with_prefix(lineseq, prefix, getvalue=getvalue):
withprefix = (line for line in lineseq if line.prefix==prefix)
return max_value(withprefix)
def filter_lt_line(lineseq, maxline):
for line in lineseq:
if line.prefix != maxline.prefix or line.value >= maxline.value:
yield line
def extreme_value(fn, lineseq, getvalue=getvalue):
try:
return fn((l for l in lineseq if l is not None), key=getvalue)
except ValueError:
return None
def max_value(lineseq):
return extreme_value(max, lineseq)
def min_value(lineseq):
return extreme_value(min, lineseq)
def read_lines(fn, maker=Fileline._fromstr):
with open(fn, 'rb') as f:
return deque(maker(l, fn) for l in f)
def write_file(fn, lineseq):
lines = (l._asstr() for l in lineseq)
newlines = interleave(lines, '\n')
with open(fn, 'wb') as f:
f.writelines(newlines)
def write_output_file(fn, lineseq):
lines = ("{} {}".format(l.filename, l.value) for l in lineseq)
newlines = interleave(lines, "\n")
with open(fn, 'wb') as f:
f.writelines(newlines)
def filter_max_returning_min(fn, prefix):
lineseq = read_lines(fn)
maxvalue = max_value_with_prefix(lineseq, prefix)
filteredlineseq = deque(filter_lt_line(lineseq, maxvalue))
write_file(fn, filteredlineseq)
minline = min_value(filteredlineseq)
return minline
def main(fileglob, prefix, outputfile):
minlines = []
for fn in glob.iglob(fileglob):
minlines.append(filter_max_returning_min(fn, prefix))
write_output_file(outputfile, minlines)
The entry point is `main()`, which is called like `main('txtdir', 'ENSG',
'output.txt')`. For each file `filter_max_returning_min()` will open and
rewrite the file and return the min value. There's no need to keep a dict or
list of every line of every file you visited.
(BTW, destructively overwriting files seems like a bad idea! Have you
considered copying them elsewhere?)
When you isolate separate concerns into separate functions, it becomes very
easy to recompose them for different execution behavior. For example, it's
trivial to run this task on all files in parallel by adding two small
functions:
def _worker(args):
return filter_max_returning_min(*args)
def multi_main(fileglob, prefix, outputfile, processes):
from multiprocessing import Pool
pool = Pool(processes=processes)
workerargs = ((fn, prefix) for fn in glob.iglob(fileglob))
    minlines = pool.imap_unordered(_worker, workerargs, processes)
    write_output_file(outputfile, minlines)
Now you can start up a configurable number of workers, each of which will work
on one file, and collect their min values when they are done. If you have very
large files or a great number of files and are not IO bound this might be
faster.
Just for fun, you can also easily turn this into a CLI utility:
def _argparse():
import argparse
def positive_int(s):
v = int(s)
if v < 1:
            raise argparse.ArgumentTypeError('{!r} must be a positive integer'.format(s))
return v
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description="""Filter text files and write min value.
Performs these operations on the text files in supplied `filedir`:
1. In each file, identify lines starting with the matching `maxprefix`
which do *not* contain the maximum value for that prefix in that file.
2. DESTRUCTIVELY REWRITE each file with lines found in step 1 removed!
3. Write the minimum value (for all lines in all files) to `outputfile`.
""")
    parser.add_argument('filedir',
        help="Directory containing the text files to process. WILL REWRITE FILES!")
parser.add_argument('maxprefix', nargs="?", default="ENSG",
help="Line prefix which should have values less than max value removed in each file")
    parser.add_argument('outputfile', nargs="?", default="output.txt",
        help="File in which to write the min value found.")
parser.add_argument('-p', '--parallel', metavar="N", nargs="?", type=positive_int, const=10,
help="Process files in parallel, with N workers. Default is to process a file at a time.")
return parser.parse_args()
if __name__ == '__main__':
args = _argparse()
fileglob = os.path.join(args.filedir, '*.txt')
prefix = args.maxprefix
outputfile = args.outputfile
if args.parallel:
multi_main(fileglob, prefix, outputfile, args.parallel)
else:
main(fileglob, prefix, outputfile)
Now you can invoke it from the command line:
$ python ENSG.py txtdir ENSCAFG --parallel=4
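For reference, the core filtering step can be sketched in a few lines on the sample data from the question's `4.txt` (the `is_ensg` helper is an illustrative stand-in for the regex-based prefix match above):

```python
# Sample data from the question's 4.txt
lines = [
    ("ENSBTAG00000020679", 197), ("ENSCAFG00000009872", 2585),
    ("ENSG00000018236", 1935), ("ENSG00000018236", 230),
    ("ENSG00000018236", 257), ("ENSG00000018236", 338),
    ("ENSG00000018236", 922), ("ENSG00000018236", 922),
    ("ENSRNOG00000004438", 14), ("ENSRNOG00000004438", 14),
]

def is_ensg(name):
    # "ENSG" followed by a digit, so ENSBTAG/ENSCAFG/ENSRNOG don't match
    return name.startswith("ENSG") and name[4].isdigit()

# Keep only the max-valued ENSG line; leave every other prefix untouched
max_ensg = max(v for n, v in lines if is_ensg(n))
kept = [(n, v) for n, v in lines if not is_ensg(n) or v == max_ensg]

# Report the minimum remaining value, as in the expected output.txt
print("textfile4", min(v for n, v in kept))
```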
|
IF statement error new variable input python
Question: The problem here is that I just can't get Python to check if Currency1
is in string, and if it's not, print that there is an error; but if Currency1 IS
in string, then move on and ask the user to input Currency2, and then check it
again.
Answer: You were actually trying for:
if type(Currency1) in (float, int):
...
but `isinstance` is better here:
if isinstance(Currency1,(float,int)):
...
or even better, you can use the `numbers.Number` abstract-base class:
import numbers
if isinstance(Currency1,numbers.Number):
* * *
Although ... `Currency1 = str(raw_input(...))` will guarantee that `Currency1`
is a string (not an integer or float). Actually, `raw_input` makes that
guarantee and the extra `str` here is just redundant :-).
If you want a function to check if a string can be converted to a number, then
I think the easiest way would be to just try it and see:
def is_float_or_int(s):
try:
float(s)
return True
except ValueError:
return False
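A quick check of that helper on sample input (re-defined here so the snippet stands alone):

```python
def is_float_or_int(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

print(is_float_or_int("3.14"))  # True
print(is_float_or_int("42"))    # True
print(is_float_or_int("USD"))   # False
```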
|
GUI's in Python
Question: The way I have it right now, I'm working with Easygui, but I don't really like
how Easygui looks on my Windows 7 system. I was wondering if there is any way
to use the actual Windows GUIs (the WinAPI, is it called?).
If this is possible, where can I find a tutorial on how to use it? (E.g. putting
in different buttons than just "Ok", and that kind of thing.)
Thanks!
Answer: For direct access to the Windows API you can use
[pywin32](http://sourceforge.net/projects/pywin32/) (there are 64-bit versions
available; the 32 is just part of the name).
import win32api
win32api.MessageBox (None, "Hello, World!", "Greetings")
|
Python - read in a previously 'list' variable from file
Question: I previously created a `list` and saved it to a file 'mylist.txt'. However,
when I read it in it's a string, meaning I can't access each element as I
like. I have been trying and searching for ways to fix this, but to no avail.
In the text document, the list is one line and looks something like:
[(['000', '001', '002'], ('010', '011', '012')), (['100', '101', '102'], ('110', '111', '112'))]
so that if this list was equal to `mylist`, I could do
>>> print mylist[0]
(['000', '001', '002'], ('010', '011', '012'))
>>> print mylist[0][0]
['000', '001', '002']
>>> print mylist[0][0][2]
002
etc.
The above is useful to me, but reading in the list has the following effect:
>>>myreadlist=open("mylist.txt",'r').read()
>>>myreadlist
"[(['000', '001', '002'], ('010', '011', '012')), (['100', '101', '102'], ('110', '111', '112'))]"
>>>myreadlist[0]
'['
>>>print myreadlist[0]
[
>>>myreadlist[:15]
"[(['000', '001'"
etc. I know the format of `mylist` is bad, but it works for what I want and it
took a very long time to generate it. I've tried just copy-pasting the list to
python like `mylist = <paste>`, but the list is far too long and I get a
memory error.
Is there a way to read the file and use it as a `list` so I can access each
element like normal (i.e. as shown in the first print statements above)?
Thanks very much
Answer: Pass the string to
[`ast.literal_eval`](http://docs.python.org/3.3/library/ast.html#ast.literal_eval).
It will safely parse the string into the appropriate structures:
>>> import ast
>>> with open("file.txt", 'r') as f:
data = ast.literal_eval(f.read())
>>> # You're done!
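A short self-contained demonstration, using a trimmed version of the list from the question:

```python
import ast

text = "[(['000', '001', '002'], ('010', '011', '012'))]"
mylist = ast.literal_eval(text)   # parses Python literals only: safe, unlike eval()
print(mylist[0][0][2])            # 002
```

`ast.literal_eval` only accepts literals (strings, numbers, tuples, lists, dicts, booleans, None), so it cannot execute arbitrary code from the file.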
|
Interact with long running python process
Question: I have a long running python process running headless on a raspberrypi
(controlling a garden) like so:
from time import sleep
def run_garden():
while 1:
/* do work */
sleep(60)
if __name__ == "__main__":
run_garden()
The 60 second sleep period is plenty of time for any changes happening in my
garden (humidity, air temp, turn on pump, turn off fan etc), BUT what if I
want to manually override these things?
Currently, in my `/* do work */` loop, I first call out to another server
where I keep config variables, and I can update those config variables via a
web console, but it lacks any sort of real-time feel, because it relies on the
60 second loop (e.g. you might update the web console, and then wait 45
seconds for the desired effect to take effect).
The raspberryPi running `run_garden()` is dedicated to the garden and is
basically the only thing taking up resources. So I know I have room to do
something, I just don't know what.
Once the loop picks up the fact that a config var has been updated, it
could do exponential backoff to keep checking for interaction, rather
than wait 60 seconds, but that doesn't feel like it is a whole lot better.
Is there a better way to basically jump into this long running process?
Answer: Why not use an event-based loop instead of sleeping for a fixed amount
of time?
That way your loop will only run when a change is detected, and it will always
run when a change is detected (which is the point of your question).
You can do such a thing by using
[python event objects](http://docs.python.org/2/library/threading.html#event-objects).
Just wait for one or all of your event objects to be triggered and run the
loop. You can also wait for X events to be done, etc., depending on whether you
expect one variable to be updated a lot.
Or even a system like [broadcasting
events](http://stackoverflow.com/questions/1092531/event-system-in-python).
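A minimal sketch of that idea with `threading.Event` (timeouts are shortened from the question's 60 seconds so it runs quickly, and the event is set before the thread starts only to make the demo deterministic; in real code the web-console handler would set it at any time):

```python
import threading

override = threading.Event()
results = []

def run_garden(cycles, timeout):
    for _ in range(cycles):
        # Wakes immediately if the event is set, else after `timeout` seconds
        triggered = override.wait(timeout)
        results.append(triggered)
        override.clear()

worker = threading.Thread(target=run_garden, args=(2, 0.2))
override.set()   # simulate a manual override from the web console
worker.start()
worker.join()
print(results)   # [True, False]: first cycle ran early, second timed out
```

The key point is `override.wait(timeout)`: the loop still runs at its regular cadence when nothing happens, but reacts immediately when an override arrives.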
|
How do I use mlabwrap to call a matlab function with cell arguments from python?
Question: Well, I was proud of myself that I got mlabwrap installed properly, but now I
cannot get it to work with matlab cells. In python, lists are analogous to
cells, so I figured I would input a list and mlabwrap would convert it to a
cell. Unfortunately, it does not seem to work that way.
For example, I have a matlab m-file:
function list_test(x)
display(x);
In python, if I type
mlab.list_test([[1,2],[3,4]])
I get:
x =
1 2
3 4
Thus, mlabwrap seems to take my two nested lists and turn them into a 2x2
matrix, which is not what I want.
When I try
mlab.list_test([[1,2],[3,4,5]])
then I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/Ben/.virtualenvs/test/lib/python2.7/site-packages/mlabwrap.py", line 607, in mlab_command
return self._do(name, *args, **update({'nout':nout}, kwargs))
File "/Users/Ben/.virtualenvs/test/lib/python2.7/site-packages/mlabwrap.py", line 534, in _do
mlabraw.put(self._session, argnames[-1], arg)
TypeError: a float is required
Clearly no dice.
If I have to, I imagine I could write some python code to convert lists into
several 1-D arrays, feed the 1-D arrays into matlab using mlabwrap and write
some matlab code to convert those 1-D arrays into cells. But this is messy,
and I would like to know if there is an easier way. Can mlabwrap do this for
me somehow?
Here are the details of my setup. OS: Mountain Lion (OS X 10.8), Python: 2.7,
Matlab: 2010b, mlabwrap: 1.1
Answer: Unfortunately, mlabwrap has limited support for cell arrays; both when passing
cell arrays into matlab, and when receiving cell arrays from matlab.
Here's the answer for your immediate question:
>>> from mlabwrap import mlab as matlab
>>> from numpy import array
>>> a = [[1, 2], [3, 4]]
>>> cell = matlab.mat2cell(array(a), [1, 1], [2])
>>> matlab.display(cell)
PROXY_VAL2__ =
[1x2 double]
[1x2 double]
Note that this really only works with regularly-sized lists. I.e.
[[1,2],[3,4]] works, but [[1,2],[3,4,5]] does not. This is because mlabwrap
doesn't handle dtype=object arrays particularly well, instead requiring
dtype=float arrays.
Let's switch over to matlab for a quick comparison:
>> display(cell)
cell =
[1x2 double] [1x2 double]
Looks good! However, when we switch back to python, and try and actually
access the cell array that we've created:
>>> cell[0][0]
error: Unable to get matrix from MATLAB(TM) workspace
>>> cell[0, 0]
error: Unsupported index type: <type 'tuple'>
>>> type(cell)
mlabwrap.MlabObjectProxy
Unfortunately, mlabwrap doesn't really allow access to the data stored in
MlabObjectProxy objects. There are a few ways to try and get around this. You
could write `cell_insert` and `cell_pop` functions in matlab. These should
enable you to put python variables into an existing cell array, and get
python-readable variables out from the cell array. Alternatively, you could
write the cell array to a .mat file from matlab, and read it into python using
`scipy.io.loadmat()`
Honestly, unless you absolutely need cell arrays for some reason, I would try
and avoid using them through mlabwrap.
|
Setup wizard like script in Python
Question: I'm writing a simple _setup wizard_ like script in Python. Basically it
prompts the user to enter some values and answer some yes/no questions. Based
on the user input the script will then make directories, create and initialize
config files, create symlinks, set permissions and so on.
As the user makes choices different paths are taken and the structure of
directories and existence of symlinks may differ. Many problems may occur at
each step that might need the user to change their input or rollback the whole
thing.
1 -Is this the best approach to write this script? Is this text menu setup
wizard a good idea at all?
2- Is there a module that can help make this simpler so that I don't reinvent
the wheel?
3- Should I actually perform each step as user makes a choice or wait until
the end and do everything at once?
4- What is the best way to remember the already created structure so that I
can write a rollback function?
I don't want any code as an answer; any suggestions, opinions or external
links are appreciated.
Answer: I don't do GUI stuff. You can write one, but let's say you do this entirely on
the command line.
1. I would suggest taking in all user input before making physical side-effects. In other words, don't start creating directories until the user has finished all the options. The Python documentation tool Sphinx is a good example: it asks the user many questions when they launch `quickstart`, but doesn't generate the physical directory and configuration file until the end. This eliminates the tiring need to "remember" partial state across too many branches. Don't do that; do the whole setup at the very end.
2. Depends. If you want to make a simple command line interface, Python has argparse for command line options; the [docopt](http://docopt.org/) library is a popular alternative built around the usage string. But this is only useful if you want command-line options. If your script just needs to be invoked as "python script.py" and then start asking the user questions, I don't know any useful library that handles setup prompts.
Actually I was in the middle of developing one, called `docprompt`, but it
isn't finished: <https://bitbucket.org/yeukhon/docprompt>. Basically it was
supposed to let you write down your setup prompts and then remember them.
The code base is terrible and not very efficient. You can try it, but I won't
finish the feature until summer due to a heavy homework load this semester.
So the answer is no: you have to write the code yourself. Just a lot of
raw_input calls and a lot of variables.
3. Again, wait until the end to make side-effects.
4. Again, wait until the end to make side-effects.
* * *
**edit**
Say you wait until the end to create directories and symlinks, and at one of
the steps an IOError occurs, so you want to undo the whole setup. If all you
are creating are directories, files and symlinks, record them in a dictionary
of lists as you go:
    import os
    import shutil

    def physical_setup():
        memory = {
            'dirs': [],
            'symlinks': [],
            'files': []
        }
        try:
            # start doing physical setup; record each path before creating it
            memory['dirs'].append('/tmp/dir1')
            os.mkdir('/tmp/dir1')
        # catching all exceptions is considered a bad practice, but sometimes be a little badass
        except Exception:
            # roll back: remove files and symlinks first, then the directories
            for path in memory['files']:
                os.remove(path)
            for path in memory['symlinks']:
                os.unlink(path)
            for dir in memory['dirs']:
                shutil.rmtree(dir)
**important**: unlink symlinks and delete files before removing the
directories. If a file belongs to an already-deleted directory, removing it
will raise, and you would have to silently swallow the exception, which means
a lot of extra code. So: unlink and delete files first, then the directories.
|
Tkinter problems with GUI when entering while loop
Question: I have a simple GUI which runs various scripts from another Python file.
Everything works fine until the GUI runs a function containing a while loop,
at which point the GUI seems to crash and become inactive. Does anybody have
any ideas as to how this can be overcome? I believe it is something to do
with the GUI being updated. Thanks. Below is a simplified version of my GUI.
GUI
#!/usr/bin/env python
# Python 3
from tkinter import *
from tkinter import ttk
from Entry import ConstrainedEntry
import tkinter.messagebox
import functions
AlarmCode = "2222"
root = Tk()
root.title("Simple Interface")
mainframe = ttk.Frame(root, padding="3 3 12 12")
mainframe.grid(column=0, row=0, sticky=(N, W, E, S))
mainframe.columnconfigure(0, weight=1)
mainframe.rowconfigure(0, weight=1)
ttk.Button(mainframe, width=12,text="ButtonTest",
command=lambda: functions.test()).grid(
column=5, row=5, sticky=SE)
for child in mainframe.winfo_children():
child.grid_configure(padx=5, pady=5)
root.mainloop()
functions
def test():
period = 0
while True:
if (period) <=100:
time.sleep(1)
period +=1
print(period)
else:
print("100 seconds has passed")
break
What will happen in the above is that when the loop is running the application
will crash. If I insert a break in the else statement after the period has
elapsed, everything will work fine. I want users to be able to click when in
loops as this GUI will run a number of different functions.
Answer: Don't use `time.sleep` in the same thread as your Tkinter code: it freezes
the GUI until the execution of `test` is finished. To avoid this, you should
use the
[`after`](http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/universal.html)
widget method:
# GUI
ttk.Button(mainframe, width=12,text="ButtonTest",
command=lambda: functions.test(root))
.grid(column=5, row=5, sticky=SE)
# functions
def test(root, period=0):
if period <= 100:
period += 1
print(period)
root.after(1000, lambda: test(root, period))
else:
print("100 seconds has passed")
* * *
**Update:**
In your comment you also add that your real code won't use `time.sleep`, so the
original example may not be the most appropriate. In that case, you can create
a new thread to run your _intensive_ code.
Note that I posted the `after` alternative first because multithreading
should be used only when completely necessary: it adds overhead to your
application, as well as more difficulty debugging your code.
from threading import Thread
ttk.Button(mainframe, width=12,text="ButtonTest",
command=lambda: Thread(target=functions.test).start())
.grid(column=5, row=5, sticky=SE)
# functions
def test():
for x in range(100):
time.sleep(1) # Simulate intense task (not real code!)
print(x)
print("100 seconds has passed")
|
os.system for submitting command
Question: I am using os.system to submit a command to the system.
I.e.,
import os
os.system(my_cmd)
But I was wondering how I could obtain the output. Let's say I am in
bash and I type in my command; I'd get an output of this form:
Job <57960787> is submitted to queue <queueq>.
How can I, in python, using the os.system(cmd), also obtain the text output,
and parse it to obtain the job id, 57960787.
Thanks!
Answer: It is better to use the `subprocess` module documentation
[here](http://docs.python.org/2/library/subprocess.html), example below:
import subprocess,re
p = subprocess.Popen('commands',stdout=subprocess.PIPE,stderr=subprocess.PIPE)
results, errors = p.communicate()
print results
re.search('<(\d+)>', results).group(1) #Cheers, Jon Clements
Or you can even use `os.popen` documentation
[here](http://docs.python.org/2/library/os.html#os.popen),
p_os = os.popen("commands","r")
line = p_os.readline()
print line
re.search('<(\d+)>', line).group(1) #Cheers, Jon Clements
Or, as Jon Clements kindly suggested, you can use `subprocess.check_output`
(documentation
[here](http://docs.python.org/2/library/subprocess.html#subprocess.check_output)):
>>> subprocess.check_output(["echo", "Hello World!"])
'Hello World!\n'
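Putting it together for the job line in the question; `echo` stands in here for the real submission command:

```python
import re
import subprocess

# Run the command and capture its stdout (echo simulates the scheduler output)
out = subprocess.check_output(
    ["echo", "Job <57960787> is submitted to queue <queueq>."])

# Parse the job id out of the "Job <NNN> is submitted ..." line
job_id = re.search(r"Job <(\d+)>", out.decode()).group(1)
print(job_id)  # 57960787
```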
|
Remove near-duplicate elements in Python list while preserving variables
Question: I have a list that contains near-duplicate elements, with the exception of a
number that identifies the element. I want to remove all duplicates while
preserving the number of the first element containing a duplicate.
For example, I want to replace `l` with `lnew`:
l = ['iter1apple','iter2banana','iter3carrot','iter4apple','iter5orange','iter6banana','iter7mango']
lnew = ['iter1apple','iter2banana','iter3carrot','iter5orange','iter7mango']
I'm guessing this has something to do with splitting the number from the rest
of the list element, converting the list to set and using `defaultdict` with
the elements from the split, but I can't figure out how.
Any suggestions would be appreciated.
Answer: If I have understood you correctly, you want to discard items whose trailing
name already appeared in an earlier element of the list. In that case, you can
use a regular expression and a list to track the elements that have been seen:
import re
l = ['iter1apple', 'iter2banana', 'iter3carrot', 'iter4apple', 'iter5orange', 'iter6banana', 'iter7mango']
duplicates = []
lnew = []
for item in l:
match = re.match("^iter\d+(\w+)$", item)
if match and not match.group(1) in duplicates:
duplicates.append(match.group(1))
lnew.append(item)
# lnew = ['iter1apple','iter2banana','iter3carrot','iter5orange','iter7mango']
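The same idea with a `set`, which gives O(1) membership tests instead of scanning a list on every iteration:

```python
import re

l = ['iter1apple', 'iter2banana', 'iter3carrot', 'iter4apple',
     'iter5orange', 'iter6banana', 'iter7mango']

seen = set()
lnew = []
for item in l:
    # Strip the "iterN" prefix; the rest of the element is the dedup key
    name = re.match(r"^iter\d+(\w+)$", item).group(1)
    if name not in seen:
        seen.add(name)
        lnew.append(item)

print(lnew)
```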
|
Create a list from SQL query output with the attribute name instead of the index using Python
Question: I have this code.
cursor.execute("select id, name from client")
clientids= cursor.fetchall()
clientidList = []
for clientid in clientids:
#I can do that
clientidList.append(clientid [0])
#but I can't do that.
clientidList.append(clientid ['id'])
With the second attempt I get an error: `TypeError: 'tuple' object is not callable`
Any idea why this is not possible? Is there any other way to accomplish it?
Using the attribute name is more readable than the index, especially in a
query that outputs more than 20 columns. I tried
[this](http://stackoverflow.com/questions/10195139/how-to-retrieve-sql-result-
column-value-using-column-name-in-python) but it didn't work for me.
Thanks!
Answer: Try this:
import mysql.connector
db_config = {
'user': 'root',
'password': 'root',
'port' : '8889',
'host': '127.0.0.1',
'database': 'clients_db'
}
cnx = mysql.connector.connect(**db_config)
cur = cnx.cursor()
cur.execute('SELECT id FROM client')
columns = cur.column_names
clientids = []
for (entry) in cur:
count = 0
buffer = {}
for row in entry:
buffer[columns[count]] = row
count += 1
clientids.append(buffer)
cur.close()
clientidList = []
for client in clientids:
clientidList.append(client['id'])
pprint.pprint(clientids)
pprint.pprint(clientidList)
**Update**
Updated the code to select row names too. Not foolproof I guess. Test it some
:)
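If the driver supports it, a row factory gives name-based access without building the dicts by hand. The standard library's `sqlite3` shows the pattern below; mysql.connector has a similar option in recent versions (`cnx.cursor(dictionary=True)`) — check your driver's docs:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE client (id INTEGER, name TEXT)')
conn.executemany('INSERT INTO client VALUES (?, ?)', [(1, 'ann'), (2, 'bob')])

conn.row_factory = sqlite3.Row  # rows created from now on are addressable by column name
cur = conn.execute('SELECT id, name FROM client')
clientidList = [row['id'] for row in cur]  # no numeric indexes needed
print(clientidList)  # [1, 2]
```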
|
Boolean Variable in TKinter 8.5
Question: I did this inside the python shell:
from Tkinter import *
v = BooleanVar()
But I got the following error:
Traceback (most recent call last):
File "<pyshell#52>", line 1, in <module>
v = BooleanVar()
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 320, in __init__
Variable.__init__(self, master, value, name)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 185, in __init__
self._tk = master.tk
AttributeError: 'NoneType' object has no attribute 'tk'
>>>
And then while just playing around trying to make it work I tried this:
v = BooleanVar
And this worked.
So I went on to do the next logical thing, checking if the variable has been
initialised and got this:
>>> v
<class Tkinter.BooleanVar at 0xb6f666bc>
Then I tried to initialise with the value `True` and for that I used the `set`
method:
v.set(True)
For which I got the following error:
Traceback (most recent call last):
File "<pyshell#63>", line 1, in <module>
v.set(True)
TypeError: unbound method set() must be called with BooleanVar instance as first argument (got bool instance instead)
**What** is going on?
**Please** help me with this issue
**Goal** I want to use this as the variable associated with a check button in
a menu
**specs** Linux mint 14, python 2.7, Tkinter 8.5
Answer: Short answer: Call `root = tk.Tk()` first.
* * *
Long answer: The line that is raising the error,
self._tk = master.tk
is failing because `master` is `None`. However, look at the code (in
Tkinter.py) above this line:
class Variable:
def __init__(self, master=None, value=None, name=None):
if not master:
master = _default_root
...
self._tk = master.tk
If you explicitly pass a `master` widget to `BooleanVar` (which is a subclass
of `Variable`) then `master` would not be `None`. Or, if `_default_root` were
not `None`, then `master` would not be `None`.
* * *
In a normal Tkinter application, you will make a root window before making a
`BooleanVar`. In that case, `tk.BooleanVar()` will not raise an error.
`root = tk.Tk()` sets the `_default_root` global variable to itself (as long
as the `useTk` parameter is `True` \-- which it is by default). The
`_default_root` is used by `BooleanVar` as the widget's master if no master is
explicitly set with `tk.BooleanVar(master)`.
So in summary, either call `root = tk.Tk()` or something similar to set the
`_default_root` before calling `tk.BooleanVar()`, or pass an explicit master
widget as the first argument: `tk.BooleanVar(master)`.
In [1]: import Tkinter as tk
In [2]: root = tk.Tk()
In [3]: x = tk.BooleanVar()
In [4]: x.set(True)
In [5]: x.get()
Out[5]: 1
|
Python advice using delays with tkinter
Question: I have created GUI using tkinter, which will run on the RaspberryPi and will
perform various actions such as lighting LEDs. The problem I have is toggling
an LED on and off using the root.after scheduling as if I use time.sleep(),
the GUI will freeze while in this sleep. Here is my code, below I want to
replace time.sleep() with some kind of delay roughly 500ms.
def toggleLED(root,period=0):
if (period <15) and (Status is "On"):
GPIO.output(11, True)
time.sleep(0.5) #Needs to be replaced as causing GUI to freeze
GPIO.output(11, False)
root.after(1000, lambda: toggleLED(root, period)) #schedule task every 1 second while condition is true
elif (Status == "Off"):
print("Lights have been switched off")
else:
GPIO.output(11, True)
thanks
This is one solution, but it seems very messy:
def toggleLED(root,period=0):
global Flash
while (period <30) and (Status is "On"):
if (Flash is True):
GPIO.output(11, False)
Flash = False
break
elif (Flash is False):
GPIO.output(11, True)
Flash = True
break
else:
break
if (period <30) and (Status == "On"):
period +=1
print(period)
root.after(500, lambda: toggleLED(root, period))
elif (Status == "Off"):
print("Lights have been switched off")
else:
GPIO.output(11, True)
Answer: Part of the problem is your while loop -- you don't need any sort of loop
since you have the event loop.
Here's an example of toggling a label every 500ms for 30 seconds:
import Tkinter as tk
class Example(tk.Frame):
def __init__(self, *args, **kwargs):
tk.Frame.__init__(self, *args, **kwargs)
self._job_id = None
self.led = tk.Label(self, width=1, borderwidth=2, relief="groove")
self.start_button = tk.Button(self, text="start", command=self.start)
self.stop_button = tk.Button(self, text="stop", command=self.stop)
self.start_button.pack(side="top")
self.stop_button.pack(side="top")
self.led.pack(side="top")
def stop(self):
if self._job_id is not None:
self.after_cancel(self._job_id)
self._job_id = None
self.led.configure(background="#ffffff")
def start(self):
self._job_id = self.after(500, lambda: self.toggle(60))
def toggle(self, counter):
bg = self.led.cget("background")
bg = "#ffffff" if bg == "#ff0000" else "#ff0000"
self.led.configure(background=bg)
if counter > 1:
self._job_id = self.after(500, lambda: self.toggle(counter-1))
root = tk.Tk()
Example(root).pack(side="top", fill="both", expand=True)
root.mainloop()
|
subprocess.popen pid change
Question: I have a script in which I use subprocess.Popen to start an external program
and process.kill() to kill it pretty much as soon as it's started. I've been
getting Windows Error [5] (Access Denied) every time the script tries to kill
it. I've realized that the pid of the program is actually changing after it's
opened. Is there a way, in Python, to monitor the process for the change, or
to just retrieve the new pid?
Here is the code:
import subprocess
import time
    proc = subprocess.Popen("Path/to/WinSCP.exe")
time.sleep(2)
proc.kill()
The error:
Traceback (most recent call last):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1410, in __call__
return self.func(*args)
File "C:\Path", line 231, in __call__
self.scpsetup()
File "C:\Path", line 329, in scpsetup
proc.kill()
File "C:\Python27\lib\subprocess.py", line 1019, in terminate
_subprocess.TerminateProcess(self._handle, 1)
WindowsError: [Error 5] Access is denied
Answer: This is what I ended up doing:
import tempfile
import subprocess
import time
# Create a temp file to receive output
tf = tempfile.NamedTemporaryFile(delete=False)
output = open(tf.name, "w")
# Open and close WinSCP
    subprocess.Popen("Path/To/WinSCP.exe")
time.sleep(2)
subprocess.call("TASKKILL /IM WinSCP.exe", stdout=output)
tf.close()
The issue I had with methods like this before was that I couldn't hide the
output of the command. This may not be the prettiest way to accomplish this
but it works.
Also note that I am using Windows 8. I understand that the command itself may
vary slightly in different versions of Windows.
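As a side note, hiding a command's output doesn't strictly require a named temp file; redirecting to `os.devnull` does the same job. A minimal, platform-neutral sketch of the idea (here a throwaway Python process stands in for the Windows-only TASKKILL call):

```python
import os
import subprocess
import sys

# Run a noisy command but discard everything it prints, the way the
# TASKKILL output was discarded above.
with open(os.devnull, 'w') as devnull:
    rc = subprocess.call([sys.executable, '-c', 'print("noise")'],
                         stdout=devnull, stderr=devnull)
print(rc)  # 0 on success
```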
|
Limit count of threads for python script
Question: I have a python 2.7 script on a CentOS linux server which had a bug starting
new threads until I could not login to the server anymore.
Is there a way to tell the python runtime not to create/start/use more than 50
threads (and to throw exceptions instead or kill the script) to prevent this
problem, or do I have to implement this in the script myself?
The threads are started via `thread.start_new_thread()`.
Answer: According to the [documentation on the python `threading`
module](http://docs.python.org/2/library/threading.html#module-threading) (the
newer threading module) you can call an `active_count()` method on the module
and find out how many threads are running. Now, I understand that you are
using the lower-level `thread` module, so I looked into it by running:
import thread
dir(thread)
This produced the list:
['LockType', '__doc__', '__name__', '__package__', '_count', '_local', 'allocate', 'allocate_lock', 'error', 'exit', 'exit_thread', 'get_ident', 'interrupt_main', 'stack_size', 'start_new', 'start_new_thread']
(I show this because it is very useful in finding out what modules contain,
specifically in the interactive terminal)
You can see that the `thread` module contains a `_count` field, which when
called (e.g. `thread._count()`) should return the number of threads running
(you could check this value and raise exceptions when it exceeds your
maximum).
Of course, the underscore in front of `_count` means that the method is
treated, somewhat like a private method. However, in Python you can still
access it.
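Python itself won't enforce a cap for you, so the check has to live in your code. Using the newer `threading` module (whose `active_count()` mirrors `thread._count()`), a guard might look like this sketch:

```python
import threading

MAX_THREADS = 50

def start_worker(target, *args):
    """Start a thread, or raise instead of exceeding the limit."""
    if threading.active_count() >= MAX_THREADS:
        raise RuntimeError('thread limit of %d reached' % MAX_THREADS)
    t = threading.Thread(target=target, args=args)
    t.start()
    return t

results = []
t = start_worker(results.append, 'done')
t.join()
print(results)  # ['done']
```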
|
Fill in form using Spynner in Python
Question: I am trying to fill in my user name and password on this site:
<https://www.imleagues.com/Login.aspx>
From there, I want to submit it and log in. I have been able to click the
login button, but it tells me I have incorrect username and password. How
should I go about filling these in? I thought I had it using this:
URL = 'https://www.imleagues.com/Login.aspx'
address = "http://www.imleagues.com/School/Team/Home.aspx?Team=27d6c31187314397b00293fb0cfbc79a"
b = spynner.Browser()
b.show()
b.load(URL)
b.wk_fill('input[name=ctl00$ContentPlaceHolder1$inUserName]', '******')
b.wk_fill('input[name=ctl00$ContentPlaceHolder1$inPassword]', '******')
but apparently this doesn't work. Thanks for any help.
Answer: This works for me:
import spynner
URL = 'https://www.imleagues.com/Login.aspx'
address = "http://www.imleagues.com/School/Team/Home.aspx?Team=27d6c31187314397b00293fb0cfbc79a"
b = spynner.Browser()
b.show()
b.load(URL)
b.runjs('$("#ctl00_ContentPlaceHolder1_inUserName").val("testUser")')
b.runjs('$("#ctl00_ContentPlaceHolder1_inPassword").val("testPass")')
|
Plone 3.1.2 - TypeError in ATDocument.getText() method
Question: My task is to unload content from a Plone 3.1.2 website and load information
about the content to an SQL database + file system
I've recreated the website, got access to ZODB and recreated object and folder
structure. I am also able to read properties of folders, files and documents.
I can't get the .getText() method of ATDocument to work. The Traceback looks
like this:
Traceback (most recent call last):
File "C:\Users\jan\Eclipse_workspace\Plone\start.py", line 133, in ?
main()
File "C:\Users\jan\Eclipse_workspace\Plone\start.py", line 118, in main
print dokument.getText()
File "e:\Program Files\Plone 3\Data\Products\Archetypes\ClassGen.py", line 54, in generatedAccessor
File "e:\Program Files\Plone 3\Data\Products\Archetypes\BaseObject.py", line 828, in Schema
TypeError: ('Could not adapt', <ATDocument at /*object_path*>, <InterfaceClass Products.Archetypes.interfaces._schema.ISchema>)
I suspect that there is a problem with connecting the object to interface
ISchema, but I've never worked with Plone before and don't know it's object
model.
Any suggestions what might be wrong or missing, how I can fix it, and/or what
to do next? I suspect that I have to connect the ISchema interface class with
this object somehow, but have no idea where to start.
I'll be grateful for any help since I'm stuck for 2 days now and not moving
forward.
I know nothing about ZCML format or how to edit it. Because after `>>> print
dokument.getText()` in debug mode the script jumps to `makeMethod()` method in
Generator class I assume that the script doesn't execute `.getText()` but
tries to create this method instead.
Since `inspect.getmembers(dokument)` returns a `getText()` method I'm really
confused. Do you know in which ZCML file might be related to ATDocument class?
Or where can I look for any information on this subject?
My start.py file doesn't do much else than the following imports:
from ZODB.FileStorage import FileStorage
from ZODB.DB import DB
from OFS.Application import Application
from BTrees import OOBTree
from Products.CMFPlone.Portal import PloneSite
then it gets access to dokument object and tries to execute `.getText()`
Edit 2013-03-26 15:27 (GMT):
About the .zcml files: the site I've received consists of 3 folders: Products
(extracted to `\Plone 3\Data`), lib and package-includes.
Inside the lib there is python folder containing 3 subfolders: 'common', 'abc'
and 'def' (names changed not to release client's information). Each of these
subfolders contains a configure.zcml file, one of these also includes
override.zcml file.
In the folder package-includes there are 4 files, each of them 1 line long.
They contain the following lines:
<include package="abc" file="configure.zcml" />
<include package="def" file="overrides.zcml" />
<include package="common" file="configure.zcml" />
<include package="def" file="configure.zcml" />
These zcml files are not copied at the moment. Where can I copy them to have
them imported?
Answer: You are missing component registrations, usually registered when loading the
ZCML files in a site.
You want to end up with the possibility to run `bin/instance run
yourscript.py` instead, which leaves all the tedious site and ZCML loading to
Zope.
Once you have that running reliably, you can then access the site in a script
that sets up the local component manager and a security manager:
from zope.app.component.hooks import setSite
from Testing.makerequest import makerequest
from AccessControl.SecurityManagement import newSecurityManager
site_id = 'Plone' # adjust to match your Plone site object id.
admin_user = 'admin' # usually 'admin', probably won't need adjusting
app = makerequest(app)
site = app[site_id]
setSite(site)
user = app.acl_users.getUser(admin_user).__of__(site.acl_users)
newSecurityManager(None, user)
# `site` is your Plone site, now correctly set up
Save this script somewhere, and run it with:
bin/instance run path/to/yourscript.py
|
'lxml.etree._Element' object has no attribute 'write' ??? (PYTHON)
Question:
from lxml import etree
root = etree.Element('root1')
element = etree.SubElement(root, 'element1')
root.write( 'xmltree.xml' )
Error:
AttributeError: 'lxml.etree._Element' object has no attribute 'write'
how can I fix this?
Answer: If you are wanting to save your new xml to a file then `etree.tostring` is the
method to use.
E.g.
>>> from lxml import etree
>>> root = etree.Element('root1')
>>> element = etree.SubElement(root, 'element1')
>>> print etree.tostring(root,pretty_print=True) ## Print document
<root1>
<element1/>
</root1>
>>> with open('xmltree.xml','w') as f: ## Write document to file
... f.write(etree.tostring(root,pretty_print=True))
...
>>>
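Alternatively, `write()` lives on the ElementTree wrapper, not on the element itself, so wrapping the root gives you a one-liner. The sketch below uses the standard library's `xml.etree.ElementTree`; lxml mirrors the same API, so with lxml it would be `etree.ElementTree(root).write('xmltree.xml', pretty_print=True)`:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

root = ET.Element('root1')
ET.SubElement(root, 'element1')

path = os.path.join(tempfile.mkdtemp(), 'xmltree.xml')
ET.ElementTree(root).write(path)  # the tree wrapper has write(); elements don't

content = open(path).read()
print(content)
```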
|
Multiprocessing python-server creates too many temp-directories
Question: I'm trying to implement a server in python3.3 that has a separate thread
preloaded to do all the processing for the incoming connections.
from multiprocessing import Process, Pipe, Queue
from multiprocessing.reduction import reduce_socket
import time
import socketserver,socket
def process(q):
while 1:
fn,args = q.get()
conn = fn(*args)
while conn.recv(1, socket.MSG_PEEK):
buf = conn.recv(100)
if not buf: break
conn.send(b"Got it: ")
conn.send(buf)
conn.close()
class MyHandler(socketserver.BaseRequestHandler):
def handle(self):
print("Opening connection")
print("Processing")
self.server.q.put(reduce_socket(self.request))
while self.request.recv(1, socket.MSG_PEEK):
time.sleep(1)
print("Closing connection")
class MyServer(socketserver.ForkingTCPServer):
p = Process
q = Queue()
parent_conn,child_conn = Pipe()
def __init__(self,server_address,handler):
socketserver.ForkingTCPServer.__init__(self,server_address, handler)
self.p = Process(target=process,args=(self.q,))
self.p.start()
def __del__(self):
self.p.join()
server_address = ('',9999)
myserver = MyServer(server_address,MyHandler)
myserver.serve_forever()
I can test that it works using the following script:
from multiprocessing.reduction import reduce_socket
import time
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 9999))
time.sleep(1)
print("reduce_socket(s)")
fn,args = reduce_socket(s)
time.sleep(1)
print("rebuild_socket(s)")
conn = fn(*args)
time.sleep(1)
print("using_socket(s)")
conn.send("poks")
print conn.recv(255)
conn.send("poks")
print conn.recv(255)
conn.send("")
print conn.recv(255)
conn.close()
Unfortunately something seems to be wrong, since after running the test n
times my tmp-folder is filled with subfolders:
$ ls /tmp/pymp*|wc -l
32000
These temporary files are created by `socket_reduce()`. Interestingly the
`rebuild/reduce_socket()` in the client also creates the temporary files, but
they are removed once the function exits. The maximum amount of folders in my
current tmp-filesystem is 32000 which causes a problem. I could remove the
/tmp/pymp*-files by hand or somewhere in the server, but I guess there should
also be the correct way to do this. Can anyone help me with this?
Answer: Okay, kind of fixed it. From `../lib/python3.3/multiprocessing/util.py`:
$ grep "def get_temp_dir" -B5 /usr/local/lib/python3.3/multiprocessing/util.py
#
# Function returning a temp directory which will be removed on exit
#
def get_temp_dir():
It seems that the temporary directory should be available until the process
quits. Since my `process()` and `main()` both run forever, the temporary file
won't be removed. To fix it I can create another process that will hand the
reduced_socket to the `process()`:
def process(q):
while 1:
fn,args = q.get()
conn = fn(*args)
while conn.recv(1, socket.MSG_PEEK):
buf = conn.recv(100)
if not buf: break
conn.send(b"Got it: ")
conn.send(buf)
conn.close()
q.put("ok")
class MyHandler(socketserver.BaseRequestHandler):
def socket_to_process(self,q):
q.put(reduce_socket(self.request))
q.get()
def handle(self):
p = Process(target=self.socket_to_process,args=(self.server.q,))
p.start()
p.join()
This way the temporary file is created in a subprocess that will exit once the
`process()` has done its thing with the input. I don't think this is an
elegant way of doing it but it works. If someone knows better, please let
stackoverflow know.
|
ImportError: No module named opencv.cv
Question: I installed opencv on Linux Mint using:
sudo apt-get install python-opencv
When I attempt to import the opencv module using:
from opencv.cv import *
I receive the error:
from opencv.cv import *
ImportError: No module named opencv.cv
Any idea why this error is occurring and how to resolve the problem?
Answer: Try doing
import cv
as in
[examples](http://opencv.willowgarage.com/documentation/python/cookbook.html)
or
from cv import *
|
Is there a standard solution for Gauss elimination in Python?
Question: Is there somewhere in the cosmos of `scipy/numpy/...` a standard method for
Gauss-elimination of a matrix?
One finds many snippets via google, but I would prefer to use "trusted"
modules if possible.
Answer: I finally found that it can be done using **LU decomposition**. Here the
**U** matrix represents the reduced form of the linear system.
from numpy import array
from scipy.linalg import lu
a = array([[2.,4.,4.,4.],[1.,2.,3.,3.],[1.,2.,2.,2.],[1.,4.,3.,4.]])
pl, u = lu(a, permute_l=True)
Then `u` reads
array([[ 2., 4., 4., 4.],
[ 0., 2., 1., 2.],
[ 0., 0., 1., 1.],
[ 0., 0., 0., 0.]])
Depending on the solvability of the system this matrix has an upper triangular
or trapezoidal structure. In the above case a line of zeros arises, as the
matrix has only rank `3`.
|
python twisted man-in-the-middle implementation
Question: What I need is a sort of man-in-the-middle implementation: I need a server who
receives connections from clients (binary data with different lengths) and
forwards the stream to a server it connects to (acting as a client), and then
sends the data back from the server it is connected to, to the clients.
It actually works standing between the clients and the servers, and passing
the data they exchange (which is a stream, so it continuously get from one
side and sends to the other one).
The server is static, so it is always the same, and its address can even be
hardcoded; however when a client drops the connection, this server must also
drop the connection to the "real" server.
I've been looking around, but couldn't find a solution or an example for such
a simple problem.
The code I've made works actually, but I have not yet managed to find how to
put a reference into the server part that says "this is your assigned client",
or into the client that says "this is your server". Here's my code:
#!/usr/bin/env python
from twisted.internet import protocol, reactor
from twisted.protocols import basic
client = None
server = None
class ServerProtocol(protocol.Protocol):
def connectionMade(self):
global server
factory = protocol.ClientFactory()
factory.protocol = ClientProtocol
server = self
reactor.connectTCP('localhost', 1324, factory)
def dataReceived(self, data):
global client
client.transport.write(data)
class ClientProtocol(protocol.Protocol):
def connectionMade(self):
global client
# Here's the instance of the client
client = self
def dataReceived(self, data):
global server
server.transport.write(data)
def main():
import sys
from twisted.python import log
log.startLogging(sys.stdout)
factory = protocol.ServerFactory()
factory.protocol = ServerProtocol
# Here's the instance of the server
server = ServerProtocol
reactor.listenTCP(2593, factory)
reactor.run()
if __name__ == '__main__':
main()
Now, the point is that the instance can't be contained into the global
objects, and should be put inside the two classes: how?
Answer: I've managed to solve the issue by myself and, for future references (or to
help anybody else who had this problem), here's the code I used to solve it.
I think both my solution and the one kindly given by jedwards work; now I just
have to study his own a little more to be sure that what I've done is correct:
this is my first application using the Twisted framework and studying somebody
else's solution is the way to learn something new! :)
#!/usr/bin/env python
from twisted.internet import protocol, reactor
from twisted.protocols import basic
class ServerProtocol(protocol.Protocol):
def __init__(self):
self.buffer = None
self.client = None
def connectionMade(self):
factory = protocol.ClientFactory()
factory.protocol = ClientProtocol
factory.server = self
reactor.connectTCP('gameserver16.gamesnet.it', 2593, factory)
def dataReceived(self, data):
if (self.client != None):
self.client.write(data)
else:
self.buffer = data
def write(self, data):
self.transport.write(data)
print 'Server: ' + data.encode('hex')
class ClientProtocol(protocol.Protocol):
def connectionMade(self):
self.factory.server.client = self
self.write(self.factory.server.buffer)
self.factory.server.buffer = ''
def dataReceived(self, data):
self.factory.server.write(data)
def write(self, data):
self.transport.write(data)
print 'Client: ' + data.encode('hex')
def main():
import sys
from twisted.python import log
log.startLogging(sys.stdout)
factory = protocol.ServerFactory()
factory.protocol = ServerProtocol
reactor.listenTCP(2593, factory)
reactor.run()
if __name__ == '__main__':
main()
|
Get header values of reply using pycurl
Question: I would like to know some ways to capture and access the header information of
the reply when making a request with PyCurl:
c = pycurl.Curl()
c.setopt(c.URL,'MY_URL')
c.setopt(c.COOKIEFILE,'cookies')
c.setopt(c.COOKIE,'cookies')
c.setopt(c.POST,1)
c.setopt(c.POSTFIELDS,'MY AUTH VALUES')
c.setopt(c.VERBOSE, True)
b = StringIO.StringIO()
c.setopt(c.WRITEFUNCTION, b.write)
c.perform()
The reply will be well-formatted JSON written to buffer b.
I wish to recover the value of the "Location" header in the reply.
When trying to use curl, this value can be seen in the verbose output:
[... Curl output ...]
> GET XXXXXXXXX
[... Request ...]
[... Curl output ...]
< HTTP/1.1 302 Found
[... Other headers ...]
< Location: YYYYYYYYYYYYYYY
[... Rest of reply ...]
How do I recover the value of the `Location` header from python?
Answer: **If you have to use PyCurl**
Then you can pass a callback function to get the header information:
# code...
# Callback function invoked when header data is ready
def header(buf):
# Print header data to stderr
import sys
sys.stderr.write(buf)
# Returning None implies that all bytes were written
# more code...
c.setopt(pycurl.HEADERFUNCTION, header)
# yet more code...
Find out more from [the
docs](http://pycurl.sourceforge.net/doc/callbacks.html).
**You can also use requests instead of pycurl**
While this may not be possible, and does not directly answer your question, I
would recommend that you use the [requests
library](https://github.com/kennethreitz/requests) instead of pyCurl:
import requests
payload = {"key":"value"}
cookies = {"key":"value"}
r = requests.post('https://my.example.com', data=payload, cookies=cookies)
location = r.headers["Location"]
content = r.text
print(content)
It will make your life much easier. Find out more by [reading the
docs](http://www.python-requests.org/)
|
Python: how to have mutually exclusive groups in subparser using argparse?
Question: I am writing a program like:
import argparse
def task1(args):
print "running task 1"
def task2(args):
print "running task 2"
if __name__=="__main__":
parser=argparse.ArgumentParser(description="How can I have mutually exclusive groups in subparsers?")
subparsers=parser.add_subparsers()
t1sub=subparsers.add_parser("task1")
#....
t1sub.set_defaults(func=task1)
# here I would like to have a mutually exclusive group
    # when running task1, one of the options --in and --out is required, but one excludes the other
    # apparently a subparser has no add_group() nor add_mutually_exclusive_group(), though
t2sub=subparsers.add_parser("task2")
#....
    t2sub.set_defaults(func=task2)
args = parser.parse_args()
args.func(args)
As explained, when I run task1 one of `--in` and `--out` is required, but not
both. How can I add this functionality to my program?
Answer: Subparsers support all the methods a normal parser supports, including
`add_mutually_exclusive_group()`:
>>> megroup = t1sub.add_mutually_exclusive_group()
>>> megroup.add_argument('--in', action='store_true')
_StoreTrueAction(option_strings=['--in'], dest='in', nargs=0, const=True, default=False, type=None, choices=None, help=None, metavar=None)
>>> megroup.add_argument('--out', action='store_true')
_StoreTrueAction(option_strings=['--out'], dest='out', nargs=0, const=True, default=False, type=None, choices=None, help=None, metavar=None)
>>> parser.parse_args(['task1', '--in'])
Namespace(func=<function task1 at 0x10a0d9050>, in=True, out=False)
>>> t1sub.print_help()
usage: task1 [-h] [--in | --out]
optional arguments:
-h, --help show this help message and exit
--in
--out
>>> parser.print_help()
usage: [-h] {task1,task2} ...
How can I have mutually exclusive groups in subparsers?
positional arguments:
{task1,task2}
optional arguments:
-h, --help show this help message and exit
|
"Flattening" a list of dictionaries
Question: So my aim is to go from:
fruitColourMapping = [{'apple': 'red'}, {'banana': 'yellow'}]
to
finalMap = {'apple': 'red', 'banana': 'yellow'}
One way I came up with is:
from itertools import chain
fruits = list(chain.from_iterable([d.keys() for d in fruitColourMapping]))
colour = list(chain.from_iterable([d.values() for d in fruitColourMapping]))
return dict(zip(fruits, colour))
Is there any better, more pythonic way?
Answer:
finalMap = {}
for d in fruitColourMapping:
finalMap.update(d)
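If you prefer an expression over the loop, a dict comprehension does the same merge in one line (as with `update()`, later duplicates win):

```python
fruitColourMapping = [{'apple': 'red'}, {'banana': 'yellow'}]
finalMap = {k: v for d in fruitColourMapping for k, v in d.items()}
print(finalMap)  # {'apple': 'red', 'banana': 'yellow'}
```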
|
How to aggregate data into dictionary (or some other database) in python?
Question: I am wondering if the following would be a good method in python to aggregate
data that needs to be queried in another function by multiple keys or if I
would have better performance using SQLite to read and write data.
For example some pseudocode of the function that aggregates:
import sys

def aggregatesources(path1, path2, path3):  # called with sys.argv[1], sys.argv[2], sys.argv[3]
    source1data = open(path1, 'r').read()  # source1.txt
    source2data = open(path2, 'r').read()  # source2.txt
    source3data = open(path3, 'r').read()  # source3.txt
    aggregated_data = source1data + source2data + source3data  # + etc...
This is the function that needs to make an aggregation of sources but my
question is when I supply the sources as:
type1, 32
type2, 9
type3, 12
type4, 21
etc...
is there a way to take the aggregated data and associate it within a larger
dictionary so that:
type1, [source1, 32], [source2,etc...], [etc...]
I want to use python's dictionary querying speed to make this instantaneous,
but if there are alternative solutions that can do the same thing please
elaborate on those.
Answer: This should do what you're looking for:
import csv
def add_source_to_dict(mydict, sourcefilename):
with open(sourcefilename, 'rb') as csvfile:
my_reader = csv.reader(csvfile)
for atype, value in my_reader:
            if atype not in mydict:
                mydict[atype] = {}
            mydict[atype][sourcefilename] = value.strip()
return mydict
data = {}
data = add_source_to_dict(data, "source1.txt")
Interactively:
>>> data = {}
>>> data = add_source_to_dict(data, "source1.txt")
>>> data = add_source_to_dict(data, "source2.txt")
>>> data
{
    'type1': {
        'source2.txt': '44',
        'source1.txt': '32'
    },
    'type3': {
        'source2.txt': '46',
        'source1.txt': '12'
    },
    'type2': {
        'source2.txt': '45',
        'source1.txt': '9'
    },
    'type4': {
        'source2.txt': '47',
        'source1.txt': '21'
    }
}
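A variant of the same idea with `collections.defaultdict`, which drops the key-existence check. This is a sketch assuming the two-column `type, value` layout shown in the question (the demo writes two throwaway source files so it is self-contained):

```python
import csv
import os
import tempfile
from collections import defaultdict

def load_sources(filenames):
    data = defaultdict(dict)  # type -> {source file: value}
    for fname in filenames:
        with open(fname) as f:
            for atype, value in csv.reader(f):
                data[atype][os.path.basename(fname)] = value.strip()
    return data

# demo with two throwaway source files
tmp = tempfile.mkdtemp()
paths = []
for name, rows in [('source1.txt', 'type1, 32\ntype2, 9\n'),
                   ('source2.txt', 'type1, 44\ntype2, 45\n')]:
    path = os.path.join(tmp, name)
    open(path, 'w').write(rows)
    paths.append(path)

data = load_sources(paths)
print(data['type1'])  # {'source1.txt': '32', 'source2.txt': '44'}
```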
|
Python 3.3 can't seem to find the decoder module in pocketsphinx library
Question: I am getting the error **AttributeError: 'module' object has no attribute
'Decoder'.** I am using the following code. Because there was no setup.py file
included I moved the dlls and libraries into their respective folders in the
python33 directory manually. (Library files in lib and dlls in DLLs). I am
able to import the pocketsphinx and sphinxbase modules without error. Any help
would be appreciated.
import pocketsphinx
import sphinxbase
lmd="C:\Python33\Lib\site-packages\pocketsphinx\model\lm\HUB4_trigram_lm\language_model.arpaformat"
hmd="C:\Python33\Lib\site-packages\pocketsphinx\model\hmm\en_US"
dictd="C:\Python33\Lib\site-packages\pocketsphinx\model\lm\HUB4_trigram_lm\language_model.vocabulary"
fh = open('test1.wav', 'rb')
fh.seek(44)
decoder = pocketsphinx.Decoder(hmd, lmd, dictd)
nsamp = Decoder.decode_raw(fh)
hyp, uttid, score = decoder.get_hyp()
print('Got result'+ hyp+'score'+score)
Answer: > nsamp = Decoder.decode_raw(fh)
This part must be `decoder.decode_raw(fh)`. You are trying to look up
`Decoder` as a bare name instead of calling the method on the `decoder` object
you created.
|
Python regex find two groups
Question:
>>> text = '<a data-lecture-id="47"\n data-modal-iframe="https://class.coursera.org/neuralnets-2012-001/lecture/view?lecture_id=47"\n href="https://class.coursera.org/neuralnets-2012-001/lecture/47"\n data-modal=".course-modal-frame"\n rel="lecture-link"\n class="lecture-link">\nAnother diversion: The softmax output function [7 min]</a>'
>>> import re
>>> re.findall(r'data-lecture-id="(\d+)"|(.*)</a>', text)
>>> [('47', ''), ('', 'Another diversion: The softmax output function [7 min]')]
How do i extract the data out like this:
>>> ['47', 'Another diversion: The softmax output function [7 min]']
I think there should be a smarter regex expression for this.
Answer: It is [not recommended to parse HTML with regular
expressions](http://stackoverflow.com/questions/590747/using-regular-expressions-to-parse-html-why-not).
You can give a try to the
[`xml.dom.minidom`](http://docs.python.org/2/library/xml.dom.minidom.html)
module:
from xml.dom.minidom import parseString
xml = parseString('<a data-lecture-id="47"\n data-modal-iframe="https://class.coursera.org/neuralnets-2012-001/lecture/view?lecture_id=47"\n href="https://class.coursera.org/neuralnets-2012-001/lecture/47"\n data-modal=".course-modal-frame"\n rel="lecture-link"\n class="lecture-link">\nAnother diversion: The softmax output function [7 min]</a>')
anchor = xml.getElementsByTagName("a")[0]
print anchor.getAttribute("data-lecture-id"), anchor.childNodes[0].data
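If you do want a regex for this particular snippet, a single pattern with two groups avoids the empty strings that the alternation produces; a sketch against a shortened version of the sample string:

```python
import re

text = ('<a data-lecture-id="47"\n'
        ' href="https://class.coursera.org/neuralnets-2012-001/lecture/47"\n'
        ' class="lecture-link">\n'
        'Another diversion: The softmax output function [7 min]</a>')

# one match, two groups: the id attribute and the link text
m = re.search(r'data-lecture-id="(\d+)".*?>\s*(.*?)</a>', text, re.DOTALL)
print(list(m.groups()))
# ['47', 'Another diversion: The softmax output function [7 min]']
```

It still breaks as soon as the markup changes shape, which is why the DOM parser above is the safer route.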
|
Python: pass function as parameter, with options to be set
Question: In Python, I need to call many very similar functions on the **same input
arguments** `sampleA` and `sampleB` . The only thing is that some of these
functions require an **option** to be set, and some don't. For example:
import scipy.stats
scipy.stats.mannwhitneyu(sampleA, sampleB)
[...some actions...]
scipy.stats.mstats.ks_twosamp(sampleA, sampleB, alternative='greater')
[...same actions as above...]
scipy.stats.mstats.mannwhitneyu(sampleA, sampleB, use_continuity=True)
[...same actions as above...]
Therefore I would like to pass the names of such functions as input argument
of a more generic function `computeStats`, as well as `sampleA` and `sampleB`,
but I don't know how to handle options that I sometimes have to use.
def computeStats(functionName, sampleA, sampleB, options???):
functionName(sampleA, sampleB) #and options set when necessary
...some actions...
return testStatistic
How do I specify an option that sometimes has to be set, sometimes not?
Answer: Use [`**kwargs`](http://stackoverflow.com/questions/1415812/why-use-kwargs-in-
python-what-are-some-real-world-advantages-over-using-named):
def computeStats(func, sampleA, sampleB, **kwargs):
func(sampleA, sampleB, **kwargs)
...some actions...
return testStatistic
Then you'll be able to use `computeStats()` like so:
computeStats(scipy.stats.mstats.ks_twosamp, sampleA, sampleB, alternative='greater')
That said, I am not entirely convinced you need this at all. How about simply
def postprocessStats(testStatistic):
...some actions...
return testStatistic
postprocessStats(scipy.stats.mstats.ks_twosamp(sampleA, sampleB, alternative='greater'))
?
I think this is easier to read and at the same time is more general.
|
How to use beaker without installing it?
Question: Beaker is not part of the Python standard library, and **I want my
application to have no dependencies other than the Python standard library
itself**. To accomplish this, I downloaded beaker and extracted it as a
sub-package of my application.
Then, I use this:
import os, inspect, sys
sys.path.append(os.path.abspath('./beaker'))
import beaker.middleware
app = beaker.middleware.SessionMiddleware(bottle.app(), session_opts)
And get this error
Traceback (most recent call last):
File "start.py", line 8, in <module>
from kokoropy import kokoro_init
File "/home/gofrendi/workspace/kokoropy/kokoropy/__init__.py", line 9, in <module>
import beaker.middleware
File "/home/gofrendi/workspace/kokoropy/kokoropy/beaker/middleware.py", line 11, in <module>
from beaker.cache import CacheManager
ImportError: No module named beaker.cache
The problem lies in beaker.middleware, line 11:
from beaker.cache import CacheManager
The interpreter cannot find the beaker package, since it is not installed.
Actually, I can fix that by changing that line to this:
    from cache import CacheManager
But by doing that, I would need to modify a lot of lines. So, is there any way
to use beaker without installing it and without too many modifications?
**PS:** Below is my directory structure
kokoropy
|
|--- __init__.py <-- this is where I write my script
|
|--- beaker
|
|--- __init__.py
**EDIT:** The accepted answer is correct, but in my case, I run the script at
one-level top directory. Therefore, below solution seems to be more robust:
import os, inspect, sys
sys.path.append(os.path.dirname(__file__))
Or maybe this: [How do I get the path of the current executed file in
python?](http://stackoverflow.com/questions/2632199/how-do-i-get-the-path-of-
the-current-executed-file-in-python) :)
Answer: You have to add the directory that contains the `beaker` directory to the path
not the beaker directory itself:
<root>
|
--beaker
|
-- <...>
In this case you need to add the `<root>` directory to the path.
According to your example code this would be:
sys.path.append(os.path.abspath('.'))
Which probably means that you run your program from this folder, which would
add it to the `PYTHONPATH` automatically. (So it should run without you
modifying the `PYTHONPATH` at all).
EDIT:
For more information on the topic you can checkout the Python docs about
modules: [Modules in python](http://docs.python.org/2/tutorial/modules.html).
|
Analysis of images using PIL in Python
Question: I found this code:
import PIL
from PIL import Image
from matplotlib import pyplot as plt
im = Image.open('./color_gradient.png')
w, h = im.size
colors = im.getcolors(w*h)
def hexencode(rgb):
r=rgb[0]
g=rgb[1]
b=rgb[2]
return '#%02x%02x%02x' % (r,g,b)
for idx, c in enumerate(colors):
plt.bar(idx, c[0], color=hexencode(c[1]),edgecolor=hexencode(c[1]))
    plt.show()
For the exact link one can look here-[Plot image color histogram using
matplotlib](http://stackoverflow.com/questions/12182891/plot-image-color-
histogram-using-matplotlib)
My questions are: what is the meaning of the axes, and how can I generate a
table out of these values? I would like to run some statistics, like the
percentage of green or red in the picture...
Thanks
Answer: From the [PIL Documentation](http://effbot.org/imagingbook/image.htm):
> **getcolors**
>
> im.getcolors() => a list of (count, color) tuples or None
>
> im.getcolors(maxcolors) => a list of (count, color) tuples or None
The Y axis in the referred graph is the pixel count of that colour and the X
axis comprises the (unsorted?) list of colours in the graph.
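For statistics such as "percent of green", the `(count, color)` tuples from `getcolors` are all you need. A minimal sketch (it builds a small synthetic image instead of loading a file, so it runs on its own; with your own picture, use `Image.open(...).convert('RGB')`):

```python
from PIL import Image

# synthetic 10x10 image: left half green, right half red
im = Image.new('RGB', (10, 10), (255, 0, 0))
for x in range(5):
    for y in range(10):
        im.putpixel((x, y), (0, 255, 0))

total = im.size[0] * im.size[1]
colors = im.getcolors(total)  # [(count, (r, g, b)), ...]

# share of pixels whose green channel dominates
green = sum(count for count, (r, g, b) in colors if g > r and g > b)
print('green-dominant: %.1f%%' % (100.0 * green / total))  # green-dominant: 50.0%
```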
|
Most efficient way to "nibble" the first line of text from a text document then resave it in python
Question: I have a text document that I would like to repeatedly remove the first line
of text from every 30 seconds or so.
I have already written (or more accurately copied) the code for the python
resettable timer object that allows a function to be called every 30 seconds
in a non blocking way if not asked to reset or cancel.
[Resettable timer in python repeats until
cancelled](http://stackoverflow.com/questions/15651931/resettable-timer-in-
python-repeats-until-cancelled/15652019?noredirect=1#15652019)
(If someone could check the way I implemented the repeat in that is ok,
because my python sometimes crashes while running that, would be appreciated
:))
I now want to write my function to load a text file and perhaps copy all but
the first line and then rewrite it to the same text file. I can do this, this
way I think... but is it the most efficient ?
def removeLine():
with open(path, 'rU') as file:
lines = deque(file)
try:
print lines.popleft()
except IndexError:
print "Nothing to pop?"
with open(path, 'w') as file:
file.writelines(lines)
This works, but is it the best way to do it ?
Answer: I'd use the [`fileinput`
module](http://docs.python.org/2/library/fileinput.html) with `inplace=True`:
import fileinput
def removeLine():
inputfile = fileinput.input(path, inplace=True, mode='rU')
next(inputfile, None) # skip a line *if present*
for line in inputfile:
print line, # write out again, but without an extra newline
inputfile.close()
`inplace=True` causes `sys.stdout` to be redirected to the open file, so we
can simply 'print' the lines.
The `next()` call is used to skip the first line; giving it a default `None`
suppresses the `StopIteration` exception for an empty file.
This makes rewriting a _large_ file more efficient as you only need to keep
the `fileinput` readlines buffer in memory.
I don't think a `deque` is needed at all, even for your solution; just use
`next()` there too, then use `list()` to catch the remaining lines:
def removeLine():
with open(path, 'rU') as file:
next(file, None) # skip a line *if present*
lines = list(file)
with open(path, 'w') as file:
file.writelines(lines)
but this requires you to read all of the file in memory; don't do that with
large files.
|
handling telnetlib timeout error in python
Question: I am trying to write a python script to establish a telnet connection (using
telnetlib) to a range of hosts:
for i in range(len(HOST)):
print "scanning " + HOST[i] + " ...\n"
tn = telnetlib.Telnet(HOST[i],23,3)
The problem is when one of the connections times out, the script executions
interrupts and returns the following error:
Traceback (most recent call last):
File "C:\Python27\telnet.py", line 24, in <module>
tn = telnetlib.Telnet(HOST[i],23,3)
File "C:\Python27\lib\telnetlib.py", line 209, in __init__
self.open(host, port, timeout)
File "C:\Python27\lib\telnetlib.py", line 225, in open
self.sock = socket.create_connection((host, port), timeout)
File "C:\Python27\lib\socket.py", line 571, in create_connection
raise err
socket.timeout: timed out
Does anyone know how to skip this error and continue the script?
Answer: You need to use a `try...except` block to catch the exception and tell the
interpreter to ignore it. For example:
import socket
for i in range(len(HOST)):
print "scanning " + HOST[i] + " ...\n"
try:
tn = telnetlib.Telnet(HOST[i],23,3)
except socket.timeout:
pass
In this case it's a good idea to explicitly state which exception you want to
catch (`socket.timeout`). Sockets can throw many different types of exceptions
so using a generic `except:` statement might mask a problem with opening,
reading or writing to the socket.
|
How to programmatically measure the elements' sizes in HTML source code using python?
Question: I'm doing webpage layout analysis in Python. A fundamental task is to
programmatically measure the elements' sizes given HTML source code, so that
we can obtain statistics on the content/ad ratio, ad block position, and ad
block size for the webpage corpus.
An obvious approach is to use the width/height attributes, but they're not
always available. Besides, things like `width: 50%` needs to be calculated
after loading into DOM. So I guess loading the HTML source code into a window-
size-predefined-browser (like
[mechanize](http://wwwsearch.sourceforge.net/mechanize/) although I'm not sure
if window's size could be set) is a good way to try, but mechanize doesn't
support the return of an element size anyway.
Is there any universal way (without width/height attributes) to do it in
python, preferably with some library?
Thanks!
Answer: I suggest you take a look at [Ghost](http://jeanphix.me/Ghost.py/), a
webkit web client written in Python. It has JavaScript support, so you can
easily call JavaScript functions and get their return values. This example
shows how to find out the width of Google's text box:
>>> from ghost import Ghost
>>> ghost = Ghost()
>>> ghost.open('https://google.lt')
>>> width, resources = ghost.evaluate("document.getElementById('gbqfq').offsetWidth;")
>>> width
541.0 # google text box width 541px
|
python string match time complexity
Question: I need help for this question. I think the time complexity is O(n), but my
friend insists that it is O(n^2). One reason is this line: `fn = fn[index+1 ::]`
#filename: string_expression_matcher.cc
#string: stxpm.c
#should return
import sys
string = "string_expression_matcher.cc"
subStr = "stxpm.c"
fn = list(string)
for i in subStr:
try:
index = fn.index(i)
fn = fn[index+1 ::]
except:
print ("can't dup")
sys.exit()
print ("found")
Here is my algorithm:
1. s in subStr:
* The loop starts at: "string_expression_matcher.cc"
* The remaining string output of this step is: "tring_expression_matcher.cc"
2. t in subStr
* The loop starts at: "tring_expression_matcher.cc"
* The remaining is: "ring_expression_matcher.cc"
3. x in subStr
* The loop starts at: "ring_expression_matcher.cc"
* The remaining is: "pression_matcher.cc"
4. p in subStr
* The loop starts at: "pression_matcher.cc"
* The remaining is: "ression_matcher.cc"
and so on to last step.
Given:
n = len(subStr)
    m = len(string)
what is the time complexity of this program? Thanks everyone, but I really
want to know whether it is O(n) or O(n^2). I know the code is not perfect, but
please focus on the time complexity. Thanks so much.
Does anyone know how Python string copying works? What happens when we do
`fn = fn[index+1 ::]`?
I asked a distinguished engineer; he said the result is O(m*n). What do you think?
Answer: Your algorithm (in terms of number of comparisons) is `O(n)`, where `n` is
length of the string. In the worst case both string and pattern will be the
same and then for every character in `subStr` you will move to next character
of `string`. It'll be equivalent to simple comparison of strings.
However your implementation may be `O(n^2)` in terms of other operations and
the reason for this, as you mentioned in you question, is the following line:
fn = fn[index+1 ::]
This is effectively copying the string (assuming the slice above is
implemented as a copy). If you consider previous example again, for every
character in a string you'd have to copy all remaining characters, which is
`O(n^2)`. This is because you'll be copying `n-1` characters first, then
`n-2`, `n-3` and so on, and at the last iteration you will copy just one
character. Total amount of items to be copied will be then
`n-1`+`n-2`+`...`+`1`, which, as the [arithmetic
progression](http://en.wikipedia.org/wiki/Arithmetic_progression), is equal to
`(n-1)*((n-1)+1)/2 = (n-1)*n/2 = O(n^2)`. For other situations this could be
generalised to `O(m*n)`, where `m` is length of the pattern.
What your friend might like to tell you was: your algorithm is linear, but
your implementation is not. It can be easily solved though. Use solution
presented by @thkang or something more transparent to get rid of hidden
complexity, for example:
try:
si = iter(string)
for c in subStr:
while c != si.next():
pass
except StopIteration:
print "no match"
else:
print "match"
|
How to change value of the variable in one python script from another python script
Question: I have two files, python1.py and python2.py.
Python2.py has code something like this:
import sys
variable1=value1
variable2=value2
#and some python code from here on
The python1 script should take input values for variable1 and variable2 and
change the corresponding values in python2.py without tampering with the other
code. For example: if a user gives variable1 the value android, the value of
variable1 in python2 should be changed to android.
Thanks
Note: python2 is not a configuration file, has many other modules
Answer: You can just `import python2` in `python1`, then use `python2.variable1=...`
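A minimal sketch of what that looks like (it writes a stand-in `python2.py` to a temporary directory just so the example is self-contained). Note that this rebinds the value for the running process only; it does not rewrite `python2.py` on disk:

```python
import os
import sys
import tempfile

# stand-in for the real python2.py, created only so this sketch runs
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'python2.py'), 'w') as f:
    f.write("variable1 = 'value1'\nvariable2 = 'value2'\n")
sys.path.insert(0, tmpdir)

import python2

print(python2.variable1)       # value1
python2.variable1 = 'android'  # rebind the module-level attribute
print(python2.variable1)       # android
```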
|
List: Counting and Deleting
Question: Given a list, for example List=["X","X","O","O",'O','O'], how would I count
how many "X"'s there are in a list, and then subtract that many "O"s from the
list. The List will always put all X's first and all O's last. I thought I
could do something like...
List=["X","X","O","O",'O','O']
ListCount=List.count("X")
    del List[-(ListCount):0]

I thought this would yield ["X","X","O","O"] by deleting the O's from -2:0,
but absolutely nothing happened. This is in Python.
Answer: As noted in my comment, another data structure could be more useful here, as
you don't care about order:
data = {"X": 2, "O": 4}
data["O"] -= data["X"]
If you ever need the actual list, it's easy to create with a quick [generator
expression](http://www.youtube.com/watch?v=pShL9DCSIUw):
from itertools import chain, repeat
data_list = list(chain.from_iterable(repeat(key, count) for key, count in data.items()))
Or, to go the other way:
from collections import Counter
data = Counter(data_list)
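For completeness, the list version the question was after also works. The original slice failed because a slice that stops at 0 is empty; to drop the last `count` items, leave the stop index off (with a guard, since `lst[-0:]` is the whole list):

```python
lst = ["X", "X", "O", "O", "O", "O"]
count = lst.count("X")
if count:             # guard: del lst[-0:] would delete everything
    del lst[-count:]  # drop the last `count` O's
print(lst)  # ['X', 'X', 'O', 'O']
```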
|
Inserting data into MySQL database from Python
Question: I have 2 tables TBL1 and TBL2. TBL1 has 3 columns date, id, nSql. TBL2 has 3
columns date, custId, userId. I have 17 rows in TBL1 with id 1 to 17 (which
will grow later). Each nSql has a SQL query in it. For example nSql for id=1
is: "select date, pId as custId, tId as userId from TBL3" For example nSql for
id=2 is: "select date, qId as custId, rId as userId from TBL4" ... nSql result
is always same 3 columns.
The query below only runs the nSql for id=1, so in TBL2 I only have the output
for that one query. I want my code to run all the nSql queries, not just the
one for id=1.
import MySQLdb
# Open database connection
con=MySQLdb.Connection(host="localhost", user="root", passwd="root", db="test")
# create a cursor object using cursor() method
cur=con.cursor()
selectStatement=("select nSql from TBL1") # I do not want to limit the number of id to select. For example, I do not want: select nSql from TBL1 where id in (1, 2, ..17)
cur.execute(selectStatement)
res=cur.fetchone()
nSql=res[0]
cur.execute(nSql)
reslt=cur.fetchall()
for row in reslt:
date= row[0]
custId= row[1]
userId=row[2]
insertStatement=("insert into TBL2( date, custId, userId) values ('%s', %d, %d)" % (date, custId, userId))
cur.execute(insertStatement)
con.commit()
Answer: You already fetch the `nSql` result and loop over it. You need to loop over
both:
cur.execute(selectStatement)
res = cur.fetchall()
for outerrow in res:
nSql = outerrow[0]
cur.execute(nSql)
# rest of your code
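It is also worth switching the `%`-formatted INSERT to a parameterized one so the driver handles the quoting. A self-contained sketch of the whole pattern using sqlite3 (placeholders are `?` there; with MySQLdb they are `%s`, but the idea is identical):

```python
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute("create table TBL3 (date text, pId int, tId int)")
cur.execute("create table TBL2 (date text, custId int, userId int)")
cur.execute("insert into TBL3 values ('2013-04-01', 1, 2)")

# stand-ins for the nSql strings stored in TBL1
nSqls = ["select date, pId as custId, tId as userId from TBL3"]

for nSql in nSqls:
    cur.execute(nSql)
    rows = cur.fetchall()  # each row is (date, custId, userId)
    cur.executemany("insert into TBL2 values (?, ?, ?)", rows)
con.commit()

print(cur.execute("select * from TBL2").fetchall())  # [('2013-04-01', 1, 2)]
```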
|
How to extract data from the file by using regular expression in python?
Question: I have a file that contains an email id, a phone number, and a date
of birth. Using regular expressions in Python, how do I find the three fields
one by one? The desired output looks like
Emailid: [email protected]
Phoneno: 1234567890
dateofbirth: xx-xx-xx
I know how to find the fields individually, but I am not sure how to find all
three at a time. The snippet below shows how to find the emailid from the
file. Its output looks like
    Emailid: [email protected]
# ...........................
import sys,re
pattern=r'''(?P<emailid>[a-zA-Z\.]*\@[a-zA-Z]*\.c[a-zA-Z]*)'''
regobj = re.compile(pattern, re.VERBOSE)
for line in sys.stdin:
results= regobj.finditer(line)
for result in results:
sys.stdout.write("%s\n"%result.group('emailid'))
Answer: You can iterate over over all non-overlapping matches for the RE pattern in
string using the `finditer` method as follows:
import sys,re
pattern = re.compile(r'''(?P<emailid>[a-zA-Z.]*@[a-zA-Z]*\.c[a-zA-Z]*).*?(?P<phone>\(?[0-9]{3}\)?[-. ]?[0-9]{3}[-. ]?[0-9]{4}).*?(?P<dob>[0-9]{2}-[0-9]{2}-[0-9]{2})''', re.DOTALL)
for result in pattern.finditer(sys.stdin.read()):
sys.stdout.write("Emailid: %s\n"%result.group('emailid'))
sys.stdout.write("Phoneno: %s\n"%result.group('phone'))
sys.stdout.write("dateofbirth: %s\n"%result.group('dob'))
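Exercised against a made-up sample (the file contents below are an assumption, only there to show the three named groups):

```python
import re

pattern = re.compile(
    r'(?P<emailid>[a-zA-Z.]*@[a-zA-Z]*\.c[a-zA-Z]*)'
    r'.*?(?P<phone>\(?[0-9]{3}\)?[-. ]?[0-9]{3}[-. ]?[0-9]{4})'
    r'.*?(?P<dob>[0-9]{2}-[0-9]{2}-[0-9]{2})', re.DOTALL)

sample = "email: abc@gmail.com\nphone: 1234567890\ndob: 01-02-93\n"
m = pattern.search(sample)
print("Emailid: %s" % m.group('emailid'))  # Emailid: abc@gmail.com
print("Phoneno: %s" % m.group('phone'))    # Phoneno: 1234567890
print("dateofbirth: %s" % m.group('dob'))  # dateofbirth: 01-02-93
```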
|
start python pdb with multiple arguments?
Question: I was wondering if there is a way to start pdb with multiple arguments.
Currently I know I can do this:
python -m pdb script.py
and then manually setup break points, with:
(Pdb) break
(Pdb) break 2
Breakpoint 1 at /home/ozn/test2.py:2
(Pdb) break 3
Breakpoint 2 at /home/ozn/test2.py:3
(Pdb) break
I could also insert:
pdb.set_trace() (or with ipdb.set_trace()
in different lines (which is eased by stuff like python-mode in vim). However,
if I take that approach, e.g.
# note: break points from python-mode in vim
print "hello "
a = 1
import ipdb; ipdb.set_trace() # XXX BREAKPOINT
a =+1
import ipdb; ipdb.set_trace() # XXX BREAKPOINT
print a
i = 9
I can't list all the breakpoints I have with the command `break` when inside
`pdb`. Here is an example: I run the file, it produces output and switches to
a `pdb` session, but the `break` command shows nothing:
[2] ozn@deboz:~ $ python 1.py
hello
> /home/ozn/1.py(4)<module>()
3 import ipdb; ipdb.set_trace() # XXX BREAKPOINT
----> 4 a =+1
5 import ipdb; ipdb.set_trace() # XXX BREAKPOINT
ipdb> list
1 print "hello "
2 a = 1
3 import ipdb; ipdb.set_trace() # XXX BREAKPOINT
----> 4 a =+1
5 import ipdb; ipdb.set_trace() # XXX BREAKPOINT
6 print a
7
8 i = 9
ipdb> break
ipdb>
_Ideally_ I would like to start pdb like this:
python -m pdb script.py b 2 b 3
and when inside , the prompt should do this:
(Pdb) break
(Pdb) break 2
Breakpoint 1 at /home/ozn/test2.py:2
(Pdb) break 3
Breakpoint 2 at /home/ozn/test2.py:3
(Pdb) break
Alternatively, I would be happy to run my script from within `vim` when
running python mode with some break points. Right now, it just hangs. Meaning,
if I press `<lead>r` when the code has break points in it, it will hang, or at
the best case will produce some garbage like this:
~
~
~
~
~
~
~
Code running.> /home/ozn/1.py(4)<module>()
3 import ipdb; ipdb.set_trace() # XXX BREAKPOINT
----> 4 a =+1
5 import ipdb; ipdb.set_trace() # XXX BREAKPOINT
ipdb>
When setting the breakpoints to be `import pdb; pdb.set_trace()`, vim
completely hangs and produces the following message :
Code running.
# questions:
1. Can my vim python-mode be better configured so it behaves with breakpoints?
2. Do you know of a way to run "debugging scripts" or start pdb with multiple arguments? Bonus questions:
3. Any alternatives to the plugin `vdebug` ?
Answer: **Question1:**
Yes. I have the same problem as you when using python-mode. Vim completely
hangs and just shows '`Code running.`' I figured out that the problem occurs
in '`~/.vim/bundle/python-mode/autoload/pymode/run.vim`'.
`<leader>r` makes this script run, and the script is stuck at the line '`py
execfile(vim.eval('expand("%:p")'), context)`'. I didn't make effort to solve
this bug in the script. Instead, I use a simple script to make everything run.
I make a 'python.vim' file, paste in the following code and put the file at
'`~/.vim/plugin/after/ftdetect/python.vim`'(if you don't have this folder,
create one).
" Python
if executable("python")
autocmd BufRead,BufNewFile *.py map <F5> :w<cr>:!python %<CR>
else
autocmd BufRead,BufNewFile *.py map <F5> :echo "you need to install Python first!"<CR>
endif
What we need is to run Python code and pdb in vim, right? It works now!
However, when you press `<F5>` in a Python file in vim, it will jump out of
vim to run Python in a terminal, and when the Python program finishes, it will
automatically jump back to vim. It's OK if you like it.
However, I have found **a better way.** Just install a vim plugin called
'**conque** ', <https://code.google.com/p/conque/> , and install '**iPython**
'. Then, you should change the 'python.vim' code as following.
" Python
if executable("python")
autocmd BufRead,BufNewFile *.py map <F5> :execute 'ConqueTermSplit ipython '.expand('%:p')<CR>
else
autocmd BufRead,BufNewFile *.py map <F5> :echo "you need to install Python first!"<CR>
endif
Now, it will split a window to run python code for you just inside the vim
when you press `<F5>`.

**Question 2 & 3:**
I don't know of a multiple-arguments way to run pdb. Maybe you can customize
the code above to achieve this. But I can recommend a cool graphical Python
debugging tool for vim, called 'vim-debug'.
You can get 'vim-debug' from <http://jaredforsyth.com/projects/vim-debug>
Hope these help! :)
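One more note on question 2: stock pdb can pre-load breakpoints from a `.pdbrc` file, which it reads first from your home directory and then from the current directory; each line is an ordinary pdb command:

```
# ./.pdbrc: pdb executes these commands at startup
break 2
break 3
```

(Since Python 3.2, `python -m pdb` also accepts `-c`, e.g. `python -m pdb -c 'break 2' -c 'break 3' script.py`, which behaves as if those lines were in `.pdbrc`.)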
|
Accessing YAML data in Python
Question: I have a YAML file that parses into an object, e.g.:
{'name': [{'proj_directory': '/directory/'},
{'categories': [{'quick': [{'directory': 'quick'},
{'description': None},
{'table_name': 'quick'}]},
{'intermediate': [{'directory': 'intermediate'},
{'description': None},
{'table_name': 'intermediate'}]},
{'research': [{'directory': 'research'},
{'description': None},
{'table_name': 'research'}]}]},
{'nomenclature': [{'extension': 'nc'}
{'handler': 'script'},
{'filename': [{'id': [{'type': 'VARCHAR'}]},
{'date': [{'type': 'DATE'}]},
{'v': [{'type': 'INT'}]}]},
{'data': [{'time': [{'variable_name': 'time'},
{'units': 'minutes since 1-1-1980 00:00 UTC'},
{'latitude': [{'variable_n...
I'm having trouble accessing the data in python and regularly see the error
`TypeError: list indices must be integers, not str`
I want to be able to access all elements corresponding to `'name'` so to
retrieve each data field I imagine it would look something like:
import yaml
settings_stream = open('file.yaml', 'r')
settingsMap = yaml.safe_load(settings_stream)
yaml_stream = True
print 'loaded settings for: ',
for project in settingsMap:
print project + ', ' + settingsMap[project]['project_directory']
and I would expect each element would be accessible via something like
`['name']['categories']['quick']['directory']`
and something a little deeper would just be:
`['name']['nomenclature']['data']['latitude']['variable_name']`
or am I completely wrong here?
Answer: The brackets, `[]`, indicate that you have lists of dicts, not just a dict.
For example, `settingsMap['name']` is a **list** of dicts.
Therefore, you need to select the correct dict in the list using an integer
index, before you can select the key in the dict.
So, giving your current data structure, you'd need to use:
settingsMap['name'][1]['categories'][0]['quick'][0]['directory']
Or, revise the underlying YAML data structure.
* * *
For example, if the data structure looked like this:
settingsMap = {
'name':
{'proj_directory': '/directory/',
'categories': {'quick': {'directory': 'quick',
'description': None,
'table_name': 'quick'}},
'intermediate': {'directory': 'intermediate',
'description': None,
'table_name': 'intermediate'},
'research': {'directory': 'research',
'description': None,
'table_name': 'research'},
'nomenclature': {'extension': 'nc',
'handler': 'script',
'filename': {'id': {'type': 'VARCHAR'},
'date': {'type': 'DATE'},
'v': {'type': 'INT'}},
'data': {'time': {'variable_name': 'time',
'units': 'minutes since 1-1-1980 00:00 UTC'}}}}}
then you could access the same value as above with
settingsMap['name']['categories']['quick']['directory']
# quick
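The list-of-dicts shape usually comes from YAML in which every key was written as its own `- ` list item; dropping the dashes yields plain nested mappings. A small sketch contrasting the two on trimmed-down data:

```python
import yaml  # PyYAML

as_lists = yaml.safe_load("""
name:
- proj_directory: /directory/
- categories:
  - quick:
    - directory: quick
""")

as_dicts = yaml.safe_load("""
name:
  proj_directory: /directory/
  categories:
    quick:
      directory: quick
""")

# list-of-dicts: integer indices needed at every level
print(as_lists['name'][1]['categories'][0]['quick'][0]['directory'])  # quick
# nested mappings: plain key access
print(as_dicts['name']['categories']['quick']['directory'])           # quick
```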
|
python can't print unpacked floats from network
Question: I am trying to get floats from a UDP datagram and print them to verify:
import socket
from struct import *
socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
socket.bind( ('127.0.0.1', 2416) )
msg = bytearray( 4*1024 )
f1 = 0.0
f2 = 0.0
f3 = 0.0
while True:
nBytes = socket.recv_into( msg )
print( '%d bytes received' % nBytes )
(f1) = unpack_from( '!f', msg, 0 )
(f2) = unpack_from( '!f', msg, 4 )
(f3) = unpack_from( '!f', msg, 8 )
print( '%f, %f, %f received' % ( f1, f2, f3 ))
The following error is raised:
$ python Server.py
12 bytes received
Traceback (most recent call last):
File "Server.py", line 13, in <module>
print( '%f, %f, %f received' % ( f1, f2, f3 ))
TypeError: a float is required
The expected output is `1.2, 3.4, 5.6 received`.
Context:
Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:57:17) [MSC v.1600 64 bit (AMD64)] on win32
For information the Java UDP sender (client):
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;
public class Client {
public static void main( String[] args ) throws Exception {
try( DatagramSocket clientSocket = new DatagramSocket()) {
InetAddress target = InetAddress.getByName( "localhost" );
ByteBuffer msg = ByteBuffer.allocate( 4*1024 );
for(;;) {
msg.clear();
msg.putFloat( 1.20f );
msg.putFloat( 3.40f );
msg.putFloat( 5.60f );
msg.putDouble( 7.80 );
msg.putDouble( 9.10 );
msg.putDouble( 11.120 );
msg.flip();
clientSocket.send(
new DatagramPacket( msg.array(), msg.limit(), target, 2416 ));
Thread.sleep( 2000 );
}
}
}
}
Answer: This line is the problem:
(f1) = unpack_from( '!f', msg, 0 )
Try
(f1,) = unpack_from( '!f', msg, 0 )
Note the additional comma. Similarly for the other two lines.
As you mentioned, `unpack_from` returns a tuple. `(f1)` is not a tuple, it is
a single value. `(f1,)` is a tuple containing one element.
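Alternatively, unpack all three floats in one call: `unpack_from` then returns a 3-tuple that can be unpacked directly into names (the payload is packed locally here to stand in for the received datagram):

```python
from struct import pack, unpack_from

msg = pack('!fff', 1.2, 3.4, 5.6)         # stand-in for the received datagram
f1, f2, f3 = unpack_from('!fff', msg, 0)  # one call, three floats
print('%f, %f, %f received' % (f1, f2, f3))
```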
|
Python Namedtuple Indexing Instance
Question: So Here is my code for my lab coding project that I am currently working on:
from collections import namedtuple
Restaurant = namedtuple('Restaurant', 'name cuisine phone dish price')
# Restaurant attributes: name, kind of food served, phone number, best dish, price of that dish
RC = [Restaurant("Thai Dishes", "Thai", "334-4433", "Mee Krob", 12.50),
Restaurant("Nobu", "Japanese", "335-4433", "Natto Temaki", 5.50),
Restaurant("Nonna", "Italian", "355-4433", "Stracotto", 25.50),
Restaurant("Jitlada", "Thai", "324-4433", "Paht Woon Sen", 15.50),
Restaurant("Nola", "New Orleans", "336-4433", "Jambalaya", 5.50),
Restaurant("Noma", "Modern Danish", "337-4433", "Birch Sap", 35.50),
Restaurant("Addis Ababa", "Ethiopian", "337-4453", "Yesiga Tibs", 10.50)]
My question to you as a beginner is: what method(s) should I use to allow my
program to index specific parts of the list?
For example, how do I go about indexing a list of all of the restaurants from
the greater list? This list includes just the restaurants from the list not
all the other information like the phone numbers, etc...
I have used both slice methods and list functions in an attempt to figure this
out myself, but it did not work. >:(
Answer: I am not sure when you say index if you just want the values or for
performance. But for basic retrieval you could just do something like this
[r.name for r in RC]
Which would give you all the names of restaurants you have in RC
You could get fancier,
RC = [...]
def getValues(name):
return [getattr(r, name) for r in RC]
Then you can just do,
getValues('name')
If you need it to cache you can look into using memoize
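Putting both together with a couple of rows from the question:

```python
from collections import namedtuple

Restaurant = namedtuple('Restaurant', 'name cuisine phone dish price')
RC = [Restaurant("Thai Dishes", "Thai", "334-4433", "Mee Krob", 12.50),
      Restaurant("Nobu", "Japanese", "335-4433", "Natto Temaki", 5.50)]

print([r.name for r in RC])  # ['Thai Dishes', 'Nobu']

# look attributes up by string name, as in getValues above
def get_values(attr):
    return [getattr(r, attr) for r in RC]

print(get_values('phone'))  # ['334-4433', '335-4433']
```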
|
What is a way to convert text to a string in Python 3.3.0
Question: Hi, I'm having a hard time getting the Text widget contents from my
Python program converted to a string so I can write it to a file without it
just writing numbers in the file. I put in this code:
from tkinter import *
a = Tk()
a.title("i need help")
a.geometry("600x600")
entryText = StringVar(None)
codeEdit = Text(a)
codeEdit.insert(END, "")
codeEdit.pack(side='top')
text = str(codeEdit)
def setLoc():
saveFile = open("DATA\Test.txt", "w")
saveFile.write(text)
saveFile.close()
return
writeButton = Button(text="Write",command=setLoc)
writeButton.pack(side='bottom')
So that's the code to write to the file Test.txt, but when I type something in
the Text box in the program and hit the writeButton, it writes to the file,
just not what I typed; it puts .50132192 instead. So I wanted to know how I
could convert it to a string?
Answer: You need to use the `Text` widget's `get` method to get all of the text from
`'1.0'` (line 1, character 0) to `END`.
Here's a modified version of your code that does this in a `write_text`
function. I also added scroll bars and switched to using `grid` instead of
pack.
from tkinter import *
from tkinter import ttk
def write_text():
text = edit.get('1.0', END)
with open("DATA/Test.txt", "w") as f:
f.write(text)
root = Tk()
root.title("This May Help")
root.geometry("600x600")
edit = Text(root, width=80, height=25, wrap=NONE)
edit.insert('1.0', '[enter text]')
edit.grid(column=0, row=0, sticky=(N,W,E,S))
yscroll = ttk.Scrollbar(root, orient=VERTICAL, command=edit.yview)
yscroll.grid(column=1, row=0, sticky=(N,S))
edit['yscrollcommand'] = yscroll.set
xscroll = ttk.Scrollbar(root, orient=HORIZONTAL, command=edit.xview)
xscroll.grid(column=0, row=1, sticky=(W,E))
edit['xscrollcommand'] = xscroll.set
write_button = Button(text="Write", command=write_text)
write_button.grid(column=0, row=2)
|
Why does HTTP POST request body need to be JSON enconded in Python?
Question: I ran into this issue when playing around with an external API. I was sending
my body data as a dictionary straight into the request and was getting 400
errors:
data = {
"someParamRange": {
"to": 1000,
"from": 100
},
"anotherParamRange": {
"to": True,
"from": False
}
}
When I added a json.dumps wrap, it works:
data = json.dumps({
"someParamRange": {
"to": 1000,
"from": 100
},
"anotherParamRange": {
"to": True,
"from": False
}
})
I don't entirely understand why this is necessary, as dictionaries and JSON
objects are syntactically identical. Can someone help me understand what is
going on behind the scenes here?
For completeness, here are my headers:
headers = {'API-KEY': 'blerg', 'Accept-Encoding': 'UTF-8', 'Content-Type': 'application/json', 'Accept': '*/*', 'username': 'user', 'password': 'pwd'}
EDIT:
I didn't mention this earlier but now I feel that it may be relevant. I am
using the Python Requests library, and another post seems to suggest that you
should never have to encode parameters to a request object:
<http://stackoverflow.com/a/14804320/1012040>
"Regardless of whether GET/POST you never have to encode parameters again, it
simply takes a dictionary as an argument and is good to go."
Seems like serialization shouldn't be necessary?
My request object:
response = requests.post(url, data=data, headers=headers)
Answer: Apparently your API requires JSON-encoded and not form-encoded data. When you
pass a `dict` in as the `data` parameter, the data is form-encoded. When you
pass a string (like the result of `json.dumps`), the data is not form-encoded.
Consider this quote from the requests documentation:
> Typically, you want to send some form-encoded data — much like an HTML
> form. To do this, simply pass a dictionary to the data argument. Your
> dictionary of data will automatically be form-encoded when the request is
> made.
>
> There are many times that you want to send data that is not form-encoded.
> If you pass in a string instead of a dict, that data will be posted directly.
>
> For example, the GitHub API v3 accepts JSON-Encoded POST/PATCH data:
>>> import json
>>> url = 'https://api.github.com/some/endpoint'
>>> payload = {'some': 'data'}
>>> r = requests.post(url, data=json.dumps(payload))
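To see the difference concretely, here is a small standard-library sketch (Python 3 shown; no external dependencies) contrasting the two encodings of the same payload:

```python
import json
from urllib.parse import urlencode

data = {"someParamRange": {"to": 1000, "from": 100}}

# Form encoding flattens every value to a string, so the nested dict
# becomes the quoted repr of a Python dict -- not something a JSON API
# can parse back into structure.
form_body = urlencode(data)

# JSON encoding preserves the nested structure exactly.
json_body = json.dumps(data)

print(form_body)  # someParamRange=%7B%27to%27%3A+1000%2C+%27from%27%3A+100%7D
print(json_body)  # {"someParamRange": {"to": 1000, "from": 100}}
```

This is why the API returned 400 for the form-encoded body: it never received a JSON object it could decode.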
Refs:
* <http://www.w3.org/TR/html401/interact/forms.html#h-17.13.3.4>
* <http://docs.python-requests.org/en/latest/user/quickstart/#more-complicated-post-requests>
|
Is pickle not compatible with twisted?
Question: I have made 2 applications: the client extracts data from a SQL server (10k
lines) and sends every line, pickled, to a "collector" server via a socket. The
server uses Twisted (this is mandatory), receives every line, unpickles it and
stores the data in another SQL server.
Every time I start sending data from the client to the server, within the first
200 lines (every time a different line) **the server** throws an exception.
SOMETIMES it is something like:
Traceback (most recent call last):
File "collector2.py", line 81, in dataReceived
self.count,account = pickle.loads(data)
File "/usr/lib/python2.6/pickle.py", line 1374, in loads
return Unpickler(file).load()
File "/usr/lib/python2.6/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.6/pickle.py", line 1138, in load_pop
del self.stack[-1]
IndexError: list assignment index out of range
But it's NOT the same every time. Printing my exceptions I read:

    Exception: pop from empty list
    Exception: list index out of range
    Exception: "'"
    Exception: list assignment index out of range

Another strange error is:

    File "/usr/lib/python2.6/pickle.py", line 1124, in find_class
        __import__(module)
    exceptions.ImportError: No module named ond'

My client:
for i in listaSAI:
crm={}
try:
crm['uid']=i[0]
except:
crm['uid']=None
try:
crm['type_cond']=i[01]
except:
crm['type_cond']=None
try:
crm['population_id']=i[2]
except:
crm['population_id']=None
try:
crm['product_id']=i[3]
except:
crm['product_id']=None
try:
crm['a_id']=i[4]
except:
crm['a_id']=None
try:
crm['status']=i[5]
except:
crm['status']=None
#time.sleep(0.001)
serialized = pickle.dumps((count,crm))
#print "sent num", count, crm
s.sendall(serialized)
count += 1
And my server:
def dataReceived(self, data):
try:
self.count,account = pickle.loads(data)
except Exception as e:
print "Eccezione:", e
print self.count+1
print "DISAGIO", data
print traceback.print_exc()
Printing the data in my client tells me that everything is OK. _If I try to
slow down the sending process using time.sleep(0.01) in my client, EVERYTHING
IS FINE, and no exceptions are raised._

What can I do to debug my code?

P.S. I suspect that `exceptions.ImportError: No module named ond'` refers to
the "type_cond" key in crm.
Answer: Since you have no problem when adding a delay between writes, the problem is
clearly not in pickle itself, but in how the data is transported.

The most likely cause is that TCP is a stream protocol: the client's `sendall`
calls do not map one-to-one onto the server's `dataReceived` calls. When the
client writes quickly, a single `dataReceived` can contain several pickled
messages concatenated together, or just a fragment of one, and unpickling such
a buffer fails in unpredictable ways.

Also check whether you have more than one thread writing to the socket at the
same time.
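Since TCP delivers a byte stream rather than discrete messages, a common fix is to frame each pickle with a length prefix so the receiver can split the stream correctly (Twisted ships this as `twisted.protocols.basic.Int32StringReceiver`). Here is a minimal plain-Python sketch of the idea; the names are illustrative:

```python
import pickle
import struct

def frame(obj):
    """Pickle obj and prefix it with a 4-byte big-endian length."""
    payload = pickle.dumps(obj)
    return struct.pack(">I", len(payload)) + payload

def unframe(buf):
    """Split a byte buffer into complete messages plus leftover bytes."""
    messages = []
    while len(buf) >= 4:
        (length,) = struct.unpack(">I", buf[:4])
        if len(buf) < 4 + length:
            break  # incomplete message: keep the bytes for the next call
        messages.append(pickle.loads(buf[4:4 + length]))
        buf = buf[4 + length:]
    return messages, buf

# Two pickles arriving coalesced in one TCP chunk still come apart cleanly:
stream = frame((1, {"uid": "a"})) + frame((2, {"uid": "b"}))
msgs, rest = unframe(stream)

# A partial delivery is simply buffered until the rest arrives:
msgs_partial, rest_partial = unframe(stream[:7])
```

In `dataReceived` you would append the incoming chunk to an instance buffer, call `unframe`, process the complete messages, and keep the remainder.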
|
Binning frequency distribution in Python
Question: I have data in the two lists _value_ and _freq_ like this:
value freq
1 2
2 1
3 3
6 2
7 3
8 3
....
and I want the output to be
bin freq
1-3 6
4-6 2
7-9 6
...
I can write a few lines of code to do this. However, I am wondering whether
there are built-in functions in standard Python or NumPy. I found a solution
for the case where the data is given as an array/list with repetition, i.e.
not already grouped into a frequency table (e.g.
`d = [1,1,2,3,3,3,6,6,7,7,7,8,8,8,...]`). However, for this case I could not
find an answer. I do not want to convert my data into a single expanded list
like `d` first and then use the histogram function.
Answer:
import numpy as np
values = [1,2,3,6,7,8]
freqs = [2,1,3,2,3,3]
hist, _ = np.histogram(values, bins=[1, 4, 7, 10], weights=freqs)
print hist
output:
[6 2 6]
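If you'd rather avoid NumPy entirely, the same grouping is a short pure-Python sketch (the bin edges below match the ones passed to `np.histogram` above):

```python
values = [1, 2, 3, 6, 7, 8]
freqs = [2, 1, 3, 2, 3, 3]
edges = [1, 4, 7, 10]  # three bins: [1,4), [4,7), [7,10)

hist = [0] * (len(edges) - 1)
for v, f in zip(values, freqs):
    for i in range(len(edges) - 1):
        if edges[i] <= v < edges[i + 1]:
            hist[i] += f
            break

# Note: np.histogram treats the last bin as closed on the right ([7,10]),
# while these half-open bins exclude 10; for this data the results agree.
```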
|
Python matplotlib and libpng Incompatibility issue
Question: I've been struggling with this problem for a long time.

Originally, after plotting something with matplotlib, I could easily save the
image. However, after installing scipy, I couldn't save my image anymore.
(I installed matplotlib and scipy using pip.)
I tried to look up some information, but I still can't solve the problem.
My operating system is Mac OS X Lion (10.7)
I think the following links are some relevant issues
<https://github.com/ipython/ipython/issues/2710>
[Matplotlib pylab savefig runtime error in python
3.2.3](http://stackoverflow.com/questions/11408173/matplotlib-pylab-savefig-
runtime-error-in-python-3-2-3)
[matplotlib and libpng issues with ipython
notebook](http://stackoverflow.com/questions/13817940/matplotlib-and-libpng-
issues-with-ipython-notebook)
[libpng15 static link
issues](http://stackoverflow.com/questions/11685764/libpng15-static-link-
issues)
It seems that I may need to relink the libraries or set DYLD_LIBRARY_PATH
(actually I don't know what that is...), or maybe recompile something?

By the way, I'm very new to Unix-based systems, so it would be really nice if
someone could explain it in a relatively simple way. Thank you very much.
Below are some error messages:
libpng warning: Application was compiled with png.h from libpng-1.5.4
libpng warning: Application is running with png.c from libpng-1.4.10
libpng warning: Incompatible libpng version in application and library
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/Library/Python/2.7/site-packages/matplotlib/backends/backend_macosx.pyc in save_figure(self, *args)
476 if filename is None: # Cancel
477 return
--> 478 self.canvas.print_figure(filename)
479
480 def prepare_configure_subplots(self):
/Library/Python/2.7/site-packages/matplotlib/backend_bases.pyc in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, **kwargs)
2094 orientation=orientation,
2095 bbox_inches_restore=_bbox_inches_restore,
-> 2096 **kwargs)
2097 finally:
2098 if bbox_inches and restore_bbox:
/Library/Python/2.7/site-packages/matplotlib/backend_bases.pyc in print_png(self, *args, **kwargs)
1856 from backends.backend_agg import FigureCanvasAgg # lazy import
1857 agg = self.switch_backends(FigureCanvasAgg)
-> 1858 return agg.print_png(*args, **kwargs)
1859
1860 def print_ps(self, *args, **kwargs):
/Library/Python/2.7/site-packages/matplotlib/backends/backend_agg.pyc in print_png(self, filename_or_obj, *args, **kwargs)
502 _png.write_png(renderer._renderer.buffer_rgba(),
503 renderer.width, renderer.height,
--> 504 filename_or_obj, self.figure.dpi)
505 finally:
506 if close:
RuntimeError: Could not create write struct
Answer: If you save a JPG you don't need PNG support. There is no need for PIL either:
import pylab as pl
pl.plot([0.2,0.3,0.4], [0.1,0.2,0.3], label='series name')
pl.xlabel('x label')
pl.ylabel('y label')
pl.ylim([0.0, 1.0])
pl.xlim([0.0, 1.0])
pl.title('Title')
pl.legend(loc="lower left")
pl.savefig('output.jpg')
pl.show()
|
How to make a window that occupies the full screen without maximising?
Question: I'm writing in Python using Qt.

I want to create the application window (with decorations) so that it occupies
the full screen. Currently this is the code I have:
avGeom = QtGui.QDesktopWidget().availableGeometry()
self.setGeometry(avGeom)
the problem is that it ignores window decorations so the frame is larger... I
googled and what not, found this:
<http://harmattan-dev.nokia.com/docs/library/html/qt4/application-
windows.html#window-geometry>
which seems to indicate I need to set the frameGeometry to `avGeom`;
however, I haven't found a way to do that. Also, the comments in the above
link say that what I'm after may not even be possible, as the programme can't
set the frameGeometry before running... If that is the case, I just want
confirmation that my problem is not solvable.
EDIT:
So I played around with the code a bit and this gives what I want... however,
the number 24 was found basically through trial and error until the window
title was visible. I want a better way to do this, one that is window-manager
independent:
avGeom = QtGui.QDesktopWidget().availableGeometry()
avGeom.setTop(24)
self.setGeometry(avGeom)
Now I can do what I want but purely out of trial and error
Running Ubuntu, using Spyder as an IDE
thanks
Answer: Use [`QtGui.QApplication().desktop().availableGeometry()`](http://doc-
snapshot.qt-project.org/4.8/qdesktopwidget.html#availableGeometry) for the
size of the window:
#!/usr/bin/env python
#-*- coding:utf-8 -*-
from PyQt4 import QtGui, QtCore
class MyWindow(QtGui.QWidget):
def __init__(self, parent=None):
super(MyWindow, self).__init__(parent)
self.pushButtonClose = QtGui.QPushButton(self)
self.pushButtonClose.setText("Close")
self.pushButtonClose.clicked.connect(self.on_pushButtonClose_clicked)
self.layoutVertical = QtGui.QVBoxLayout(self)
self.layoutVertical.addWidget(self.pushButtonClose)
titleBarHeight = self.style().pixelMetric(
QtGui.QStyle.PM_TitleBarHeight,
QtGui.QStyleOptionTitleBar(),
self
)
geometry = app.desktop().availableGeometry()
geometry.setHeight(geometry.height() - (titleBarHeight*2))
self.setGeometry(geometry)
@QtCore.pyqtSlot()
def on_pushButtonClose_clicked(self):
QtGui.QApplication.instance().quit()
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
app.setApplicationName('MyWindow')
main = MyWindow()
main.show()
sys.exit(app.exec_())
|
python fdb, trying to connect to an external firebird 1.5 super server
Question: I'm trying to connect to a Firebird 1.5 database that is located on a server,
from my local machine, with the Python fdb library, but I'm having no luck.

The server is Windows Server 2008 R1, running Firebird 1.5.6 as a service. It
also has a System DSN called `firebird`.

How can I connect to it via Python? I'm using this code:
import fdb
db = fdb.connect(host='192.168.40.28', database="C:\databases\database12.GDB", user='admin', password='admin')
but it generates this result:
Traceback (most recent call last):
File "data.py", line 4, in <module>
db = fdb.connect(host='192.168.40.28', database="C:\databases\database12.GDB", user='admin', password='admin')
File "/usr/local/lib/python2.7/dist-packages/fdb/fbcore.py", line 666, in connect
"Error while connecting to database:")
fdb.fbcore.DatabaseError: ('Error while connecting to database:\n- SQLCODE: -902\n- Unable to complete network request to host "192.168.40.28".\n- Failed to establish a connection.', -902, 335544721)
what am I doing wrong here?
Answer: Assuming that the IP `192.168.40.28` is correct, my next guess would be that
you don't have port `3050` open (that's the default port for Firebird).
Check your server's firewall and open the port. You can use a port other than
`3050` by setting the `RemoteServicePort` parameter in the
`firebird.conf` file, but then you have to set the port parameter in the
`connect` method too.
|
rpy2 module not working in Python3.2
Question: I am trying to import the rpy2 (version2.3.4) library into Python
(version3.2.3) on a Ubuntu 12.10 machine. The rpy2 documentation says that
rpy2 works under all Python 3 versions and I am also finding other topics
related to rpy2 and Python3.2 which show that these versions should work
together. Anyhow when I try to import a module:
from rpy2 import robjects
the result is this:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python3.2/dist-packages/rpy2/robjects/__init__.py", line 14, in <module>
import rpy2.rinterface as rinterface
File "/usr/local/lib/python3.2/dist-packages/rpy2/rinterface/__init__.py", line 8, in <module>
raise RuntimeError("Python (>=2.7 and < 3.0) or >=3.3 are required to run rpy2")
RuntimeError: Python (>=2.7 and < 3.0) or >=3.3 are required to run rpy2
So, is rpy2 really not working with Python 3.2 (which would fit the
information the project is giving me), or what might be the problem?
thx.
Answer: > The rpy2 documentation says that rpy2 works under all Python 3 versions
Not quite, I hope; check the part about [installing
rpy2](http://rpy.sourceforge.net/rpy2/doc-2.3/html/overview.html#installation).
Python 3.2 will probably never be supported by rpy2 (Python 3.2 is already
EOL). If your are after using Python 3, update to Python 3.3.
|
What's wrong with my filter query to figure out if a key is a member of a list(db.key) property?
Question: I'm having trouble retrieving a filtered list from google app engine datastore
(using python for server side). My data entity is defined as the following
class Course_Table(db.Model):
course_name = db.StringProperty(required=True, indexed=True)
....
head_tags_1=db.ListProperty(db.Key)
So the head_tags_1 property is a list of keys (which are the keys to a
different entity called Headings_1).
In the handler below I spin through my Course_Table entity to filter the
courses that have a particular Headings_1 key as a member of the head_tags_1
property. However, it doesn't seem to retrieve anything, even though I know
there is data there to fulfill the request, since it never prints the log
messages when I iterate through the results of my query (below). Any ideas
of what I'm doing wrong?
def get(self,level_num,h_key):
path = []
if level_num == "1":
q = Course_Table.all().filter("head_tags_1 =", h_key)
for each in q:
logging.info('going through courses with this heading name')
logging.info("course name filtered is %s ", each.course_name)
MANY MANY THANK YOUS
Answer: ~~I assume h_key is key of headings_1, since head_tags_1 is a list, I believe
what you need is IN
operator.<https://developers.google.com/appengine/docs/python/datastore/queries>~~
Note: your indentation inside the for loop does not seem correct.
My bad: apparently `=` on a list property already checks membership. Using `=`
to check membership is working for me; can you make sure `h_key` is really a
datastore key class?
Here is my example; the first `get` produces a result, while the second one
does not:

    import webapp2
    from google.appengine.ext import db
class Greeting(db.Model):
author = db.StringProperty()
x = db.ListProperty(db.Key)
    class C(db.Model):
        name = db.StringProperty()
class MainPage(webapp2.RequestHandler):
def get(self):
ckey = db.Key.from_path('C', 'abc')
dkey = db.Key.from_path('C', 'def')
ekey = db.Key.from_path('C', 'ghi')
Greeting(author='xxx', x=[ckey, dkey]).put()
x = Greeting.all().filter('x =',ckey).get()
self.response.write(x and x.author or 'None')
x = Greeting.all().filter('x =',ekey).get()
self.response.write(x and x.author or 'None')
app = webapp2.WSGIApplication([('/', MainPage)],
debug=True)
|
PATH environment variable in python
Question: I'm using OS X 10.8.3.
If you open a terminal,
echo $PATH
/usr/local/bin is there, and the same holds if you run it via sh or bash.
However, the output of the Python code:
import os
print os.environ.copy()
lacks the /usr/local/bin path
Can anyone explain how $PATH works? Is there something that extends it? Why
didn't the Python script print the $PATH I see in the terminal? Does it
behave the same on Linux distributions?
How did I encounter it? I installed a Sublime Text 2 plugin, js2coffee. The
plugin runs a subprocess (import subprocess), providing the name of an
executable, js2coffee, which was in /usr/local/bin, a path that wasn't in the
Python os environment. In order to fix it I had to add it to the env:
env = os.environ.copy()
env["PATH"] = "/usr/local/bin/"
js2coffee = subprocess.Popen(
'js2coffee',
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True,
env= env
)
Answer: Terminal windows host interactive shells, typically `bash`. Shells get
initialized using a variety of profile and "rc" files, as documented in their
man pages (e.g.
[bash](https://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/bash.1.html)).
That initialization will change the environment in myriad ways.
In particular, `/etc/profile` runs the
[`path_helper`](https://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man8/path_helper.8.html)
tool to add directories to the `PATH` variable.
Applications launched from the Finder, Dock, Launchpad, etc. do not run shells
and don't have similar environments. They inherit a fairly basic environment
from their parent process, ultimately going back to launchd. See, for example,
the output of `launchctl export`. You could also use Automator, AppleScript
Editor, or the third-party tool Platypus to run the `env` command from a GUI
app to see what it has.
I'm not certain what is standard for Linux shells, but the same principle
applies. Programs launched from your desktop environment will inherit the
environment directly. Shells will initialize their environment using various
script files and may therefore have different environments.
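One caveat about the workaround shown in the question: assigning `env["PATH"] = "/usr/local/bin/"` replaces the whole search path, so anything the subprocess needs from `/usr/bin` or `/bin` disappears. Prepending is the safer pattern; a small sketch:

```python
import os

env = os.environ.copy()
# Prepend the extra directory instead of replacing the whole PATH,
# so system executables remain reachable for the subprocess.
env["PATH"] = "/usr/local/bin" + os.pathsep + env.get("PATH", "")
```

The resulting `env` can then be passed to `subprocess.Popen(..., env=env)` exactly as in the question.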
|
Sorting multidimensional JSON objects in python
Question: I have an object as such and want to sort it by type (line first, point
second) in each dimension (simplified json):
[{
"type":"point"
},
{
"type":"line",
"children": [
{
"type":"point"
},
{
"type":"point"
},
{
"type":"line"
}
]
},
{
"type":"point"
}]
This **dimension could be deeper** and have many more points/lines within each
other.
The **sorted output** would be something like this:
[{
"type":"line",
"children": [
{
"type":"line"
},
{
"type":"point"
},
{
"type":"point"
}
]
},
{
"type":"point"
},
{
"type":"point"
}]
Thanks
Answer: You'd need to process this recursively:
from operator import itemgetter
def sortLinesPoints(data):
if isinstance(data, dict):
if 'children' in data:
sortLinesPoints(data['children'])
else:
for elem in data:
sortLinesPoints(elem)
data.sort(key=itemgetter('type'))
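One thing worth noting: this relies on `'line'` sorting before `'point'` alphabetically, which happens to match the desired order. A quick check of the recursion on a small sample:

```python
from operator import itemgetter

def sortLinesPoints(data):
    # Recurse into dicts via their 'children' list, then sort each list
    # in place; 'line' < 'point' lexicographically, so lines come first.
    if isinstance(data, dict):
        if 'children' in data:
            sortLinesPoints(data['children'])
    else:
        for elem in data:
            sortLinesPoints(elem)
        data.sort(key=itemgetter('type'))

sample = [{"type": "point"},
          {"type": "line",
           "children": [{"type": "point"}, {"type": "line"}]}]
sortLinesPoints(sample)
# sample[0] is now the line, and its children list also starts with a line
```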
|
Returning a value from TkInter Slider + making slider horizontal with classes
Question: I am having a problem printing the value of a slider bar. I created a button
below the slider to do so, but it prints before it is pressed and doesn't
print when pressed.
I also have the problem of making the slider horizontal. I know it is
`orient=HORIZONTAL`, and it works when it is not in a class, but when it is in
a class it gives me trouble.
Here is the code, and as always, thank you so much!! :)
import Tkinter as tki # Tkinter -> tkinter in Python 3
import tkMessageBox
class GUI(tki.Tk):
def __init__(self):
tki.Tk.__init__(self)
self.wm_title("Menu Title")
RWidth=500
RHeight=300
self.geometry(("%dx%d")%(RWidth,RHeight))
menubar = tki.Menu(self)
# create a pulldown menu, and add it to the menu bar
menu0 = tki.Menu(menubar, tearoff=0)
menu0.add_command(label="Run Slider", command=self.slider_prompt)
menu0.add_separator()
menu0.add_command(label="Exit", command=self.quit)
menubar.add_cascade(label="Slider", menu=menu0)
# display the menu
self.config(menu=menubar)
def slider_prompt(self, msg='This is a slider'):
slider_window= self.top = tki.Toplevel(self)
slider_window.title("Slider Title")
RWidth=300
RHeight=200
slider_window.geometry(("%dx%d")%(RWidth,RHeight))
label0 = tki.Label(slider_window, text=msg)
label0.pack()
slider = tki.Scale(slider_window, from_=100, to=1000) # have to make horizontal
slider.pack()
#put button here that starts the program (pass the slider value)
button_cheat = tki.Button( slider_window, text='Print Value', command=self.print_slide_value(slider.get()) )
button_cheat.pack()
button_close = tki.Button(slider_window, text='Exit', command=lambda: self.top.destroy())
button_close.pack()
def print_slide_value(self, slider_value):
print slider_value
gui = GUI()
gui.mainloop()
Answer: When you use a command argument you have to pass it a reference to a
callable object: a function that will be called after the event. What you're
doing instead is passing the result of
`self.print_slide_value(slider.get())`, which is `None`.
Instead do:
def print_slide_value():
print slider.get()
button_cheat = tki.Button(slider_window, text='Print Value', command=print_slide_value)
Also I had no problems with this:
slider = tki.Scale(slider_window, from_=100, to=1000, orient=tki.HORIZONTAL)
|
Python: Search for and delete nested lists of dictionaries
Question: I have a tree of nested lists and dictionaries that I need to recursively go
through and remove entire dictionaries that match specific criteria. For
instance, I need to remove all dictionaries with the 'type' of 'Folder' that
have no children (or an empty list of children).
I am still a beginner Pythonist so please forgive the brute-forceness.
Here's a sample dictionary formatted for easy copy and paste.
{'children': [{'children': [{'key': 'group-1',
'name': 'PRD',
'parent': 'dc-1',
'type': 'Folder'},
{'children': [{'key': 'group-11',
'name': 'App1',
'parent': 'group-2',
'type': 'Folder'}],
'key': 'group-2',
'name': 'QA',
'parent': 'dc-1',
'type': 'Folder'},
{'key': 'group-3',
'name': 'Keep',
'parent': 'dc-1',
'type': 'Host'}],
'key': 'dc-1',
'name': 'ABC',
'parent': 'root',
'type': 'Datacenter'}],
'key': 'root',
'name': 'Datacenters',
'parent': None,
'type': 'Folder'}
In this dictionary the only tree that should remain is /root/dc-1/group-3. The
group-11 folder should be deleted first, then its parent (since the child is
no longer there), etc.
I have tried many different recursive methods but can't seem to get it to work
properly. Any help would be greatly appreciated.
def cleanup(tree):
def inner(tree):
if isinstance(tree, dict):
if 'type' in tree and tree['type'] == 'Folder':
if 'children' not in tree or not tree['children']:
print 'Deleting tree: ' + str(tree['name'])
if str(tree['key']) not in del_nodes:
del_nodes.append(str(tree['key']))
else:
for item in tree.values():
inner(item)
# Delete empty folders here
if del_nodes:
print 'Perform delete here'
if 'children' in tree and isinstance(tree['children'], (list, tuple)):
getvals = operator.itemgetter('key')
tree['children'].sort(key=getvals)
result = []
# groupby is the wrong method. I need a list of tree['children'] that doesn't contain keys in del_nodes
for k, g in itertools.groupby(tree['children'], getvals):
result.append(g.next())
tree['children'][:] = result
del_nodes = []
else:
for item in tree.values():
inner(item)
elif isinstance(tree, (list, tuple)):
for item in tree:
inner(item)
if isinstance(item, dict):
if 'type' in item and item['type'] == 'Folder':
if 'children' not in item or not item['children']:
print 'Delete ' + str(item['name'])
if str(item['key']) not in del_nodes:
del_nodes.append(str(item['key']))
elif isinstance(item, (list, tuple)):
if not item:
print 'Delete ' + str(item['name'])
if str(item['key']) not in del_nodes:
del_nodes.append(str(item['key']))
inner(tree)
Answer: I'd suggest you write a function to walk your datastructure and call a
function on each node.
_Updated to avoid the "deleting item from iterated sequence" bug_
**E.g.**
def walk(node,parent=None,func=None):
for child in list(node.get('children',[])):
walk(child,parent=node,func=func)
if func is not None:
func(node,parent=parent)
def removeEmptyFolders(node,parent):
if node.get('type') == 'Folder' and len(node.get('children',[])) == 0:
parent['children'].remove(node)
d = {'children': [{'children': [{'key': 'group-1',
'name': 'PRD',
'parent': 'dc-1',
'type': 'Folder'},
{'children': [{'key': 'group-11',
'name': 'App1',
'parent': 'group-2',
'type': 'Folder'}],
'key': 'group-2',
'name': 'QA',
'parent': 'dc-1',
'type': 'Folder'},
{'key': 'group-3',
'name': 'Keep',
'parent': 'dc-1',
'type': 'Host'}],
'key': 'dc-1',
'name': 'ABC',
'parent': 'root',
'type': 'Datacenter'}],
'key': 'root',
'name': 'Datacenters',
'parent': None,
'type': 'Folder'}
**Notes**
* _Walk_ function uses three arguments, the child node, the parent node and the _work_ function.
* The _walk_ function calls the _work_ function after visiting the child nodes.
* The _work_ function takes both child and parent nodes as arguments so pruning the child is as easy as `parent['children'].remove(child)`
* _Update_ : As noticed in the comments, if you delete from a sequence while iterating, it will skip elements. `for child in list(node.get('children',[]))` in the `walk` function copies the list of children allowing the entries to be removed from the parent's key without skipping.
**Then** :
>>> walk(d,func=removeEmptyFolders)
>>> from pprint import pprint
>>> pprint(d)
{'children': [{'children': [{'key': 'group-3',
'name': 'Keep',
'parent': 'dc-1',
'type': 'Host'}],
'key': 'dc-1',
'name': 'ABC',
'parent': 'root',
'type': 'Datacenter'}],
'key': 'root',
'name': 'Datacenters',
'parent': None,
'type': 'Folder'}
|
Using Sage Math library within Python
Question: I am trying to make a visualization of a graph using Sage. I need to make the
visualization exactly as I am writing the Python code.
I have downloaded and installed Sage for Ubuntu, and the Sage Notebook is
working perfectly. But I want to take user input from Tkinter and then show
that input on the graph (generated by Sage). However, I am unable to import
Sage in the Python shell. How can I do so?
Answer: From looking at the [faq](http://www.sagemath.org/doc/faq/faq-usage.html#how-
do-i-import-sage-into-a-python-script), it looks like what you need to do is
add the following line to your Python file:
from sage.all import *
Then, it looks like you need to run your script by using the Python
interpreter bundled with Sage from the command line/console:
sage -python /path/to/my/script.py
However, if you want to use Sage directly from the shell, you should probably
try using the [interactive
shell](http://www.sagemath.org/doc/tutorial/interactive_shell.html). (just
type in `sage` or maybe `sage -python` from the command line)
Caveat: I haven't tested any of this myself, so you might need to do a bit of
experimenting to get everything to work.
|
Python: Using pdb with Flask application
Question: I'm using Flask 0.9 with Python 2.7.1 within a virtualenv, and starting my app
with `foreman start`
In other apps I've built when I add the following line to my app:
import pdb; pdb.set_trace()
then reload the browser window, my terminal window displays the pdb
interactive debugger:
(pdb)
However in my app when I add those lines nothing happens. The browser window
hangs and shows a constant state of loading yet nothing shows in the console.
Is there some magic that needs to happen?
Answer: This is because you're using Foreman, which captures the standard output.
To debug your app with `pdb`, you'll need to "manually" run it, using `python
app.py` or whatever you use.
Alternatively, you can use [WinPDB](http://winpdb.org/) (which, despite the
name, has _nothing_ to do with the operating system), which will let you
remotely debug a Python process. You can even use it when the program is
running on another server.
|
Installing QuantLib python SWIG module on Google app engine
Question: I am new to GAE. I wish to use the QuantLib python library (SWIG) as a module
inside google app engine. I was following this blog post to set up QuantLib-
SWIG on Ubuntu. <http://blog.quantess.net/2012/09/26/quantlib-get-it-working-
on-ubuntu/>
I have compiled the modules for Python using `make -C Python` after installing
the required Boost C++ libraries as mentioned in the post.

I've copied the QuantLib folder to my app folder. The QuantLib folder contains
the following files:
__init__.py
__init__.pyc
QuantLib.py
QuantLib.pyc
_QuantLib.so*
This is my app directory structure:
app.yaml
index.py
QuantLib/
However, when I do an

    import QuantLib

in the index.py in my app folder, I get the following error:
<type 'exceptions.ImportError'>: No module named _QuantLib
args = ('No module named _QuantLib',)
message = 'No module named _QuantLib'
I also get this in the dev_appserver logs:
ImportError: No module named _QuantLib
_QuantLib is a `.so` file. Is there a way I can fix this problem? Or any other
way to use QuantLib libraries for GAE?
Thanks.
Answer: No.
There are a limited number of 3rd party libraries that are not pure python.
You cannot add your own non pure python libraries to appengine runtime.
Here is the current list of included 3rd party libs
<https://developers.google.com/appengine/docs/python/tools/libraries27>
You can add any **pure python** libraries in your own code base.
|
using python logging in multiple modules
Question: I have a small python project that has the following structure -
Project
-- pkg01
-- test01.py
-- pkg02
-- test02.py
-- logging.conf
I plan to use the default logging module to print messages to stdout and a log
file. To use the logging module, some initialization is required -
import logging.config
logging.config.fileConfig('logging.conf')
logr = logging.getLogger('pyApp')
logr.info('testing')
At present, I perform this initialization in every module before I start
logging messages. Is it possible to perform this initialization only once in
one place, such that the same settings are reused for logging all over the
project?
Answer: Best practice is, in each module, to have a logger defined like this:
import logging
logger = logging.getLogger(__name__)
near the top of the module, and then in other code in the module do e.g.
logger.debug('My message with %s', 'variable data')
If you need to subdivide logging activity inside a module, use e.g.
loggerA = logging.getLogger(__name__ + '.A')
loggerB = logging.getLogger(__name__ + '.B')
and log to `loggerA` and `loggerB` as appropriate.
In your main program or programs, do e.g.:
def main():
"your program code"
if __name__ == '__main__':
import logging.config
logging.config.fileConfig('/path/to/logging.conf')
main()
or
def main():
import logging.config
logging.config.fileConfig('/path/to/logging.conf')
# your program code
if __name__ == '__main__':
main()
See [here](http://docs.python.org/howto/logging.html#logging-from-multiple-
modules) for logging from multiple modules, and
[here](http://docs.python.org/howto/logging.html#configuring-logging-for-a-
library) for logging configuration for code which will be used as a library
module by other code.
**Update:** When calling `fileConfig()`, you may want to specify
`disable_existing_loggers=False` if you're using Python 2.6 or later (see [the
docs](http://docs.python.org/2/library/logging.config.html#logging.config.fileConfig)
for more information). The default value is `True` for backward compatibility,
which causes all existing loggers to be disabled by `fileConfig()` unless they
or their ancestor are explicitly named in the configuration. With the value
set to `False`, existing loggers are left alone. If using Python 2.7/Python
3.2 or later, you may wish to consider the `dictConfig()` API which is better
than `fileConfig()` as it gives more control over the configuration.
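As a sketch of the `dictConfig()` route mentioned above (the logger name `pkg01.test01` is just a placeholder matching the question's layout):

```python
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    # keep loggers that modules created at import time working:
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(name)s %(levelname)s %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "simple"},
    },
    "root": {"handlers": ["console"], "level": "DEBUG"},
}

# call this once, in the main program only
logging.config.dictConfig(LOGGING_CONFIG)

# each module still just does: logger = logging.getLogger(__name__)
logger = logging.getLogger("pkg01.test01")
logger.info("testing")
```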
|
why is the code returning an IndexError in python when synsets of the word exist
Question: I do not understand why I am getting this error. Please help
>>> mylist = []
>>> file1 = open("medDict.txt", "r")
>>> for line in file1:
from nltk.corpus import wordnet
print line
wordFromList2 = wordnet.synsets(line)[0]
mylist.append(wordFromList2)
abnormal
Traceback (most recent call last):
File "<pyshell#10>", line 4, in <module>
wordFromList2 = wordnet.synsets(line)[0]
IndexError: list index out of range
medDict.txt contains the below words
abnormal
acne
ache
diarrhea
fever
Answer: @Blender was right about whitespace sensitivity for `wordnet.synsets()`. If you
need to access any `synsets` that contain **whitespace** in natural language,
WordNet uses the **underscore** `_` instead of the space. E.g. if you want to
find something like `kick the bucket`, you access the synsets from the NLTK WN
interface with `wn.synsets("kick_the_bucket")`:
>>> from nltk.corpus import wordnet as wn
>>> wn.synsets('kick the bucket')
[]
>>> wn.synsets('kick_the_bucket')
[Synset('die.v.01')]
However, do note that sometimes WordNet has encoded some synset with dashes
instead of underscore. E.g. `9-11` is accessible but `9_11` isn't.
>>> wn.synsets('9-11')
[Synset('9/11.n.01')]
>>> wn.synsets('9_11')
[]
Now to resolve your problems with your code.
**1.** When you read a file line by line, you also read the invisible but
existing `\n` in the line. So you need to change this:
>>> mylist = []
>>> file1 = open("medDict.txt", "r")
to this:
>>> words_from_file = [i.strip() for i in open("medDict.txt", "r")]
**2.** I'm not very sure you really want `wordnet.synsets(word)[0]`, this
means you only take the first sense, do note that it might not be the `Most
Frequent Sense (MFS)`. So instead of doing this:
>>> wordFromList2 = wordnet.synsets(line)[0]
>>> mylist.append(wordFromList2)
I think the more appropriate way is to use a `set` instead and then `update`
the set
>>> list_of_synsets = set()
>>> for i in words_from_file:
>>> list_of_synsets.update(wordnet.synsets(i))
>>> print list_of_synsets
|
SQLAlchemy ValueError for slash in password for create_engine()
Question: Fairly simple-looking problem: my Python script is attempting to create a
SQLAlchemy database connection. The password contains a forward slash:
engineString = 'postgresql://wberg:pass/word@localhost/mydatabase'
engine = sqlalchemy.create_engine(engineString)
But the second line raises:
ValueError: invalid literal for int() with base 10: 'pass'
Using a raw string (prepending with 'r') doesn't help. Is there some way to
get SQLAlchemy to accept that password? My normal next step would be to try to
construct the connection with subordinate methods, but I can't see another way
of making a connection in the doc. Am I simply not allowed passwords with
slashes here? I can accomodate this, but it seems unlikely that the toolkit
could have gotten this far without that feature.
Versions: Python 2.6.6, SQLAlchemy 0.8.0
Answer: Slashes aren't valid characters for URL component strings. You need to URL-
encode the password portion of the connect string:
from urllib import quote_plus as urlquote
from sqlalchemy.engine import create_engine
engineString = 'postgresql://wberg:%s@localhost/mydatabase' % urlquote('pass/word')
engine = create_engine(engineString)
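The snippet above is Python 2; on Python 3 the same helper lives in `urllib.parse` (a sketch of the same idea, stdlib only):

```python
from urllib.parse import quote_plus

# URL-encode only the password component before building the connect string
password = quote_plus('pass/word')
engine_string = 'postgresql://wberg:%s@localhost/mydatabase' % password
# engine = create_engine(engine_string)  # as above, once SQLAlchemy is imported
```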
|
Python SQLITE3 SELECT query with datetime calculated string not working
Question: I have a SQLite3 DB with a table named `TEST_TABLE`, which looks like this:
("ID" TEXT,"DATE_IN" DATE,"WEEK_IN" number);
There are 2 entries in the table:
1|2012-03-25|13
2|2013-03-25|13
I'm trying to write a query that returns the ID for week 13 of this year. I
want to use the program again next year, so I cannot hardcode "2013" as the
year.
I used datetime to calculate a value for this year, creating a `datetime.date`
object with content like this: "2013-01-01". I then converted this to a
string:
this_year = (datetime.date(datetime.date.today().isocalendar()[0], 1, 1))
test2 = ("'"+str(this_year)+"'")
Then I queried the SQLite DB:
cursr = con.cursor()
con.text_factory = str
cursr.execute("""select ID from TEST_TABLE where WEEK_IN = 13 and DATE_IN > ? """,[test2])
result = cursr.fetchall()
print result
[('1',), ('2',)]
This returns the IDs 1 and 2, but this is no good, because ID 1 has '2012' as
the year.
The strange thing is, if I don't use datetime for the string, but create the
var manually, IT WORKS CORRECTLY.
test2 = ('2013-01-01')
cursr.execute("""select ID from TEST_TABLE where WEEK_IN = 13 and DATE_IN > ? """,[test2])
result = cursr.fetchall()
print result
[('2',)]
So why won't the query work correctly when I create the string via datetime? A
string is a string, right? So what am I missing here?
Answer: Instead of converting `this_year` into a string, just leave it as a
`datetime.date` object:
this_year = DT.date(DT.date.today().year,1,1)
* * *
import sqlite3
import datetime as DT
this_year = (DT.date(DT.date.today().isocalendar()[0], 1, 1))
# this_year = ("'"+str(this_year)+"'")
# this_year = DT.date(DT.date.today().year,1,1)
with sqlite3.connect(':memory:') as conn:
cursor = conn.cursor()
sql = '''CREATE TABLE TEST_TABLE
("ID" TEXT,
"DATE_IN" DATE,
"WEEK_IN" number)
'''
cursor.execute(sql)
sql = 'INSERT INTO TEST_TABLE(ID, DATE_IN, WEEK_IN) VALUES (?,?,?)'
cursor.executemany(sql, [[1,'2012-03-25',13],[2,'2013-03-25',13],])
sql = 'SELECT ID FROM TEST_TABLE where WEEK_IN = 13 and DATE_IN > ?'
cursor.execute(sql, [this_year])
for row in cursor:
print(row)
yields
(u'2',)
* * *
The sqlite3 database adapter will quote arguments for you when you write
parametrized SQL and use the 2-argument form of `cursor.execute`. So you do
not need (or want) to quote arguments manually yourself.
So
this_year = str(this_year)
instead of
this_year = ("'"+str(this_year)+"'")
also works, but as shown above, both lines are unnecessary, since `sqlite3`
will accept `datetime` objects as arguments as well.
Since sqlite3 automatically quotes arguments, when you manually add quotes,
the final argument gets two sets of quotes. The SQL ends up comparing
In [59]: '2012-03-25' > "'2013-01-01'"
Out[59]: True
which is why both rows were (erroneously) returned.
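The faulty comparison is easy to reproduce in plain Python: string comparison is lexicographic, and the leading `'` (ASCII 39) sorts before every digit, so any bare date string compares greater than the doubly-quoted one:

```python
# extra quotes: wrong result, because "'" < "2" character-wise
assert '2012-03-25' > "'2013-01-01'"
# plain ISO-format dates compare correctly as strings
assert not ('2012-03-25' > '2013-01-01')
```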
|
python list vote(['G', 'G', 'N', 'G', 'C'])
Question:
vote(['G', 'G', 'N', 'G', 'C'])
I want to get this result : `('G', [1, 3, 0, 1])`
g_count = 0
n_count = 0
l_count = 0
c_count = 0
for i in range(len(ballots)):
if ballots[i] == 'G':
g_count += 1
elif ballots[i] =='N':
n_count += 1
elif ballots[i] == 'L':
l_count +=1
else:
c_count += 1
return [n_count,g_count,l_count,c_count]
how do i get the 'G' at the front?
Answer: something like this:
In [9]: from collections import Counter
In [15]: def vote(lis):
....: c=Counter(lis)
....: return c.most_common()[0][0],[c[x] for x in "NGLC"]
....:
In [16]: vote(['G', 'G', 'N', 'G', 'C'])
Out[16]: ('G', [1, 3, 0, 1])
In [17]: vote(['G', 'G', 'N', 'G', 'C','L','L'])
Out[17]: ('G', [1, 3, 2, 1])
In [18]: vote(['G', 'L', 'N', 'G', 'C','L','L'])
Out[18]: ('L', [1, 2, 3, 1])
here `c.most_common()` returns `[('G', 3), ('C', 1), ('N', 1)]`, from this you
can get the `'G'`.
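Note that a `Counter` returns 0 for keys it has never counted, which is why the comprehension over `"NGLC"` is safe even when a candidate received no votes:

```python
from collections import Counter

c = Counter(['G', 'G', 'N', 'G', 'C'])
assert c['L'] == 0                          # unseen key counts as zero
assert c.most_common(1) == [('G', 3)]       # top candidate only
assert [c[x] for x in "NGLC"] == [1, 3, 0, 1]
```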
|
Python - Updating contents of option menu
Question: So the problem I am currently having is that I want to update the second
option menu, based on what the user selected in the first. I think I have to
use a lambda function here to make it so that the frame updates or something,
but I am unsure of how exactly to do this. Here is my code so far:
from tkinter import *
import time
class CustomerEntryForm(Frame):
def __init__(self):
Frame.__init__(self)
self.master.title("Customer Entry form:")
self.pack()
execute = True
thirtyMonthList = [4,6,9,11]
thirtyOneMonthList = [1,2,6,7,8,10,12]
monthList = []
dayList = []
for i in range(1,13):
monthList.append(i)
initialMonth = IntVar(self)
initialMonth.set(monthList[0])
initialDay = IntVar(self)
def resetDayOptionMenu():
for i in range(1,len(dayList)+1):
dayList.remove(i)
def setDayList():
resetDayOptionMenu()
if initialMonth.get() == 2:
for i in range(1, 29):
dayList.append(i)
initialDay.set(dayList[0])
elif initialMonth.get() in thirtyMonthList:
for i in range(1, 31):
dayList.append(i)
initialDay.set(dayList[0])
elif initialMonth.get() in thirtyOneMonthList:
for i in range(1, 32):
dayList.append(i)
initialDay.set(dayList[0])
self.om2 = OptionMenu(self, initialMonth, *monthList, command = setDayList())
self.om2.grid(row=0)
self.om = OptionMenu(self, initialDay, *dayList)
self.om.grid(row=1)
root = CustomerEntryForm()
root.mainloop()
I appreciate any help. Thanks.
Answer: It would be easier to remove and then just add the second `OptionMenu` field
after the month changes.
Like this:
...
thirtyMonthList = [4,6,9,11]
initialMonth = IntVar(self)
initialMonth.set(1)
initialDay = IntVar(self)
initialDay.set(1)
def removeDayOptionMenu():
self.om.destroy()
def setDayList(event):
removeDayOptionMenu()
if initialMonth.get() == 2:
addDayOptionMenu(range(1,29))
elif initialMonth.get() in thirtyMonthList:
addDayOptionMenu(range(1,31))
else:
addDayOptionMenu(range(1,32))
def addDayOptionMenu(dayList):
self.om = OptionMenu(self, initialDay, *dayList)
self.om.grid(row=1)
self.om2 = OptionMenu(self, initialMonth, *range(1,13), command = setDayList)
self.om2.grid(row=0)
self.om = OptionMenu(self, initialDay, *range(1,32))
self.om.grid(row=1)
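As a side note, instead of hand-maintained month lists like `thirtyMonthList`, the standard library's `calendar.monthrange` already knows the day count per month (the years below are just illustrations; a real form would also let the user pick the year, which matters for February):

```python
import calendar

def days_in_month(year, month):
    # monthrange returns (weekday of the 1st, number of days in the month)
    return calendar.monthrange(year, month)[1]

assert days_in_month(2013, 2) == 28   # non-leap February
assert days_in_month(2012, 2) == 29   # leap year
assert days_in_month(2013, 4) == 30
assert days_in_month(2013, 1) == 31
```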
|
Python Tkinter rotate image animation
Question: I have a PhotoImage that I created using PIL and then added to a TKinter
canvas. The image shows up fine. However, I can't get the PIL rotate function
to work correctly. Here is my code:
#This works fine
image = Image.open('img.png')
canvas_image = ImageTk.PhotoImage(image)
canvas_object = canvas.create_image(30+10*int(steps),250, image=canvas_image)
canvas.pack()
#this does not work
canvas.delete(canvas_object)
rotated_image = image.rotate(1)
canvas_image = ImageTk.PhotoImage(rotated_image)
canvas_object = canvas.create_image(30+10*int(steps),250, image=canvas_image)
canvas.update()
However, the image just does not show up in this case. I want to be able to
animate the rotation of an image, but I can't even get a rotation to work at
all! Any suggestions are greatly appreciated.
Edit: correction, the rotation will not work after it's already been added to
the screen
Answer:
import Tkinter as tk
from PIL import ImageTk
from PIL import Image
class SimpleApp(object):
def __init__(self, master, filename, **kwargs):
self.master = master
self.filename = filename
self.canvas = tk.Canvas(master, width=500, height=500)
self.canvas.pack()
self.update = self.draw().next
master.after(1, self.update)
def draw(self):
image = Image.open(self.filename)
angle = 0
while True:
tkimage = ImageTk.PhotoImage(image.rotate(angle))
canvas_obj = self.canvas.create_image(
250, 250, image=tkimage)
self.master.after_idle(self.update)
yield
self.canvas.delete(canvas_obj)
angle += 10
angle %= 360
root = tk.Tk()
app = SimpleApp(root, 'image.png')
root.mainloop()
|
is there any way to jump to a specified function name using help() command in python
Question: I always use `help(object)` command in python and I would like to know is
there any way I can skip most of the text and kinda jump to the function that
I want. For example:
>>> import boto
>>> s3 = boto.connect_s3()
>>> help(s3)
it gives me a very long description of this object:
Help on S3Connection in module boto.s3.connection object:
...
...
...
server_name(self, port=None)
to be more clear, can I do something like:
>>> help(s3, server_name)
Answer: Just pass in the method:
help(s3.server_name)
|
Finding text in json with python
Question: I've got a json file that I'm importing into Python and trying to look for the
occurrence of a phrase. If I try this:
any(keyword in s for s in json_data)
where keyword is the thing I'm looking for and json_data is what I got from
using json.load(). It always returns false even when the keyword is in
json_data. Here's how I'm indexing into the json:
json_data["thing1"]["thing2"][0]["thing3"]
The field [0] varies from 0-16 and the thing I want is in thing3. Why can't I
get a True even when the keyword is in json_data?
Answer:
any(keyword in s for s in json_data)
is looking in the first level dictionary, where `thing1` is located. You have
to look at the dictionary containing `thing3`, which is the indexed one
any(keyword in s for s in json_data["thing1"]["thing2"])
Assuming `thing3` is the `keyword` you are looking for...
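One subtlety: when `s` is a dict, `keyword in s` tests its *keys*. If the keyword actually lives inside the *value* of `thing3`, index into it explicitly (the data below is made up purely for illustration):

```python
json_data = {"thing1": {"thing2": [
    {"thing3": "kid got swag"},
    {"thing3": "something else"},
]}}

keyword = "swag"
# 'keyword in item' would only test the dict keys; test the value instead
found = any(keyword in item["thing3"]
            for item in json_data["thing1"]["thing2"])
assert found
```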
|
How to read a pdb file and perform FFT based docking using python?
Question: Suppose I have two PDB files (one of them is as follows)
ATOM 1 N MET A 1 66.104 56.583 -35.505
ATOM 2 CA MET A 1 66.953 57.259 -36.531
ATOM 3 C MET A 1 67.370 56.262 -37.627
ATOM 4 O MET A 1 67.105 55.079 -37.531
ATOM 5 CB MET A 1 68.227 57.852 -35.867
ATOM 6 CG MET A 1 67.848 58.995 -34.899
ATOM 7 SD MET A 1 66.880 58.593 -33.421
.... . .. ... . . ...... ...... ......
.... . .. ... . . ...... ...... ......
This file can be read in python using following script.
import sys
x=[];y=[];z=[]
res=[]
Nr=0
for fn in sys.argv[1:]:
f=open(fn,'r')
while 1:
line=f.readline()
if not line: break
if line[0:6]=='ATOM ' :
rx=float(line[30:38]);ry=float(line[38:46]);rz=float(line[46:54])
if line[21]=='A' :
x.append(rx); y.append(ry); z.append(rz)
Nr=Nr+1
res.append(line[17:20])
for i in range(1,Nr-1):
print fn, i, res[i], x[i], y[i], z[i]
f.close()
Now I would like to generate the grid of `N*N*N` dimension and rotate and
translate the molecule on the grid. The rotation and translation can be done
by using FFT (Fast Fourier Transform).
I tried to write something like follows
import numpy as np
import fftw as fft
class Grid3D(object):
def __init__(self, grid_dimension):
x = y = z = grid_dimension
self.grid = np.zeros([x, y, z], dtype=float)
All this is actually to perform docking of two molecules using 3d grid and
FFT. I wanted to know how to proceed further or any better way?
Answer: An answer to you first question, **"How to read a pdb file"**
If you want to end up with a numpy array, you can use `numpy.genfromtxt` which
is very nice, and much easier to implement and use than your looped reading.
It is also much more robust to spacing of the files, etc.
import numpy as np
data = np.genfromtxt('filename.txt',
names = 'ATOM,index,el,MET,A,count,x,y,z',
dtype=['S4',int,'S2','S3','S1',int,float,float,float])
Now `data` is a numpy "structured array", which can easily be accessed as
follows:
In [13]: data
Out[13]:
array([('ATOM', 1, 'N', 'MET', 'A', 1, 66.104, 56.583, -35.505),
('ATOM', 2, 'CA', 'MET', 'A', 1, 66.953, 57.259, -36.531),
('ATOM', 3, 'C', 'MET', 'A', 1, 67.37, 56.262, -37.627),
('ATOM', 4, 'O', 'MET', 'A', 1, 67.105, 55.079, -37.531),
('ATOM', 5, 'CB', 'MET', 'A', 1, 68.227, 57.852, -35.867),
('ATOM', 6, 'CG', 'MET', 'A', 1, 67.848, 58.995, -34.899),
('ATOM', 7, 'SD', 'MET', 'A', 1, 66.88, 58.593, -33.421)],
dtype=[('ATOM', 'S4'), ('index', '<i8'), ('el', 'S2'), ('MET', 'S3'), ('A', 'S1'), ('count', '<i8'), ('x', '<f8'), ('y', '<f8'), ('z', '<f8')])
In [14]: data['x']
Out[14]: array([ 66.104, 66.953, 67.37 , 67.105, 68.227, 67.848, 66.88 ])
In [15]: data['y']
Out[15]: array([ 56.583, 57.259, 56.262, 55.079, 57.852, 58.995, 58.593])
In [16]: data['index']
Out[16]: array([1, 2, 3, 4, 5, 6, 7])
In [17]: data[3]
Out[17]: ('ATOM', 4, 'O', 'MET', 'A', 1, 67.105, 55.079, -37.531)
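Since `genfromtxt` accepts any file-like object, the call is easy to try on an in-memory string before pointing it at the real PDB file (assuming NumPy is installed):

```python
import io
import numpy as np

pdb_text = u"""ATOM 1 N MET A 1 66.104 56.583 -35.505
ATOM 2 CA MET A 1 66.953 57.259 -36.531"""

data = np.genfromtxt(io.StringIO(pdb_text),
                     names='ATOM,index,el,MET,A,count,x,y,z',
                     dtype=['S4', int, 'S2', 'S3', 'S1', int,
                            float, float, float])
```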
|
Drawing Stars with turtle in python
Question: I need to write a function that draws random stars based upon the question of
how many stars the person wants. a=color,b=length and c,d are the starting
coordinates. I am not really sure where I am going wrong any tips,hints or
help would be appreciated.
import turtle
from random import*
def star(a,b,c,d):
x=5
y=0
turtle.color(a)
turtle.begin_fill()
turtle.penup()
turtle.goto(c,d)
turtle.pendown()
while x>0:
turtle.forward(b)
turtle.right(144)
turtle.forward(b)
x-=1
turtle.end_fill()
star('red',100,0,0)
def random_color():
randvar=randrange(0,5)
if randvar==0:
return ('red')
elif randvar==1:
return ('blue')
elif randvar==2:
return ('green')
elif randvar==3:
return ('yellow')
else:
return ('black')
def length():
randvar=randrange(5,71)
def x():
randvar=randrange(-280,281)
def y():
randvar=randrange(-200,201)
def night_sky():
z=int(input('How many stars do you want?'))
a=random_color
b=length
c=x
d=y
while z>0:
star(a,b,c,d)
z-=1
Answer: To call a function, put parentheses after the function name:
a=random_color()
b=length()
c=x()
d=y()
* * *
Make sure you call `night_sky()` at the end of the script. Currently, only
star('red',100,0,0)
is getting called. That's why you only see one star.
* * *
The functions `length`, `x` and `y` need to use `return`. Otherwise, `None` is
returned by default.
def length():
return randrange(5,71)
def x():
return randrange(-280,281)
def y():
return randrange(-200,201)
* * *
You need to move the statements defining `a`, `b`, `c`, and `d` into the
`while`-loop, lest the same star gets drawn `z` times. While we're at it, the
`while`-loop can be more simply written as a `for`-loop:
for i in range(z):
a=random_color()
b=length()
c=x()
d=y()
star(a,b,c,d)
* * *
Your code will become more self-documenting if you use more descriptive
variable names:
def star(color, side_length, x, y):
print(color, side_length, x, y)
turtle.color(color)
turtle.begin_fill()
turtle.penup()
turtle.goto(x, y)
turtle.pendown()
for i in range(5):
turtle.forward(side_length)
turtle.right(144)
turtle.forward(side_length)
turtle.end_fill()
* * *
So with these changes, the code becomes:
import turtle
import random
def star(color, side_length, x, y):
print(color, side_length, x, y)
turtle.color(color)
turtle.begin_fill()
turtle.penup()
turtle.goto(x, y)
turtle.pendown()
for i in range(5):
turtle.forward(side_length)
turtle.right(144)
turtle.forward(side_length)
turtle.end_fill()
def random_color():
randvar = random.randrange(0, 5)
if randvar == 0:
return ('red')
elif randvar == 1:
return ('blue')
elif randvar == 2:
return ('green')
elif randvar == 3:
return ('yellow')
else:
return ('black')
def length():
return random.randrange(5, 71)
def xcoord():
return random.randrange(-280, 281)
def ycoord():
return random.randrange(-200, 201)
def night_sky():
z = int(input('How many stars do you want?'))
for i in range(z):
color = random_color()
side_length = length()
x = xcoord()
y = ycoord()
star(color, side_length, x, y)
night_sky()
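As a further cleanup, the `if/elif` ladder in `random_color` can be collapsed with `random.choice`, which picks uniformly among the five colors just as `randrange(0, 5)` over five branches did:

```python
import random

COLORS = ('red', 'blue', 'green', 'yellow', 'black')

def random_color():
    return random.choice(COLORS)
```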
|
How to tell if python instance was compiled as framework?
Question: How can one tell if a given instance of Python (on OS X) was compiled with the
`--enable-framework` flag?
The one thing I tried is not entirely conclusive:
% python -c 'import sysconfig, pprint; pprint.pprint(sysconfig.get_config_vars())' | grep -i framework
'LIBS': '-ldl -framework CoreFoundation',
'PYTHONFRAMEWORK': '',
'PYTHONFRAMEWORKDIR': 'no-framework',
'PYTHONFRAMEWORKINSTALLDIR': '',
'PYTHONFRAMEWORKPREFIX': '',
'RESSRCDIR': 'Mac/Resources/framework',
'SHLIBS': '-ldl -framework CoreFoundation',
'WITH_NEXT_FRAMEWORK': 0,
Answer: The definitive test for an OS X framework build:
./configure --enable-framework ...
is the presence of `PYTHONFRAMEWORK` in the Python build config variables.
This idiom is used throughout the Python standard library:
if get_config_var("PYTHONFRAMEWORK"):
# framework build
See for example, [this
test](http://hg.python.org/cpython/file/v3.3.0/Lib/sysconfig.py) at line 221
in the `sysconfig` module itself.
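Wrapped up as a helper, the check is one line (this just packages the idiom above; it returns `False` on any non-framework build, including Linux):

```python
from sysconfig import get_config_var

def is_framework_build():
    # an empty string or None both mean "not a framework build"
    return bool(get_config_var("PYTHONFRAMEWORK"))
```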
|
python mock Requests and the response
Question: I am a beginner to using mock in python and trying to use
<http://www.voidspace.org.uk/python/mock>. Please tell me the basic calls to
get me working in below scenario. I am using pythons Requests module.
In my views.py, I have a function that makes variety of requests.get() calls
with different response each time
def myview(request):
res1 = requests.get('aurl')
res2 = requests.get('burl')
res3 = requests.get('curl')
In my test class I want to do something like this but cannot figure out exact
method calls
Step 1:
//Mock the requests module
//when mockedRequests.get('aurl') is called then return 'a response'
//when mockedRequests.get('burl') is called then return 'b response'
//when mockedRequests.get('curl') is called then return 'c response'
Step 2:
Call my view
Step 3:
verify response contains 'a response', 'b response' , 'c response'
Please help me to complete Step 1.
Answer: This is how you can do it (you can run this file as-is):
import requests
import unittest
from unittest import mock
# This is the class we want to test
class MyGreatClass:
def fetch_json(self, url):
response = requests.get(url)
return response.json()
# This method will be used by the mock to replace requests.get
def mocked_requests_get(*args, **kwargs):
class MockResponse:
def __init__(self, json_data, status_code):
self.json_data = json_data
self.status_code = status_code
def json(self):
return self.json_data
if args[0] == 'http://someurl.com/test.json':
    return MockResponse({"key1": "value1"}, 200)
elif args[0] == 'http://someotherurl.com/anothertest.json':
    return MockResponse({"key2": "value2"}, 200)
return MockResponse({}, 404)
# Our test case class
class MyGreatClassTestCase(unittest.TestCase):
# We patch 'requests.get' with our own method. The mock object is passed in to our test case method.
@mock.patch('requests.get', side_effect=mocked_requests_get)
def test_fetch(self, mock_get):
# Assert requests.get calls
mgc = MyGreatClass()
json_data = mgc.fetch_json('http://someurl.com/test.json')
self.assertEqual(json_data, {"key1": "value1"})
json_data = mgc.fetch_json('http://someotherurl.com/anothertest.json')
self.assertEqual(json_data, {"key2": "value2"})
# We can even assert that our mocked method was called with the right parameters
self.assertIn(mock.call('http://someurl.com/test.json'), mock_get.call_args_list)
self.assertIn(mock.call('http://someotherurl.com/anothertest.json'), mock_get.call_args_list)
self.assertEqual(len(mock_get.call_args_list), 2)
if __name__ == '__main__':
unittest.main()
**Important Note:** If your `MyGreatClass` class lives in a different package,
say `my.great.package`, you have to mock `my.great.package.requests.get`
instead of just `requests.get`. In that case your test case would look like
this:
import unittest
from unittest import mock
from my.great.package import MyGreatClass
# This method will be used by the mock to replace requests.get
def mocked_requests_get(*args, **kwargs):
# Same as above
class MyGreatClassTestCase(unittest.TestCase):
# Now we must patch 'my.great.package.requests.get'
@mock.patch('my.great.package.requests.get', side_effect=mocked_requests_get)
def test_fetch(self, mock_get):
# Same as above
if __name__ == '__main__':
unittest.main()
Enjoy!
|
How to Setup LIBSVM for Python
Question: I built [libsvm](http://www.csie.ntu.edu.tw/~cjlin/libsvm/) on Mac OS X with
Make.
$ tar xzfv libsvm-3.17.tar.gz
$ cd libsvm-3.17
$ make
This built the various libsvm binaries:
$ ls
COPYRIGHT heart_scale svm-predict.c svm-train.c tools
FAQ.html java svm-scale svm.cpp windows
Makefile matlab svm-scale.c svm.def
Makefile.win python svm-toy svm.h
README svm-predict svm-train svm.o
I also linked to this in `/usr/local`:
$ ls -la /usr/local/
...
svm -> /usr/local/libsvm-3.17/
And appended the Python bindings to my path:
import sys
sys.path.append('/usr/local/svm/python')
But the Python bindings cannot find the "LIBSVM" library:
$ python test.py
Traceback (most recent call last):
File "test.py", line 8, in <module>
import svmutil
File "/usr/local/svm/python/svmutil.py", line 5, in <module>
from svm import *
File "/usr/local/svm/python/svm.py", line 22, in <module>
raise Exception('LIBSVM library not found.')
Exception: LIBSVM library not found.
Can anyone tell me how to set this up? In the python readme for libsvm the
only description is
Installation
============
On Unix systems, type
> make
The interface needs only LIBSVM shared library, which is generated by
the above command. We assume that the shared library is on the LIBSVM
main directory or in the system path.
What am I missing?
Answer: Instead of going through libsvm in order to access it with Python (I installed
libsvm through MacPorts, and `import svmutil` fails), you might want to
install the popular [scikit-learn](http://scikit-learn.org/stable/) package,
which contains an optimized version of [libsvm with Python
bindings](http://scikit-learn.org/stable/modules/svm.html#svm).
The install is very simple with [MacPorts](http://macports.org): `sudo port
install py27-scikit-learn` (adapt `py27` to whatever version of Python you
use).
|
Scipy sparse matrices - purpose and usage of different implementations
Question: Scipy has [many different types of sparse matrices
available](http://docs.scipy.org/doc/scipy/reference/sparse.html). What are
the most important differences between these types, and what is the difference
in their intended usage?
I'm developing a code in python based on a sample code1 in Matlab. One section
of the code utilizes sparse matrices - which seem to have a single (annoying)
type in Matlab, and I'm trying to figure out which type I should use2 in
python.
* * *
1: This is for a class. Most people are doing the project in Matlab, but I
like to create unnecessary work and confusion --- apparently.
2: This is an academic question: I have the code working properly with the
'[CSR](http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html#scipy.sparse.csr_matrix)'
format, but I'm interesting in knowing what the optimal usages are.
Answer: Sorry if I'm not answering this completely enough, but hopefully I can provide
some insight.
CSC (Compressed Sparse Column) and CSR (Compressed Sparse Row) are more
compact and efficient, but difficult to construct "from scratch". Coo
(Coordinate) and DOK (Dictionary of Keys) are easier to construct, and can
then be converted to CSC or CSR via `matrix.tocsc()` or `matrix.tocsr()`.
CSC is more efficient at accessing column-vectors or column operations,
generally, as it is stored as arrays of columns and their value at each row.
CSR matrices are the opposite; stored as arrays of rows and their values at
each column, and are more efficient at accessing row-vectors or row
operations.
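A minimal sketch of that workflow, assuming SciPy is installed: assemble the matrix in COO form (easy to construct from triplets), then convert to CSR for fast row access and arithmetic:

```python
import numpy as np
from scipy.sparse import coo_matrix

# triplet (row, col, value) form is easy to build incrementally
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 2, 0, 2])
vals = np.array([4.0, 5.0, 6.0, 7.0])

m = coo_matrix((vals, (rows, cols)), shape=(3, 3))
csr = m.tocsr()   # efficient row slicing and matrix products
```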
|
src/lxml/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
Question: 1. I am running the following comand for installing the packages in that file " `pip install -r requirements.txt --download-cache=~/tmp/pip-cache`".
2. requirement.txt contains pacakages like
# Data formats
# ------------
PIL==1.1.7 #
html5lib==0.90
httplib2==0.7.4
lxml==2.3.1
# Documentation
# -------------
Sphinx==1.1
docutils==0.8.1
# Testing
# -------
behave==1.1.0
dingus==0.3.2
django-testscenarios==0.7.2
mechanize==0.2.5
mock==0.7.2
testscenarios==0.2
testtools==0.9.14
wsgi_intercept==0.5.1
while comming to install "lxml" packages i am getting the following eror
Requirement already satisfied (use --upgrade to upgrade): django-testproject>=0.1.1 in /usr/lib/python2.7/site-packages/django_testproject-0.1.1-py2.7.egg (from django-testscenarios==0.7.2->-r requirements.txt (line 33))
Installing collected packages: lxml, Sphinx, docutils, behave, dingus, mock, testscenarios, testtools, South
Running setup.py install for lxml
Building lxml version 2.3.1.
Building without Cython.
ERROR: /bin/sh: xslt-config: command not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
building 'lxml.etree' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w
In file included from src/lxml/lxml.etree.c:239:0:
src/lxml/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
Complete output from command /usr/bin/python -c "import setuptools;__file__='/root/Projects/ir/build/lxml/setup.py';execfile(__file__)" install --single-version-externally-managed --record /tmp/pip-SwjFm3-record/install-record.txt:
Building lxml version 2.3.1.
Building without Cython.
ERROR: /bin/sh: xslt-config: command not found
** make sure the development packages of libxml2 and libxslt are installed **
Using build configuration of libxslt
running install
running build
running build_py
copying src/lxml/cssselect.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/__init__.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/sax.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/pyclasslookup.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/usedoctest.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/doctestcompare.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/_elementpath.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/ElementInclude.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/builder.py -> build/lib.linux-i686-2.7/lxml
copying src/lxml/html/clean.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/__init__.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/_dictmixin.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/ElementSoup.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/usedoctest.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/defs.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/builder.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/_html5builder.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/diff.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/html5parser.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/_diffcommand.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/_setmixin.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/soupparser.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/html/formfill.py -> build/lib.linux-i686-2.7/lxml/html
copying src/lxml/isoschematron/__init__.py -> build/lib.linux-i686-2.7/lxml/isoschematron
copying src/lxml/etreepublic.pxd -> build/lib.linux-i686-2.7/lxml
copying src/lxml/tree.pxd -> build/lib.linux-i686-2.7/lxml
copying src/lxml/etree_defs.h -> build/lib.linux-i686-2.7/lxml
copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/rng
copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.linux-i686-2.7/lxml/isoschematron/resources/xsl/iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i686 -mtune=atom -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-i686-2.7/src/lxml/lxml.etree.o -w
In file included from src/lxml/lxml.etree.c:239:0:
src/lxml/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/root/Projects/ir/build/lxml/setup.py';execfile(__file__)" install --single-version-externally-managed --record /tmp/pip-SwjFm3-record/install-record.txt failed with error code 1
Storing complete log in /root/.pip/pip.log
Can anyone check and guide me on what the problem is and why? Is there any
package missing that I need to install?
Answer: Install the libxml2 and libxslt development headers (libxml2-devel and
libxslt-devel on RPM-based systems). On Debian/Ubuntu:
sudo apt-get install libxml2-dev libxslt1-dev
After installing them, re-run the pip install command above.
|
Co-dependent default arguments
Question: I have code like so:
import random
def helper():
c = random.choice([False, True]),
d = 1 if (c == True) else random.choice([1, 2, 3])
return c, d
class Cubic(object):
global coefficients_bound
def __init__(self, a = random.choice([False, True]),
b = random.choice([False, True]),
(c, d) = helper()):
...
...
The helper() function is introduced as I cannot have co-dependent arguments in
the definition of the function itself - Python complains that it cannot find c
when it's calculating d.
I want to be able to create an object of this class like so, changing a
default argument:
x = Cubic(c = False)
But I get this error:
Traceback (most recent call last):
File "cubic.py", line 41, in <module>
x = Cubic(c = False)
TypeError: __init__() got an unexpected keyword argument 'c'
Is this possible with how I've written it? If not, what way should I do this?
Answer: How about simply:
class Cubic(object):
def __init__(self, c=None, d=None):
if c is None:
c = random.choice([False, True])
if d is None:
d = 1 if c else random.choice([1, 2, 3])
print c, d
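To check that the keyword override now works, here is a self-contained sketch of the same pattern (the stored attributes are assumptions for illustration):

```python
import random

class Cubic(object):
    def __init__(self, c=None, d=None):
        if c is None:
            c = random.choice([False, True])
        if d is None:
            d = 1 if c else random.choice([1, 2, 3])
        # store for inspection; the real class would go on to use c and d
        self.c, self.d = c, d

x = Cubic(c=False)            # no TypeError: c is an ordinary keyword argument
assert x.c is False
assert Cubic(c=True).d == 1   # d's default depends on the value of c
```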
|
Python Google App Engine Receiving a string in stead of JSON object
Question: I am sending a HTTP POST request from android to a server using the script
below
URI website = new URI("http://venkygcm.appspot.com");
HttpClient client = new DefaultHttpClient();
HttpPost request = new HttpPost(website);
request.setHeader("Content-Type", "application/json");
String currentDateTimeString = DateFormat.getDateTimeInstance().format(new Date());
JSONObject obj = new JSONObject();
obj.put("reg_id","Registration ID sent to the server");
obj.put("datetime",currentDateTimeString);
StringEntity se = new StringEntity(obj.toString());
request.setEntity(se);
HttpResponse response = client.execute(request);
String out = EntityUtils.toString(response.getEntity());
As I have sent a JSON object, I expect to receive a JSON object on the server.
Instead I get a string containing the data of the body. The server is written
in Python on Google App Engine.
import webapp2
class MainPage(webapp2.RequestHandler):
def post(self):
self.response.out.write(" This is a POST Request \n")
req = self.request
a = req.get('body')
self.response.out.write(type(a))
app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
I tried what AK09 suggested but I still get a string-type object. What
should be my next step?
import webapp2
import json
class MainPage(webapp2.RequestHandler):
def post(self):
self.response.out.write("This is a POST Request \n")
req = self.request
a = req.get('body')
b = json.dumps(a)
self.response.out.write(type(a))
self.response.out.write(type(b))
app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
Answer: Finally this code worked
import webapp2
import json
class MainPage(webapp2.RequestHandler):
def post(self):
self.response.out.write("This is a POST Request \n")
req = self.request
a = req.body
b = json.loads(a)
self.response.out.write(b)
self.response.out.write(b['reg_id'])
self.response.out.write(b['datetime'])
self.response.out.write(type(b))
app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
b comes out to be of type dict, as is required (its values can be accessed by key).
|
Python requests library pre_request hook
Question: I'm having an issue with trying to switch from an ancient version of python-
requests (0.14) to a newer version (1.1, 1.2 whatever). The problem is that we
have a system which posts images on twitter using the following library:
<https://github.com/maraujop/requests-oauth>
The main problem is with this line of code:
# This is taken from the documentation of the library mentioned above
session = requests.session(hooks={'pre_request': oauth_hook})
session.post(...)
The Session constructor no longer accepts the hooks parameter. I found that
the post method accepts a hooks argument though, and changed the code like
this:
session = requests.session()
session.post(..., hooks={'pre_request': oauth_hook})
This is better than before; however, the pre_request hook is no longer
accepted in newer versions of python-requests (you can find this hook in the
documentation of requests 0.14 but not in any of the newer versions). Can
somebody help with this?
Answer: You need to create the request object yourself, pass that to the hook, then
call `.prepare()` on it:
import requests
request = requests.Request('POST', url, ...)
request = oauth_hook(request)
prepared = request.prepare()
then send:
session = requests.session()
resp = session.send(prepared)
|
Photo folder string replacement Regular Expressions python
Question: I would like to replace
text = '2012-02-23 | My Photo Folder'
with
new_text = '20120223_MyPhotoFolder'
I found a regular expression that matches my date format here
<http://regexlib.com/RETester.aspx?regexp_id=933>
what is the best way to approach this? Do I need regular expression groups and
then do the replacement in those groups?
I assume I could simply search for " | " and replace it with "_", and "-" with "", using normal string.replace(), but I would like to find a more general solution.
Thanks in advance.
Answer:
import re
text = '2012-02-23 | My Photo Folder'
pattern = r'''
(?P<year>\d{4}) # year group consisting of 4 digits
-
(?P<month>\d{2}) # month group consisting of 2 digits
-
(?P<date>\d{2}) # date group consisting of 2 digits
\s\|\s
(?P<name_with_spaces>.*$) # name_with_spaces consuming the rest of the string to the end
'''
compiled = re.compile(pattern, re.VERBOSE)
result = compiled.match(text)
print('{}{}{}_{}'.format(
result.group('year'),
result.group('month'),
result.group('date'),
result.group('name_with_spaces').translate(None,' ')))
Output:
>>>
20120223_MyPhotoFolder
* * *
### A little explanation:
[`re.VERBOSE`](http://docs.python.org/2/library/re.html#re.VERBOSE) lets us
write regular expression in multiple lines making it more readable and also
allows comments.
[`'{}{}{}_{}'.format`](http://docs.python.org/2/library/string.html#format-
examples) is just a string interpolation method which puts arguments in the
places specified by `{}`.
[`translate`](http://docs.python.org/2/library/string.html#string.translate)
method is applied to `result.group('name_with_spaces')` to remove spaces.
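For comparison, the same transformation can be done in one pass with `re.sub` and a replacement function — a shorter sketch of the same idea:

```python
import re

text = '2012-02-23 | My Photo Folder'
new_text = re.sub(
    r'(\d{4})-(\d{2})-(\d{2})\s\|\s(.*)',
    lambda m: '{}{}{}_{}'.format(m.group(1), m.group(2), m.group(3),
                                 m.group(4).replace(' ', '')),
    text)
print(new_text)  # 20120223_MyPhotoFolder
```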
|
Python - plotting large number of lines
Question: I am trying to read in a file containing XY endpoints of line segments and a
value associated with the segment, then plot the line segments colored by the
value given. The problem I am having is that there is potentially hundreds of
thousands to millions of line segments and when I attempt to read in these
larger files I run into a memory error. Is there a more memory efficient way
of doing this?
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
import sys
import csv
if len(sys.argv) > 1:
flofile = sys.argv[1]
else:
flofile = "GU3\GU3.flo"
fig = plt.figure()
ax = fig.add_subplot(111)
jet = cm = plt.get_cmap('jet')
cNorm = colors.Normalize(vmin=0)
scalarMap = cmx.ScalarMappable(norm=cNorm,cmap=jet)
with open(flofile) as FLO:
title = FLO.readline()
limits = [float(tp) for tp in FLO.readline().split()]
FLO.readline()#headers
for line in FLO:
if 'WELLS' in line: break
        frac = ([float(tp) for tp in line.split()])
        colorVal = scalarMap.to_rgba(frac[4])  # assumes the 5th column holds the segment's value
        ax.plot([frac[0],frac[2]],[frac[1],frac[3]],color=colorVal)
#ax.plot(*call_list)
scalarMap._A = []
plt.colorbar(scalarMap)
plt.xlim([0,limits[0]])
plt.ylim([0,limits[1]])
plt.show()
This code works for small files. Thanks.
Answer: You might consider doing the plotting on a bitmap image first, which doesn't
have the memory problem, and after that fine tune the plot/image with
matplotlib. As an example:
from PIL import Image
from PIL import ImageDraw
import random
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
s = (500,500)
N = 100000
im = Image.new('RGBA', s, (255,255,255,255))
draw = ImageDraw.Draw(im)
for i in range(N):
x1 = random.random() * s[0]
y1 = random.random() * s[1]
x2 = random.random() * s[0]
y2 = random.random() * s[1]
c = random.random() * 256
draw.line(((x1,y1),(x2,y2)), fill=(0, 255 - int(c), int(c), 255), width=1)
plt.imshow(np.asarray(im), extent=(-1,1,-1,1), aspect='equal', origin='lower')
plt.show()
|
Python processing for loop for comparison
Question: I need to compare 2 tables of similar schema and have 2 generator objects. How
do I compare these 2 generators row by row in Python? I need to implement the
following comparison logic:
    If generator-object-1 == generator-object-2:
        then read-next-row-generator-object-1, read-next-row-generator-object-2
    elif generator-object-1 > generator-object-2:
        then read-next-row-generator-object-2
    elif generator-object-1 < generator-object-2:
        then read-next-row-generator-object-1
Is there any better way to do in Python?
Answer: I used this in the past:
import operator
def mergeiter(*iterables, **kwargs):
"""Given a set of sorted iterables, yield the next value in merged order"""
iterables = [iter(it) for it in iterables]
iterables = {i: [next(it), i, it] for i, it in enumerate(iterables)}
if 'key' not in kwargs:
key = operator.itemgetter(0)
else:
key = lambda item, key=kwargs['key']: key(item[0])
while True:
value, i, it = min(iterables.values(), key=key)
yield value
try:
iterables[i][0] = next(it)
except StopIteration:
del iterables[i]
if not iterables:
raise
This would list items from the given iterables in sorted order, provided the
input iterables are themselves already sorted.
The above generator would iterate over your two generators in the same order
as your pseudo-code would.
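Note that the standard library offers the same merged-order iteration via `heapq.merge`, which can replace the hand-rolled generator for simple cases:

```python
import heapq

gen1 = iter([1, 3, 5, 7])   # stand-ins for the two sorted table generators
gen2 = iter([2, 3, 6])
merged = list(heapq.merge(gen1, gen2))
print(merged)  # [1, 2, 3, 3, 5, 6, 7]
```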
|
Trouble coding an LED selector in Python
Question: I have 4 LEDs connected to GPIO outputs of a Raspberry Pi. I want to use the
argv command so that I can select the LEDs using a simple binary code. For
example:
python test.py 1010
This would light up the first and third LEDs in the row. The problem is I
don't think I'm approaching this correctly. Here is my code so far:
from sys import argv
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(11, GPIO.OUT)
GPIO.setup(12, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(15, GPIO.OUT)
x, y = argv
print "Script:", x
if y == '0000':
GPIO.output(11, 0)
GPIO.output(12, 0)
GPIO.output(13, 0)
GPIO.output(15, 0)
if y == '0001':
GPIO.output(11, 0)
GPIO.output(12, 0)
GPIO.output(13, 0)
GPIO.output(15, 1)
GPIO.cleanup()
I'm leaving out the rest of the combinations so I don't bloat this question. I'm
running into several syntax issues, but I'm wondering if I should just scrap
this and go about it another way. I'm new and any advice would be appreciated.
Answer: It sounds very much like what you really want is to map the pieces of your
input string to the values of your calls to `GPIO.output`. You can do that
easily (and with far less code than you currently have) by iterating over the
control string:
led_map = {
# This maps each "bit" of your input string to the correct ID for GPIO.
0 : 11,
1 : 12,
2 : 13,
3 : 15
}
for i in xrange(len(y)):
bit = y[i]
GPIO.output(led_map[i], int(bit))
This setup prevents you from having to code each permutation separately (which
quickly becomes terrible even with only two or three bits, let alone four or
more). Rather than thinking in terms of permutations, you can just consider
the bits individually, and perform the appropriate functions on each of them
in turn.
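The loop can be sanity-checked without a Raspberry Pi by computing the (pin, state) pairs it would emit — a small sketch using the same mapping:

```python
led_map = {0: 11, 1: 12, 2: 13, 3: 15}

def pins_to_set(bits):
    """Return the (pin, state) pairs for a bit string like '1010'."""
    return [(led_map[i], int(bit)) for i, bit in enumerate(bits)]

print(pins_to_set('1010'))  # [(11, 1), (12, 0), (13, 1), (15, 0)]
```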
|
removing linebreaks in python?
Question: I'm working on a simple python game in which the player attempts to guess
letters contained in a word. The problem is, when I print a word, it's
printing the \n at the end.
From my initial research, I think I need to use rstrip() to remove it.
However, I'm not sure where it would go.
Sorry for the newbie question.
import random
with open('wordlist.txt') as wordList:
secretWord = random.sample(list(wordList), 1)
print (secretWord)
Answer: You can use `.strip()` to strip out the surrounding whitespace (including the trailing newline):

    secret_word = random.choice(list(wordList)).strip()
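Putting it together with the question's code — a sketch that simulates the file contents so it runs standalone (the word list below is made up):

```python
import random

lines = ['apple\n', 'banana\n', 'cherry\n']   # stand-in for list(wordList)
secret_word = random.choice(lines).strip()
print(secret_word)                # the word, without the trailing newline
assert '\n' not in secret_word
```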
|
Why do I have to do `sys.stdin = codecs.getreader(sys.stdin.encoding)(sys.stdin)`?
Question: I'm writing a python program which upper-cases all input (a replacement for
the non-working `tr '[:lowers:]' '[:upper:]'`). The locale is `ru_RU.UTF-8`
and I use `PYTHONIOENCODING=UTF-8` to set the STDIN/STDOUT encodings. This
correctly sets `sys.stdin.encoding`. **So, why do I still need to explicitly
create a decoding wrapper if`sys.stdin` already knows the encoding?** If I
don't create the wrapping reader, the `.upper()` function doesn't work
correctly (does nothing for non-ASCII characters).
import sys, codecs
sys.stdin = codecs.getreader(sys.stdin.encoding)(sys.stdin) #Why do I need this?
for line in sys.stdin:
sys.stdout.write(line.upper())
Why does `stdin` have `.encoding` if it doesn't use it?
Answer: To answer "why", we need to understand Python 2.x's built-in
[`file`](http://docs.python.org/2/library/stdtypes.html#bltin-file-objects)
type,
[`file.encoding`](http://docs.python.org/2/library/stdtypes.html#file.encoding),
and their relationship.
The built-in `file` object deals with raw bytes---always reads and writes raw
bytes.
The `encoding` attribute describes the encoding of the raw bytes in the
stream. This attribute may or may not be present, and may not even be reliable
(e.g. if we set `PYTHONIOENCODING` incorrectly in the case of standard streams).
The only time any automatic conversion is performed by `file` objects is when
writing `unicode` object to that stream. In that case it will use the
`file.encoding` if available to perform the conversion.
In the case of reading data, the file object will not do any conversion
because it returns raw bytes. The `encoding` attribute in this case is a hint
for the user to perform conversions manually.
`file.encoding` is set in your case because you set the `PYTHONIOENCODING`
variable and the `sys.stdin`'s `encoding` attribute was set accordingly. To
get a text stream we have to wrap it manually as you have done in your example
code.
To think about it another way, imagine that we didn't have a separate text
type (like Python 2.x's `unicode` or Python 3's `str`). We can still work with
text by using raw bytes, but keeping track of the encoding used. This is kind
of how the `file.encoding` is meant to be used (to be used for tracking the
encoding). The reader wrappers that we create automatically do the tracking
and conversions for us.
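The conversion a reader wrapper performs can also be done by hand; a minimal sketch of the decode step:

```python
import codecs

data = b'caf\xc3\xa9\n'              # raw UTF-8 bytes, as read from a byte stream
text = codecs.decode(data, 'utf-8')  # what the wrapper does on each read
assert text.upper() == u'CAF\xc9\n'  # .upper() now handles the non-ASCII char
```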
Of course, automatically wrapping `sys.stdin` would be nicer (and that is what
Python 3.x does), but changing the default behaviour of `sys.stdin` in Python
2.x will break backwards compatibility.
The following is a comparison of `sys.stdin` in Python 2.x and 3.x:
# Python 2.7.4
>>> import sys
>>> type(sys.stdin)
<type 'file'>
>>> sys.stdin.encoding
'UTF-8'
>>> w = sys.stdin.readline()
## ... type stuff - enter
>>> type(w)
<type 'str'> # In Python 2.x str is just raw bytes
>>> import locale
>>> locale.getdefaultlocale()
('en_US', 'UTF-8')
The [`io.TextIOWrapper`
class](https://docs.python.org/3/library/io.html#io.TextIOWrapper) is part of
the standard library since Python 2.6. This class has an `encoding` attribute
that is used to convert raw bytes to-and-from Unicode.
# Python 3.3.1
>>> import sys
>>> type(sys.stdin)
<class '_io.TextIOWrapper'>
>>> sys.stdin.encoding
'UTF-8'
>>> w = sys.stdin.readline()
## ... type stuff - enter
>>> type(w)
<class 'str'> # In Python 3.x str is Unicode
>>> import locale
>>> locale.getdefaultlocale()
('en_US', 'UTF-8')
The `buffer` attribute provides access to the raw byte stream backing `stdin`;
this is usually a `BufferedReader`. Note below that it does **not** have an
`encoding` attribute.
# Python 3.3.1 again
>>> type(sys.stdin.buffer)
<class '_io.BufferedReader'>
>>> w = sys.stdin.buffer.readline()
## ... type stuff - enter
>>> type(w)
<class 'bytes'> # bytes is (kind of) equivalent to Python 2 str
>>> sys.stdin.buffer.encoding
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: '_io.BufferedReader' object has no attribute 'encoding'
In Python 3 the presence or absence of the `encoding` attribute is consistent
with the type of stream used.
|
Numpy not converting to new version
Question: I just installed Numpy 1.7.1 on Ubuntu by downloading the `tar.gz` file from
[Sourceforge](http://sourceforge.net/projects/numpy/files/NumPy/1.7.1rc1/). I
did `tar zxvf` on the tar file, then `python setup.py build` and `sudo python
setup.py install`.
But my Numpy is still the old version (1.3.0). (I checked by running Python
and asks for `numpy.version.version`.) Why is that?
Answer: You can see a little bit about where your packages are and make sure they're
built in the right place by using:
which python
or
file `which python`
(those are backticks) on the command line, and this will tell you where your
python installation is.
Then, launch `python` (or `ipython`) and run
import numpy
numpy.__file__
If you've installed any packages that do work, import and look at their file
locations too. You'll probably notice that `numpy.__file__` is not in the same
location as where you just installed your new numpy package. Be sure that
these match up, or add the location where you installed the new numpy to
`PYTHONPATH`.
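A related check from inside the interpreter, using only the standard library, shows which binary is running and where it installs packages (so you can tell whether `setup.py install` targeted the same interpreter):

```python
import sys
import sysconfig

print(sys.executable)                    # the python binary actually running
print(sysconfig.get_paths()['purelib'])  # its site-packages directory
```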
|
remote db initialization in Flask application
Question: I have a Flask app running on Server-A with a MySQL DB. Before using the app we did
something like this:
$ python -c "from your_app import db; db.create_all()"
to initialize the DB. Now we are planning to move DB to a new server,
Server-B. So the app will be running at Server-A and its mysql DB will be at
Server-B. In this case, how do I need to initialize the database? And where do
I need to execute the `create_all()` command? Does Server-B need any Flask
specific mysql packages?
Answer: Assuming that you have updated the application's database connection
information so that `db` now points at the MySQL database on Server B you
should be able to use the exact same command on Server A. Whatever you are
using to create the tables should still create the necessary tables in the
database on Server B.
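Assuming Flask-SQLAlchemy, pointing the app at Server-B is just a connection-string change; all of the names below are placeholders:

```python
# hypothetical values — substitute your own user, password, host and db name
SQLALCHEMY_DATABASE_URI = 'mysql://user:password@server-b.example.com/mydb'
```

Server-B itself only needs MySQL; the Flask/SQLAlchemy packages (and the MySQL client driver) stay on Server-A, which is also where `create_all()` runs.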
|
Opencv 2.4.3 estimateRigidTransform in python
Question: This is really a mess for me and I have a hard time trying to solve this. I
want to use the method cv2.estimateRigidTransform in Python with numpy arrays
but I can't find anywhere an example or something else that explains the
format of the src and dst numpy arrays needed.
Could somebody send me a little example of these 2 arrays that will work with
this method? I have all the data (in 2D of course) but can't find how to shape
it in a numpy array.
Please help me. Thank you!
Answer: Here's a basic example that tries to align two random sets of 10 points
import numpy as np
import cv2
shape = (1, 10, 2) # Needs to be a 3D array
source = np.random.randint(0, 100, shape).astype(np.int)
target = source + np.array([1, 0]).astype(np.int)
transformation = cv2.estimateRigidTransform(source, target, False)
[Documentation is
here](http://docs.opencv.org/modules/video/doc/motion_analysis_and_object_tracking.html#estimaterigidtransform).
|
What is causing the variance in the execution time of this python script?
Question: I have a simple python script, however it displays a much higher execution time
when it's run for the first time in a while. If I execute it again immediately
afterwards, it's faster by a few factors.
This script is run on a private test server with no applications running on it
so I don't think a lack of system resources is what is causing it to run
slower.
#!/usr/bin/env python
import redis,time,sys
print "hello"
$ time python test.py
real 0m0.149s
user 0m0.072s
sys 0m0.076s
$ time python test.py
real 0m0.051s
user 0m0.020s
sys 0m0.028s
Can anyone explain the variance in the execution time?
I've ran similar tests for php scripts that include external scripts and
there's negligible variance in the execution time of that script.
This variance affects my application because such scripts are called several
times and cause the response to be delivered between 70ms and 450ms.
Answer: There can be several factors. Two I can think of right now:
1. Initial byte compilation.
Python caches the compiled bytecode in `.pyc` files, on a first run that file
needs to be created, subsequent runs only need to verify the timestamp on the
byte code cache.
2. Disk caching
The Python interpreter, the 3 libraries you refer to directly, and anything
_those_ libraries use, all need to be loaded from disk, quite apart from the
script and its bytecode cache. The OS caches such files for faster access.
If you ran other things on the same system, those files will be flushed from
the cache and need to be loaded again.
The same applies to directory listings; the checks for where to find the
modules in the module search path and tests for bytecode caches all are sped
up by cached directory information.
If such startup times affect your application, consider creating a daemon that
services these tasks as a service. RPC calls (using sockets or localhost
network connections) will almost always beat those startup costs. A message
queue could provide you with the architecture for such a daemon.
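The byte-compilation cost from point 1 can be observed directly with `py_compile` — a small sketch using a throwaway file:

```python
import os
import py_compile
import tempfile

src = os.path.join(tempfile.mkdtemp(), 'mod.py')
with open(src, 'w') as f:
    f.write("print('hello')\n")

cache = src + 'c'                      # mod.pyc
py_compile.compile(src, cfile=cache)   # what Python does implicitly on first run
assert os.path.exists(cache)           # subsequent runs reuse this file
```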
|
Django - call original and overriden save method
Question: This may be a noobish question but it bothers me quite a lot (I'm quite new to
both django and python)
In my Django app, I overrode the save() method of a model to perform some
interaction on the file system.
I created a form class like this :
class AddItemForm(ModelForm):
class Meta:
model = OriginalModel
So, in my views, when I call form.save(), eveything works fine.
But, when testing my app, I'd like to be able to call the original save()
method to avoid creating plenty of files that I won't use and have to take
care of.
What I tried is to create a savebis() method, in order to preserve the
original save() method, but then how can I pass it to the ModelForm, so that I
can call form.save() or form.savebis()?
EDIT : savebis() is already written and working in my model. I want to be able
to call it from a modelform instance, but I don't know how to do this.
Answer: From your question, it sounds to me like you've got some optional processing
that should occur in the `Model`'s save method. As suggested in the question
comments, just add a `TESTING = True` type constant to your `settings.py` file
used during testing and check this value in the `Model` `save` method:
from django.conf import settings
class OriginalModel(Model):
...
def save(self, *args, **kwargs):
if not settings.TESTING:
# Do some file system interaction, but not during testing.
pass
# Now save instance as per normal.
            return super(OriginalModel, self).save(*args, **kwargs)
If you go down the path of overriding the `ModelForm` `save` method then you
have to replicate existing functionality of the `ModelForm` `save` method, and
changing it to call you model's `savebis()` method instead of the `save()`
method. I'd advise against going down this path as it make the code more
complex than need be.
|
Finding first N occurrences of regex in Python
Question: So this should be easy but I've somehow missed the answer on SO or in the
Python docs. I am using this code:
myregex.findall(source)
This produces all matches of myregex as a list. Now, the problem is that
source is long and I only need first 6 occurrences of substring matching
myregex. I imagine that it would be much faster if matching process could stop
after finding first n occurrences. How do I do something like:
myregex.findall(source, n)
?
Answer: Use [`re.finditer`](http://docs.python.org/2/library/re.html#re.finditer):
import itertools
for m in itertools.islice(re.finditer(pat, text), 6):
...
`re.finditer` is a generator that produces `match` objects on demand. You can
get the complete match from `m.group(0)`, or individual pattern matches from
`m.group(1)` and up.
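To materialise the first n matches as a list (mirroring the shape `findall` returns), the same islice approach works — a quick sketch:

```python
import itertools
import re

text = 'a1 b2 c3 d4 e5 f6 g7 h8'
first_six = [m.group(0)
             for m in itertools.islice(re.finditer(r'[a-z]\d', text), 6)]
print(first_six)  # ['a1', 'b2', 'c3', 'd4', 'e5', 'f6']
```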
|
Converting a matrix created with MATLAB to Numpy array with a similar syntax
Question: I'm playing with the code snippets of the course I'm taking which is
originally written in MATLAB. I use Python and convert these matrices to
Python for the toy examples. For example, for the following MATLAB matrix:
s = [2 3; 4 5];
I use
s = array([[2,3],[4,5]])
It is too time consuming for me to re-write all the toy examples this way
because I just want to see how they work. Is there a way to directly give the
MATLAB matrix as string to a Numpy array or a better alternative for this?
For example, something like:
s = myMagicalM2ArrayFunction('[2 3; 4 5]')
Answer: [`numpy.matrix`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.html)
can take string as an argument.
Docstring:
matrix(data, dtype=None, copy=True)
[...]
Parameters
----------
data : array_like or string
If `data` is a string, it is interpreted as a matrix with commas
or spaces separating columns, and semicolons separating rows.
* * *
In [1]: import numpy as np
In [2]: s = '[2 3; 4 5]'
In [3]: def mag_func(s):
...: return np.array(np.matrix(s.strip('[]')))
In [4]: mag_func(s)
Out[4]:
array([[2, 3],
[4, 5]])
|
Why isn't Python importing this correctly?
Question: So, I have two files, HelloWorld.py and EnterExit.py. Here is the code for
HelloWorld:
import EnterExit
print('Hello world!')
print('What is your name?')
myName = input()
print('It is good to meet you, ' + myName + '!')
end()
And this is EnterExit:
def end():
print('Press enter to continue')
input()
When I run HelloWorld, it works until end() is called. Then it says end()
isn't defined. What am I doing wrong here?
Answer: Either write:
EnterExit.end()
Or:
from EnterExit import end # or import *
end()
|
Django: How to set foreign key checks to 0
Question: OK, so I'm migrating a database from SQLite to MySQL. I had a few errors but I
already resolved them. Now I have a problem with the foreign_key_checks option
because I don't know how to disable it. I tried:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'slave',
'USER': 'root',
'PASSWORD': 'root',
'OPTIONS': {
"init_command": "SET foreign_key_checks = 0;",
},
'HOST': '',
'PORT': '',
}
}
But it doesn't work and I don't know why.
Of course, I use JSON files for the migration:
python manage.py dumpdata --indent 2 --natural > dump.json
python manage.py loaddata dump.json
When I'm loading data on begining i can see:
SET SQL_AUTO_IS_NULL = 0
SET foreign_key_checks=0
But after some time:
SELECT (1) AS `a` FROM `xxx` WHERE `xxx`.`id` = 8 LIMIT 1
SET foreign_key_checks=1
And then I see an exception. The traceback isn't important because it is
connected with foreign keys; you can read more here:
<http://coffeeonthekeyboard.com/django-fixtures-with-circular-foreign-
keys-480/>
I know that I need to disable this option.
I tried:
<http://djangosaur.tumblr.com/post/7842592399/django-loaddata-mysql-foreign-
key-constraints>
But like I said, it doesn't work.
Can someone help?
Answer: you can put this at the end of your `settings.py`:
import sys
if 'loaddata' in sys.argv:
# only change this for loaddata command.
DATABASES['default']['OPTIONS'] = {
"init_command": "SET foreign_key_checks = 0;",
}
|
matplotlib major display issue with dense data sets
Question: I've run into a fairly serious issue with matplotlib and Python. I have a
dense periodogram data set and want to plot it. The issue is that when there
are more data points than can be plotted on a pixel, the package does not pick
the min and max to display. This means a casual look at the plot can lead you
to incorrect conclusions.
Here's an example of such a problem:

The dataset was plotted with `plot()` and `scatter()` overlayed. You can see
that in the dense data fields, the blue line that connects the data does not
reach the actual peaks, leading a human viewer to conclude the peak at ~2.4 is
the maximum, when it's really not.
If you zoom-in or force a wide viewing window, it is displayed correctly.
`rasterize` and `aa` keywords have no effect on the issue.
Is there a way to ensure that the min/max points of a `plot()` call are always
rendered? Otherwise, this needs to be addressed in an update to matplotlib.
I've never had a plotting package behave like this, and this is a pretty major
issue.
Edit:
x = numpy.linspace(0,1,2000000)
y = numpy.random.random(x.shape)
y[1000000]=2
plot(x,y)
show()
Should replicate the problem. Though it may depend on your monitor resolution.
By dragging and resizing the window, you should see the problem. One data
point should stick out a y=2, but that doesn't always display.
Answer: This is due to the path-simplification algorithm in matplotlib. While it's
certainly not desirable in some cases, it's deliberate behavior to speed up
rendering.
The simplification algorithm was changed at some point to avoid skipping
"outlier" points, so newer versions of mpl don't exhibit this exact behavior
(the path is still simplified, though).
If you don't want to simplify paths, then you can disable it in the rc
parameters (either in your `.matplotlibrc` file or at runtime).
E.g.
import matplotlib as mpl
mpl.rcParams['path.simplify'] = False
import matplotlib.pyplot as plt
However, it may make more sense to use an "envelope" style plot. As a quick
example:
import matplotlib.pyplot as plt
import numpy as np
def main():
num = 10000
x = np.linspace(0, 10, num)
y = np.cos(x) + 5 * np.random.random(num)
fig, (ax1, ax2) = plt.subplots(nrows=2)
ax1.plot(x, y)
envelope_plot(x, y, winsize=40, ax=ax2)
plt.show()
def envelope_plot(x, y, winsize, ax=None, fill='gray', color='blue'):
if ax is None:
ax = plt.gca()
# Coarsely chunk the data, discarding the last window if it's not evenly
# divisible. (Fast and memory-efficient)
numwin = x.size // winsize
ywin = y[:winsize * numwin].reshape(-1, winsize)
xwin = x[:winsize * numwin].reshape(-1, winsize)
# Find the min, max, and mean within each window
ymin = ywin.min(axis=1)
ymax = ywin.max(axis=1)
ymean = ywin.mean(axis=1)
xmean = xwin.mean(axis=1)
fill_artist = ax.fill_between(xmean, ymin, ymax, color=fill,
edgecolor='none', alpha=0.5)
line, = ax.plot(xmean, ymean, color=color, linestyle='-')
return fill_artist, line
if __name__ == '__main__':
main()

|
Can this cmp function be better written as a key for sorted()?
Question: I was writing a solution to the problem of how to generate the maximum integer
by rearranging then concatenating integers from a set. I was given that for
the maximum integer, the arrangement of any two of its adjacent integers from
the set when expressed as an integer is greater than the arrangement of them
being swapped. i.e. given {98, 9}; the arrangement 998 is greater than 989.
This holds for all the constituent integers in the examples given below.
I needed to write some code that worked for Python 2.7 as well as 3.3 and came
up with the following Python code:
try:
cmp # Python 2 OK or NameError in Python 3
def maxnum(x):
return ''.join(sorted((str(n) for n in x),
cmp=lambda x,y:cmp(int(y+x), int(x+y))))
except NameError:
# Python 3
from functools import cmp_to_key
def cmp(x, y):
return -1 if x<y else ( 0 if x==y else 1)
def maxnum(x):
return ''.join(sorted((str(n) for n in x),
key=cmp_to_key(lambda x,y:cmp(int(y+x), int(x+y)))))
for numbers in [(1, 34, 3, 98, 9, 76, 45, 4), (54, 546, 548, 60)]:
print('Numbers: %r\n Largest integer: %15s' % (numbers, maxnum(numbers)))
This gives output:
Numbers: (1, 34, 3, 98, 9, 76, 45, 4)
Largest integer: 998764543431
Numbers: (54, 546, 548, 60)
Largest integer: 6054854654
Now Raymond Hettinger states that 'key=' is superior to 'cmp=' and I tend to
agree with him, (before his Pycon 2013 talk ["Transforming code into
Beautiful, Idiomatic Python"](https://www.youtube.com/watch?v=OSGv2VnC0go)).
My problem is that **I haven't been able to solve the problem with a key
function as elegant as the cmp function**.
Any ideas? Thanks.
Answer: There is a quite easy way (or I should say principle) to do it: you just
have to compare the mantissas of the numbers, e.g.:
1 -> 1.0
34 -> 3.4
548 -> 5.48
and so on…
How can you get that? Well, you take the number's length (its digit count minus
one), `length = log10(number)//1`, and divide the number by `10**length`:

    from math import log10

    def mantissa(number):
        return number / 10**(log10(number)//1)
Now you can use the mantissa as key function:
>>> sorted((1, 34, 3, 98, 9, 76, 45, 4), key = mantissa )
[1, 3, 34, 4, 45, 76, 9, 98]
So we just have to reverse the `sorted`-result and now we can use it just like
you did:
    >>> for numbers in [(1, 34, 3, 98, 9, 76, 45, 4), (54, 546, 548, 60)]:
    ...     print 'Numbers: %r\n  Largest integer: %15s' % (
    ...         numbers,
    ...         ''.join(str(i) for i in sorted(numbers, key=mantissa, reverse=True)))
    ...
Numbers: (1, 34, 3, 98, 9, 76, 45, 4)
Largest integer: 989764543431
Numbers: (54, 546, 548, 60)
Largest integer: 6054854654
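Note that the mantissa key does not reproduce the `cmp` ordering exactly — for the first tuple it yields `989764543431` where the `cmp` version gives `998764543431` (it puts 98 before 9, even though "998" > "989"). A key that does agree with the concatenation comparison is to repeat each digit string a few times before comparing lexicographically; repeating until each key is at least twice the longest length is enough, because both `x+y` and `y+x` are then prefixes of the respective keys. This is a sketch, and `maxnum_key` is a name of my choosing, not from the original answer:

```python
def maxnum_key(numbers):
    strs = [str(n) for n in numbers]
    # Repeat each digit string so every key has length >= 2 * max length;
    # lexicographic comparison of the keys then agrees with comparing
    # int(x + y) vs int(y + x) for any pair x, y.
    reps = 2 * max(len(s) for s in strs)
    return ''.join(sorted(strs, key=lambda s: s * reps, reverse=True))

print(maxnum_key((1, 34, 3, 98, 9, 76, 45, 4)))  # 998764543431
print(maxnum_key((54, 546, 548, 60)))            # 6054854654
```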
|
Python module and object names clash
Question: Please consider the following Python modules excerpts:
foo.py:
class Foo:
(...)
bar.py:
import foo
foo = foo.Foo()
The variable foo, which was a module object, is overwritten with a Foo object.
I know that I can use other names for the object, e.g.:
foobar = foo.Foo()
but semantically it makes more sense in my code to have it called foo, since
it will be the only instance.
(I tried to workaround this by dropping classes and using modules only, but I
went back to using classes because using modules only had "robustness"
problems.)
This is kind of a philosophical question, but what is the "right" way of
handling this potential object/module names clash?
Answer: In my opinion there is nothing wrong with what you are currently doing, but to
make it more clear for everyone reading the code I would suggest changing your
code to something like the following:
import foo as foo_mod
foo = foo_mod.Foo()
Or alternatively:
from foo import Foo
foo = Foo()
This prevents the name clash so it will be more obvious that the variable
`foo` in your module is not going to refer to the module of the same name.
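As a small self-contained illustration of the rebinding the question describes — here the `foo` module is built at runtime with `types.ModuleType` so the sketch runs as one file, whereas in real code it would be a separate `foo.py`:

```python
import sys
import types

# Stand-in for a real foo.py, registered so that `import foo` finds it.
foo_module = types.ModuleType('foo')
class Foo:
    pass
foo_module.Foo = Foo
sys.modules['foo'] = foo_module

import foo                 # the name 'foo' refers to the module object
print(type(foo).__name__)  # module
foo = foo.Foo()            # after rebinding it refers to an instance instead
print(type(foo).__name__)  # Foo
```

After the rebinding, the module itself is still reachable via `sys.modules['foo']`, but the local name is gone — which is exactly why the aliasing patterns above are clearer.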
|
Reading and writing unique XML tags in Python
Question: I want to take a .txt file in XML of this sort:
<?xml version = ""?>
<data>
<a1>cat</a1>
<a5>bird</a5>
<a4>window</a4>
</data>
count the length of each string and output:
<?xml version = ""?>
<result>
<r1>3</r1>
<r5>4</r5>
<r4>6</r4>
</result>
What's the best way to write a .txt xml format file with the above output and
corresponding tags? I'm using `xml.etree.cElementTree` to parse it.
Answer:
    import xml.etree.ElementTree as ET
    import re

    xdata = '''
    <data>
        <a1>cat</a1>
        <a5>bird</a5>
        <a4>window</a4>
    </data>'''

    root = ET.fromstring(xdata)
    for apptag in root.findall("*"):
        apptag.text = str(len(apptag.text))
        apptag.tag = re.sub(r'^a(.*)', r'r\1', apptag.tag)
    root.tag = 'result'
    # xml_declaration=True emits the <?xml ...?> line the question asks for
    ET.ElementTree(root).write('test.xhtml', xml_declaration=True, encoding='utf-8')
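Since the question says the input lives in a .txt file, the same transformation works reading from disk with `ET.parse`. A sketch — `input.txt` and `result.txt` are placeholder filenames, and the sample file is written first so the snippet is self-contained:

```python
import re
import xml.etree.ElementTree as ET

# Create a sample input file so the sketch runs on its own.
with open('input.txt', 'w') as f:
    f.write('<data><a1>cat</a1><a5>bird</a5><a4>window</a4></data>')

tree = ET.parse('input.txt')       # parse straight from the file
root = tree.getroot()
for child in root:
    child.text = str(len(child.text))                    # 'cat' -> '3'
    child.tag = re.sub(r'^a(\d+)$', r'r\1', child.tag)   # a5 -> r5
root.tag = 'result'
tree.write('result.txt', xml_declaration=True, encoding='utf-8')

print(open('result.txt').read())
```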
|