entering console inputs from within python file
Question: In my python file, I have made a GUI widget that takes some inputs from the user.
I have also imported a python module that takes some input using
`raw_input()`. I have to use this module as it is; I have no right to change it.
When I run my python file, it asks me for the inputs (due to the `raw_input()` of the
imported module). I want to use the GUI widget inputs in that place. How can I
pass the user input (that we take from the widget) to the `raw_input()` of the imported
module?
Answer: First, if `import`ing it directly into your script isn't actually a
requirement (and it's hard to imagine why it would be), you can just run the
module (or a simple script wrapped around it) as a separate process, using
`subprocess` or `pexpect`.
Let's make this concrete. Say you want to use this silly module `foo.py`:
def bar():
    x = raw_input("Gimme a string")
    y = raw_input("Gimme another")
    return 'Got two strings: {}, {}'.format(x, y)
First write a trivial `foo_wrapper.py`:
import foo
print(foo.bar())
Now, instead of calling `foo.bar()` directly in your real script, run
`foo_wrapper.py` as a child process.
I'm going to assume that you already have the input you want to send it in a
string, because that makes the irrelevant parts of the answer simpler (in
fact, it makes them possible—if you wanted to use some GUI code for that,
there's really no way I could show you how unless you first tell us which GUI
library you're using).
So:
import subprocess
import sys

foo_input = 'String 1\nString 2\n'
with subprocess.Popen([sys.executable, 'foo_wrapper.py'],
                      stdin=subprocess.PIPE, stdout=subprocess.PIPE) as p:
    foo_output, _ = p.communicate(foo_input)
Of course in real life you'll want to use an appropriate path for
`foo_wrapper.py` instead of assuming that it's in the current working
directory, but this should be enough to illustrate the idea.
* * *
Meanwhile, if "I have no right to change it" just means "I don't (and
shouldn't) have checkin rights to the foo project's github site or the
relevant subtree on our company's P4 server" or whatever, there's a really
easy answer: Fork it, and change the fork.
Even if it's got a weak copyleft license like LGPL: fork it, change the fork,
publish your fork under the same license as the original, then use your fork.
If you're depending on the foo package being installed on every target system,
and can't depend on your replacement foo being installed instead, that's a bit
more of a problem. But if the function or method that actually calls
`raw_input` is just a small fraction of the actual code in `foo`, you can fix
that by monkeypatching `foo` at runtime.
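For example, reusing the silly `foo.bar` from above, a minimal monkeypatching sketch
might look like this (the hard-coded strings stand in for whatever your GUI collected):
import foo

def patched_bar():
    # Replacement that skips raw_input() entirely and uses values you
    # already collected from the GUI (hard-coded here for brevity).
    x, y = 'String 1', 'String 2'
    return 'Got two strings: {}, {}'.format(x, y)

foo.bar = patched_bar   # every later call to foo.bar() now uses the patch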
* * *
And that leads to the last-ditch possibility: You can always monkeypatch
`raw_input` itself.
Again, I'm going to assume that you already have the input you need to give it
to make things simpler.
So, first you write a replacement function:
foo_input = ['String 1\n', 'String 2\n']

def fake_raw_input(prompt):
    global foo_input
    return foo_input.pop(0)  # pop from the front so the inputs come out in order
Now, there are two ways you can patch this in. Usually, you want to do this:
import foo
foo.raw_input = fake_raw_input
This means any code in `foo` that calls `raw_input` will see the function you
crammed into its module globals instead of the normal builtin. Unless it does
something really funky (like looking up the builtin directly and copying it to
a local variable or something), this is the answer.
If you need to handle one of those really funky edge cases, and you don't mind
doing something questionable, you can do this:
import __builtin__
__builtin__.raw_input = fake_raw_input
You must do this before the first `import foo` anywhere in your program. Also,
it's not clear whether this is intentionally guaranteed to work, accidentally
guaranteed to work (and should be fixed in the future), or not guaranteed to
work. But it does work (at least for CPython 2.5-2.7, which is what you're
probably using).
|
adding lines in C code using Python script
Question: I need to calculate the execution time of a loop in C code, and for that I need
to write a Python script that adds `gettimeofday` calls before and after the loop by
detecting the comments before and after the loop.
Here is the code:
int main(int argc, char** argv) {
    int i,j;
    int M = argv[0][0] * 10000;
    int res = argc;
    // loopId = 1; depth = 1; outermost
    for (i=0; i<M; i++) {
        // loopId = 2; depth = 2; innermost
        for (j=0; j<M; j++) {
            res *= 7 % 71;
        }
        // end loop (loopId = 2)
        // loopId = 3; depth = 2; innermost
        for (j=0; j<M; j++){
            res += 9 % 91;
        }
        // end loop (loopId = 3)
    }
    // end loop (loopId = 1)
    return res;
}
Answer:
import sys, re

expS = re.compile(r'\s*//\s*loopId = (\d+); depth = \d+; \w+')
expE = re.compile(r'\s*//\s*end loop \(loopId = (\d+)\)')

lines, varcnt = [], 0
with open(sys.argv[1]) as f:
    for line in f:
        line = line.rstrip()
        lines += [ line ]
        m = re.match(expS, line)
        if m:
            varcnt += 1
            loopid = int(m.group(1))
            lines += [ 'gettimeofday(&tv[{}], 0);'.format((loopid-1)*2) ]
            continue
        m = re.match(expE, line)
        if m:
            loopid = int(m.group(1))
            sid, eid = (loopid-1)*2, (loopid-1)*2+1
            lines += [ 'gettimeofday(&tv[{}], 0);'.format(eid) ]
            lines += [ 'printf("Id {}: %ld\\n", tdiff_xxx(&tv[{}],&tv[{}]));'.format(
                loopid, sid, eid) ]

print '#include <sys/time.h>'
print 'struct timeval tv[{}];'.format(varcnt*2)
print 'long tdiff_xxx(struct timeval *t0, struct timeval *t1) {'
print '    return (t1->tv_sec-t0->tv_sec)*1000000 + t1->tv_usec-t0->tv_usec;'
print '}'
for l in lines: print l
|
Setting up Emacs24 for python development
Question: I want to configure Emacs24 for Python development. So far I've followed the
instructions in [this blog post](http://www.yilmazhuseyin.com/blog/dev/emacs-setup-python-development/)
and done all the steps successfully, but nothing
happened when I reopened Emacs. Maybe that's because the blog post is a little
out of date (May 2011) and it was tested on Emacs23. Does anybody have
any better instructions, preferably tested recently on Emacs24?
What I need most is auto-complete for Python (version >3), and Django after
that.
By the way, I'm using Linux Mint 14, if that matters.
Answer: It's probably best to install things from one of the repositories. `pymacs`
and `pyflakes` are both in MELPA. That repository also has `flymake-python-pyflakes`,
which is more or less an extension of the snippet in the blog post.
You will probably have very little use for `ropemacs` at first, because it's
not intended for Python development proper; it's for extending Emacs with
Python (rather than Emacs Lisp).
So, I'd say, first add this:
(add-to-list 'package-archives
             '("melpa" . "http://melpa.milkbox.net/packages/") t)
(add-to-list 'package-archives
             '("marmalade" . "http://marmalade-repo.org/packages/"))
(package-initialize)
to your Emacs init file (usually `~/.emacs`), and evaluate it with `M-x eval-buffer`.
Then run `M-x list-packages`, and search for Pymacs, pyflakes, auto-complete
and whatever else you like. Pressing `RET` on a package name will open a buffer with the
package description. Pressing `i` on a package name will schedule it for
installation, and pressing `x` will install all packages scheduled for
installation.
Also note that rope is a Python library needed for many code-related tasks in
various editors, so you'd need to install that too, sooner or later. Usually,
if you have Python installed you already have the `pip` program, so rather than
installing rope by hand I'd suggest you do:
$ pip install rope ropemacs
For Python 3 it will probably be:
$ pip3 install rope_py3k
(I'm guessing from the package name).
If `pip` isn't installed by default:
$ sudo apt-get install pip
(it could also be named `python-pip`; at least that is the name on RHEL
distros). Also, on RHEL there are two different versions, `python-pip` and
`python-pip3`, the latter being for Python 3.x I believe, so install
whichever is appropriate.
The benefit of using an installer of this kind is that it will do all the
maintenance job in the way that others can anticipate, and so would be able to
help if need be.
There are also lots of Python-related bits of Emacs Lisp code floating around.
I'd suggest you check out <https://github.com/jorgenschaefer/elpy/wiki>
(installable through MELPA). MELPA also lists PyDE support:
<http://pyde.bitbucket.org/> but I don't know what it is.
|
Django: New class added in model.py not showing in admin site
Question: I'm a front-end dev struggling along with Django. I have the basics pretty
much down but I've hit a wall at the following point.
I have a site running locally and also on a dev machine. Locally I've added an
extra class model to an already existing app, registered it in the relevant
admin.py and checked it in the settings. Locally the new class and relevant
fields appear in admin but when I move this all to dev they're not appearing.
The app is called 'publish'.
My method was as follows:
1. Created the new class in the publish > models.py file:
class Whitepaper(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(max_length=100, blank=True)
    pub_date = models.DateField('date published')
    section = models.ForeignKey('Section', related_name='whitepapers', blank=True, null=True)
    description = models.CharField(max_length=1000)
    docfile = models.FileField(upload_to="whitepapers/%Y/%m/%d", null=True, blank=True)
2. Updated and migrated the model with South using:
python manage.py schemamigration publish --auto
and
python manage.py migrate publish
3. Registered the class in the admin.py file:
from models import Section, Tag, Post, Whitepaper
from django.contrib import admin
from django import forms
admin.site.register(Whitepaper)
The app is listed in the settings.py file:
INSTALLED_APPS = (
    ...,
    ...,
    'publish',
    ...,
)
As this is running on a dev server that's hosting a few other testing areas,
restarting the whole thing is out of the question so I've been 'touching' the
.wsgi file.
On my local version this got the model and fields showing up in the admin but
on the dev server they are nowhere to be seen.
What am I missing?
Thanks ye brainy ones.
Answer: I figured out the problem. Turns out the login I was using to get into the
admin didn't have superuser privileges. So I made a new one with:
python manage.py createsuperuser
After logging in with the new username and password I could see all my new
shiny tables!
|
The included urlconf xxxx.urls doesn't have any patterns in it
Question: I want to get a URL in a modelform class. I have seen [The included
urlconf manager.urls doesn't have any patterns in
it](http://stackoverflow.com/questions/6482573/the-included-urlconf-manager-urls-doesnt-have-any-patterns-in-it),
but the **reverse_lazy** function does not work for my case.
My case:
**captchahelper** is an **app** in **root**.
**root urlpatterns (urls.py under root project):**
urlpatterns = patterns("",
.....
# captcha
url(r'^captcha/', include('captchahelper.urls')),
.....
}
**captcha urlpatterns(urls.py under captchahelper project):**
urlpatterns = patterns('',
    url(r'^$', views.captcha, name="views_captcha"),
    url(r'^refresh/$', views.refresh, name="views_refresh_captcha"),
)
**views.py under captchahelper project**
def captcha(request):
    .....
    return HttpResponse(captcha.gen_img_by_code(code), 'image/jpeg')
The form where I want to get the URL:
class CaptchaForm(forms.ModelForm):
    .....
    captcha = Captcha()
    encoded = captcha.get_encrypt_code()
    captcha_image = urlresolvers.reverse_lazy('views_captcha') + '?encoded=' + encoded
    .....
    def clean_captcha_text(self):
        ....
The error I get:
`/account/signup/` is the current URL (also included in root.urls). When a visitor
views this page, the class above is initialized in the corresponding view function.
ImproperlyConfigured at /account/signup/
The included urlconf root.urls doesn't have any patterns in it
D:\Python27\lib\site-packages\django\core\urlresolvers.py in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)) ...
▼ Local vars
Variable        Value
ns_pattern      ''
viewname        'views_captcha'
args            []
current_app     None
prefix          u'/'
parts           ['views_captcha']
resolver        <RegexURLResolver root.urls (None:None) ^/>
kwargs          {}
path            []
resolved_path   []
urlconf         'root.urls'
view            'views_captcha'
I just feel Django is so complex, because I come from ASP.NET MVC3. I can easily
generate a URL with a querystring from a statement like
`Url.Action("CaptchaController", "CaptchaView", new { encoded = "encoded"})`
in MVC3. But if I want to generate a URL with a querystring in Django, the code
will be `reverse('captcha') + '?encoded=' + encoded`. It is really ugly....
Answer: You are calling the class, not the def method itself, in your urls file. Your
project seems a little odd, since most files have a `*.py` ending, but here's
an example from a project I'm already using:
urls.py
...
url(r'^getCustomerData/$', views.getCustomerData),
...
and in `views.py`:
from app.models import Customer

def getCustomerData(request):
    # process and return
So for you, I'd suggest changing your url resolver from
url(r'^$', views.captcha , name="views_captcha"),
url(r'^refresh/$', views.refresh , name="views_refresh_captcha")
to
url(r'^$', views.captcha.CAPTCHA_HANDLER_DEF , name="views_captcha"),
url(r'^refresh/$', views.captcha.refresh , name="views_refresh_captcha")
and define those two functions within captcha.
|
How to decrypt ciphertext using openssl in C?
Question: How can I decrypt ciphertext that was encrypted using AES in Python?
Encrypt.py
Using this script I made ciphertext using AES, concatenated it with the IV, and
wrote that to a file, file.txt.
from Crypto.Cipher import AES
import hashlib, os

Plain = 'This is a string'         # String to encrypt
key = 'mysecretpassword'
#key = hashlib.md5(key).digest()   # Key
IV = 'InitializationVector'
IV = hashlib.md5(IV).digest()      # Initialization vector
print len(IV)

Obj1 = AES.new(key, AES.MODE_CBC, IV)
Cipher = Obj1.encrypt(Plain)       # Cipher text

File = open('file.txt','w')
File.write(Cipher+IV)              # Concatenated the string and IV and
                                   # wrote that to the file file.txt
File.close()
Decrypt.c
Using this I get the ciphertext and IV from file.txt. Now how can I decrypt the
ciphertext using OpenSSL or any other library?
#include <stdio.h>
#include <string.h>

int main ()
{
    char filename[] = "file.txt";
    FILE *file = fopen ( filename, "r" );
    char key[] = "mysecretpassword";
    if (file != NULL) {
        char line [1000];
        char *p = line;
        char *array = line;
        while(fgets(line,sizeof line,file)!= NULL) {
            fprintf(stdout,"%s\n",line);
            char otherString[strlen(line)-15];
            strncpy(otherString, p, strlen(line)-16);
            otherString[strlen(otherString)-1] = '\0';
            printf("%s\n", otherString);   // Here I got the encrypted string
            array = array+(strlen(array)-16);
            printf("%s\n",array);          // Here I got the IV
            // Here how to decrypt the ciphertext using "IV" and "key"?
        }
        fclose(file);
    }
    else {
        perror(filename);
    }
    return 0;
}
Really, I am a newbie. Please forgive the errors in my question and please
feel free to help me; that would be most kind. Thanks a lot in
advance.
Answer:
#include "openssl/aes.h"
char buffer[1000];
AES_KEY dec_key;
AES_set_decrypt_key(key, 128, &dec_key);
AES_cbc_encrypt(otherString, buffer, strlen(line) - 16,
&dec_key, array, AES_DECRYPT);
//AES_KEY dec_key;
//AES_set_decrypt_key(key, keySize, &dec_key);
//AES_cbc_encrypt(ciphertext, result, textLen, &dec_key, iv, AES_DECRYPT);
This works for me. 128 is the key length in bits for the 16-byte key.
But I believe you also have a bug somewhere in the -1/-15/-16 string-length
handling. This part of the while loop can be changed to fix the problem:
int strLen = strlen(line) - 16;
char otherString[strLen + 1];
strncpy(otherString, p, strLen);
otherString[strLen] = '\0';
array = array + strLen;
Here is also a nice AES CBC encryption/decryption [example
code](http://misc-file.googlecode.com/svn/vm/aes_cbc_encrypt.cpp) and [your working
code](http://pastebin.com/WAuEUecC).
|
Django model export on MySql server
Question: I imported an external database into my project, converting it to a
`models.py` file using the `python manage.py inspectdb > models.py` command.
Now I have edited the models.py file by adding another class. How can I export
the `models.py` file onto the MySQL server without hampering the data already
in the database?
Answer: Basically, what you need is
[`syncdb`](https://docs.djangoproject.com/en/dev/ref/django-admin/#syncdb):
> Creates the database tables for all apps in INSTALLED_APPS whose tables have
> not already been created.
But syncdb will only create tables for models which have not yet been
installed. It will never issue `ALTER TABLE` statements to match changes made
to a model class after installation.
There are 3rd-party apps in Django like
[django-south](http://south.aeracode.org/) which manage the migrations for you, so you
don't have to do all this manually.
Here is a link that explains [how to work with
south](http://www.djangopro.com/2011/01/django-database-migration-tool-south-explained/).
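As a rough sketch of that workflow (assuming South has been installed, added to
`INSTALLED_APPS`, and that your app is called `yourapp`; since your tables already
exist, the initial migration is applied with `--fake`):
$ pip install south
$ python manage.py schemamigration yourapp --initial
$ python manage.py migrate yourapp 0001 --fake   # tables already exist in MySQL
$ # after editing models.py later:
$ python manage.py schemamigration yourapp --auto
$ python manage.py migrate yourapp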
|
TypeError: decoding Unicode is not supported python
Question: I am using lxml.html to parse an HTML file and get the text from the page. But
now I have a string which has a character `'`, for example `Florian's`, due to
which, while printing the output, I get a traceback:
parent_link_id_text = parent_link_id.xpath('./td[@width="400"]/text()')
print (SGS_Mid[0]+";"+"External"+";"+str(link_id_num[0])+";"+parent_link_id_text[0]+";"+parent_link_link[0], file = log_file_1)
**UnicodeEncodeError: 'ascii' codec can't encode characters in position 56-58:
ordinal not in range(128)**
Then I tried this:
print (SGS_Mid[0]+";"+"PublicFreeUrl"+";"+str(link_id_num[0])+";"+unicode(parent_link_id_text[0],"utf-8")+";"+parent_link_link[0], file = log_file_1)
and I get a traceback:
**TypeError: decoding Unicode is not supported**
How can I solve this and print the string with the unicode character?
Answer: Not sure if this is the solution to your problem, but perhaps it will guide
you in the right direction.
Without seeing the code you use to actually get the data, I'm going to
speculate and make a programmatic guess at how to solve your issue.
Please see the following code:
import lxml.html as lh
import urllib2
url = 'http://loremipsum.net/about.html'
doc = lh.parse(urllib2.urlopen(url))
value = doc.xpath('//p/strong/text()')[0]
print value
Printed result:
> What is 'lorem ipsum'?
By reading the about page on the lorem ipsum site, you can see that the text
returned indeed has the ' in it.
I hope this helps you in the right direction.
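If the goal is just to get the apostrophe-containing text written out, one hedged
sketch (assuming Python 2 with the `print_function` import, as the question's
`print(..., file=...)` form implies, and reusing the question's own variables) is to
encode the already-decoded unicode string rather than trying to decode it again:
line = u';'.join([SGS_Mid[0], u'External', unicode(link_id_num[0]),
                  parent_link_id_text[0], parent_link_link[0]])
print(line.encode('utf-8'), file=log_file_1)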
|
Creating a keyword based search in python
Question: I have a giant CSV file with close to 6K entries and the file looks something
like this:
PDB ID   NDB ID   Structure Title                         Citation Title                       Abstract
1ET4     1ET4     Structure of haemoglobin mRNA aptamer.  Solution structure of mRNA aptamer   Research was performed and structure of mRNA was obtained
My end goal is to display an output given a keyword like so:
Keyword: mRNA
PDB ID   NDB ID   Structure Title   Citation Title   Abstract   Location of first hit (struc/citation/abstract)
What would be a starting point for me? Also, do I have to use something called
Regex for this?
Disclaimer: This is part of a research project, not school homework.
A pseudocode or template would be great for me.
Answer: You could parse the CSV file and create two data structures. Both
dictionaries.
One dictionary would contain each line, keyed on `PDB ID`. The other
dictionary would store sets of `PDB ID`s and be keyed on keywords.
Below is just example code because I'm ignoring headers. You probably want to
parse the csv properly...
from collections import defaultdict

entries = {}
keywords = defaultdict(set)

with open('my_csv.csv') as f:
    for line in f:
        entries[line.split()[0]] = line  # keying on the PDB ID

with open('my_csv.csv') as f:
    for line in f:
        for kw in line.split()[1:]:
            keywords[kw].add(line.split()[0])
Once you have those two data structures it should be trivial to look up a
keyword in the keywords dict, iterate over the set, and print out each line
with the relevant PDB id.
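For instance, a small lookup helper built on the `entries` and `keywords`
dictionaries above might look like this:
def lookup(keyword):
    # print every stored CSV line whose PDB ID is associated with the keyword
    for pdb_id in keywords.get(keyword, set()):
        print entries[pdb_id]

lookup('mRNA')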
|
How to convert multiple lists to dictionaries in python?
Question:
['*a*', '*b*', '*c*', '*d*', '*f*','*g*']
['11', '22', '33', '44', '', '55']
['66', '77', '88', '', '99', '10']
['23', '24', 'sac', 'cfg', 'dfg', '']
I need to put them in a dictionary as:
{a : ('11','66','23'),b : ('22','77','24'),c : ('33','88','sac'),d :('44','','cfg')}
The rows are read from a CSV file:
import csv

csvFile = csv.reader(open("sach.csv", "rb"))
for row in csvFile:
    print row
The code I tried is shown above; the output of each row is one of the lists shown
above. Please help me put them into the dictionary format shown above.
Answer: Zip the rows:
with open("sach.csv", "rb") as csv_infile:
reader = csv.reader(csv_infile)
yourdict = {r[0].replace('*', ''): r[1:] for r in zip(*reader)}
The [`zip()` function](http://docs.python.org/2/library/functions.html#zip)
does the pairing for you, by passing in the `reader` object with the `*`
argument Python will loop over the CSV rows and pass each row as a separate
argument to `zip()`.
|
Python sub process call
Question: What I am trying to accomplish, in a few words, is this: change directories and
call a script from the shell.
So far so good; I have managed to change directories with `os.chdir()`.
However, I haven't been able to understand how to write the second part of the
given task. Specifically, the command that I want to invoke is this one:
`/path-to-dir-of-the-script/script<inputfile.txt>outfile.txt` and to my eyes
at least the problem is that the input file (and evidently the output file,
which does not exist yet but will be generated by the script) are in two different
directories.
So by trying the following (`ls` and `print` are for debugging and supervising
purposes, more or less) along with various modifications, I am always getting an
error: either SyntaxError, or the system cannot find the two files, etc.
import subprocess
import os
import sys
subprocess.call(["ls"]) #read the contents of the current dir
print
os.dir('/path-to-dir')
subprocess.call(["ls"])
print
in_file = open(infile.txt) #i am not sure if declaring my files is a necessity.
out_file = open (outfile.txt)
com = /path-to-dir-of-the-script/script
process = subprocess.call([com], stdin=infile.txt, stdout=outfile.txt)
This is the last implementation of it, which generates: `NameError: name 'infile'
is not defined`
I am sure there is more than one error in my approach (apart from my
syntax) and I would probably have to study more. So far I've taken a look at
the [docs](http://docs.python.org/2/library/subprocess.html), which include
some `Popen` examples, and two or three pertinent questions
[here](http://stackoverflow.com/questions/15205597/use-python-subprocess-module-like-a-command-line-simulator),
[here](http://stackoverflow.com/questions/6046779/explain-example-from-python-subprocess-module) and
[here](http://stackoverflow.com/questions/17571217/python-sub-process-call-with-filename-variable).
In case I didn't make myself clear, some notes:
The script and files are not at the same level. The command is valid and works
flawlessly when it comes down to it. Moving either the files or the script
to the same level won't work.
Any suggestions?
Answer: Use quotes to create a string in Python e.g.:
com = "/path-to-dir-of-the-script/script"
You could use `cwd` argument to run a script with a different working
directory e.g.:
subprocess.check_call(["ls"]) # read the contents of the current dir
subprocess.check_call(["ls"], cwd="/path-to-dir")
To emulate the bash command:
$ /path-to-dir-of-the-script/script < inputfile.txt > outfile.txt
using `subprocess` module:
import subprocess
with open("inputfile.txt", "rb") as infile, open("outfile.txt", "wb") as outfile:
subprocess.check_call(["/path-to-dir-of-the-script/script"],
stdin=infile, stdout=outfile)
|
Editing a duplicate list edits the original
Question: So I've started a simple roguelike game project in Python, but I have a
problem where editing a duplicate of a list also edits the original. The code is as
follows:
charx = 1
chary = 1
level = [["#","#","#","#","#","#","#"],["#",".",".",".",".",".","#"],["#",".","#",".",".",".","#"],["#","#","#","#","#","#","#"]]

while True:
    move = raw_input("u,d,l or r: ")
    if move=="u":
        if level[chary-1][charx]!="#":
            chary-=1
    elif move=="d":
        if level[chary+1][charx]!="#":
            chary+=1
    elif move=="r":
        if level[chary][charx+1]!="#":
            charx+=1
    elif move=="l":
        if level[chary][charx-1]!="#":
            charx-=1
    display_level=level
    display_level[chary][charx]="@"
    for row in display_level:
        print "".join(row)
    for row2 in level:
        print ",".join(row2)
As you can see, I have created a list called level, where each list inside the
list is a row. The character's starting row and column position is stated.
Then the loop asks whether you want to move up, down, left or right. It checks
to see if there is a wall (the # character) in the way and reacts accordingly.
As you can see, it makes a duplicate of the level called display_level and
then puts an "@" as the character onto display_level. The problem, however,
is that changing display_level changes level as well for no apparent reason,
even though they are separate and each time the loop runs through it makes
display_level=level.
display_level and level are then printed separately and it shows level being
edited alongside display_level, which shouldn't happen.
So what I want to know is why changing a part of display_level would change
level, and how I can fix it.
Much appreciated
Answer: You are _not_ creating a duplicate of `level`. All you do is create another
reference to the _same list_.
To create a copy of a list, you can just slice it from start to end:
display_level = level[:]
but for your case, that is _not enough_. You need to copy the _nested_ lists.
In this case, a list comprehension is easiest:
display_level = [l[:] for l in level]
A more comprehensive solution is to use the [`copy.deepcopy()`
function](http://docs.python.org/2/library/copy.html#copy.deepcopy) to make
_absolutely sure_ that the top-level object and all contained objects are
copies:
import copy
# ...
display_level = copy.deepcopy(level)
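A quick demonstration of the difference (with a tiny stand-in level, not the game data):
level = [['#', '.', '#']]

alias = level                       # no copy; both names refer to the same list object
alias[0][0] = '@'
print level[0][0]                   # '@' -- the "original" changed too

copied = [row[:] for row in level]  # copies the outer list and each row
copied[0][1] = '@'
print level[0][1]                   # '.' -- the nested copy is independent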
|
Scapy problems when importing modules
Question: I recently started programming in Python and Scapy. But when I use `from
scapy.all import *` it doesn't work and I get the exception `ImportError: No
module named 'base_classes'`. So it is finding the folder `all`, but cannot find
`base_classes`. I verified, however, that `base_classes` is actually in there. In
addition, `import scapy.all.base_classes` finds that `base_classes` is in
there, but when I execute it I still get an error. What should I do? I verified
my version of Scapy and it is 2.x.
Thank you, Martinos
Answer: I ran into a similar issue once and it was because I wasn't using the right
version of python and the right python path.
I solved it by adding the right classes to the path in the beginning of my
script using
import sys
sys.path.append("/home/me/mypy")
It's a bit ugly but it worked.
|
System Paths and Modules
Question: I have the following setup:
/project/
    /api/
        __init__.py
        test.py
    /modules/
        __init__.py
        api.py
I am trying to, from the /project/ directory, run api.py: `python
modules/api.py`
The api module attempts to import the test module from the api package, but
fails. I have tried the following:
import api.test
import project.api.test # (with an __init__.py in my /project/ directory)
I have even attempted to add the api package's parent directory to the system
path as described:
import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
### Question
How can I set up my project in such a way that each package has knowledge of
other packages in its parent directory, which is also the project's root
directory?
Answer: Your problem is that you have a `project/modules/api.py` **file** as well as a
`projects/api` **directory** with an `__init__.py` file in it - you're getting
descriptor collisions. Rename one of them to something else, and your code
should work:
/project/
    /api/
        __init__.py
        test.py
    /modules/
        __init__.py
        foo.py
Then run `python modules/foo.py` and in foo.py:
from ..api import test
or alternatively:
import sys
import os
sys.path.append(os.path.abspath('../api'))
import test
|
how to add path with module to python?
Question: I am trying to build the V8 JavaScript engine. When I invoke the command `python
build/gyp_v8`, I get this error:
File build/gyp_v8, line 48, in <module>
    import gyp
ImportError: No module named gyp
How can I tell Python where to search for the gyp module, and what is the correct
path to the module in the gyp folder?
My version of Python is 2.6.2.2, as recommended in the build instructions.
Answer: Installing the module will fix it:
git clone https://chromium.googlesource.com/external/gyp
cd gyp
sudo ./setup.py install
Enjoy.
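If you would rather point Python at a checkout instead of installing it, a minimal
sketch (assuming the gyp repository is cloned at `/path/to/gyp`, whose `pylib`
directory contains the importable `gyp` package; adjust the path) is:
import sys
sys.path.insert(0, '/path/to/gyp/pylib')  # hypothetical checkout location
import gyp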
|
Python 2.7, pygame, combining pause and unpause buttons
Question: I am coding a simple music player.
I have searched the other questions in Stackoverflow, however, the solutions
do not work with my pygame build.
My code is below. I am using Tkinter for the gui build.
import sys
from Tkinter import *
import tkMessageBox
import pygame

myGui = Tk()

def mClose():
    mExit = tkMessageBox.askokcancel(title="Quit", message="are you sure?")
    if mExit == True:
        myGui.destroy()
    return

def mPlay():
    pygame.mixer.init()
    pygame.mixer.music.load("/home/david/Downloads/test.mp3")
    pygame.mixer.music.play()

def unPause():
    pygame.mixer.music.unpause()

def mPause():
    pygame.mixer.music.pause()

myGui.title("My Audio")
myGui.geometry("200x200+600+300")

mLabel = Label(myGui, text="My Audio").pack()

''' Button for Closing App'''
mButton = Button(myGui, text="Close", command = mClose).pack()

'''Play Button'''
mButton = Button(myGui, text="Play", command = mPlay).pack()

'''Pause Button'''
mButton = Button(myGui, text="Pause", command = mPause).pack()

'''UnPause Button'''
mButton = Button(myGui, text="UnPause", command = unPause).pack()
I have tried to combine pause and unpause by using
`pygame.mixer.music.get_busy()`. However, if the music is paused the boolean still
returns True for being active.
I have used the following to no avail:
def play_pause():
    paused = not paused
    if paused:
        pygame.mixer.music.unpause()
    else:
        pygame.mixer.music.pause()
I get the following:
File "/home/david/Documents/tkinter_testing.py", line 29, in play_pause
paused = not paused
UnboundLocalError: local variable 'paused' referenced before assignment.
Any ideas or help. Thanks in advance for any help.
Answer: You are assigning `paused` a new value computed from itself, but inside the
function `paused` has not been given a value yet. I believe what you're looking for
is to initialize it first:
paused = False
or
paused = not True
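A minimal sketch of a working toggle, assuming the mixer has already been initialized
and a track started by `mPlay()` above, is to keep the flag at module level and declare
it `global` inside the function:
paused = False  # module-level flag: True means playback is currently paused

def play_pause():
    global paused
    paused = not paused
    if paused:
        pygame.mixer.music.pause()
    else:
        pygame.mixer.music.unpause()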
|
Calling a method from a parent class in Python
Question: Can anyone help me with the correct syntax to call my method
`__get_except_lines(...)` from the parent class?
I have a class with a method as shown below. This particular method has the 2
underscores because I don't want the "user" to use it.
class NewPdb(object):
    myvar = ...
    ...
    def __init__(self):
        ...
    def __get_except_lines(self, ...):
        ...
In a separate file I have another class that inherits from this class.
from new_pdb import NewPdb

class PdbLig(NewPdb):
    def __init__(self):
        ....
        self.cont = NewPdb.myvar
        self.cont2 = NewPdb.__get_except_lines(...)
And I get an attribute error that really confuses me:
AttributeError: type object 'NewPdb' has no attribute '_PdbLig__get_except_lines'
Answer: Your problem is due to Python name mangling for private variables
(<http://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references>).
You should write:
NewPdb._NewPdb__get_except_lines(...)
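A small self-contained illustration of the mangling (generic class names, not the
original code):
class A(object):
    def __secret(self):          # stored on the class as _A__secret
        return 42

class B(A):
    def use(self):
        # outside A's own body, the mangled name must be spelled out
        return self._A__secret()

print B().use()                  # 42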
|
pip not installing to site-packages directory from within virtualenv when I use a requirements.txt
Question: I'm relatively new to running Python with virtualenv so this might be an easy
fix, but I can't for the life of me figure out what's going on. I'm running
Windows 7 Professional x64 with Python 2.7.5 installed. I have installed pip
and virtualenv. I have a Django project that I'm attempting to work on, which I
have cloned from a Heroku repository. When I attempt to set up a virtualenv
and install the requirements of my project I'm running into a strange error
that I can't figure out. I have everything setup as follows:
Django project is placed in `C:\Users\xxx\PythonProjects\myProject`
I open a command prompt, cd to the myProject folder and execute the following
command:
C:\Users\xxx\PythonProjects\myProject> virtualenv --no-site-packages env
This should create a nice clean virtual environment for my project, so I go
ahead and activate as follows:
C:\Users\xxx\PythonProjects\myProject> Scripts\activate
The prompt changes to indicate my virtualenv has become active so I double
check by "where"ing python and pip:
(env) C:\Users\xxx\PythonProjects\myProject> where python
C:\Users\xxx\PythonProjects\myProject\env\Scripts\python.exe
C:\Python27\python.exe
(env) C:\Users\xxx\PythonProjects\myProject>where pip
C:\Users\xxx\PythonProjects\myProject\env\Scripts\pip.exe
C:\Python27\Scripts\pip.exe
Since it looks like virtualenv is functioning correctly I next attempt to pip
the requirements file as follows:
(env) C:\Users\xxx\PythonProjects\myProject> pip install -r requirements.txt
pip appears to run successfully installing all the packages I need. However
when I load up python the following happens (django is one of the packages in
my requirements file):
(env) C:\Users\xxx\PythonProjects\myProject>python
Python 2.7.5 (default, May 15 2013, 22:44:16) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named django
I then cd to the site-packages folder to find out what is going on and get the
following:
(env) C:\Users\xxx\PythonProjects\myProject\Lib\site-packages>dir
Volume in drive C is Windows7_OS
Volume Serial Number is 808F-577A
Directory of C:\Users\xxx\PythonProjects\myProject\env\Lib\site-packages
07/17/2013 02:16 PM <DIR> .
07/17/2013 02:16 PM <DIR> ..
07/17/2013 02:16 PM 237 easy-install.pth
07/17/2013 02:16 PM <DIR> pip-1.3.1-py2.7.egg
07/15/2013 09:16 PM 332,005 setuptools-0.6c11-py2.7.egg
07/17/2013 02:16 PM 31 setuptools.pth
3 File(s) 332,273 bytes
3 Dir(s) 169,869,336,576 bytes free
It appears that my pip call failed to install ANYTHING into my site-packages
folder, so python has no idea where to find my required packages. Instead they
appear to all be located in `C:\Users\xxx\PythonProjects\myProject\env\build`
If I use `pip install foo` without a requirements file, then everything works
fine and the `foo` ends up in my site-packages folder. Is there a way I can
get this working with the requirements file, or am I going to have to manually
install every package every time when using virtualenv? Sorry for the likely
overly long post, but I wanted to make sure that all relevant information is
here. Thanks for the help!
* * *
Edit with additional information:
It appears that my requirements file may be the source of the problem. Only
about half of the packages are being downloaded, the last being
`django-polymorphic`. The lines in my requirements file that specify that package and
the following packages are as follows:
django-polymorphic==0.4.2
-e hg+https://bitbucket.org/fcurella/django-profiles@5c982ce7c040351fca9847a85dd4ff29f8a367e6#egg=django_profiles-dev
django-sekizai==0.7
-e git://github.com/divio/django-shop.git@0fb2258d27332166e3f76ad7cf7335c1f0a389b2#egg=django_shop-dev
-e git://github.com/fivethreeo/django-shop-categories.git@345fb100f5f680e6ac2066f74f25515eb2cd9374#egg=django_shop_categories-dev
Answer: So I figured out the answer to my own question.
Basically, if you are running Python 2.7 (and likely other versions) on
Windows, some packages don't play nicely. If anyone else is having this
problem, you should download Windows binaries from
<http://www.lfd.uci.edu/~gohlke/pythonlibs/> and remove those packages from
your `requirements.txt` file. Once I did so, `pip` stopped crashing during the
install process and correctly installed the rest of the packages in my
`requirements.txt` file.
The packages I needed were:
1. pillow
2. psycopg
3. reportlab
|
tf-idf using data on unigram frequency from Google
Question: I'm trying to identify important terms in a set of government documents.
Generating the term frequencies is no problem.
For document frequency, I was hoping to use the [handy Python scripts and
accompanying data](http://norvig.com/ngrams/) that Peter Norvig posted for his
chapter in "Beautiful Data", which include the frequencies of unigrams in a
huge corpus of data from the Web.
My understanding of tf-idf, however, is that "document frequency" refers to
the number of documents containing a term, not the number of total words that
_are_ this term, which is what we get from the Norvig script. Can I still use
this data for a crude tf-idf operation?
Here's some sample data:
word      tf      global frequency
china     1684    0.000121447
the       352385  0.022573582
economy   6602    0.0000451130774123
and       160794  0.012681757
iran      2779    0.0000231482902018
romney    1159    0.000000678497795593
Simply dividing tf by gf gives "the" a higher score than "economy," which
can't be right. Is there some basic math I'm missing, perhaps?
Answer: As I understand it, Global Frequency is equal to the "inverse total term
frequency" mentioned here by
[Robertson](http://www.soi.city.ac.uk/~ser/idfpapers/Robertson_idf_JDoc.pdf).
From Robertson's paper:
One possible way to get away from this problem would be to make a fairly radical
replacement for IDF (that is, radical in principle, although it may be not so radical
in terms of its practical effects). ....
the probability from the event space of documents to the event space of term positions
in the concatenated text of all the documents in the collection.
Then we have a new measure, called here
inverse total term frequency:
...
On the whole, experiments with inverse total term frequency weights have tended to show
that they are not as effective as IDF weights
According to this text, you can use the inverse global frequency as the IDF term,
albeit cruder than the standard one.
You are also missing [stop word](https://en.wikipedia.org/wiki/Stop_words)
removal. Words such as "the" are used in almost all documents, therefore they do
not give any information. Before tf-idf, you should remove such stop words.
|
Display a georeferenced DEM surface in 3D matplotlib
Question: I want to use a DEM file to generate a simulated terrain surface using
matplotlib. But I do not know how to georeference the raster coordinates to a
given CRS. Nor do I know how to express the georeferenced raster in a format
suitable for use in a 3D matplotlib plot, for example as a numpy array.
Here is my python code so far:
import osgeo.gdal
dataset = osgeo.gdal.Open("MergedDEM")
gt = dataset.GetGeoTransform()
Answer: You can use the normal `plot_surface` method from matplotlib. Because it takes
an X and Y array, it's already plotted with the right coordinates. I always find
it hard to make nice-looking 3D plots, so the visual aspects can certainly be
improved. :)
import gdal
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

dem = gdal.Open('gmted_small.tif')
gt = dem.GetGeoTransform()
dem = dem.ReadAsArray()

fig, ax = plt.subplots(figsize=(16,8), subplot_kw={'projection': '3d'})

xres = gt[1]
yres = gt[5]
X = np.arange(gt[0], gt[0] + dem.shape[1]*xres, xres)
Y = np.arange(gt[3], gt[3] + dem.shape[0]*yres, yres)
X, Y = np.meshgrid(X, Y)

surf = ax.plot_surface(X, Y, dem, rstride=1, cstride=1, cmap=plt.cm.RdYlBu_r, vmin=0, vmax=4000, linewidth=0, antialiased=True)
ax.set_zlim(0, 60000)  # to make it stand out less
ax.view_init(60, -105)
fig.colorbar(surf, shrink=0.4, aspect=20)

|
matplotlib agg ticks when rendering floating points
Question: This is the same problem as here: [python odd axis ticks,
matplotlib](http://stackoverflow.com/questions/16895980/python-odd-axis-ticks-matplotlib).
Except no one is following that question, so to make it a little clearer:
I'm using a Linux machine:
$ uname -a
$ Linux stokes1 2.6.32.59-0.3-default #1 SMP 2012-04-27 11:14:44 +0200 x86_64 x86_64 x86_64 GNU/Linux
So this happens with Matplotlib (version 1.0.0) when using Agg (v2.2; any
combination I suspect, like TkAgg etc. but I can't check for sure because only
TkAgg is available on the machine). This is not my PC so I don't have root
access but I can talk with the administrators and let them know about it, but
I also wanted to get some details on the matter.
So basically if you take a look at the pictures you can see the problem with
the ticks. Now I found out that this only happens when the ticks are floating
point numbers as can be seen. I don't think it's a font problem since I'm
using the standard Bitstream Vera and also this doesn't happen if I use svg as
backend.
This can be reproduced by:
import matplotlib as m
m.use('tkagg')
from pylab import *
plot()
show()


Answer: I didn't find an answer to the problem with the boxes around the floating
point numbers, but the administrator of the machine gave me a workaround:
import matplotlib as m
m.use('agg')
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
plt.gca().xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
plt.gca().yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
plt.plot()
plt.savefig('test')
which produces the desired result (tick labels without the boxes).
If I get an explanation from the administrator of the machine about where these weird
boxes come from, I will add it to the answer. Meanwhile, I hope this helps someone.
|
AttributeError: 'module' object has no attribute 'ClassType'
Question: OK, so I've been trying to run the command `python`, but it ends up spitting
this out:
Traceback (most recent call last):
  File "C:\Python27\lib\site.py", line 62, in <module>
    import os
  File "C:\Python27\lib\os.py", line 398, in <module>
    import UserDict
  File "C:\Python27\lib\UserDict.py", line 83, in <module>
    import _abcoll
  File "C:\Python27\lib\_abcoll.py", line 70, in <module>
    Iterable.register(str)
  File "C:\Python27\lib\abc.py", line 107, in register
    if not isinstance(subclass, (type, types.ClassType)):
AttributeError: 'module' object has no attribute 'ClassType'
I renamed the types.py to nottypes.py but it is still giving me the exact same
message.
Answer: You've called something else in `sys.path` "types.py". Rename it.
|
Python: encode special character
Question: I'm trying to encode a string containing the '-' (minus) symbol to iso8859-15,
but it returns the string as it is. For example:
str = "abc-def"
The expected output is
abc%2Ddef
Is there any way to do this? Sorry if my question is wrong.
Answer: ya.
from urllib.parse import quote
st = "abc-def"
encoded = quote(st)
|
How to calculate how many IP addresses there are between two IP addresses?
Question: I have two IP addresses, and I want to count how many IP addresses there are
in the range between the two.
Example:
IP_START = "127.0.0.0"
IP_END = "127.0.1.1"
SUM_OF_IP_ADDRESS = 257
Does anyone know if Python has anything to help me accomplish this?
Answer: Short solution using the `ipaddress` package.
import ipaddress
ip1 = int(ipaddress.IPv4Address(unicode('127.0.0.0')))
ip2 = int(ipaddress.IPv4Address(unicode('127.0.1.1')))
print ip2 - ip1
|
Hashlib: optimal size of chunks to be used in md5.update()
Question: This is in reference to [Get MD5 hash of big files in
Python](http://stackoverflow.com/questions/1131220/get-md5-hash-of-big-files-in-python)
and [Hashlib in Windows and
Linux](http://stackoverflow.com/questions/4418042/hashlib-in-windows-and-linux).
In responses to both these questions, it is advised to use larger chunks of
data in the function md5.update() to improve performance.
All testing I have done appears to indicate that use of smaller chunks gives
the best performance.
Consider the following code:
import hashlib
import timeit

def test(factor):
    filehash = hashlib.md5()
    blk_size_to_read = filehash.block_size * (2**factor)
    with open(largetestfile, 'rb') as f:
        read_data = f.read(blk_size_to_read)
        filehash.update(read_data)
    filehash.digest()

if __name__ == '__main__':
    for ctr in xrange(0, 12):
        funcstr = "test({})".format(str(ctr))
        timetaken = timeit.timeit(funcstr, setup="from __main__ import test", number=5000)
        print "Factor: {} Time: {}".format(str(ctr), str(timetaken))
All tests I have done indicate that the best performance is achieved when
using `factor` 0 or 1 (that is, 64 or 128 bytes).
Any reason why I am seeing different results from those indicated in the
questions quoted?
I have tried binary and plain text files with size ranging from 700MB to 1.2GB
and am using Python 2.7.3 on Ubuntu 12.04
Secondary question: am I using timeit the way it should be?
Answer: Found the error! I was `read`ing just one chunk and then doing nothing!
Changed
with open(largetestfile, 'rb') as f:
    read_data = f.read(blk_size_to_read)
    filehash.update(read_data)
to
with open(testfile, 'rb') as f:
    while (True):
        read_data = f.read(blk_size_to_read)
        if not read_data:
            break
        filehash.update(read_data)
to fix the issue.
UPDATE:
I ran a slightly modified version of the program above to establish the best
possible size of buffer to be used when incrementally using update() to find
the hash of a given file. I also wanted to establish whether there was any
benefit in incremental hashing rather than calculating the hash of the file in
one go (other than memory constraints).
I created 20 files (with random data) for this with file size starting from
4096 bytes and upto 2.1 GB. md5 hash for each of these files was calculated
using buffer sizes starting `2**6` bytes (64 bytes - block size) upto `2**20`
bytes. Using timeit each of these was run 100 times and execution times
obtained, with the shortest execution time being recorded. The execution time for
hash calculation of the entire file in one go was also recorded.
The results are as follows...
FileName Filesize Chunksize Chunked Time Complete Time %diff
file5.txt 4096 4096 0.0014789 0.0014701 -0.60%
file6.txt 8192 524288 0.0021310 0.0021060 -1.19%
file7.txt 16384 16384 0.0033200 0.0033162 -0.12%
file8.txt 32768 65536 0.0061381 0.0057440 -6.86%
file9.txt 65536 65536 0.0106990 0.0112500 4.90%
file10.txt 131072 131072 0.0203800 0.0206621 1.37%
file11.txt 262144 524288 0.0396681 0.0401120 1.11%
file12.txt 524288 1048576 0.0780780 0.0787551 0.86%
file13.txt 1048576 1048576 0.1552539 0.1564729 0.78%
file14.txt 2097152 262144 0.3101590 0.3167789 2.09%
file15.txt 4194304 65536 0.6295781 0.6477270 2.80%
file16.txt 8388608 524288 1.2633710 1.3030031 3.04%
file17.txt 16777216 524288 2.5265670 2.5925691 2.55%
file18.txt 33554432 65536 5.0558681 5.8452392 13.50%
file19.txt 67108864 65536 10.1133211 11.6993010 13.56%
file20.txt 134217728 524288 20.2226040 23.3923230 13.55%
file21.txt 268435456 65536 40.4060180 46.6972852 13.47%
file22.txt 536870912 65536 80.9403431 93.4165111 13.36%
file23.txt 1073741824 524288 161.8108051 187.1303582 13.53%
file24.txt 2147483648 65536 323.4812710 374.3899529 13.60%
The `Chunked Time` is the execution time when the file is broken into chunks and
hashed incrementally; the `Complete Time` is the execution time when the entire
file is hashed in one go. The `%diff` is the percentage difference between
`Chunked Time` and `Complete Time`.
Observations:
1. For smaller file sizes the chunk size is nearly always equal to file size and there appears to be no advantage in adopting either approach.
2. For larger files (33554432 (`2**25`) bytes and above), there appears to be considerable performance benefit (lesser time) in using the incremental approach rather than hashing the entire file in one go.
3. For larger files, the best chunk/buffer size is 65536 (`2**16`) bytes
Notes: Python 2.7.3; Ubuntu 12.06 64-bit; 8 GB of RAM. The code used for this
is available here: <http://pastebin.com/VxH7bL2X>
|
Why does python allow an empty function (with doc-string) body without a "pass" statement?
Question:
class SomeThing(object):
    """Represents something"""

    def method_one(self):
        """This is the first method, will do something useful one day"""

    def method_two(self, a, b):
        """Returns the sum of a and b"""
        return a + b
In a recent review of some code similar to the above, a colleague asked:
> How come `method_one` is successfully parsed and accepted by python? Doesn't
> an empty function need a body consisting of just
> [`pass`](http://docs.python.org/2/reference/simple_stmts.html#pass)? i.e.
> shouldn't it look like this?
def method_one(self):
    """This is the first method, will do something useful one day"""
    pass
My response at the time was something like:
> Although the docstring is usually not considered to be part of the function
> body, because it is not "executed", it is parsed as such, so the `pass` can
> be omitted.
In the spirit of [sharing knowledge Q&A
style](http://blog.stackoverflow.com/2011/07/its-ok-to-ask-and-answer-your-own-questions/),
I thought I'd post the more rigorous answer here.
Answer: According to the [Python 2.7.5 grammar
specification](http://docs.python.org/2/reference/grammar.html), which is read
by the parser generator and used to parse Python source files, a function
looks like this:
funcdef: 'def' NAME parameters ':' suite
The function body is a `suite` which looks like this
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
Following this all the way through the grammar, `stmt` can be an `expr_stmt`,
which can be just a `testlist`, which can be just a single `test` which can
(eventually) be just an `atom`, which can be just a single `STRING`. The
docstring.
Here are just the appropriate parts of the grammar, in the right order to
follow through:
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist (augassign (yield_expr|testlist) |
                     ('=' (yield_expr|testlist))*)
testlist: test (',' test)* [',']
test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
       '[' [listmaker] ']' |
       '{' [dictorsetmaker] '}' |
       '`' testlist1 '`' |
       NAME | NUMBER | STRING+)
|
Cast an object to a derived type in Python
Question: I want to cast an object of type A to type B so I can use B's methods. Type B
inherits from A. For example, I have my class B:
class B(A):
    def hello(self):
        print('Hello, I am an object of type B')
My Library, Foo, has a function that returns an object of type A, which I want
to cast to type B.
>>>import Foo
>>>a_thing = Foo.getAThing()
>>>type(a_thing)
A
>>># Somehow cast a_thing to type B
>>>a_thing.hello()
Hello, I am an object of type B
Answer: The usual way to do this is to write a class method for B that takes an A
object and creates a new B object using the information from it.
class B(A):
    @classmethod
    def from_A(cls, A_obj):
        value = A_obj.value
        other_value = A_obj.other_value
        return B(value, other_value)

a_thing = B.from_A(a_thing)
|
Does numpy.ma allow masking of sub-masked arrays in a masked array?
Question: I am writing some code in Python 2.7 (using pydev in eclipse, Mac OSX) to
gather information about a big set of card information stored in xml files.
The cards are from Magic the gathering and all have a very similar card
structure (Name, cost to play, type, etc.)
I am using masked arrays to store all of the information on the cards. Here is
the array I initialize to store this information (more fields are added as
they are encountered in the code):
AllCards=numpy.ma.masked_all(2, dtype=[('Set','a128'),('Name','a128'),
('Cost','a128'),('CMC','a128'),
('Color','a128'),('Type','a128'),
('Subtype','a128'),('Rarity','a128'),
('Rules','a512'),('Power','a128'),
('Toughness','a128'),('PT Box','a128'),
('Artist','a128'),('Flavor','a512'),
('MultiverseId','a128')])
I have been able to populate and mask this array as I wanted, but I am running
into a particular problem when I start to make this original array more
complicated. When a card such as [Faithful
Squire](http://gatherer.wizards.com/pages/card/Details.aspx?multiverseid=74093),
which has more complex components than a regular card (the flip aspect), it is
coded in the xml as:
<card name="Faithful Squire" id="34905b29-481d-9e16-bc50-3b9f5a70dbf2">
<property name="Cost" value="1ĄĄ" />
<property name="CMC" value="3" />
<property name="Color" value="White" />
<property name="Type" value="Creature" />
<property name="Subtype" value="Human Soldier" />
<property name="Rarity" value="Uncommon" />
<property name="Rules" value="Whenever you cast a Spirit or Arcane spell, you may put a ki counter on Faithful Squire.
At the beginning of the end step, if there are two or more ki counters on Faithful Squire, you may flip it." />
<property name="Power" value="2" />
<property name="Toughness" value="2" />
<property name="PT Box" value="2 / 2" />
<property name="Artist" value="Mark Zug" />
<property name="MultiverseId" value="74093" />
<alternate name="Kaiso, Memory of Loyalty" type="flip">
<property name="Cost" value="1ĄĄ" />
<property name="CMC" value="3" />
<property name="Color" value="White" />
<property name="Type" value="Legendary Creature" />
<property name="Subtype" value="Spirit" />
<property name="Rarity" value="Uncommon" />
<property name="Rules" value="Flying
Remove a ki counter from Kaiso, Memory of Loyalty: Prevent all damage that would be dealt to target creature this turn." />
<property name="Power" value="3" />
<property name="Toughness" value="4" />
<property name="PT Box" value="3 / 4" />
<property name="Artist" value="Mark Zug" />
<property name="MultiverseId" value="74093" />
</alternate>
</card>
All of the propertys of the top-level card object are the fields, so my
intention is to create another masked array that holds the properties of
"flip" cards, so all the flip cards can be categorized as such (and properties
analyzed, etc.) I am able to create a masked array of these properties, and
append it as a field to the larger array using:
AllCards=numpy.lib.recfunctions.append_fields(AllCards,AlternateCardType,AlternateCard)
but when I try to update the mask using:
AllCards.mask[AlternateCardType][0]=True
I receive the following error:
Traceback (most recent call last):
  File "/Applications/Eclipse/plugins/org.python.pydev_2.7.3.2013031601/pysrc/pydevd.py", line 1397, in <module>
    debugger.run(setup['file'], None, None)
  File "/Applications/Eclipse/plugins/org.python.pydev_2.7.3.2013031601/pysrc/pydevd.py", line 1090, in run
    pydev_imports.execfile(file, globals, locals) #execute the script
  File "/Users/Andrew/Documents/Workspace/PyGather/PyGather.py", line 61, in <module>
    AllCards.mask[AlternateCardType][0]=True
TypeError: expected a readable buffer object
Before this sub masked array is in the top-level array, I am able to
manipulate the mask as such, and can use loops and assignments to mask items I
don't want. I am trying to mask this because
`numpy.lib.recfunctions.append_fields` automatically adds the data to the first
item in the array, and I couldn't figure out how to pad the data
appropriately. Is this a bug, or am I doing something wrong in the code?
[Full Source](https://www.dropbox.com/s/todaqrlalhhl4s2/PyGather.txt)
Answer: Rather than storing the subarrays in fields of the main one, which is causing
you problems, I'd recommend keeping the flipped cards in their own array.
Think of it in terms of database organization; assuming the `MultiverseId`
attribute is unique to each card or card+flip combo, you can use that as the
primary ID, so to speak, in your flip array. That may not be as efficient with
numpy record arrays as it would in a true relational database, though; an
additional step may be to have a column in the main array to indicate whether
or not the card has a flip aspect to avoid having to check the subarray each
time, though that would use a bit more memory.
Alternatively, you could assign the flip cards their own unique IDs and store
them in the same record array as the regular cards, as the properties seem to
have the same names, and then have a `flip_id` field that would be some set
value such as `0` or `None` for cards without flip aspects and then the ID of
the flip card for those cards that do have a flip. (The flipped card could
then have the original card's ID in its `flip_id` field to connect the flipped
card to the original/main one.)
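A very rough sketch of that second layout (field names and sizes are illustrative,
not taken from the original script):
import numpy as np

# One row per printed card face; flip_id links to the other face and is left
# empty for cards without a flip aspect.
cards = np.zeros(2, dtype=[('card_id', 'a32'), ('Name', 'a128'),
                           ('MultiverseId', 'a32'), ('flip_id', 'a32')])
cards[0] = ('c1', 'Faithful Squire', '74093', 'c2')
cards[1] = ('c2', 'Kaiso, Memory of Loyalty', '74093', 'c1')

# Finding the flip side of a card is then a simple lookup:
flip = cards[cards['card_id'] == cards[0]['flip_id']]
print flip['Name'][0]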
|
Python urllib2 - cannot read a page
Question: I am using `urllib2` in `Python` to scrape a webpage. However, the `read()`
method does not return.
Here is the code I am using:
import urllib2
url = 'http://edmonton.en.craigslist.ca/kid/'
headers = {'User-Agent': 'Mozilla/5.0'}
request = urllib2.Request(url, headers=headers)
f_webpage = urllib2.urlopen(request)
html = f_webpage.read() # <- does not return
I last ran the script a month ago and it was working fine then.
Note that the same script runs well for webpages of other categories on
Edmonton Craigslist like `http://edmonton.en.craigslist.ca/act/` or
`http://edmonton.en.craigslist.ca/eve/`.
Answer: As requested in comments :)
Install `requests` by `$ pip install requests`
Use `requests` as the following:
>>> import requests
>>> url = 'http://edmonton.en.craigslist.ca/kid/'
>>> headers = {'User-Agent': 'Mozilla/5.0'}
>>> request = requests.get(url, headers=headers)
>>> request.ok
True
>>> request.text # content in string, similar to .read() in question
...
...
Disclaimer: this is not technically the answer to OP's question, but it solves
OP's problem, as `urllib2` is known to be problematic and the `requests` library
was created to solve such problems.
|
Unbind default button behavior in wxPython
Question: I am writing an interface where I'd like to have a user click a button, then
capture his next keystroke.
I can currently capture all the keys on the keyboard, except for those like
tab or the arrow keys which cause the button to lose focus when pressed. I
know that I need to unbind the window from keypress events during the capture
interval.
I've already tried unbinding from the frame, the notebook tab widget, and the
panel that's inside it, and it has not worked.
Where in the hierarchy do I need to unbind key presses to avoid having them
behave the way they normally do? Thanks
Answer:
import wx
a = wx.App(redirect=False)
class XFrame(wx.Frame):
def __init__(self,*args):
wx.Frame.__init__(self,*args)
b = wx.Button(self,-1,"Click")
b.Bind(wx.EVT_BUTTON,self.OnButton)
def OnButton(self,evt):
self.SetFocus()
self.Bind(wx.EVT_CHAR,self.OnChar)
def OnChar(self,evt):
print evt.KeyCode
self.Unbind(wx.EVT_CHAR)
f = XFrame(None,-1,"A Frame")
f.Show()
a.MainLoop()
I think anyway
|
python csv reader selecting specific rows
Question: Suppose we have a text file as given below:
sfgsdgfs >sfsf > "assfgs.jpg">sggw.sgw
sgsdfghsg>sdgsgsgsg[]
werw>"erqwer.jpg">egfwrewrw
How to extract the rows that contain .jpg? What is wrong with the following
code?
import csv
data = csv.reader (open ('outfile.txt', 'r'), delimiter = '"')
for row in data:
if '.jpg' in row:
print (row)
Answer: `row` is a list of column data items, you should iterate over it too:
import csv
data = csv.reader(open('test.csv', 'r'), delimiter='"')
for row in data:
for item in row:
if '.jpg' in item:
print(item)
prints:
assfgs.jpg
erqwer.jpg
|
Calculate Similarity of Sparse Matrix
Question: I am using Python with numpy, scipy and scikit-learn module.
I'd like to classify the arrays in very big sparse matrix. (100,000 * 100,000)
The values in the matrix are equal to 0 or 1. The only thing I have is the
index of value = 1.
a = [1,3,5,7,9]
b = [2,4,6,8,10]
which means
a = [0,1,0,1,0,1,0,1,0,1,0]
b = [0,0,1,0,1,0,1,0,1,0,1]
How can I change the index array to the sparse array in scipy ?
How can I classify those array quickly ?
Thank you very much.
Answer: If you choose the sparse `coo_matrix` you can create it passing the indices
like:
from scipy.sparse import coo_matrix
import scipy
nrows = 100000
ncols = 100000
row = scipy.array([1,3,5,7,9])
col = scipy.array([2,4,6,8,10])
values = scipy.ones(col.size)
m = coo_matrix((values, (row,col)), shape=(nrows, ncols), dtype=float)
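For the similarity/classification part (not covered above), one sketch is to
convert `m` from the snippet above to CSR and hand it to scikit-learn, which
accepts scipy sparse matrices directly; the row slice below is only an example:
from sklearn.metrics.pairwise import cosine_similarity
# CSR is the format you want for fast row slicing and arithmetic.
csr = m.tocsr()
# Pairwise cosine similarity between rows; slice first, because the full
# 100,000 x 100,000 result would not fit in memory.
sims = cosine_similarity(csr[:100], csr[:100])
print(sims.shape)  # (100, 100)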
|
displaying graph after importing txt file matplotlib
Question: I am writing a simple program to output a basic graph after importing a text
file. I get the following error:
Traceback (most recent call last):
File "C:\Users\Chris1\Desktop\attempt2\ex1.py", line 13, in <module>
x.append(int(xAndY[0]))
ValueError: invalid literal for int() with base 10: '270.286'
My Python code looks like this:
import matplotlib.pyplot as plt
x = []
y = []
readFile = open ('temp.txt', 'r')
sepFile = readFile.read().split('\n')
readFile.close()
for plotPair in sepFile:
xAndY = plotPair.split(',')
x.append(int(xAndY[0]))
y.append(int(xAndY[1]))
print x
print y
plt.plot(x, y)
plt.title('example 1')
plt.xlabel('D')
plt.ylabel('Frequency')
plt.show()
A snippet of my text file looks like this:
270.286,4.353,16968.982,1903.115
38.934,68.608,16909.727,1930.394
190.989,1.148,16785.367,1969.925
The issue seems minor, but I cannot seem to resolve it myself. Thanks.
Answer: It is quite simple, just replace the `int` conversion with `float`:
for plotPair in sepFile:
xAndY = plotPair.split(',')
x.append(float(xAndY[0]))
y.append(float(xAndY[1]))
That will fix the error.
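As an aside, assuming the file is always plain comma-separated numbers,
`numpy.loadtxt` can do the whole parsing step in one call; a sketch:
import numpy as np
import matplotlib.pyplot as plt
# Let numpy parse the comma-separated file; each row becomes a row of floats.
data = np.loadtxt('temp.txt', delimiter=',')
x, y = data[:, 0], data[:, 1]
plt.plot(x, y)
plt.title('example 1')
plt.xlabel('D')
plt.ylabel('Frequency')
plt.show()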
|
python multi thread to process file with fcntl flcok
Question: I am trying to use Python to handle a text replacement problem. There is a file
in little-endian UTF-16 format, and I want to replace the IP address in this file. First, I
read the file line by line, then replace the target string, and last, I write the new
lines back to the file. But when multiple threads operate on this file, the file gets
garbled. Here is my code.
import re
import codecs
import time
import thread
import fcntl
ip = "10.200.0.1"
searchText = r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"
def replaceFileText(fileName,searchText,replaceText,encoding):
lines = []
with codecs.open(fileName,"r",encoding) as file:
fcntl.flock(file,fcntl.LOCK_EX)
for line in file:
lines.append(re.sub(searchText,replaceText,line))
fcntl.flock(file,fcntl.LOCK_UN)
with codecs.open(fileName,"w",encoding) as file:
fcntl.flock(file,fcntl.LOCK_EX)
for line in lines:
file.write(line)
fcntl.flock(file,fcntl.LOCK_UN)
def start():
replaceFileText("rdpzhitong.rdp",searchText,ip,"utf-16-le")
thread.exit_thread()
def test(number):
for n in range(number):
thread.start_new_thread(start,())
time.sleep(1)
test(20)
I can't understand why the file is garbled; I have used the fcntl flock to keep
the read/write sequence, so where is the problem?
Answer: It's garbled because an fcntl lock is owned by a _process_, not by a _thread_,
so a process cannot use fcntl to serialize its own access. See [this
answer](http://stackoverflow.com/a/9462648/132382), for example.
You'll need to use a threading construct like a
[Lock](http://docs.python.org/2/library/threading.html#lock-objects) instead.
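A minimal sketch of that change, reusing the names from the question's script
(only `start` changes; the lock must be a single module-level object shared by
all the threads):
import threading
# One lock for the whole process; every thread must go through it.
file_lock = threading.Lock()
def start():
    # Serialize the entire read-modify-write cycle, not just the reads and writes.
    with file_lock:
        replaceFileText("rdpzhitong.rdp", searchText, ip, "utf-16-le")
    thread.exit_thread()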
|
solve an n-dimensional optimisation probl using iminuit
Question: I would like to solve an n-dimensional optimisation problem using iminuit.
So my approach is the following. I am trying to figure out how to extend this:
def f(x,y,z):
return (x-1.)**2 + (y-2*x)**2 + (z-3.*x)**2 -1.
to a variable "x" that is a numpy.array.
I would like to do something like this:
x = [1,2,3,4,5]
y = [2,4,6,8,10]# y=2x
class StraightLineChi2:
def __init__(self,x,y):
self.x = x
self.y = y
def __call__(self,m,c): #lets try to find slope and intercept
chi2 = sum((y - m*x+c)**2 for x,y in zip(self.x,self.y))
return chi2
but in my case x is my unknown, and it is an array. Like in many
optimization/minimization problems, the function is f = f(x1,...,xn) where n
can be big; x1,...,xn are the unknowns of the problem.
(These examples are taken from
[here](http://nbviewer.ipython.org/urls/raw.github.com/iminuit/iminuit/master/tutorial/tutorial.ipynb))
Something similar is achieved "hacking" pyminuit2, like described
[here](http://code.google.com/p/pyminuit/issues/detail?id=6)
ps: I wish I could have added the tag [iminuit], but I don't have enough
reputation to create a new tag... ps2: Stack Overflow is somewhat crazy... I
cannot use the word "problem" in the title!
Answer: For your example I recommend you using
[iminuit](https://github.com/iminuit/iminuit) and
[probfit](https://github.com/iminuit/probfit). Having an argument that is a list of
parameters is not exactly what you want to do, since you will quickly lose track of
which parameter is which.
Here is an example taken straight from [probfit
tutorial](http://nbviewer.ipython.org/urls/raw.github.com/iminuit/probfit/master/tutorial/tutorial.ipynb).
Also see [the
documentation](http://iminuit.github.io/probfit/api.html#probfit.costfunc.Chi2Regression)
import iminuit
import probfit
import numpy as np
x = np.linspace(0, 10, 20)
y = 3 * x + 15 + np.random.randn(len(x))
err = np.ones(len(x))
def line(x, m, c): # define it to be parabolic or whatever you like
return m * x + c
chi2 = probfit.Chi2Regression(line, x, y, err)
minuit = iminuit.Minuit(chi2)
minuit.migrad();
print(minuit.values) #{'c': 16.137947520534624, 'm': 2.8862774144823855}
|
Is a general-purpose function/object doubling decorator feasible in Python?
Question: **Background:** Let's say that we have a function that opens a frequently-used
database connection, something essentially like the following but with
additional bells and whistles:
import getpass
import MySQLdb
def myspecialconnect(user='foo', host='bar', port=80085):
    password = getpass.getpass('Enter your password: ')
return MySQLdb.connect(user, password, host, port)
And maybe sometimes, we want to open two connections, along the lines of:
read_connection = myspecialconnect()
write_connection = myspecialconnect()
What a pain - I have to enter my password twice, when all I want is the same
thing again. Of course, there are many ways to modify this one example to
avoid that - _for example_ , an argument could be added like
`myspecialconnect(multi=True)` to return two connections instead of one, or
`myspecialconnect(copies=9)` if you want to get crazy, with the corresponding
code to make that happen inside this one function. However, this special case
prompted me to wonder about a more general application.
**Question** : What if I wanted to be able to get this functionality (return
multiple copies of whatever we want) from any arbitrary function? Hmm - this
could be tricky.
First, just to confirm that it doesn't work, I tried this:
def doubled(function):
def Wrapper(*args, **kwargs):
return (function(*args, **kwargs),function(*args, **kwargs))
return Wrapper
That's okay for a function that requires no user input; otherwise, you still
have to sit there and input the exact same thing twice in a row. That's easy
enough to fix, but by now you might be able to see where this is going:
def doubled(function):
def Wrapper(*args, **kwargs):
result = function(*args, **kwargs)
return (result, result)
return Wrapper
This version takes user input only once, but it returns the same reference
twice, making it nothing more than a needlessly convoluted way to do `foo =
bar = object()`. "Aha!" says I, "maybe I should take a look at the `copy`
module." Which is what I did, only I don't quite know how it works yet...
>>> import copy
>>> a = (i for i in [1,2])
>>> a
<generator object <genexpr> at 0x03FB0878>
>>> copy.copy(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\WinPython-32bit-2.7.5.1\python-2.7.5\lib\copy.py", line 96, in copy
return _reconstruct(x, rv, 0)
File "C:\WinPython-32bit-2.7.5.1\python-2.7.5\lib\copy.py", line 329, in _reconstruct
y = callable(*args)
File "C:\WinPython-32bit-2.7.5.1\python-2.7.5\lib\copy_reg.py", line 93, in __newobj__
return cls.__new__(cls, *args)
TypeError: object.__new__(generator) is not safe, use generator.__new__()
>>> copy.deepcopy(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\WinPython-32bit-2.7.5.1\python-2.7.5\lib\copy.py", line 190, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "C:\WinPython-32bit-2.7.5.1\python-2.7.5\lib\copy.py", line 329, in _reconstruct
y = callable(*args)
File "C:\WinPython-32bit-2.7.5.1\python-2.7.5\lib\copy_reg.py", line 93, in __newobj__
return cls.__new__(cls, *args)
TypeError: object.__new__(generator) is not safe, use generator.__new__()
Of course by now I've spent about as much time as I can possibly justify (or
more) on this little side problem, which means I'm incredibly curious. Can
this be done in a way that returns copies of arbitrary instances, without
turning into a monster that is forced to explicitly handle dozens of cases,
each in their own special way?
Answer: There's no general way to do what you want. It'd be simple if you just wanted
to replay the first function call - your first try would've worked.
Unfortunately, the requirement to replay user input complicates things.
First, you don't want a copy. How would you copy a database connection?
There's state over on the other side of a network connection that you'd have
to duplicate, and you'd have to pick new ports, and it wouldn't end up really
being a copy in the sense of having the same state and properties. You want to
open up a new connection with the same parameters as the old one.
Second, there's no way for the decorator to know which inputs to replay.
Calling a function twice with the same arguments is easy. Calling a function
twice, replaying the user's input from the first call into the second call, is
messy but possible. However, if the decorator tried to replay absolutely all
input from the first call into the second call, it would end up replaying the
database's TCP responses, too. Instead of talking to the database and setting
up a connection, the second call would talk to the decorator and return a
connection object that doesn't work.
Instead of trying to double `myspecialconnect`, make a function that doesn't
need to read user input and double that. Read the password once, then pass
that into a doubled function.
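A minimal sketch of that suggestion, reusing the question's imports; the only
change is that the password is read once, up front, so the doubled function no
longer needs any user input:
import getpass
import MySQLdb
def doubled(function):
    def wrapper(*args, **kwargs):
        return (function(*args, **kwargs), function(*args, **kwargs))
    return wrapper
# Read the user input exactly once...
password = getpass.getpass('Enter your password: ')
# ...then double a function whose remaining inputs are ordinary arguments.
@doubled
def myspecialconnect(user='foo', host='bar', port=80085):
    return MySQLdb.connect(user, password, host, port)
read_connection, write_connection = myspecialconnect()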
|
python: create and update a datetime field in a sqlite db
Question: I have a data structure that I'm iterating through:
SomeList = [[ID, VarA, VarB, DateC, VarD],[ID2, VarA2, VarB2, DateC2, VarD2]...]
The DateCX variables will always be of the form:
"2013-07-15T13:58:55Z"
I've also used the `sqlite` library to create a sqlite database:
import sqlite3 as lite
con = lite.connect('test.db')
with con:
cur = con.cursor()
cur.execute("CREATE TABLE TEST(ColumnID INT, ColumnA TEXT, ColumnB TEXT, ColumnC DATETIME, Column D TEXT)")
I'm then iterating through `SomeList`:
for list in SomeList:
TempID = list[0]
TempA = list[1]
TempB = list[2]
TempDateC = list[3]
TempD = list[4]
For the date field, I've been leveraging the `strptime` function in the `time`
library to parse it in python:
TempDateC = time.strptime(TempDateC, "%Y-%m-%dT%H:%M:%SZ")
which results in the tuple that I'm expecting.
I then tried to update the `TEST` database:
allValues = (TempID, TempA, TempB, TempDateC, TempD)
cur.execute("INSERT INTO TEST VALUES(?, ?, ?, ?, ?)", allValues)
but I'm getting the following error:
sqlite3.InterfaceError: Error binding parameter 3 - probably unsupported type.
Is there something else that I have to do to convert the tuple I've created
into something that can be inserted into a SQL db?
Answer: Okay, I figured it out.
You have to
import time
import datetime
and then, rather than
TempDateC = time.strptime(TempDateC, "%Y-%m-%dT%H:%M:%SZ")
you need to parse the date with a few functions:
TempDateC = datetime.datetime.fromtimestamp(time.mktime(time.strptime(TempdateC, "%Y-%m-%dT%H:%M:%SZ")))
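A simpler equivalent, assuming the same format string, is `datetime.strptime`,
which returns a `datetime` object directly and can be bound as a sqlite
parameter:
import datetime
TempDateC = datetime.datetime.strptime("2013-07-15T13:58:55Z", "%Y-%m-%dT%H:%M:%SZ")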
|
Trouble accessing JSON data in Python for loop
Question: I can read in the JSON data and print it, but for some reason it is reading
it in as unicode, so I cannot use simple dot notation to get at the data.
test.py:
#!/usr/bin/env python
from __future__ import print_function # This script requires python >= 2.6
import json, os
myData = json.loads(open("test.json").read())
print( json.dumps(myData, indent=2) )
print( myData["3942948"] )
print( myData["3942948"][u'myType'] )
for accnt in myData:
print( " myName: %s myType: %s " % ( accnt[u'myName'], accnt[u'myType'] ) ) # TypeError: string indices must be integers
#print( " myName: %s myType: %s " % ( accnt.myName, accnt.myType ) ) # AttributeError: 'unicode' object has no attribute 'myName'
#print( " myName: %s myType: %s " % ( accnt['myName'], accnt['myType'] ) ) # TypeError: string indices must be integers
#print( " myName: %s myType: %s " % ( accnt["myName"], accnt["myType"] ) ) # TypeError: string indices must be integers
test.json:
{
"7190003": { "myName": "Infiniti" , "myType": "Cars" },
"3942948": { "myName": "Honda" , "myType": "Cars" }
}
Running it I get:
> test.py
{
"3942948": {
"myType": "Cars",
"myName": "Honda"
},
"7190003": {
"myType": "Cars",
"myName": "Infiniti"
}
}
{u'myType': u'Cars', u'myName': u'Honda'}
Cars
Traceback (most recent call last):
File "test.py", line 10, in <module>
print( " myName: %s myType: %s " % ( accnt[u'myName'], accnt[u'myType'] ) )
TypeError: string indices must be integers
So my question is: how do I read it in so that the keys are not unicode (much
preferred), or how do I access the keys in a for loop when they are unicode?
Answer: You need to use the dict `myData` instead of the string `accnt`:
for accnt in myData:
print( " myName: %s myType: %s " % ( myData[accnt][u'myName'], myData[accnt][u'myType'] ) )
You can also use the `values()` function on the `myData` dict:
for accnt in myData.values():
print( " myName: %s myType: %s " % ( accnt[u'myName'], accnt[u'myType'] ) )
|
Setuptools setup.py installing when dependencies not satisfied
Question: I have a `setup.py` that looks a bit (okay, exactly) like this:
#!/usr/bin/env python
from setuptools import setup
import subprocess
import distutils.command.build_py
class BuildWithMake(distutils.command.build_py.build_py):
"""
Build using make.
Then do the default build logic.
"""
def run(self):
# Call make.
subprocess.check_call(["make"])
# Keep installing the Python stuff
distutils.command.build_py.build_py.run(self)
setup(name="jobTree",
version="1.0",
description="Pipeline management software for clusters.",
author="Benedict Paten",
author_email="[email protected]",
url="http://hgwdev.cse.ucsc.edu/~benedict/code/jobTree.html",
packages=["jobTree", "jobTree.src", "jobTree.test", "jobTree.batchSystems",
"jobTree.scriptTree"],
package_dir= {"": ".."},
install_requires=["sonLib"],
# Hook the build command to also build with make
cmdclass={"build_py": BuildWithMake},
# Install all the executable scripts somewhere on the PATH
scripts=["bin/jobTreeKill", "bin/jobTreeStatus",
"bin/scriptTreeTest_Sort.py", "bin/jobTreeRun",
"bin/jobTreeTest_Dependencies.py", "bin/scriptTreeTest_Wrapper.py",
"bin/jobTreeStats", "bin/multijob", "bin/scriptTreeTest_Wrapper2.py"])
It installs the package perfectly fine when run with `./setup.py install`.
However, it does this whether or not the "sonLib" package is installed,
ignoring the dependency.
Is this expected behavior? Should a `setup.py install` blithely proceed if the
dependencies are not installed, leaving it up to pip or whatever to install
them beforehand? If not, and `setup.py install` ought to fail when
dependencies are absent, what am I doing wrong?
**EDIT** : Some version information:
Python 2.7.2 (default, Jan 19 2012, 21:40:50)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import setuptools
>>> setuptools.__version__
'0.6c12'
>>>
Answer: The [default `install` command for Distutils
`setup`](https://docs.python.org/3/distutils/commandref.html#installing-
modules-the-install-command-family) doesn't know anything about dependencies.
If you are running that, you're right that dependencies will not be checked.
Just going by what you've shown in the `setup.py`, though, you are using
Setuptools for the `setup` function. The [Setuptools `install`
command](https://setuptools.pypa.io/en/latest/setuptools.html#install-run-
easy-install-or-old-style-installation) is declared to run `easy_install`,
which in turn does check and download dependencies.
It is possible you are explicitly invoking the Distutils `install`, by
specifying `install --single-version-externally-managed`.
|
Python style: use imported class as namespace for related custom data?
Question: I was wondering if it's OK to use an imported, 3rd-party class as a namespace
for your related custom variables? Say, the following code:
import pycurl
curlm = pycurl.CurlMulti()
curlm.pool = [pycurl.Curl() for i in range(POOL_SIZE)]
curlm.running = set()
In this example, pycurl.CurlMulti is some 3rd-party class and _pool_ and
_running_ are my variables (not provided by CurlMulti), but intended to be
used together with curlm.
Answer: Don't do it. What you want is a
[WeakKeyDictionary](http://docs.python.org/2/library/weakref.html#weakref.WeakKeyDictionary).
You can associate arbitrary values with arbitrary objects without worrying
about different modules overwriting each other's variables or the dict keeping
objects alive too long.
Suppose you write a module `foo.py` that sets `thing.whatever = 5`. Two Python
versions later, `thing` offers a `whatever` method. Everything breaks, and you
didn't even touch your code.
Suppose you write a module `foo.py` that sets `thing.whatever = 5`. Two months
later, you're working on a different project, and you write a module `bar.py`
that sets `thing.whatever = 4` on the same `thing`. Horrible breakage occurs
due to unrelated decisions you made months ago.
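A minimal sketch of that pattern, using a stand-in class instead of pycurl
(note that the keyed object has to support weak references):
from weakref import WeakKeyDictionary
class Thing(object):
    """Stand-in for the third-party object you don't control."""
    pass
extras = WeakKeyDictionary()
thing = Thing()
extras[thing] = {"pool": [], "running": set()}
# Anywhere that has `thing` in hand, look your data up here instead of reading
# an attribute you bolted onto the object itself.
extras[thing]["running"].add("transfer-1")
# When `thing` is garbage collected, its entry disappears automatically.
del thing
print(len(extras))  # 0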
|
Python 2: SMTPServerDisconnected: Connection unexpectedly closed
Question: I have a small problem with sending an email in Python:
#me == my email address
#you == recipient's email address
me = "[email protected]"
you = "[email protected]"
# Create message container - the correct MIME type is multipart/alternative.
msg = MIMEMultipart('alternative')
msg['Subject'] = "Alert"
msg['From'] = me
msg['To'] = you
# Create the body of the message (a plain-text and an HTML version).
html = '<html><body><p>Hi, I have the following alerts for you!</p></body></html>'
# Record the MIME types of both parts - text/plain and text/html.
part2 = MIMEText(html, 'html')
# Attach parts into message container.
# According to RFC 2046, the last part of a multipart message, in this case
# the HTML message, is best and preferred.
msg.attach(part2)
# Send the message via local SMTP server.
s = smtplib.SMTP('aspmx.l.google.com')
# sendmail function takes 3 arguments: sender's address, recipient's address
# and message to send - here it is sent as one string.
s.sendmail(me, you, msg.as_string())
s.quit()
So before now, my program didn't give me an error, but it also didn't send me
an email. And now Python gives me an error:
SMTPServerDisconnected: Connection unexpectedly closed
How can I fix that?
Answer: Most probably the gmail server rejected the connection after the data command
(very nasty of them to do so at this stage :). The actual message is most
probably this one:
retcode (421); Msg: 4.7.0 [ip.octets.listed.here 15] Our system has detected an unusual rate of
4.7.0 unsolicited mail originating from your IP address. To protect our
4.7.0 users from spam, mail sent from your IP address has been temporarily
4.7.0 rate limited. Please visit
4.7.0 https://support.google.com/mail/answer/81126 to review our Bulk Email
4.7.0 Senders Guidelines. qa9si9093954wjc.138 - gsmtp
How do I know that? Because I've tried it :) with the `s.set_debuglevel(1)`,
which prints the SMTP conversation and you can see firsthand what's the issue.
You've got two options here:
1. Continue using that relay; [as explained by Google](https://support.google.com/a/answer/176600?hl=en), it's unencrypted gmail-to-gmail only, and you have to un-blacklist your ip through their procedure
2. The most fool-proof option is to switch to TLS with authentication
Here's how the changed source looks like:
# skipped your comments for readability
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
me = "[email protected]"
my_password = r"your_actual_password"
you = "[email protected]"
msg = MIMEMultipart('alternative')
msg['Subject'] = "Alert"
msg['From'] = me
msg['To'] = you
html = '<html><body><p>Hi, I have the following alerts for you!</p></body></html>'
part2 = MIMEText(html, 'html')
msg.attach(part2)
# Send the message via gmail's regular server, over SSL - passwords are being sent, afterall
s = smtplib.SMTP_SSL('smtp.gmail.com')
# uncomment if interested in the actual smtp conversation
# s.set_debuglevel(1)
# do the smtp auth; sends ehlo if it hasn't been sent already
s.login(me, my_password)
s.sendmail(me, you, msg.as_string())
s.quit()
Now, if you try to 'cheat' the system and send with a different (non-gmail)
address it's gonna a) require you to connect to a different hostname (some of
the MX records for gmail), then b) stop you and close the connection on the
grounds of blacklisted ip, and c) do reverse DNS, DKIM and lots of other
countermeasures to make sure you're actually in control of the domain you
presented in the MAIL FROM: address.
Finally, there's also option 3) - use any other email relaying service, there
are tons of good ones :)
|
Recursive function gives no output
Question: I'm scraping all the URLs of my domain with a recursive function. But it outputs
nothing, without any error.
#usr/bin/python
from bs4 import BeautifulSoup
import requests
import tldextract
def scrap(url):
for links in url:
main_domain = tldextract.extract(links)
r = requests.get(links)
data = r.text
soup = BeautifulSoup(data)
for href in soup.find_all('a'):
href = href.get('href')
if not href:
continue
link_domain = tldextract.extract(href)
if link_domain.domain == main_domain.domain :
problem.append(href)
elif not href == '#' and link_domain.tld == '':
new = 'http://www.'+ main_domain.domain + '.' + main_domain.tld + '/' + href
problem.append(new)
return len(problem)
return scrap(problem)
problem = ["http://xyzdomain.com"]
print(scrap(problem))
When I create a new list, it works, but I don't want to make a list every time
for every loop.
Answer: You need to structure your code so that it meets the pattern for recursion, which
your current code doesn't. You also should not reuse a name for something else,
e.g. `href = href.get('href')`, because the original object stops being reachable
once the variable takes its place. Finally, your code as it currently stands will
only ever return the `len()`, because that `return` is unconditionally reached before
`return scrap(problem)`:
def Recursive(Factorable_problem)
if Factorable_problem is Simplest_Case:
return AnswerToSimplestCase
else:
return Rule_For_Generating_From_Simpler_Case(Recursive(Simpler_Case))
for example:
def Factorial(n):
""" Recursively Generate Factorials """
if n < 2:
return 1
else:
return n * Factorial(n-1)
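Applied to the crawling problem, the same shape looks roughly like this (the
fetching/extraction details from the question are elided, so treat this as a
skeleton rather than a drop-in fix):
def scrape(urls, seen=None):
    """Recursively collect same-domain links until a pass finds nothing new."""
    seen = set(seen or [])
    new_links = set()
    for url in urls:
        found = set()  # placeholder for the requests/BeautifulSoup extraction logic
        new_links |= found - seen
    if not new_links:  # simplest case: nothing new left to visit
        return seen
    return scrape(new_links, seen | new_links)  # recurse on the remaining work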
|
Play 2 sounds simultaneously with multiprocessing in python
Question: I need to play 2 sounds simultaneously, with multiprocessing rather than
threads, to see if it solves a problem where threads play the audio in
sequence rather than in parallel. I am guessing it's due to the Global
Interpreter Lock (GIL) in python.
I added [a question](http://stackoverflow.com/questions/17711672/simultaneous-
record-audio-from-mic-and-play-it-back-with-effect-in-python) 2 days ago, but
my description is overcomplicated. This is the simple version:
The audio gets imported as numpy array. I take this array and play it using
the scikits.audiolab module:
import scikits.audiolab as audiolab
# This is how I import my wav file. "frames" is the numpy array, "fs" = sampling
# frequency, "encoder" = quantizing at 16 bits
frames, fs, encoder = audiolab.wavread('audio.wav')
# This is how I play my wav file. audiolab plays the frames array at a frequency of
# 44100 Hz
audiolab.play(frames, fs=44100)
That's fine, but this is what I need help on: playing 2 files at the same time
using multiprocessing.
frames1, fs1, encoder1 = audiolab.wavread('audio1.wav')
frames2, fs2, encoder2 = audiolab.wavread('audio2.wav')
audiolab.play(frames1, fs=44100)
audiolab.play(frames2, fs=44100)
Answer: The better way to approach this is to use a library that already knows how to
mix audio streams -- two different processes trying to share the audio
hardware is at best a wonky way to address this problem.
Look at [Pygame](http://www.pygame.org/wiki/about) or
[PyAudio](http://people.csail.mit.edu/hubert/pyaudio/) (Python bindings to
[PortAudio](http://www.portaudio.com/)).
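For example, a minimal sketch with Pygame's mixer (assuming the same file names
as in the question); simultaneously playing Sounds are mixed for you inside one
process:
import pygame
pygame.mixer.init(frequency=44100)
sound1 = pygame.mixer.Sound('audio1.wav')
sound2 = pygame.mixer.Sound('audio2.wav')
sound1.play()
sound2.play()
# Keep the script alive until playback finishes.
while pygame.mixer.get_busy():
    pygame.time.wait(100)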
|
How do I catch a 404 error in urllib? (python 3)
Question: I've been reading tens of examples for similar issues, but I can't get any of
the solutions I've seen or their variants to run. I'm screen scraping, and I
just want to ignore 404 errors (skip the pages). I get
_'AttributeError: 'module' object has no attribute 'HTTPError'._
I've tried 'URLError' as well. I've seen nearly identical syntax accepted as
working answers. Any ideas? Here's what I've got:
import urllib
import datetime
from bs4 import BeautifulSoup
class EarningsAnnouncement:
def __init__(self, Company, Ticker, EPSEst, AnnouncementDate, AnnouncementTime):
self.Company = Company
self.Ticker = Ticker
self.EPSEst = EPSEst
self.AnnouncementDate = AnnouncementDate
self.AnnouncementTime = AnnouncementTime
webBaseStr = 'http://biz.yahoo.com/research/earncal/'
earningsAnnouncements = []
dayVar = datetime.date.today()
for dte in range(1, 30):
currDay = str(dayVar.day)
currMonth = str(dayVar.month)
currYear = str(dayVar.year)
if (len(currDay)==1): currDay = '0' + currDay
if (len(currMonth)==1): currMonth = '0' + currMonth
dateStr = currYear + currMonth + currDay
webString = webBaseStr + dateStr + '.html'
try:
#with urllib.request.urlopen(webString) as url: page = url.read()
page = urllib.request.urlopen(webString).read()
soup = BeautifulSoup(page)
tbls = soup.findAll('table')
tbl6= tbls[6]
rows = tbl6.findAll('tr')
rows = rows[2:len(rows)-1]
for earn in rows:
earningsAnnouncements.append(EarningsAnnouncement(earn.contents[0], earn.contents[1],
earn.contents[3], dateStr, earn.contents[3]))
except urllib.HTTPError as err:
if err.code == 404:
continue
else:
raise
dayVar += datetime.timedelta(days=1)
Answer: It looks like for urllib (not urllib2) that the exception is
`urllib.error.HTTPError`, not `urllib.HTTPError`. See the
[documentation](http://docs.python.org/3.1/library/urllib.error.html) for more
information.
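A small self-contained version of that pattern (the example URL is just the
question's base address plus a date):
import urllib.request
import urllib.error
def fetch(url):
    """Return the page bytes, or None on a 404."""
    try:
        return urllib.request.urlopen(url).read()
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise
page = fetch('http://biz.yahoo.com/research/earncal/20130715.html')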
|
Socket echo server in go
Question: I'm trying to implement a simple socket echo server in Go. This is the code:
package main
import (
"fmt"
"net"
"sync"
)
func echo_srv(c net.Conn, wg sync.WaitGroup) {
defer c.Close()
defer wg.Done()
for {
var msg []byte
n, err := c.Read(msg)
if err != nil {
fmt.Printf("ERROR: read\n")
fmt.Print(err)
return
}
fmt.Printf("SERVER: received %v bytes\n", n)
n, err = c.Write(msg)
if err != nil {
fmt.Printf("ERROR: write\n")
fmt.Print(err)
return
}
fmt.Printf("SERVER: sent %v bytes\n", n)
}
}
func main() {
var wg sync.WaitGroup
ln, err := net.Listen("unix", "./sock_srv")
if err != nil {
fmt.Print(err)
return
}
defer ln.Close()
conn, err := ln.Accept()
if err != nil {
fmt.Print(err)
return
}
wg.Add(1)
go echo_srv(conn, wg)
wg.Wait()
}
For some reason as soon as a client connects, c.Read() does not block and the
error message is printed. So, my first question is: Shouldn't c.Read() block
until a client sends something to the socket?
And second: After printing the error message, the server does not terminate.
This is what I see when executing the program in gdb:
(gdb) run
Starting program: src/sockets/server/server
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
[New Thread 0x7fffe7806700 (LWP 28594)]
[New Thread 0x7fffe7005700 (LWP 28595)]
ERROR: read
EOF^C
Program received signal SIGINT, Interrupt.
runtime.epollwait () at /usr/lib/go/src/pkg/runtime/sys_linux_amd64.s:383
383 RET
(gdb) info goroutines
1 waiting runtime.park
2 syscall runtime.goexit
* 3 syscall runtime.entersyscallblock
I have similar echo servers in Python and C and they work fine. For
completeness I also post the socket client application below (it works fine
with my C and Python servers).
Client:
package main
import (
"bufio"
"fmt"
"net"
"os"
"strings"
)
func main() {
stdin := bufio.NewReader(os.Stdin)
conn, err := net.Dial("unix", "./sock_srv")
if err != nil {
fmt.Print(err)
return
}
defer conn.Close()
for {
fmt.Print("Enter message to transmit: ")
msg, err := stdin.ReadString('\n')
if err != nil {
fmt.Print(err)
return
}
msg = msg[:len(msg)-1]
if (strings.ToLower(msg) == "quit") || (strings.ToLower(msg) == "exit") {
fmt.Println("bye")
return
}
n, err := conn.Write([]byte(msg))
if err != nil {
fmt.Print(err)
return
}
fmt.Printf("CLIENT: sent %v bytes\n", n)
n, err = conn.Read([]byte(msg))
if err != nil {
fmt.Print(err)
return
}
fmt.Printf("CLIENT: received %v bytes\n", n)
fmt.Println("Received message:", msg)
}
}
Answer: Here is a working `echo_srv` for you. You'll need @jnml's suggestion too!
* actually allocate some buffer to receive into - you made a 0 byte buffer!
* exit neatly on EOF
* only write the bytes received with `msg[:n]`
func echo_srv(c net.Conn, wg *sync.WaitGroup) {
defer c.Close()
defer wg.Done()
for {
msg := make([]byte, 1000)
n, err := c.Read(msg)
if err == io.EOF {
fmt.Printf("SERVER: received EOF (%d bytes ignored)\n", n)
return
} else if err != nil {
fmt.Printf("ERROR: read\n")
fmt.Print(err)
return
}
fmt.Printf("SERVER: received %v bytes\n", n)
n, err = c.Write(msg[:n])
if err != nil {
fmt.Printf("ERROR: write\n")
fmt.Print(err)
return
}
fmt.Printf("SERVER: sent %v bytes\n", n)
}
}
|
What is the fastest way to do I/O in Python?
Question: Like those programming challenges, right now I do the following:
For a single variable:
x = int(sys.stdin.readline())
for many variables
A, B, C = map(int,sys.stdin.readline().split())
Is this optimal or are there faster ways?
Answer: If you have numpy available, the numpy loading functions are very fast. For
example:
>>> import numpy
>>> s = '1\n2\n3\n4\n'
>>> data = numpy.fromstring(s, dtype=int, sep='\n')
>>> data
array([1, 2, 3, 4])
This example loads from a string, but you can also load directly from an open
file using
[numpy.fromfile](http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html#numpy.fromfile).
|
Python Printing A List Issue
Question: I'm really struggling to work out how to print to a list. I'd like to print
the server response codes of URLs I specify. Do you know how I'd alter the code
to print the output into a list? If not, do you know where I'd find the
answer? I've been searching around for a couple of weeks now.
Here's the code:
import urllib2
for url in ["http://stackoverflow.com/", "http://stackoverflow.com/questions/"]:
try:
connection = urllib2.urlopen(url)
print connection.getcode()
connection.close()
except urllib2.HTTPError, e:
print e.getcode()
prints:
200
200
I'd like to have:
[200, 200]
Answer: Do you actually want a list? Or just to print it like a list? In either case
the following should work:
import urllib2
out = []
for url in ["http://stackoverflow.com/", "http://stackoverflow.com/questions/"]:
try:
connection = urllib2.urlopen(url)
out.append(connection.getcode())
connection.close()
except urllib2.HTTPError, e:
out.append(e.getcode())
print out
It just makes a list containing the codes, then prints the list.
|
Getting a list of values from a list of dict in python: Without using list comprehension
Question: I have a list of dicts, for example:
[{'id': 1L, 'name': u'Library'}, {'id': 2L, 'name': u'Arts'}, {'id': 3L, 'name': u'Sports'}]
Now, I have to retrieve the following list from this dict without using list
comprehension
[u'Library', u'Arts', u'Sports']
Is there any way to achieve this in python? I saw many similar questions, but
all answers were using list comprehension.
Any suggestion is appreciated. Thanks in advance.
Answer: You could use `itemgetter`:
from operator import itemgetter
categories = map(itemgetter('name'), things)
But a list comprehension is good too. What's wrong with list comprehensions?
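For comparison, the comprehension the question is avoiding (using `things` for
the list of dicts, as above) is just:
things = [{'id': 1L, 'name': u'Library'}, {'id': 2L, 'name': u'Arts'}, {'id': 3L, 'name': u'Sports'}]
names = [d['name'] for d in things]  # [u'Library', u'Arts', u'Sports']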
|
Running webapp2 app in a multiple WSGI apps set up with Werkzeug
Question: I am trying to run a django app and a webapp2 app together in one python
interpreter. I'm using werkzeug for that as described
[here](http://flask.pocoo.org/docs/patterns/appdispatch/).
Here's my sample code.
from werkzeug.wsgi import DispatcherMiddleware
from django_app import application as djangoapp
from webapp2_app import application as webapp2app
application = DispatcherMiddleware(djangoapp, {
'/backend': webapp2app
})
After doing this, I would expect all requests to /backend to be treated by
the webapp2 app as /. But it treats the requests as /backend. This works fine
with other WSGI apps using Django or Flask. The problem only appears with
webapp2 apps. Does anyone have any suggestions on how to overcome this? Is there
any other way I can achieve my purpose without using werkzeug for serving
multiple WSGI apps under one domain?
Answer: `DispatcherMiddleware` fabricates the WSGI environment for each of your apps,
and in particular sets `SCRIPT_NAME`. Django can deal with it via the configuration
variable `FORCE_SCRIPT_NAME = ''`
([docs](https://docs.djangoproject.com/en/1.5/ref/settings/#force-script-
name)).
With webapp2 it's slightly more complicated. You can create a subclass of
`webapp2.WSGIApplication`, override the `__call__()` method, and force
`SCRIPT_NAME` to the desired value. So in your `webapp2_app.py` it could look like
this:
import webapp2
class WSGIApp(webapp2.WSGIApplication):
def __call__(self, environ, start_response):
environ['SCRIPT_NAME'] = ''
return super(WSGIApp, self).__call__(environ, start_response)
# app = WSGIApp(...)
|
Return a Dynamic png from Pylons
Question: What I'm trying to do is have my Pylons app dynamically generate an image
based on some data, and return it in such a way that it can be viewed in a
browser.
So far I am generating my image like this:
import Image, ImageDraw
image = Image.new("RGB", (width, height),"black")
img_out = ImageDraw.Draw(image)
img_out.polygon(...
img_out.text(...
#etc
The image is successfully generated, and can even be saved to file like this:
img_out.save(filepath)
My problem is that I am not trying to write it to disk, but rather return it
via a Pylons response. Based off of the answers to [another
question](http://stackoverflow.com/questions/2413707/stream-a-file-to-the-
http-response-in-pylons) I was able to get this far:
from paste.fileapp import FileApp
my_headers = [('Content-Disposition', 'attachment; filename=\"' + user_filename + '\"'), ('Content-Type', 'text/plain')]
file_app = FileApp(filepath, headers=my_headers)
return file_app(request.environ, self.start_response)
Using this solution I can take a png I have saved on the server side and
return it to the user for download. Still, there are two problems here. The
first is that I am forced to write the file to disk and then serve it from
disk, rather than simply using the image straight from code. The second is
that it is actually returning the file, therefore a user is forced to download
it rather than viewing it in their browser.
What I want is for the user to be able to view the file in their browser, not
download it themselves. IDEALLY I wouldn't have to save the image to disk on
the server side either, but I realize it is likely impossible to serve it
without it living on either the server or client's computer.
So my question is this. Can I serve the image straight from code such that the
user will simply see the image in their browser as the response to their
request? If not, can I save the image to disk server side and serve it from
there such that the user will see the image in their browser and not be
prompted to download a file?
(For what it's worth I am using Python 2.6.2 and PasteScript 1.7.4.2)
Answer: Browsers can accept raw data as part of the `src` attribute of an `img` tag,
base64 encoded...
from PIL import Image
from cStringIO import StringIO
a = Image.new('RGB', (10, 10), 'black')
# ...
buf = StringIO()
a.save(buf, 'png')
b64img = '<img src="data:image/png;base64,{0}" />'.format(buf.getvalue().encode('base64'))
So what you do here is build your image, save it into a string buffer in
memory (instead of on disk), then encode it to base64... You return the
`<img>` tag as part of the page (or purely by itself if being lazy) using
whatever templates/etc... Maybe just a `return Response(b64img)` would do
it...
|
tkinter progress bar with file list
Question: I have a loop that reads files in Python, like below:
def Rfile():
for fileName in fileList:
….
How can I add a tkinter progress bar that will be linked to the for loop and
the size of the fileList (start before the loop and close after the loop)?
Thx
Answer: This little script should demonstrate how to do that:
import tkinter as tk
from time import sleep
# The truncation will make the progressbar more accurate
# Note however that no progressbar is perfect
from math import trunc
# You will need the ttk module for this
from tkinter import ttk
# Just to demonstrate
fileList = range(10)
# How much to increase by with each iteration
# This formula is in proportion to the length of the progressbar
step = trunc(100/len(fileList))
def MAIN():
"""Put your loop in here"""
for fileName in fileList:
# The sleeping represents a time consuming process
# such as reading a file.
sleep(1)
# Just to demonstrate
print(fileName)
# Update the progressbar
progress.step(step)
progress.update()
root.destroy()
root = tk.Tk()
progress = ttk.Progressbar(root, length=100)
progress.pack()
# Launch the loop once the window is loaded
progress.after(1, MAIN)
root.mainloop()
You can always tweak it to perfectly satisfy your needs.
|
Using a text file to receive a string for a variable in Python, without defining it
Question: I have a text file in which there are several variables. Most of them are used
in a Bash script of mine, but I'd like to use the same text file for my Python
script. For the lines that are not properly formatted for Python, I want my
script to just ignore them. For those that are properly formatted, I want the
script to check them and, if it's the variable I'm looking for, use it.
import sys
import re
for ln in open("thefile.txt"):
m = re.match(r"(?P<varname>[^=]*)\s*=\s*(?P<value>.+)", ln)
if m:
varname = m.group("varname")
value_string = m.group("value")
value = eval(value_string)
print value
# so if the variables name is THISVARIABLE, get that value:
if varname == "THISVARIABLE":
mypythonvariable == value
I'm getting the following error:
**NameError: name 'Somevariableinmytextfile' is not defined**
The Somevariableinmytextfile is the first variable in that file.
My question:
Do I have to define every variable in the txt file, in order to get rid of
this error? If not, what shall I do? I'm very new at Python. This is my first
program.
Answer: The error is `eval` complaining that the contents of `value_string` have no
meaning as a whatever-it-is.
The _real_ error is using `eval` at all. (A good post on the pitfalls can be
found [here](http://me.veekun.com/blog/2012/03/24/on-principle/).) **You don't
even need to `eval` here** - leaving `value_string` as the string the regex
gave you will be _just fine_.
# The problem with the present approach
Sample `thefile.txt`:
foo=bar
baz=42
quux=import os; os.shutdown()
* When parsing `foo`, Python complains that `bar` isn't defined. (Simple.)
* When parsing `bar`, Python gives you an `int` instead of a `str`. (No real problem...)
* When parsing `quux`, Python shuts down your computer. (_Uh oh!_)
# Why you don't need `eval`
You want a string value, correct? The regex already gives you a string!
varname = m.group("varname")
value = m.group("value")
print value
if varname == "THISVARIABLE":
mypythonvariable = value # You meant = instead of ==?
|
mixing pixels of an image manually using python
Question: I am trying to create an algorithm that mixes up the pixels of an image in such a
way that I can bring the image back to how it was before, but I do not know how to do this.
I'm using Python and PIL, but I can use other libraries.
Example: [original image] to [scrambled image] and back to [original image]
Thank you.
Answer: This should do it. There's no error handling, it doesn't follow pep8
standards, it uses slow PIL operations and it doesn't use an argument parsing
library. I'm sure there are other bad things about it also.
It works by seeding python's random number generator with an invariant of the
image under scrambling. The hash of the size is used. Since the size doesn't
changed, a random sequence built on it will be the same for all images that
share the same size. That sequence is used as a one-to-one mapping, therefore
it's reversible.
The script may be invoked twice from a shell to create two images,
"scrambled.png" and "unscrambled.png". "Qfhe3.png" is the source image.
python scramble.py scramble "./Qfhe3.png"
python scramble.py unscramble "./scrambled.png"
-
#scramble.py
from PIL import Image
import sys
import os
import random
def openImage():
return Image.open(sys.argv[2])
def operation():
return sys.argv[1]
def seed(img):
random.seed(hash(img.size))
def getPixels(img):
w, h = img.size
pxs = []
for x in range(h):
for y in range(w):
pxs.append(img.getpixel((x, y)))
return pxs
def scrambledIndex(pxs):
idx = range(len(pxs))
random.shuffle(idx)
return idx
def scramblePixels(img):
seed(img)
pxs = getPixels(img)
idx = scrambledIndex(pxs)
out = []
for i in idx:
out.append(pxs[i])
return out
def unScramblePixels(img):
seed(img)
pxs = getPixels(img)
idx = scrambledIndex(pxs)
out = range(len(pxs))
cur = 0
for i in idx:
out[i] = pxs[cur]
cur += 1
return out
def storePixels(name, size, pxs):
outImg = Image.new("RGB", size)
w, h = size
pxIter = iter(pxs)
for x in range(h):
for y in range(w):
outImg.putpixel((x, y), pxIter.next())
outImg.save(name)
def main():
img = openImage()
if operation() == "scramble":
pxs = scramblePixels(img)
storePixels("scrambled.png", img.size, pxs)
elif operation() == "unscramble":
pxs = unScramblePixels(img)
storePixels("unscrambled.png", img.size, pxs)
else:
sys.exit("Unsupported operation: " + operation())
if __name__ == "__main__":
main()
|
Python/WXWidgets: ST_NO_AUTORESIZE not being honored for wx.StaticText
Question: I want to throw up a view in the center of the screen at a fixed size, with
some static text being displayed centered both horizontally and vertically.
So far, I have the following code:
import wx
class DisplayText(wx.Dialog):
def __init__(self, parent, text="", displayMode=0):
# Initialize dialog
wx.Dialog.__init__(self, parent, size=(480,320), style=( wx.DIALOG_EX_METAL | wx.STAY_ON_TOP ) )
# Center form
self.Center()
self.txtField = wx.StaticText(self, label=text, pos=(80,120), size=(320,200), style=wx.ALIGN_CENTRE_HORIZONTAL | wx.ST_NO_AUTORESIZE)
self.txtField.SetFont(wx.Font(24, wx.DEFAULT, wx.BOLD, 0))
app = wx.App(False)
c = DisplayText(None, text="Now is the time for all good men to come to the aid of their country.")
c.Show()
app.MainLoop()
The goal is to actually have the text vertically centered, but for now, I was
just trying to be explicit about the positioning of the static text on the
frame.
For a brief split second, the text appears in the position I put it in, but
then it quickly jumps to the very top bound of the window and expands to the
maximum width. (I deliberately set the width and position low so I'd be able
to see if this behavior was occurring or not.)
It does not matter if I use wx.Dialog or wx.Frame.
As you can see I did define the NO_AUTORESIZE flag, but this is not being
honored.
Can anyone explain what's happening?
Python 2.7.5/wxWidgets 2.8.12.1/Mac OS X 10.8.4
Answer: Turns out that it's a limitation of Mac OS X's native dialog implementation.
The following made it work on OS X. I never did try it on Windows but from
other forum posts it appears it would have worked as-is on Windows.
import wx
class DisplayText(wx.Dialog):
def __init__(self, parent, text="", displayMode=0):
# Initialize dialog
wx.Dialog.__init__(self, parent, size=(480,320), style=( wx.DIALOG_EX_METAL | wx.STAY_ON_TOP ) )
# Center form
self.Center()
# (For Mac) Setup a panel
self.panel = wx.Panel(self)
# Create text field
self.txtField = wx.StaticText(self.panel, label=text, pos=(80,120), size=(320,200), style=wx.ALIGN_CENTRE_HORIZONTAL | wx.ST_NO_AUTORESIZE)
self.txtField.SetFont(wx.Font(24, wx.DEFAULT, wx.BOLD, 0))
self.txtField.SetAutoLayout(False)
app = wx.App(False)
c = DisplayText(None, text="Now is the time for all good men to come to the aid of their country.")
c.Show()
app.MainLoop()
|
Python Random Map Generation with Perlin Noise
Question: Recently, I've been attempting to defeat one of my main weaknesses in
programming in general, random generation. I thought it would be an easy thing
to do, but the lack of simple information is killing me on it. I don't want to
sound dumb, but it feels to me like most of the information from places like
[this](http://freespace.virgin.net/hugo.elias/models/m_perlin.htm) is written
for mathematicians who went to college to graduate in theoretical mathematics.
I just don't understand what I'm meant to do with that information in order to
apply it to programming in a language such as python.
I've spent a few days staring at equations and making attempt after attempt,
but after all those days, after ripping my code apart again and again, all
that's been working properly this entire time is this noise generator that
produces basic white noise:
import random
import math
random.seed(0)
def generateWhiteNoise(width,height):
noise = [[r for r in range(width)] for i in range(height)]
for i in range(0,height):
for j in range(0,width):
noise[i][j] = random.randint(0,1)
return noise
noise = generateWhiteNoise(50,12)
for i in noise:
print()
for o in i:
if(o == 0):
print('-',end='')
else:
print('#',end='')
This code produces this result:
##-######--#--#-#--##-###-###---#-##-#-----#--##-#
#-#-##-##-#----##------##--#####-#-##---#--#-##---
-------#-#------#---#-#---###--#--#-###-----##-#--
######--#-#-#--####-###---#---###-##--#-#-##--####
-#----###--------##--##--##-#-#--#----###-####--##
---####-#--#--###-#-#--#--#####--####-#-##-##--#--
----#--####-#-#-#-#-#---#--###------###--#-######-
--###--#-###-------#-##--###---#-####----###-#####
#----##--##-#--##-###--#----#-#-##--##-#-##---###-
##---##----##--##--#--#--###-###-#--#-##---#------
-##----#-###---######---#-#---#---###---#---###-##
#--##-##-###-###---#--##-##--##-##-#-#-##--#-#-##-
I'm wanting it to eventually produce something like this:
--------------------------------------------------
------------------####----------------------------
-----------------#####----------------------------
----------------#####-----------------------------
---------------#####--------------###-------------
---------------#####--------------###-------------
---------------------------------####-------------
---######------------------------####-------------
---######------------###--------------------------
----########---------###--------------------------
-----#######---------###--------------------------
------###-----------------------------------------
How can I manage to smooth out the white-noise I generate, and turn it into
islands? Can anyone explain it in a very simplistic way for me?
I may be thinking about all of this very wrong.
Answer: Just use [Noise](https://github.com/caseman/noise). Good coders code, great
reuse.
Here's a [very basic
example](https://github.com/caseman/noise/blob/master/examples/1dnoise.py)
(others can be found in the /examples directory).
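For instance, a minimal sketch with that library: sample 2-D Perlin noise on a
grid and threshold it, which already produces island-like blobs instead of white
noise (the scale and threshold values here are arbitrary):
import noise  # pip install noise
width, height = 50, 12
scale = 10.0
for y in range(height):
    row = ''
    for x in range(width):
        # pnoise2 returns a value in roughly [-1, 1]; thresholding it
        # turns the smooth noise field into "land" and "water".
        value = noise.pnoise2(x / scale, y / scale, octaves=4)
        row += '#' if value > 0.1 else '-'
    print(row)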
|
Upload file with framework Zope
Question: I would like users of my Zope/Plone website to be able to upload (big) files
(>1 GB) to the server.
I have a form in html :
<form enctype="multipart/form-data" action="upload.py" method="post">
<p>File: <input type="file" name="file"></p>
<p><input type="submit" value="Upload"></p>
</form>
I have an external script with ZOPE : upload.py
def get(self, REQUEST):
filename = REQUEST.file['file']
Unfortunately, I don't know what to do with this file...
I found some tutorials, but I think I'm on the wrong track (because these methods
can't work with Zope?):
**CGI** : <http://webpython.codepoint.net/cgi_file_upload>
**ftplib** : [Python Script Uploading files via
FTP](http://stackoverflow.com/questions/12613797/python-script-uploading-
files-via-ftp)
Thanks for your advice.
Answer: It depends on how and where you want to store it.
The REQUEST.file is a file object where you can read, seek, tell etc the
contents from.
You can store it like a blob:
from ZODB.blob import Blob
blob = Blob()
bfile = blob.open('w')
bfile.write(REQUEST.file.read())
bfile.close()
# save the blob somewhere now
context.myfile = blob
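For a file that can be larger than 1 GB you probably don't want it all in memory
at once; a sketch of a chunked copy into the blob (same names as above):
import shutil
bfile = blob.open('w')
shutil.copyfileobj(REQUEST.file, bfile)  # copies in chunks rather than one big read
bfile.close()
context.myfile = blob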
|
What are \xHEX characters and is there a table for them?
Question: When reading a text file, I read these characters; when printed out to the console
it outputs blanks or �:
['\x80', '\xc3', '\x94', '\x99', '\x98','\x9d', '\x9c', '\xa9', '\xa6', '\xe2']
What are these \xHEX characters? Is there a link to a table to look up these
characters?
**SOLVED:**
It's not an `ascii` text file; it was a Unicode `utf8` file. That was why I was
unable to get the correct characters.
For Java:
import java.io.*
File infile = new File("/home/foo/bar.txt");
BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(infile), "UTF8"));
while ((str = in.readLine()) != null) {
System.out.println(str);
}
if `System.out.println` complains, try:
PrintStream out = new PrintStream(System.out, true, "UTF-8");
out.println(str);
For Python, simply:
import codecs
infile = '/home/foo/bar.txt'
reader = codecs.open(infile, 'r', 'utf8')
for line in reader:
    print line
Answer: Here is a link to all unicode characters:
<http://en.wikipedia.org/wiki/List_of_Unicode_characters>
Also, if you are using Eclipse, make sure your project "Text File Encoding" is
set to UTF-8.
**Project->properties->resources->Text File Encoding.**
I had similar problem with cyrillic characters :)
|
Python-twitter api.VerifyCredentials() returns none
Question: I am using the python-twitter API and I have consumer_key, consumer_secret,
access_token_key, and access_token_secret, but when I try the code below I get the
output {} for `print api.VerifyCredentials()` and I get None for `print
status.text`.
import twitter
api = twitter.Api(consumer_key='**',
consumer_secret='**',
access_token_key='**',
access_token_secret='**')
print api.VerifyCredentials() #returns {}
status = api.PostUpdate('Ilovepython-twitter!')
print status.text #returns none
What do you think is the problem here?
Answer: Here the problem was the 'twitter' library for sure. Don't import 'twitter';
use 'tweepy' instead and it will work.
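A minimal sketch with tweepy, reusing the four credentials from the question
(method names are from the tweepy versions current at the time; check the docs
for your version):
import tweepy
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token_key, access_token_secret)
api = tweepy.API(auth)
print api.verify_credentials().screen_name
api.update_status('Ilovepython-twitter!')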
|
Python: run through all integer combinations subject to a constraint (find the minimum of a function)
Question: I would like to find a way to return the set of all vectors [x_1,...,x_n]
subject to the constraint x_1+...+x_n=constant, each x_i is a nonnegative
integer, and the order doesn't matter. (so [1,1,1,2]=[2,1,1,1]). I have very
little experience with programming but I've been working with Python (sage)
for the past month or so.
In particular, I'm trying to find the minimum value of a 15-variable
(symmetric) function over nonnegative integers (subject to a constraint), but
I'd like to write a program to do it because I can use it for similar projects
as well.
I have been trying to write a program for 4 days now, and I'm suddenly coming
to the realization that I have to somehow recursively define my function...and
I have no idea what to do. I have code which does something similar to what
I want (but it's nowhere near done). I'll post it even though I'm sure it's
the least efficient way to do what I'm trying to do:
def each_comb_first_step(vec):
row_num=floor(math.fabs((vec[0,vec.ncols()-1]-vec[0,vec.ncols()-2]))/2)+1
mat=matrix(ZZ, row_num, vec.ncols(), 0)
for j in range(row_num):
mat[j]=vec
vec[0,vec.ncols()-2]=vec[0,vec.ncols()-2]+1
vec[0,vec.ncols()-1]=vec[0,vec.ncols()-1]-1
return mat
def each_comb(num,const):
vec1=matrix(ZZ,1,num,0)
vec1[0,num-1]=const
time=0
steps=0
subtot=0
for i in (2,..,num-1):
steps=floor(const/(i+1))
for j in (1,..,steps):
time=j
for k in (num-i-1,..,num-2):
vec1[0,k]=time
time=time+1
subtot=0
for l in range(num-1):
subtot=subtot+vec1[0,l]
vec1[0,num-1]=const-subtot
mat1=each_comb_first_step(vec1)
return mat1
Is there by any chance a function which already does this, or something
similar? Any help or suggestions would be greatly appreciated.
Answer: A **brute force** solution is as follows:
import itertools as it
# Constraint function returns true if inputs meet constraint requirement
def constraint(x1, x2, x3, x4):
return x1 + x2 + x3 + x4 == 10
numbers = range(1,10) #valid numbers (non-negative integers)
num_variables = 4 #size of number tuple to create
#vectors contains all tuples of 4 numbers that meet constraint
vectors = [t for t in it.combinations_with_replacement(numbers, num_variables)
if constraint(*t)]
print vectors
outputs
[(1, 1, 1, 7), (1, 1, 2, 6), (1, 1, 3, 5), (1, 1, 4, 4), (1, 2, 2, 5), (1, 2, 3, 4), (1, 3, 3, 3), (2, 2, 2, 4), (2, 2, 3, 3)]
The running time is `O(numbers**num_variables)`, so will probably be
prohibitively slow with your 15 variable solution. You might want to look into
linear programming techniques. There's a free course on [Linear
Optimization](https://www.coursera.org/course/linearopt) at the Cousera
website that can be used to solve these sorts of problems much quicker.
Check out this [Stack Overflow
question](http://stackoverflow.com/questions/12543092/trying-to-find-a-pure-
python-integer-linear-programming-solver-with-inequalities) for a link to a
python module that is an integer constraint solver.
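If the brute force approach turns out to be too slow for 15 variables, another
option is to generate only the valid combinations instead of filtering every
tuple. A small sketch of that idea (a standard recursive enumeration of
non-decreasing tuples, not taken from any particular library; it also naturally
includes zeros, which the question asks for):
    def constrained_tuples(total, parts, minimum=0):
        # yield non-decreasing tuples of `parts` integers >= minimum summing to total
        if parts == 1:
            if total >= minimum:
                yield (total,)
            return
        # the smallest element can be at most total // parts
        for first in range(minimum, total // parts + 1):
            for rest in constrained_tuples(total - first, parts - 1, first):
                yield (first,) + rest

    print list(constrained_tuples(10, 4))  # includes tuples that contain zeros
This only ever visits tuples that satisfy the constraint, so the work grows with
the number of solutions rather than with `len(numbers)**num_variables`.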
|
Mysql named placeholders in python used in the IN clause
Question: I prefer to use named placeholders when hacking MySQL bound python code, but
it seems that I can't get it just right with the `IN` clause. An example:
con = MySQLdb.connect(db='test', user='test')
cur = con.cursor()
Three too simple examples:
# (#1) works fine, but I want placeholders.
cur.execute( """ update test
set i = 999
where SNO in (1, 2) """)
# (#2) works fine too, but still not enough placeholders.
cur.execute( """ update test
set i = %(i)s
where SNO in (1, 2) """, {'i' : 999})
# (#3) works, but did not pass the beauty check...
cur.execute( """ update test
set i = %(i)s
where SNO in ( %(a)s, %(b)s ) """, {'i' : 99,
'a' : 1,
'b' : 2})
This is what I really want, but it fails with: **Operand should contain 1
column(s)**
# (#4) This one fails with: _mysql_exceptions.OperationalError: (1241, 'Operand should contain 1 column(s)')
cur.execute( """ update test
set i = %(i)s
where SNO in ( %(foo)s ) """, {'i' : 999,
'foo' : [1, 2]})
Apparently I need _more magic_. It would be easy simply to move the problem to
the application, implementing a loop in python, but I would rather avoid that.
Ah yes, performance is important too.
Answer: `MySQLdb` already handles the escaping of a sequence for you:
>>> con = MySQLdb.connect(db='test')
>>> con.literal([1,2,3])
('1', '2', '3')
>>> cur = con.cursor()
>>> cur.execute("select * from test where id in %(foo)s", {'foo': [1,2,3]})
3L
>>> cur._executed
"select * from test where id in ('1', '2', '3')"
So by removing the parentheses around your placeholder it should work - but
only for sequences with more than one element, because a single element is
formatted like this:
>>> con.literal([1])
('1',)
Inserted into a SQL query, the trailing comma makes it illegal SQL.
To work around this, you could also define your own converter to convert a
custom type to the representation you like:
import MySQLdb.converters
conv = MySQLdb.converters.conversions.copy()
class CustomList(list):
def __init__(self, *items):
super(CustomList, self).__init__(items)
conv[CustomList] = lambda lst, conv: "(%s)" % ', '.join(str(item) for item in lst)
con = MySQLdb.connect(db='test', conv=conv)
cur = con.cursor()
cur.execute('select * from test where id in %(foo)s', {'foo': CustomList(0, 1, 2)})
print cur._executed
select * from test where id in (0, 1, 2)
This way the quotes around the list items are gone.
It would also work to just replace the converter for `list`, but that would
change the behaviour for all lists and possibly introduce vulnerabilities. The
above way of formatting a list would not be safe for a list containing
strings, as it doesn't escape special characters.
To do that, you would have to recursively escape all items in the list:
>>> ...
>>> conv[list] = lambda lst, cv: "(%s)" % ', '.join(cv[type(item)](item, cv) for item in lst)
>>> con = MySQLdb.connect(..., conv=conv)
>>> con.literal([1, "it's working...", 2])
"(1, 'it\\'s working...', 2)"
|
flask-classy and peewee, metaclass conflict error
Question: I'm trying to get my user class to work with both BaseModel and FlaskView.
This results in the metaclass conflict error and I can't solve it.
Things I have tried to fix the problem:
This didn't work because of the _from noconflict import classmaker_. The
example is from June 2003. Maybe it is too old? I'm running on python 2.7.3.
<http://code.activestate.com/recipes/204197-solving-the-metaclass-conflict/>
Also tried this solution, see the code blocks below. I get this error:
AttributeError: type object 'BaseModel' has no attribute '__metaclass__'
[Double inheritance causes metaclass
conflict](http://stackoverflow.com/questions/11254553/double-inheritance-
causes-metaclass-conflict)
from base_model import BaseModel
from flask.ext.classy import FlaskView
class CombinedMeta(BaseModel.__metaclass__, FlaskView.__metaclass__):
pass
from peewee import *
#sqlite is used for easy testing.
mysql_db = SqliteDatabase('test.db')
class BaseModel(Model):
class Meta:
database = mysql_db
from combined_meta import CombinedMeta
from base_model import BaseModel
from flask.ext.classy import FlaskView
from flask.ext.classy import route
from peewee import *
from flask import request
from utility import response_json
from utility import send_email
from utility import random_string
class User(BaseModel, FlaskView):
__metaclass__ = CombinedMeta
@route('/<username>', methods=['GET'])
def read_user(self, username):
#cool method stuff
When I change the BaseModel class to the following code, I get a new error:

    class BaseModel(Model):
    TypeError: Error when calling the metaclass bases
        this constructor takes no arguments
from peewee import *
#sqlite is used for easy testing.
mysql_db = SqliteDatabase('test.db')
class BaseModel(Model):
class Meta:
database = mysql_db
__metaclass__ = Meta
I have no idea how I can fix this, I'm new to Python. My main goal is to get
the program working with multiple classes. That is why I'm trying to get flask
classy to work.
A way to fix this problem without flask-classy is just as welcome as any other
fix. If not using flask-classy is easier, I'll give that a try.
**EDIT**
[When calling the metaclass bases, object.__init__() takes no
parameters](http://stackoverflow.com/questions/9555402/when-calling-the-
metaclass-bases-object-init-takes-no-parameters)
class Meta(type):
database = mysql_db
When I change the code to this I get the following error:
TypeError: Error when calling the metaclass bases metaclass conflict: the
metaclass of a derived class must be a (non-strict) subclass of the
metaclasses of all its bases
Answer: I managed to solve the problem by not using flask-classy. Instead I'm using
blueprints, [flask documentation](http://flask.pocoo.org/docs/blueprints/). I
no longer need both BaseModel and FlaskView; since I don't use flask-classy
anymore, only BaseModel is needed and there is no more metaclass error.
Here is my working code:
**__init__.py**
from flask import Flask
import user
app = Flask(__name__)
app.register_blueprint(user.bp)
**user.py**
from flask import Blueprint
from peewee import CharField

from base_model import BaseModel

class User(BaseModel):
    username = CharField(primary_key=True)
    password = CharField(null=False)

bp = Blueprint('user', __name__)

@bp.route('/user/method', methods=['GET'])
def method():
    # method stuff
    pass
|
Element Tree doesn't load a Google Earth-exported KML
Question: I have a problem related to a Google Earth exported KML, as it doesn't seem to
work well with Element Tree. I don't have a clue where the problem might lie,
so I will explain how I do everything.
Here is the relevant code:
kmlFile = open( filePath, 'r' ).read( -1 ) # read the whole file as text
kmlFile = kmlFile.replace( 'gx:', 'gx' ) # we need this as otherwise the Element Tree parser
# will give an error
kmlData = ET.fromstring( kmlFile )
document = kmlData.find( 'Document' )
With this code, ET (Element Tree object) _creates_ an Element object
accessible via variable kmlData. It points to the root element ('kml' tag).
However, when I run a search for the sub-element 'Document', it returns None.
Although the 'Document' tag is present in the KML file!
Are there any other discrepancies between KMLs and XMLs apart from the 'gx:
smth' tags? I have searched through the KML files I am dealing with and found
nothing suspicious. Here is a simplified structure of a KML file the program
is supposed to deal with:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.2">
<Document>
<name>UK.kmz</name>
<Style id="sh_blu-blank">
<IconStyle>
<scale>1.3</scale>
<Icon>
<href>http://maps.google.com/mapfiles/kml/paddle/blu-blank.png</href>
</Icon>
<hotSpot x="32" y="1" xunits="pixels" yunits="pixels"/>
</IconStyle>
<ListStyle>
<ItemIcon>
<href>http://maps.google.com/mapfiles/kml/paddle/blu-blank-lv.png</href>
</ItemIcon>
</ListStyle>
</Style>
[other style tags...]
<Folder>
<name>UK</name>
<Placemark>
<name>1262 Crossness Pumping Station</name>
<LookAt>
<longitude>0.1329926667038817</longitude>
<latitude>51.50303535104574</latitude>
<altitude>0</altitude>
<range>4246.539753518848</range>
<tilt>0</tilt>
<heading>-4.295161152207489</heading>
<altitudeMode>relativeToGround</altitudeMode>
<gx:altitudeMode>relativeToSeaFloor</gx:altitudeMode>
</LookAt>
<styleUrl>#msn_blu-blank15000</styleUrl>
<Point>
<coordinates>0.1389579668507301,51.50888923518947,0</coordinates>
</Point>
</Placemark>
[other placemark tags...]
</Folder>
</Document>
</kml>
Do you have an idea why I can't access any sub-elements of 'kml'? By the way,
Python version is 2.7.
Answer: The KML document is in the `http://earth.google.com/kml/2.2` namespace, as
indicated by
<kml xmlns="http://earth.google.com/kml/2.2">
This means that the name of the `Document` element is in fact
`{http://earth.google.com/kml/2.2}Document`.
Instead of this:
document = kmlData.find('Document')
you need this:
document = kmlData.find('{http://earth.google.com/kml/2.2}Document')
However, there is a problem with the XML file. There is an element called
`gx:altitudeMode`. The `gx` bit is a namespace prefix. Such a prefix needs to
be declared, but the declaration is missing.
You have worked around the problem by simply replacing `gx:` with `gx`. But
the proper way to do this would be to add the namespace declaration. Based on
<https://developers.google.com/kml/documentation/altitudemode>, I take it that
`gx` is associated with the `http://www.google.com/kml/ext/2.2` namespace. So
for the document to be well-formed, the root element start tag should read
<kml xmlns="http://earth.google.com/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2">
Now the document can be parsed:
In [1]: from xml.etree import ElementTree as ET
In [2]: kmlData = ET.parse("kml2.xml")
In [3]: document = kmlData.find('{http://earth.google.com/kml/2.2}Document')
In [4]: document
Out[4]: <Element '{http://earth.google.com/kml/2.2}Document' at 0x1895810>
In [5]:
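The same fully qualified names are needed for any deeper searches. For example
(a small sketch based on the file above, not part of the original question):
    ns = '{http://earth.google.com/kml/2.2}'
    for placemark in document.iter(ns + 'Placemark'):
        name = placemark.find(ns + 'name')
        coords = placemark.find(ns + 'Point/' + ns + 'coordinates')
        print name.text, coords.text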
|
For each and every ssh command asks for password, Python
Question: I am trying to execute the following code, which asks me for a password for
each and every ssh command even though I provide my password in the code. Can
anyone please tell me where I am making a mistake? Thanks in advance
import signal
from subprocess import call, PIPE, Popen
from time import sleep
import os, pty
class SshCmd:
socket = ''
pid = 0
password = None
def __init__(self, password = None):
if password:
SshCmd.password = password
# start agent
devnull = open(os.devnull, 'w')
call(['killall', 'ssh-agent'], stderr=devnull)
process = Popen('/usr/bin/ssh-agent', stdout=PIPE, stderr=devnull)
stdout, stderr = process.communicate()
lines = stdout.splitlines()
SshCmd.socket = lines[0].decode().split(';')[0].split('=')[1]
SshCmd.pid = lines[1].decode().split(';')[0].split('=')[1]
devnull.close()
# unlock key
pid, fd = pty.fork()
if 0 == pid:
# child adds key
os.execve('/usr/bin/ssh-add', ['/usr/bin/ssh-add'], \
{'SSH_AUTH_SOCK': SshCmd.socket, 'SSH_AGENT_PID': SshCmd.pid})
else:
# parent send credentials
cmsg = os.read(fd, 1024)
os.write(fd, SshCmd.password.encode())
os.write(fd, os.linesep.encode())
cmsg = os.read(fd, 1024)
if len(cmsg) <= 2:
os.waitpid(pid, 0)
else:
os.kill(pid, signal.SIGTERM)
def execve(self, path, args, env = {}):
if not SshCmd.password:
return
pid = os.fork()
if 0 == pid:
env['SSH_AUTH_SOCK'] = SshCmd.socket
env['SSH_AGENT_PID'] = SshCmd.pid
os.execve(path, args, env)
else:
os.waitpid(pid, 0)
def ssh(self, user, host, cmd, args = []):
cmdLine = cmd
for arg in args:
cmdLine += ' '
cmdLine += arg
self.execve('/usr/bin/ssh',
['/usr/bin/ssh',
'-o',
'UserKnownHostsFile=/dev/null',
'-o',
'StrictHostKeyChecking=false',
'%(user)s@%(host)s' % {'user': user, 'host': host},
cmdLine])
if '__main__' == __name__:
other = SshCmd('passowrd')
other.ssh('root', 'host', '/sbin/ifconfig')
other.ssh('root', 'host', 'ping', ['-c', '5', 'localhost'])
Answer: You are not making a mistake. In order to skip the password step, you need to
pre-authorize the connection. You can do this by using the command `ssh-copy-id`
first, which will store your public key on the remote host and allow the
connection through that key instead of a password. You need to have a key pair
first, which you can create with `ssh-keygen`.
Note: these commands may change depending on the Linux distribution.
|
Cannot get Cython to find the MinGW gcc compiler even after editing PATH, making a file in distutils, removing all instances of -mno-cygwin
Question: I am trying to get cython to realize I have a c compiler in MinGW 32-bit and
I've tried everything I can find on the web but it's still not working. I am
running Windows 7 Professional 64-bit. Here is what I have tried:
(1) I have Python 2.7 and I just installed MinGW with options gcc and g++ and
some other options
(2) I edited the PATH environmental variable so it includes
C:\MinGW\bin;C:\MinGW\MSYS\1.0\local\bin;C:\MinGW\MSYS\1.0\bin
(3) I told Python to use MinGW as the default compiler by creating a file
named
C:\Python27\Lib\distutils\distutils.cfg, containing
[build]
compiler = mingw32
(I do have MinGW32 by the way)
(4) I removed all instances of -mno-cygwin from the file
C:\Python27\Lib\distutils\cygwincompiler.py
(5) I have a file called setup.py and a module called tryingcython.pyx that is
written in Python. My setup.py says:

    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Distutils import build_ext

    setup(
        cmdclass = {'build_ext':build_ext},
        ext_modules=[Extension("tryingcython",["tryingcython.pyx"])]
    )

So then I open Command Prompt and get into the directory that contains
setup.py and tryingcython.pyx, and I type:

    python setup.py build_ext --inplace --compiler=mingw32
Then it tells me:
running build_ext
skipping 'tryingcython.c' Cython extension (up-to-date)
building 'tryingcython.c' extension
gcc -mdll -O -Wall -IC:\Python27\include -IC:\Python27\PC -c tryingcython.c -o build\
temp.win32-2.7\Release\tryingcython.o
error: command 'gcc' failed: No such file or directory
So I guess Cython can't tell that I have gcc and it can't find it or what,
even though I've tried about every single piece of advice I can find online
for making it realize that I have MinGW which has gcc included. Any
help/additional ideas on how I can get cython to actually work would be much
appreciated.
Answer: You are using exactly the same operating system and versions as me.
Try to call `gcc` using:
SET input=input.c
SET output=output.pyd
gcc -shared -IC:\Python27\include -LC:\Python27\libs -O2 -o %output% %input% -lpython27
Usually I put this call in a `cythongcc.bat` file, in a directory recognized
by the `PATH` environment variable as:
gcc -shared -IC:\Python27\include -LC:\Python27\libs -O3 -mtune=native -o %1.pyd %2.c -lpython27
So that I can , from where my cython `.pyx` files are, just do:
cython input.pyx
cythongcc input input
To get the compiled `.pyd` working!
|
Printing one variable from a netCDF file using Python
Question: I am trying to take one variable from a netCDF file and print it. Here is my
code:
import netCDF4
import netCDF4_utils
from netCDF4 import Dataset
from numpy.random import uniform
import csv
B = []
rootgrp = Dataset('test.cdf', 'r', format = 'NETCDF4')
f = open('testoutput.csv','wb')
b = (rootgrp.variables['grid_optical_depth'][:])
for x in b:
B.append(x)
f.write(str(B))
rootgrp.close()
f.close()
When I run this I get a very large set of data that seems to be repeating, but
I don't see how my for loop is doing that; shouldn't it only run through the
data set once? Also, could anyone explain why the data prints out in sets of
four per line? If I run `print rootgrp.variables['grid_optical_depth']` I get
<type 'netCDF4.Variable'>
float32 grid_optical_depth(grid_time, range)
long_name: Grid_Aerosol_Optical_Depth_Profile
units: (n/a)
temporal_average: 20.0
unlimited dimensions:
current shape = (1440, 399)
so does that mean that two of the numbers correspond to the grid_time and range
values? I don't think this is the case because all the numbers are much smaller
than 1 (on the order of 10^-3 and -4).
Any help is appreciated
Answer: I tested your code with another file and it gives the intended results:
printing a variable from a netCDF file to a csv file. It does not print the
variable twice, perhaps this is a characteristic of your file.
Your `grid_optical_depth` variable has a shape of `(1440, 399)`, the first
index corresponding to the dimension `grid_time` and the second to the
dimension `range`. When you do the loop `for x in b:`, you are appending each
row of the variable (up to 1440 of them), and each row will have 399 values.
Besides, you don't actually need the loop. If you set numpy's print options to
show the full array, you can just print the whole array directly to a string,
like this:
import numpy as np
import netCDF4
rootgrp = netCDF4.Dataset('test.cdf', 'r', format='NETCDF4')
f = open('testoutput.csv','wb')
np.set_printoptions(threshold='nan')
f.write(str(rootgrp.variables['grid_optical_depth'][:]))
f.close()
rootgrp.close()
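If you would rather have a conventional comma-separated file (one row per
`grid_time` value), numpy can also write the array directly. A small sketch of
that alternative (assuming the variable fits in memory; if it contains masked
or fill values you may want to call `.filled()` on it first):
    import numpy as np
    import netCDF4

    rootgrp = netCDF4.Dataset('test.cdf', 'r', format='NETCDF4')
    data = rootgrp.variables['grid_optical_depth'][:]  # shape (1440, 399)
    np.savetxt('testoutput.csv', data, delimiter=',')
    rootgrp.close()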
If all you want is to write a netCDF variable into text format, then I
strongly suggest you get acquainted with the
[ncdump](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/ncdump.html)
program.
|
Django issue with adding instances to a class
Question: The code below this compiles, but whenever I uncomment purchase_date,
po_number, or confirmed it's giving me an error. I tried python manage.py
syncdb after uncommenting those lines and it's still giving me errors.
from django.db import models
class PurchaseOrder(models.Model):
product = models.CharField(max_length=256)
vendor = models.CharField(max_length=256)
price = models.FloatField()
item_number = models.AutoField(primary_key=True)
# purchase_date = models.DateField()
# po_number = models.IntegerField(unique=True)
# confirmed = models.NullBooleanField(null=True)
The error I am getting is this:
DatabaseError at /admin/purchaseorders/purchaseorder/
column purchaseorders_purchaseorder.purchase_date does not exist
LINE 1: ...e", "purchaseorders_purchaseorder"."item_number", "purchaseo...
^
Request Method: GET
Request URL:
Django Version: 1.5.1
Exception Type: DatabaseError
Exception Value:
column purchaseorders_purchaseorder.purchase_date does not exist
LINE 1: ...e", "purchaseorders_purchaseorder"."item_number", "purchaseo...
^
Exception Location: /usr/local/lib/python2.7/dist-packages/django/db/backends/postgresql_psycopg2/base.py in execute, line 54
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path:
['/LPG/firstproject',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages']
Server time: Tue, 23 Jul 2013 16:30:58 +0000
Does this have anything to do with tables already being created and it wants
me to clear them?
Answer: While syncdb would work for creating new tables, it does not work for altering
database tables. In this case, it looks like you added the 3 columns after
running syncdb once.
Here is [the documentation](https://docs.djangoproject.com/en/dev/ref/django-
admin/#syncdb) (read up on "Syncdb will not alter existing tables").
To achieve this, you can do it in 2 ways:
1. you would need a 3rd party application such as [django south](http://south.aeracode.org/) which would handle the migrations for you. Once you run the migrations, you would be able to access these columns without any issue. (Highly recommended)
2. If your code is not in production yet, you can just drop the database and then do `syncdb` (a fresh start) - This is not very recommended - as it might be a good idea to use south.
Here is a [step by step tutorial](http://www.djangopro.com/2011/01/django-
database-migration-tool-south-explained/) on south, and here is the [official
documentation on south](http://south.aeracode.org/wiki/Tutorial1)
|
Constrain wxPython MultiSplitterWindow panes
Question: **Edit:** I'm leaving the question open as is, as it's still a good question
and the answer may be useful to others. However, I'll note that I found an
actual solution to _my_ issue by using a completely different approach with
`AuiManager`; see the [answer](http://stackoverflow.com/a/17837040/564181)
below.
I'm working on a `MultiSplitterWindow` setup (after spending a good deal of
time struggling against `SashLayoutWindow` layout quirks). Unfortunately, when
I create a `MultiSplitterWindow`, I see some unexpected behavior when dragging
the sashes around: the sashes can be dragged outside the containing window in
the direction of the layout. To say the least, this is behavior I'd like to
avoid.
Here is the basic setup (you can confirm the behavior below in the wxPython
demo, just substituting `leftwin` for `Panel1`, etc., also see below for an
example app). Where I have `RootPanel/BoxSizer`, there is a panel (or `Frame`,
or whatever kind of container element you like) with a `BoxSizer` to which the
`MultiSplitterWindow` is added – again, as in demo.
+--------------------------------+
| RootPanel/BoxSizer |
|+------------------------------+|
|| MultiSplitterWindow ||
||+--------++--------++--------+||
||| Panel1 || Panel2 || Panel3 |||
||| || || |||
||+--------++--------++--------+||
|+------------------------------+|
+--------------------------------+
When you drag, you can end up with something like this, where `~` and `!`
indicate that the panel "exists" there but isn't being displayed:
+--------------------------------+
| RootPanel/BoxSizer |
|+-------------------------------|~~~~~~~~~~~~~+
|| MultiSplitterWindow | !
||+-----------------++-----------|~~++~~~~~~~~+!
||| Panel1 || Panel2 | !! Panel3 !!
||| || | !! !!
||+-----------------++-----------|~~++~~~~~~~~+!
|+-------------------------------|~~~~~~~~~~~~~+
+--------------------------------+
If at this point, you drag the `RootPanel` to be wider than the overall set of
panels, you will see all the panels again. Likewise, if you drag the width
back down on `Panel1`, you can get access to the sash for `Panel3` again
(assuming the `Panel2` isn't already too wide, of course). Moreover, this is
precisely the situation reported by the Inspection Tool: the `RootPanel`
retains its size, but the `MultiSplitterWindow` grows beyond the size of the
`RootPanel/BoxSizer`.
Further examination with the Inspection Tool reveals that the virtual and
client width values are both 0, but the actual size value is _negative_ (by
the corresponding number of pixels it was dragged out of the window) whenever
it's out of range. Again, this is nutty behavior; I can't imagine why one
would _ever_ want a window to behave this way.
Now, if one holds down `Shift` so that the `_OnMouse` method in
`MultiSplitterWindow` adjusts neighbors, this doesn't happen. Thus, one of my
approaches was to simply override that method. It works, but I'd prefer to
override methods that way only if absolutely necessary. Is there another, better
way to solve this problem? It doesn't seem like this would be expected or
desirable behavior in general, so I imagine there is a standard way of fixing
it.
## Other things I've tried:
* Checking whether the sum of the values in the `MultiWindowSplitter` exceeds the width of the containing window, using each of the `EVT_SPLITTER_SASH_POS_CHANGED` AND `EVT_SPLITTER_SASH_POS_CHANGING` events, and then trying to fix the issue by:
* Using an `event.Veto()` call
* Using the `SetSashPosition()` method on the splitter
* Overriding the `_OnMouse()` method to use the behavior that is normally associated with holding down the `Shift` key. This works, but it ends up giving me other results I don't like.
* Setting the minimum pane sizes via `SetMinimumPaneSize` method
* Setting the maximum size on `MultiSplitterWindow` via `SetMaxSize()`
* Setting the maximum size on `RootPanel/BoxSizer` using both `SetMaxSize()` and `SetSizeHints()` on the `RootPanel`.
* I've even done this with an event handler for `wx.EVT_SIZE` on the container so that the `RootPanel` _always_ has the appropriate maximum size from the parent frame element
* I've attempted the same event handling approach for the `MultiSplitterWindow`, also to no effect.
## Version info
I have confirmed that this appears in Windows 32-bit and OS X 64-bit, with the
latest snapshot build of wxPython, against both Python 2.7 and 3.3.
## Working example (with Inspection tool included)
The following essentially duplicates (and slightly simplifies) the demo
source. It's a working demonstration of the problem.
import wx, wx.adv
import wx.lib.mixins.inspection as wit
from wx.lib.splitter import MultiSplitterWindow
class AppWInspection(wx.App, wit.InspectionMixin):
def OnInit(self):
self.Init() # enable Inspection tool
return True
class MultiSplitterFrame(wx.Frame):
def __init__(self, *args, **kwargs):
super().__init__(size=(800, 800), *args, **kwargs)
self.SetMinSize((600, 600))
self.top_sizer = wx.BoxSizer(orient=wx.HORIZONTAL)
self.SetSizer(self.top_sizer)
self.splitter = MultiSplitterWindow(parent=self, style=wx.SP_LIVE_UPDATE)
self.top_sizer.Add(self.splitter, wx.SizerFlags().Expand().Proportion(1).Border(wx.ALL, 10))
inner_panel1 = wx.Panel(parent=self.splitter)
inner_panel1.SetBackgroundColour('#999980')
inner_panel1_text = wx.StaticText(inner_panel1, -1, 'Inner Panel 1')
inner_panel1.SetMinSize((100, -1))
inner_panel2 = wx.Panel(parent=self.splitter)
inner_panel2.SetBackgroundColour('#999990')
inner_panel2_text = wx.StaticText(inner_panel2, -1, 'Inner Panel 2')
inner_panel2.SetMinSize((100, -1))
inner_panel2.SetMaxSize((100, -1))
inner_panel3 = wx.Panel(parent=self.splitter)
inner_panel3.SetBackgroundColour('#9999A0')
inner_panel3_text = wx.StaticText(inner_panel3, -1, 'Inner Panel 3')
inner_panel3.SetMinSize((100, -1))
self.splitter.AppendWindow(inner_panel1)
self.splitter.AppendWindow(inner_panel2)
self.splitter.AppendWindow(inner_panel3)
if __name__ == '__main__':
app = AppWInspection(0)
frame = MultiSplitterFrame(parent=None, title='MultiSplitterFrame Test')
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
Answer: Depending on what one needs this for, one possible option to use instead of a
custom-managed `MultiSplitterWindow` (or `SashLayoutWindow` combinations,
etc.) is the Advanced User Interface kit's `AuiManager` tool (documentation
for pre-Phoenix version [here](http://wxpython.org/docs/api/wx.aui.AuiManager-
class.html); Phoenix docs
[here](http://www.wxpython.org/Phoenix/docs/html/lib.agw.html#module-
lib.agw)). `AuiManager` automates a lot of these kinds of things for you. In
my case, I was attempting to use the `MultiSplitterWindow` as a way of
controlling collapsible and resizable panels for the UI in question, so the
`AuiManager` is a perfect fit: it already has all the controls and constraints
I need built in.
In that case, all one needs to do is create an `AuiManager` instance, tell it
to manage the frame, and add each panel with an `AuiPaneInfo` describing its
constraints, as the sample code below shows.
(I'm leaving this here as _an_ answer in hopes that others who may be taking
the same naive approach I was taking will find it useful, but not selecting it
as the answer because it does _not_ directly answer the original question.)
## Sample use of AUI under Phoenix
This code sample does exactly what I was trying to do with the
`MultiSplitterWindow`, but managed automatically by the `AuiManager`.
import wx, wx.adv
import wx.lib.mixins.inspection as wit
from wx.lib.agw import aui
class AppWInspection(wx.App, wit.InspectionMixin):
def OnInit(self):
self.Init() # enable Inspection tool
return True
class AuiFrame(wx.Frame):
def __init__(self, *args, **kwargs):
super().__init__(size=(800, 800), *args, **kwargs)
self.SetMinSize((600, 600))
# Create an AUI Manager and tell it to manage this Frame
self._manager = aui.AuiManager()
self._manager.SetManagedWindow(self)
inner_panel1 = wx.Panel(parent=self)
inner_panel1.SetBackgroundColour('#999980')
inner_panel1.SetMinSize((100, 100))
inner_panel1_info = aui.AuiPaneInfo().Name('inner_panel1').Caption('Inner Panel 1').Left().\
CloseButton(True).MaximizeButton(True).MinimizeButton(True).Show().Floatable(True)
inner_panel2 = wx.Panel(parent=self)
inner_panel2.SetBackgroundColour('#999990')
inner_panel2_info = aui.AuiPaneInfo().Name('inner_panel2').Caption('Inner Panel 2').Left().Row(1).\
Show().Floatable(False)
inner_panel3 = wx.Panel(parent=self)
inner_panel3.SetBackgroundColour('#9999A0')
inner_panel3.SetMinSize((100, 100))
inner_panel3_info = aui.AuiPaneInfo().Name('inner_panel3').Caption('Inner Panel 3').CenterPane()
self._manager.AddPane(inner_panel1, inner_panel1_info)
self._manager.AddPane(inner_panel2, inner_panel2_info)
self._manager.AddPane(inner_panel3, inner_panel3_info)
self._manager.Update()
    def __OnQuit(self, event):
        self._manager.UnInit()
        del self._manager
        self.Destroy()
if __name__ == '__main__':
app = AppWInspection(0)
frame = AuiFrame(parent=None, title='AUI Manager Test')
app.SetTopWindow(frame)
frame.Show()
app.MainLoop()
|
How to extract contents of a csv file and place them in a dict file type without using csv module. [python]
Question: Here is the information in the file:
"Part no.","Description","Price"
"453","Sperving_Bearing","9900"
"1342","Panametric_Fan","23400"
"9480","Converter_Exchange","93859"
I'm trying to figure out how to open up a file and then store it's contents
into a dictionary using the Part no. as the key and the other information as
the value. So I want it to look something like this:
{Part no.: "Description,Price", 453: "Sperving_Bearing,9900", 1342: "Panametric_Fan,23400",9480: "Converter_Exchange,93859"}
I was able to store the text from the file into a list, but I'm not sure how
to assign more than one value to a key. I'm trying to do this without
importing any modules. I've been using the basic str methods, list methods and
dict methods. Here is my code so far: ( I'm assuming the filename will
correctly be entered)
textname = input("ENter a file")
thetextfile = open(textname,'r')
print("The file has been successfully opened!")
thetextfile = thetextfile.read()
file_s = thetextfile.split()
holder = []
ck = 0
for c in range(len(file_s)):
holder.append(file_s[c])
ck = ck+1
if(ck == 3):
holder.insert(c,'\n')
count = 0
holder_string = "".join(holder)
holder = holder_string.split("\n")
wordlist = {}
#kind of stuck here.
Answer: Perhaps like this:

    wordlist = {}
    with open(textname, 'r') as thetextfile:
        for line in thetextfile:
            fields = [field.strip('"') for field in line.strip().split(',')]
            wordlist[fields[0]] = fields[1:]

That makes the dict values the (more convenient) list of remaining fields. But,
if you really wanted the single comma-joined string syntax that you have above,
maybe:

    wordlist = {}
    with open(textname, 'r') as thetextfile:
        for line in thetextfile:
            fields = [field.strip('"') for field in line.strip().split(',')]
            wordlist[fields[0]] = ",".join(fields[1:])
|
How to specify that a parameter is a list of specific objects in Python docstrings
Question: I really like using docstrings in Python to specify type parameters when
projects get beyond a certain size.
I'm having trouble finding a standard to use to specify that a parameter is a
list of specific objects, e.g. in Haskell types I'd use [String] or [A].
Current standard (recognisable by PyCharm editor):
def stringify(listOfObjects):
"""
:type listOfObjects: list
"""
return ", ".join(map(str, listOfObjects))
What I'd prefer:
**OPTION 1**
def stringify(listOfObjects):
"""
:type listOfObjects: list<Object>
"""
return ", ".join(map(str, listOfObjects))
**OPTION 2**
def stringify(listOfObjects):
"""
:type listOfObjects: [Object]
"""
return ", ".join(map(str, listOfObjects))
I suppose that wasn't a great example - the more relevant use case would be
one where the objects in the list must be of a specific type.
**BETTER EXAMPLE**
class Food(object):
    def __init__(self, calories):
        self.calories = calories

class Apple(Food):
    def __init__(self):
        super(Apple, self).__init__(200)

class Person(object):
    energy = 0

    def eat(self, foods):
        """
        :type foods: [Food] # is NOT recognised by editor
        """
        for food in foods:
            self.energy += food.calories
So, other than the fact that I'm getting hungry, this example illustrates that
if called with a list of the wrong kind of object, the code would break. Hence
the importance of documenting not only that it needs a list, but that it needs
a list of Food.
**RELATED QUESTION** [How can I tell PyCharm what type a parameter is expected
to be?](http://stackoverflow.com/questions/6318814/how-can-i-tell-pycharm-
what-type-a-parameter-is-expected-to-be) Please note that I'm looking for a
more specific answer than the one above.
Answer: In comments section of [PyCharm's
manual](http://www.jetbrains.com/pycharm/webhelp/type-hinting-in-pycharm.html)
there's a nice hint from developer:
#: :type: dict of (str, C)
#: :type: list of str
It works for me pretty well. Now it makes me wonder what's the best way to
document parametrized classes in Python :).
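Applied to the `eat` example from the question, the hinted docstring could look
like this (a sketch using the `list of X` form suggested above):
    def eat(self, foods):
        """
        :type foods: list of Food
        """
        for food in foods:
            self.energy += food.calories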
|
Python inspect.getcomments(module) doesn't return the first comment if it's a shebang
Question: When a Python file contains a shebang (`#!blabla`), the function `getcomments`
from the module `inspect` doesn't return it. What can I do to get the shebang
from a module object?
Answer: The shebang is only valid if it is the first line of the file ... So, it seems
like you could do something like:
import module
fname = module.__file__
with open(fname) as fin:
shebang = next(fin)
Of course, I've jumped over a bunch of subtleties ... (making sure the first
line is actually a comment, making sure that we've grabbed a `.py` file
instead of a `.pyc` file, etc.). Those checks and substitutions should be easy
enough to make though if you want to make it more robust.
And, I suppose an alternative to using `__file__` magic would be to use
`inspect.getsourcelines`:
shebang = inspect.getsourcelines(module)[0][0]  # first line of the source
if not shebang.startswith('#!'):
    pass #Not a shebang :)
|
Python strip. can't strip newlines but using f.read()
Question: Such a simple action, but I cannot get this to work!!!
If I have a file that has a number of lines containing text, I want to strip
all newlines and whitespace and have a single contigous string from the
contents of the file. I've written a very simple python script to do this but
it's not working and I cannot figure out where I'm going wrong. :/
file contents look like:
1244X 2222 555
141234X
2332X
2354235X
235235 X235 5233 5 523
code
import sys
file = sys.argv[1]
with open(file, 'r') as infile:
data = infile.read()
clean_data = data.rstrip('\n')
print clean_data
prints out:
1244X 2222 555
141234X
2332X
2354235X
235235 X235 5233 5 523
when I want;
1244X2222555141234X2332X2354235X235235X23552335523
I have tried `strip()`, `strip(\n\r)` as well as `rstrip()`..
Obviously I'm doing something stupid, but please can someone point out the
problem?
Answer:
>>> with open('hello.txt') as f:
... f.read().replace(' ', '').replace('\n', '')
...
'1244X2222555141234X2332X2354235X235235X23552335523'
>>>
Or:
>>> with open('hello.txt') as f:
... ''.join(f.read().split())
...
'1244X2222555141234X2332X2354235X235235X23552335523'
>>>
|
Website remote render d3.js server side
Question: Looking for a solution to an arguably strange problem. Ok, so we are using
d3.js to plot charts and graphs. However, our data sets can range from very
small to intensely massive. Right now most of what we are doing is internal and
prototyping. However, we do show clients these charts and draw them in real
time for them, quite often and rapidly change their inputs.
Doing this in D3 looks great, but can be slow as expected. I'm more interested
in what the possibilities are for this process. Go to our website, log in, and
show an instance of our dashboard being rendered remotely on the server. Our
server cluster is a super demon beast so I'm not worried about it doing any
heavy lifting. It can do these processes about 100x faster than our best pc so
it seems if we could setup our website to create instances on the fly of our
dashboard, BUT only have access to that user accounts data.
This is getting a bit convoluted so let me explain. We have a database, full
of millions of data points. We have about 10 user accounts. Each have access
to different pieces of this data. One has access to all of it, the other some
of it. All of this is not the issue we are looking for a solution on. We are
more interested in the ability of our server to create multiple instances of
our site, through a window essentially, that the user is remotely controlling.
Like a Remote Desktop in a way. We could even start with the user login form
being part of the remote render. Where our system is fully hosted and operates
on the server itself, and the we page is essentially a KVM on the server in a
way. However it needs to handle multiple users at the same time.
We are using Centos 6.4 lots of python for the back end stuff, php HTML and a
mixture of Postgres and SQLite, but I doubt any of this is important. Just
want to cover my bases.
Answer: It seems unlikely to me that you'd be able meaningfully display millions of
datapoints on a single screen without grouping and summarizing them in some
way. Do the processing and summarize the data on the server and ship the
resulting smaller datasets to the client, which will then plot your graphs and
charts from that. It's likely you'll have more than one set of data now, but
it should result in much better client performance. e.g.
* {millions of points} -> transform on server -> data for bar chart to client
* {millions of points} -> transform on server -> data for XY-scatter chart
* etc.
What you've proposed is not really a programming issue, and isn't going to
scale very well.
|
Generating unique usernames from an email list for creating new users in django application
Question: I am importing contacts from gmail. `c_lst` is the list that has the names and
email address in a dictionary as follows - `[{'name': u'fn1 ln1', 'emails':
[u'[email protected]']}, {'name': u'fn2 ln2', 'emails':
[u'[email protected]']},.`
There are two problems with importing contacts:
1. Some of the contacts that I might be importing, might already be present in the database, in that case, I do _not_ want to add another contact.
2. Unique usernames. There is a possibility of two emails being same, except the domain names. eg. [email protected] and then [email protected] in that case, I need to have distinct usernames so the first username would be like email, and the second one would be email1.
I have implemented both of them, and commented for making things clear. Can
there be more pythonic way of doing it?
for contact in c_lst:
email = contact.get('emails')[0]
name = contact.get('name').split(' ')
first_name, last_name = name[0], name[-1]
try:
# check if there is already a user, with that email address
# if yes then ignore.
u = Users.objects.get(email = email)
print "user exists"
except:
while True:
username = email.split('@')[0]
name, idx = username, 1
try:
# user with current username exists, so add numeral
Users.objects.get(username = username)
name = username + str(idx)
except User.DoesNotExist:
username = name
u = User.objects.create(username = username, email = email, first_name = first_name, last_name = last_name)
u.save()
break
Please let me know, of any other/better flow/approach.
For generating usernames, one might advice generating random numbers, but its
okay for me to go sequentially, as it is only one time activity.
Answer: The one thing I would like to change is to handle the first `except`
explicitly. Since you are using:
u = Users.objects.get(email=email) # don't add space before and after "=" in argument
It could raise a `MultipleObjectsReturned` exception then create an infinite
loop in the current `except` block.
So you should at least change your code to:
# ... your code ...
first_name, last_name = name[0], name[-1]
try:
u = Users.objects.get(email=email)
except User.DoesNotExist:
# ... your code ....
except User.MultipleObjectsReturned:
# handle this case differently ?
Well, you might want to handle the second `try`/`except` block similarly, but
that's your choice.
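For the username-generation part itself, a slightly more compact pattern (just
a sketch, using `filter(...).exists()` instead of catching `DoesNotExist`)
could be:

    def unique_username(email):
        base = email.split('@')[0]
        username, idx = base, 1
        while User.objects.filter(username=username).exists():
            username = '%s%d' % (base, idx)
            idx += 1
        return username

    # inside the except User.DoesNotExist block:
    # u = User.objects.create(username=unique_username(email), email=email,
    #                         first_name=first_name, last_name=last_name)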
Hope this helps.
|
How to parse a collection of lists returned from cypher?
Question: Using python/py2neo, I run a cypher query containing
return ..., ..., collect([node1.uuid, node1.timestamp, id(node1), node2.uuid])
Both in web console and py2neo I get back a result looking like this:
[ ..., ..., [u'List(1234abcd-1234-1234-1234-1234abcd1234, 1.374650647E9, 13312, 4321abcd-4321-4321-4321-4321abcd4321)', u'List(..., ..., ...)']]
(just with `""` instead of `u''` in web console)
It doesn't look like JSON. There's a `u'List()'`, unquoted strings and
scientific notation.
How is it possible to parse returned collections of lists?
Answer: You could do it with regex:
import re
s = u'List(1234abcd-1234-1234-1234-1234abcd1234, 1.374650647E9, 13312, 4321abcd-4321-4321-4321-4321abcd4321)'
re.findall(r'List\(([a-z0-9-]+), ([0-9.E]+), (\d+), ([a-z0-9-]+)\)', s)
this would return:
[(u'1234abcd-1234-1234-1234-1234abcd1234',
u'1.374650647E9',
u'13312',
u'4321abcd-4321-4321-4321-4321abcd4321')]
|
can't get make to use previously defined var's
Question: I'm using the [GnuWin32](http://gnuwin32.sourceforge.net/) project, and
created a `makefile` to manage the compiling of some code. In the command line
I run:
set PYUIC=python "E:\PortableApps\Portable Python 2.7.3.1\App\Lib\site-packages\PyQt4\uic\pyuic.py"
my make file contains the following:
UIC := %pyuic%
HELP_VIEW := less
vpath %.ui ./ui
vpath %.py ./py
.PHONY: help
help:
${HELP_VIEW} help
%.py: %.ui
${UIC} -o ./py/$@ $^
print_%:
@echo $* = ${$*}
when I run `make print_UIC` I get:
UIC = python "E:\PortableApps\Portable Python 2.7.3.1\App\Lib\site-packages\PyQt4\uic\pyuic.py"
but when I run 'make main.py' I get:
%pyuic% -o ./py/main.py ./ui/main.ui
process_begin: CreateProcess(NULL, %pyuic% -o ./py/main.py ./ui/main.ui, ...) fa
iled.
make (e=2): The system cannot find the file specified.
make: *** [main.py] Error 2
When I run `%pyuic% -o ./py/main.py ./ui/main.ui` it runs with no problems,
and the result is as expected.
What's wrong?
Answer: GNU make doesn't support Windows-style environment variables, like `%pyuic%`.
Your first example works because it's invoking the Windows "shell", and that
shell is expanding this value for you. In your second example GNU make is
trying to directly invoke the command. This could be considered a bug in GNU
make: probably if make sees a `%` in a rule on Windows it should always use
the Windows shell.
Anyway, you should use GNU make's variable syntax; GNU make will import all
environment variables when it starts up, so you can refer to them as make
variables. This is much more portable, since obviously `%pyuic%` will not work
at all on anything but Windows:
UIC := $(pyuic)
|
How to run webapp2(appengine) in Heroku?
Question: Here is my project file
Procfile
web: python main.py
requirement.txt
webapp2==2.3
main.py
import webapp2
class MainHandler(webapp2.RequestHandler):
def get(self):
self.response.write("hello")
app = webapp2.WSGIApplication([
('/', MainHandler)
], debug=True)
still Heroku gives an Application Error.
What's wrong with my project?
Answer: Based on their "Getting started with Python" guide, it seems like you need the
gunicorn web server. Try adding gunicorn to your requirements.txt and changing
your Procfile to `web: gunicorn main:app`, roughly as sketched below.
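(A sketch; pin whatever versions you actually use. webapp2 depends on WebOb, so
it may need to be listed explicitly.)
Procfile
    web: gunicorn main:app
requirements.txt
    webapp2==2.3
    webob
    gunicorn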
Forgot to add the link: <https://devcenter.heroku.com/articles/python>
Also here is my webapp2-starter, it's setup to act like an appengine dev
server but works outside app engine.
<https://github.com/faisalraja/webapp2-starter>
|
Using a DOT graph as a basis for tree GUI
Question: I want to use a graph that is generated by DOT (pyDot in python) as the basis
for an interactive Tree-structured GUI in which each of the nodes in the Tree
could be widgets.
The tree will basically be a binary Morse Code tree which start at the top
node and navigate down the tree to their desired letter and select it. The
node they want to select should be highlightable, and the contents (letters)
of should be able to be changed based on user input.
Basically I want the nodes to be turned into full scale objects with tunable
parameters that change as the interface is used. Can anyone point me in the
right direction in order to do this?
Answer: I started with the demo code at:
<http://wiki.wxpython.org/AnotherTutorial#wx.TreeCtrl>. I've added the
build_tree, _build_tree_helper and build_conn_dict methods. The key methods of
interest from the dot_parser library are edge.get_source() and
edge.get_destination() which are used to make the "connection" dictionary.
The dot graph is stored in the dot_data variable. Importantly, the dot graph
**must not** loop; that is, it must be a spanning tree otherwise the
_build_tree_helper method will loop infinitely (and it doesn't make sense in a
TreeControl).
I also had to patch dot_parser according to
<https://github.com/nlhepler/pydot-py3/issues/1#issuecomment-15999052> to get
it to work.
import wx
from dot_parser import parse_dot_data
class MyFrame(wx.Frame):
def __init__(self, parent, id, title, **kwargs):
self.parsed_dot = kwargs.pop("parsed_dot", None)
wx.Frame.__init__(self, parent, id, title, wx.DefaultPosition, wx.Size(450, 350))
hbox = wx.BoxSizer(wx.HORIZONTAL)
vbox = wx.BoxSizer(wx.VERTICAL)
panel1 = wx.Panel(self, -1)
panel2 = wx.Panel(self, -1)
self.tree = wx.TreeCtrl(panel1, 1, wx.DefaultPosition, (-1,-1), wx.TR_HAS_BUTTONS | wx.TR_LINES_AT_ROOT )
self.build_tree(self.tree)
self.tree.Bind(wx.EVT_TREE_SEL_CHANGED, self.OnSelChanged, id=1)
self.display = wx.StaticText(panel2, -1, "",(10,10), style=wx.ALIGN_CENTRE)
vbox.Add(self.tree, 1, wx.EXPAND)
hbox.Add(panel1, 1, wx.EXPAND)
hbox.Add(panel2, 1, wx.EXPAND)
panel1.SetSizer(vbox)
self.SetSizer(hbox)
self.Centre()
def build_conn_dict(self):
conn_dict = {}
if(self.parsed_dot):
for edge in self.parsed_dot.get_edges():
conn_dict.setdefault(edge.get_source(), []).append(edge.get_destination())
return conn_dict
def build_tree(self, tree):
if(self.parsed_dot):
conn_dict = self.build_conn_dict()
outs = set(conn_dict.keys())
ins = reduce(lambda x, y: x | set(y), conn_dict.values(), set([]))
roots = list(outs - ins)
roots = dict([(root, tree.AddRoot(root)) for root in roots])
self._build_tree_helper(tree, conn_dict, roots)
def _build_tree_helper(self, tree, conn_dict = {}, roots = {}):
new_roots = {}
for root in roots:
if(conn_dict.has_key(root)):
for sub_root in conn_dict[root]:
new_roots[sub_root] = tree.AppendItem(roots[root], sub_root)
if(new_roots):
self._build_tree_helper(tree, conn_dict, new_roots)
def OnSelChanged(self, event):
item = event.GetItem()
self.display.SetLabel(self.tree.GetItemText(item))
child_text = self.tree.GetItemText(item)
parent_text = ""
try:
parent = self.tree.GetItemParent(item)
parent_text = self.tree.GetItemText(parent)
except wx._core.PyAssertionError:
pass
print "child: %s, parent: %s" % (child_text, parent_text)
class MyApp(wx.App):
def OnInit(self):
dot_data = \
'''
graph ""
{
label="(% (EXP (% (X) (% (X) (X)))) (EXP (SIN (X))))"
n039 ;
n039 [label="%"] ;
n039 -> n040 ;
n040 [label="EXP"] ;
n040 -> n041 ;
n041 [label="%"] ;
n041 -> n042 ;
n042 [label="X"] ;
n041 -> n043 ;
n043 [label="%"] ;
n043 -> n044 ;
n044 [label="X"] ;
n043 -> n045 ;
n045 [label="X"] ;
n039 -> n046 ;
n046 [label="EXP"] ;
n046 -> n047 ;
n047 [label="SIN"] ;
n047 -> n048 ;
n048 [label="X"] ;
}
'''
parsed_dot = parse_dot_data(dot_data)
frame = MyFrame(None, -1, "treectrl.py", parsed_dot = parsed_dot)
frame.Show(True)
self.SetTopWindow(frame)
return True
app = MyApp(0)
app.MainLoop()
|
Release memory in a code using matplotlib?
Question: Well, I tried many things and I'm almost convinced that there is no way to
solve my problem. Here I go... I'm writing a simple piece of software with
tkinter, and in one part of it I use matplotlib and Basemap to provide some
maps to the user. The problem is that those maps load a relatively big
amount of data, and on some computers that can be a problem if the user opens
many maps. One part of my code (the critical part) is:
def plota_bacia():
global status_shape,pontos,arq
print status_shape
if status_shape == True:
fig = pyl.figure(figsize=(12,8))
fig.canvas.set_window_title('Bacia fornecida pelo arquivo: '+arq)
fig.patch.set_facecolor('white')
m = Basemap(projection='merc',llcrnrlat=-32.5,urcrnrlat=5.0,llcrnrlon=-65.0,urcrnrlon=-33.0,lat_ts=20,resolution='c')
parallels = arange(-50.,20,0.5)
meridians = arange(-90.,0.,0.5)
ptos = []
for x,y in zip(pontos[0],pontos[1]):
x1,y1=m(x,y)
ptos.append((x1,y1))
p = Polygon(ptos,facecolor='red',edgecolor='green',linewidth=1)
pyl.gca().add_patch(p)
pyl.title(arq)
xmin,ymin = m(min(pontos[0])-0.5,min(pontos[1])-0.5)
xmax,ymax = m(max(pontos[0])+0.5,max(pontos[1])+0.5)
m.drawparallels(parallels,labels=[1,0,0,0],fontsize=16)
m.drawmeridians(meridians,labels=[0,0,0,1],fontsize=16)
m.readshapefile(dir_shape+'Brasil/BRASIL','r')
pyl.xlim([xmin,xmax])
pyl.ylim([ymin,ymax])
pyl.show()
else:tkMessageBox.showinfo( "Gráfico da bacia","Entre com uma bacia",parent=top)
Doing some tests I understood that the problem is the way Python manages
memory, for example:
from pylab import *
f = range(1,10000000,1)
plot(f)
show()
del f ; gc.collect()
If I put the line "del f ; gc.collect()" after the second one ("f =
range(1,10000000,1)") I have some space released related to delete the
variable "f", but once I plot "f", I supose a matplotlib object is conected to
the part of the memory related to "f" and for this reason I can't release that
part of the memory. Is that correct? I tried cla(), clf(), close() and this
not help me. Sorry if I did some stupid, I program many things in python, but
I'm a environmental engineer, not a programmer. Thanks a lot!
Answer: After `pyl.show()` add `pyl.close(fig.number); del fig`.
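In the context of `plota_bacia` that would look roughly like this (a sketch;
the `gc.collect()` call is optional and needs `import gc`):
    pyl.show()
    pyl.close(fig.number)  # drop matplotlib's reference to the figure
    del fig
    gc.collect()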
|
using flask-sqlalchemy without the subclassed declarative base
Question: I am using Flask for my python wsgi server, and sqlalchemy for all my database
access.
I _think_ I would like to use the Flask-Sqlalchemy extension in my
application, but I do not want to use the declarative base class (db.Model),
instead, I want to use the base from sqlalchemy.ext.declarative.
Does this defeat the entire purpose of using the extension?
* * *
My use case:
I would like the extension to help me manage sessions/engines a little better,
but I would like to handle all models separately.
I actually wouldn't mind using the extension, but I want to write _strict_
models. I am porting code from a non-flask application, and I will be pushing
changes back to that project as I go. If flask-sqlalchemy allows me to cheat
on Table **metadata** for instance, that is going to cause problems when the
code is pushed back out. There are also portions of my code that do lots of
type checking (polymorphic identities), and I also remember reading that type
checking on Table is not recommended when using the extension.
Answer: You can have Flask-SQLAlchemy expose your own base Model instead of it's
built-in one. Just subclass `SQLAlchemy` and override `make_declarative_base`.
from flask.ext.sqlalchemy import SQLAlchemy
from sqlalchemy.ext.declarative import declarative_base

class CustomAlchemy(SQLAlchemy):
    def make_declarative_base(self):
        base = declarative_base(...)
        ...
        return base

db = CustomAlchemy()
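Models can then be declared on that base with plain SQLAlchemy constructs. A
minimal sketch (assuming `db.Model` ends up being the base returned by
`make_declarative_base` above):
    from sqlalchemy import Column, Integer, String

    class User(db.Model):
        __tablename__ = 'users'  # a plain declarative base does not generate this for you

        id = Column(Integer, primary_key=True)
        name = Column(String(80))

    # db.session is still provided by the extension for queries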
|
Python, argparse: different parameters with different number of arguments
Question: I would like to write a Python script called `sync` that has three or four
modes of operation, each receiving a different number of arguments. For
example,
sync set_version <build> <version_number>
sync get_version <build>
sync has_started <build_1> <build_2> ... <build_n>
I've tried using `argparse`'s subparsers for each mode of operation:
import argparse
parser = argparse.ArgumentParser(description='Build synchronization mechanism')
subparsers = parser.add_subparsers()
parser_get_version = subparsers.add_parser('get_version')
parser_get_version.add_argument('build')
parser_update_version = subparsers.add_parser('update_version')
parser_update_version.add_argument('build')
parser_update_version.add_argument('version')
args = parser.parse_args()
print args
The problem is that the help message does not reflect the structure of the
arguments of each operation mode. Instead, it simply lists the operation
modes:
usage: sync.py [-h] {get_version,update_version} ...
Build synchronization mechanism
positional arguments:
{get_version,update_version}
optional arguments:
-h, --help show this help message and exit
**How do I force argparse to display a full help message, with all the
subparsers parameters?**
Answer: Each sub-command has its own help; try `sync.py get_version --help`, for
example.
You'd have to override the help action yourself, and loop over the subcommands
and print help for each. This requires some fiddling with internal attributes:
class HelpAction(argparse._HelpAction):
def __call__(self, parser, namespace, values, option_string=None):
parser.print_help()
        for group in parser._subparsers._group_actions:
            for subparser in group.choices.values():
                subparser.print_help()
parser.exit()
parser = argparse.ArgumentParser(description='Build synchronization mechanism',
add_help=False)
parser.add_argument('-h', '--help', action=HelpAction, default=argparse.SUPPRESS,
help=argparse._('show this help message and exit'))
You probably want to tweak the output some more though.
|
urllib2 fails when URL has a port number appended
Question: The code below:
import urllib2
file = urllib2.urlopen("http://foo.bar.com:82")
works just fine on my Mac (OS X 10.8.4 running Python 2.7.1). It opens the URL
and I can parse the file with no problems.
When I try the EXACT same code (these two lines) in GoDaddy Python 2.7.3 (or
2.4) I receive an error:
urllib2.URLError: <urlopen error (111, 'Connection refused')
The problem has something to do with the port :82 that is an essential part of
the address. I have tried using a forwarding address with masking, etc., and
nothing works.
Any idea why it would work in one environment and not in the other (ostensibly
similar) environment? Any ideas how to get around this? I also tried Mechanize
to no avail. Previous posts have suggested focusing on
urllib2.HTTPBasicAuthHandler, but it works fine on my OS X environment without
anything special.
Ideas are welcome.
Answer: `Connection refused` means that your operating system tried to contact the
remote host, but got a "closed port" message.
Most likely, this is because of a firewall between GoDaddy and `foo.bar.com`.
Most likely, `foo.bar.com` is only reachable from your computer or your local
network, but it also could be GoDaddy preventing access to strange ports.
|
Printing not working in certain python functions
Question: Here is the code in trouble, should be self-explanatory with the comments:
import numpy as np
import sys
A = np.matrix([[1, 1], [2, 0]])
x0 = np.matrix([1, 0]).reshape(2, 1)
thresh = 1e-3
def inv_powerm(A, x0, thresh):
m0 = x0.flat[abs(x0).argmax()]
x1 = np.linalg.solve(A, (x0 / m0))
m1 = x1.flat[abs(x1).argmax()]
while abs(m1 - m0) > thresh:
m0 = m1
x1 = np.linalg.solve(A, (x1 / m1))
m1 = x1.flat[abs(x1).argmax()]
print(x1)
print(m1)
return m1;
def pmat(m):
i = 0
while i < 10:
print(m)
i = i + 1
return m
# I can print the matrix
print(A)
# I can print the matrix in pmat()
pmat(A)
# But I cannot print matrices in inv_powerm()
inv_powerm(A, x0, thresh)
Answer: It is not that `print` doesn't work; it is the logic of your code that fails.
In `inv_powerm`, the first time through, `m0 == 1` and `m1 == 1.0`, so
`abs(m1 - m0) == 0`. The `while` test is therefore false, so none of the code
inside the `while` loop is executed.
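If you want the body to run at least once no matter how close the first two
estimates happen to be, one common pattern (a sketch, keeping the three lines
before the loop as they are) is a `while True` loop with the convergence test
at the end:
    while True:
        m0 = m1
        x1 = np.linalg.solve(A, (x1 / m1))
        m1 = x1.flat[abs(x1).argmax()]
        print(x1)
        print(m1)
        if abs(m1 - m0) <= thresh:
            break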
|
Python Regex - How to remove text between 2 characters
Question: How can I remove anything between `")"` and `"|"`
For example,
str = "left)garbage|right"
I need the output to be `"left)|right"`
Answer:
>>> import re
>>> s = "left)garbage|right"
>>> re.sub(r'(?<=\)).*?(?=\|)', '', s)
'left)|right'
>>> re.sub(r'\).*?\|', r')|', s)
'left)|right'
|
Python convert date string to python date and subtract
Question: I have a situation where I need to find the previous date from the
`date_entry` where the `date_entry` is string, I managed to do this:
>>> from datetime import timedelta, datetime
>>> from time import strptime, mktime
>>> date_str = '20130723'
>>> date_ = strptime(date_str, '%Y%m%d')
>>> date_
time.struct_time(tm_year=2013, tm_mon=7, tm_mday=23, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=204,tm_isdst=-1)
>>> datetime.fromtimestamp(mktime(date_))-timedelta(days=1)
datetime.datetime(2013, 7, 22, 0, 0)
>>>
But, for this I have to import the modules `timedelta`, `datetime`, `strptime`
and `mktime`. I think this is really overkill for such a simple problem.
Is there any more elegant way to solve this (using Python 2.7) ?
Answer: Just use
[datetime.datetime.strptime](http://docs.python.org/2/library/datetime.html#datetime.datetime.strptime)
class method:
>>> import datetime
>>> date_str = '20130723'
>>> datetime.datetime.strptime(date_str, '%Y%m%d') - datetime.timedelta(days=1)
datetime.datetime(2013, 7, 22, 0, 0)
|
Animation using matplotlib with subplots and ArtistAnimation
Question: I am working on an image analysis and I want to create an animation of the
final results that includes the time-sequence of 2D data and a plot of the
time sequences at a single pixel such that the 1D plot updates as the 2D
animation progresses, and then set them up in subplots side by side. The link
below has an image of the end result, which would ideally be animated.

I keep getting an error: AttributeError: 'list' object has no attribute
'set_visible'. I googled it (as you do) and stumbled across
<http://matplotlib.1069221.n5.nabble.com/Matplotlib-1-1-0-animation-vs-
contour-plots-td18703.html> where one guy duck punches the code to set the
set_visible attribute. Unfortunately, the plot command does not seem to have
such an attribute so I am at a loss as to how I can produce the animation. I
have included the monkey patch in the minimal working example below (commented
out) as well as a second 'im2' that is also commented out which should work
for anyone trying to run the code. Obviously it will give you two 2D plot
animations. Minimal working example is as follows:
#!/usr/bin/env python
import matplotlib.pyplot as plt
import matplotlib.animation as anim
import numpy as np
import types
#create image with format (time,x,y)
image = np.random.rand(10,10,10)
image2 = np.random.rand(10,10,10)
#setup figure
fig = plt.figure()
ax1=fig.add_subplot(1,2,1)
ax2=fig.add_subplot(1,2,2)
#set up list of images for animation
ims=[]
for time in xrange(np.shape(image)[1]):
im = ax1.imshow(image[time,:,:])
# im2 = ax2.imshow(image2[time,:,:])
im2 = ax2.plot(image[0:time,5,5])
# def setvisible(self,vis):
# for c in self.collections: c.set_visible(vis)
# im2.set_visible = types.MethodType(setvisible,im2,None)
# im2.axes = plt.gca()
ims.append([im, im2])
#run animation
ani = anim.ArtistAnimation(fig,ims, interval=50,blit=False)
plt.show()
I was also curious as to whether anyone knew of a cool way to highlight the
pixel that the 1D data is being extracted from, or even draw a line from the
pixel to the rightmost subplot so that they are 'connected' in some way.
Adrian
Answer: `plot` returns a list of artists (which is why the error refers to a list).
This is so you can call `plot` like `lines = plot(x1, y1, x2, y2, ...)`.
Change
im2 = ax2.plot(image[0:time,5,5])
to
im2, = ax2.plot(image[0:time,5,5])
Adding the comma unpacks the length-one list into `im2`.
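A minimal, standalone sketch of what the comma does:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
lines = ax.plot([1, 2, 3])   # a list containing one Line2D artist
line, = ax.plot([1, 2, 3])   # the trailing comma unpacks that single artist
print(type(lines), type(line))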
As for your second question, we try to keep to one question per thread on SO,
so please open a new one.
|
Python 3 unicode encode error
Question: I'm using glob.glob to get a list of files from a directory input. When trying
to open said files, Python fights me back with this error:
> UnicodeEncodeError: 'charmap' codec can't encode character '\xf8' in
> position 18: character maps to < undefined >
By defining a string variable first, I can do this:
filePath = r"C:\Users\Jørgen\Tables\\"
Is there some way to get the 'r' encoding for a variable?
EDIT:
import glob
di = r"C:\Users\Jørgen\Tables\\"
def main():
fileList = getAllFileURLsInDirectory(di)
print(fileList)
def getAllFileURLsInDirectory(directory):
return glob.glob(directory + '*.xls*')
There is a lot more code, but this problem stops the process.
Answer: Regardless of whether you use a raw string literal or a normal string
literal, the Python interpreter must know the source code encoding. It seems you
are using some 8-bit encoding rather than UTF-8, so you have to add a line like
# -*- coding: cp1252 -*-
at the beginning of the file (or whatever encoding your source files actually use).
It need not be the first line, but it usually is the first or second (the first
should contain `#!python3` for a script used on Windows). In any case, it is
usually better not to use non-ASCII characters in file and directory names.
You can also use normal slashes in the path (the same way as in Unix-based
systems). Also, have a look at
[os.path.join](http://docs.python.org/3/library/os.path.html#os.path.join)
when you need to compose the paths.
**Updated**
The problem is probably not where you are looking for it. My guess is that the
error manifests only when you try to display the resulting list via `print`.
This is usually because the console uses a non-Unicode encoding by default and
cannot display the character. Try the `chcp` command without arguments in your
`cmd` window to see which code page is active.
You can modify the print command in your `main()` function to convert the
string representation to the ASCII one that can always be displayed:
print(ascii(fileList))
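To confirm that the console encoding is the culprit (an assumption worth checking), a
quick sketch:
import sys
# a value like 'cp850' or 'cp437' means the console cannot show 'ø',
# so print() raises UnicodeEncodeError for such characters
print(sys.stdout.encoding)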
|
Raw_input inside a Python process
Question: I have created a small script in python where I want to execute two function
on the same time using multiprocessing. The first function would do a
directory recursive search and the second one will display some questions to
the user. Although the .txt file is created the question doesn't appear. I
have seen this question: [Python command line input in a
process](http://stackoverflow.com/questions/5697305/python-command-line-input-
in-a-process) but as a beginner I did not understand what the problem is and
how to solve it. Here's my script:
import os
import thread
import time
from multiprocessing import Process
def writeFiles():
#open a file for writing files in it
f = open("testFile.txt","w")
#do the walk
for root ,dirs,files in os.walk('C:\\Users'):
for dir in dirs:
if dir.startswith('Test'):
for root ,dirs,files in os.walk('C:\\Users\\' + dir +'\Desktop'):
for file in files:
if file.endswith('.txt'):
#include the full path
f.write( os.path.join(root, file + "\n") )
#close the file
f.close()
def ask():
a = raw_input('Your name? ')
if a == 'Tester':
print 'Hello'
else:
print 'Bye'
if __name__ == '__main__':
# create processes
p1 = Process( target = writeFiles)
p2 = Process( target = ask)
p1.start()
p2.start()
Answer: The simplest thing to do would be to call `ask` from the main process itself:
if __name__ == '__main__':
p1 = Process(target = writeFiles)
p1.start()
ask()
Or you could use a thread:
import threading
import multiprocessing as mp
import sys
def ask(stdin):
print 'Your name? ',
a = stdin.readline().strip()
if a == 'Tester':
print 'Hello'
else:
print 'Bye'
stdin.close()
def writeFiles():
pass
if __name__ == '__main__':
p1 = mp.Process(target=writeFiles)
p1.start()
t1 = threading.Thread(target=ask, args=(sys.stdin,))
t1.start()
p1.join()
t1.join()
* * *
Or, you could use `os.dup` [as J.F. Sebastian shows
here](http://stackoverflow.com/a/8981813/190597):
import multiprocessing as mp
import sys
import os
def ask(stdin):
print 'Your name? ',
a = stdin.readline().strip()
if a == 'Tester':
print 'Hello'
else:
print 'Bye'
stdin.close()
def writeFiles():
pass
newstdin = os.fdopen(os.dup(sys.stdin.fileno()))
if __name__ == '__main__':
p1 = mp.Process(target=writeFiles)
p1.start()
p2 = mp.Process(target=ask, args=(newstdin,))
p2.start()
p1.join()
p2.join()
|
Is there a way to make local titles using subplot2grid in Python
Question: I'm using subplot2grid like in the example on the matplotlib page:
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1))
plt.suptitle("subplot2grid")

Is there a way to make a local subtitle below ax1, instead of the global one
on top of it?
Thanks
Answer: You can add titles to each subplot using the set_title() method of the axes.
Each title will still be displayed above the axis. If you want text below the
axis, you could use set_xlabel. For example:
import pylab as plt
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1))
# add titles to subplots
ax2.set_title('plot 2')
ax3.set_title('plot 3')
# add x-label to subplot
ax1.set_xlabel('plot 1 x-label')
# add y-label to subplot
ax1.set_ylabel('y-label')
plt.tight_layout()
plt.show()

You can also use figtext to add a new title like this:
# add Text
pos = ax1.get_position()
x = pos.x0 + 0.35
y = pos.y0
plt.figtext(x,y,'new title')
plt.tight_layout()
plt.show()
|
Simplest method to call a function from keypress in python(3)
Question: I have a python application in which a function runs in a recursive loop and
prints updated info to the terminal with each cycle around the loop, all is
good until I try to stop this recursion.
It does not stop until the terminal window is closed or the application is
killed (control-c is pressed); however, I am not satisfied with that method.
I have a function which will stop the loop and exit the program it just never
has a chance to get called in the loop, so I wish to assign it to a key so
that when it is pressed it will be called.
What is the simplest method to assign one function to one or many keys?
Answer: You can intercept the `ctrl+c` signal and call your own function at that time
rather than exiting.
import signal
import sys
def exit_func(signal, frame):
'''Exit function to be called when the user presses ctrl+c.
Replace this with whatever you want to do to break out of the loop.
'''
print("Exiting")
sys.exit(0) # remove this if you do not want to exit here
# register your exit function to handle the ctrl+c signal
signal.signal(signal.SIGINT, exit_func)
#loop forever
while True:
...
You should replace `sys.exit(0)` with something more useful to you. You could
raise an exception and then `except` on it outside the loop body (or use a
`finally` clause) to perform your cleanup actions.
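A sketch of that exception-based variant (the exception name is just for illustration):
import signal
import time
class StopRequested(Exception):
    pass
def request_stop(signum, frame):
    raise StopRequested
signal.signal(signal.SIGINT, request_stop)
try:
    while True:
        time.sleep(0.1)  # your loop body goes here
except StopRequested:
    print("Cleaning up before exit")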
|
Python for loop breaks text based game
Question:
#!/usr/bin/env python
import random
import time
import os
class vars:
running = 1
def win ():
print("You escaped!")
vars.running = 0
time.sleep(4)
return 0
def main ():
char_loc = 11 #The characters current co-ordinates in XY format
pos_char_loc = 11
ex_y = random.randint(1, 5)
ex_x = random.randint(1, 5) * 10
ex_loc = ex_x + ex_y
while vars.running == 1:
os.system('CLS')
x0 = ["#"] * 5
x1 = ["#"] * 5
x2 = ["#"] * 5
x3 = ["#"] * 5
x4 = ["#"] * 5
if (char_loc >= 11 and char_loc <= 55):
if (char_loc >= 11 and char_loc <= 15):
i = 0; k = 11
for x in range(0, 4):
if char_loc == k:
x0.insert(i, '@')
else:
i += 1
k += 1
if (char_loc >= 21 and char_loc <= 25):
i =0; k = 21
for loop1 in range(0, 4):
if char_loc == k:
x1.insert(i, '@')
else:
i += 1
k += 1
if (char_loc >= 31 and char_loc <= 35):
i =0; k = 31
for loop2 in range(0, 4):
if char_loc == k:
x2.insert(i, '@')
else:
i += 1
k += 1
if (char_loc >= 41 and char_loc <= 45):
i =0; k = 41
for loop3 in range(0, 4):
if char_loc == k:
x3.insert(i, '@')
else:
i += 1
k += 1
if (char_loc >= 51 and char_loc <= 55):
i =0; k = 51
for loop5 in range(0, 4):
if char_loc == k:
x4.insert(i, '@')
else:
i += 1
k += 1
else:
print("fail")
print( x0[4],x1[4],x2[4],x3[4],x4[4])
print( x0[3],x1[3],x2[3],x3[3],x4[3])
print( x0[2],x1[2],x2[2],x3[2],x4[2])
print( x0[1],x1[1],x2[1],x3[1],x4[1])
print( x0[0],x1[0],x2[0],x3[0],x4[0])
print(char_loc, ex_loc)
if char_loc == ex_loc:
win()
move = input()
if move == "w" and (char_loc != 15 and char_loc != 25 and char_loc != 35 and char_loc != 45 and char_loc !=55):
char_loc += 1
print("up")
elif move == "s" and (char_loc != 11 and char_loc != 21 and char_loc != 31 and char_loc != 41 and char_loc != 51):
char_loc -= 1
print("down")
elif move == "a" and (char_loc != 11 and char_loc != 12 and char_loc != 13 and char_loc != 14 and char_loc != 15):
char_loc -= 10
print("left")
elif move == "d" and (char_loc != 51 and char_loc != 52 and char_loc != 53 and char_loc != 54 and char_loc != 55):
char_loc += 10
print("right")
else: print("You can't move there!")
if __name__ == '__main__': main()
I'm trying to make a simple text based game where you move the '@' around a
grid of '#'s and try to find the exit. I've changed the code to make it easier
for me to make the grid bigger or smaller without adding or deleting huge
chunks of code and it keeps on giving me this output:
fail
# # # # #
@ # # # #
@ # # # #
@ # # # #
@ # # # #
11 52
and I can't figure out what's wrong with it! Only one '@' is supposed to
appear :( I am only a newbie at python so if you have any tips for improving
this please, don't hesitate, and post them! Thanks in advance,
Answer: I think the "fail" occurs because it will occur every time the char_loc is not
between 51 and 55.
if (char_loc >= 11 and char_loc <= 15):
if (char_loc >= 21 and char_loc <= 25):
if (char_loc >= 31 and char_loc <= 35):
if (char_loc >= 41 and char_loc <= 45):
if (char_loc >= 51 and char_loc <= 55):
else:
What I think you'd want to do here is use elif, which will only fire if the
previous checks don't trigger.
if (char_loc >= 11 and char_loc <= 15):
elif (char_loc >= 21 and char_loc <= 25):
elif (char_loc >= 31 and char_loc <= 35):
elif (char_loc >= 41 and char_loc <= 45):
elif (char_loc >= 51 and char_loc <= 55):
else:
Regarding the multiple @ symbols, I think this plays a part. Currently
you have:
if char_loc == k:
x0.insert(i, '@')
else:
i += 1
k += 1
What I think you're looking to do is:
if char_loc == k:
x0.insert(i, '@')
i += 1
k += 1
Since you want k to change every time that loop iterates.
One last thing I would suggest: since you have
i =0; k = 21
i =0; k = 31
i =0; k = 41
i =0; k = 51
you will probably want to add
i = 0; k = 11
to the first one as well.
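Putting this together, the five near-identical blocks inside main() can also collapse
into one loop over the columns. This is just a rough sketch (untested, using the
variable names from your code; it assigns the '@' in place rather than inserting, which
is another way to avoid ending up with several of them):
columns = [x0, x1, x2, x3, x4]
for col_index, column in enumerate(columns):
    base = (col_index + 1) * 10 + 1      # 11, 21, 31, 41, 51
    if base <= char_loc <= base + 4:
        column[char_loc - base] = '@'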
Hope that helps.
|
Existing Tkinter Code that Takes an Input String to Another String
Question: I already posted about this, however, my purpose here is different. I believe
there should be code around that does something very similar to this. I was
hoping someone might have an idea of where to look for examples like this
(interfaces of functions that act on strings). Even better, does anyone have a
block of code they already have available? I'm just looking for a template to
base mine off of. Thanks in advance.
To give you an idea of what I mean, here's the function I have in mind:
def psi_j(x, j):
rtn = []
for n2 in range(0, len(x) * j - 2):
n = n2 / j
r = n2 - n * j
rtn.append(j * x[n] + r * (x[n + 1] - x[n]))
print 'n2 =', n2, ': n =', n, ' r =' , r, ' rtn =', rtn
return rtn
This code takes a string x = [0,1,1,1,2] for example (it must always begin
with 0) and a parameter j, say 2, and outputs a string (x = [0, 1, 2, 2, 2, 2,
2, 3] in this example).
It does this in two steps: First it decomposes some number m into a multiple
of j and a remainder. Then it runs this decomposition through a function on
the rtn.append line.
Notice that this has cj - 1 terms where c is the number of terms in the input
string and j is the parameter. Normally, we would like it to be able to
calculate cj terms. This is an issue with the function that I am more than
happy to put aside for the moment.
My key interest is to be able to make this program usable for someone who has
no knowledge of programming. In particular, I need some kind of user interface
that prompts the user to input a string (ideally just by putting in numbers in
the form 011123334 for example) and a parameter.
EDIT:
The error is on the third line, and python highlights the space to the right
of (self, master) in orange all the way to the end of the row.
Answer: If you're interested in making a user interface of this with Tkinter you can
use the following code:
from Tkinter import *
class App(Frame):
def __init__(self, master):
Frame.__init__(self, master)
self.grid()
self.create_widgets()
def create_widgets(self):
self.entryLabel = Label(self, text="Please enter a list of numbers (no commas):")
self.entryLabel.grid(row=0, column=0, columnspan=2)
self.listEntry = Entry(self)
self.listEntry.grid(row=0, column=2, sticky=E)
self.entryLabel = Label(self, text="Please enter an index value:")
self.entryLabel.grid(row=1, column=0, columnspan=2, sticky=E)
self.indexEntry = Entry(self)
self.indexEntry.grid(row=1, column=2)
self.runBttn = Button(self, text="Run Function", command=self.psiFunction)
self.runBttn.grid(row=2, column=0, sticky=W)
self.answerLabel = Label(self, text="Output List:")
self.answerLabel.grid(row=2, column=1, sticky=W)
def psiFunction(self):
j = int(self.indexEntry.get())
valueList = list(self.listEntry.get())
x = map(int, valueList)
if x[0] != 0:
x.insert(0, 0)
rtn = []
for n2 in range(0, len(x) * j - 2):
n = n2 / j
r = n2 - n * j
rtn.append(j * x[n] + r * (x[n + 1] - x[n]))
self.answer = Label(self, text=rtn)
self.answer.grid(row=2, column=2, sticky=W)
if __name__ == "__main__":
root = Tk()
app = App(root)
root.mainloop()
If you have any questions let me know; this works in python 2.7.
This program automatically adds a leading zero if the user forgets one, and the
input format is, for example, 01112.
You can use `rtn` later on as well; it is a list of results.
|
thread safe python dictionaries?
Question: I have a function call that starts 10 threads. Before the start of these
threads, I have
from collections import defaultdict
output = defaultdict(dict)
and output is empty.
Each thread will generate data to write to the dictionary.
Something like:
output['water'] = 'h20'
output['fire'] = 'delta of oxygen'
....
The threads will only add items and they do not iterate over any of the other
items or modify any other items. `output['water']` being an item that is
different from `output['fire']`. I can also guarantee that no two threads are
going to create the same item. That is, each thread T has a unique i. In code:
output[i] is unique per thread.
Is this dictionary thread safe in this regard?
Answer: Yes.
If you are using CPython and strings as keys, then yes. The GIL in CPython
ensures only one thread executes bytecode at a time, and setting a key to a
value in a dict happens in a single opcode, `STORE_SUBSCR`. If you are not
using CPython, or you are using a key that has custom `__hash__`, `__eq__`, or
`__cmp__` methods, all bets are off. If I had a soapbox, I'd hop on it and
warn you of the evils of relying on implementation details like this for
correctness. It's more pythonic of you to write something that works only for
the case and in the environment where it will be used, since doing otherwise
could be seen as a premature optimization. Enjoy your working code!
>>> from dis import dis
>>> dis(compile('output = defaultdict(dict); output["water"] = "H2O"', 'example', 'exec'))
1 0 LOAD_NAME 0 (defaultdict)
3 LOAD_NAME 1 (dict)
6 CALL_FUNCTION 1 (1 positional, 0 keyword pair)
9 STORE_NAME 2 (output)
12 LOAD_CONST 0 ('H2O')
15 LOAD_NAME 2 (output)
18 LOAD_CONST 1 ('water')
21 STORE_SUBSCR
22 LOAD_CONST 2 (None)
25 RETURN_VALUE
This has been [discussed
elsewhere](http://bytes.com/topic/python/answers/655729-thread-safety-dict).
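If you would rather not depend on that implementation detail, a small sketch of the
portable alternative is to guard the writes with a lock:
import threading
from collections import defaultdict
output = defaultdict(dict)
output_lock = threading.Lock()
def record(key, value):
    # safe regardless of interpreter or key type
    with output_lock:
        output[key] = value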
|
python timeout using os.system
Question: So, I know everyone is going to tell me to use the subprocess module, but I
can't use that for the project I am working on since Piping simply doesn't
want to work with wxpython and py2exe on my system.
So, I've been using the os.system call. I need to know how to wait for the
process to end. Currently, I have
os.system(cmd)
and my command may actually take a long time to execute, so it usually times
out early. How can I make my program to wait for os.system? I've tried waitpid
and I guess that doesn't work for os.system.
I am developing for windows so I can't use fork and execvp unfortunately. I
have a lot of hands tied :(
Answer: You can correct your code:
os.system('cmd')
_Extra explanation about subprocess:_
import subprocess
ls_output = subprocess.check_output(['ls'])
_Running an External Command_
To run an external command without interacting with it, as one would do
with `os.system()`, use the `call()` function.
import subprocess
# Simple command
subprocess.call('ls -l', shell=True)
$ python replace_os_system.py
total 16
-rw-r--r-- 1 root8085 root8085 0 Jul 1 13:27 __init__.py
-rw-r--r-- 1 root8085 root8085 1316 Jul 1 13:27 replace_os_system.py
-rw-r--r-- 1 root8085 root8085 1167 Jul 1 13:27 replace_os_system.py~
# run cmd
import subprocess
l = subprocess.call(['cmd'])
Extra example: Make a system call three different ways:
#! /usr/bin/env python
import subprocess
# Use a sequence of args
return_code = subprocess.call(["echo", "hello sequence"])
# Set shell=true so we can use a simple string for the command
return_code = subprocess.call("echo hello string", shell=True)
# subprocess.call() is equivalent to using subprocess.Popen() and wait()
proc = subprocess.Popen("echo hello popen", shell=True)
return_code = proc.wait() # wait for process to finish so we can get the return code
_Control stderr and stdout:_
#! /usr/bin/env python
import subprocess
# Put stderr and stdout into pipes
proc = subprocess.Popen("echo hello stdout; echo hello stderr >&2", \
shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
return_code = proc.wait()
# Read from pipes
for line in proc.stdout:
print("stdout: " + line.rstrip())
for line in proc.stderr:
print("stderr: " + line.rstrip())
|
Sending to Exchange: How to Disable Disable Lossy Conversion of HTML to RTF?
Question: I have a python script which sends a multipart email with text, html, and ics
attachments. The idea is that a modern email client will render the HTML part
and offer to add the event to the user's calendar.
Code looks like:
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from icalendar import Calendar, Event, UTC, vText, vCalAddress
# ... set up calendar invite, render text and html templates ...
msg = MIMEMultipart('alternative')
msg_text = MIMEText(body_text, 'plain', 'utf-8')
msg_html = MIMEText(body_html, 'html', 'utf-8')
meeting = MIMEText(cal.as_string(), 'calendar;method=REQUEST', 'utf-8')
meeting.set_param('method', 'REQUEST')
meeting.set_param('name', 'meeting.ics')
meeting.add_header('Content-class', 'urn:content-classes:calendarmessage')
# ... set up various message attributes: to/from/subject ...
msg.add_header('Content-class', 'urn:content-classes:calendarmessage')
msg.attach(msg_text)
msg.attach(msg_html)
msg.attach(meeting)
s = smtplib.SMTP(smtp_server)
s.sendmail(sender, send_to, msg.as_string())
This works: I get the message, it is displayed as HTML, and I can easily add
the event to my calendar in Outlook and Mac Mail. However, the HTML is broken.
Here is the telltale in the HTML:
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from rtf -->
Here is what I know so far:
* If I drop the calendar attachment, the HTML makes it through without conversion.
* I can only get the calendar invite to work if I attach it to the message last.
So, something about having the calendar invite attached causes Exchange to
mess up my HTML message. Is there maybe a header I can add to the message or
the HTML to ask Microsoft to pretty please not convert my HTML to RTF?
Answer: Outlook only works with RTF when it comes to appointments, tasks and contacts.
If a meeting invitation comes with an HTML body, it gets converted to RTF.
UPDATE: as of Outlook 2016, this is no longer the case: Outlook now natively
supports HTML for the appointments and tasks. HTML is stored in PR_HTML (or
RTF-wrapped HTML in PR_RTF_COMPRESSED), and you can specify the format just
like with regular email messages. It is still not exposed on the Outlook
Object Model level unfortunately - there is no AppointmentItem.HTMLBody
property yet.
|
efficiently swap a python dict's keys and values where the values contain one or more elements
Question: Suppose I have a dict:
x = { "a": ["walk", "the", "dog"], "b": ["dog", "spot"], "c":["the", "spot"] }
and want to have the new dict:
y = { "walk": ["a"], "the": ["a", "c"], "dog":["a", "b"], "spot":["b","c"] }
What is the most efficient way to do this? If a solution is a few lines and is
somehow made simple by a pythonic construct what is it (even if it's not most
efficient)?
Note that this is different than other questions where the value is a single
element and not a list.
Answer: You can use `defaultdict`:
from collections import defaultdict
y = defaultdict(list)
for key, values in x.items(): # .iteritems() in Python 2
for value in values:
y[value].append(key)
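If you prefer a single expression (arguably less readable, and it rescans `x` once per
distinct value, so it is not the most efficient), a sketch:
y = {value: [key for key in x if value in x[key]]
     for values in x.values() for value in values}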
|
Python Bible verse lookup
Question: I'm fairly new to Python and I'm trying to learn. I'm writing a program that
will import a text file containing the King James Bible. The user would enter a
bible verse, for instance gen 1:1 or gen 1:1-10, and it will display that verse or
those verses based on the raw input. So far I have it to where the program reads the
file and splits the data, but I'm not sure what features of Python I could use to
finish this:
bible = open("kjv.txt" , "r").readlines()
for line in bible:
x = line.split("|")
print "%s, %s, %s" % (x[0], x[1], x[2])
a sample of how the text file looks
0 | gen 1:1 | In the beginning God created the heaven and the earth.
1 | gen 1:2 | And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.
2 | gen 1:3 | And God said, Let there be light: and there was light.
3 | gen 1:4 | And God saw the light, that it was good: and God divided the light from the darkness.
4 | gen 1:5 | And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.
5 | gen 1:6 | And God said, Let there be a firmament in the midst of the waters, and let it divide the waters from the waters.
6 | gen 1:7 | And God made the firmament, and divided the waters which were under the firmament from the waters which were above the firmament: and it was so.
7 | gen 1:8 | And God called the firmament Heaven. And the evening and the morning were the second day.
8 | gen 1:9 | And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so.
9 | gen 1:10 | And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good.
Answer:
bibletext = """the bible contents"""
bible = {}
for line in bibletext.splitlines():
number,bv,contents = line.split(" | ")
book,verse = bv.strip().split(" ")
print book
print bible
if book in bible:
bible[book].append([verse,contents])
else:
bible[book] = [[verse,contents]]
print bible
This will return a `bible` dictionary, using books as keys (so you can do
`bible['gen']`, for example, and get the contents of that book). The contents
of a book are stored as a list of lists, like this:
[['1:1', 'In the beginning God created the heaven and the earth.'], ['1:2', 'And the earth was without form, and void; and darkness was upon the face of the deep. And the Spirit of God moved upon the face of the waters.']]
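If the end goal is looking up a single reference such as gen 1:1, a flat index keyed by
the full reference may be even simpler. A sketch, assuming the kjv.txt format shown in
the question:
verse_index = {}
with open("kjv.txt") as f:
    for line in f:
        number, ref, contents = line.split(" | ")
        verse_index[ref.strip()] = contents.strip()
query = raw_input("Verse? ")          # e.g. "gen 1:3"
print verse_index.get(query, "Verse not found")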
In the future, if you need something more specific, please _specify_ that in
your question.
|
Python : How to parse the Body from a raw email , given that raw email does not have a "Body" tag or anything
Question: It seems easy to get the
From
To
Subject
etc via
import email
b = email.message_from_string(a)
bbb = b['from']
ccc = b['to']
assuming that `"a"` is the raw-email string which looks something like this.
a = """From [email protected] Thu Jul 25 19:28:59 2013
Received: from a1.local.tld (localhost [127.0.0.1])
by a1.local.tld (8.14.4/8.14.4) with ESMTP id r6Q2SxeQ003866
for <[email protected]>; Thu, 25 Jul 2013 19:28:59 -0700
Received: (from root@localhost)
by a1.local.tld (8.14.4/8.14.4/Submit) id r6Q2Sxbh003865;
Thu, 25 Jul 2013 19:28:59 -0700
From: [email protected]
Subject: oooooooooooooooo
To: [email protected]
Cc:
X-Originating-IP: 192.168.15.127
X-Mailer: Webmin 1.420
Message-Id: <1374805739.3861@a1>
Date: Thu, 25 Jul 2013 19:28:59 -0700 (PDT)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="bound1374805739"
This is a multi-part message in MIME format.
--bound1374805739
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
ooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooo
ooooooooooooooooooooooooooooooooooooooooooooooo
--bound1374805739--"""
**THE QUESTION**
How do you get the `Body` of this email via Python?
So far this is the only code I am aware of, but I have yet to test it.
if email.is_multipart():
for part in email.get_payload():
print part.get_payload()
else:
print email.get_payload()
Is this the correct way? Or maybe there is something simpler, such as...
import email
b = email.message_from_string(a)
bbb = b['body']
?
Answer: Use
[Message.get_payload](http://docs.python.org/2/library/email.message#email.message.Message.get_payload)
b = email.message_from_string(a)
if b.is_multipart():
for payload in b.get_payload():
# if payload.is_multipart(): ...
print payload.get_payload()
else:
print b.get_payload()
|
how to import module from other directory in python?
Question: This is my directory tree
Game/
a/
1.py
...
b/
2.py
In 2.py I want to import the function display from 1.py. When I keep both files in
the same folder there is no problem, but how do I import from another location?
Answer: try using [`imp`](http://docs.python.org/2/library/imp.html):
import imp
foo = imp.load_source('filename', 'File/Directory/filename.py')
This works just like a normal import; you can then use the imported module through
the name you gave it (in this case `foo`) like this:
foo.method()
Hope that's what you're looking for!
You can also try this:
import sys
sys.path.append('folder_name')
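For the Game/ layout in the question specifically, a sketch of both approaches from
inside b/2.py (the display function name comes from the question; note that a module
literally named 1.py cannot be imported with a plain import statement, which is why
load_source is handy here):
import imp
import os
import sys
here = os.path.dirname(os.path.abspath(__file__))
# option 1: load a/1.py under an importable name
mod_a = imp.load_source('mod_a', os.path.join(here, '..', 'a', '1.py'))
mod_a.display()
# option 2: add a/ to sys.path (works only for modules with importable names)
sys.path.append(os.path.join(here, '..', 'a'))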
|
python list group by first character
Question:
list1=['hello','hope','hate','hack','bit','basket','code','come','chess']
What I need is:
list2=[['hello','hope','hate','hack'],['bit','basket'],['code','come','chess']]
If words share the same first character, they belong in the same group, and each
group should become a sublist. How can I solve this?
Answer: You can use
[`itertools.groupby`](http://docs.python.org/2/library/itertools.html#itertools.groupby):
>>> from itertools import groupby
>>> list1 = ['hello','hope','hate','hack','bit','basket','code','come','chess']
>>> [list(g) for k, g in groupby(list1, key=lambda x: x[0])]
[['hello', 'hope', 'hate', 'hack'], ['bit', 'basket'], ['code', 'come', 'chess']]
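One caveat: `groupby` only groups _consecutive_ items, so if the input is not already
ordered by first letter, sort it first. A sketch:
>>> shuffled = ['bit', 'hello', 'code', 'hope', 'basket']
>>> [list(g) for k, g in groupby(sorted(shuffled, key=lambda x: x[0]), key=lambda x: x[0])]
[['bit', 'basket'], ['code'], ['hello', 'hope']]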
|
work with cassandra on centos
Question: I have a problem when I want to work with Cassandra on CentOS. Everything
else is right: I have installed Python, Django and Cassandra on it, but when I want
to run my project I get an error importing cqlengine. Can anyone help me with it?
Thanks
Answer: You need to upgrade Python on your CentOS box. It looks like you're running
Python 2.6 there. Set comprehensions (which the cqlengine source uses) were not
introduced until Python 2.7, so that's most likely your problem.
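A quick way to confirm the version mismatch (assuming the failing construct is indeed a
set comprehension):
>>> {x for x in [1, 2, 2]}      # SyntaxError on Python 2.6; works on 2.7+
set([1, 2])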
|