Syntax error with a simple Python import
Question: My GF is trying to follow the [Udacity's Web Development
course](https://www.udacity.com/course/cs253) but she ran into a problem. And
I can't solve it. It's just at the start when one has to create a "hello
world" Python script that runs on AppEngine.
So, the files:
app.yaml:
application: focus-invention-298
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /.*
  script: helloworld.app
helloworld.py:
# -*- coding: utf8 -*-
βimport webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello, Udacity!')

application = webapp2.WSGIApplication([('/', MainPage)], debug=True)
But, when I run the app (either through the GUI launcher or with
dev_appserver.py) and open the app in the browser, I get this error (in the
console):
Traceback (most recent call last):
File "/Users/Kaja/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 196, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/Users/Kaja/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 255, in _LoadHandler
handler = __import__(path[0])
File "/Users/Kaja/Documents/udacity/helloworld.py", line 3
βimport webapp2
^
SyntaxError: invalid syntax
INFO 2013-08-05 14:06:00,875 module.py:595] default: "GET / HTTP/1.1" 500 -
ERROR 2013-08-05 14:06:01,012 wsgi.py:219]
Traceback (most recent call last):
File "/Users/Kaja/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 196, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/Users/Kaja/Desktop/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/runtime/wsgi.py", line 255, in _LoadHandler
handler = __import__(path[0])
File "/Users/Kaja/Documents/udacity/helloworld.py", line 3
βimport webapp2
^
SyntaxError: invalid syntax
We are on OSX 10.8.4 and when I run python in the terminal it tells me I have
2.7.2 version installed. AppEngine launcher (or SDK) version is 1.8.2.
Anyone? I've tried so many things now without success that I really don't know
what to do anymore (I'm not a python dev) and I really wanna make this thing
work so my GF can continue learning :)
Answer: There are bytes _before_ the `import` statement (Unicode non-breaking
space characters are a prime candidate) that can cause this.
Check the first 50 bytes or so:
print repr(open('helloworld.py', 'rb').read(50))
If you see a sequence like `'\xc2\xa0'`, for example, then you have UTF-8
encoded non-breaking space characters in there.
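If that turns out to be the case, one way to clean it up is to rewrite the file
with the offending bytes stripped (a minimal sketch, assuming the rest of the
file is plain ASCII; make a backup of helloworld.py first):
data = open('helloworld.py', 'rb').read()
# Drop anything outside the ASCII range, e.g. stray UTF-8 non-breaking spaces.
cleaned = ''.join(ch for ch in data if ord(ch) < 128)
open('helloworld.py', 'wb').write(cleaned)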
|
Ignore certificate validation with urllib3
Question: I'm using urllib3 against private services that have self-signed certificates.
Is there any way to have urllib3 ignore the certificate errors and make the
request anyways?
import urllib3
c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001)
c.request('GET', '/')
When using the following:
import urllib3
c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001, cert_reqs='CERT_NONE')
c.request('GET', '/')
The following error is raised:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 67, in request
**urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/request.py", line 80, in request_encode_url
return self.urlopen(method, url, **urlopen_kw)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 415, in urlopen
body=body, headers=headers)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 267, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.3/http/client.py", line 1061, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python3.3/http/client.py", line 1099, in _send_request
self.endheaders(body)
File "/usr/lib/python3.3/http/client.py", line 1057, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.3/http/client.py", line 902, in _send_output
self.send(msg)
File "/usr/lib/python3.3/http/client.py", line 840, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 103, in connect
match_hostname(self.sock.getpeercert(), self.host)
File "/usr/lib/python3/dist-packages/urllib3/packages/ssl_match_hostname/__init__.py", line 32, in match_hostname
raise ValueError("empty or no certificate")
ValueError: empty or no certificate
Using `cURL` I'm able to get the expected response from the service
$ curl -k https://10.0.3.168:9001/
Please read the documentation for API endpoints
Answer: Try the following code:
import urllib3
c = urllib3.HTTPSConnectionPool('10.0.3.168', port=9001, cert_reqs='CERT_NONE',
assert_hostname=False)
c.request('GET', '/')
See [Setting assert_hostname to False will disable SSL hostname
verification](https://github.com/shazow/urllib3/pull/194)
|
Python: Export Messages as .msg using pywin32.client
Question: I am hitting a road block with exporting outlook messages as .msg files. I'm
not sure on how to accomplish this task. I have a program that currently reads
the email and exports the attachments and once it completes it moves the
message to a processed folder so I can keep track of what has been completed.
I need to add a function that exports the entire email itself to a folder on
the local machine. Has anyone accomplished this using pywin32.client?
Here is the program as it stands now. (Excuse the mess, it's still in progress.)
import os
import win32com.client
import csv
import datetime
from random import randint
ATTACHMENTS_FOLDER = "C:\\EMAILS"
LOG_PATH = 'C:\\EMAILS\\log.csv'
COUNTER = 1
SUBFOLDER_LIST = ["TEST"]
UPLOAD_LIST = 'C:\\EMAILS\\logupload%d.csv' % randint(2,150000)
def ExportMessages(Folder, item):
    #No IDEA!
    pass

def process_attachment(sender, subject, attachment, messagedate):
    """
    :param sender:
    :param subject:
    :param attachment:
    :param messagedate:
    """
    global count
    count = 0
    try:
        filepath = os.path.join(ATTACHMENTS_FOLDER, "RAN-%dSEN-%sSUB-%s%s" % (randint(2,100000),str(sender), str(subject), attachment))
        count = count + 1
        print "Trying", filepath
        open(filepath, "r")
        os.remove(filepath)
    except IOError:
        pass
    try:
        attachment.SaveAsFile(filepath)
        row = [messagedate, sender, subject, count]
        row2 = [messagedate, sender, subject, filepath]
        w = csv.writer(csv_file)
        w2 = csv.writer(csv_file2)
        w.writerow(row)
        w2.writerow(row2)
    except:
        pass

def move_message_fail(message, folder):
    """
    :param message:
    :param folder:
    """
    print "Moving:", message.Subject
    proc_folder = folder.Folders("Failed")
    message.Move(proc_folder)

def move_message(folder, message):
    """
    :param folder:
    :param message:
    """
    print "Moving:", message.Subject
    proc_folder = folder.Folders("Processed")
    message.Move(proc_folder)

def process_message(message, folder):
    """
    :param message:
    :param folder:
    """
    global vin_num
    vin_num = ''
    print "Message:", message.Subject
    vin = message.Subject
    sender = message.SenderName
    if folder == SUBFOLDER_LIST[0]:
        for i in vin.split(' '):
            if '-' in i:
                vin_num = i
        if vin_num:
            now = datetime.datetime.now()
            messagedate = now.strftime("%m-%d-%Y")
            attachments = message.Attachments
            for n_attachment in range(attachments.Count):
                attachment = attachments.Item(n_attachment + 1)
                #if attachment.Type == win32com.client.constants.CdoFileData:
                process_attachment(sender, vin_num, attachment, messagedate)
            #message.Update()
            move_message(folder, message, up_folder)
        else:
            move_message_fail(message, folder)
    else:
        vin_num = vin.split(' ')[1]
        now = datetime.datetime.now()
        messagedate = now.strftime("%m-%d-%Y")
        attachments = message.Attachments
        for n_attachment in range(attachments.Count):
            attachment = attachments.Item(n_attachment + 1)
            #if attachment.Type == win32com.client.constants.CdoFileData:
            process_attachment(sender, vin_num, attachment, messagedate)
        #message.Update()
        move_message(folder, message, up_folder)

if __name__ == '__main__':
    csv_file = open(LOG_PATH, "ab")
    csv_file2 = open(UPLOAD_LIST, "wb")
    for i in SUBFOLDER_LIST:
        outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
        one_folder = outlook.Folders(1)
        two_folder = one_folder.Folders("Inbox")
        three_folder = two_folder.Folders(i)
        messages = three_folder.Items
        message = messages.GetFirst()
        while message:
            process_message(message, i)
            ExportMessages(three_folder, message)
            message = messages.GetNext()
Answer: Call MailItem.SaveAs and pass olMsg or olMsgUnicode as the second (Format)
parameter.
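A minimal sketch of what `ExportMessages` could look like along those lines (the
export folder and file-name scheme here are illustrative assumptions; `3` is the
numeric value of the `olMSG` constant):
import os
import re

def ExportMessages(folder, message, export_dir="C:\\EMAILS\\MSG"):
    # folder is unused here but kept so the existing call still matches.
    # Build a file-system-safe name from the subject (illustrative naming scheme).
    safe_subject = re.sub(r'[\\/:*?"<>|]', '_', message.Subject or "no_subject")
    path = os.path.join(export_dir, safe_subject + ".msg")
    # 3 == olMSG, 9 == olMSGUnicode (Outlook OlSaveAsType values).
    message.SaveAs(path, 3)
The main loop in the question already calls `ExportMessages(three_folder, message)`,
so a function shaped like this would slot in as-is.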
|
xmlns namespace breaking lxml
Question: I am trying to open an xml file, and get values from certain tags. I have done
this a lot but this particular xml is giving me some issues. Here is a section
of the xml file:
<?xml version='1.0' encoding='UTF-8'?>
<package xmlns="http://apple.com/itunes/importer" version="film4.7">
<provider>filmgroup</provider>
<language>en-GB</language>
<actor name="John Smith" display="Doe John"></actor>
</package>
And here is a sample of my python code:
metadata = '/Users/mylaptop/Desktop/Python/metadata.xml'
from lxml import etree
parser = etree.XMLParser(remove_blank_text=True)
open(metadata)
tree = etree.parse(metadata, parser)
root = tree.getroot()
for element in root.iter(tag='provider'):
    providerValue = tree.find('//provider')
    providerValue = providerValue.text
    print providerValue
tree.write('/Users/mylaptop/Desktop/Python/metadataDone.xml', pretty_print = True, xml_declaration = True, encoding = 'UTF-8')
When I run this it can't find the provider tag or its value. If I remove
`xmlns="http://apple.com/itunes/importer"` then everything works as expected. My
question is: how can I remove this namespace (I'm not at all interested in it),
so I can get the tag values I need using lxml?
Answer: The `provider` tag is in the `http://apple.com/itunes/importer` namespace, so
you either need to use the fully qualified name
{http://apple.com/itunes/importer}provider
or use one of the lxml methods that has [the `namespaces`
parameter](http://lxml.de/xpathxslt.html#namespaces-and-prefixes), such as
`root.xpath`. Then you can specify it with a namespace prefix (e.g.
`ns:provider`):
from lxml import etree
parser = etree.XMLParser(remove_blank_text=True)
tree = etree.parse(metadata, parser)
root = tree.getroot()
namespaces = {'ns':'http://apple.com/itunes/importer'}
items = iter(root.xpath('//ns:provider/text()|//ns:actor/@name',
                        namespaces=namespaces))
for provider, actor in zip(*[items]*2):
    print(provider, actor)
yields
('filmgroup', 'John Smith')
Note that the XPath used above assumes that `<provider>` and `<actor>`
elements always appear in alternation. If that is not true, then there are of
course ways to handle it, but the code becomes a bit more verbose:
for package in root.xpath('//ns:package', namespaces=namespaces):
    for provider in package.xpath('ns:provider', namespaces=namespaces):
        providerValue = provider.text
        print providerValue
    for actor in package.xpath('ns:actor', namespaces=namespaces):
        print actor.attrib['name']
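For completeness, the fully qualified name route mentioned at the top also works
with plain `iter` (a small sketch, not part of the original code):
NS = '{http://apple.com/itunes/importer}'
for provider in root.iter(NS + 'provider'):
    print provider.text
for actor in root.iter(NS + 'actor'):
    print actor.attrib['name']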
|
Write bitarray to file in python?
Question: I am trying to write a bit array to a file in python as in this example:
[python bitarray to and from
file](http://stackoverflow.com/questions/6266330/python-bitarray-to-and-from-
file)
however, I get garbage in my actual test file:
test_1 = ^@^@
test_2 = ^@^@
code:
from bitarray import bitarray

def test_function(myBitArray):
    test_bitarray = bitarray(10)
    test_bitarray.setall(0)
    with open('test_file.inp','w') as output_file:
        output_file.write('test_1 = ')
        myBitArray.tofile(output_file)
        output_file.write('\ntest_2 = ')
        test_bitarray.tofile(output_file)
Any help with what's going wrong would be appreciated.
Answer: That's not garbage. The `tofile` function writes binary data to a binary file.
A 10-bit-long `bitarray` with all 0's will be output as two bytes of 0. (The
docs explain that when the length is not a multiple of 8, it's padded with 0
bits.) When you read that as text, two 0 bytes will look like `^@^@`, because
`^@` is the way (many) programs represent a 0 byte as text.
If you want a human-readable, text-friendly representation, use the `to01`
method, which returns a human-readable string. For example:
with open('test_file.inp','w') as output_file:
    output_file.write('test_1 = ')
    output_file.write(myBitArray.to01())
    output_file.write('\ntest_2 = ')
    output_file.write(test_bitarray.to01())
Or maybe you want this instead:
output_file.write(str(test_bitarray))
… which will give you something like:
bitarray('0000000000')
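If the bits ever need to be read back, the `to01` text converts straight back into
a `bitarray` (a small round-trip sketch, assuming the `label = bits` layout written
above):
from bitarray import bitarray

with open('test_file.inp') as input_file:
    for line in input_file:
        label, bits = line.split(' = ')
        restored = bitarray(bits.strip())  # rebuild the bitarray from its 0/1 text
        print label, restored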
|
Python File IOError When Trying to Create File In "w" Mode
Question: Python version: 2.6.6
I'm trying to use Python to create a file that doesn't exist by using
`open('file/path/file_name', 'w')`. However, I can't hard code the file path
in like that since I need to pass it in as a variable containing a path
specified by the user.
This works: `open('/home/root/my_file.txt', 'w')`
But my code doesn't:
import os
import sys
input = sys.argv[1] # assume it's "/home/root"
path = os.path.join(input, "my_file.txt")
f = open(path, 'w')
Which causes an exception `IOError: [Errno 2] No such file or directory:
'/home/root/my_file.txt'`
I've also tried some other modes other than "w", like "w+" and "a", but none
of them worked.
Could anyone tell me how I can fix this problem? Is it caused by my incorrect
way of using it, or is it because of the version of Python I'm using?
Thanks.
**UPDATE:**
I have found the solution to my problem: my carelessness of forgetting to
create the directory which doesn't exist yet. Here's my final code:
import os, errno
import sys
input = sys.argv[1]
try:
    os.makedirs(input)
except OSError as e:
    if e.errno == errno.EEXIST and os.path.isdir(input):
        pass
    else:
        raise
path = os.path.join(input, "my_file.txt")
with open(path, "w") as f:
    f.write("content")
Code adapted from [this SO answer](http://stackoverflow.com/a/600612/1248894)
Answer: When you use a relative path, like this:
open('file/path/file_name', 'w')
That's equivalent to taking the current working directory, appending
`/file/path/file_name`, and trying to open or create a file at that location.
So, if there is no `file` directory under the current working directory, or
there is a `file` directory but it has no `path` directory underneath it, you
will get this error.
If you want to create the directory if it doesn't exist, then create the file
underneath it, you need to do that explicitly:
os.makedirs('file/path')
open('file/path/file_name', 'w')
* * *
For debugging cases like this, it may be helpful to print out the absolute
path to the file you're trying to open:
print input
print os.path.abspath(input)
This may be, say, `/home/root/file/path/file_name`. If that was what you
expected, then you can check (in your terminal or GUI) whether
`/home/root/file/path` already exists. If it wasn't what you expected, then
your current working directory may not be what you wanted it to be.
|
AttributeError: 'HTTPResponse' object has no attribute 'type'
Question: So, I am trying to build a program that will retrieve the scores of the NHL's
season through the use of yahoo's RSS feed.
I am not an experienced programmer, so some things haven't quite gotten into
my head just yet. However, here is my code so far:
* * *
from urllib.request import urlopen
import xml.etree.cElementTree as ET
YAHOO_NHL_URL = 'http://sports.yahoo.com/nhl/rss'
def retrievalyahoo():
    nhl_site = urlopen('http://sports.yahoo.com/nhl/rss')
    tree = ET.parse(urlopen(nhl_site))

retrievalyahoo()
* * *
The title above states the error I get after I test the aforementioned code.
EDIT: Okay, after the fix, the traceback error comes out as this, which puzzles me:
Traceback (most recent call last):
File "C:/Nathaniel's Folder/Website Scores.py", line 12, in <module>
retrievalyahoo()
File "C:/Nathaniel's Folder/Website Scores.py", line 10, in retrievalyahoo
tree = ET.parse(nhl_site)
File "C:\Python33\lib\xml\etree\ElementTree.py", line 1242, in parse
tree.parse(source, parser)
File "C:\Python33\lib\xml\etree\ElementTree.py", line 1730, in parse
self._root = parser._parse(source)
File "<string>", line None
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 17, column 291
Answer: The problem is that you're trying to call `urlopen` on the result of
`urlopen`.
Just call it once, like this:
nhl_site = urlopen('http://sports.yahoo.com/nhl/rss')
tree = ET.parse(nhl_site)
The error message probably could be nicer. If you look at the docs for
[`urlopen`](http://docs.python.org/3.4/library/urllib.request.html#urllib.request.urlopen):
> Open the URL _url_ , which can be either a string or a `Request` object.
Clearly the `http.client.HTTPResponse` object that it returns is neither a
string nor a `Request` object. What's happened here is that `urlopen` sees
that it's not a string, and therefore assumes it's a `Request`, and starts
trying to access methods and attributes that `Request` objects have. This kind
of design is generally a good thing, because it lets you pass things that act
just like a `Request` and they'll just work… but it does mean that if you pass
something that _doesn't_ act like a `Request`, the error message can be
mystifying.
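Putting the fix back into the original function, it would look roughly like this
(the same two lines as above, just shown in context):
from urllib.request import urlopen
import xml.etree.cElementTree as ET

def retrievalyahoo():
    # Open the feed once and hand the response object straight to ElementTree.
    nhl_site = urlopen('http://sports.yahoo.com/nhl/rss')
    tree = ET.parse(nhl_site)
    return tree

retrievalyahoo()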
|
OperationalError on first query to django sqlite3 database
Question: The beginning of my `settings.py` has the following.
import os, sys
PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
CURRENT_DIR = os.path.dirname(__file__)
TEMPLATE_DIRS = (os.path.join(CURRENT_DIR, 'templates'),)
STATICFILES_DIRS = (os.path.join(CURRENT_DIR, 'static'),)
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
    # ('Your Name', '[email protected]'),
)
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': '/home/ubuntu/myapp/myapp',
    }
}
I get the following in my error log after a call from `views.py` to the
database.
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/models/query.py", line 372, in count
[time] [error] return self.query.get_count(using=self.db)
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/query.py", line 423, in get_count
[time] [error] number = obj.get_aggregation(using=using)[None]
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/query.py", line 389, in get_aggregation
[time] [error] result = query.get_compiler(using).execute_sql(SINGLE)
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/models/sql/compiler.py", line 839, in execute_sql
[time] [error] cursor = self.connection.cursor()
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/backends/__init__.py", line 324, in cursor
[time] [error] cursor = self.make_debug_cursor(self._cursor())
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/backends/sqlite3/base.py", line 306, in _cursor
[time] [error] self._sqlite_create_connection()
[time] [error] File "/usr/local/lib/python2.7/dist-packages/django/db/backends/sqlite3/base.py", line 296, in _sqlite_create_connection
[time] [error] self.connection = Database.connect(**kwargs)
[time] [error] OperationalError: unable to open database file
How do I go about correctly referring to my database file in `settings.py`?
Answer: `'NAME': '/home/ubuntu/myapp/myapp'` looks to me like you're referring to a
folder instead of a file. For SQLite, the database name must be a full path,
including the name of a file, so, as you already got your project's folder
programmatically, you could use that to construct the file path:
`'NAME': os.path.join(PROJECT_PATH, 'mydb.db')`
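In context, the `DATABASES` entry would then look something like this (same idea as
the line above; the file name `mydb.db` is only an example):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(PROJECT_PATH, 'mydb.db'),
    }
}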
|
The flow of a Recursive Call
Question: I looked for answers for this question but most were given in programming
languages other than Python.
Now in this code:
def f(n):
    if n == 0:
        return 1
    else:
        m = f(n-1)
        s = n * m
        return s
I know that if I use n = 3, for instance, the function will use the second
branch to calculate "3-1 = 2", then will do the same with "2-1 = 1", eventually
getting to 0, and then the result will be returned.
Now what would happen in the following case:
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n-1) + fibonacci(n-2)
Let's say we execute it with n = 5. The third branch will then be used to
return fib(4) + fib(3). But what then? Which number would the program use as
the new value of n, 4 or 3? Thank you.
Answer: The recursion level that is given `5` as a parameter will call itself _twice,_
once with `4` and once with `3`, and the results of those two calls will be
added together.
Similarly, calling it with `4` will result in two calls, one with `3` and one
with `2`. And so on down through the levels. So the recursion "tree" would be:
_____5_____
/ \
__4__ 3
/ \ / \
3 2 2 1
/ \ / \ / \
2 1 1 0 1 0
/ \
1 0
Your more "classical" recursion like factorial only calls itself once per
level but that's by no means necessary for something to be recursive:
5
\
4
\
3
\
2
\
1
If you want to see what's going on, change it to something like this:
def fibonacci(x, n):
    for i in range(x):
        print " ",
    print "Level %d called with %d" % (x, n)
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fibonacci(x+1, n-1) + fibonacci(x+1, n-2)

print fibonacci(0, 5)
The output generated:
Level 0 called with 5
Level 1 called with 4
Level 2 called with 3
Level 3 called with 2
Level 4 called with 1
Level 4 called with 0
Level 3 called with 1
Level 2 called with 2
Level 3 called with 1
Level 3 called with 0
Level 1 called with 3
Level 2 called with 2
Level 3 called with 1
Level 3 called with 0
Level 2 called with 1
5
You'll see I've also removed the totally unnecessary else-after-return
paradigm that some people use. It's rarely needed and can make code less
readable.
* * *
Having explained that, you should also make yourself aware of the situations
in which recursion can be abused. Although the Fibonacci sequence can be coded
elegantly as a recursive solution, it's not overly efficient since it
recalculates a lot of different values in each sub-branch (such as `fib(2)`
being calculated three times in the given example, and a lot more times if you
call it with a larger argument than `5`).
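As a quick illustration of that point (an addition of mine, not part of the original
answer), caching already-computed values removes the repeated work:
_cache = {0: 0, 1: 1}

def fib_memo(n):
    # Each fib(k) is computed once and then reused from the cache.
    if n not in _cache:
        _cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return _cache[n]

print fib_memo(5)   # 5, without recomputing fib(2) three times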
Even factorial is not that good for recursion since it reduces the "search
space" quite slowly: `fact(20)` will in fact go about twenty-deep on the
stack, and the stack is often a limited resource.
The best recursive algorithms are those that reduce the search space quickly,
such as a binary search, which halves it on every recursion level.
Knowing when to use recursion is usually just as important as knowing how to.
You can get away with iterative solutions for both factorial and Fibonacci, as
with:
def fact(n):  # n = 1..whatever
    result = n
    for i in range(2, n):
        result = result * i
    return result

def fib(n):  # n = 1..whatever
    me = 1
    if n > 1:
        grandparent = 1
        parent = 1
        for i in range(2, n):
            me = parent + grandparent
            grandparent = parent
            parent = me
    return me
Neither of these will risk exhausting the stack for large values of `n`.
|
How to get a phone's azimuth with compass readings and gyroscope readings?
Question: I wish to get my phone's current orientation by the following method:
1. Get the initial orientation (azimuth) first via the `getRotationMatrix()` and `getOrientation()`.
2. Add the integration of gyroscope reading over time to it to get the current orientation.
**Phone Orientation:**
The phone's x-y plane is fixed parallel with the ground plane. i.e., is in a
"texting-while-walking" orientation.
**"`getOrientation()`" Returnings:**
Android API allows me to easily get the orientation, i.e., azimuth, pitch,
roll, from `getOrientation()`.
Please **note** that this method **always** returns its value within the
range: **`[0, -PI]`** and **`[0, PI]`**.
**My Problem:**
Since the integration of the gyroscope reading, denoted by `dR`, may be quite
big, so when I do `CurrentOrientation += dR`, the `CurrentOrientation` may
exceed the **`[0, -PI]`** and **`[0, PI]`** ranges.
**What manipulations are needed so that I can ALWAYS get the current
orientation within the `[0, -PI]` and `[0, PI]` ranges?**
I have tried the following in Python, but I highly doubt its correctness.
rotation = scipy.integrate.trapz(gyroSeries, timeSeries) # integration
if (headingDirection - rotation) < -np.pi:
    headingDirection += 2 * np.pi
elif (headingDirection - rotation) > np.pi:
    headingDirection -= 2 * np.pi
# Complementary Filter
headingDirection = ALPHA * (headingDirection - rotation) + (1 - ALPHA) * np.mean(azimuth[np.array(stepNo.tolist()) == i])
if headingDirection < -np.pi:
    headingDirection += 2 * np.pi
elif headingDirection > np.pi:
    headingDirection -= 2 * np.pi
* * *
**Remarks**
This is NOT that simple, because it involves the following trouble-makers:
1. The orientation sensor reading goes from `0` to `-PI`, and then **DIRECTLY JUMPS** to `+PI` and gradually gets back to `0` via `+PI/2`.
2. The integration of the gyroscope reading also leads to some trouble. Should I add `dR` to the orientation or subtract `dR`?
**Do please refer to the Android Documentations first, before giving a
confirmed answer.**
Estimated answers will not help.
Answer: > The orientation sensor actually derives its readings from the real
> magnetometer and the accelerometer.
I guess maybe this is the source of the confusion. Where is this stated in the
documentation? More importantly, does the documentation somewhere explicitly
state that the gyro readings are ignored? As far as I know the method
described in this video is implemented:
[Sensor Fusion on Android Devices: A Revolution in Motion
Processing](http://www.youtube.com/watch?v=C7JQ7Rpwn2k)
This method uses the gyros and integrates their readings. This pretty much
renders the rest of the question moot; nevertheless I will try to answer it.
* * *
**The orientation sensor is already integrating the gyro readings for you** ,
that is how you get the orientation. I don't understand why you are doing it
yourself.
**You are not doing the integration of the gyro readings properly** , it is
more complicated than `CurrentOrientation += dR` (which is incorrect). If you
need to integrate the gyro readings (I don't see why, the SensorManager is
already doing it for you) please read [Direction Cosine Matrix IMU:
Theory](http://gentlenav.googlecode.com/files/DCMDraft2.pdf) how to do it
properly (Equation 17).
**Don't try integrating with Euler angles** (aka azimuth, pitch, roll),
nothing good will come out.
**Please use either quaternions or rotation matrices in your computations**
instead of Euler angles. If you work with rotation matrices, you can always
convert them to Euler angles, see
[Computing Euler angles from a rotation matrix by Gregory G.
Slabaugh](https://truesculpt.googlecode.com/hg-
history/38000e9dfece971460473d5788c235fbbe82f31b/Doc/rotation_matrix_to_euler.pdf)
(The same is true for quaternions.) There are (in the non-degenerate case) two
ways to represent a rotation, that is, **you will get two Euler angles. Pick
the one that is in the range you need.** (In case of [gimbal
lock](http://en.wikipedia.org/wiki/Gimbal_lock), there are infinitely many
Euler angles, see the PDF above). Just promise you won't start using Euler
angles again in your computations after the rotation matrix to Euler angles
conversion.
**It is unclear what you are doing with the complementary filter.** You can
implement a pretty damn good sensor fusion based on the [Direction Cosine
Matrix IMU: Theory](http://gentlenav.googlecode.com/files/DCMDraft2.pdf)
manuscript, which is basically a tutorial. It's not trivial to do it but I
don't think you will find a better, more understandable tutorial than this
manuscript.
One thing that I had to discover myself when I implemented sensor fusion based
on this manuscript was that the so-called [integral
windup](http://en.wikipedia.org/wiki/Integral_windup) can occur. I took care
of it by bounding the `TotalCorrection` (page 27). You will understand what I
am talking about if you implement this sensor fusion.
* * *
* * *
**UPDATE:** Here I answer your questions that you posted in comments after
accepting the answer.
> I think the compass gives me my current orientation by using gravity and
> magnetic field, right? Is gyroscope used in the compass?
Yes, if the phone is more or less stationary for at least half a second, you
can get a good orientation estimate by using gravity and the compass only.
Here is how to do it: [Can anyone tell me whether gravity sensor is as a tilt
sensor to improve heading
accuracy?](http://stackoverflow.com/q/8027305/341970)
No, the gyroscopes are not used in the compass.
> Could you please kindly explain why the integration done by me is wrong? I
> understand that if my phone's pitch points up, euler angle fails. But any
> other things wrong with my integration?
There are two unrelated things: (i) the integration should be done
differently, (ii) Euler angles are trouble because of the Gimbal lock. I
repeat, these two are unrelated.
As for the integration: here is a simple example how you can actually _see_
what is wrong with your integration. Let x and y be the axes of the horizontal
plane in the room. Get a phone in your hands. Rotate the phone around the x
axis (of the room) by 45 degrees, then around the y axis (of the room) by 45
degrees. Then, repeat these steps from the beginning but now rotate around the
y axis first, and then around the x axis. The phone ends up in a totally
different orientation. If you do the integration according to
`CurrentOrientation += dR` you will see no difference! Please read the above
linked Direction Cosine Matrix IMU: Theory manuscript if you want to do the
integration properly.
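A small numerical illustration of that point (my own addition, using plain rotation
matrices rather than anything from the manuscript):
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

a = np.radians(45)
# Rotating about x then y is not the same as y then x...
print(np.allclose(rot_x(a).dot(rot_y(a)), rot_y(a).dot(rot_x(a))))  # False
# ...whereas simply summing the two angle increments cannot tell the orders apart.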
As for the Euler angles: they screw up the stability of the application and it
is enough for me not to use them for arbitrary rotations in 3D.
I still don't understand why you are trying to do it yourself, why you don't
want to use the orientation estimate provided by the platform. Chances are,
you cannot do better than that.
|
Adding counter to my Python Web Crawler
Question: I've made a web crawler which gives a link and the text from the link for
all sites at a given address. It looks like this:
import urllib
from bs4 import BeautifulSoup
import urlparse
import mechanize
url = ["http://adbnews.com/area51"]
for u in url:
    br = mechanize.Browser()
    urls = [u]
    visited = [u]
    i = 0
    while i < len(urls):
        try:
            br.open(urls[0])
            urls.pop(0)
            for link in br.links():
                levelLinks = []
                linkText = []
                newurl = urlparse.urljoin(link.base_url, link.url)
                b1 = urlparse.urlparse(newurl).hostname
                b2 = urlparse.urlparse(newurl).path
                newurl = "http://" + b1 + b2
                linkTxt = link.text
                linkText.append(linkTxt)
                levelLinks.append(newurl)
                if newurl not in visited and urlparse.urlparse(u).hostname in newurl:
                    urls.append(newurl)
                    visited.append(newurl)
                    #print newurl
                #get Mechanize Links
                for l, lt in zip(levelLinks, linkText):
                    print newurl, "\n", lt, "\n"
        except:
            urls.pop(0)
it gets results like that:
http://www.adbnews.com/area51/contact.html
CONTACT
http://www.adbnews.com/area51/about.html
ABOUT
http://www.adbnews.com/area51/index.html
INDEX
http://www.adbnews.com/area51/1st/
FIRST LEVEL!
http://www.adbnews.com/area51/1st/bling.html
BLING
http://www.adbnews.com/area51/1st/index.html
INDEX
http://adbnews.com/area51/2nd/
2ND LEVEL
And I wanna add a counter of some kind which could limit how deep the crawler
goes..
I've tried adding, for example, `steps = 3` and changing `while i<len(urls)` to
`while i<steps:`,
but that would only go to the first level even though the number says 3...
Any advice is welcome
Answer: If you want to search a certain "depth", consider using a recursive function
instead of just appending a list of URL's.
def crawl(url, depth):
    if depth <= 3:
        #Scan page, grab links, title
        for link in links:
            print crawl(link, depth + 1)
        return url + "\n" + title
This allows for easier control of your recursive searching, as well as being
faster and less resource heavy :)
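A slightly fuller sketch of the same idea, adapted to the question's mechanize-based
setup (the function and variable names here are illustrative assumptions, not tested
code from the answer):
import urlparse
import mechanize

def crawl(url, base_host, visited, depth, max_depth=3):
    # Stop descending once the requested depth is reached.
    if depth > max_depth or url in visited:
        return
    visited.add(url)
    br = mechanize.Browser()
    try:
        br.open(url)
    except Exception:
        return
    for link in br.links():
        newurl = urlparse.urljoin(link.base_url, link.url)
        if urlparse.urlparse(newurl).hostname == base_host:
            print newurl, "\n", link.text, "\n"
            crawl(newurl, base_host, visited, depth + 1, max_depth)

start = "http://adbnews.com/area51"
crawl(start, urlparse.urlparse(start).hostname, set(), 1)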
|
finding _collections in python source
Question: I wanted to have a look at the python deque class. When I checked [the source
code](http://hg.python.org/cpython/file/791034a0ae1e/Lib/collections/__init__.py)
, I found the following at line 10
from _collections import deque, defaultdict
where exactly can I find this _collections module? I searched on my copy of
the python source, but couldn't spot it.
Where is this class located?
Answer: `_collections` is a builtin extension module.
You can find the source for the `_collections` module
[here](http://hg.python.org/cpython/file/tip/Modules/_collectionsmodule.c).
[Setup.dist](http://hg.python.org/cpython/file/tip/Modules/Setup.dist)
contains the mapping from builtin extension module names to source files.
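You can confirm this from the interpreter, since builtin modules are listed in
`sys.builtin_module_names` (a quick check, not from the original answer; on a
typical CPython build this prints True, and False would mean it was built as a
shared extension instead):
import sys
print('_collections' in sys.builtin_module_names)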
|
Read from text file and assign data to new variable
Question: A Python 3 program allows people to choose from a list of employee names.
Data held in the text file looks like this: **('larry', 3, 100)** (being the
person's name, weeks worked and payment).
I need a way to assign each part of the text file to a new variable, so that
the user can enter a new amount of weeks and the program calculates the new
payment.
Below is my code and attempt at figuring it out.
import os
choices = [f for f in os.listdir(os.curdir) if f.endswith(".txt")]
print (choices)
emp_choice = input("choose an employee:")
file = open(emp_choice + ".txt")
data = file.readlines()
name = data[0]
weeks_worked = data[1]
weekly_payment= data[2]
new_weeks = int(input ("Enter new number of weeks"))
new_payment = new_weeks * weekly_payment
print (name + "will now be paid" + str(new_payment))
Answer: Currently you are assigning the first three lines from the file to `name`,
`weeks_worked` and `weekly_payment`. But what you want (I think) is to
parse a single line, formatted as `('larry', 3, 100)` (does each file have
only one line?).
so you probably want code like:
from re import compile
# your code to choose file
line_format = compile(r"\s*\(\s*'([^']*)'\s*,\s*(\d+)\s*,\s*(\d+)\s*\)")
file = open(emp_choice + ".txt")
line = file.readline() # read the first line only
match = line_format.match(line)
if match:
    name, weeks_worked, weekly_payment = match.groups()
else:
    raise Exception('Could not match %s' % line)
# your code to update information
the regular expression looks complicated, but is really quite simple:
\(...\) matches the parentheses in the line
\s* matches optional spaces (it's not clear to me if you have spaces or not
in various places between words, so this matches just in case)
\d+ matches a number (1 or more digits)
[^']* matches anything except a quote (so matches the name)
(...) (without the \ backslashes) indicates a group that you want to read
afterwards by calling .groups()
and these are built from simpler parts (like `*` and `+` and `\d`) which are
described at <http://docs.python.org/2/library/re.html>
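One extra detail (my addition, not in the original answer): `match.groups()` gives
you strings, so convert the numeric fields before doing arithmetic on them, e.g.:
weeks_worked = int(weeks_worked)
weekly_payment = int(weekly_payment)
new_weeks = int(input("Enter new number of weeks"))
print(name + " will now be paid " + str(new_weeks * weekly_payment))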
if you want to repeat this for many lines, you probably want something like:
name, weeks_worked, weekly_payment = [], [], []
for line in file.readlines():
    match = line_format.match(line)
    if match:
        name.append(match.group(1))
        weeks_worked.append(match.group(2))
        weekly_payment.append(match.group(3))
    else:
        raise ...
|
Import Module CSRF issue: Django
Question: Short notes - I'm trying to call `from django.middleware import csrf` at
the Python shell. It throws the message shown below.
See shell code:
C:\>python
Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> from django import middleware
>>> from django.middleware import csrf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\django\middleware\csrf.py", line 16, in <module>
    from django.utils.cache import patch_vary_headers
  File "C:\Python27\lib\site-packages\django\utils\cache.py", line 26, in <module>
    from django.core.cache import get_cache
  File "C:\Python27\lib\site-packages\django\core\cache\__init__.py", line 70, in <module>
    if DEFAULT_CACHE_ALIAS not in settings.CACHES:
  File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 53, in __getattr__
    self._setup(name)
  File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 46, in _setup
    % (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting CACHES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
>>>
Has anybody come across this, and how can I sort it out?
Answer: You can't access and run django projects from a plain Python shell. Django doesn't
know what project you want to work on.
You have to do one of these things:
1. python manage.py shell
2. Set the DJANGO_SETTINGS_MODULE environment variable in your OS to mysite.settings (see the sketch after this list)
3. Use setup_environ in the python interpreter:
   from django.core.management import setup_environ
   from mysite import settings
   setup_environ(settings)
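For option 2, the variable can also be set from inside the interpreter before the
import (a small sketch; it assumes your settings module really is `mysite.settings`):
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

from django.middleware import csrf  # now imports without the ImproperlyConfigured error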
|
Using hebrew on python
Question: I have a problem printing Hebrew words. I am using the Counter module in
order to count the number of words in my given text (which is in Hebrew). The
counter indeed counts the words, and identifies the language, because I am using
`# -*- coding: utf-8 -*-`.
The problem is, when I print my counter, I get weird symbols. (I am using
Eclipse.) Here is the code and the printed output:
# -*- coding: utf-8 -*-
import string
from collections import Counter
class classifier:

    def __init__(self, filename):
        self.myFile = open(filename)
        self.cnt = Counter()

    def generateList(self):
        exclude = set(string.punctuation)
        for lines in self.myFile:
            for word in lines.split():
                if word not in exclude:
                    nWord = ""
                    for letter in word:
                        if letter in exclude:
                            letter = ""
                            nWord += letter
                        else:
                            nWord += letter
                    self.cnt[nWord] += 1
        print self.cnt
Printings:
Counter({'\xd7\x97\xd7\x94': 465, '\xd7\x96\xd7\x95': 432, '\xd7\xa1\xd7\x92\xd7\x95\xd7\xa8': 421, '\xd7\x94\xd7\x92\xd7\x91': 413})
Any idea on how to print the words in the right way?
Answer: The "weird symbols" you are getting are Python's way of displaying the raw
UTF-8 bytes of the strings.
You need to decode them, for example:
>>> print '\xd7\x97\xd7\x94'.decode('UTF8')
חה
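If you want the whole counter printed readably, decoding each key before printing
works too, e.g. at the end of `generateList` (a sketch, assuming the input file is
UTF-8 encoded):
for word, count in self.cnt.most_common():
    print word.decode('utf8'), count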
|
Python: Too many values to unpack (dictionary)
Question: I'm trying to add key-value pairs to a dictionary by pairing consecutive
lines (two at a time) from a text file. Why does this not work?
newdata = {}
os.chdir("//GOLLUM//tbg2//tbg2//forritGB")
f = open(filename)
for line1, line2 in f.readlines():
    newdata[line1] = line2
edit: The error I get is
ValueError: too many values to unpack
Answer: You are reading all the lines and then trying to unpack each line (a string,
i.e. a sequence of characters) into two variables. That only works if a line consists
of exactly 2 characters. Use the file as an iterator instead:
newdata = {}
os.chdir("//GOLLUM//tbg2//tbg2//forritGB")
with open(filename) as f:
    for line1 in f:
        newdata[line1.strip()] = next(f, '').strip()
Here `next()` reads the next line from the file.
The alternative would be to use a pair-wise recipe:
from itertools import izip_longest

def pairwise(iterable):
    return izip_longest(*([iter(iterable)] * 2), fillvalue='')

newdata = {}
os.chdir("//GOLLUM//tbg2//tbg2//forritGB")
with open(filename) as f:
    for line1, line2 in pairwise(f):
        newdata[line1.strip()] = line2.strip()
Note the `str.strip()` calls, to remove any extra whitespace (including the
newline at the end of each line).
|
Python argparser. List of dict in INI
Question: I have the following list in my Python module:
couples = [("somekey1", "somevalue1"), ("somekey2", "somevalue2"), ("somekey3", "somevalue3"),....]
I am storing configurations for my app in "configs.ini" and I use
**configparser** to read it. I checked the documentation for configparser and
didn't find how I can read my list from the file.
**UPD:** Does anybody know how I can **read** the following list from my configs,
or maybe there is another way to store it in a file?
**UPD2:** It is a list of logins and passwords. Maybe that helps.
Answer: I'm not sure I understand this correctly but if you want to create a config
file to easily read a list like you've shown then create a section in your
configs.ini
[section]
key = value
key2 = value2
key3 = value3
and then
>> config = ConfigParser.RawConfigParser()
>> config.read('configs.ini')
>> items = config.items('section')
>> items
[('key', 'value'), ('key2', 'value2'), ('key3', 'value3')]
which is basically what you say you need.
If on the other hand what you are saying is that your config file contains:
[section]
couples = [("somekey1", "somevalue1"), ("somekey2", "somevalue2"), ("somekey3", "somevalue3")]
what you could do is extend the config parser like for example so:
class MyConfigParser(ConfigParser.RawConfigParser):
    def get_list_of_tups(self, section, option):
        value = self.get(section, option)
        import re
        couples = re.finditer('\("([a-z0-9]*)", "([a-z0-9]*)"\)', value)
        return [(c.group(1), c.group(2)) for c in couples]
and then your new parser can get fetch your list for you:
>> my_config = MyConfigParser()
>> my_config.read('example.cfg')
>> couples = my_config.get_list_of_tups('section', 'couples')
>> couples
[('somekey1', 'somevalue1'), ('somekey2', 'somevalue2'), ('somekey3', 'somevalue3')]
The second situation is just making things hard for yourself I think.
|
Fast relational Database for simple use with Python
Question: For my link-scraping program, written in Python 3.3, I want to use a
database where I want to store the following:
* around 100,000 websites: just the URL, a timestamp, and for each website a list with several properties
I don't have any knowledge of databases so far. By googling I found that the
following databases may fit my purposes: Postgresql, SQLite or Firebird.
I'm very interested in the speed of accessing the database and getting the wanted
information. For example: for website x, does property y exist, and if yes, read
it. The speed of writing is of course also important.
My question: Are there big differences in speed between the different databases,
or does it not really matter for my small program? Maybe someone can tell me which
database fits my requirements and is easy to handle with Python.
thx alot
Answer: If speed is the main criterion, then I would suggest going with an in-memory
database. Take a look at <http://docs.python.org/2/library/sqlite3.html>.
It can be used as a normal database too; for the in-memory mode, use the snippet
below, and the db gets created in RAM itself, giving much faster run-time access.
import sqlite3
conn = sqlite3.connect(':memory:')
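For the kind of lookup described in the question ("for website x does property y
exist and if yes read it"), a tiny sketch might look like this (the table and column
names are my own illustration, not something prescribed by the answer):
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE sites (url TEXT PRIMARY KEY, ts TEXT)")
conn.execute("CREATE TABLE properties (url TEXT, name TEXT, value TEXT)")
conn.execute("INSERT INTO sites VALUES (?, ?)", ("http://example.com", "2013-08-05"))
conn.execute("INSERT INTO properties VALUES (?, ?, ?)",
             ("http://example.com", "title", "Example"))
row = conn.execute("SELECT value FROM properties WHERE url=? AND name=?",
                   ("http://example.com", "title")).fetchone()
print(row[0] if row else "property not set")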
|
Python having trouble accessing usb microphone using Gstreamer to perform speech recognition with Pocketsphinx on a Raspberry Pi
Question: So Python is acting like it can't hear ANYTHING from my microphone
at all.
Here's the problem. I have a **Python** (2.7) script that is supposed to be
using **Gstreamer** to access my microphone and do speech recognition for me
via **Pocketsphinx**. I'm using **Pulse Audio** and my device is a **Raspberry
Pi**. My microphone is a **Playstation 3 Eye**.
Now off the bat, I have already gotten pocketsphinx_continuous to run
correctly and recognize the words I have defined in my .dict and .lm files.
The accuracy is around 85-90% after a couple of trial runs I've had. So
off the bat I know my microphone is picking up sound normally via pocketsphinx
+ pulse audio.
FYI I ran the following:
pocketsphinx_continuous -lm /home/pi/dev/scarlettPi/config/speech/lm/scarlett.lm -dict /home/pi/dev/scarlettPi/config/speech/dict/scarlett.dic -hmm /home/pi/dev/scarlettPi/config/speech/model/hmm/en_US/hub4wsj_sc_8k -silprob 0.1 -wip 1e-4 -bestpath 0
In my Python code I'm attempting to do the same thing, but I'm using gstreamer
to access the microphone in Python. (Note: I'm a bit new to Python.)
Here is my code ( Thanks Josip Lisec for getting me this far ):
import pi
from pi.becore import ScarlettConfig
from recorder import Recorder
from brain import Brain
import os
import json
import tempfile
#import sys
import pygtk
pygtk.require('2.0')
import gtk
import gobject
import pygst
pygst.require('0.10')
gobject.threads_init()
import gst
scarlett_config=ScarlettConfig()
class Listener:
    def __init__(self, gobject, gst):
        self.failed = 0
        self.pipeline = gst.parse_launch(' ! '.join(['pulsesrc',
                                                     'audioconvert',
                                                     'audioresample',
                                                     'vader name=vader auto-threshold=true',
                                                     'pocketsphinx lm=' + scarlett_config.get('LM') + ' dict=' + scarlett_config.get('DICT') + ' hmm=' + scarlett_config.get('HMM') + ' name=listener',
                                                     'fakesink']))
        listener = self.pipeline.get_by_name('listener')
        listener.connect('result', self.__result__)
        listener.set_property('configured', True)
        print "KEYWORDS WE'RE LOOKING FOR: " + scarlett_config.get('ourkeywords')
        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect('message::application', self.__application_message__)
        self.pipeline.set_state(gst.STATE_PLAYING)

    def result(self, hyp, uttid):
        if hyp in scarlett_config.get('ourkeywords'):
            self.failed = 0
            self.listen()
        else:
            self.failed += 1
            if self.failed > 4:
                pi.speak("" + scarlett_config.get('scarlett_owner') + ", if you need me, just say my name.")
                self.failed = 0

    def listen(self):
        self.pipeline.set_state(gst.STATE_PAUSED)
        pi.play('pi-listening')
        Recorder(self)

    def cancel_listening(self):
        pi.play('pi-cancel')
        self.pipeline.set_state(gst.STATE_PLAYING)

    # question - sound recording
    def answer(self, question):
        pi.play('pi-cancel')
        print " * Contacting Google"
        destf = tempfile.mktemp(suffix='piresult')
        os.system('wget --post-file %s --user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.77 Safari/535.7" --header="Content-Type: audio/x-flac; rate=16000" -O %s -q "https://www.google.com/speech-api/v1/recognize?client=chromium&lang=en-US"' % (question, destf))
        #os.system("speech2text %s > %s" % (question, destf))
        b = open(destf)
        result = b.read()
        b.close()
        os.unlink(question)
        os.unlink(destf)
        if len(result) == 0:
            print " * nop"
            pi.play('pi-cancel')
        else:
            brain = Brain(json.loads(result))
            if brain.think() == False:
                print " * nop2"
                pi.play('pi-cancel')
        self.pipeline.set_state(gst.STATE_PLAYING)

    def __result__(self, listener, text, uttid):
        struct = gst.Structure('result')
        struct.set_value('hyp', text)
        struct.set_value('uttid', uttid)
        listener.post_message(gst.message_new_application(listener, struct))

    def __application_message__(self, bus, msg):
        msgtype = msg.structure.get_name()
        if msgtype == 'result':
            self.result(msg.structure['hyp'], msg.structure['uttid'])
The application is supposed to match on the keyword "Scarlett" and then perform an
action after that.
When I run my application, I get the following output:
pi@scarlettpi ~/dev/scarlettPi/scripts/pi/bin $ ./pi
/usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
warnings.warn(str(e), _gtk.Warning)
INFO: cmd_ln.c(691): Parsing command line:
gst-pocketsphinx \
-samprate 8000 \
-cmn prior \
-fwdflat no \
-bestpath no \
-maxhmmpf 2000 \
-maxwpf 20
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ascale 20.0 2.000000e+01
-aw 1 1
-backtrace no no
-beam 1e-48 1.000000e-48
-bestpath no no
-bestpathlw 9.5 9.500000e+00
-bghist no no
-ceplen 13 13
-cmn current prior
-cmninit 8.0 8.0
-compallsen no no
-debug 0
-dict
-dictcase no no
-dither no no
-doublebw no no
-ds 1 1
-fdict
-feat 1s_c_d_dd 1s_c_d_dd
-featparams
-fillprob 1e-8 1.000000e-08
-frate 100 100
-fsg
-fsgusealtpron yes yes
-fsgusefiller yes yes
-fwdflat yes no
-fwdflatbeam 1e-64 1.000000e-64
-fwdflatefwid 4 4
-fwdflatlw 8.5 8.500000e+00
-fwdflatsfwin 25 25
-fwdflatwbeam 7e-29 7.000000e-29
-fwdtree yes yes
-hmm
-input_endian little little
-jsgf
-kdmaxbbi -1 -1
-kdmaxdepth 0 0
-kdtree
-latsize 5000 5000
-lda
-ldadim 0 0
-lextreedump 0 0
-lifter 0 0
-lm
-lmctl
-lmname default default
-logbase 1.0001 1.000100e+00
-logfn
-logspec no no
-lowerf 133.33334 1.333333e+02
-lpbeam 1e-40 1.000000e-40
-lponlybeam 7e-29 7.000000e-29
-lw 6.5 6.500000e+00
-maxhmmpf -1 2000
-maxnewoov 20 20
-maxwpf -1 20
-mdef
-mean
-mfclogdir
-min_endfr 0 0
-mixw
-mixwfloor 0.0000001 1.000000e-07
-mllr
-mmap yes yes
-ncep 13 13
-nfft 512 512
-nfilt 40 40
-nwpen 1.0 1.000000e+00
-pbeam 1e-48 1.000000e-48
-pip 1.0 1.000000e+00
-pl_beam 1e-10 1.000000e-10
-pl_pbeam 1e-5 1.000000e-05
-pl_window 0 0
-rawlogdir
-remove_dc no no
-round_filters yes yes
-samprate 16000 8.000000e+03
-seed -1 -1
-sendump
-senlogdir
-senmgau
-silprob 0.1 1.000000e-01
-smoothspec no no
-svspec
-tmat
-tmatfloor 0.0001 1.000000e-04
-topn 4 4
-topn_beam 0 0
-toprule
-transform legacy legacy
-unit_area yes yes
-upperf 6855.4976 6.855498e+03
-usewdphones no no
-uw 1.0 1.000000e+00
-var
-varfloor 0.0001 1.000000e-04
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wbeam 7e-29 7.000000e-29
-wip 1e-4 1.000000e-04
-wlen 0.025625 2.562500e-02
INFO: cmd_ln.c(691): Parsing command line:
\
-nfilt 20 \
-lowerf 1 \
-upperf 4000 \
-wlen 0.025 \
-transform dct \
-round_filters no \
-remove_dc yes \
-svspec 0-12/13-25/26-38 \
-feat 1s_c_d_dd \
-agc none \
-cmn current \
-cmninit 56,-3,1 \
-varnorm no
Current configuration:
[NAME] [DEFLT] [VALUE]
-agc none none
-agcthresh 2.0 2.000000e+00
-alpha 0.97 9.700000e-01
-ceplen 13 13
-cmn current current
-cmninit 8.0 56,-3,1
-dither no no
-doublebw no no
-feat 1s_c_d_dd 1s_c_d_dd
-frate 100 100
-input_endian little little
-lda
-ldadim 0 0
-lifter 0 0
-logspec no no
-lowerf 133.33334 1.000000e+00
-ncep 13 13
-nfft 512 512
-nfilt 40 20
-remove_dc no yes
-round_filters yes no
-samprate 16000 8.000000e+03
-seed -1 -1
-smoothspec no no
-svspec 0-12/13-25/26-38
-transform legacy dct
-unit_area yes yes
-upperf 6855.4976 4.000000e+03
-varnorm no no
-verbose no no
-warp_params
-warp_type inverse_linear inverse_linear
-wlen 0.025625 2.500000e-02
INFO: acmod.c(246): Parsed model-specific feature parameters from /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/feat.params
INFO: feat.c(713): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
INFO: cmn.c(142): mean[0]= 12.00, mean[1..12]= 0.0
INFO: acmod.c(167): Using subvector specification 0-12/13-25/26-38
INFO: mdef.c(517): Reading model definition: /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/mdef
INFO: mdef.c(528): Found byte-order mark BMDF, assuming this is a binary mdef file
INFO: bin_mdef.c(336): Reading binary model definition: /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/mdef
INFO: bin_mdef.c(513): 50 CI-phone, 143047 CD-phone, 3 emitstate/phone, 150 CI-sen, 5150 Sen, 27135 Sen-Seq
INFO: tmat.c(205): Reading HMM transition probability matrices: /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/transition_matrices
INFO: acmod.c(121): Attempting to use SCHMM computation module
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/means
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(198): Reading mixture gaussian parameter: /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/variances
INFO: ms_gauden.c(292): 1 codebook, 3 feature, size:
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(294): 256x13
INFO: ms_gauden.c(354): 0 variance values floored
INFO: s2_semi_mgau.c(903): Loading senones from dump file /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/sendump
INFO: s2_semi_mgau.c(927): BEGIN FILE FORMAT DESCRIPTION
INFO: s2_semi_mgau.c(1022): Using memory-mapped I/O for senones
INFO: s2_semi_mgau.c(1296): Maximum top-N: 4 Top-N beams: 0 0 0
INFO: dict.c(317): Allocating 4120 * 20 bytes (80 KiB) for word entries
INFO: dict.c(332): Reading main dictionary: /home/pi/dev/scarlettPi/config/speech/dict/scarlett.dic
INFO: dict.c(211): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(335): 13 words read
INFO: dict.c(341): Reading filler dictionary: /usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k/noisedict
INFO: dict.c(211): Allocated 0 KiB for strings, 0 KiB for phones
INFO: dict.c(344): 11 words read
INFO: dict2pid.c(396): Building PID tables for dictionary
INFO: dict2pid.c(404): Allocating 50^3 * 2 bytes (244 KiB) for word-initial triphones
INFO: dict2pid.c(131): Allocated 30200 bytes (29 KiB) for word-final triphones
INFO: dict2pid.c(195): Allocated 30200 bytes (29 KiB) for single-phone word triphones
INFO: ngram_model_arpa.c(477): ngrams 1=12, 2=18, 3=17
INFO: ngram_model_arpa.c(135): Reading unigrams
INFO: ngram_model_arpa.c(516): 12 = #unigrams created
INFO: ngram_model_arpa.c(195): Reading bigrams
INFO: ngram_model_arpa.c(533): 18 = #bigrams created
INFO: ngram_model_arpa.c(534): 3 = #prob2 entries
INFO: ngram_model_arpa.c(542): 3 = #bo_wt2 entries
INFO: ngram_model_arpa.c(292): Reading trigrams
INFO: ngram_model_arpa.c(555): 17 = #trigrams created
INFO: ngram_model_arpa.c(556): 2 = #prob3 entries
INFO: ngram_search_fwdtree.c(99): 12 unique initial diphones
INFO: ngram_search_fwdtree.c(147): 0 root, 0 non-root channels, 12 single-phone words
INFO: ngram_search_fwdtree.c(186): Creating search tree
INFO: ngram_search_fwdtree.c(191): before: 0 root, 0 non-root channels, 12 single-phone words
INFO: ngram_search_fwdtree.c(326): after: max nonroot chan increased to 152
INFO: ngram_search_fwdtree.c(338): after: 12 root, 24 non-root channels, 11 single-phone words
KEYWORDS WE'RE LOOKING FOR: [ 'scarlett', 'SCARLETT' ]
But it fails to match on anything. I almost think Python cannot hear anything
from the microphone; there aren't even any attempts to recognize anything. In
**pocketsphinx_continuous** it usually prints out a READY state when it's
prepared to start listening... I expect the same in Python?
Here are my python packages:
pi@scarlettpi ~/dev/scarlettPi/scripts/pi/bin $ dpkg -l | grep -i python
ii idle 2.7.3-4 all IDE for Python using Tkinter (default version)
ii idle-python2.7 2.7.3-6 all IDE for Python (v2.7) using Tkinter
rc idle3 3.2.3-6 all IDE for Python using Tkinter (default version)
ii libpyside1.1:armhf 1.1.1-3 armhf Python bindings for Qt 4 (base files)
ii libpython2.6 2.6.8-1.1 armhf Shared Python runtime library (version 2.6)
ii libpython2.7 2.7.3-6 armhf Shared Python runtime library (version 2.7)
ii libshiboken1.1:armhf 1.1.1-1 armhf CPython bindings generator for C++ libraries - shared library
ii python 2.7.3-4 all interactive high-level object-oriented language (default version)
ii python-alsaaudio 0.5+svn36-1 armhf Alsa bindings for Python
ii python-cairo 1.8.8-1 armhf Python bindings for the Cairo vector graphics library
ii python-dbg 2.7.3-4 all debug build of the Python Interpreter (version 2.7)
ii python-dbus 1.1.1-1 armhf simple interprocess messaging system (Python interface)
ii python-dbus-dev 1.1.1-1 all main loop integration development files for python-dbus
ii python-dev 2.7.3-4 all header files and a static library for Python (default)
ii python-gi 3.2.2-2 armhf Python 2.x bindings for gobject-introspection libraries
ii python-gi-dbg 3.2.2-2 armhf Python bindings for the GObject library (debug extension)
ii python-gi-dev 3.2.2-2 all development headers for GObject Python bindings
ii python-gobject 3.2.2-2 all Python 2.x bindings for GObject - transitional package
ii python-gobject-2 2.28.6-10 armhf deprecated static Python bindings for the GObject library
ii python-gobject-2-dbg 2.28.6-10 armhf deprecated static Python bindings for the GObject library (debug extension)
ii python-gobject-2-dev 2.28.6-10 all development headers for the static GObject Python bindings
ii python-gobject-dbg 3.2.2-2 all Python 2.x debugging modules for GObject - transitional package
ii python-gobject-dev 3.2.2-2 all Python 2.x development headers for GObject - transitional package
ii python-gst0.10 0.10.22-3 armhf generic media-playing framework (Python bindings)
ii python-gst0.10-dbg 0.10.22-3 armhf generic media-playing framework (Python debug bindings)
ii python-gst0.10-dev 0.10.22-3 armhf generic media-playing framework (Python bindings)
ii python-gst0.10-rtsp 0.10.8-3 armhf GStreamer RTSP server plugin (Python bindings)
ii python-gtk2 2.24.0-3 armhf Python bindings for the GTK+ widget set
ii python-iplib 1.1-3 all Python library to convert amongst many different IPv4 notations
ii python-libxml2 2.8.0+dfsg1-7+nmu1 armhf Python bindings for the GNOME XML library
ii python-minimal 2.7.3-4 all minimal subset of the Python language (default version)
ii python-numpy 1:1.6.2-1.2 armhf Numerical Python adds a fast array facility to the Python language
ii python-pexpect 2.4-1 all Python module for automating interactive applications
ii python-pip 1.1-3 all alternative Python package installer
ii python-pkg-resources 0.6.24-1 all Package Discovery and Resource Access using pkg_resources
ii python-pyalsa 1.0.25-1 armhf Official ALSA Python binding library
ii python-pyside 1.1.1-3 all Python bindings for Qt4 (big metapackage)
ii python-pyside.phonon 1.1.1-3 armhf Qt 4 Phonon module - Python bindings
ii python-pyside.qtcore 1.1.1-3 armhf Qt 4 core module - Python bindings
ii python-pyside.qtdeclarative 1.1.1-3 armhf Qt 4 Declarative module - Python bindings
ii python-pyside.qtgui 1.1.1-3 armhf Qt 4 GUI module - Python bindings
ii python-pyside.qthelp 1.1.1-3 armhf Qt 4 help module - Python bindings
ii python-pyside.qtnetwork 1.1.1-3 armhf Qt 4 network module - Python bindings
ii python-pyside.qtopengl 1.1.1-3 armhf Qt 4 OpenGL module - Python bindings
ii python-pyside.qtscript 1.1.1-3 armhf Qt 4 script module - Python bindings
ii python-pyside.qtsql 1.1.1-3 armhf Qt 4 SQL module - Python bindings
ii python-pyside.qtsvg 1.1.1-3 armhf Qt 4 SVG module - Python bindings
ii python-pyside.qttest 1.1.1-3 armhf Qt 4 test module - Python bindings
ii python-pyside.qtuitools 1.1.1-3 armhf Qt 4 UI tools module - Python bindings
ii python-pyside.qtwebkit 1.1.1-3 armhf Qt 4 WebKit module - Python bindings
ii python-pyside.qtxml 1.1.1-3 armhf Qt 4 XML module - Python bindings
ii python-rpi.gpio 0.5.3a-1 armhf Python GPIO module for Raspberry Pi
ii python-setuptools 0.6.24-1 all Python Distutils Enhancements (setuptools compatibility)
ii python-simplejson 2.5.2-1 armhf simple, fast, extensible JSON encoder/decoder for Python
ii python-support 1.0.15 all automated rebuilding support for Python modules
ii python-tk 2.7.3-1 armhf Tkinter - Writing Tk applications with Python
ii python-yaml 3.10-4 armhf YAML parser and emitter for Python
ii python-yaml-dbg 3.10-4 armhf YAML parser and emitter for Python (debug build)
ii python2.6 2.6.8-1.1 armhf Interactive high-level object-oriented language (version 2.6)
ii python2.6-minimal 2.6.8-1.1 armhf Minimal subset of the Python language (version 2.6)
ii python2.7 2.7.3-6 armhf Interactive high-level object-oriented language (version 2.7)
ii python2.7-dbg 2.7.3-6 armhf Debug Build of the Python Interpreter (version 2.7)
ii python2.7-dev 2.7.3-6 armhf Header files and a static library for Python (v2.7)
ii python2.7-minimal 2.7.3-6 armhf Minimal subset of the Python language (version 2.7)
pi@scarlettpi ~/dev/scarlettPi/scripts/pi/bin $
Also, just to confirm that pocketsphinx is compiled correctly against the right
libraries:
pi@scarlettpi ~ $ ldd /usr/local/bin/pocketsphinx_continuous
/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so (0xb6f9b000)
libpocketsphinx.so.1 => /usr/local/lib/libpocketsphinx.so.1 (0xb6f5a000)
libsphinxad.so.0 => /usr/local/lib/libsphinxad.so.0 (0xb6f4e000)
libsphinxbase.so.1 => /usr/local/lib/libsphinxbase.so.1 (0xb6f07000)
libpulse.so.0 => /usr/lib/arm-linux-gnueabihf/libpulse.so.0 (0xb6ea8000)
libpulse-simple.so.0 => /usr/lib/arm-linux-gnueabihf/libpulse-simple.so.0 (0xb6e9c000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb6e7d000)
libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0xb6e0c000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb6cdd000)
libjson.so.0 => /lib/arm-linux-gnueabihf/libjson.so.0 (0xb6ccd000)
libpulsecommon-2.0.so => /usr/lib/arm-linux-gnueabihf/pulseaudio/libpulsecommon-2.0.so (0xb6c6b000)
libdbus-1.so.3 => /lib/arm-linux-gnueabihf/libdbus-1.so.3 (0xb6c29000)
libcap.so.2 => /lib/arm-linux-gnueabihf/libcap.so.2 (0xb6c1e000)
librt.so.1 => /lib/arm-linux-gnueabihf/librt.so.1 (0xb6c0f000)
libdl.so.2 => /lib/arm-linux-gnueabihf/libdl.so.2 (0xb6c04000)
libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xb6bdb000)
/lib/ld-linux-armhf.so.3 (0xb6fa8000)
libX11-xcb.so.1 => /usr/lib/arm-linux-gnueabihf/libX11-xcb.so.1 (0xb6bd2000)
libX11.so.6 => /usr/lib/arm-linux-gnueabihf/libX11.so.6 (0xb6abe000)
libxcb.so.1 => /usr/lib/arm-linux-gnueabihf/libxcb.so.1 (0xb6a9f000)
libICE.so.6 => /usr/lib/arm-linux-gnueabihf/libICE.so.6 (0xb6a82000)
libSM.so.6 => /usr/lib/arm-linux-gnueabihf/libSM.so.6 (0xb6a73000)
libXtst.so.6 => /usr/lib/arm-linux-gnueabihf/libXtst.so.6 (0xb6a67000)
libwrap.so.0 => /lib/arm-linux-gnueabihf/libwrap.so.0 (0xb6a57000)
libsndfile.so.1 => /usr/lib/arm-linux-gnueabihf/libsndfile.so.1 (0xb69ee000)
libasyncns.so.0 => /usr/lib/arm-linux-gnueabihf/libasyncns.so.0 (0xb69e2000)
libattr.so.1 => /lib/arm-linux-gnueabihf/libattr.so.1 (0xb69d4000)
libXau.so.6 => /usr/lib/arm-linux-gnueabihf/libXau.so.6 (0xb69ca000)
libXdmcp.so.6 => /usr/lib/arm-linux-gnueabihf/libXdmcp.so.6 (0xb69be000)
libuuid.so.1 => /lib/arm-linux-gnueabihf/libuuid.so.1 (0xb69b1000)
libXext.so.6 => /usr/lib/arm-linux-gnueabihf/libXext.so.6 (0xb699b000)
libXi.so.6 => /usr/lib/arm-linux-gnueabihf/libXi.so.6 (0xb6986000)
libnsl.so.1 => /lib/arm-linux-gnueabihf/libnsl.so.1 (0xb696a000)
libFLAC.so.8 => /usr/lib/arm-linux-gnueabihf/libFLAC.so.8 (0xb691f000)
libvorbisenc.so.2 => /usr/lib/arm-linux-gnueabihf/libvorbisenc.so.2 (0xb67b2000)
libvorbis.so.0 => /usr/lib/arm-linux-gnueabihf/libvorbis.so.0 (0xb6782000)
libogg.so.0 => /usr/lib/arm-linux-gnueabihf/libogg.so.0 (0xb6775000)
libresolv.so.2 => /lib/arm-linux-gnueabihf/libresolv.so.2 (0xb6761000)
pi@scarlettpi ~ $
And if you need to see any information about my microphone ( ps3 eye ):
Had to throw this in pastebin, ran out of room in this post.
<http://pastebin.com/gSDZwRHc>
Does anyone have any ideas why this isn't working? Please let me know if my
question needs any clarification or if I can provide any more information to
aid with debugging.
Thanks.
Answer: So I finally got this guy working.
**Couple key things I needed to realize:**
_1\. Even if you're using PulseAudio on your Raspberry Pi, you can still use ALSA as long as it is installed. (This might seem like a no-brainer to others, but I honestly didn't realize I could use both of these at the same time.) Hint via ([syb0rg](http://raspberrypi.stackexchange.com/questions/8872/python-having-trouble-accessing-usb-microphone-via-gstreamer-to-perform-speech-r#comment12881_8872))._
_2\. When it comes to sending large amounts of raw audio data (**.wav** format in my case) to Pocketsphinx via GStreamer, ([queues](http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-queue.html)) are your friend._
After messing around with gst-launch-0.10 on the command line for a while I
came across something that actually worked:
gst-launch-0.10 alsasrc device=hw:1 ! queue ! audioconvert ! audioresample ! queue ! vader name=vader auto-threshold=true ! pocketsphinx lm=/home/pi/dev/scarlettPi/config/speech/lm/scarlett.lm dict=/home/pi/dev/scarlettPi/config/speech/dict/scarlett.dic hmm=/usr/local/share/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k name=listener ! fakesink dump=1
**So what's happening here?**
  * GStreamer is listening to device hw:1 (which is my PS3 Eye USB device). This device might vary; you can determine it by running:
>
> pi@scarlettpi ~ $ pacmd dump
> Welcome to PulseAudio! Use "help" for usage information.
>
> ....
>
> load-module module-alsa-card device_id="0" name="platform-
> bcm2835_AUD0.0"
>
>
> card_name="alsa_card.platform-bcm2835_AUD0.0" namereg_fail=false tsched=yes
> fixed_latency_range=no ignore_dB=no deferred_volume=yes
> card_properties="module-udev-detect.discovered=1"
>
>
> load-module module-udev-detect
>
> load-module module-bluetooth-discover
>
> load-module module-esound-protocol-unix
>
> load-module module-native-protocol-unix
>
> load-module module-gconf
>
> load-module module-default-device-restore
>
> load-module module-rescue-streams
>
> load-module module-always-sink
>
> load-module module-intended-roles
>
> load-module module-console-kit
>
> load-module module-systemd-login
>
> load-module module-position-event-sounds
>
> load-module module-role-cork
>
> load-module module-filter-heuristics
>
> load-module module-filter-apply
>
> load-module module-dbus-protocol
>
> load-module module-switch-on-port-available
>
> load-module module-cli-protocol-unix
>
> load-module module-alsa-card device_id="1" name="usb-
> OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01-CameraB409241"
> card_name="alsa_card.usb-
> OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01-CameraB409241"
> namereg_fail=false tsched=yes fixed_latency_range=no ignore_dB=no
>
>
> deferred_volume=yes card_properties="module-udev-detect.discovered=1"
>
>
> ....
>
**The important line to notice is:**
load-module module-alsa-card device_id="1" name="usb-OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01-CameraB409241" card_name="alsa_card.usb-OmniVision_Technologies__Inc._USB_Camera-B4.09.24.1-01-CameraB409241" namereg_fail=false tsched=yes fixed_latency_range=no ignore_dB=no deferred_volume=yes card_properties="module-udev-detect.discovered=1"
**That's my PlayStation 3 Eye, and it's on device_id=1. Hence hw:1**
* The audio data coming in from the ps3 eye gets resampled and added to a gstreamer queue and has to pass through a ([vader](http://cmusphinx.sourceforge.net/wiki/gstreamer#the_vader_element)) element before moving on to pocketsphinx. By passing the audio through the vader element w/ the auto-threshold=true flag on, gstreamer can determine the background noise level, which can be important if you have a lousy soundcard or a far-field microphone. This is how the pocketsphinx element will know when an utterance starts and ends.
* Add the regular pocketsphix arguments to the pipeline that we already determined ([here](http://stackoverflow.com/questions/17778532/raspberrypi-pocketsphinx-ps3eye-error-failed-to-open-audio-device/17823877#17823877)).
* Pass everything into a fakesink since we don't need to hear anything right now, we only need pocketsphinx to listen to everything. The dump=1 flag provides us with more debugging information to see what's being processed / if audio is being accepted at all.
**After getting that to run successfully, the new Python code looks like this:**
self.pipeline = gst.parse_launch(' ! '.join(['alsasrc device=' + scarlett_config.gimmie('audio_input_device'),
'queue',
'audioconvert',
'audioresample',
'queue',
'vader name=vader auto-threshold=true',
'pocketsphinx lm=' + scarlett_config.gimmie('LM') + ' dict=' + scarlett_config.gimmie('DICT') + ' hmm=' + scarlett_config.gimmie('HMM') + ' name=listener',
'fakesink dump=1']))
Hope this helps someone.
**NOTE: Please excuse me if my GStreamer pipeline is using excessive elements.
I'm fairly new to GStreamer, and I'm open to more efficient ways of doing
this.**
|
python calculate table without (numpy)
Question: Hi, I have a quick question about finding a line # in my text and using that
line # to calculate something (this is not a homework question; I just started
learning Python).
For example, if my text looks like
100 200 300
400 500 600
700 800 900
120 130 140
150 160 170
and
f1 = open('sample4.txt','r')
line_num = 0
search_phrase = "100"
for line in f1.readlines():
line_num += 1
if line.find(search_phrase) >= 0:
x = line_num
print (x)
import numpy
data = numpy.loadtxt('sample4.txt')
print(data[x:x+3,1].sum())
I could get
    1430.0
which is (500+800+130). However, if my text looks like:
apple is good
i dont like apple
100 200 300
400 500 600
700 800 900
120 130 140
150 160 170
i love orange
an error pops up saying
Traceback (most recent call last):
File "C:/Python33/sample4.py", line 13, in <module>
data = numpy.loadtxt('sample4.txt')
File "C:\Python33\lib\site-packages\numpy\lib\npyio.py", line 827, in loadtxt
items = [conv(val) for (conv, val) in zip(converters, vals)]
File "C:\Python33\lib\site-packages\numpy\lib\npyio.py", line 827, in <listcomp>
items = [conv(val) for (conv, val) in zip(converters, vals)]
ValueError: could not convert string to float: b'apple'
I think the reason this error pops up is because of **NUMPY** (loadtxt can't
parse the text lines). Is there any way to make this work without using
skip_header or skip_footer?
Answer: It seems that loadtxt can use a file handle as input, so one (maybe ugly)
trick might be to first determine the line of your interesting text as you
did, then reopen the file, read the first few uninteresting lines and then
pass the file handle to loadtxt (untested):
    import numpy as np
    fname = 'sample4.txt'
    search_phrase = '100'
    with open(fname) as fid:
        for linenum, line in enumerate(fid):
            if search_phrase in line:
                break  # if the n-th line is interesting, linenum == n-1
    # reopen file
    with open(fname) as fid:
        for i in range(linenum):  # range, since you are on Python 3
            fid.readline()        # throw away uninteresting lines
        data = np.loadtxt(fid)    # pass the file handle
    print(data[:3, 1].sum())      # interesting rows now start at row 0
But what is wrong with using skiprows? The second part could then be changed
into
#get linenum as before
data = np.loadtxt(fname, skiprows = linenum)
print(data[:3,1].sum()) #interesting stuff is now in first row
|
Forking python, defunct child
Question: I am having some trouble with Python child processes, so I wrote a very simple
script:
import os
import sys
import time
pid = os.fork()
if pid:
#parent
time.sleep(30)
else:
#child
#os._exit(0)
sys.exit()
While the parent process is sleeping I run
ps fax | grep py[t]hon
And I read this output
2577 ? S 0:00 python /home/pi/python/GPIO/GPIODaemon.py restart
2583 ? Z 0:00 \_ [python] <defunct>
Using `sys.exit()` or `os._exit(0)`, there is always a zombie process and I'm
unable to understand why.
Working on my more complex code I was thinking that there were some resources
that the child processes were keeping locked, but in this simplified code the
child has no file/socket/db connection at all! Why is the child process zombified?
Answer: To clear the child process in Unix you need to wait on the child, check one of
the os.wait(), os.waitpid(), os.wait3() or os.wait4() at
<http://docs.python.org/2/library/os.html#os.wait>
As to why this is so, this is a design decision of Unix. The child process
keeps its return value in its process state, if it was to disappear you'll
have no return value. The os.wait() also returns to you the return value and
then the child process is released and all associated resources are released.
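Applied to the stripped-down script above, that just means reaping the child before the sleep, after which `ps` no longer shows the defunct entry — a minimal sketch:
    import os
    import sys
    import time
    pid = os.fork()
    if pid:
        # parent: reap the child first; waitpid returns (pid, exit status)
        os.waitpid(pid, 0)
        time.sleep(30)
    else:
        # child
        sys.exit()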
|
Cannot json dump a python dictionary with id key
Question:
json.dump({'id' : '3'})
File "/Library/Python/2.7/site-packages/simplejson/__init__.py", line 354, in dumps
return _default_encoder.encode(obj)
File "/Library/Python/2.7/site-packages/simplejson/encoder.py", line 262, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Library/Python/2.7/site-packages/simplejson/encoder.py", line 340, in iterencode
return _iterencode(o, 0)
File "/Library/Python/2.7/site-packages/simplejson/encoder.py", line 239, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: ObjectId('520183b519744a0ca3d36003') is not JSON serializable
What is wrong?
Answer: Try using the standard json library and `dumps` instead.
With that change it works fine for me.
>>> import json
>>> json.dumps({'id' : '3'})
'{"id": "3"}'
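Note that if the dictionary actually contains a pymongo `ObjectId` (as the traceback suggests), switching libraries alone won't make it serializable. A minimal sketch of one common workaround — letting the `default` hook convert unknown types to strings — assuming the hex string form of the id is acceptable:
    import json
    from bson.objectid import ObjectId  # bson ships with pymongo
    doc = {'id': ObjectId('520183b519744a0ca3d36003')}
    print json.dumps(doc, default=str)  # default=str is used for non-serializable values
    # {"id": "520183b519744a0ca3d36003"}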
|
Exporting Python List into csv
Question: So I am trying to create a truth table that I can export to either CSV or
Excel format. I am a newbie to Python, so please bear with me if my code is
horrible.
I started with this as my code for the truth table after some research:
import itertools
table = list(itertools.product([False, True], repeat=6))
print(table)
Then I got to this code:
import csv
import sys
with open('C:\\blahblah5.csv','w') as fout:
writer = csv.writer(fout, delimiter = ',')
writer.writerows(table)
This gets me almost to where I need to be, with the truth table in CSV
format. However, when I open the file in Excel, there are blank rows
inserted between my records. I tried a tip I found online saying to
change the file mode from w to wb, but I get this error when I do:
Traceback (most recent call last):
File "<pyshell#238>", line 3, in <module>
writer3.writerows(table)
TypeError: 'str' does not support the buffer interface
I am not sure where to go from here because I feel like I am so close to
getting this into the format I want.
Answer: I suspect you're using Python 3. The way you open files for writing `csv`s
changed a little: if you write
with open("C:\\blahblah5.csv", "w", newline="") as fout:
it should work, producing a file which looks like
False,False,False,False,False,False
False,False,False,False,False,True
False,False,False,False,True,False
[etc.]
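Put together under that assumption (Python 3), the whole script would look like:
    import csv
    import itertools
    table = list(itertools.product([False, True], repeat=6))
    with open('C:\\blahblah5.csv', 'w', newline='') as fout:
        writer = csv.writer(fout, delimiter=',')
        writer.writerows(table)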
|
increase insertion speed in python
Question: I have a script in Django which I am running in the terminal to update field
values in the database. There are about 3,000 records to be updated; it updates
them, but it takes a lot of time.
Here is the code:
getAge = myplayer.objects.all()
for i in getAge:
        i.age = i.age + 0.0192  # equivalent to 1/52
i.save()
print "new age of id - " ,i.id, "is ", i.age
I am using a MySQL DB; please suggest how I can speed up the update time.
Thanks
Answer: If you don't need to print new ages, you can do as follow using
[F](https://docs.djangoproject.com/en/1.5/topics/db/queries/#query-
expressions):
from django.db.models import F
myplayer.objects.update(age=F('age')+0.0192)
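If you do need the per-row prints (so a single `update()` isn't enough), wrapping the loop in one transaction usually helps a lot too, because otherwise MySQL commits after every `save()`. A rough sketch, assuming Django 1.5 as in the link above:
    from django.db import transaction
    with transaction.commit_on_success():
        for i in myplayer.objects.all():
            i.age = i.age + 0.0192
            i.save()
            print "new age of id -", i.id, "is", i.age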
|
Unit test for only root user in python
Question: Does the unittest library for Python (especially 3.x, I don't really care about
2.x) have a decorator to restrict a test so it only runs as the root user?
I have this testing function.
def test_blabla_as_root():
self.assertEqual(blabla(), 1)
The blabla function can only be executed by root. I want a root-user-only decorator
so normal users will skip this test:
@support.root_only
def test_blabla_as_root():
self.assertEqual(blabla(), 1)
Does such decorator exist? We have @support.cpython_only decorator though.
Answer: If you're using unittest, you can skip tests or entire test cases using
`unittest.skipIf` and `unittest.skipUnless`.
Here, you could do:
import os
    @unittest.skipUnless(os.getuid() == 0, "requires root")  # root has a uid of 0
def test_bla_as_root(self):
...
Which could be simplified to the (less readable):
    @unittest.skipIf(os.getuid(), "requires root")
|
Python - a set of one type only
Question: I want to create a Set class in Python, but unlike the set class already
implemented in Python, I want my set to contain elements of only one type: for
example a set of integers, a set of strings, or a set of any other user-defined
class.
Is there any way in Python to pass the type as a parameter to the class, or
any other way to do it?
I was thinking about using a dict to implement the set, but dicts can contain
elements of different types.
Answer: Something like this?
from sets import Set
class MySet(Set):
def __init__(self, iter, klass=None):
if klass is not None:
for item in iter:
if not isinstance(item, klass):
raise Exception("Error")
super(MySet, self).__init__(iter)
if __name__ == '__main__':
set1 = MySet([1,2,3], int)
set2 = MySet([2,3,4], int)
print set2.intersection(set1)
If you want control of the type even in methods like `intersection`, you
should override the call that generates the new Set
(`self.__class__(common)`).
Bye
|
Scrapy run from Python
Question: I am trying to run Scrapy from Python. I'm looking at this code which
([source](http://doc.scrapy.org/en/0.16/topics/practices.html)):
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log
from testspiders.spiders.followall import FollowAllSpider
spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run() # the script will block here
My issue is that I'm confused about how to adjust this code to run my own spider.
I have called my spider project "spider_a", which specifies the domain to crawl
within the spider itself.
What I am asking is, if I run my spider with the following code:
scrapy crawl spider_a
How do I adjust the example python code above to do the same?
Answer: Just import it and pass to `crawler.crawl()`, like:
from testspiders.spiders.spider_a import MySpider
spider = MySpider()
crawler.crawl(spider)
|
How do I run my Flask app remotely using MongoHQ and Heroku (Python)
Question: I have a script written in Python that lets me consume tweets from Terminal
into a locally hosted mongodb database. To improve uptime, I would like to
host this script remotely on Heroku and to shoot the consumed tweets into a
database hosted with MongoHQ. As I would like to do this without using Django,
I use the Flask framework to deploy the app to Heroku (described here:
<https://devcenter.heroku.com/articles/python>).
When I run a simple "hello world" app using this setup, everything is fine.
However, when I try to run my tweet consuming app, it immediately crashes. How
can I change my app to that it will work with the Flask/Heroku/MongoHQ setup?
The source code is:
import json
import pymongo
import tweepy
consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
class CustomStreamListener(tweepy.StreamListener):
def __init__(self, api):
self.api = api
super(tweepy.StreamListener, self).__init__()
self.db = pymongo.MongoClient().test
def on_data(self, tweet):
self.db.tweets.insert(json.loads(tweet))
def on_error(self, status_code):
return True # Don't kill the stream
def on_timeout(self):
return True # Don't kill the stream
sapi = tweepy.streaming.Stream(auth, CustomStreamListener(api))
sapi.filter(track=['rooney'])
I am completely new to programming, so I imagine the solution to this problem
might well be quite straightforward. However, I am stuck and could really use
some help to progress.
Answer: It's hard to debug without more information, but my first guess is that you
don't have the dependancies installed.
Heroku gives you a clean python environment. However, you need special
libraries like `tweepy` that don't get installed by default. Thus, you have to
let Heroku know to install these.
You'll need to use pip and a requirements.txt document which lists out all the
libraries that you are trying to use as well as what version numbers.
<https://devcenter.heroku.com/articles/python-pip>
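Concretely, that means adding a `requirements.txt` at the root of your repo listing every third-party library the script imports — the version pins below are placeholders, use whatever you actually develop against:
    Flask==0.10.1
    pymongo==2.5.2
    tweepy==2.1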
|
keyboard emulation with windows and python
Question: I'm trying to achieve the same thing under Windows that I have achieved in a
Linux environment: serial to keyboard input. Someone please help or point me in
the right direction. It should be as simple as possible; I tried but failed.
import serial
import time
import uinput
import binascii
ser = serial.Serial(
port='/dev/ttyUSB0',\
baudrate=115200,\
parity=serial.PARITY_NONE,\
stopbits=serial.STOPBITS_ONE,\
bytesize=serial.EIGHTBITS,\
timeout=0)
ser.open()
device = uinput.Device([uinput.KEY_A, uinput.KEY_B])
time.sleep(1)
while True:
datain=ser.read(1)
if datain=='':
continue
datain_int=int(binascii.hexlify(datain), 16)
datain_bin=bin(datain_int)
if datain_int==0:
continue
if datain_int==128:
device.emit_click(uinput.KEY_A)
elif datain_int==64:
device.emit_click(uinput.KEY_B)
Answer: I'm not sure what you're trying to do exactly, but here's a script I once
wrote to build a hotkey cheat engine for GTA San Andreas
import pyHook
import pythoncom
import win32com.client
shell = win32com.client.Dispatch("WScript.Shell")
hm = pyHook.HookManager()
def OnKeyboardEvent(event):
if event.KeyID == 48:
#cheat set 1
shell.SendKeys("chttychttybangbang")
if event.KeyID == 49:
#cheat set 2
shell.SendKeys("hesoyamuzumymwfullclip")
if event.KeyID == 50:
#cheat set 3
shell.SendKeys("hesoyaprofessionalskitfullclip")
if event.KeyID == 51:
#cheat set 4
shell.SendKeys("hesoyalxgiwylfullclip")
if event.KeyID == 52:
#cheat set 5
shell.SendKeys("zeiivgylteiczflyingfisheveryoneisrichspeedfreak")
if event.KeyID == 53:
#cheat set 6
shell.SendKeys("aiwprton")
if event.KeyID == 54:
#cheat set 7
shell.SendKeys("oldspeeddemon")
if event.KeyID == 55:
#cheat set 8
shell.SendKeys("itsallbull")
if event.KeyID == 56:
#cheat set 9
shell.SendKeys("monstermash")
if event.KeyID == 57:
#cheat set 10
shell.SendKeys("jumpjetkgggdkpaiypwzqpohdudeflyingtostunt")
# return True to pass the event to other handlers
return True
hm.KeyDown = OnKeyboardEvent
hm.HookKeyboard()
pythoncom.PumpMessages()
And it worked on Windows. Hope this helps.
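To get closer to your original serial-to-keystroke idea, the same `SendKeys` call can be driven from pyserial instead of a keyboard hook. A rough sketch, assuming the adapter shows up as COM3 on Windows and the same byte values as your Linux version:
    import serial
    import win32com.client
    shell = win32com.client.Dispatch("WScript.Shell")
    ser = serial.Serial(port='COM3',  # adjust to whatever port your adapter gets
                        baudrate=115200,
                        parity=serial.PARITY_NONE,
                        stopbits=serial.STOPBITS_ONE,
                        bytesize=serial.EIGHTBITS,
                        timeout=1)
    while True:
        data = ser.read(1)
        if not data:
            continue  # read timed out, nothing received
        value = ord(data)
        if value == 128:
            shell.SendKeys("a")
        elif value == 64:
            shell.SendKeys("b")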
|
How to get the status of spawn process in twisted python?
Question: I want to trigger many long-running processes continuously. And, based on the
status returned by each executed process, I need to perform other tasks. In
the example below, I'm able to spawn processes, but I'm not able to
capture/get the details of the spawned processes' execution status returned to
the main loop (i.e. in the CmdProtocol class).
I'm new to these Twisted Python concepts - can someone help me here?
import sys
from twisted.internet.protocol import ServerFactory, ProcessProtocol
from twisted.protocols.basic import LineReceiver
from twisted.internet import reactor
from twisted.internet import protocol
import os
import signal
class MyPP(protocol.ProcessProtocol):
def __init__(self):
self.parent_id = os.getpid()
def connectionMade(self):
print "connectionMade!"
print "Parent id = %s" % self.parent_id
print "Child process id = %s" % self.transport.pid
def outReceived(self, data):
print "out", data
def errReceived(self, data):
print "error", data
def inConnectionLost(self):
print "inConnectionLost! stdin is closed! (we probably did it)"
print "Parent id = %s" % self.parent_id
print "Child process id closes STDIN= %s" % self.transport.pid
def outConnectionLost(self):
print "outConnectionLost! The child closed their stdout!"
print "Parent id = %s" % self.parent_id
print "Child process id closes STDOUT = %s" % self.transport.pid
def errConnectionLost(self):
print "errConnectionLost! The child closed their stderr."
print "Parent id = %s" % self.parent_id
print "Child process id closes ERRCONN = %s" % self.transport.pid
def processExited(self, reason):
print "processExited %s, status %d" % (self.transport.pid, reason.value.exitCode,)
def processEnded(self, reason):
print "%s processEnded, status %d" % (self.transport.pid, reason.value.exitCode,)
print "quitting"
class CmdProtocol(LineReceiver):
delimiter = '\n'
def connectionMade(self):
self.client_ip = self.transport.getPeer()
print "Client connection from %s" % self.client_ip
def processcmd(self):
pp = MyPP()
cmd = ['c:\Python27\python.exe', '-u', 'print_hi.py']
print "Calling processcmd - <%s>" % cmd
reactor.spawnProcess(pp, cmd[0], cmd[1:])
def connectionLost(self, reason):
print "Lost client connection. Reason: %s" % reason
def lineReceived(self, line):
if not line: return
# Parse the command
print 'Cmd received from %s : %s' % (self.client_ip, line)
commandParts = line.split()
if len(commandParts) > 0:
command = commandParts[0].lower()
args = commandParts[1:]
try:
print "Command received : <%s>" % command
method = getattr(self, command)
except AttributeError, e:
self.sendLine('Error: no such command.')
else:
try:
res = method()
print "Returned status:%s" % res
self.sendLine('Command executed successfully.')
except Exception, e:
self.sendLine('Error: ' + str(e))
def do_kill(self, pid):
"""kill: Kill a process (PID)"""
print 'Killing pid:%s' % pid
res = os.kill(int(pid), signal.SIGTERM)
print "Kill Status %s" % res
class MyFactory(ServerFactory):
protocol = CmdProtocol
def __init__(self):
print "Factory called"
reactor.listenTCP(8000, MyFactory())
reactor.run()
Answer: This is actually a very basic Python data structures question. You just need
to refer to an instance of `CmdProtocol` from an instance of `MyPP`. Since
`CmdProtocol` is what constructs `MyPP` in the first place, this is easy. Just
change the construction of `MyPP` to look like this:
def processcmd(self):
pp = MyPP(self)
and then `MyPP.__init__` to look like this:
def __init__(self, cmd_protocol):
self.parent_id = os.getpid()
self.cmd_protocol = cmd_protocol
Then, in any method on `MyPP`, you can access the relevant `CmdProtocol`
instance with `self.cmd_protocol`.
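For example, `MyPP.processEnded` could then push the exit status back to the client over the existing connection — `CmdProtocol` is a `LineReceiver`, so it already has `sendLine`. A sketch (the message text is just illustrative):
    def processEnded(self, reason):
        status = reason.value.exitCode
        print "%s processEnded, status %d" % (self.transport.pid, status)
        # report the result back over the TCP connection that issued the command
        self.cmd_protocol.sendLine('Process %s exited with status %s' % (self.transport.pid, status))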
|
MAXREPEAT issue when running Python 2.7 from MacPorts
Question: I'm running into some issues with running python2.7 from MacPorts.
Here's a list of the available Python versions:
$ sudo port select python
Available versions for python:
none
python25-apple
python26-apple
python27 (active)
python27-apple
When I set `python27` to be active (as above), I get the following error when
running `python`:
$ sudo port select --set python python27
Selecting 'python27' for 'python' succeeded. 'python27' is now active.
$ python
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 548, in <module>
main()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 530, in main
known_paths = addusersitepackages(known_paths)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 266, in addusersitepackages
user_site = getusersitepackages()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 241, in getusersitepackages
user_base = getuserbase() # this will also set USER_BASE
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 231, in getuserbase
USER_BASE = get_config_var('userbase')
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 516, in get_config_var
return get_config_vars().get(name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 449, in get_config_vars
import re
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_parse.py", line 17, in <module>
from sre_constants import *
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sre_constants.py", line 18, in <module>
from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT
The version of the port I have installed is (according to `sudo port
installed`):
python27 @2.7.5_1
python27 @2.7.5_1+universal (active)
I do not get the above error when running `python2.7`, only when I run
`python` in the shell.
$ which python2.7
/opt/local/bin/python2.7
$ which python
/opt/local/bin/python
Any suggestions?
Thanks.
Answer: I just experienced the same issue. A quick and easy workaround is to add an
alias in ~/.profile:
alias python='python2.7'
If you don't have a .profile file already, just create one. Open a new
instance of terminal and python should work fine with no errors.
|
Python with etc/Shadow
Question: So I'm writing this program that needs to check the password hash in
/etc/shadow and compare it to the password the user entered. I tried hashing
the password with hashlib.sha512, but the result was not the same. I think
it's salted somehow, but I don't know if it uses a universal salt or how I
can get the salt each time.
tldr; I need a way for a user to enter a password, then have the program hash
it and check it against /etc/shadow. Any ideas?
Answer: Try this: <https://pypi.python.org/pypi/pam>. It's the first Google result for
`python pam`. Look in your distribution's package manager for python-pam if it
exists; otherwise install it with `pip` or `easy_install`.
Small example:
>>> import pam
>>> pam.authenticate('fred', 'fredspassword')
False
|
Python: Catch Ctrl-C command. Prompt "really want to quit (y/n)", resume execution if no
Question: I have a program that may have a lengthy execution. In the main module I have
the following:
import signal
def run_program()
...time consuming execution...
def Exit_gracefully(signal, frame):
... log exiting information ...
... close any open files ...
sys.exit(0)
if __name__ == '__main__':
signal.signal(signal.SIGINT, Exit_gracefully)
run_program()
This works fine, but I'd like the possibility to pause execution upon catching
SIGINT, prompting the user if they would really like to quit, and resuming
where I left off in run_program() if they decide they don't want to quit.
The only way I can think of doing this is running the program in a separate
thread, keeping the main thread waiting on it and ready to catch SIGINT. If
the user wants to quit the main thread can do cleanup and kill the child
thread.
Is there a simpler way?
Answer: The Python signal handlers do not seem to be real signal handlers; that is,
they happen after the fact, in the normal flow and after the C handler has
already returned. Thus you can simply put your quit logic within the signal
handler. As the signal handler runs in the main thread, it will block
execution there too.
Something like this seems to work nicely.
import signal
import time
import sys
def run_program():
while True:
time.sleep(1)
print("a")
def exit_gracefully(signum, frame):
# restore the original signal handler as otherwise evil things will happen
# in raw_input when CTRL+C is pressed, and our signal handler is not re-entrant
signal.signal(signal.SIGINT, original_sigint)
try:
if raw_input("\nReally quit? (y/n)> ").lower().startswith('y'):
sys.exit(1)
except KeyboardInterrupt:
print("Ok ok, quitting")
sys.exit(1)
# restore the exit gracefully handler here
signal.signal(signal.SIGINT, exit_gracefully)
if __name__ == '__main__':
# store the original SIGINT handler
original_sigint = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGINT, exit_gracefully)
run_program()
The code restores the original signal handler for the duration of `raw_input`;
`raw_input` itself is not re-entrable, and re-entering it will lead to
`RuntimeError: can't re-enter readline` being raised from `time.sleep` which
is something we don't want as it is harder to catch than `KeyboardInterrupt`.
Rather, we let 2 consecutive Ctrl-C's to raise `KeyboardInterrupt`.
|
Web App to run Python Scripts - Run as a background process.
Question: I have a Python script which I want a user to run on the server, without
giving them SSH login permission. I got a web app from the link below: [How to
connect a webpage with python to mod_wsgi?](http://stackoverflow.com/questions/16122770/how-to-connect-a-webpage-with-python-to-mod_wsgi)
Here is the modified version that's working for me:
* * *
import web
class Echo:
def GET(self):
return """<html><form name="script-input-form" action="/" method="post">
<p><label for="Import"><font color=green><b>Press the button to import CSV</b></font)</label>
Import: <input type="submit" value="ImportCSV"></p>
</form><html>"""
def POST(self):
data = web.input()
return data
obj = compile('execfile("importcsv.py")', '', 'exec')
result = eval(obj, globals(), locals())
web.seeother('/')
urls = (
'/.*', Echo,
)
if __name__ == "__main__":
app = web.application(urls, globals())
app.run()
else:
app = web.application(urls, globals()).wsgifunc()
* * *
I stored the script in site.py and I am executing it with the command "**python2.7
site.py 10.10.10.19:8080**". When a user accesses 10.10.10.19:8080 he can see the
web page and click the button when he wants the code to be executed.
Now the issue is:
The web page stops when I close my SSH session :(
How do I run it in the background? I tried & and that didn't help.
Thanks and Regards, Rijil Hari
Answer: It looks like you want to spawn a new process, since the current process is
attached to the shell (alternatively, you could detach the terminal from the
ssh session).
To spawn a new process you can refer to [bash spawn new process](http://stackoverflow.com/questions/2429763/new-application-process-from-bash-shell), or you can try [this option instead](http://askubuntu.com/questions/8653/how-to-keep-processes-running-after-ending-ssh-session), which keeps your session alive after you disconnect.
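One simple way to keep it running after you log out is to start the script with `nohup` and put it in the background, e.g.:
    nohup python2.7 site.py 10.10.10.19:8080 &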
|
Python heapq not being pushed in right order?
Question: util.py
import heapq
class PriorityQueue:
def __init__(self):
self.heap=[]
def push(self,item,priority):
pair = (priority,item)
heapq.heappush(self.heap,pair)
def pop(self):
(priority,item) = heapq.heappop(self.heap)
return item
def getHeap(self):
return self.heap
    class PriorityQueueWithFunction(PriorityQueue):
def __init__ (self,priorityFunction):
self.priorityFunction = priorityFunction
PriorityQueue.__init__(self)
def push(self,item):
PriorityQueue.push(self, item, self.priorityFunction(item))
pqtest.py
import os,sys
lib_path = os.path.abspath('../../lib/here')
sys.path.append(lib_path)
import Util
import string
import random
def str_gen():
return ''.join(random.choice(string.ascii_uppercase + string.digits) for x in range(random.randint(2,8)))
def pqfunc(item):
return len(str(item))
    rdy = Util.PriorityQueueWithFunction(pqfunc)
for i in range(1,10):
rdy.push(str_gen())
for i in rdy.getHeap():
print i
it printed
(3, '2UA')
(4, '6FD6')
(6, 'DLB66A') <---out of place
(4, 'J97K')
(7, 'GFQMRZZ') <----out of place
(6, 'SRU5T4')
(7, 'BP4PGKH')
(7, 'CBUJWQO')
(7, '5KNNY1P')
Why are those two out of place and how do I fix it?
And when I add `print rdy.pop()` inside `for i in rdy.getHeap():`,
it only pops 5 of them when I pushed in 9.
Answer: The [`heapq`](http://docs.python.org/2/library/heapq.html) functions do not
keep your list _sorted_ , but only guarantee that the _heap property_ is
maintained:
* `heap[k] <= heap[2*k+1]`
* `heap[k] <= heap[2*k+2]`
Consequently, `heap[0]` is always the smallest item.
When you want to iterate over the items in order of priority, you cannot
simply iterate over the heap but need to `pop()` items off until the queue is
empty. `heappop()` will take the first item, then reorganize the list to
fulfil the heap invariant.
See also: <http://en.wikipedia.org/wiki/Heap_(data_structure)>
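So to see the items in priority order, keep popping from your wrapper until its heap is empty instead of iterating over `getHeap()`, e.g.:
    while rdy.getHeap():
        print rdy.pop()  # shortest strings come out first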
|
3dplot scatter points via python
Question: I just used `numpy.loadtxt('filename', usecols=(0,))` to load a csv with the
following format:
x,y,z
1.1,2.2,3.3
5.5,1.45,6.77
(There are ~1M lines). I'd like to make a scatterplot. I searched the web and
found `numpy.meshgrid` and `mlab.surf` but I'm not sure what to do. Please
point me in the right direction.
Answer: I think you can use [matplotlib](http://matplotlib.org/); it's very powerful
and widely used, and most importantly it has good documentation and is easy to use.
Hope this helps!
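For a 3D scatter specifically, matplotlib's `mplot3d` toolkit does the job. A sketch, assuming the `x,y,z` header line shown above (with ~1M points rendering may be slow, so consider plotting a random subsample first):
    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection
    data = np.loadtxt('filename', delimiter=',', skiprows=1)  # skip the x,y,z header
    x, y, z = data[:, 0], data[:, 1], data[:, 2]
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(x, y, z, s=1)  # small markers help with many points
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('z')
    plt.show()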
|
Python: Parsing Timestamps Embedded in Filenames
Question: I am working on a django project with avatar support and the system (which
isn't the greatest but needs to be maintained) requires that we embed a
timestamp of the form YYYYMMDDHHMM in the user generated avatar files and
concats that with the user_id, for example:
23_201308080930.png
I have written a function that parses these filenames and returns the most
recent timestamp:
def _get_timestamp(self):
"""Return the timestamp of a user's most recently uploaded avatar."""
path = settings.USER_AVATAR_DIRECTORY + self._get_dir()
user_id = self.user_id
file_re = re.escape(str(user_id)) + r"_\d{12}.png"
times = []
[times.append(file) for file in os.listdir(path) if re.match(file_re, file)]
if times:
digits = [re.findall("\d{12}", timestamp) for timestamp in times]
timestamp = sorted(digits, reverse=True)[0][0]
return timestamp
It works OK, but the double [0][0] pop necessary to traverse the list within a
list that is returned by the sequential regexen is a bit distasteful and
overall it all seems a bit blunt. Furthermore, although the avatars are in
reality spread over many directories (automatically generated by the user_id
but that's not really important here) I feel that possibly the brute force
regular expression searching could make a performance hit if a directory was
very large.
I am interested to know what might be the optimal and idiomatic solution for
this problem? Is it a candidate for generators or some form of lazy
evaluation?
Answer: I wouldn't use regexps for this; it's best to avoid them when you don't really
need them. Here's how I'd do it (untested):
def _get_timestamp(self):
"""Return the timestamp of a user's most recently uploaded avatar."""
path = settings.USER_AVATAR_DIRECTORY + self._get_dir()
filenames = [filename for filename in os.listdir(path)
if filename.partition('_')[0] == str(self.user_id)]
filenames.sort(reverse=True)
return (filenames[0].rpartition('_')[2].partition('.')[0]
if filenames else None)
|
how to extract all the data between
Question:
<p align="JUSTIFY"><a href="#abcd"> Mr A </a></p>
<p align="JUSTIFY">I </p>
<p align="JUSTIFY"> have a question </p>
<p align="JUSTIFY"> </p>
<p align="JUSTIFY"><a href="#mnop"> Mr B </a></p>
<p align="JUSTIFY">The </p>
<p align="JUSTIFY">answer is</p>
<p align="JUSTIFY">not there</p>
<p align="JUSTIFY"> </p>
<p align="JUSTIFY"><a href="wxyz"> Mr C </a></p>
<p align="JUSTIFY">Please</p>
<p align="JUSTIFY">Help</p>
I want to iterate the extraction of the data with the help of the empty separator paragraphs (`<p align="JUSTIFY"> </p>`).
* The first iteration should display _I have a question_
* second iteration should display _The answer is not there_
* The person names should also be extracted in a different list ..for example ['Mr A','Mr B','Mr C']
If someone has any idea how to do it, that would be useful, because I am trying
to learn Python and got stuck on this problem. The code I tried is:
for t in soup.findAll('p',text = re.compile(' '), attrs = {'align' : 'JUSTIFY'}):
print t
for item in t.parent.next_siblings:
if isinstance(item, Tag):
if 'p' in item.attrs and 'align' in item.attrs['p']:
break
print item
It returns [], which is not what I want.
Answer: You can do that with BeautifulSoup:
from bs4 import BeautifulSoup
s = ""
html = '<p align="JUSTIFY">I </p>\
<p align="JUSTIFY"> have a question </p>\
<p align="JUSTIFY"> </p>\
<p align="JUSTIFY">The </p>\
<p align="JUSTIFY">answer is</p>\
<p align="JUSTIFY">not there</p>\
<p align="JUSTIFY"> </p>\
<p align="JUSTIFY">Please</p>\
<p align="JUSTIFY">Help</p>'
soup = BeautifulSoup(html)
title = soup.findAll("p", {"align" : "JUSTIFY"})
for i in title:
s += ''.join(i.contents)
f = s.split(" ")
for i in f:
print i
|
install Pycuda 2013.1.1 on Windows 7 64bit
Question: I followed the instructions [here](http://wiki.tiker.net/PyCuda/Installation/Windows#Windows_7_64-bit_with_Visual_Studio_Professional_2008_.28Strictly_Binary_Versions.29). I have installed all packages from
<http://www.lfd.uci.edu/~gohlke/pythonlibs/> (all the latest ones).
It seems I installed everything successfully. I ran the code below in IPython:
import pycuda.gpuarray as gpuarray
import pycuda.driver as cuda
import pycuda.autoinit
import numpy
a_gpu = gpuarray.to_gpu(numpy.random.randn(4,4).astype(numpy.float32)) ## pass
a_doubled = (2*a_gpu).get() ## the line can't be passed with Ipython
and got this error:
> File "C:\Python27\lib\site-packages\pycuda\compiler.py", line 137, in
> compile_plain
> lcase_err_text = (stdout+stderr).decode("utf-8").lower() File
> "C:\Python27\lib\encodings\utf_8.py", line 16, in decode return
> codecs.utf_8_decode(input, errors, True) UnicodeDecodeError: 'utf8' codec
> can't decode byte 0xb8 in position 109: invalid start byte
How do I solve this issue? I have struggled with it for several days.
Answer: This appears to have been caused by an error handling issue inside PyCUDA when
code contains unparseable unicode. The bug was
[fixed](https://github.com/inducer/pycuda/issues/37) in late 2013 and should
have been pushed in the PyCUDA 2014.1 release.
[This answer was added as a community wiki entry to get this question off the
unanswered list for the CUDA and PyCUDA tags]
|
Python Installation Location (Windows)
Question: I need to find out if there is Python installed on the computer.
My specific issue is that I am distributing a program that comes with its own
interpreter + standard library (the end-user may not have Python). In the
installation, I am giving the option to use the user's own installed Python
interpreter + library if they have it. However, I need the location of that. I
can ask the user to find it manually, but I was hoping there was an automatic
way.
Since my installer uses my included interpreter, `sys.prefix` refers to the
included interpreter (I know this because I tried it out, I have Python 2.7
and 3.3 installed).
I also tried using `subprocess.call`: `subprocess.call(['py', '-c', '"import
sys; print sys.prefix"'])` which would use the standard Python interpreter if
there was one, but I'm not sure how to capture this output.
Thus, are there any other ways to find out if there is a default Python
version installed on the user's computer and where?
Answer: Some users might have placed their local Python directory into the system's
`PATH` environment variable and some might even have set the `PYTHONPATH`
environment variable.
You could try the following:
    import os
    if "python" in os.environ["PATH"].lower():
        pass  # a Python directory is on PATH; confirm the executable actually is there
    if "PYTHONPATH" in os.environ:
        pass  # same idea: PYTHONPATH hints at an existing installation
As for the `subprocess.call(...)`, set the `stdout` parameter for something
that passes for a file object, and afterwards just `.read()` the file object
you gave to see the output from the call.
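A sketch of that idea using a pipe instead of a temp file — `Popen`/`communicate` do the reading for you (the `py` launcher from the question would work the same way as plain `python` here):
    import subprocess
    proc = subprocess.Popen(['python', '-c', 'import sys; sys.stdout.write(sys.prefix)'],
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    print(out.strip())  # e.g. C:\Python27 if a system Python is first on PATH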
|
sql import export command error using pyodbc module python
Question: I am trying to run the following DB2 command through the Python pyodbc module.
IBM DB2 command: "DB2 export to C:\file.ixf of ixf select * from emp_hc"
I am successfully connected to the DSN using the pyodbc module in Python, and it
works fine for select statements, but when I try to execute the following
command from Python IDLE 3.3.2:
cursor.execute(" export to ? of ixf select * from emp_hc",r"C:\file.ixf")
pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664]
SQL0104N An unexpected token "db2 export to ? of" was found following "BEGIN-
OF-STATEMENT". Expected tokens may include: "". SQLSTATE=42601\r\n (-104)
(SQLExecDirectW)')
or
    cursor.execute(" export to C:\file.ixf of ixf select * from emp_hc")
which raises:
    Traceback (most recent call last):
      File "", line 1, in
        cursor.execute("export to C:\myfile.ixf of ixf select * from emp_hc")
    pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0007N The character "\" following "export to C:" is not valid. SQLSTATE=42601\r\n (-7) (SQLExecDirectW)')
Am I doing something wrong? Any help will be greatly appreciated.
Answer: `db2 export` is a command run in the shell, not through SQL via odbc.
It's possible to write database query results to a file with python and
pyodbc, but `db2 export` will almost certainly be faster and effortlessly
handle file formatting if you need it for import.
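If you do want to produce the file from Python rather than shelling out to the `db2` command, a rough sketch with pyodbc plus the csv module (note this writes plain CSV, not IXF, and the DSN name is a placeholder):
    import csv
    import pyodbc
    conn = pyodbc.connect('DSN=mydsn')  # placeholder DSN name
    cursor = conn.cursor()
    cursor.execute("select * from emp_hc")
    with open(r'C:\file.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])  # column headers
        for row in cursor:
            writer.writerow(list(row))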
|
Optimizing a python function with numpy arrays
Question: I have been trying to optimize a python script I wrote for the last two days.
Using several profiling tools (cProfile, line_profiler etc.) I narrowed down
the issue to the following function below.
`df` is a numpy array with 3 columns and +1,000,000 rows (data type is float).
Using line_profiler, I found out that the function spends most of the time
whenever it needs to access the numpy array.
`full_length += head + df[rnd_truck, 2]`
and
`full_weight += df[rnd_truck,1]`
take most of the time, followed by
`full_length = df[rnd_truck,2]`
`full_weight = df[rnd_truck,1]`
lines.
As far as I see the bottleneck is caused by the access time the function tries
to grab a number from the numpy array.
When I run the function as `MonteCarlo(df, 15., 1000.)` it takes 37 seconds to
call the function 1,000,000 times on an i7 3.40GHz 64-bit Windows machine
with 8GB RAM. In my application, I need to run it 1,000,000,000 times to ensure
convergence, which brings the execution time to more than an hour. I tried
using the `operator.add` method for the summation lines, but it did not help
me at all. It looks like I have to figure out a faster way to access this
numpy array.
Any ideas would be welcome!
def MonteCarlo(df,head,span):
# Pick initial truck
rnd_truck = np.random.randint(0,len(df))
full_length = df[rnd_truck,2]
full_weight = df[rnd_truck,1]
# Loop using other random truck until the bridge is full
while 1:
rnd_truck = np.random.randint(0,len(df))
full_length += head + df[rnd_truck, 2]
if full_length > span:
break
else:
full_weight += df[rnd_truck,1]
# Return average weight per feet on the bridge
return(full_weight/span)
Below is a portion of the `df` numpy array I am using:
In [31] df
Out[31]:
array([[ 12. , 220.4, 108.4],
[ 11. , 220.4, 106.2],
[ 11. , 220.3, 113.6],
...,
[ 4. , 13.9, 36.8],
[ 3. , 13.7, 33.9],
[ 3. , 13.7, 10.7]])
Answer: As noted by other people, this isn't vectorized at all, so your slowness is
really due to slowness of the Python interpreter. [Cython](http://cython.org)
can help you a lot here with minimal changes:
>>> %timeit MonteCarlo(df, 5, 1000)
10000 loops, best of 3: 48 us per loop
>>> %timeit MonteCarlo_cy(df, 5, 1000)
100000 loops, best of 3: 3.67 us per loop
where `MonteCarlo_cy` is just (in the IPython notebook, after `%load_ext
cythonmagic`):
%%cython
import numpy as np
cimport numpy as np
def MonteCarlo_cy(double[:, ::1] df, double head, double span):
# Pick initial truck
cdef long n = df.shape[0]
cdef long rnd_truck = np.random.randint(0, n)
cdef double full_weight = df[rnd_truck, 1]
cdef double full_length = df[rnd_truck, 2]
# Loop using other random truck until the bridge is full
while True:
rnd_truck = np.random.randint(0, n)
full_length += head + df[rnd_truck, 2]
if full_length > span:
break
else:
full_weight += df[rnd_truck, 1]
# Return average weight per feet on the bridge
return full_weight / span
|
How to use IPython.parallel map() with generators as input to function
Question: I am trying to use IPython.parallel map. The inputs to the function I wish to
parallelize are generators. Because of size/memory it is not possible for me
to convert the generators to lists. See code below:
from itertools import product
from IPython.parallel import Client
c = Client()
v = c[:]
c.ids
def stringcount(longstring, substrings):
scount = [longstring.count(s) for s in substrings]
return scount
substrings = product('abc', repeat=2)
longstring = product('abc', repeat=3)
# This is what I want to do in parallel
    # It should be 'for longs in longstring'; I use range() because it can get long.
for num in range(10):
longs = longstring.next()
subs = substrings.next()
print(subs, longs)
count = stringcount(longs, subs)
print(count)
# This does not work, and I understand why.
# I don't know how to fix it while keeping longstring and substrings as
# generators
v.map(stringcount, longstring, substrings)
for r in v:
print(r.get())
Answer: You can't use `View.map` with a generator without walking through the entire
generator first. But you can write your own custom function to submit batches
of tasks from a generator and wait for them incrementally. I don't have a more
interesting example, but I can illustrate with a terrible implementation of a
prime search.
Start with our token 'data generator':
from math import sqrt
def generate_possible_factors(N):
"""generator for iterating through possible factors for N
yields 2, every odd integer <= sqrt(N)
"""
if N <= 3:
return
yield 2
f = 3
last = int(sqrt(N))
while f <= last:
yield f
f += 2
This just generates a sequence of integers to use when testing if a number is
prime.
Now our trivial function that we will use as a task with `IPython.parallel`
def is_factor(f, N):
"""is f a factor of N?"""
return (N % f) == 0
and a complete implementation of prime check using the generator and our
factor function:
def dumb_prime(N):
"""dumb implementation of is N prime?"""
for f in generate_possible_factors(N):
if is_factor(f, N):
return False
return True
A parallel version that only submits a limited number of tasks at a time:
def parallel_dumb_prime(N, v, max_outstanding=10, dt=0.1):
"""dumb_prime where each factor is checked remotely
Up to `max_outstanding` factors will be checked in parallel.
Submission will halt as soon as we know that N is not prime.
"""
tasks = set()
# factors is a generator
factors = generate_possible_factors(N)
while True:
try:
# submit a batch of tasks, with a maximum of `max_outstanding`
for i in range(max_outstanding-len(tasks)):
f = factors.next()
tasks.add(v.apply_async(is_factor, f, N))
except StopIteration:
# no more factors to test, stop submitting
break
# get the tasks that are done
ready = set(task for task in tasks if task.ready())
while not ready:
# wait a little bit for some tasks to finish
v.wait(tasks, timeout=dt)
ready = set(task for task in tasks if task.ready())
for t in ready:
# get the result - if True, N is not prime, we are done
if t.get():
return False
# update tasks to only those that are still pending,
# and submit the next batch
tasks.difference_update(ready)
# check the last few outstanding tasks
        for task in tasks:
            if task.get():
                return False
# checked all candidates, none are factors, so N is prime
return True
This submits a limited number of tasks at a time, and as soon as we know that
N is not prime, we stop consuming the generator.
To use this function:
from IPython import parallel
rc = parallel.Client()
view = rc.load_balanced_view()
for N in range(900,1000):
if parallel_dumb_prime(N, view, 10):
print N
A more complete illustration [in a
notebook](http://nbviewer.ipython.org/6203173).
|
Don't write final new line character to a file
Question: I have looked around StackOverflow and couldn't find an answer to my specific
question so forgive me if I have missed something.
import re
target = open('output.txt', 'w')
for line in open('input.txt', 'r'):
match = re.search(r'Stuff', line)
if match:
match_text = match.group()
target.write(match_text + '\n')
else:
continue
target.close()
The file I am parsing is huge, so I need to process it line by line.
This (of course) leaves an additional newline at the end of the file.
How should I best change this code so that on the final iteration of the 'if
match' branch it doesn't put the extra newline character at the end of the file?
Should it look through the file again at the end and remove the last line
(that seems a bit inefficient though)?
The existing StackOverflow questions I have found cover removing all new lines
from a file.
If there is a more pythonic / efficient way to write this code I would welcome
suggestions for my own learning also.
Thanks for the help!
Answer: Write the newline of each line at the beginning of the _next_ line. To avoid
writing a newline at the beginning of the first line, use a variable that is
initialized to an empty string and then set to a newline in the loop.
import re
with open('input.txt') as source, open('output.txt', 'w') as target:
newline = ''
for line in source:
match = re.search(r'Stuff', line)
if match:
target.write(newline + match.group())
newline = '\n'
I also restructured your code a bit (the `else: continue` is not needed,
because what else is the loop going to do?) and changed it to use the `with`
statement so the files are automatically closed.
|
Using session in flask app
Question: I'm sure there is something that I'm clearly not understanding about session
in Flask but I want to save an ID between requests. I haven't been able to get
session to work so I tried a simple Flask app below and when I load it I get
an internal server error
#!/usr/bin/env python
from flask import Flask, session
app = Flask(__name__)
@app.route('/')
def run():
session['tmp'] = 43
return '43'
if __name__ == '__main__':
app.run()
Why can't I use store that value in session?
Answer: According to [Flask sessions
documentation](http://flask.pocoo.org/docs/quickstart/#sessions):
> ... What this means is that the user could look at the contents of your
> cookie but not modify it, unless they know the secret key used for signing.
>
> In order to use sessions you **have to set a secret key**.
Set _secret key_. And you should return string, not int.
#!/usr/bin/env python
from flask import Flask, session
app = Flask(__name__)
@app.route('/')
def run():
session['tmp'] = 43
return '43'
if __name__ == '__main__':
app.secret_key = 'A0Zr98j/3yX R~XHH!jmN]LWX/,?RT'
app.run()
|
Python : How do GTK widget instances work?
Question: This is driving me literally crazy:
* Why is the method `the_method(self, button)` referencing two separate instances of `self.button` depending on what object calls it?
* How do I reference an object instance explicitly?
Thank you for your help
* * *
import os, stat, time
import gtk
class Lister(object):
OPEN_IMAGE = gtk.image_new_from_stock(gtk.STOCK_DND_MULTIPLE, gtk.ICON_SIZE_BUTTON)
CLOSED_IMAGE = gtk.image_new_from_stock(gtk.STOCK_DND, gtk.ICON_SIZE_BUTTON)
def __init__(self, dname = None):
filename = "foo"
self.hbox = gtk.HBox()
self.button = gtk.Button()
self.button.set_image(self.OPEN_IMAGE)
self.button.connect('clicked', self.open_file)
self.hbox.pack_start(self.button, False)
def open_file(self, button):
Buttons().the_method("foo")
# return
class Buttons(object):
OPEN_IMAGE = gtk.image_new_from_stock(gtk.STOCK_DND_MULTIPLE, gtk.ICON_SIZE_BUTTON)
CLOSED_IMAGE = gtk.image_new_from_stock(gtk.STOCK_DND, gtk.ICON_SIZE_BUTTON)
def __init__(self):
self.button = gtk.Button() # THIS is the button to modify
self.hbox = gtk.HBox()
self.hbox.pack_start(self.button, False)
self.button.set_image(self.OPEN_IMAGE)
self.button.connect('clicked', self.the_method)
def the_method(self, button):
print vars(self)
self.button.set_image(self.CLOSED_IMAGE)
class GUI(object):
def delete_event(self, widget, event, data=None):
gtk.main_quit()
return False
def __init__(self):
self.window = gtk.Window()
self.window.set_size_request(300, 600)
self.window.connect("delete_event", self.delete_event)
vbox = gtk.VBox()
vbox.pack_start(Buttons().hbox, False, False, 1)
vbox.pack_start(Lister().hbox)
self.window.add(vbox)
self.window.show_all()
return
def main():
gtk.main()
if __name__ == "__main__":
GUI()
main()
Answer: > Why is the method the_method(self, button) referencing two separate
> instances of self.button depending on what object calls it?
It does not depend on the caller: You are explicitly creating a new instance
of Buttons every time (e.g. `Buttons().hbox` creates a new instance and gets
the hbox from it).
> How do I reference an object instance explicitly?
You already refer to an instance, it's just a new instance every time. These
two calls will call the method on the same instance, as expected:
my_buttons = Buttons()
my_buttons.the_method("foo")
my_buttons.the_method("foo")
This is the same in every object oriented language, so any OO tutorial could
be helpful but <http://docs.python.org/tutorial/classes.html> may help
understand how classes work in python.
|
python: sort a list of lists by an item in the sublist
Question: I have a list of lists that contains users and scores as follows:
[["user1", 100], ["user2", 234], ["user3", 131]...]
I want to produce a list that sorts the users by score in declining order:
[["user2", 234], ["user3", 131], ["user1", 100]...]
How might I go about doing this kind of sort?
Answer:
>>> li = [["user1", 100], ["user2", 234], ["user3", 131]]
>>>
>>> import operator
>>>
>>> sorted(li, key=operator.itemgetter(1)) # Ascending order
[['user1', 100], ['user3', 131], ['user2', 234]]
>>> sorted(li, key=operator.itemgetter(1), reverse=True) # Reverse Sort
[['user2', 234], ['user3', 131], ['user1', 100]]
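If you prefer not to import `operator`, a `lambda` key gives the same result (an equivalent sketch):
>>> sorted(li, key=lambda item: item[1], reverse=True)
[['user2', 234], ['user3', 131], ['user1', 100]]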
|
Python removing words from strings
Question: My desired program has to read a text file like this:
EASTS versus WESTS
EASTS have scored 25:13
WESTS have scored 26:28
WESTS have scored 40:23
WESTS have scored 42:01
and run the following output:
WESTS 3
EASTS 1
I think I need to first put it into groups. Remove the newline characters.
Then remove everything but the capital letters in the first line and assign
them to separate variables. Then search the text for the number of times these
variables occur. So that would mean a = 2 and b = 4, then take 1 off each total
and have that as the result. This is what I have so far:
import string

teams = []
for word in open('commentary.txt'):
    word = word[:-1]  # gets away the /n characters.
    word = word.strip("versus")  # This line doesn't work
    teams.append(word)
print(teams)
I think I know what to do, but I don't know how... Any help would be appreciated :D
Thanks
Answer: You need to use a dict or a Counter for this. Something like this:
from collections import Counter

counter = Counter()
for line in open('commentary.txt'):
    if 'versus' in line:
        continue            # skip the "EASTS versus WESTS" header line
    words = line.split()
    counter[words[0]] += 1  # first word of each score line is the team name

for team in counter:
    print team, counter[team]
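If the teams should come out in declining order of wins (as in the desired output), `Counter.most_common()` already sorts by count, highest first; a small sketch under the same assumptions:
for team, count in counter.most_common():
    print team, count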
|
Static files are not loaded in a Django website hosted on Bluehost
Question: as you can see it live on the website: <http://www.workshopvenues.com> the
**static** files that are under /assets/* are not loaded correctly.
I expect this url (for example) to be valid:
<http://www.workshopvenues.com/assets/ico/apple-touch-
icon-144-precomposed.png> but it's not (as you can verify clicking on it).
This is what I have in my **settings.py**
STATIC_ROOT = '/home6/ptlugorg/workshopvenues/workshopvenues/workshopvenues/assets/'
STATIC_URL = 'http://www.workshopvenues.com/assets/'
STATICFILES_DIRS = (
'/home6/ptlugorg/workshopvenues/workshopvenues/workshopvenues/assets/',
)
The paths are correct, I've double checked them:
[email protected] [~/workshopvenues/workshopvenues/workshopvenues/assets]# pwd
/home6/ptlugorg/workshopvenues/workshopvenues/workshopvenues/assets
if it may help, I'm serving the website using fastcgi. I've followed the
instructions here
<http://simplyargh.blogspot.co.uk/2012/04/python-27-django-14-on-
bluehost.html>
and these are my configuration files.
**.htaccess**
[email protected] [~/public_html/workshopvenues]# cat .htaccess
AddHandler fcgid-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ workshopvenues.fcgi/$1 [QSA,L]
**workshopvenues.fcgi**
[email protected] [~/public_html/workshopvenues]# cat workshopvenues.fcgi
#!/home6/ptlugorg/python27/bin/python27
import sys, os
# Add a custom Python path.
sys.path.insert(0, "/home6/ptlugorg/python27")
sys.path.insert(13, "/home6/ptlugorg/workshopvenues/workshopvenues")
os.environ['DJANGO_SETTINGS_MODULE'] = 'workshopvenues.settings'
from django.core.servers.fastcgi import runfastcgi
runfastcgi(method="threaded", daemonize="false")
Just in case you are wondering about the **permissions** :
[email protected] [~/workshopvenues/workshopvenues/workshopvenues]# ls -al
total 52
drwxr-xr-x 3 ptlugorg ptlugorg 4096 Aug 9 03:31 ./
drwxr-xr-x 4 ptlugorg ptlugorg 4096 Aug 9 02:52 ../
drwxr-xr-x 11 ptlugorg ptlugorg 4096 Aug 8 15:33 assets/
-rw-r--r-- 1 ptlugorg ptlugorg 0 Aug 8 14:23 __init__.py
-rw-r--r-- 1 ptlugorg ptlugorg 144 Aug 8 14:25 __init__.pyc
-rw-r--r-- 1 ptlugorg ptlugorg 430 Aug 8 15:20 secrets.py
-rw-r--r-- 1 ptlugorg ptlugorg 527 Aug 8 15:20 secrets.pyc
-rw-r--r-- 1 ptlugorg ptlugorg 5779 Aug 9 03:31 settings.py
-rw-r--r-- 1 ptlugorg ptlugorg 3399 Aug 9 03:31 settings.pyc
-rw-r--r-- 1 ptlugorg ptlugorg 614 Aug 8 14:23 urls.py
-rw-r--r-- 1 ptlugorg ptlugorg 467 Aug 8 15:23 urls.pyc
-rw-r--r-- 1 ptlugorg ptlugorg 1150 Aug 8 14:23 wsgi.py
-rw-r--r-- 1 ptlugorg ptlugorg 1058 Aug 8 15:21 wsgi.pyc
[email protected] [~/workshopvenues/workshopvenues/workshopvenues/assets]# ls -al
total 48
drwxr-xr-x 11 ptlugorg ptlugorg 4096 Aug 8 15:33 ./
drwxr-xr-x 3 ptlugorg ptlugorg 4096 Aug 9 03:31 ../
drwxr-xr-x 5 ptlugorg ptlugorg 4096 Aug 8 15:33 admin/
drwxr-xr-x 5 ptlugorg ptlugorg 4096 Aug 8 14:23 bootstrap/
drwxr-xr-x 2 ptlugorg ptlugorg 4096 Aug 8 14:23 css/
drwxr-xr-x 5 ptlugorg ptlugorg 4096 Aug 8 15:33 django_extensions/
drwxr-xr-x 2 ptlugorg ptlugorg 4096 Aug 8 14:23 font-awesome/
drwxr-xr-x 2 ptlugorg ptlugorg 4096 Aug 8 14:23 ico/
drwxr-xr-x 7 ptlugorg ptlugorg 4096 Aug 8 14:23 img/
drwxr-xr-x 2 ptlugorg ptlugorg 4096 Aug 8 14:23 js/
drwxr-xr-x 5 ptlugorg ptlugorg 4096 Aug 8 14:23 prettyPhoto/
Everything seems correct, but it still doesn't work as expected. Do you have
any idea of where the problem could be? What tests I could do to verify if
something is wrong?
Thanks for your help!
Answer: Thanks to a kind user in **#django** (Freenode IRC channel) named **mattmcc**,
I've been able to fix it. It was actually a problem with the **STATIC_ROOT**:
it was pointing to the physical file location instead of pointing to the
**document root** location.
The correct **settings.py** is like this:
STATIC_ROOT = '/home6/ptlugorg/public_html/workshopvenues/assets/'
STATIC_URL = '/assets/'
STATICFILES_DIRS = (
'/home6/ptlugorg/workshopvenues/workshopvenues/workshopvenues/assets/',
)
Everything works now :)
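For completeness (standard Django behaviour, not something specific to this host): with this layout it is `collectstatic` that copies the files from STATICFILES_DIRS into STATIC_ROOT so they can be served under /assets/:
python manage.py collectstatic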
|
daemon threads in Python
Question: After reading: <http://pymotw.com/2/threading/#daemon-vs-non-daemon-threads> I
expect following code to terminate after 2 seconds:
from threading import Thread
from time import sleep
def a():
i = 0
while 1:
print i
i+=1
t = Thread(target=a)
t.setDaemon(True)
t.run()
sleep(2)
However, it keeps printing numbers forever. Am I missing something here? I am
on win7. I get same behaviour from windows shell and idle.
Answer: You should call `t.start()`, not `t.run()`. The first one will spawn a new
thread and call `run` itself from there. Calling run on your own causes you to
execute the `a` function in your current thread.
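For reference, the original snippet with only that one change (`start()` instead of `run()`) exits after roughly 2 seconds, because the daemon thread is killed when the main thread finishes sleeping:
from threading import Thread
from time import sleep

def a():
    i = 0
    while 1:
        print i
        i += 1

t = Thread(target=a)
t.setDaemon(True)
t.start()   # spawns a new thread; run() would execute a() in the main thread
sleep(2)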
|
PHP silently fails sending data to graphite over UDP using fsockopen and fwrite
Question: ## The code
I'm sending metrics to [graphite](https://github.com/graphite-
project/graphite-web "graphite") via UDP using both php and python.
My **python** client looks like this
#!/usr/bin/python
import time
from socket import socket
sock = socket()
try:
sock.connect( ('127.0.0.1', 2003) )
except:
print 'network error'
sys.exit(12)
message = ("some.custom.metric.python 1 %d\n" % (int( time.time() )))
print message
sock.sendall(message)
Output:
> some.custom.metric.python 1 1376045467
And my **php** client like this
<?php
try {
$fp = fsockopen("udp://127.0.0.1", 2003, $errno, $errstr);
if (!empty($errno)) echo $errno;
if (!empty($errstr)) echo $errstr;
$message = "some.custom.metric.php 1 ".time().PHP_EOL;
$bytes = fwrite($fp, $message);
echo $message;
} catch (Exception $e) {
echo "\nNetwork error: ".$e->getMessage();
}
Output:
> some.custom.metric.php 1 1376042961
## Testing
I start **carbon** enabling debug output:
/opt/graphite/bin/carbon-cache.py --debug start
When I run my **python** client it works just fine, and I can see it on the
debug output
09/08/2013 13:13:05 :: [listener] MetricLineReceiver connection with 127.0.0.1:58134 established
09/08/2013 13:13:05 :: [listener] MetricLineReceiver connection with 127.0.0.1:58134 closed cleanly
I do the same via CLI using **netcat**
echo "some.custom.metric.netcat 1 `date +%s`" | nc -w 1 127.0.0.1 2003
And I can see the connection in the debug output
>
> 09/08/2013 13:17:46 :: [listener] MetricLineReceiver connection with
> 127.0.0.1:58136 established
> 09/08/2013 13:17:48 :: [listener] MetricLineReceiver connection with
> 127.0.0.1:58136 closed cleanly
>
## The problem
My PHP client is never communicating with carbon. Even if I use a different
port where there's no app listening, my PHP just tells me everything's fine. If
I do the same on my Python client, I get a network error.
According to the PHP docs, fsockopen **never** fails when using UDP because of
the nature of the protocol, but I should get an error when executing the
**fwrite**. In my case the fwrite always returns the len() of the $message no
matter which host/port I use when opening the socket.
If I use a wrong port with netcat or the python client i get a network error
as expected.
PHP-cli has `error_display = On` and `error_reporting = E_ALL`. I've tested
this on PHP 5.4.4-14 on debian 7.1 and PHP 5.5 on Windows 7.
Has anybody run into something similar to this? I'm almost sure there's no
problem with my graphite or network configurations, so I bet it has something
to do with PHP.
Answer: Check whether carbon is actually accepting UDP (its UDP listener is off by default). The code (simplified version) I used for testing (it uses TCP):
$line = "foo.bla 1 " . time() . "\n";
$fp = fsockopen('127.0.0.1', 2003, $err, $errc, 1);
fwrite($fp, $line);
fclose($fp);
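A quick way to confirm the UDP side from Python is to send a single datagram and watch carbon's debug output; if nothing shows up, the UDP receiver is most likely disabled in carbon.conf (the ENABLE_UDP_LISTENER setting). A minimal sketch:
import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
message = "some.custom.metric.udp 1 %d\n" % int(time.time())
sock.sendto(message, ('127.0.0.1', 2003))  # assumes the UDP receiver is configured on the same port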
|
Python: how to setup python-ldap to ignore referrals?
Question: How can I avoid getting an (undocumented) exception in the following code?
import ldap
import ldap.sasl
connection = ldap.initialize('ldaps://server:636', trace_level=0)
connection.set_option(ldap.OPT_REFERRALS, 0)
connection.protocol_version = 3
sasl_auth = ldap.sasl.external()
connection.sasl_interactive_bind_s('', sasl_auth)
baseDN = 'ou=org.com,ou=xx,dc=xxx,dc=com'
filter = 'objectclass=*'
try:
result = connection.search_s(baseDN, ldap.SCOPE_SUBTREE, filter)
except ldap.REFERRAL, e:
print "referral"
except ldap.LDAPError, e:
print "Ldaperror"
It happens that the baseDN given in the example is a referral. When I run this
code I get `referral` as output.
What I would want is for python-ldap to just skip or ignore it without throwing
a strange exception (I cannot find documentation about it).
(This may or may not help.) The problem happened when I was searching a baseDN
higher up in the tree. When I was searching 'ou=xx,dc=xxx,dc=com' it started to
freeze on my production env, while on the development env everything works
great. When I started looking at it I found that it was freezing on referral
branches. How can I tell python-ldap to ignore referrals? The code above does
not work as I want.
Answer: This is a working example, see if it helps.
def ldap_initialize(remote, port, user, password, use_ssl=False, timeout=None):
    prefix = 'ldap'
    if use_ssl is True:
        prefix = 'ldaps'
        # ask ldap to ignore certificate errors
        ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
    if timeout:
        ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, timeout)
    ldap.set_option(ldap.OPT_REFERRALS, ldap.OPT_OFF)
    server = prefix + '://' + remote + ':' + '%s' % port
    l = ldap.initialize(server)
    l.simple_bind_s(user, password)
    return l  # return the bound connection so callers can search with it
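Assuming the helper returns the bound connection as above, usage would look something like this (a sketch; host and credentials are illustrative):
conn = ldap_initialize('server', 636, 'cn=admin,dc=xxx,dc=com', 'secret', use_ssl=True, timeout=10)
result = conn.search_s('ou=xx,dc=xxx,dc=com', ldap.SCOPE_SUBTREE, 'objectclass=*')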
|
HTTPError: HTTP Error 401: basic auth failed. Bing Search
Question: I have written some code to get URLs from Bing search. It gives the error
mentioned above.
import urllib
import urllib2
accountKey = 'mykey'
username =accountKey
queryBingFor = "'JohnDalton'"
quoted_query = urllib.quote(queryBingFor)
rootURL = "https://api.datamarket.azure.com/Bing/Search/"
searchURL = rootURL + "Image?$format=json&Query=" + quoted_query
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, searchURL,username,accountKey)
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
readURL = urllib2.urlopen(searchURL).read()
I have set username = accountKey as someone told me it has to be the same for
both. Anyway, I didn't get a username when I made the Bing webmaster account.
Or is it just my email? Excuse me if I have made novice mistakes; I've just
started Python.
Answer: In the absence of any other information, it seems unlikely that what is
effectively your username and password would be the same thing if this site
actually needs this form of authorisation.
Are you able to make it work by doing a request in your browser like the
following?
https://mykey:mykey@api.datamarket.azure.com/Bing/Search/Image?$format=json&Query=blah
If so, then at least it sounds like the credentials are right and that it's the
way you are using them in Python that's wrong, but more likely the above will
fail with the same error, suggesting the credentials themselves are not valid.
Also see this question, which suggests there may be a problem if the site
doesn't do 'standard' auth: [urllib2 HTTPPasswordMgr not working - Credentials
not sent
error](http://stackoverflow.com/questions/9495279/urllib2-httppasswordmgr-not-
working-credentials-not-sent-error)
It also suggests that you might need to pass the top-level URL of the site to
the password manager rather than the specific search URL.
Finally, it might be worth adapting this code:
<http://www.voidspace.org.uk/python/articles/authentication.shtml>
for your site to check the auth realm and scheme the site is sending you to
check they're supported.
|
How to get Process Owner by Python using WMI?
Question: I tried to get some information about Process Owner, using WMI. I tried to run
this script:
import win32com.client

process_wmi = set()
strComputer = "."
objWMIService = win32com.client.Dispatch("WbemScripting.SWbemLocator")
objSWbemServices = objWMIService.ConnectServer(strComputer, "root\cimv2")
process_list = objSWbemServices.ExecQuery("Select * from Win32_Process")
for process in process_list:
    owner = process.GetOwner
    if owner != 0:
        print('Access denied')
    else:
        print('process: ', process.Name, 'PID: ', process.ProcessId, 'Owner: ', owner)
Of course, I get `owner = 0 (Successful Completion)`.
When I tried to call `process.GetOwner()`, I get this error: `TypeError: 'int'
object is not callable`.
How do I use this method without errors? With what parameters, or with what
flags maybe?
I tried to adapt and use the method
[here](http://stackoverflow.com/questions/5078570/how-to-set-process-priority-
using-pywin32-and-wmi/12631794#12631794), but I can't convert the code to my
situation and get the process owner. =(
Or maybe someone knows another method for getting information about the process
owner. Maybe with WinAPI methods?
Thank you for your help!
Answer: I would suggest using the `psutil` library. I was using the winapi, and wmi,
but it's terribly slow :( `psutil` is much, much faster and gives you a
convenient API for working with processes.
You can achieve the same thing like this:
import psutil

for process in psutil.get_process_list():
    try:
        print('Process: %s, PID: %s, Owner: %s' % (process.name, process.pid,
                                                   process.username))
    except psutil.AccessDenied:
        print('Access denied!')
And because only the username can give you Access denied you can in `except`
do:
    except psutil.AccessDenied:
        print('Process: %s, PID: %s, Owner: DENIED' % (process.name, process.pid))
If you can use only pywin32 and wmi then this will work:
import wmi

for i in wmi.WMI().Win32_Process():
    print('%s, %s, %s' % (i.Name, i.ProcessId, i.GetOwner()[2]))
|
Wrong decimal calculations with pandas
Question: I have a data frame (df) in pandas with four columns and I want a new column
to represent the mean of these four columns: df['mean'] = df.mean(1)
1 2 3 4 mean
NaN NaN NaN NaN NaN
5.9 5.4 2.4 3.2 4.225
0.6 0.7 0.7 0.7 0.675
2.5 1.6 1.5 1.2 1.700
0.4 0.4 0.4 0.4 0.400
So far so good. But when I save the results to a csv file this is what I
found:
5.9,5.4,2.4,3.2,4.2250000000000005
0.6,0.7,0.7,0.7,0.6749999999999999
2.5,1.6,1.5,1.2,1.7
0.4,0.4,0.4,0.4,0.4
I guess I can force the format in the mean column, but any idea why this is
happening?
I am using winpython with python 3.3.2 and pandas 0.11.0
Answer: You could use the `float_format` parameter:
import pandas as pd
import io
content = '''\
1 2 3 4 mean
NaN NaN NaN NaN NaN
5.9 5.4 2.4 3.2 4.225
0.6 0.7 0.7 0.7 0.675
2.5 1.6 1.5 1.2 1.700
0.4 0.4 0.4 0.4 0.400'''
df = pd.read_table(io.BytesIO(content), sep='\s+')
df.to_csv('/tmp/test.csv', float_format='%g', index=False)
yields
1,2,3,4,mean
,,,,
5.9,5.4,2.4,3.2,4.225
0.6,0.7,0.7,0.7,0.675
2.5,1.6,1.5,1.2,1.7
0.4,0.4,0.4,0.4,0.4
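As for why it happens: the values are stored as binary floating point, so a mean like 4.225 has no exact binary representation and shows the usual representation noise when written out in full. If you want to fix the data rather than just the output format, rounding the computed column is an option (a sketch, assuming three decimals is enough):
import numpy as np
df['mean'] = np.round(df.mean(1), 3)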
|
SocketServer used to control PiBot remotely (python)
Question: This is my first question! (Despite using the site to find most of the answers
to programming questions I've ever had.)
I have created a PiBotController which I plan to run on my laptop, and I want
to pass the controls (inputs from the arrow keys) to the Raspberry Pi
controlling my robot. The hardware side of this isn't the issue: I have created
a program that responds to arrow key inputs, and I can control the motors on
the Pi through an SSH connection.
Searching online I found the following basic server and client code using
socketserver, which I can get to work sending a simple string.
Server:
import socketserver
class MyTCPHandler(socketserver.BaseRequestHandler):
"""
The RequestHandler class for our server.
It is instantiated once per connection to the server, and must
override the handle() method to implement communication to the
client.
"""
def handle(self):
# self.request is the TCP socket connected to the client
self.data = self.request.recv(1024).strip()
print("{} wrote:".format(self.client_address[0]))
print(self.data)
# just send back the same data, but upper-cased
self.request.sendall(self.data.upper())
if __name__ == "__main__":
HOST, PORT = "localhost", 9999
# Create the server, binding to localhost on port 9999
server = socketserver.TCPServer((HOST, PORT), MyTCPHandler)
# Activate the server; this will keep running until you
# interrupt the program with Ctrl-C
server.serve_forever()
Client:
import socket
import sys
HOST, PORT = "192.168.2.12", 9999
data = "this here data wont send!! "
# Create a socket (SOCK_STREAM means a TCP socket)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
# Connect to server and send data
sock.connect((HOST, PORT))
sock.sendall(bytes(data + "\n", "utf-8"))
# Receive data from the server and shut down
received = str(sock.recv(1024), "utf-8")
finally:
sock.close()
print("Sent: {}".format(data))
print("Received: {}".format(received))
Now this works fine and prints the results both on my Raspberry Pi (server)
and on my laptop (client); however, I have tried multiple times to combine it
into a function that activates along with my key presses and releases, like I
have in my controller.
PiBotController
#import the tkinter module for the GUI and input control
try:
# for Python2
import Tkinter as tk
from Tkinter import *
except ImportError:
# for Python3
import tkinter as tk
from tkinter import *
import socket
import sys
#variables
Drive = 'idle'
Steering = 'idle'
#setting up the functions to deal with key presses
def KeyUp(event):
Drive = 'forward'
drivelabel.set(Drive)
labeldown.grid_remove()
labelup.grid(row=2, column=2)
def KeyDown(event):
Drive = 'reverse'
drivelabel.set(Drive)
labelup.grid_remove()
labeldown.grid(row=4, column=2)
def KeyLeft(event):
Steering = 'left'
steeringlabel.set(Steering)
labelright.grid_remove()
labelleft.grid(row=3, column=1)
def KeyRight(event):
Steering = 'right'
steeringlabel.set(Steering)
labelleft.grid_remove()
labelright.grid(row=3, column=3)
def key(event):
if event.keysym == 'Escape':
root.destroy()
#setting up the functions to deal with key releases
def KeyReleaseUp(event):
Drive = 'idle'
drivelabel.set(Drive)
labelup.grid_remove()
def KeyReleaseDown(event):
Drive = 'idle'
drivelabel.set(Drive)
labeldown.grid_remove()
def KeyReleaseLeft(event):
Steering = 'idle'
steeringlabel.set(Steering)
labelleft.grid_remove()
def KeyReleaseRight(event):
Steering = 'idle'
steeringlabel.set(Steering)
labelright.grid_remove()
#connection functions
def AttemptConnection():
connectionmessagetempvar = connectionmessagevar.get()
connectionmessagevar.set(connectionmessagetempvar + "\n" + "Attempting to connect...")
def transmit(event):
HOST, PORT = "192.168.2.12", 9999
data = "this here data wont send!! "
# Create a socket (SOCK_STREAM means a TCP socket)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
# Connect to server and send data
sock.connect((HOST, PORT))
sock.sendall(bytes(data + "\n", "utf-8"))
# Receive data from the server and shut down
received = str(sock.recv(1024), "utf-8")
finally:
sock.close()
print("Sent: {}".format(data))
print("Received: {}".format(received))
#setting up GUI window
root = tk.Tk()
root.minsize(300,140)
root.maxsize(300,140)
root.title('PiBot Control Centre')
root.grid_columnconfigure(0, minsize=50)
root.grid_columnconfigure(1, minsize=35)
root.grid_columnconfigure(2, minsize=35)
root.grid_columnconfigure(3, minsize=35)
root.grid_rowconfigure(2, minsize=35)
root.grid_rowconfigure(3, minsize=35)
root.grid_rowconfigure(4, minsize=35)
root.configure(background='white')
root.option_add("*background", "white")
#set up the labels to display the current drive states
drivelabel = StringVar()
Label(root, textvariable=drivelabel).grid(row=0, column=1, columnspan=2)
steeringlabel = StringVar()
Label(root, textvariable=steeringlabel).grid(row=1, column=1, columnspan=2)
Label(root, text="Drive: ").grid(row=0, column=0, columnspan=1)
Label(root, text="Steering: ").grid(row=1, column=0, columnspan=1)
#set up the buttons and message for connecting etc..
messages=tk.Frame(root, width=150, height=100)
messages.grid(row=1,column=4, columnspan=2, rowspan=4)
connectionbutton = Button(root, text="Connect", command=AttemptConnection)
connectionbutton.grid(row=0, column=4)
connectionmessagevar = StringVar()
connectionmessage = Message(messages, textvariable=connectionmessagevar, width=100, )
connectionmessage.grid(row=1, column=1, rowspan=1, columnspan=1)
disconnectionbutton = Button(root, text="Disconnect")
disconnectionbutton.grid(row=0, column=5)
#pictures
photodown = PhotoImage(file="down.gif")
labeldown = Label(root, image=photodown)
labeldown.photodown = photodown
#labeldown.grid(row=4, column=1)
photoup = PhotoImage(file="up.gif")
labelup = Label(root, image=photoup)
labelup.photoup = photoup
#labelup.grid(row=2, column=1)
photoleft = PhotoImage(file="left.gif")
labelleft = Label(root, image=photoleft)
labelleft.photoleft = photoleft
#labelleft.grid(row=3, column=0)
photoright = PhotoImage(file="right.gif")
labelright = Label(root, image=photoright)
labelright.photoright = photoright
#labelright.grid(row=3, column=2)
photoupleft = PhotoImage(file="upleft.gif")
labelupleft = Label(root, image=photoupleft)
labelupleft.photoupleft = photoupleft
#labelupleft.grid(row=2, column=0)
photodownleft = PhotoImage(file="downleft.gif")
labeldownleft = Label(root, image=photodownleft)
labeldownleft.photodownleft = photodownleft
#labeldownleft.grid(row=4, column=0)
photoupright = PhotoImage(file="upright.gif")
labelupright = Label(root, image=photoupright)
labelupright.photoupright = photoupright
#labelupright.grid(row=2, column=2)
photodownright = PhotoImage(file="downright.gif")
labeldownright = Label(root, image=photodownright)
labeldownright.photodownright = photodownright
#labeldownright.grid(row=4, column=2)
#bind all key presses and releases to the root window
root.bind_all('<Key-Up>', KeyUp)
root.bind_all('<Key-Down>', KeyDown)
root.bind_all('<Key-Left>', KeyLeft)
root.bind_all('<Key-Right>', KeyRight)
root.bind_all('<KeyRelease-Up>', KeyReleaseUp)
root.bind_all('<KeyRelease-Down>', KeyReleaseDown)
root.bind_all('<KeyRelease-Left>', KeyReleaseLeft)
root.bind_all('<KeyRelease-Right>', KeyReleaseRight)
root.bind_all('<Key>', key)
root.bind_all('<Key>', transmit)
#set the labels to an initial state
steeringlabel.set('idle')
drivelabel.set('idle')
connectionmessagevar.set ('PiBotController Initiated')
#initiate the root window main loop
root.mainloop()
This program runs fine but then doesn't send any data to the server. (I'm
aware it's still just sending a string, but I thought I'd start with something
easy... and, well, evidently I got stuck, so it is probably for the best.)
Any suggestions to make it work just sending the string, or sending the
variables drive and steering every time they change, would be greatly
appreciated.
Dave xx
**EDIT**
Here is the transmit function. I got it to work in the sense that it sends data
whenever I do a key press/release (like I wanted before); however, it only
sends the initial setting for the variables, 'idle'. Looking at the code now, I
think I should probably move the host/port info and the creation of the socket
connection out of the function that runs every time, but I'm not sure, so here
is what I have right now anyway.
def transmit():
HOST, PORT = "192.168.2.12", 9999
DriveSend = drivelabel.get
SteeringSend = steeringlabel.get
# Create a socket (SOCK_STREAM means a TCP socket)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
# Connect to server and send data
sock.connect((HOST, PORT))
sock.sendall(bytes(Drive + "\n", "utf-8"))
sock.sendall(bytes(Steering + "\n", "utf-8"))
# Receive data from the server and shut down
received = str(sock.recv(1024), "utf-8")
finally:
sock.close()
print("Sent: {}".format(Steering))
print("Sent: {}".format(Drive))
print("Received: {}".format(received))
Answer: The problem is that when Tkinter catches a key event, it triggers the more
specific binding first (for example 'Key-Up'), and the event is never passed
to the more general binding ('Key'). Therefore, when you press the 'up' key,
KeyUp is called, but transmit is never called.
One way to solve this would be to just call transmit() within all the callback
functions (KeyUp, KeyDown, etc).
For example, KeyUp would become
def KeyUp(event):
Drive = 'forward'
drivelabel.set(Drive)
labeldown.grid_remove()
labelup.grid(row=2, column=2)
transmit()
Then you can get rid of the event binding to 'Key'.
Another option would be to make "Drive" and "Steering" into Tkinter.StringVar
objects, then bind to write events using "trace", like this:
Drive = tk.StringVar()
Drive.set('idle')
Drive.trace('w', transmit)
Note that `trace` sends a bunch of arguments to the callback, so you'd have to
edit `transmit` to accept them.
**EDIT**
Ok, I see the problems - there are three.
1. When you write
Drive = 'forward'
in your callback functions, you're **not** setting the variable `Drive` in
your module namespace, you're setting `Drive` in the local function namespace,
so the module-namespace `Drive` never changes, so when `transmit` accesses it,
it's always the same.
2. In `transmit`, you write
DriveSend = drivelabel.get
SteeringSend = steeringlabel.get
This is a good idea, but you're just referencing the functions, not calling
them. You need
DriveSend = drivelabel.get()
SteeringSend = steeringlabel.get()
3. In `transmit`, the values you send through the socket are the module-level
variables `Drive` and `Steering` (which never change as per problem #1),
rather than `DriveSend` and `SteeringSend`.
**Solution:**
I would recommend doing away with all the `Drive` and `Steering` variables
entirely, and just using the `StringVar`s `drivelabel` and `steeringlabel`. Thus
your callbacks can become:
def KeyUp(event):
# Drive = 'forward' (this doesn't actually do any harm, but to avoid confusion I'd just get rid of the Drive variables altogether)
drivelabel.set('forward')
labeldown.grid_remove()
labelup.grid(row=2, column=2)
(and so on for the rest of the callbacks) and your transmit function will
become
def transmit():
HOST, PORT = "192.168.2.12", 9999
DriveSend = drivelabel.get() # Note the ()
SteeringSend = steeringlabel.get()
# Create a socket (SOCK_STREAM means a TCP socket)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
# Connect to server and send data
sock.connect((HOST, PORT))
sock.sendall(bytes(DriveSend + "\n", "utf-8")) # Note Drive ==> DriveSend
sock.sendall(bytes(SteeringSend + "\n", "utf-8")) # Note Steering ==> SteeringSend
# Receive data from the server and shut down
received = str(sock.recv(1024), "utf-8")
finally:
sock.close()
print("Sent: {}".format(SteeringSend)) # Note Steering ==> SteeringSend
print("Sent: {}".format(DriveSend)) # Note Drive ==> DriveSend
print("Received: {}".format(received))
**Modified solution (from OP):**
After playing around with this method for a little while, I found that
constantly changing the variables every 100ms due to a key being held down is
troublesome and causes issues with the smoothness of the motor control when I
am, for example, just driving forward. To fix this I used the following edit in
each function:
def KeyUp(event):
if drivelabel.get() == "forward":
pass
else:
drivelabel.set("forward")
labeldown.grid_remove()
labelup.grid(row=2, column=2)
transmit()
print (drivelabel.get())
The code now checks if the variable is already set to the relevant direction;
if it is, it does nothing, otherwise it modifies it. The print line is just
there for me to check it was working properly and could be removed or
commented out.
|
Python : How can two GTK widgets interact with each other?
Question: What is the proper way to interact with a button without actually clicking on
it?
I have a button "button", that can, upon click :
* Call the method "the_method" that will print what argument (here "filename") has been passed to it
* toggle its own attributes, here its icon.
And I have a treeview, whose rows must, upon double click :
* Call the method "the_method" that will print what argument (here "filename") has been passed to it
* toggle "button"'s attributes, here its icon.
And only the 1st part works. The "foo" function is called (via a callback for
the button, directly for the treeview item) and the argument ("filename") is
retrieved OK, but how to execute part 2 of the job (changing "button"'s
attributes, here its icon)?
* * *
import gtk
class Lister(object):
def __init__(self):
self.hbox = gtk.HBox()
liststore = gtk.ListStore(str)
liststore.append(["foo"])
liststore.append(["bar"])
treeview = gtk.TreeView(liststore)
self.hbox.pack_start(treeview, False)
cell = gtk.CellRendererText()
col = gtk.TreeViewColumn("Column 1")
col.pack_start(cell, True)
col.set_attributes(cell,text=0)
treeview.connect('row-activated', self.open_file)
treeview.append_column(col)
def open_file(self, button, *args):
Buttons().the_method(self, "foo")
class Buttons(object):
OPEN_IMAGE = gtk.image_new_from_stock(gtk.STOCK_ADD, gtk.ICON_SIZE_BUTTON)
CLOSED_IMAGE = gtk.image_new_from_stock(gtk.STOCK_REFRESH, gtk.ICON_SIZE_BUTTON)
def __init__(self):
self.button = gtk.Button() # THIS is the button to modify
self.hbox = gtk.HBox()
self.hbox.pack_start(self.button, False)
self.button.set_image(self.OPEN_IMAGE)
self.button.connect('clicked', self.the_method, "plop")
self.toggled = True
def the_method(self, button, filename):
print filename
print vars(self)
if self.toggled:
self.button.set_image(self.CLOSED_IMAGE)
self.toggled = False
else:
self.button.set_image(self.OPEN_IMAGE)
self.toggled = True
class GUI(object):
def delete_event(self, widget, event, data=None):
gtk.main_quit()
return False
def __init__(self):
self.window = gtk.Window()
self.window.set_size_request(100, 150)
self.window.connect("delete_event", self.delete_event)
vbox = gtk.VBox()
vbox.pack_start(Buttons().hbox, False, False, 1)
vbox.pack_start(Lister().hbox)
self.window.add(vbox)
self.window.show_all()
return
def main():
gtk.main()
if __name__ == "__main__":
GUI()
main()
Answer: I strongly disagree with user1146332's answer. This is not a GTK+ issue, nor a
strong design issue, just an object oriented programming issue. The cause of
your bug is that you call `the_method` like this:
Buttons().the_method(self, "foo")
This can't work, because you're mixing up two different fundamental things: a
class, and an instance of a class. When you call `Buttons()`, you're creating
a new instance of the `Buttons` class. Thus, as this class is not a singleton,
you're in fact creating a new instance, with a new GtkButton, and end up not
interacting with the button you previously created.
The solution here is to make the `Lister` object aware of what it needs to
modify, which means storing around the `Buttons` instance you previously
created, for example in `self.button`, and calling `the_method` on it.
self.button.the_method("foo")
Here's a slightly modified version of your code. The important thing is that
the `Lister` instance is now aware of the `Buttons` instance it needs to
modify.
import gtk
class Lister(object):
def __init__(self, button):
self.hbox = gtk.HBox()
self.button = button
liststore = gtk.ListStore(str)
liststore.append(["foo"])
liststore.append(["bar"])
treeview = gtk.TreeView(liststore)
self.hbox.pack_start(treeview, False)
cell = gtk.CellRendererText()
col = gtk.TreeViewColumn("Column 1")
col.pack_start(cell, True)
col.set_attributes(cell,text=0)
treeview.connect('row-activated', self.open_file)
treeview.append_column(col)
def open_file(self, button, *args):
self.button.the_method("foo")
class Buttons(object):
OPEN_IMAGE = gtk.image_new_from_stock(gtk.STOCK_ADD, gtk.ICON_SIZE_BUTTON)
CLOSED_IMAGE = gtk.image_new_from_stock(gtk.STOCK_REFRESH, gtk.ICON_SIZE_BUTTON)
def __init__(self):
self.button = gtk.Button() # THIS is the button to modify
self.hbox = gtk.HBox()
self.hbox.pack_start(self.button, False)
self.button.set_image(self.OPEN_IMAGE)
self.button.connect('clicked', self.the_method, "plop")
self.toggled = True
def the_method(self, filename):
print filename
print vars(self)
if self.toggled:
self.button.set_image(self.CLOSED_IMAGE)
self.toggled = False
else:
self.button.set_image(self.OPEN_IMAGE)
self.toggled = True
class GUI(object):
def delete_event(self, widget, event, data=None):
gtk.main_quit()
return False
def __init__(self):
self.window = gtk.Window()
self.window.set_size_request(100, 150)
self.window.connect("delete_event", self.delete_event)
vbox = gtk.VBox()
buttons = Buttons()
vbox.pack_start(buttons.hbox, False, False, 1)
vbox.pack_start(Lister(buttons).hbox)
self.window.add(vbox)
self.window.show_all()
return
def main():
gtk.main()
if __name__ == "__main__":
GUI()
main()
However, there's still lots of room for improvement. I suggest you don't use
the `__init__` function to create your widgets, but a `create` method that
will return the toplevel widget of your widget tree. This is because you can't
return anything in `__init__`, so it's easier to use a different method
instead of raising exceptions there.
b = Buttons()
vbox.pack_start(b.create(), False, False, 1)
l = Lister(b)
vbox.pack_start(l.create(), False, False, 1)
Other improvements might be (sorry, I'm using the C naming here for GTK
classes/functions, which I know better than the Python ones):
* using a `GtkToggleButton` instead of tracking the button state yourself
* using gtk_button_set_use_stock to tell the button to interpret the label you will set in the button as the stock id for the button (this may print the associated text too, not sure about this)
* switching to GTK 3 (uses pyGObject), as this is GTK 2 code (uses pyGTK), unless you want Windows compatibility
See you on linuxfr :-)
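Regarding the `GtkToggleButton` suggestion above, a minimal PyGTK sketch of what that would look like (my own illustration, not part of the original answer):
toggle = gtk.ToggleButton()
toggle.set_image(Buttons.OPEN_IMAGE)

def on_toggled(button):
    # get_active() tracks the pressed state for you, so no extra flag is needed
    if button.get_active():
        button.set_image(Buttons.CLOSED_IMAGE)
    else:
        button.set_image(Buttons.OPEN_IMAGE)

toggle.connect('toggled', on_toggled)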
|
Python, How to extend Decimal class to add helpful methods
Question: I would like to extend the Decimal class to add some helpful methods to it,
especially for handling money.
The problem is that when I do this:
from decimal import Decimal

class NewDecimal(Decimal):
    def new_str(self):
        return "${}".format(self)

d1 = NewDecimal(1)
print d1.new_str()  # prints '$1'
d2 = NewDecimal(2)
d3 = NewDecimal(3)
d5 = d2 + d3
print d5.new_str()  # exception happens here
It throws an exception:
AttributeError: 'Decimal' object has no attribute 'new_str'
This is because of the way Decimal does arithmetic, it always returns a new
Decimal object, by literally calling Decimal(_new value_) at the end of the
computation.
Does anyone know a workaround for this other than completely reimplementing all
the arithmetic?
Answer: You probably don't actually want to do this just to have an extra method for
printing Decimal objects in an alternate way. A top-level function or
monkeypatched method is a whole lot simpler, and cleaner. Or, alternatively, a
`Money` class that has a `Decimal` member that it delegates arithmetic to.
But what you want is doable.
* * *
To make `NewDecimal(1) + NewDecimal(2)` return `NewDecimal(3)`, you can just
override `__add__`:
    def __add__(self, rhs):
        return NewDecimal(super().__add__(rhs))
And of course you'll want to override `__iadd__` as well. And don't forget
`__mul__` and all the other [numeric special
methods](http://docs.python.org/3.3/reference/datamodel.html#emulating-
numeric-types).
But that still won't help for `Decimal(2) + NewDecimal(3)`. To make that work,
you need to define `NewDecimal.__radd__`. You also need to ensure that
`NewDecimal.__radd__` will get called instead of `Decimal.__add__`, but when
you're using inheritance, that's easy, because Python has a rule specifically
to make this easy:
> _Note:_ If the right operand's type is a subclass of the left operand's type
> and that subclass provides the reflected method for the operation, this
> method will be called before the left operand's non-reflected method. This
> behavior allows subclasses to override their ancestors' operations.
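Putting those pieces together, a minimal sketch of the wrapper built on the class from the question (only `__add__`/`__radd__`/`__mul__` shown; the other numeric special methods follow the same pattern):
class NewDecimal(Decimal):
    def new_str(self):
        return "${}".format(self)

    def __add__(self, rhs):
        return NewDecimal(super().__add__(rhs))

    __radd__ = __add__      # addition is commutative, so the same body works

    def __mul__(self, rhs):
        return NewDecimal(super().__mul__(rhs))

    __rmul__ = __mul__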
* * *
You may want to read the section [Implementing the arithmetic
operations](http://docs.python.org/3/library/numbers.html#implementing-the-
arithmetic-operations) in the `numbers` module docs, and [the implementation
of
`fractions.Fraction`](http://hg.python.org/cpython/file/3.3/Lib/fractions.py)
(which was intended to serve as sample code for creating new numeric types,
which is why the docs link directly to the source). Your life is easier than
`Fraction`'s because you can effectively fall back to `Decimal` for every
operation and then convert (since `NewDecimal` doesn't have any different
numeric behavior from `Decimal`), but it's worth seeing all the issues, and
understanding which ones are and aren't relevant and why.
|
Experiment trying to avoid Python circular dependencies
Question: I have a testing environment to try to understand how Python circular
dependencies can be avoided by importing the modules with an `import x`
statement, instead of using a `from x import y`:
test/
    __init__.py
    testing.py
    a/
        __init__.py
        m_a.py
    b/
        __init__.py
        m_b.py
The files have the following content:
**testing.py:**
from a.m_a import A
**m_a.py:**
import b.m_b
print b.m_b
class A:
pass
**m_b.py:**
import a.m_a
print a.m_a
class B:
pass
There is a situation which I can't understand:
If I remove the print statements from modules `m_a.py` and `m_b.py` or only
from `m_b.py` this works OK, but if the print is present at `m_b.py`, then the
following error is thrown:
File "testing.py", line 1, in <module>
from a.m_a import A
File "/home/enric/test/a/m_a.py", line 1, in <module>
import b.m_b
File "/home/enric/test/b/m_b.py", line 3, in <module>
print a.m_a
AttributeError: 'module' object has no attribute 'm_a'
Do you have any ideas?
Answer: It only "works" with the print statements removed because you're not actually
doing anything that depends on the imports. It's still a broken circular
import.
Either run this in the debugger, or add a `print` statement after each line,
and you'll see what happens:
* testing.py: `from a.m_a import A`
* a.m_a: `import b.m_b`
* b.m_b: `import a.m_a`
* b.m_b: `print a.m_a`
It's clearly trying to access `a.m_a` before the module finished importing.
(In fact, you can see the rest of `a.m_a` on the stack in your backtrace.)
If you dump out `sys.modules` at this point, you'll find two partial modules
named `a` and `a.m_a`, but if you `dir(a)`, there's no `m_a` there yet.
As far as I can tell, the fact that `m_a` doesn't get added to `a` until
`m_a.py` finishes evaluating is not documented anywhere in the Python 2.7
documentation. (3.x has a much more complete specification of the import
process, but it's also a very different import process.) So, you can't rely on
this either failing _or_ succeeding; either one is perfectly legal for an
implementation. (But it happens to fail in at least CPython and PyPy...)
* * *
More generally, using `import foo` instead of `from foo import bar` doesn't
magically solve all circular-import problems. It just solves one particular
class of circular-import problems (or, rather, makes that class moot). (I
realize there is some misleading text in [the
FAQ](http://docs.python.org/2/faq/programming.html#what-are-the-best-
practices-for-using-import-in-a-module) about this.)
* * *
There are various tricks to work around circular imports while still letting
you have circular top-level dependencies. But really, it's almost always
simpler to get rid of the circular top-level dependencies.
In this toy case, there's really no reason for `a.m_a` to depend on `b.m_b` at
all. If you need something that prints out `a.m_a`, there are better ways to
get it than from a completely independent package!
In real-life code, there probably is some stuff in `m_a` that `m_b` needs and
vice-versa. But usually, you can separate it out into two levels: stuff in
`m_a` that needs `m_b`, and stuff in `m_a` that's needed by `m_b`. So, just
split it into two modules. It's really the same thing as the common fix for a
bunch of modules that try to reach back up and `import main`: split a `utils`
off `main`.
What if there really is something that `m_b` needs from `m_a`, that also needs
`m_b`? Well, in that case, you may have to insert a level of indirection. For
example, maybe you can pass the thing-from-`m_b` into the
function/constructor/whatever from `m_a`, so it can access it as a local
parameter value instead of as a global. (It's hard to be more specific without
a more specific problem.)
If worst comes to worst, and you can't remove the import via indirection, you
have to move the import out of the way. That may again mean doing an import
inside a function call, etc. (as explained in the FAQ immediately after the
paragraph that set you off), or just moving some code above the import, or all
kinds of other possibilities. But consider these last-ditch solutions to
something which just can't be designed cleanly, not a roadmap to follow for
your designs.
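As a concrete illustration of that last resort, deferring the import into the function that needs it breaks the cycle at module-load time (a sketch using the names from the question):
# b/m_b.py
class B:
    pass

def show_a():
    import a.m_a      # deferred: only runs when show_a() is called,
    print a.m_a       # long after both modules have finished loading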
|
Fix newlines when writing UTF-8 to Text file in python
Question: I'm at my wits end on this one. I need to write some Chinese characters to a
text file. The following method works however the newlines get stripped so the
resulting file is just one super long string.
I tried inserting every known unicode line break that I know of and nothing.
Any help is greatly appreciated. Here is snippet:
import codecs

file_object = codecs.open('textfile.txt', "w", "utf-8")
xmlRaw = (data to be written to text file)
newxml = xmlRaw.split('\n')
for n in newxml:
    file_object.write(n + (u'\u2424'))  # where \u2424 is unicode line break
Answer: If you use Python 2, then use u"\n" to append a newline, and encode the internal
unicode string to UTF-8 when you write it to the file:
`file_object.write((n + u"\n").encode("utf"))` Ensure `n` is of type `unicode`
inside your loop.
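Alternatively, since the question already opens the file with `codecs.open(..., "utf-8")`, that stream does the encoding for you, so (still Python 2) it is enough to write unicode text with a real newline; a minimal sketch:
import codecs

with codecs.open('textfile.txt', 'w', 'utf-8') as file_object:
    for n in xmlRaw.split('\n'):   # xmlRaw as in the question
        file_object.write(n + u'\n')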
|
Importing a large SQL file into a local Firebird database
Question: I'm trying to import a large SQL file (15k+ statements) into a new local
Firebird database with Python, through [_fdb_](http://pythonhosted.org/fdb/)
module.
import fdb
db = fdb.create_database("CREATE DATABASE 'test.fdb'")
sql_str = open('test.sql').read()
in_trans = False
for stmt in sql_str.split(';'):
if stmt.startswith('INSERT') and not in_trans:
in_trans = True
db.begin()
else:
in_trans = False
db.commit()
db.execute_immediate(stmt)
I haven't found any better way to do this... (like using a sort of
`executescript()` in _sqlite3_ module which executes multiple statements per
call).
It worked for first few statements, than it stopped and raised that exception:
fdb.fbcore.DatabaseError: ('Error while executing SQL statement:\n- SQLCODE: -104\n- Dynamic SQL Error\n- SQL error code = -104\n- Client SQL dialect 0 does not support reference to BIGINT datatype', -104, 335544569)
From what I read in Firebird documentation, I've to use SQL dialect 3 to be
able to use BIGINT datatype in table schemas. I'm already correctly setting
database SQL dialect 3 with `fdb.create_database()` default arguments, but I'm
still need to set client SQL dialect and I've no idea where and how I do that.
I'm using Firebird 2.5.2 and Python 2.7.2 on OS X.
Answer: As far as I know, `fdb` should default to dialect 3, but you can specify it
explicitly using connection property dialect=3, see example 2 in [Connecting
to a
database](http://www.firebirdsql.org/file/documentation/drivers_documentation/python/fdb/getting-
started.html#connecting-to-a-database). The weird thing is though that there
is no such thing as a dialect 0 in Firebird.
If that does solve the problem, I'd suggest you file a bug in the issue
tracker as the driver should default to dialect 3:
<http://tracker.firebirdsql.org/browse/PYFB>
|
Efficiency of infinite loop to service GPIO
Question: I'm using Python on Raspbian (a type of linux) on the Raspberry Pi (an
embedded processor board) to monitor GPIO inputs.
See simplified version of my code below. I have an infinite loop in the python
script waiting for something to happen on a GPIO i/p. Is this the correct way
to do it? I.e. does this mean that the CPU is running at full whack just going
round this loop, leaving no CPU cycles for other stuff? Especially as I need
to be running other things in parallel (e.g. the browser).
Also what happens if the CPU is busy doing something else and a GPIO i/p
changes? Does the GPIO event get stored somewhere so it is eventually
serviced, or does it just get lost?
Is there a better way of doing this?
(For your answers, please note that I'm new to linux, and v. new to python and
real-time programming)
#!/usr/bin/python
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BOARD)
GPIO.setup(16, GPIO.IN, pull_up_down=GPIO.PUD_UP)
def ButtonHandler(channel):
print "Button pressed " + str(channel)
# do stuff here
GPIO.add_event_detect(16, GPIO.FALLING, callback=ButtonHandler, bouncetime=200)
while True:
pass
Answer: Yes, doing `while True: pass` will burn 100% of your CPU (or as close to it as
possible) doing nothing.
From what I understand (hopefully this is documented somewhere), the RPi.GPIO
module spawns a background thread that waits on the GPIO and calls your
`callback` function for each event. So your main thread really has nothing to
do. If you want this to run as a service, make it `sleep` for long periods of
time. If you want to run it interactively (in which case you probably want it
easier to cancel), `sleep` for shorter periods of time, maybe 0.5 seconds, and
add some way to exit the loop.
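A minimal sketch of that service-style main loop (my own illustration; the real work keeps happening in RPi.GPIO's callback thread):
import time

try:
    while True:
        time.sleep(1)        # yields the CPU instead of spinning
except KeyboardInterrupt:
    GPIO.cleanup()           # tidy up the pins on Ctrl-C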
It would be even nicer if you could do the GPIO `select` in the main thread,
or get a handle to the GPIO background thread that you can just `join`, either
of which would burn no CPU at all. However, the module doesn't seem to be
designed in a way to make that easy.
However, looking at [the source](http://code.google.com/p/raspberry-gpio-
python/source/browse/source/py_gpio.c), there is a `wait_for_edge` method.
Presumably you could loop around `GPIO.wait_for_edge` instead of setting a
callback. But without the documentation, and without a device to test for
myself, I'm not sure I'd want to recommend this to a novice.
Meanwhile:
> Also what happens if the CPU is busy doing something else and a GPIO i/p
> changes? Does the GPIO event get stored somewhere so it is eventually
> serviced, or does it just get lost?
Well, while _your_ thread isn't doing anything, the GPIO background thread
seems to be waiting on `select`, and `select` won't let it miss events. (Based
on the name, that `wait_for_edge` function sounds like it might be edge-
triggered rather than level-triggered, however, which is part of the reason
I'm wary of recommending it.)
|
Folder organization for easy translation
Question: I'm making an application in Python that will be released in different
languages.
So I'm wondering what is the best way to organize a project so you don't have
to look for literal strings in the source code and replace them by hand.
Here's what I have so far from some thinking and the information I could
gather:
First we have a module for each language to contain every strings for the
application:
english.py:
greeting = 'Hello world!'
french.py:
greeting = 'Bonjour le monde!'
then the code would be written like so in myModule.py:
if lang == 'eng':  # How do you mimic a 'preprocessor' in Python?
    import english as str
elif lang == 'fr':
    import french as str

if __name__ == '__main__':
    print(str.greeting)
* Does it make sense to do it this way?
* Should every language have its own module?
* How do you achieve the preprocessing conditionals of C in a pythonesque way?
I'm thinking there's no way it hasn't been asked before, but maybe it's because
English isn't my first language and I'm searching for the wrong terms; I can't
find any good info on the best practices for this.
Answer: My system's locale is a bit messed up, but try this:
import os

lang = os.getenv('LANG')
if lang == 'en_US,':
    import english as str
elif lang == 'fr_CH':
    import french as str

if __name__ == '__main__':
    print(str.greeting)
I get None, but give it a try.
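A variation on the same idea that avoids the if/elif chain is to pick the module by name with importlib (a sketch; the mapping values are just the module names from above):
import importlib
import os

LANG_MODULES = {'en_US': 'english', 'fr_CH': 'french'}

lang = (os.getenv('LANG') or 'en_US').split('.')[0]   # e.g. 'en_US.UTF-8' -> 'en_US'
strings = importlib.import_module(LANG_MODULES.get(lang, 'english'))
print(strings.greeting)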
|
Using unittest.mock to patch input() in Python 3
Question: How do you use the @patch decorator to patch the built-in input() function?
For example, here's a function in question.py that I'd like to test, which
contains a call to input():
def query_yes_no(question, default="yes"):
""" Adapted from http://stackoverflow.com/questions/3041986/python-command-line-yes-no-input """
valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False}
if default is None:
prompt = " [y/n] "
elif default == "yes":
prompt = " [Y/n] "
elif default == "no":
prompt = " [y/N] "
else:
raise ValueError("invalid default answer: '%s'" % default)
while True:
sys.stdout.write(question + prompt)
choice = input().lower()
if default is not None and choice == '':
return valid[default]
elif choice in valid:
return valid[choice]
else:
sys.stdout.write("Please respond with 'yes' or 'no' "
"(or 'y' or 'n').\n")
Here's my test, which gives me the error "ImportError: No module named
'__builtins__'":
import unittest
from unittest.mock import patch
import question
class TestQueryYesNo(unittest.TestCase):
@patch('__builtins__.input.return_value', 'y')
def test_query_y(self):
answer = question.query_yes_no("Blah?")
self.assertTrue(answer)
Answer: [__builtin__](http://docs.python.org/2/library/__builtin__.html) module is
renamed to [builtins](http://docs.python.org/3/library/builtins.html) in
Python 3. Replace as follows:
@patch('builtins.input', lambda: 'y')
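Applied to the test from the question, that looks like this (same test, only the patch target changed):
import unittest
from unittest.mock import patch

import question

class TestQueryYesNo(unittest.TestCase):
    @patch('builtins.input', lambda: 'y')
    def test_query_y(self):
        answer = question.query_yes_no("Blah?")
        self.assertTrue(answer)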
|
Is it possible to concatenate list indices in one statement in Python?
Question: I have a list that looks like
alist = [a, b, c, d, e]
and I want to pass it to a function that looks like
def callme(a, b, e):
pass
So, I would like to do something like
callme(*alist[0,1,4])
Is there a one liner that will achieve this?
EDIT
I could also do this, I guess (**EDIT** Don't do it this way, drewk has
answered with a better method of enumeration.)
callme(*[a for a in alist if alist.index(a) in [0,1,4]])
Answer: Use `operator.itemgetter`:
from operator import itemgetter
callme(*itemgetter(0, 1, 4)(alist))
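If you'd rather not import anything, a generator expression over the wanted indices does the same job in one line (an equivalent sketch):
callme(*(alist[i] for i in (0, 1, 4)))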
|
Main program variables in side programs (Python)
Question: How do I use variables that exist in the main program in the side program? For
example, if I were to have Var1 in the main program, how would I use it in the
side program, and how would I, for example, print it?
Here's what I have right now:
#Main program
Var1 = 1
#Side program
from folder import mainprogram
print(mainprogram.Var1)
This I think would work, if it didn't run the main program when it imports it,
because I have other functions being executed in it. How would I import all
the main program data, but not have it execute? The only thing I thought of
was to import that specific variable from the program, but I don't know how to
do it. What I have in my head is:
from folder import mainprogram
from mainprogram import Var1
But it still executes mainprogram.
Answer: Your approach is basically correct (except for `from folder import
mainprogram` - that looks a bit strange, unless you want to import a function
named `mainprogram` from a Python script named `folder.py`). You have also
noticed that an imported module is executed on import. This is usually what
you want.
But if there are parts of the module that you only want executed when it's run
directly (as in `python.exe mainprogram.py`) but not when doing `import
mainprogram`, then wrap those parts of the program in an `if` block like this:
if __name__ == "__main__":
    # this code will not be run on import
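Putting it together for the files in the question, a minimal sketch (module names as in the question; `do_work` is just a stand-in for whatever the main program actually executes):
# mainprogram.py
Var1 = 1

def do_work():
    print(Var1)

if __name__ == "__main__":
    do_work()   # only runs when executed directly, not on import

# side program
from folder import mainprogram

print(mainprogram.Var1)   # prints 1 without triggering do_work()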
|
For Loop Functions in Python
Question: I am continuing with a Hangman project, and I have encountered a problem with
a for loop and performing a function inside it. For example, if you are in a
level named 'CANADA' and you press 'B', since there are no Bs in Canada, it
should draw the first line of the hangman. This is what I've done so far:
def hangman1():
pygame.draw.line(screen, black, (775, 250), (775, 50), (4))
def hangman2():
pygame.draw.line(screen, black, (750, 250), (800, 250), (4))
def hangman3():
pygame.draw.line(screen, black, (775, 50), (925, 50), (4))
def hangman4():
pygame.draw.line(screen, black, (925, 50), (925, 175), (4))
def hangman5():
pygame.draw.circle(screen, black, (925, 100), 30, (0))
def hangman6():
pygame.draw.line(screen, black, (925, 125), (925, 200), (4))
def hangman7():
pygame.draw.line(screen, black, (885, 160), (965, 160), (4))
def hangman8():
pygame.draw.line(screen, black, (925, 200), (900, 225), (4))
def hangman9():
pygame.draw.line(screen, black, (925, 200), (950, 225), (4))
After a bit more code...
letters = list('abcdefghijklmnopqrstuvwxyz')
a = font2.render(str(letters[0]), True, (black))
b = font2.render(str(letters[1]), True, (black))
c = font2.render(str(letters[2]), True, (black))
d = font2.render(str(letters[3]), True, (black))
e = font2.render(str(letters[4]), True, (black))
f = font2.render(str(letters[5]), True, (black))
g = font2.render(str(letters[6]), True, (black))
h = font2.render(str(letters[7]), True, (black))
i = font2.render(str(letters[8]), True, (black))
j = font2.render(str(letters[9]), True, (black))
k = font2.render(str(letters[10]), True, (black))
l = font2.render(str(letters[11]), True, (black))
m = font2.render(str(letters[12]), True, (black))
n = font2.render(str(letters[13]), True, (black))
o = font2.render(str(letters[14]), True, (black))
p = font2.render(str(letters[15]), True, (black))
q = font2.render(str(letters[16]), True, (black))
r = font2.render(str(letters[17]), True, (black))
s = font2.render(str(letters[18]), True, (black))
t = font2.render(str(letters[19]), True, (black))
u = font2.render(str(letters[20]), True, (black))
v = font2.render(str(letters[21]), True, (black))
w = font2.render(str(letters[22]), True, (black))
x = font2.render(str(letters[23]), True, (black))
y = font2.render(str(letters[24]), True, (black))
z = font2.render(str(letters[25]), True, (black))
Then...
hangman = [hangman1, hangman2, hangman3, hangman4, hangman5, hangman6, hangman7, hangman8, hangman9]
for linebyline in hangman:
Later...
elif b1.collidepoint(pygame.mouse.get_pos()):
letter = letters[1]
check = country.count(letter)
if check >= 1:
if letter == letters[0]:
aPosition = 325, 235
a3 = screen.blit((a), (375, 235))
a4 = screen.blit((a), (425, 235))
a1.x, a1.y = -500, -500
elif letter == letters[2]:
cPosition = 300, 235
c1.x, c1.y = -500, -500
elif letter == letters[13]:
nPosition = 450, 235
n1.x, n1.y = -500, -500
elif letter == letters[3]:
dPosition = 600, 235
d1.x, d1.y = -500, -500
else:
b2 = font.render(str(letters[1]), True, (red))
screen.blit(b2, (485, 325))
linebyline()
time.sleep(0.5)
bPosition = -500, -500
b1.x, b1.y = -500, -500
When I press B, it turns red, and in 0.5 seconds it disappears, but it doesn't
draw the line. Any help?
**EDIT:** I did some testing with another module, and the function thing works
perfectly fine with printing normal text. But when I tested it again with
drawing (Pygame), it worked, but when combined with other things (like
`time.sleep()`) it shows a white screen. When combined with a `print`, the
drawing doesn't work but the printing does. Also, if I added a
`time.sleep(1)`, it would show a black screen for exactly nine seconds,
without doing anything else. This is my test code:
import pygame, sys, random, time
from pygame.locals import *
pygame.init()
screen = pygame.display.set_mode((1000, 700))
pygame.display.set_caption("Hangman: Countries")
black = 0, 0, 0
def hangman1():
pygame.draw.line(screen, black, (775, 250), (775, 50), (4))
print 'test'
def hangman2():
pygame.draw.line(screen, black, (750, 250), (800, 250), (4))
print 'test somethin'
def hangman3():
pygame.draw.line(screen, black, (775, 50), (925, 50), (4))
print 'test something else'
def hangman4():
pygame.draw.line(screen, black, (925, 50), (925, 175), (4))
print 'eggs'
def hangman5():
pygame.draw.circle(screen, black, (925, 100), 30, (0))
print 'hangman'
def hangman6():
pygame.draw.line(screen, black, (925, 125), (925, 200), (4))
print 'facebook'
def hangman7():
pygame.draw.line(screen, black, (885, 160), (965, 160), (4))
print 'internet'
def hangman8():
pygame.draw.line(screen, black, (925, 200), (900, 225), (4))
print 'more tests'
def hangman9():
pygame.draw.line(screen, black, (925, 200), (950, 225), (4))
print 'cheese'
while True:
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
screen.fill((255, 255, 255))
list1 = [hangman1(), hangman2(), hangman3(), hangman4(), hangman5(), hangman6(), hangman7(), hangman8(), hangman9()]
for greet in list1:
greet
time.sleep(1)
pygame.display.flip()
It prints a bunch of words at the same time in the shell, and then after nine
seconds the screen changes into the hangman fully drawn, and the same set of
words comes back on. Any ideas from anyone with experience with Pygame?
Answer: I made a functioning hangman program just now. Perhaps looking at this logic
will help:
Ignore this:
#!/usr/bin/python
import string
alphabet = string.ascii_lowercase
# represents drawing routines
def hangman1():
print(1)
def hangman2():
print(2)
def hangman3():
print(3)
# ignore this, never do this:
def myprint(x): #python2 hack, unnecessary in python3
print(x)
for i in range(1,10):
globals()['hangman{}'.format(i)] = lambda i=i: myprint('bodypart#{}'.format(i))
# ignore this
class EnumItem(object):
def __init__(self, namespace, namespace_name, value):
self.namespace = namespace
self.namespace_name = namespace_name
self.value = value
def __repr__(self):
return '{}.{}'.format(self.namespace_name, self.value)
class Enumeration(object):
def __init__(self, prefix, names):
prefix = prefix.upper().replace(' ','_')
globals()[prefix] = self #don't do this with locals()
self.items = names
for i,name in enumerate(names.strip().splitlines()):
name = name.strip().upper().replace(' ','_')
value = EnumItem(self, prefix, name)
setattr(self, name, value)
#globals()[name] = value #optional, also don't do this with locals()
Some enums:
Enumeration('GAME_STATE', '''
active
lost
won
''')
Enumeration('GUESS', '''
invalid not a letter
invalid already guessed
correct
correct win
incorrect
incorrect loss
''')
**Game logic** - look at this part if you have trouble thinking through the
rules of hangman (to keep it easier to understand, I left out a few of the
things needed to make it handle spaces entirely correctly):
class HangmanGame(object):
MAX_INCORRECT_GUESSES = 10
_bodyparts = [
hangman1, hangman2, hangman3 #...
]
def __init__(self, hidden_goal_phrase):
self.phrase = hidden_goal_phrase.lower() # e.g. batman
self.revealed = '?'*len(hidden_goal_phrase) # e.g. ??????
self.guessed = set() # e.g. {'b', 't'}
self.num_incorrect_guesses = 0
self.game_state = GAME_STATE.ACTIVE
def guess(self, letter):
"""
Interact with game by calling this function repeatedly with user's guesses
letter - the letter the player has guessed
"""
if not letter in alphabet or not len(letter)==1:
return GUESS.INVALID_NOT_A_LETTER
if letter in self.guessed:
return GUESS.INVALID_ALREADY_GUESSED # or throw a custom exception class HangmanIncorrectGuessException(Exception): pass
# else guess is legitimate
self.guessed.add(letter)
if letter in self.phrase: # if guess was correct
# update internal state
self.revealed = ''.join((c if c in self.guessed else (' ' if c==' ' else '?')) for c in self.phrase)
# check for win
print(set(self.guessed), set(self.phrase))
if self.guessed>=set(self.phrase): # non-strict superset, see set.__ge__ etc.
self.game_state = GAME_STATE.WON
                    self.redraw()
return GUESS.CORRECT_WIN
else:
return GUESS.CORRECT
else: # if guess was incorrect
self.num_incorrect_guesses += 1
# check for loss
if self.num_incorrect_guesses==HangmanGame.MAX_INCORRECT_GUESSES:
self.game_state = GAME_STATE.LOST
self.redraw()
return GUESS.INCORRECT_LOSS
else:
self.redraw()
return GUESS.INCORRECT
def redraw(self):
'''
updates canvas to reflect current game state
'''
# pygame.clearcanvasorsomething()
for bodypart in HangmanGame._bodyparts[:self.num_incorrect_guesses]:
bodypart()
if self.game_state==GAME_STATE.LOST:
pass #draw appropriate GAME OVER
elif self.game_state==GAME_STATE.WON:
pass #draw appropriate CONGRATULATIONS
Interactive loop:
while True:
print('NEW GAME')
game = HangmanGame('penguin')
while game.game_state==GAME_STATE.ACTIVE:
result = game.guess(raw_input('Guess a letter: '))
print(game.revealed, result)
print('')
Demo games:
NEW GAME
Guess a letter: p
(set(['p']), set(['e', 'g', 'i', 'n', 'p', 'u']))
('p??????', GUESS.CORRECT)
Guess a letter: e
(set(['p', 'e']), set(['e', 'g', 'i', 'n', 'p', 'u']))
('pe?????', GUESS.CORRECT)
Guess a letter: n
(set(['p', 'e', 'n']), set(['e', 'g', 'i', 'n', 'p', 'u']))
('pen???n', GUESS.CORRECT)
Guess a letter: guin
('pen???n', GUESS.INVALID_NOT_A_LETTER)
Guess a letter: 7
('pen???n', GUESS.INVALID_NOT_A_LETTER)
Guess a letter:
('pen???n', GUESS.INVALID_NOT_A_LETTER)
Guess a letter: z
bodypart#1
('pen???n', GUESS.INCORRECT)
Guess a letter: x
bodypart#1
bodypart#2
('pen???n', GUESS.INCORRECT)
Guess a letter: c
bodypart#1
bodypart#2
bodypart#3
('pen???n', GUESS.INCORRECT)
Guess a letter: i
(set(['p', 'c', 'e', 'i', 'x', 'z', 'n']), set(['e', 'g', 'i', 'n', 'p', 'u']))
('pen??in', GUESS.CORRECT)
Guess a letter: u
(set(['c', 'e', 'i', 'n', 'p', 'u', 'x', 'z']), set(['e', 'g', 'i', 'n', 'p', 'u']))
('pen?uin', GUESS.CORRECT)
Guess a letter: g
(set(['c', 'e', 'g', 'i', 'n', 'p', 'u', 'x', 'z']), set(['e', 'g', 'i', 'n', 'p', 'u']))
('penguin', GUESS.CORRECT_WIN)
NEW GAME
Guess a letter: q
bodypart#1
('???????', GUESS.INCORRECT)
Guess a letter: w
bodypart#1
bodypart#2
('???????', GUESS.INCORRECT)
Guess a letter: r
bodypart#1
bodypart#2
bodypart#3
('???????', GUESS.INCORRECT)
...more incorrect guesses...
('???????', GUESS.INCORRECT_LOSS)
|
The PyData Ecosystem
Question: I have read about PyData in a few places (e.g. [here](http://pydata.org/)),
but I am still confused about what this term really means.
Is PyData an official entity? (e.g. is there a foundation that owns/supports
[PyData.org](http://pydata.org/)?). Is it just a conference? Or is it mostly a
term used loosely to refer to a list of Python packages?
Also, what packages are considered the core of the PyData ecosystem? Is it
just any package that can be used to work with data? (That would be quite
generic.) Some packages that I have found to be typically associated with PyData
are:
* [Numpy](http://www.numpy.org/)
* [Scipy](http://www.scipy.org/)
* [Pandas](http://pandas.pydata.org/)
* [Scikit-Learn](http://scikit-learn.org/stable/)
* [NLTK](http://nltk.org/)
* [PyMC](https://github.com/pymc-devs/pymc)
* [Numba](http://numba.pydata.org/)
* [Blaze](https://github.com/ContinuumIO/blaze)
Is this list consistent with the group of packages typically associated with
PyData? Or are there any important omissions?
Finally, to what extent does the PyData ecosystem support **Python 3.x**? Is
it safe to assume that most of the PyData ecosystem is compatible with Python
3.x? If not, what packages do not support it yet?
Answer: PyData is a series of conferences, organized by
[NUMFocus](http://numfocus.org/), a non-profit group that supports open source
scientific software. Throughout the year, they hold conferences in Silicon
Valley, Boston, NYC, and most recently London. Many of the conference
organizers are located in Austin, TX at the [Continuum
Analytics](http://continuum.io/) company co-founded by Travis Oliphant and
Peter Wang. Leah Silen is the main organizer for all the conferences, though they
also recruit local volunteer organizers to help with much of the per-event
logistics; I myself volunteer time to work on their website.
PyData also refers to the community that is focused primarily on using Python
for data analysis (more business-focused than SciPy, which is organized by
Enthought and leans more towards academic applications). There is much overlap
between the two communities; however, you'll find more finance-related topics
at PyData.
PyData also refers to the packages you listed. In addition, many individuals in
the community use IPython notebooks to demonstrate usage of the listed packages.
Continuum Analytics has
[Anaconda](https://store.continuum.io/cshop/anaconda/) available for
installation, an all-in-one distribution that includes [100+ of the most popular
Python packages](http://docs.continuum.io/anaconda/pkgs.html) for science, math,
engineering, and data analysis. It currently supports Python 2.7, but you can get
[compatibility with Python 3.3](http://docs.continuum.io/anaconda/pkgs.html#python-3-3-available-through-conda)
using the Conda add-on.
|
Reasoning behind Object Oriented Access Specifiers
Question: I have a general question regarding the reason for object-oriented access
specifiers. I have never completely understood the reasoning for why they exist;
I just thought they were there as a very rudimentary form of code security, but
after looking at the discussion in this thread
[Does python have 'private' variables in classes?](http://stackoverflow.com/questions/1641219/does-python-have-private-variables-in-classes)
I have understood that I am completely wrong and they don't help with security
at all. So are access specifiers just considered good programming practice in
object-oriented design? And when we say private or protected, who exactly is
this protecting the data field, class, or method from? Isn't it still possible
for people to access the method if they already know it's there, through
reflection and other means?
I apologize if this question seems basic or a bit meta-OOP, but it's always
bothered me that I don't quite know the exact reasoning for one of the main OOP
concepts, encapsulation.
Answer: Access modifiers are used to indicate how you intend for callers to use your
code. They are especially important when you maintain internal state. Consider
a class that keeps a count:
public class Thing {
public int count = 0;
public void doSomething() { count++; }
public int getHowManyTimesDone() { return count; }
}
What's the problem with this code? If a caller modifies `count`, my code
violates its contract:
Thing x = new Thing();
x.doSomething();
x.doSomething();
x.count = 0;
System.out.println(x.getHowManyTimesDone());
My class' contract says that this should print `2`, but it prints `0` because
the caller modified `count`. By making `count` a `private` variable, I am
telling callers, "Hey, you're not supposed to touch this! Doing so might make
this code break!"
[Python doesn't have a concept of
privacy](http://stackoverflow.com/a/7456865/1394393). Rather, by convention, a
single underscore prefixed to a variable provides the same warning to callers:
"If you touch this, it could break this code!" This is probably most akin to
`protected`. A double underscore also triggers name mangling, which suggests
that not even subclasses should use it; this would be most akin to `private`.
In Python, it's the caller's responsibility to accept the risk if they access
these members anyway, but programmers are encouraged to make as much as
possible public.
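A minimal sketch of those conventions (the class and attribute names are purely
illustrative):
    class Thing(object):
        def __init__(self):
            self.count = 0       # public: part of the intended interface
            self._cache = {}     # single underscore: "touch at your own risk"
            self.__secret = 1    # double underscore: name-mangled to _Thing__secret

    t = Thing()
    t._cache          # works, but signals you are relying on internals
    t._Thing__secret  # even the mangled name is still reachable, just awkward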
As for who is implementing the visibility of variables in Java, it's the
runtime itself. There are indeed clever ways around this, though. I believe
reflection provides some means, and anyone getting into the bytecode or JVM
itself can certainly do something. Consider the kinds of things mocks do; they
can convince the JVM that the mock is a particular type even though it's not.
## Edit:
One more thing. In Java, programmers are encouraged to keep all instance
variables `private` and use methods to access and mutate them. This is for
maintainability, not for a philosophical reason about hiding details. Because
Java does not have a C# or Python-like property mechanism, if you need to add
logic to the getting or setting of an instance variable, all code depending on
that instance variable will need to be modified to use the methods. By always
using methods to access variables, you minimize the dependent code you would
break by making such a change. In other words, it's a kludge for dealing with
a shortcoming in the language.
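For contrast, here is a minimal sketch of the Python property mechanism the
paragraph alludes to, which lets you add logic later without changing callers:
    class Thing(object):
        def __init__(self):
            self._count = 0

        @property
        def count(self):          # callers keep writing thing.count
            return self._count    # but you can add getter/setter logic here later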
|
Registration timeout on PBS cluster with Ipython parallel
Question: I'm trying to set up ipython parallel on a linux cluster using PBS scheduling.
I was following instructions at <http://www.andreazonca.com/2013/04/ipython-
parallell-setup-on-carver-at.html> (the official instructions are much harder
to follow). I'm running the command on the head node, which sends the jobs to
the slave nodes with PBS (ie, a standard cluster configuration).
My problem is that I get a timeout. I tried increasing the waiting time from
2s up to 20s but without success. Any help would be appreciated. Full output
is below.
Actually, in the end I want to be able to run the ipython commands from my
ssh-connected laptop rather than from the cluster-head-node, but I thought
this was a reasonable first step.
2013-08-11 13:56:07,380.380 [IPEngineApp] Config changed:
2013-08-11 13:56:07,381.381 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,381.381 [IPEngineApp] Config changed:
2013-08-11 13:56:07,382.382 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,381.381 [IPEngineApp] Config changed:
2013-08-11 13:56:07,381.381 [IPEngineApp] Config changed:
2013-08-11 13:56:07,383.383 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,382.382 [IPEngineApp] Config changed:
2013-08-11 13:56:07,382.382 [IPEngineApp] Config changed:
2013-08-11 13:56:07,383.383 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,382.382 [IPEngineApp] Config changed:
2013-08-11 13:56:07,383.383 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,383.383 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,383.383 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,382.382 [IPEngineApp] Config changed:
2013-08-11 13:56:07,383.383 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:07,387.387 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,387.387 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,388.388 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,388.388 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,388.388 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,389.389 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,389.389 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,389.389 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,389.389 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,390.390 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,390.390 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,390.390 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07,390.390 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:07,390.390 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:07,391.391 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:07,391.391 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07,391.391 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07,391.391 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07,391.391 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07.391 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07,391.391 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:07.391 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07,391.391 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07.391 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07,391.391 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:07,392.392 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07.392 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07.392 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07.392 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07,392.392 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:07.392 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07.393 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:07.402 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.402 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.402 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.402 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.403 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.403 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.403 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:07.403 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01,273.273 [IPEngineApp] Config changed:
2013-08-11 13:56:01,273.273 [IPEngineApp] Config changed:
2013-08-11 13:56:01,274.274 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,274.274 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,272.272 [IPEngineApp] Config changed:
2013-08-11 13:56:01,274.274 [IPEngineApp] Config changed:
2013-08-11 13:56:01,275.275 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,275.275 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,273.273 [IPEngineApp] Config changed:
2013-08-11 13:56:01,276.276 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,276.276 [IPEngineApp] Config changed:
2013-08-11 13:56:01,276.276 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,278.278 [IPEngineApp] Config changed:
2013-08-11 13:56:01,278.278 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,279.279 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,279.279 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,279.279 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,279.279 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01,279.279 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,279.279 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,279.279 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,280.280 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,280.280 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,280.280 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,280.280 [IPEngineApp] Config changed:
2013-08-11 13:56:01,280.280 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,280.280 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] {'EngineFactory': {'timeout': 10}, 'IPEngineApp': {'log_level': 10}}
2013-08-11 13:56:01,280.280 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,280.280 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,281.281 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,281.281 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,281.281 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01,282.282 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,282.282 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,282.282 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,282.282 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01.282 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01,282.282 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01.282 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01,282.282 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01.282 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01,283.283 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,283.283 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,283.283 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,284.284 [IPEngineApp] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-08-11 13:56:01,284.284 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01,284.284 [IPEngineApp] Searching path [u'/home/username', u'/home/username/.ipython/profile_default'] for config files
2013-08-11 13:56:01,284.284 [IPEngineApp] Attempting to load config file: ipython_config.py
2013-08-11 13:56:01.284 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01.284 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01.284 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01.284 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01,284.284 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipython_config.py
2013-08-11 13:56:01,285.285 [IPEngineApp] Attempting to load config file: ipengine_config.py
2013-08-11 13:56:01,286.286 [IPEngineApp] Loaded config file: /home/username/.ipython/profile_default/ipengine_config.py
2013-08-11 13:56:01.286 [IPEngineApp] Loading url_file u'/home/username/.ipython/profile_default/security/ipcontroller-engine.json'
2013-08-11 13:56:01.295 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.295 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.296 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.296 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.296 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.296 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.298 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:01.299 [IPEngineApp] Registering with controller at tcp://127.0.0.1:53956
2013-08-11 13:56:17.411 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.412 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.413 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.412 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.412 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.413 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.413 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:17.413 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.304 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.304 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.305 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.306 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.306 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.306 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.307 [IPEngineApp] Registration timed out after 10.0 seconds
2013-08-11 13:56:11.309 [IPEngineApp] Registration timed out after 10.0 seconds
--------------------------------------------------------------------------
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.
--------------------------------------------------------------------------
**Update following advice from Andrea Zonca**
Adding
c.HubFactory.ip = '*'
appears to help, although not immediately for some unknown reason.
But it still doesn't work:
I run with
./.ipython/profile_default/ipc 2 1
2013-09-09 12:19:51,884.884 [IPClusterStart] Using existing profile dir: u'/home/username/.ipython/profile_default'
2013-09-09 12:19:51.889 [IPClusterStart] Starting ipcluster with [daemon=False]
2013-09-09 12:19:51.890 [IPClusterStart] Creating pid file: /home/username/.ipython/profile_default/pid/ipcluster.pid
2013-09-09 12:19:51.890 [IPClusterStart] Starting Controller with LocalControllerLauncher
2013-09-09 12:19:52.890 [IPClusterStart] Starting 2 Engines with PBS
2013-09-09 12:19:52.904 [IPClusterStart] Job submitted with job id: '4783'
2013-09-09 12:20:22.904 [IPClusterStart] Engines appear to have started successfully
and then run on the head node:
    >from IPython.parallel import Client
>rc = Client()
>lview = rc.load_balanced_view()
but the output from
>rc.ids
is
[]
So I tried running
ipcontroller --port=8888
and then ran nmap:
$nmap -sT -O localhost
...
8888/tcp open sun-answerbook
...
which shows the port is open, and sure enough telnet gives me a response from
the slave node.
But when I run the original command above ie,
./.ipython/profile_default/ipc 2 1
nmap shows no port as being open. So the problem appears to be that ipengine
run in the qsub file is not opening the port like ipcontroller run from the
command line is.
Here is the qsub file:
$cat /home/username/.ipython/profile_default/pbs.engine.template.ppn2
#!/bin/sh
#PBS -q longqueue
#PBS -l nodes={n/2}:ppn=2
cd $PBS_O_WORKDIR
which ipengine
mpirun -np {n} ipengine --timeout=20
and here is my /home/username/.ipython/profile_default/ipcluster_config.py:
c = get_config()
c.IPClusterStart.controller_launcher_class = 'LocalControllerLauncher'
c.IPClusterStart.engine_launcher_class = 'PBS'
c.PBSLauncher.batch_template_file = u'/home/username/.ipython/profile_default/pbs.engine.template'
Answer: You need to allow connections to the controller from other hosts, setting in
`~/.config/ipython/profile_default/ipcontroller_config.py`:
c.HubFactory.ip = '*'
Btw, I also updated the blog post.
|
Maintaining same settings.py in Windows virtualenv and heroku?
Question: Is there a way to maintain the same Django settings.py in both Heroku and a
local Windows system? I am using a Postgres database. I believe it is only the
DATABASE_URL set as an environment variable in Heroku that is making the
difference. Is there some way I could make this work in Windows by setting
DATABASE_URL in the environment? Or correct me if I am wrong here. I tried
setting the DATABASE_URL in Windows with:
set DATABASE_URL='postgres://[DB_USER]:[DB_PASS]@localhost:5432/[DBNAME]
But it is not working; I get an error when I run syncdb:
**"ImproperlyConfigured: settings.DATABASES is improperly configured. Please
supply the ENGINE value. Check settings documentation for more details."**
Below is my settings.py file:
# Django settings for triplanner project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
# ('Your Name', '[email protected]'),
)
MANAGERS = ADMINS
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': 'triplanner', # Or path to database file if using sqlite3.
# The following settings are not used with sqlite3:
'USER': 'akk',
'PASSWORD': 'admin',
'HOST': '', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
'PORT': '', # Set to empty string for default.
}
}
import dj_database_url
DATABASES['default'] = dj_database_url.config()
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts
ALLOWED_HOSTS = ['*']
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'America/Chicago'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/var/www/example.com/static/"
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
STATIC_ROOT = 'staticfiles'
# URL prefix for static files.
# Example: "http://example.com/static/", "http://static.example.com/"
STATIC_URL = '/static/'
# Additional locations of static files
STATICFILES_DIRS = (
os.path.join(BASE_DIR,'static')
# Put strings here, like "/home/html/static" or "C:/www/django/static".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'w+anq)bguzp0j2d-il*2nbvtwj^f1s4_exq@u)d!^z2ccc=)(3'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
# Uncomment the next line for simple clickjacking protection:
# 'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'triplanner.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'triplanner.wsgi.application'
TEMPLATE_DIRS = (
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'registration',
# Uncomment the next line to enable the admin:
# 'django.contrib.admin',
# Uncomment the next line to enable admin documentation:
# 'django.contrib.admindocs',
)
####### Begin Of Arguments specific to django-registration application ############
ACCOUNT_ACTIVATION_DAYS = 7
EMAIL_HOST = 'localhost'
DEFAULT_FROM_EMAIL = '[email protected]'
LOGIN_REDIRECT_URL = '/'
####### End Of Arguments specific to django-registration application ############
####### Begin Of Arguments specific to Invitation application ############
ACCOUNT_ACTIVATION_DAYS = 7
ACCOUNT_INVITATION_DAYS = 7
INVITATIONS_PER_USER = 3
INVITE_MODE = True
####### End Of Arguments specific to Invitation application ############
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
Answer: Basically this should work. Try setting `DATABASE_URL` in a file called `.env`
which you place next to your `Procfile`. Foreman will find it and use it when
running your processes. The syntax is simply `DATABASE_URL=...`, without any
quotes, and without the word "set", etc.
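For example, a `.env` file for the database from the question might look like
this (Foreman reads it automatically; the credentials are simply the ones from
your settings.py):
    DATABASE_URL=postgres://akk:admin@localhost:5432/triplanner
Equivalently, `set DATABASE_URL=postgres://akk:admin@localhost:5432/triplanner`
(no quotes) in a Windows shell should also work; the stray quote in your
original `set` command is likely why `dj_database_url.config()` could not work
out the `ENGINE`.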
|
Programmatically determine if a user is calling code from the notebook
Question: I'm writing some software that creates matplotlib plots of simulation data.
Since these plotting routines often run in a headless environment, I've chosen
to use the matplotlib object-oriented interface and explicitly assign canvases
to figures only just before they are saved. This means I cannot use pylab- or
pyplot-based solutions for this issue.
I've added some special sauce so the plots show up inline either by invoking a
`display` method on the plot object or by invoking `__repr__`. However, the
check I'm doing to determine if a user is running under IPython (checking for
`"__IPYTHON__" in dir(__builtin__)`) cannot discriminate whether the user is
in a notebook or just a regular terminal session where inline figures won't
work.
Is there some way to programmatically check whether a code snippet has been
executed in a notebook, qt console, or terminal IPython session? Am I doing
something silly here - I haven't looked too closely at the display semantics
so perhaps I'm ignorant about some portion of the IPython internal API that
will take care of this for me.
Answer: Answered many times: no, you can't.
[How can I check if code is executed in the IPython notebook?](http://stackoverflow.com/questions/15411967/how-can-i-check-if-code-is-executed-in-the-ipython-notebook)
The same kernel can be connected to a notebook, a qtconsole, and a terminal at
the same time, even to many at once.
Your question is like a TV star asking "how can I know if the person watching
me on TV is male or female?". It does not make sense.
Don't invoke `_repr_*_` yourself. Try to import `display`, and make it a no-op
if the import fails; that should be sufficient whether you're running in plain
Python or in IPython. Better to return the object instead of displaying it. The
display hook will work by itself in IPython if the object has a `_repr_png_` or
equivalent.
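A minimal sketch of that suggestion (the fallback definition is arbitrary):
    try:
        from IPython.display import display
    except ImportError:
        def display(*args, **kwargs):
            pass  # no-op when IPython is not available
Your plotting code can then call `display(obj)` unconditionally; objects with a
`_repr_png_` (or similar) will render inline whenever a frontend that supports
rich display is attached.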
|
Python regex substitution using a dictionary
Question: I have the following regex to match strings inside brackets and strip the
brackets (and any padding spaces):
>>> a = 'a[b]cdef[g ]hi[ j]klmno[ p ]'
>>> re.sub(r'\[\s?(.*?)\s?\]',r'\1',a)
'abcdefghijklmnop'
But what I want to do is have what is in brackets target a dictionary. Let's
say I have the following dictionary:
d = {'b':2,'g':7,'j':10,'p':16}
when I run my desired regex it should print the string: `'a2cdef7hi10klmno16'`
However, I cannot simply have the replace part of `sub` be `d['\1']` because
there will be a `KeyError: '\x01'`.
**Is there any simple way to replace a pattern with a dictionary responding to
a capture in regex?**
Answer: You can use
[`format`](http://docs.python.org/2/library/stdtypes.html#str.format),
assuming `a` doesn't contain substrings of the form `{...}`:
>>> import re
>>> a = 'a[b]cdef[g ]hi[ j]klmno[ p ]'
>>> d = {'b':2,'g':7,'j':10,'p':16}
>>>
>>> re.sub(r'\[\s?(.*?)\s?\]',r'{\1}',a).format(**d)
'a2cdef7hi10klmno16'
Or you can use a `lambda`:
>>> re.sub(r'\[\s?(.*?)\s?\]', lambda m: str(d[m.group(1)]), a)
'a2cdef7hi10klmno16'
* * *
The `lambda` solution appears to be much faster:
>>> from timeit import timeit
>>>
>>> setup = """
... import re
... a = 'a[b]cdef[g ]hi[ j]klmno[ p ]'
... d = {'b':2,'g':7,'j':10,'p':16}
... """
>>>
>>> timeit(r"re.sub(r'\[\s?(.*?)\s?\]',r'{\1}',a).format(**d)", setup)
13.796708106994629
>>> timeit(r"re.sub(r'\[\s?(.*?)\s?\]', lambda m: str(d[m.group(1)]), a)", setup)
6.593755006790161
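If some bracketed keys might be missing from `d`, a small variation of the
`lambda` version avoids the `KeyError` by leaving unknown keys in place (an
extra convenience, not something the question strictly needs):
    >>> re.sub(r'\[\s?(.*?)\s?\]', lambda m: str(d.get(m.group(1), m.group(1))), a)
    'a2cdef7hi10klmno16'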
|
In Mechanize, how to set the x,y value of ImageButton?
Question: I am using Python Mechanize as the web client to submit a form. However, I
could not find a way to set the x,y of an ImageButton in this form.
I have tried different ways:
1) the first is to find the control:
bt = frm.find_control('ImageButtonID')
bt.x = 88
bt.y = 10
2) the second one is to use a dictionary:
ctls = {}
ctls['ImageButton1.x'] = 88
ctls['ImageButton1.y'] = 10
data = urllib.urlencode(ctls)
br.open(url,data)
But neither way works. Is there any solution? Thanks a lot.
Here is the html code:
    TITLE
    function EnterTo(){ if (window.event.keyCode == 13){ form1.submit(); } }
    function ChangeImage() { document.getElementById("yzm").src = document.getElementById("yzm").src+'?'; }
<input name="deptID" type="hidden" id="deptID" value="1" /> <input name="dateType" type="hidden" id="dateType" value="Today" />
<input name="timeType" type="hidden" id="timeType" value="AM" />
<div class="numberBG">
<div class="Repeater">
<table class="Item">
<tr>
<td>
<input type="image" name="Repeater1$ctl00$ImageButton1" id="Repeater1_ctl00_ImageButton1" title="XXXXXXXX" src="images/1_1.jpg" style="border-width:0px;" />
</td>
</tr>
</table>
<table class="table">
<tr>
<td style="width:100px;" class="txt">the code</td>
<td style="width:90px;"><input name="Txt_Yzm" type="text" id="Txt_Yzm" onkeydown="if(event.keyCode==13)event.keyCode=9 " style="width:90px;" />
<td style="width:50px;"><img src="../../gif.aspx?" id="yzm" onclick="ChangeImage();" alt="try again" style="width:50px;height:24px"></td>
</tr>
<tr>
<td colSpan="3" style="width:240px;"><select name="DropDownList1" id="DropDownList1" style="width:240px;">
From Chrome, I can see the post data is:
__VIEWSTATE=%2FwEPDwUJMjE0NzU0MTk4D2QWAmYPZBYIAgMPDxYCHgRUZXh0BQnotKLliqHpg6hkZAIEDw8WAh8ABRfnlKjmiLfvvJo1NTczL%2BWImOWtpuS6rmRkAgUPFgIeC18hSXRlbUNvdW50AgQWCGYPZBYEAgEPDxYCHwAFLOWFseWPluWPt1s3NV3lvZPliY3lj7dbMTAyMF3nrYnlvoXkurrmlbBbNTVdZGQCAw8PFgYeD0NvbW1hbmRBcmd1bWVudAUBMR4ISW1hZ2VVcmwFDmltYWdlcy8xXzEuanBnHgdUb29sVGlwBRLnu7zlkIjmiqXplIDkuJrliqFkZAIBD2QWBAIBDw8WAh8ABSrlhbHlj5blj7dbMV3lvZPliY3lj7dbMjAwMV3nrYnlvoXkurrmlbBbMV1kZAIDDw8WBh8CBQEyHwMFDmltYWdlcy8xXzIuanBnHwQFDOW8gOelqOS4muWKoWRkAgIPZBYEAgEPDxYCHwAFKuWFseWPluWPt1sxXeW9k%2BWJjeWPt1szMDAxXeetieW%2BheS6uuaVsFswXWRkAgMPDxYGHwIFATMfAwUOaW1hZ2VzLzFfMy5qcGcfBAUS5Z%2B65bu65oql6ZSA5Lia5YqhZGQCAw9kFgQCAQ8PFgIfAAUr5YWx5Y%2BW5Y%2B3WzE1XeW9k%2BWJjeWPt1s0MDEzXeetieW%2BheS6uuaVsFsyXWRkAgMPDxYGHwIFATQfAwUOaW1hZ2VzLzFfNC5qcGcfBAUV5YCf5qy%2B44CB6Jaq6YWs5Yqz5YqhZGQCBw8QZBAVABUAFCsDAGRkGAEFHl9fQ29udHJvbHNSZXF1aXJlUG9zdEJhY2tLZXlfXxYGBRxSZXBlYXRlcjEkY3RsMDAkSW1hZ2VCdXR0b24xBRxSZXBlYXRlcjEkY3RsMDEkSW1hZ2VCdXR0b24xBRxSZXBlYXRlcjEkY3RsMDIkSW1hZ2VCdXR0b24xBRxSZXBlYXRlcjEkY3RsMDMkSW1hZ2VCdXR0b24xBQtQcmludEJ1dHRvbgUKQmFja0J1dHRvbpwJ7xuebnfVXIs68Z0mpioF3Dpy&Repeater1%24ctl00%24ImageButton1.x=74&Repeater1%24ctl00%24ImageButton1.y=19&Txt_Yzm=&__EVENTVALIDATION=%2FwEWCwLc%2Bdl4AqPSn9wLAuqo1NQIAq3v2K8JAtW984UHAtDAyPwJAuP1ziAC3oKsnQsC5NXi8AQCw9v50gkCz6%2BuzAH2E8Hvp9iYVUtn77jo3FKnheOfhg%3D%3D
Answer: From the look of the HTML, mechanize can't handle submitting this form. As
you already know which data to post, try to do it manually by using `urllib`
and `urllib2` to post the data:
import urllib
import urllib2
url = 'http://example.com'
form_data = {'field1': 'value1', 'field2': 'value2'}
params = urllib.urlencode(form_data)
response = urllib2.urlopen(url, params)
data = response.read()
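For this particular ASP.NET form, the dictionary you encode would presumably
need the hidden state fields plus the image-button coordinates seen in the
question's captured POST data; a hypothetical sketch (the `*_value` variables
stand for values scraped from the live page first):
    form_data = {
        '__VIEWSTATE': viewstate_value,              # scraped from the page
        '__EVENTVALIDATION': eventvalidation_value,  # scraped from the page
        'Txt_Yzm': captcha_text,                     # the verification code
        'Repeater1$ctl00$ImageButton1.x': '74',
        'Repeater1$ctl00$ImageButton1.y': '19',
    }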
|
How do I get django to tell me where I am inside my code
Question: I am starting to use logging in my Django project at the moment (one step at a
time) and I was wondering if there is a way to put where I am in my code into
the error message using Python. I.e. if I am in
`something.views.something_view`, how do I get this class/function location to
then tag onto `logging.error("something went wrong in " + ???)`?
Answer: You should configure the logging at application level (in settings.py) using a
dictionary, in this way:
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'formatters': {
'standard': {
'format': '[%(levelname)s] [%(asctime)s] [%(name)s:%(lineno)s] %(message)s',
'datefmt': '%d/%b/%Y %H:%M:%S'
},
},
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
},
'logfile': {
'level': 'DEBUG',
'class': 'logging.handlers.RotatingFileHandler',
'filename': os.path.join(LOGS_DIR, 'application.log'),
'maxBytes': 5242880, # 5MB
'backupCount': 10,
'formatter': 'standard',
},
'console':{
'level':'DEBUG',
'class':'logging.StreamHandler',
'formatter': 'standard'
},
},
'loggers': {
'django': {
'handlers':['console'],
'level':'DEBUG',
'propagate': False,
},
'django.db.backends': {
'handlers': ['console'],
'level': 'DEBUG',
'propagate': False,
},
'django.request': {
'handlers': ['mail_admins'],
'level': 'DEBUG',
'propagate': False,
},
'com.mysite': {
'handlers': ['console', 'logfile'],
'level': 'DEBUG',
'propagate': True,
},
}
}
The line:
'format': '[%(levelname)s] [%(asctime)s] [%(name)s:%(lineno)s] %(message)s'
will produce a log output like:
[DEBUG] [11/Aug/2013 12:34:43] [com.mysite.apps.myapp.middleware.MyMiddleware:28] My log message
where:
com.mysite.apps.myapp.middleware.MyMiddleware
is the class that has logged your message and :28 the line in your code. A
logger is configured in this way at module-level:
import logging
logger = logging.getLogger(__name__)
In this way, your logger name automatically resolves to the fully qualified
module name, and the `%(lineno)s` placeholder in the formatter fills in the
line number for you.
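With that in place, a view only needs something like the following (module and
view names taken from the question); note that the logger name
`something.views` must be covered by one of the configured loggers, e.g. by
replacing `'com.mysite'` above with `'something'` or adding a root logger:
    # something/views.py
    import logging

    logger = logging.getLogger(__name__)

    def something_view(request):
        logger.error("something went wrong")  # formatter adds something.views:<lineno>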
|
Opscenter 3.2.x not working on Debian 7
Question: After upgrading to Debian 7, OpsCenter refuses to start with the following
error:
2013-08-12 10:09:05+0200 [] INFO: Log opened.
2013-08-12 10:09:05+0200 [] INFO: twistd 10.2.0 (/usr/bin/python2.7 2.7.3) starting up.
2013-08-12 10:09:05+0200 [] INFO: reactor class: twisted.internet.selectreactor.SelectReactor.
2013-08-12 10:09:05+0200 [] INFO: set uid/gid 0/0
2013-08-12 10:09:05+0200 [] INFO: Logging level set to 'info'
2013-08-12 10:09:05+0200 [] INFO: OpsCenter version: 3.2.1
2013-08-12 10:09:05+0200 [] INFO: Compatible agent version: 3.2.1
2013-08-12 10:09:05+0200 [] INFO: Loading per-cluster config file /etc/opscenter/clusters/MLX_TEST.conf
2013-08-12 10:09:05+0200 [] INFO: HTTP BASIC authentication enabled
2013-08-12 10:09:05+0200 [] INFO: Starting webserver with ssl disabled.
2013-08-12 10:09:05+0200 [] INFO: Unhandled error in Deferred:
2013-08-12 10:09:05+0200 [] Unhandled Error
Traceback (most recent call last):
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/scripts/_twistd_unix.py", line 317, in startApplication
app.startApplication(application, not self.config['no_save'])
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/application/app.py", line 653, in startApplication
service.IService(application).startService()
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/application/service.py", line 277, in startService
service.startService()
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1141, in unwindGenerator
return _inlineCallbacks(None, f(*args, **kwargs), Deferred())
--- <exception caught here> ---
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/twisted/internet/defer.py", line 1020, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python2.7/dist-packages/opscenterd/OpsCenterdService.py", line 51, in startService
File "/usr/lib/python2.7/dist-packages/opscenterd/OpsCenterdService.py", line 136, in setupClusterManager
File "/usr/lib/python2.7/dist-packages/opscenterd/ClusterServices.py", line 66, in __init__
File "/usr/lib/python2.7/dist-packages/opscenterd/ClusterServices.py", line 264, in makeStatsReporter
File "/usr/lib/python2.7/dist-packages/opscenterd/StatsReporter.py", line 60, in makeReporter
File "/usr/lib/python2.7/dist-packages/opscenterd/StatsReporter.py", line 42, in __init__
File "/usr/lib/python2.7/dist-packages/opscenterd/SslUtils.py", line 3, in <module>
File "/usr/share/opscenter/lib/py-debian/2.7/amd64/OpenSSL/__init__.py", line 11, in <module>
import rand, crypto, SSL, tsafe
exceptions.ImportError: libssl.so.0.9.8: cannot open shared object file: No such file or directory
There seems to be no candidate for libssl0.9.8, which has been suggested in
other questions:
esb-a-test:~# apt-get install libssl0.9.8
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libssl0.9.8 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'libssl0.9.8' has no installation candidate
I do see the lib in /usr/lib32, but not in /usr/lib.
Answer: This is a known issue, but the workaround is documented here:
<http://www.datastax.com/documentation/opscenter/3.2/webhelp/#opsc/troubleshooting/opscTroubleshooting_g.html#topic_opsc_troubleshooting>
See the section relating to that import error.
|
Where can I find an updated tutorial for Tkinter
Question: I need to write a small application in python. From what I read, `Tkinter`
should be a reasonable option to use. I tried several tutorials about it and
all said I should import the module using
import Tkinter
which fails. Using
import tkinter
however, succeeds. From that I gather there was a version update since the
tutorial(s) were written. I tried to ignore this, but I quickly ran into
errors which are not as easily solved. I guess I need to either switch
tutorials or switch GUI libraries. So do you have a link to a recent tutorial,
or to a freshly documented GUI library for Python?
Answer: The only significant change to tkinter from Python 2.x to Python 3.x is how
it is imported. The core functionality is the same. Any tkinter tutorial
written for 2.x should work just fine for 3.x once you get past the import issue.
That being said, the tutorial at tkdocs.com is probably the most modern
tutorial at the time I write this. See
<http://www.tkdocs.com/tutorial/index.html>
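If you want code from a 2.x tutorial to run unchanged on either version, the
import is usually the only thing to adapt; a common sketch:
    try:
        import tkinter as tk   # Python 3
    except ImportError:
        import Tkinter as tk   # Python 2

    root = tk.Tk()
    tk.Label(root, text="Hello, tkinter").pack()
    root.mainloop()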
|
How to instantiate objects during a unit test's setup phase in Python
Question: I've been diving into unit testing with Python, but can't figure out how I'm
supposed to instantiate the object I want to test during the setup phase, and
end up with a new object for each test. For example, I have the following
class I want to test:
class Cfg():
data = {}
def set(self, key, value):
self.data[key] = value
def get(self, key):
return self.data.get(key, None)
For each unit test, I want a newly instantiated `Cfg` object. My tests look
like this:
from cfg import Cfg
class TestCfg():
def setup(self):
self.cfg = Cfg()
def teardown(self):
self.cfg = None
def test_a(self):
self.cfg.set('foo', 'bar')
assert self.cfg.get('foo') == 'bar'
def test_b(self):
assert self.cfg.get('foo') == 'bar'
I don't understand why `test_b` passes. I expected `setup` and `tearDown` to
'reset' my `cfg` instance, but it seems that `cfg` is persisting between
tests. What am I doing wrong here and how can I achieve the expected behavior?
Answer: This is related to how you've written the `Cfg` class. Move the `data`
initialization into the `__init__` method:
class Cfg():
def __init__(self):
self.data = {}
def set(self, key, value):
self.data[key] = value
def get(self, key):
return self.data.get(key, None)
And you'll see `test_b` fail, because `data` is now per-instance rather than a
class attribute shared between instances.
|
ViewDoesNotExist error on linux but same code works on windows, why?
Question: I'm currently trying to learn Django and I've been messing around with an idea
for a website on my local machines. I've run into the following error, which I
don't seem to be able to solve.
When I run the development server on my windows machine everything works as I
expect it to, however, when I run the (same) code on my linux machine I get a
ViewDoesNotExist error. However the view definitely exists in the views.py
file and the path is definitely set up correctly (as I can see from the
traceback).
I've read that for some reason Django sometimes gives this error message
when it actually has a problem with something else, perhaps something
imported by the views.py file, so I ran
`python manange.py shell`
and tried to import my views, which failed. But my views.py file doesn't
import anything other than my models from my models.py file, so I tried to
import them manually. I found that only one of the models would import
properly and the other two would always fail. For example, when trying to run:
`from racing.models import Event`
I get the following error
`ImportError: cannot import name Event`
However, when I run:
`from racing.models import Race`
It works fine and I can work with the Race class in the shell
It is as if it can't even see them? All the code can be found here:
<https://github.com/sj175/ulmk>
If anyone could help me figure out how to solve this error so that I can
continue using django on my linux machine I would be very grateful.
Answer: Using Django's `manage.py startproject` should create a directory layout like:
cms/
manage.py
cms/
__init__.py
settings.py
urls.py
wsgi.py
It looks like this is the case with `cms`. But it looks like your apps are a
directory higher than they should be:
cms/
manage.py
cms/
__init__.py
settings.py
urls.py
wsgi.py
coltrane/
racing/
tagging/
By default, I believe those apps should be inside the `cms` directory, i.e.
inside your Django project:
cms/
manage.py
cms/
__init__.py
settings.py
urls.py
wsgi.py
coltrane/
racing/
tagging/
So I'm guessing that your apps are on your PYTHONPATH on Windows, but on
Linux they are not.
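One way to confirm this (purely a diagnostic sketch, using the `racing` app
from the question) is to check from `python manage.py shell` on both machines
where Python actually finds the app:

    # Run inside `python manage.py shell` on both machines; if the results
    # differ, that's where the ImportError / ViewDoesNotExist is coming from.
    import sys
    print sys.path          # is the directory containing racing/ on it?

    import racing
    print racing.__file__   # which copy of the app is actually being imported?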
|
Plotting a line in between subplots
Question: I have created a plot in Python with Pyplot that has multiple subplots.
I would like to draw a line which is not on any of the plots. I know how to
draw a line which is part of a plot, but I don't know how to do it in the
white space between the plots.
Thank you.
* * *
Thank you for the link but I don't want vertical lines between the plots. It
is in fact a horizontal line above one of the plots to denote a certain range.
Is there not a way to draw an arbitrary line on top of a figure?
Answer: First off, a quick way to do this is just to use `axvspan` with y-coordinates
greater than 1 and `clip_on=False`. It draws a rectangle rather than a line,
though.
As a simple example:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(range(10))
ax.axvspan(2, 4, 1.05, 1.1, clip_on=False)
plt.show()

For drawing lines, you just specify the `transform` that you'd like to use as
a kwarg to `plot` (the same applies to most other plotting commands,
actually).
To draw in "axes" coordinates (e.g. 0,0 is the bottom left of the axes, 1,1 is
the top right), use `transform=ax.transAxes`, and to draw in figure
coordinates (e.g. 0,0 is the bottom left of the figure window, while 1,1 is
the top right) use `transform=fig.transFigure`.
As @tcaswell mentioned, `annotate` makes this a bit simpler for placing text,
and can be very useful for annotations, arrows, labels, etc. You could do this
with annotate (by drawing a line between a point and a blank string), but if
you just want to draw a line, it's simpler not to.
For what it sounds like you're wanting to do, though, you might want to do
things a bit differently.
It's easy to create a transform where the x-coordinates use one transformation
and the y-coordinates use a different one. This is what `axhspan` and
`axvspan` do behind the scenes. It's very handy for something like what you
want, where the y-coordinates are fixed in axes coords, and the x-coordinates
reflect a particular position in data coords.
The following example illustrates the difference between just drawing in axes
coordinates and using a "blended" transform instead. Try panning/zooming both
subplots, and notice what happens.
import matplotlib.pyplot as plt
from matplotlib.transforms import blended_transform_factory
fig, (ax1, ax2) = plt.subplots(nrows=2)
# Plot a line starting at 30% of the width of the axes and ending at
# 70% of the width, placed 10% above the top of the axes.
ax1.plot([0.3, 0.7], [1.1, 1.1], transform=ax1.transAxes, clip_on=False)
# Now, we'll plot a line where the x-coordinates are in "data" coords and the
# y-coordinates are in "axes" coords.
# Try panning/zooming this plot and compare to what happens to the first plot.
trans = blended_transform_factory(ax2.transData, ax2.transAxes)
ax2.plot([0.3, 0.7], [1.1, 1.1], transform=trans, clip_on=False)
# Reset the limits of the second plot for easier comparison
ax2.axis([0, 1, 0, 1])
plt.show()
## Before panning

## After panning

Notice that with the bottom plot (which uses a "blended" transform), the line
is in data coordinates and moves with the new axes extents, while the top line
is in axes coordinates and stays fixed.
|
Python: Docstrings are not being updated
Question: I have a docstring on the very first line of a Python file called a_file.py:
"""some text"""
When I print this using
import a_file
print a_file.__doc__
it prints `some text` as expected. However, whenever I change the docstring
to
"""different text"""
it still prints `some text`. I have made sure the file was saved with the
changes. I have a feeling that there is something very simple that I am just
overlooking. Any help would be greatly appreciated. I have read over the
[python page on docstrings](http://www.python.org/dev/peps/pep-0257/) but
still no luck.
EDIT: SOLVED - I have figured it out. Basically, I have a makefile which
creates a new file. I was printing from the new file when I thought I was
actually printing from the source file. When I re-ran the makefile with the
edited text, all was well.
Answer: Are you editing the module's docstring? If so, you'll need to reimport the
file, as its docstring is now in memory. Use
[reload](http://docs.python.org/2/library/functions.html#reload) for this.
If not, this code works:
>>> def a():
... """ hello """
... print 'aa'
...
>>> a.__doc__
' hello '
>>> a.__doc__ = 'bla bla'
>>> a.__doc__
'bla bla'
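For the module-level docstring from the question, a small sketch of the
reload approach (assuming Python 2, where `reload` is a builtin):

    import a_file
    print a_file.__doc__      # the docstring that was loaded at import time

    # ... after editing the docstring in a_file.py and saving ...
    reload(a_file)            # re-executes the module so the new source is read
    print a_file.__doc__      # now reflects the edited docstring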
|
lxml: Obtain file current line number when calling etree.iterparse(f)
Question: Since no one answered or commented on this post, I have decided to rewrite
it.
Consider the following Python code using lxml:
treeIter = etree.iterparse(fObj)
for event, ele in treeIter:
if ele.tag == 'logRoot':
try:
somefunction(ele)
except InternalException as e:
e.handle(*args)
ele.clear()
InternalException is user-defined and wraps all exceptions from somefunction()
besides lxml.etree.XMLSyntaxError. InternalException has a well-defined handler
function .handle().
fObj has "trueRoot" as its top-level tag and many "logRoot" elements as 2nd-level leaves.
My question is: is there a way to record the current line number when handling the
exception e? *args can be replaced by any arguments available.
Any suggestion is much appreciated.
Answer:
import lxml.etree as ET
import io
def div(x):
return 1/x
content = '''\
<trueRoot>
<logRoot a1="x1"> 2 </logRoot>
<logRoot a1="x1"> 1 </logRoot>
<logRoot a1="x1"> 0 </logRoot>
</trueRoot>
'''
for event, elem in ET.iterparse(io.BytesIO(content), events=('end', ), tag='logRoot'):
num = int(elem.text)
print('Calling div({})'.format(num))
try:
div(num)
except ZeroDivisionError as e:
print('Ack! ZeroDivisionError on line {}'.format(elem.sourceline))
prints
Calling div(2)
Calling div(1)
Calling div(0)
Ack! ZeroDivisionError on line 4
|
launching Excel application using Python to view the CSV file , but CSV file is opening in read mode and cant view the data written on it
Question:
from win32com.client import Dispatch
base_path = os.path.dirname(os.path.abspath(__file__))
_csvFilename = os.path.join(base_path, "bcForecasting.csv")
_csvFile = open (_csvFilename, 'wb')
_csvFile = csv.writer(_csvFile, quoting=csv.QUOTE_ALL)
_Header = ['Name']+self.makeIntoList (self.root.tss.series () [0].getAllTimes (), self.originalTimesteps + _futurePeriods)
_csvFile.writerow (_Header)
xl = Dispatch('Excel.Application')
wb = xl.Workbooks.Open(_csvFilename)
xl.Visible = True
Here I am launching the Excel application using Python to view the CSV file, but
the CSV file opens in read mode and I can't view the data written to it. Please
help.
Answer: You need to _close_ the `csv` file before you open it with Excel:
    with open(_csvFilename, 'wb') as _csvFile:
        _csvFile = csv.writer(_csvFile, quoting=csv.QUOTE_ALL)
        _Header = ['Name'] + self.makeIntoList(self.root.tss.series()[0].getAllTimes(), self.originalTimesteps + _futurePeriods)
        _csvFile.writerow(_Header)

    xl = Dispatch('Excel.Application')
    wb = xl.Workbooks.Open(_csvFilename)
    xl.Visible = True
By using a `with` statement the open file object is automatically closed when
the block indented under the statement completes.
Windows doesn't like it when more than one application has a file open.
|
Get revision value if present,if not get default revision value
Question: INPUT:-
<manifest>
<default revision="jb_2.5.4" remote="quic"/>
<project name="platform/vendor/google/proprietary/widevine"
path="vendor/widevine"
revision="refs/heads/jb_2.6"
x-grease-customer="none"
x-quic-dist="none"
x-ship="none" />
<project path="vendor/widevine" name="platform/vendor/google/proprietary/bluetooth" x-ship="none" x-quic-dist="none" x-grease-customer="none"/>
</manifest>
I am trying to get the revision value from the input above if the revision
attribute is present; if it is not present, use the revision value in the
default tag. I have the following code and am running into the following error.
Can anyone provide input on what is wrong here?
import shlex
import os
import sys
import json
import fileinput
import pwd
import itertools
import subprocess
from subprocess import Popen, PIPE, STDOUT
import xml.etree.ElementTree as ET
import re
def manifest_data (name):
pattern = re.compile('refs/heads/(.*)')
tree = ET.parse('.repo/manifests/default.xml')
root = tree.getroot()
project = root.find("./project[@name='%s']" % name)
revision = project.get('revision')
res = pattern.match(revision)
return res.group(1)
def main ():
branch_name = "jb_2.5.4"
print "branch_name"
print branch_name
projects = ['platform/vendor/google/proprietary/widevine','platform/vendor/google/proprietary/bluetooth']
for project in projects :
branch = manifest_data(project)
print branch
if __name__ == '__main__':
main()
Error:-
File "branch_manifest.py", line 35, in <module>
main()
File "branch_manifest.py", line 32, in main
branch = manifest_data(project)
File "branch_manifest.py", line 18, in manifest_data
project = root.find("./project[@name='%s']" % name)
File "/usr/lib/python2.6/xml/etree/ElementTree.py", line 330, in find
return ElementPath.find(self, path)
File "/usr/lib/python2.6/xml/etree/ElementPath.py", line 186, in find
return _compile(path).find(element)
File "/usr/lib/python2.6/xml/etree/ElementPath.py", line 176, in _compile
p = Path(path)
File "/usr/lib/python2.6/xml/etree/ElementPath.py", line 93, in __init__
"expected path separator (%s)" % (op or tag)
SyntaxError: expected path separator ([)
Answer: It works well with Python 2.7, but I got the same exception with Python 2.6.
You can reference [ElementTree XPath - Select Element based on attribute](http://stackoverflow.com/questions/222375/elementtree-xpath-select-element-based-on-attribute).
It gives the answer: Python 2.6 does not support syntax like
`./project[@name='%s']`.
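If you have to stay on Python 2.6, one possible workaround (a sketch based on
the question's code, not tested on 2.6) is to skip the attribute predicate and
filter in Python, which also lets you fall back to the `default` revision as
the question asks:

    import re
    import xml.etree.ElementTree as ET

    def manifest_data(name):
        pattern = re.compile('refs/heads/(.*)')
        tree = ET.parse('.repo/manifests/default.xml')
        root = tree.getroot()
        default_rev = root.find('default').get('revision')
        for project in root.findall('project'):
            if project.get('name') == name:
                # Use the project's own revision if present, else the default
                revision = project.get('revision', default_rev)
                match = pattern.match(revision)
                return match.group(1) if match else revision
        return default_rev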
|
get the last sunday and saturday's date in python
Question: Looking to leverage `datetime` to get the dates of the beginning and end of the
previous week, Sunday to Saturday.
So, if it's 8/12/13 today, I want to define a function that prints:
`Last Sunday was 8/4/2013 and last Saturday was 8/10/2013`
How do I go about writing this?
EDIT: okay, so there seems to be some question about edge cases. For
Saturdays, I want the same week; for anything else, I'd like the calendar week
immediately preceding `today`'s date.
Answer: [datetime.date.weekday](http://docs.python.org/2/library/datetime.html#datetime.date.weekday)
returns `0` for Monday. You need to adjust that.
Try the following:
>>> import datetime
>>> today = datetime.date.today()
>>> today
datetime.date(2013, 8, 13)
>>> idx = (today.weekday() + 1) % 7 # MON = 0, SUN = 6 -> SUN = 0 .. SAT = 6
>>> idx
2
>>> sun = today - datetime.timedelta(7+idx)
>>> sat = today - datetime.timedelta(7+idx-6)
>>> 'Last Sunday was {:%m/%d/%Y} and last Saturday was {:%m/%d/%Y}'.format(sun, sat)
'Last Sunday was 08/04/2013 and last Saturday was 08/10/2013'
If you are allowed to use [python-dateutil](http://labix.org/python-dateutil):
>>> import datetime
>>> from dateutil import relativedelta
>>> today = datetime.datetime.now()
>>> start = today - datetime.timedelta((today.weekday() + 1) % 7)
>>> sat = start + relativedelta.relativedelta(weekday=relativedelta.SA(-1))
>>> sun = sat + relativedelta.relativedelta(weekday=relativedelta.SU(-1))
>>> 'Last Sunday was {:%m/%d/%Y} and last Saturday was {:%m/%d/%Y}'.format(sun, sat)
'Last Sunday was 08/04/2013 and last Saturday was 08/10/2013'
|
Get unique combinations of elements from a python list
Question: Edit: This is not an exact duplicate of [Python code to pick out all possible
combinations from a list?](http://stackoverflow.com/questions/464864/python-code-to-pick-out-all-possible-combinations-from-a-list)
This topic is about finding unique combinations while the other topic is about
finding ALL combinations.
If I have a python list:
L = [1,2,3,4]
what's the best way to get all the possible _unique combinations_ of 3
elements from the list like below:
["1,2,3", "1,2,4", "2,3,4", "3,4,1"]
The order of the elements in the combinations doesn't matter. For example,
`"1,2,3"` and `"3,2,1"` will be considered the same combination.
I can probably write a few loops to do this but I think there might be a one-
liner which can do the same.
Answer: You need
[`itertools.combinations`](https://docs.python.org/2/library/itertools.html#itertools.combinations):
>>> from itertools import combinations
>>> L = [1, 2, 3, 4]
>>> [",".join(map(str, comb)) for comb in combinations(L, 3)]
['1,2,3', '1,2,4', '1,3,4', '2,3,4']
|
How to sort higher to lower in Python
Question: Hi, I need to **sort** an int value in a **for loop** and I am running this in
the terminal. Below is the code:
match = myworld.objects.filter(series=1)
for i in match:
print i.w
which gives results like
34
32
24
39
32
33
36
33
23
34
38
38
32
31
30
34
30
30
31
36
31
35
33
43
Please tell me how I can sort them so that I get them from **lower** to **higher**
or higher to lower. Thanks.
Answer: For lower to higher:

    from operator import attrgetter

    for i in sorted(match, key=attrgetter('w')):
        print i.w

For higher to lower:

    for i in sorted(match, key=attrgetter('w'), reverse=True):
        print i.w

If you don't need the objects, you can also just sort the attribute you are
interested in:

    for i in sorted(x.w for x in match):
        print i
|
Python automating text data files into csv
Question: I am trying to automate a process where in a specific folder, there are
multiple text files following the same data format/structure. In the text
files, the data is separated by a comma. I want to be able to output all of
these text files into one cumulative csv file. This is what I currently have,
and seem to be stuck where I am because of my lack of python knowledge.
from collections import defaultdict
import glob
def get_site_files():
sites = defaultdict(list)
for fname in glob.glob('*.txt'):
csv_out = csv.writer(open('out.csv', 'w'), delimiter=',')
f = open('myfile.txt')
for line in f:
vals = line.split(',')
csv_out.writerow()
f.close()
EDIT: to address the comments: I want to make sure that all of the text files are
read, not only myfile.txt.
Also, if I could combine them all into one large .txt file and then turn that
into a CSV, that would be great too; I'm just not sure of the exact way to do
this.
Answer: Just a little bit of reordering of your code.
import csv
import glob
def get_site_files():
with open('out.csv', 'w') as out_file:
csv_out = csv.writer(out_file, delimiter=',')
for fname in glob.glob('*.txt'):
with open(fname) as f:
for line in f:
vals = line.split(',')
csv_out.writerow(vals)
get_site_files()
But since they are all in the same format you can just concatenate them:
import glob
    with open('out.csv', 'w') as fout:
        for fname in glob.glob('*.txt'):
            with open(fname, 'r') as fin:
                fout.write(fin.read())
|
Python Matplotlib : Just the graph image
Question: I cannot get [pyplot](http://matplotlib.org/users/transforms_tutorial.html) to
produce "cropped" images, that is, to get rid of the grey left and right borders.
As it is, it is not an accurate representation of the sound waveform: this
sound file has no silence before and after.

My code :
import gtk
from matplotlib.figure import Figure
from numpy import arange, sin, pi
import scipy.io.wavfile as wavfile
from matplotlib.backends.backend_gtkagg import FigureCanvasGTKAgg as FigureCanvas
win = gtk.Window()
win.connect("destroy", lambda x: gtk.main_quit())
win.set_default_size(400,300)
win.set_title("Cropping figure")
rate, data = wavfile.read(open('/home/px/gare_du_nord-catchlak.wav', 'r'))
f = Figure()
a = f.add_subplot(111, axisbg=(0.1843, 0.3098, 0.3098))
a.plot(range(len(data)),data, color="OrangeRed", linewidth=0.5, linestyle="-")
a.axis('off')
a.autoscale_view('tight')
canvas = FigureCanvas(f) # a gtk.DrawingArea
win.add(canvas)
win.show_all()
gtk.main()
Answer: OK, I got [my answer](http://stackoverflow.com/questions/8182124/how-do-i-
autosize-text-in-matplotlib-
python/8188287?noredirect=1#comment26699981_8188287) :
f.subplots_adjust(0, 0, 1, 1)
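In terms of the question's code, the call goes on the `Figure` object before
the canvas is created; a sketch of the relevant lines (keyword form shown for
clarity, equivalent to the positional call above):

    f = Figure()
    a = f.add_subplot(111, axisbg=(0.1843, 0.3098, 0.3098))
    a.plot(range(len(data)), data, color="OrangeRed", linewidth=0.5, linestyle="-")
    a.axis('off')
    # left=0, bottom=0, right=1, top=1: the axes fill the whole figure,
    # removing the grey borders around the plot area.
    f.subplots_adjust(left=0, bottom=0, right=1, top=1)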
|
Is there a way to check if a module is being loaded by multiprocessing standard module in Windows?
Question: I believe that on Windows, because there is no _fork_, the _multiprocessing_
module reloads modules in new Python processes.
You are required to have this code in your main script, otherwise very nasty
crashes occur:
if __name__ == '__main__':
from multiprocessing import freeze_support
freeze_support()
I have a bunch of modules which have debug print statements in them at the
module level. Therefore, the print statements get called whenever a module is
being loaded.
Whenever I run something in parallel all of these print statements are
executed.
My question is whether there is a way to see if a module is being imported by the
_multiprocessing_ module, and if so, silence those print statements.
I'm basically looking for something like:
import multiprocessing
if not multiprocessing.in_parallel_process:
print('Loaded module: ' + __name___)
I've been unable to find it so far. Is this possible?
Answer: **Yes**, the _multiprocessing_ module does provide a way to check whether the
module is being executed in a subprocess or in the main process.
from multiprocessing import Process, current_process
if current_process().name == 'MainProcess':
print('Hello from the main process')
else:
print('Hello from child process')
def f(name):
print('hello', name)
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
Output:
Hello from the main process
Hello from child process
hello bob
|
Why method now in Python is obtained as datetime.datetime.now instead of datetime.time.now?
Question: I would like to know why the method **now** was implemented under the
datetime.datetime label instead of under datetime.time.
For example, to obtain today's date in Python you do the following:
import datetime
print datetime.date.today()
but you can't do the same for the time now; I mean:
print datetime.time.now()
Instead you have to do the following:
print datetime.datetime.now()
Answer: "Now" is a point in time. That means date matters; if it's noon now, yesterday
noon is not also now. (Time of day also matters; 9 AM today is also not now.)
Thus, it makes sense to have a `datetime.datetime.now`, but not a
`datetime.time.now`. It could make sense to have a
`datetime.time.currentlocaltime` or `datetime.time.currentutctime`, but those
methods don't exist. You could put in a feature request if you want.
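If what you actually want is just the time-of-day portion of "now", one option
(a quick sketch) is to take it from the datetime:

    import datetime

    now = datetime.datetime.now()   # "now" as a full point in time
    print now.time()                # just the time-of-day part, a datetime.time
    print now.date()                # just the date part, like datetime.date.today()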
|
How to pass arguments to a python greenlet as additional arguments
Question: I'm looking for a way to pass function arguments through another function, in
a manner identical to Stackless' `tasklet` instantiation:
stackless.tasklet(function_being_called)(*args)
So far the best way I've come up with is:
mylib.tasklet(function_being_called,*args)
which works, but is not identical to Stackless' syntax. I'm not sure where to
look in the documentation to find out how to accomplish this (hence the rather
ambiguous title for this question). Is this even possible, or is it part of
Stackless' changes to the interpreter?
EDIT: I'm now aware that there is an approach which will work for functions,
but I'm unsure if it will work in my case. I'm using the greenlet library:
greenlet threads obtain args when the greenlet is `switch()`ed to, not on
instantiation. Calling them as shown below results in
`TypeError: 'greenlet.greenlet' object is not callable`.
Using `greenlet.greenlet(function(args))` (still not right syntax) executes
immediately and still requires args in the `switch()` method. Hence I
currently store variables in-class, using the syntax shown above, to pass when
calling `switch()`. Hope this doesn't change the question too much!
As requested, here is the code in question. First, a variant on eri's answer
(DISCLAIMER: I've never used decorators before):
import greenlet # Background "greenlet" threadlet library
_scheduled = [] # Scheduler queue
def newtasklet(func): # Returns a greenlet-making function & switch() arguments.
def inner(*args,**kwargs):
newgreenlet = greenlet.greenlet(func,None)
return newgreenlet,args,kwargs
return inner
class tasklet():
def __init__(self,function=None):
global _scheduled
initializer = newtasklet(function)
self.greenlet,self.variables,self.kvars = initializer()
_scheduled.append(self)
self.blocked = False
tasklet(print)("A simple test using the print function.")
Traceback (most recent call last):
File "<pyshell#604>", line 1, in <module>
tasklet(print)("A simple test using the print function.")
TypeError: 'tasklet' object is not callable
Original code (working but not syntactically ideal):
class tasklet():
def __init__(self,function=None,*variables,parent=None):
global _scheduled
self.greenlet = greenlet.greenlet(function,parent)
self.variables = variables
_scheduled.append(self)
self.blocked = False
>>> tasklet(print,"A simple test using the print function.")
<__main__.tasklet object at 0x7f352280e610>
>>> a = _scheduled.pop()
>>> a.greenlet.switch(*a.variables)
A simple test using the print function.
Answer: I'm not familiar with Stackless, but what's happening there is that the
`tasklet` function returns a reference to a function, which is then called
by the interpreter with the `*args`.
An example:
def return_the_method(method):
return method
def add_two(num1, num2):
return num1 + num2
If you have this and then run `return_the_method(add_two)(1, 2)`, you'll get `3`.
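For the greenlet case described in the edit, one possible way to keep the
Stackless-style call syntax while still deferring the args to `switch()` is to
make the tasklet object itself callable and have `__call__` just store the
args; a rough sketch based on the question's class (untested):

    import greenlet

    _scheduled = []  # Scheduler queue, as in the question

    class tasklet(object):
        def __init__(self, function, parent=None):
            self.greenlet = greenlet.greenlet(function, parent)
            self.variables = ()
            self.blocked = False

        def __call__(self, *variables):
            # Called as tasklet(func)(args...): remember the args so they can
            # be passed later when switch() is called.
            self.variables = variables
            _scheduled.append(self)
            return self

    # Usage mirrors the Stackless syntax:
    #   tasklet(print)("A simple test using the print function.")
    #   t = _scheduled.pop()
    #   t.greenlet.switch(*t.variables)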
|
how to download attachments from e-mail and keep original filename? using Python/outlook
Question: I'm trying to download the attachments in an email from Outlook using Python
and the Windows extensions. So far I've tried the following:
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6).Folders('Subfolder')
messages = inbox.Items
message = messages.GetLast() #open last message
attachments = message.Attachments #assign attachments to attachment variable
attachment = attachments.Item(1)
attachment.SaveASFile("File_name")
This code will save the file under the filename "File_name". Is there any
way I could use the original file name when saving?
Answer: Sure, use the `Attachment.FileName` property (concatenate it with the
directory name where you want to save the attachment).
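Put together with the code from the question, that could look something like
this (a sketch; `save_dir` is a placeholder path you would replace with your
own, and `FileName`/`SaveAsFile` are the Outlook attachment members being
used):

    import os
    import win32com.client

    outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
    inbox = outlook.GetDefaultFolder(6).Folders('Subfolder')
    message = inbox.Items.GetLast()

    save_dir = r"C:\path\to\save"    # placeholder directory
    attachments = message.Attachments
    for i in range(1, attachments.Count + 1):
        attachment = attachments.Item(i)
        # FileName holds the attachment's original name, e.g. "report.xlsx"
        attachment.SaveAsFile(os.path.join(save_dir, attachment.FileName))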
|
Problems Accessing MS Word 2010 with Python
Question: I am using Python with Eclipse. I need to access an MS Word file with Python. I
have seen some examples on this and I have already installed pywin32. I tried
some of the examples, but I am getting some errors.
import win32com.client as win32
word = win32.Dispatch("Word.Application")
word.Visible = 0
word.Documents.Open("myfile.docx")
doc = word.ActiveDocument
print doc.Content.Text
word.Quit()
This is the error I am getting. It would be great if anyone can tell me what I
did wrong here.
Traceback (most recent call last):
File "C:\Users\dino\Desktop\Python27\Test\src\AccessWordDoc.py", line 10, in <module>
word = win32.Dispatch("Word.Application")
File "C:\Python27\lib\site-packages\win32com\client\__init__.py", line 95, in Dispatch
dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch,userName,clsctx)
File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 114, in _GetGoodDispatchAndUserName
return (_GetGoodDispatch(IDispatch, clsctx), userName)
File "C:\Python27\lib\site-packages\win32com\client\dynamic.py", line 91, in _GetGoodDispatch
IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch)
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
Is there another way to access the MS word file and extract the data in it
without going through all this?
Answer: The code below worked for me, which is just a simple change of
"Word.Application" to "Word.Application.8":
import win32com.client as win32
word = win32.Dispatch("Word.Application.8")
word.Visible = 0
word.Documents.Open("myfile.docx")
doc = word.ActiveDocument
print doc.Content.Text
word.Quit()
I came to this solution following @Torxed's suggestion to examine the
registry. When I tried Word.Document.8, the set of methods available did not
include .Visible, .Quit, and .Open and so @Torxed's solution did not work for
me. (It is clear now that the Application and Word objects are intended to
have different uses.) Instead, I also found Word.Application,
Word.Application.8, and Word.Application.14 under my registry and just tried
Word.Application.8 and it worked as expected.
|
scrapy tutorial: cannot run scrapy crawl dmoz
Question: I'm asking a new question because I'm aware I wasn't clear enough in the last
one. I'm trying to follow the Scrapy tutorial, but I'm stuck on the crucial
step, the "scrapy crawl dmoz" command. The code is this one (I have written
it in the Python shell and saved it with a .py extension):
ActivePython 2.7.2.5 (ActiveState Software Inc.) based on
Python 2.7.2 (default, Jun 24 2011, 12:20:15)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "copyright", "credits" or "license()" for more information.
>>> from scrapy.spider import BaseSpider
class dmoz(BaseSpider):
name = "dmoz"
allowed_domains = ["dmoz.org"]
start_urls = [
"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
]
def parse(self, response):
filename = response.url.split("/")[-2]
open(filename, 'wb').write(response.body)
>>>
The directory I'm using should be fine, please find below the tree:
.
βββ scrapy.cfg
βββ tutorial
βββ __init__.py
βββ __init__.pyc
βββ items.py
βββ pipelines.py
βββ settings.py
βββ settings.pyc
βββ spiders
βββ __init__.py
βββ __init__.pyc
βββ dmoz_spider.py
2 directories, 10 files
Now when I try to run "scrapy crawl dmoz" I get this:
$ scrapy crawl dmoz
2013-08-14 12:51:40+0200 [scrapy] INFO: Scrapy 0.16.5 started (bot: tutorial)
2013-08-14 12:51:40+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/bin/scrapy", line 5, in <module>
pkg_resources.run_script('Scrapy==0.16.5', 'scrapy')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 499, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 1235, in run_script
execfile(script_filename, namespace, namespace)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
execute()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 131, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 76, in _run_print_help
func(*a, **kw)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 138, in _run_command
cmd.run(args, opts)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/commands/crawl.py", line 43, in run
spider = self.crawler.spiders.create(spname, **opts.spargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/command.py", line 33, in crawler
self._crawler.configure()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/crawler.py", line 40, in configure
self.spiders = spman_cls.from_crawler(self)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 35, in from_crawler
sm = cls.from_settings(crawler.settings)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 31, in from_settings
return cls(settings.getlist('SPIDER_MODULES'))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 22, in __init__
for module in walk_modules(name):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/utils/misc.py", line 65, in walk_modules
submod = __import__(fullpath, {}, {}, [''])
File "/Users//Documents/tutorial/tutorial/spiders/dmoz_spider.py", line 1
ActivePython 2.7.2.5 (ActiveState Software Inc.) based on
^
SyntaxError: invalid syntax
Does anybody know what is wrong with the steps I'm making? Thank you for your
help. This is my very first programming experience, so it might be a very
stupid issue.
Answer: It's not an indentation problem; the error message is clear:
File "/Users//Documents/tutorial/tutorial/spiders/dmoz_spider.py", line 1
ActivePython 2.7.2.5 (ActiveState Software Inc.) based on
^
SyntaxError: invalid syntax
You have clearly copy-pasted the code from IDLE, including the IDLE startup
banner, which is not code.
Instead of copy-pasting, try opening an editor and actually typing the
tutorial code there; you'll learn better and you won't be pasting junk
inadvertently.
|
python pandas merging data based off 2 keys
Question: Right now I have 2 dataframes: one with donor information and one with
fundraiser information. Ideally, what I want to do is, for each donor, sum up
their donations and store the total in the fundraiser dataframe. The problems
are that it is possible to have a fundraiser in multiple events (so I need to
use the id and event as the key) and that not all fundraisers actually collect
anything. I've figured out how to group the donation dataframe to calculate
the amount raised by the fundraisers that collected anything, but I have no
idea how to then get that information over to the fundraiser dataframe :(
import pandas as pd
Donors = pd.DataFrame({"event": pd.Series([1,1,1,1,2,2]), "ID": pd.Series(['a','a','b','c','a','d']), "amount": ([1,2,3,4,5,6])})
fundraisers = pd.DataFrame({"event": pd.Series([1,1,1,2,2,1]), "ID": pd.Series(['a','b','c','a','d','e'])})
foo = Donors.groupby(["event", "ID"])["amount"].sum().reset_index()
ideally I want the fundraiser frame to look like:
event | id | amount raised
--------------------------
1 | a | 3
1 | b | 3
1 | c | 4
1 | e | 0
2 | a | 5
2 | d | 6
Answer: Do an outer join:
In [15]: pd.merge(foo,fundraisers,how='outer').fillna(0)
Out[15]:
event ID amount
0 1 a 3
1 1 b 3
2 1 c 4
3 2 a 5
4 2 d 6
5 1 e 0
If you need the `DataFrame` to be sorted by the `'event'` column then you can
do
In [16]: pd.merge(foo,fundraisers,how='outer').fillna(0).sort('event')
Out[16]:
event ID amount
0 1 a 3
1 1 b 3
2 1 c 4
5 1 e 0
3 2 a 5
4 2 d 6
If you have different column names that you want to merge on (in this case,
let's say that `'ID'` in `Donors` should be `'fundraiser ID'`), you can do
In [42]: merge(foo, fundraisers, left_on=['fundraiser ID', 'event'], right_on=['ID', 'event'], how='outer')
Out[42]:
event fundraiser ID amount ID
0 1 a 3 a
1 1 b 3 b
2 1 c 4 c
3 2 a 5 a
4 2 d 6 d
5 1 NaN NaN e
|
Trouble with simple https authentication with urllib2 (to get PayPal OAUTH bearer token)
Question: I'm at the first stage of integrating our web app with PayPal's Express
Checkout API. To place a purchase, I of course have to get a bearer token
using our client ID and our client secret.
I use the following curl command to successfully get that token:
curl https://api.sandbox.paypal.com/v1/oauth2/token \
-H "Accept: application/json" \
-H "Accept-Language: en_US" \
-u "ourID:ourSecret" \
-d "grant_type=client_credentials"
Now I am trying to achieve the same result in Python using urllib2. I've
arrived at the following code, which produces a 401 HTTP Unauthorized
exception.
import urllib
import urllib2
url = "https://api.sandbox.paypal.com/v1/oauth2/token"
PAYPAL_CLIENT_ID = "ourID"
PAYPAL_CLIENT_SECRET = "ourSecret"
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, PAYPAL_CLIENT_ID, PAYPAL_CLIENT_SECRET)
authhandler = urllib2.HTTPBasicAuthHandler(passman)
opener = urllib2.build_opener(authhandler)
urllib2.install_opener(opener)
req = urllib2.Request( url=url,
headers={
"Accept": "application/json",
"Accept-Language": "en_US",
},
data =urllib.urlencode({
"grant_type":"client_credentials",
}),)
result = urllib2.urlopen(req).read()
print result
Does anyone have any idea what I'm doing wrong above? Many thanks for any
insights
Answer: Experiencing the same problem here. Based on [Get access token from Paypal in
Python - Using urllib2 or requests
library](http://stackoverflow.com/questions/31476540), working Python code is:
import urllib
import urllib2
import base64
token_url = 'https://api.sandbox.paypal.com/v1/oauth2/token'
client_id = '.....'
client_secret = '....'
credentials = "%s:%s" % (client_id, client_secret)
encode_credential = base64.b64encode(credentials.encode('utf-8')).decode('utf-8').replace("\n", "")
header_params = {
"Authorization": ("Basic %s" % encode_credential),
"Content-Type": "application/x-www-form-urlencoded",
"Accept": "application/json"
}
param = {
'grant_type': 'client_credentials',
}
data = urllib.urlencode(param)
request = urllib2.Request(token_url, data, header_params)
    response = urllib2.urlopen(request).read()
print response
The reason, I believe, is explained at [Python urllib2 Basic Auth
Problem](http://stackoverflow.com/questions/2407126)
> Python libraries, per HTTP-Standard, first send an unauthenticated request,
> and then only if it's answered with a 401 retry, are the correct credentials
> sent. If the servers don't do "totally standard authentication" then the
> libraries won't work.
|
Python: 3D scatter losing colormap
Question: I'm creating a 3D scatter plot with multiple sets of data and using a colormap
for the whole figure. The code looks like this:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for R in [range(0,10), range(5,15), range(10,20)]:
data = [np.array(R), np.array(range(10)), np.array(range(10))]
AX = ax.scatter(*data, c=data[0], vmin=0, vmax=20, cmap=plt.cm.jet)
def forceUpdate(event): AX.changed()
fig.canvas.mpl_connect('draw_event', forceUpdate)
plt.colorbar(AX)
This works fine but as soon as I save it or rotate the plot, the colors on the
first and second scatters turn blue.

The force update works by keeping the colors, but only on the last scatter
plot drawn. I tried making a loop that updates all the scatter plots, but I get
the same result as above:
AX = []
for R in [range(0,10), range(5,15), range(10,20)]:
data = [np.array(R), np.array(range(10)), np.array(range(10))]
AX.append(ax.scatter(*data, c=data[0], vmin=0, vmax=20, cmap=plt.cm.jet))
for i in AX:
def forceUpdate(event): i.changed()
fig.canvas.mpl_connect('draw_event', forceUpdate)
Any idea how I can make sure all scatters are being updated so the colors
don't disappear?
Thanks!
Answer: Having modified your code so that it does anything:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from mpl_toolkits.mplot3d import Axes3D
    >>> AX = []
    >>> fig = plt.figure()
    >>> ax = fig.add_subplot(111, projection='3d')
    >>> for R in [range(0,10), range(5,15), range(10,20)]:
    ...     data = [np.array(R), np.array(range(10)), np.array(range(10))]
    ...     AX = ax.scatter(*data, c=data[0], vmin=0, vmax=20, cmap=plt.cm.jet)
... def forceUpdate(event): AX.changed()
... fig.canvas.mpl_connect('draw_event', forceUpdate)
...
9
10
11
>>> plt.colorbar(AX)
<matplotlib.colorbar.Colorbar instance at 0x36265a8>
>>> plt.show()
then I get: 
So the above code is working. If your existing code isn't, then I suggest that
you try the exact code above. If that doesn't work, look into the versions of
the software you are using; if it does work, then you will have to investigate
the differences between it and your actual code (rather than your example
code).
|
Python Multiprocessing in a GUI with redirected printing
Question: When performing multiprocessing in python using the Process call, I would like
to immediately print a completion statement after completion of each process
from the parent process rather than in the child worker function. I am doing
this in a Tk GUI, so the sys.stdout is redirected (see simple example version
below). If I print inside the worker function (printLetters; uncomment
"#print letter") I get the error: "The process has forked and you cannot use
this CoreFoundation functionality safely. You MUST exec()."
If someone can suggest a way to fix either of these two issues (identifying
when the queue has been updated from the parent process OR a way to print
inside a process with sys.stdout redirected), it would be greatly appreciated.
Thanks...
import Tkinter
from Tkinter import *
import multiprocessing
import time
from multiprocessing import Process, Queue
class StringVarFile:
def __init__(self,stringVar,window):
self.__newline = 0
self.__stringvar = stringVar
self.__window = window
def write(self,s):
new = self.__stringvar.get()
for c in s:
if self.__newline:
new = ""; self.__newline = 0
new = new+c
self.set(new)
def set(self,s):
self.__stringvar.set(s); self.__window.update()
def get(self):
return self.__stringvar.get()
def flush(self): pass
def executeSomething(letters):
procs=list()
queue = Queue()
print 'Begining letter printing...'
for letter in letters:
p = Process(target=printLetters, args=(queue,letter))
procs.append(p)
p.start()
for p in procs:
p.join()
print 'Finished printing letters'
def printLetters(queue,letter):
time.sleep(2)
#print letter
queue.put(letter)
if __name__ == '__main__':
root = Tk()
statusVar = StringVar() ### Class method for Tkinter. Description: "Value holder for strings variables."
letters = ['a','b','c','d']
Label(root,width=50,height=10,textvariable=statusVar).pack()
sys.stdout = StringVarFile(statusVar,root)
root.after(100, executeSomething(letters))
root.mainloop()
Answer: Do you just want to print to the GUI? If so, inside `executeSomething()`
between starting and joining the processes, why not just iterate over queue
results? Like this:
for letter in letters:
print queue.get()
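Folded into the question's `executeSomething()`, that might look like the
following (a sketch, untested); the prints then happen in the parent process,
where the redirected stdout is safe to use:

    def executeSomething(letters):
        procs = []
        queue = Queue()
        print 'Beginning letter printing...'
        for letter in letters:
            p = Process(target=printLetters, args=(queue, letter))
            procs.append(p)
            p.start()
        # Drain one result per child here in the parent; queue.get() blocks
        # until a child has put its letter, so each completion is printed as
        # it happens.
        for _ in letters:
            print queue.get()
        for p in procs:
            p.join()
        print 'Finished printing letters'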
|
Python tempfile
Question: I am trying to create a temp file, but my Python version does not allow me to
proceed and gives the following complaint. Do I need to upgrade this version
to use the tempfile module? Thanks.
Python 2.4.3 (#1, Jan 9 2013, 06:47:03) [GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "tempfile.py", line 2, in ?
temp = tempfile.NamedTemporaryFile()
AttributeError: 'module' object has no attribute 'NamedTemporaryFile'
Answer: You are importing a **local** file. You have a file named `tempfile.py` in
your local directory that masks the global module. `import tempfile` imports
that file, not the standard library module.
Use `print tempfile.__file__` to locate that file if you cannot find it.
Rename that file to something else.
The [Python 2.4 `tempfile` module](http://docs.python.org/2.4/lib/module-
tempfile.html) supports `NamedTemporaryFile` without upgrading.
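Once the shadowing file is renamed (and any stale `tempfile.pyc` next to it is
removed), the standard module should work; a quick sketch:

    import tempfile

    temp = tempfile.NamedTemporaryFile()
    print temp.name           # path to the temporary file on disk
    temp.write('some data')
    temp.flush()
    temp.close()              # the file is deleted on close by default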
|