Why are stale header values still around when uploading a subsequent Item to archive.org using ia-wrapper?
Question: I mirrored a batch of videos from EuroPython2014 on archive.org using the
master of ia-wrapper. As discussed in [#64](https://github.com/jjjake/ia-
wrapper/issues/64), metadata from the previous upload shows up in a subsequent
upload.
I went through and hand edited the descriptions in the archive.org interface
(it was just a few of the videos), but I'd like for this not to happen the
next time I mirror a conference. I have a workaround (explicitly set headers
when calling upload.) I'd really really really like to know how it is that the
headers dict is still populated from previous calls.
When I run this, [item.py L579](https://github.com/jjjake/ia-
wrapper/blob/master/internetarchive/item.py#L579) is not passing headers in
kwargs when it calls upload_file. (I even stepped through using pycharm's
debugger).
What the heck is going on?
If you want to try this out, the code below demonstrates it.
`pip install -e git+https://github.com/jjjake/ia-wrapper.git@9b7b951cfb0e9266f329c9fa5a2c468a92db75f7#egg=internetarchive-master`
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import datetime
import internetarchive as ia
import os
from tempfile import NamedTemporaryFile
ACCESS_KEY = os.environ.get('IAS3_ACCESS_KEY')
SECRET_KEY = os.environ.get('IAS3_SECRET_KEY')
now = datetime.datetime.utcnow().strftime('%Y_%m_%d_%H%M%S')
item = ia.Item('test_upload_iawrapper_first_%s' % now)
item2 = ia.Item('test_upload_iawrapper_second_%s' % now)
def upload(item, metadata):
    with NamedTemporaryFile() as fh:
        fh.write('testing archive_uploader')
        item.upload(fh.name,
                    metadata=metadata,
                    access_key=ACCESS_KEY,
                    secret_key=SECRET_KEY,
                    # adding headers={} is a workaround
                    )

upload(item,
       metadata={
           'collection': 'test_collection',
           'description': 'not an empty description',
       })
upload(item2,
       metadata={
           'collection': 'test_collection',
           # you can also comment out description and get the same result
           'description': '',
       })

print 'visit https://archive.org/details/{}'.format(item.identifier)
print 'visit https://archive.org/details/{}'.format(item2.identifier)
Answer: You've tripped over the "mutable defaults" gotcha in Python: ["Least
Astonishment" in Python: The Mutable Default
Argument](http://stackoverflow.com/questions/1132941/least-astonishment-in-
python-the-mutable-default-argument)
Change this:
def upload_file(self, body, headers={}, ...):

to this:

def upload_file(self, body, headers=None, ...):
    if headers is None:
        headers = {}
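A quick illustration of the gotcha, independent of ia-wrapper (a minimal sketch):

def append_to(item, target=[]):   # the default list is created once, at function definition time
    target.append(item)
    return target

print append_to(1)   # [1]
print append_to(2)   # [1, 2]  <- the same list again, not a fresh one

Because `headers={}` is evaluated only once, every call that mutates that dict sees the leftovers from earlier calls.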
|
Python program prints an extra empty line when reading a text file
Question: I'm using python 3.3. I have a text file with three lines of text, as an
example. I want to select a number and it will display the contents of that
line number. For some reason, it prints the line I want, and prints an empty
line below it.
The text file looks like this:
AAPL,Apple,700
P,Pandora,32
MW,Men's Warehouse,54.32
The output in the interpreter I get if i is 2:
>>
P,Pandora,32
>>
And the code is here:
line_number = int(input('Enter the line number: '))

with open('C:/Python33/myprogramtests/filewrite1.txt') as f:
    i = 1
    for line in f:
        if i == line_number:
            break
        i += 1

print (line)
I did try a comma after print (line) but it didn't work. I'm guessing I'm
missing some bit of code that would print just the line and not an extra
whitespace line.
Answer: You should provide an `end=''` to `print` to suppress the automatic behaviour
of adding the equivalent of `\n` to output.
I'd also remove the counting logic and use `islice` to extract the line you
wish, eg:
from itertools import islice
line_number = int(input('Enter the line number: '))
with open('yourfile') as fin:
    print(next(islice(fin, line_number - 1, line_number), ''), end='')
If you wanted to use the count approach, then you can use `enumerate` starting
at 1, eg:
for idx, line in enumerate(fin, start=1):
    if idx == line_number:
        print(line, end='')
        break
|
AttributeError: 'module' object has no attribute 'postDirectOrder'
Question: I'm trying to use this python package <https://pypi.python.org/pypi/pesapal>
in one of my projects.
But I noticed that for some weird reason the author stuffed all the logic in
the [`__init__.py` file](https://github.com/kelonye/python-
pesapal/blob/master/lib/__init__.py) which doesn't make sense to me, but hey..
So when I try to use the package I get the following error:
'module' object has no attribute 'postDirectOrder'
I'm on Python 2.7.7, I've tried `from pesapal import *` but no luck. I tried:
>>> from pesapal import PesaPal
>>> url = PesaPal.postDirectOrder(post_params, request_data)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method postDirectOrder() must be called with PesaPal instance as first argument (got dict instance instead)
Answer: The 0.3 release differs significantly from later code; it contains a _class_
`PesaPal()` which takes `consumer_key` and `consumer_secret` arguments:
pp = PesaPal(key, secret)
url = pp.postDirectOrder(post_params, request_data)
The project never correctly tagged the 0.3 release, but there is a [0.3
release commit](https://github.com/kelonye/python-
pesapal/commit/1a7848fc84d7277e7faa9c697bd1455761ed996c#diff-d41d8cd98f00b204e9800998ecf8427e);
you can [browse the source tree at that
commit](https://github.com/kelonye/python-
pesapal/tree/1a7848fc84d7277e7faa9c697bd1455761ed996c) to get an idea of what
is expected.
The project README there includes an example:
>
> import pesapal, urllib2
>
> consumer_key ='consumer_key'
> consumer_secret = 'consumer_secret'
> testing = False
>
> ### make client
> client = pesapal.PesaPal(consumer_key, consumer_secret, testing)
>
> ### post a direct order
>
> request_data = {
> 'Amount': '',
> 'Description': '',
> 'Type': '',
> 'Reference': '',
> 'PhoneNumber': ''
> }
> post_params = {
> 'oauth_callback': 'www.example.com/post_payment_page'
> }
> request = client.postDirectOrder(post_params, request_data)
> # get url to display as an iframe
> print request.to_url()
>
|
scrapy how to import the settings to override it
Question: this is my code
class Test(Spider):
    self.settings.overrides['JOBDIR'] = "seen"
I got:
File "C:\Python27\lib\site-packages\scrapy\spider.py", line 46, in settings
return self.crawler.settings
File "C:\Python27\lib\site-packages\scrapy\spider.py", line 41, in crawler
assert hasattr(self, '_crawler'), "Spider not bounded to any crawler"
AssertionError: Spider not bounded to any crawler
I am extending `Spider` and I am not using `Crawler` because I don't have
links nor rules to follow
I am guessing that my problem is because I didn't import the settings well and
I need your help please
Answer: In order to change the settings in the spider you can:
class TestSpider(Spider):

    def set_crawler(self, crawler):
        super(TestSpider, self).set_crawler(crawler)
        crawler.settings.set('JOBDIR', 'seen')

    # rest of spider code
|
system cannot find the specified file python subprocess
Question: I am trying to run a simple command in python:
from subprocess import *
check_output("ls")
When I run this it raises
Error:
WindowsError: [Error 2] The system cannot find the file specified
Answer: `ls` doesn't exist on Windows; `dir` does. Furthermore, you may need to pass `shell=True`, since `dir` is built into `cmd.exe` rather than being a separate executable.
If it’s not a test and you just want to get the contents of a directory, use
[`os.listdir`](https://docs.python.org/3/library/os.html#os.listdir) instead.
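For example, a minimal sketch (on Windows; `dir` is used here purely as the `ls` equivalent):

from subprocess import check_output
import os

# `dir` is a cmd.exe built-in, so it needs shell=True
print(check_output("dir", shell=True))

# usually the better option: no shell involved at all
print(os.listdir('.'))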
|
Need to dump entire DOM tree with element id from selenium server
Question: I have been using python selenium for web automation testing. The key part of
automation is to find the right element for a user-visible object in a HTML
page. The following API will work most of the time, but not all the time.
find_element_by_xxx, xxx can be id, name, xpath, tag_name etc.
When HTML page is too complicated, I would like to search the dom tree. Wonder
if it's possible to ask the selenium server to serialize the entire DOM (with
the element id that can be used to perform action on through webdriver
server). Client side (python script) can do its own search algorithm to find
the right element.
Note that python selenium can get the entire html page by
drv.page_source
However, parsing this doesn't give the internal element id from selenium
server's point of view, hence not useful.
**EDIT1:** Paraphrase it to make it more clear (thanks @alecxe): what's needed
here is a serialized representation of all the DOM elements (with their DOM
structure preserved) in the selenium server, this serialized representation
can be sent to the client side (a python selenium test app) which can do its
own search.
Answer: ### The Problem
Ok, so there may be cases where you need to perform some substantial
processing of a page on the client (Python) side rather than on the server
(browser) side. For instance, if you have some sort of machine learning system
already written in Python and it needs to analyze the whole page before
performing actions on them, then although it is possible to do it with a bunch
of `find_element` calls, this gets very expensive because each call is a
round-trip between the client and the server. And rewriting it to work in the
browser may be too expensive.
### Why Selenium's Identifiers won't do it
However, I do not see an _efficient_ way to get a serialization of the DOM
_together_ with Selenium's own identifiers. Selenium creates these identifiers
on an as-needed basis, when you call `find_element` or when DOM nodes are
returned from an `execute_script` call (or passed to the callback that
`execute_async_script` gives to the script). But if you call `find_element` to
get identifiers for each element, then you are back to square one. I could
imagine decorating the DOM in the browser with the required information but
there is no public API to request some sort of pre-assignment of `WebElement`
ids. As a matter of fact, these identifiers are designed to be opaque so even
if a solution managed somehow to get the required information, I'd be
concerned about cross-browser viability and ongoing support.
### A Solution
There is however a way to get an addressing system that would work on both
sides: XPath. The idea is to parse the DOM serialization into a tree on the
client side and then get the XPath of the nodes you are interested in and use
this to get the corresponding WebElement. So if you'd have to perform dozens
of client-server roundtrips to determine which single element you need to
perform a click on, you'd be able to reduce this to an initial query of the
page source plus a single `find_element` call with the XPath you need.
Here is a super simple proof of concept. It fetches the main input field of
the Google front page.
from StringIO import StringIO
from selenium import webdriver
import lxml.etree
#
# Make sure that your chromedriver is in your PATH, and use the following line...
#
driver = webdriver.Chrome()
#
# ... or, you can put the path inside the call like this:
# driver = webdriver.Chrome("/path/to/chromedriver")
#
parser = lxml.etree.HTMLParser()
driver.get("http://google.com")
# We get this element only for the sake of illustration, for the tests later.
input_from_find = driver.find_element_by_id("gbqfq")
input_from_find.send_keys("foo")
html = driver.execute_script("return document.documentElement.outerHTML")
tree = lxml.etree.parse(StringIO(html), parser)
# Find our element in the tree.
field = tree.find("//*[@id='gbqfq']")
# Get the XPath that will uniquely select it.
path = tree.getpath(field)
# Use the XPath to get the element from the browser.
input_from_xpath = driver.find_element_by_xpath(path)
print "Equal?", input_from_xpath == input_from_find
# In JavaScript we would not call ``getAttribute`` but Selenium treats
# a query on the ``value`` attribute as special, so this works.
print "Value:", input_from_xpath.get_attribute("value")
driver.quit()
Notes:
1. The code above does not use `driver.page_source` because Selenium's documentation states that there is no guarantee as to the freshness of what it returns. It could be the state of the current DOM or the state of the DOM when the page was first loaded.
2. This solution suffers from the exact same problems that `find_element` suffers from regarding dynamic contents. If the DOM changes while the analysis is occurring, then you are working on a stale representation of the DOM.
3. If you have to generate JavaScript events while performing the analysis, and these events change the DOM, then you'd need to fetch the DOM again. (This is similar to the previous point but a solution that uses `find_element` calls could conceivably avoid the problem I'm talking about in _this_ point by ordering the sequence of calls carefully.)
4. `lxml`'s tree could possibly differ _structurally_ from the DOM tree in such a way that the XPath obtained from `lxml` does not address the corresponding element in the DOM. What `lxml` processes is the cleaned up serialized view that the browser has of the HTML passed to it. Therefore, **so long as the code is written to prevent the problems I've mentioned in point 2 and 3** , I do not see this as a likely scenario, but it is not impossible.
|
Python NLTK tokenize sentence with wrong syntax from human errors
Question: I am looking for a way to handle sentence tokenizing task well.
I have this text extracted from a human written review for a restaurant
Nevertheless, the soup enhances the prawns well.In contrast, the fish offered is fresh and well prepared.
Note that the period at the boundary of the first sentence is not followed by a space; this is the result of human error in the writing. There are many sentences written like this, so I can't ignore this case.
So far I have tried the NLTK sentence tokenizer in Python, but it does not work as expected.
>>>import nltk.data
>>>tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
>>>sentences = tokenizer.tokenize(text)
>>>sentences
['Nevertheless, the soup enhances the prawns well.In contrast, the fish offered is fresh and well prepared.']
My expectation is it should be able to split the text into two sentences
['Nevertheless, the soup enhances the prawns well.', 'In contrast, the fish offered is fresh and well prepared.']
Any help is appreciated in advance
Answer: I decided to use a regex to preprocess the text. The regex I used was:

re.sub(r'(\w{2})([.!?]+)(\w+)', r'\1\2 \3', text)

It has 3 groups. Group 1 is the two word characters before the punctuation, `(\w{2})`. Group 2 is the punctuation itself, which can be any of `[.!?]` and may repeat, `([.!?]+)`. Group 3 is the word that follows the punctuation, which can be one or more word characters, even a single-letter word like "I", `(\w+)`. The replacement re-inserts the missing space between groups 2 and 3.
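Putting the preprocessing and the tokenizer from the question together (a sketch; the expected output is shown as a comment):

import re
import nltk.data

text = ("Nevertheless, the soup enhances the prawns well.In contrast, "
        "the fish offered is fresh and well prepared.")

# re-insert the missing space after sentence-final punctuation glued to the next word
text = re.sub(r'(\w{2})([.!?]+)(\w+)', r'\1\2 \3', text)

tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
print tokenizer.tokenize(text)
# ['Nevertheless, the soup enhances the prawns well.',
#  'In contrast, the fish offered is fresh and well prepared.']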
|
Openpyxl missing 'jdcal'
Question: I tried to install the `openpyxl` module, but during the installation it showed some errors related to `jdcal`. When I then try to import it, I get this error:
Traceback (most recent call last):
File "C:\Andrzej\workspace\sandbox\sandbox.py", line 7, in <module>
import openpyxl
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\__init__.py", line 29, in <module>
from openpyxl.workbook import Workbook
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\workbook\__init__.py", line 25, in <module>
from .workbook import *
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\workbook\workbook.py", line 35, in <module>
from openpyxl.worksheet import Worksheet
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\worksheet\__init__.py", line 25, in <module>
from .worksheet import *
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\worksheet\worksheet.py", line 35, in <module>
import openpyxl.cell
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\cell\__init__.py", line 25, in <module>
from .cell import *
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\cell\cell.py", line 47, in <module>
from openpyxl.date_time import (
File "C:\Python34\lib\site-packages\openpyxl-2.0.5-py3.4.egg\openpyxl\date_time.py", line 34, in <module>
from jdcal import (
ImportError: No module named 'jdcal'
I get this same problem for Python 3.4 and 3.3, how can I solve it?
Answer: You should install [jdcal](https://pypi.python.org/simple/jdcal/) separately. After this, install [Openpyxl](https://openpyxl.readthedocs.org/en/latest/). I faced the same issue, and installing `jdcal` first resolved it.
|
Keep getting some permission denied error
Question: I keep getting a permission denied error; what does it mean? Every time I try to install anything for Python (here, for the NLTK tool), I get the same permission denied error message:
> error: could not create '/usr/local/lib/python2.7/dist-packages/nameparser':
> Permission denied ---------------------------------------- Cleaning up...
> Command /usr/bin/python -c "import setuptools, tokenize;**file**
> ='/tmp/pip_build_vandana/nameparser/setup.py';exec(compile(getattr(tokenize,
> 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install
> --record /tmp/pip-4Rd7GE-record/install-record.txt --single-version-
> externally-managed --compile failed with error code 1 in
> /tmp/pip_build_vandana/nameparser Storing debug log for failure in
> /home/vandana/.pip/pip.log
Answer: You are probably doing `python setup.py install something` or `pip install something`, and it's trying to install to the global Python package location, for which your user does not have write access. You need to use [virtual environments](http://virtualenv.readthedocs.io/en/latest/).
|
python finding multiple occurrences between 2 delimiters
Question: I'm trying to find multiple occurrences between 2 delimiters using regex. Unfortunately I can't figure out how. The 2 delimiters are ' and ':
import re
string = "'lightOff' 'lightOn':,'lightOff' 'ovenOff' 'ovenOn': None 'radioOn': 'radioOff'"
print string
print 'newString', re.findall("^'(.*?)':", string)
I only get the first match
'lightOn'
What i want is to get the 3 substrings between ' and ':
'lightOn'
'ovenOn'
'radioOn'
Answer: Do not use the anchor. `^` and `$` are the anchors in a regex pattern. Also,
when you match between two `'`, it'll return the string `'word1' 'word2':` as
output, instead of only `'word2':`. Try to match everything between two `'`
which isn't the character `'` itself.
re.findall("'([^']+)':", string)
will work.
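A quick check with the string from the question:

import re

string = "'lightOff' 'lightOn':,'lightOff' 'ovenOff' 'ovenOn': None 'radioOn': 'radioOff'"
print re.findall("'([^']+)':", string)
# ['lightOn', 'ovenOn', 'radioOn']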
|
To print a particular folder name in python?
Question: I've made a script in python which scans all the folders and sub-folders and
then prints all the folders and subfolders which begin with the word Hello.
Now I need to implement a change which is that I should be able to print any
folder which has the word Hello. Like the word could be hiHello or
abcHelloxyz. So this folder must also be printed. I guess the grep command
needs to be used.
import os

if os.path.isdir('/'):
    for root, dirs, files in os.walk('/'):
        for name in dirs:
            if name.startswith(("hello")):
                print ("\n\nFile Name: "+ name)
                name = os.path.abspath(os.path.join(root, name))
                print ("Path :"+name)
Answer: You can test if `'Hello'` is a substring in the `dirname` using `'Hello' in
dirname`:
In [1]: dirname = 'hiHello'
In [3]: 'Hello' in dirname
Out[3]: True
In [4]: dirname.startswith('Hello')
Out[4]: False
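Applied to the script in the question, the check becomes (a sketch; the root path and printing kept as in the original):

import os

if os.path.isdir('/'):
    for root, dirs, files in os.walk('/'):
        for name in dirs:
            if 'Hello' in name:   # matches hiHello, abcHelloxyz, ...
                print ("\n\nFile Name: " + name)
                path = os.path.abspath(os.path.join(root, name))
                print ("Path :" + path)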
|
PyCUDA NPP Library Compatability
Question: I've been trying to access the Nvidia Performance Primitives library through
Python, and I found a very useful tutorial last updated in 2011 at this site:
<http://openvidia.sourceforge.net/index.php/OpenVIDIA/python>
However, after downloading the CUDA 6.0 toolkit I can't seem to find any CUDA
".dll" files at all (like those referenced near the start of the tutorial).
Thanks to responses on here, I know the file names should be different to
those in the tutorial, but I can't find any.
Does anybody know an alternative method or command to import the library? Any
help would be greatly appreciated, and if I've missed any key details then
please let me know.
Thanks a lot, Dan
Board: Jetson TK1
OS: L4T Ubuntu 14.04 (from <https://developer.nvidia.com/jetson-tk1-support>)
Language: Python 2.7
Answer: I just used the cdll.LoadLibrary() command from the ctypes library and called
the "libnppi.so" and "libcudart.so" files. Worked perfectly, thanks for the
help!
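In other words, something along these lines (a minimal sketch of what is described above; the library names and search paths are assumptions and may differ on your system):

from ctypes import cdll

# load the CUDA runtime and the NPP image-processing library by name;
# pass a full path instead if they are not on the loader's search path
libcudart = cdll.LoadLibrary("libcudart.so")
libnppi = cdll.LoadLibrary("libnppi.so")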
|
importRows Google Fusion Tables API
Question: I would like to import rows from csv file to fusion table using the Google
fusion tables API, I read [this
reference](https://developers.google.com/fusiontables/docs/v1/reference/table/importRows),
but I don't understand how to post my csv file here:
<https://www.googleapis.com/upload/fusiontables/v1/tables/---tableId---/import>
How should I attach myFile.csv to request in python ?
request = urllib2.Request("https://www.googleapis.com/fusiontables/v1/tables/---tableID---/import")
request.get_method = lambda: 'POST'
response = opener.open(request).read()
What I already have:
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request('https://www.googleapis.com/fusiontables/v1/query?%s' %
                          (urllib.urlencode({'access_token': access_token,
                                             'sql': query})),
                          headers={'Content-Length': 0})
request.get_method = lambda: 'POST'
response = opener.open(request).read()
print response
This code adds single rows to fusion table using simple SQL queries, but I
need to add 100k rows, so according to [this
reference](https://developers.google.com/fusiontables/docs/v1/sql-
reference#insertRow) I have to use the
[importRows](https://developers.google.com/fusiontables/docs/v1/reference/table/importRows)
method and not [SQL insert
statements](https://developers.google.com/fusiontables/docs/v1/sql-
reference#insertRow).
Thank you.
Answer: According to this
[documentation](https://developers.google.com/fusiontables/docs/v1/using#ImportingRowsIntoTables)
the correct URL to import rows is
https://www.googleapis.com/upload/fusiontables/v1/tables/tableId/import
> **Importing rows into a table**
>
> To import more rows into an existing table, send an authenticated POST HTTP
> request to the following URI (note the **upload** in the URI below):
>
> <https://www.googleapis.com/upload/fusiontables/v1/tables/tableId/import>
>
> You must supply the row data in the message body. The row data should be CSV
> formatted data, though you may specify alternative delimiters.
Probably [this example](https://puravidaapps.com/taifunFT2.php#import) can
help somehow.
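For what it's worth, here is a rough sketch in the `urllib2` style of the question. The `Content-Type` header and the reuse of the `access_token` query parameter are assumptions based on the question's existing code and the documentation quoted above, not something tested here; the table id and token are hypothetical placeholders:

import urllib
import urllib2

table_id = 'YOUR_TABLE_ID'          # hypothetical placeholder
access_token = 'YOUR_ACCESS_TOKEN'  # hypothetical placeholder

with open('myFile.csv', 'rb') as f:
    csv_body = f.read()

url = ('https://www.googleapis.com/upload/fusiontables/v1/tables/%s/import?%s'
       % (table_id, urllib.urlencode({'access_token': access_token})))

# supplying a body makes urllib2 issue a POST request
request = urllib2.Request(url, data=csv_body,
                          headers={'Content-Type': 'application/octet-stream'})
response = urllib2.urlopen(request).read()
print response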
|
Google cloud sql: Lost connection to MySQL server at 'reading initial communication packet'
Question: I've set up a default django/django-wiki project. Local tests work fine.
Connecting to cloud sql from the local server (with
`google.appengine.ext.django.backends.rdbms`) doesn't work, I believe due to
some authentication issue. More importantly, I can't connect from the
production server.
I've made sure not to deploy my local `MySQLdb` sitting in my virtual
environment directory.
I have the following in `app.yaml`:
- name: MySQLdb
version: "latest"
My `DATABASE` entry is the following:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dbname',
        'USER': 'root',
        'PASSWORD': '',
        'HOST': '/cloudsql/appname:sqlinstance',
        'PORT': '',
    }
}
It seems to be working, or at least doesn't complain about missing packages or
mysql import issues. My problem is the following (obtained from the production
logs, but also visible via the django debug output):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/MySQLdb-1.2.4b4/MySQLdb/connections.py", line 190, in __init__
super(Connection, self).__init__(*args, **kwargs2)
OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38")
What could be causing this? Does this mean a connection was made to something
but there was no reply? Other posts seem to mention this as an issue when
connecting from an external source, but this connection is from app engine
AFAIK.
Answer: Well, this was much more straightforward than expected. I had assumed that the Cloud SQL instance, created with my Google account, would grant access to my GAE app by default. This has to be set explicitly, from `Developer Console`->`Storage`->`Cloud SQL`->`instance-name`->`Edit`->`AUTHORIZED APP ENGINE APPLICATIONS`, or in the advanced options when you create the SQL instance.
I just added the name of my app in the box, hit save and everything worked.
|
Trying to apply CPS to an interpreter
Question: I'm trying to use CPS to simplify control-flow implementation in my Python
interpreter. Specifically, when implementing `return`/`break`/`continue`, I
have to store state and unwind manually, which is tedious. I've read that it's
extraordinarily tricky to implement exception handling in this way. What I
want is for each `eval` function to be able to direct control flow to either
the next instruction, or to a different instruction entirely.
Some people with more experience than me suggested looking into CPS as a way
to deal with this properly. I really like how it simplifies control flow in
the interpreter, but I'm not sure how much I need to actually do in order to
accomplish this.
1. Do I need to run a CPS transform on the AST? Should I lower this AST into a lower-level IR that is smaller and then transform that?
2. Do I need to update the evaluator to accept the success continuation everywhere? (I'm assuming so).
I think I generally understand the CPS transform: the goal is to thread the
continuation through the entire AST, including all expressions.
I'm also a bit confused where the `Cont` monad fits in here, as the host
language is Haskell.
**Edit** : here's a condensed version of the AST in question. It is a 1-1
mapping of Python statements, expressions, and built-in values.
data Statement
    = Assignment Expression Expression
    | Expression Expression
    | Break
    | While Expression [Statement]

data Expression
    = Attribute Expression String
    | Constant Value

data Value
    = String String
    | Int Integer
    | None
To evaluate statements, I use `eval`:
eval (Assignment (Variable var) expr) = do
    value <- evalExpr expr
    updateSymbol var value

eval (Expression e) = do
    _ <- evalExpr e
    return ()
To evaluate expressions, I use `evalExpr`:
evalExpr (Attribute target name) = do
    receiver <- evalExpr target
    attribute <- getAttr name receiver
    case attribute of
        Just v  -> return v
        Nothing -> fail $ "No attribute " ++ name

evalExpr (Constant c) = return c
What motivated the whole thing was the shenanigans required for implementing
break. The break definition is reasonable, but what it does to the while
definition is a bit much:
eval (Break) = do
    env <- get
    when (loopLevel env <= 0) (fail "Can only break in a loop!")
    put env { flow = Breaking }

eval (While condition block) = do
    setup
    loop
    cleanup
  where
    setup = do
        env <- get
        let level = loopLevel env
        put env { loopLevel = level + 1 }
    loop = do
        env <- get
        result <- evalExpr condition
        when (isTruthy result && flow env == Next) $ do
            evalBlock block
            -- Pretty ugly! Eat continue.
            updatedEnv <- get
            when (flow updatedEnv == Continuing) $ put updatedEnv { flow = Next }
            loop
    cleanup = do
        env <- get
        let level = loopLevel env
        put env { loopLevel = level - 1 }
        case flow env of
            Breaking   -> put env { flow = Next }
            Continuing -> put env { flow = Next }
            _          -> return ()
I am sure there are more simplifications that can be done here, but the core
problem is one of stuffing state somewhere and manually winding out. I'm
hoping that CPS will let me stuff book-keeping (like loop exit points) into
state and just use those when I need them.
I dislike the split between statements and expressions and worry it might make
the CPS transform more work.
Answer: This finally gave me a good excuse to try using `ContT`!
Here's one possible way of doing this: store (in a `Reader` wrapped in
`ContT`) the continuation of exiting the current (innermost) loop:
newtype M r a = M{ unM :: ContT r (ReaderT (M r ()) (StateT (Map Id Value) IO)) a }
              deriving ( Functor, Applicative, Monad
                       , MonadReader (M r ()), MonadCont, MonadState (Map Id Value)
                       , MonadIO
                       )

runM :: M a a -> IO a
runM m = evalStateT (runReaderT (runContT (unM m) return) (error "not in a loop")) M.empty

withBreakHere :: M r () -> M r ()
withBreakHere act = callCC $ \break -> local (const $ break ()) act

break :: M r ()
break = join ask
(I've also added `IO` for easy printing in my toy interpreter, and `State (Map
Id Value)` for variables).
Using this setup, you can write `Break` and `While` as:
eval Break = break
eval (While condition block) = withBreakHere $ fix $ \loop -> do
    result <- evalExpr condition
    unless (isTruthy result)
        break
    evalBlock block
    loop
Here's the full code for reference:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Interp where

import Prelude hiding (break)

import Control.Applicative
import Control.Monad.Cont
import Control.Monad.State
import Control.Monad.Reader
import Data.Function
import Data.Map (Map)
import qualified Data.Map as M
import Data.Maybe

type Id = String

data Statement
    = Print Expression
    | Assign Id Expression
    | Break
    | While Expression [Statement]
    | If Expression [Statement]
    deriving Show

data Expression
    = Var Id
    | Constant Value
    | Add Expression Expression
    | Not Expression
    deriving Show

data Value
    = String String
    | Int Integer
    | None
    deriving Show

data Env = Env{ loopLevel :: Int
              , flow :: Flow
              }

data Flow
    = Breaking
    | Continuing
    | Next
    deriving Eq

newtype M r a = M{ unM :: ContT r (ReaderT (M r ()) (StateT (Map Id Value) IO)) a }
              deriving ( Functor, Applicative, Monad
                       , MonadReader (M r ()), MonadCont, MonadState (Map Id Value)
                       , MonadIO
                       )

runM :: M a a -> IO a
runM m = evalStateT (runReaderT (runContT (unM m) return) (error "not in a loop")) M.empty

withBreakHere :: M r () -> M r ()
withBreakHere act = callCC $ \break -> local (const $ break ()) act

break :: M r ()
break = join ask

evalExpr :: Expression -> M r Value
evalExpr (Constant val) = return val
evalExpr (Var v) = gets $ fromMaybe err . M.lookup v
  where
    err = error $ unwords ["Variable not in scope:", show v]
evalExpr (Add e1 e2) = do
    Int val1 <- evalExpr e1
    Int val2 <- evalExpr e2
    return $ Int $ val1 + val2
evalExpr (Not e) = do
    val <- evalExpr e
    return $ if isTruthy val then None else Int 1

isTruthy (String s) = not $ null s
isTruthy (Int n) = n /= 0
isTruthy None = False

evalBlock = mapM_ eval

eval :: Statement -> M r ()
eval (Assign v e) = do
    val <- evalExpr e
    modify $ M.insert v val
eval (Print e) = do
    val <- evalExpr e
    liftIO $ print val
eval (If cond block) = do
    val <- evalExpr cond
    when (isTruthy val) $
        evalBlock block
eval Break = break
eval (While condition block) = withBreakHere $ fix $ \loop -> do
    result <- evalExpr condition
    unless (isTruthy result)
        break
    evalBlock block
    loop
and here's a neat test example:
prog = [ Assign "i" $ Constant $ Int 10
       , While (Var "i") [ Print (Var "i")
                         , Assign "i" (Add (Var "i") (Constant $ Int (-1)))
                         , Assign "j" $ Constant $ Int 10
                         , While (Var "j") [ Print (Var "j")
                                           , Assign "j" (Add (Var "j") (Constant $ Int (-1)))
                                           , If (Not (Add (Var "j") (Constant $ Int (-4)))) [ Break ]
                                           ]
                         ]
       , Print $ Constant $ String "Done"
       ]
which is
i = 10
while i:
    print i
    i = i - 1
    j = 10
    while j:
        print j
        j = j - 1
        if j == 4:
            break
so it will print
10 10 9 8 7 6 5
9 10 9 8 7 6 5
8 10 9 8 7 6 5
...
1 10 9 8 7 6 5
|
Python multithreaded ZeroMQ REQ-REP
Question: I am looking to implement a REQ-REP pattern with Python and ZeroMQ using
multithreading.
With Python, I can create a new thread when a new client connects to the
server. This thread will handle all communications with that particular
client, until the socket is closed:
# Thread that will handle client's requests
class ClientThread(threading.Thread):
    # Implementation...

    def __init__(self, socket):
        threading.Thread.__init__(self)
        self.socket = socket

    def run(self):
        while keep_alive:
            # Thread can receive from client
            data = self.socket.recv(1024)
            # Processing...
            # And send back a reply
            self.socket.send(reply)

while True:
    # The server accepts an incoming connection
    conn, addr = sock.accept()
    # And creates a new thread to handle the client's requests
    newthread = ClientThread(conn)
    # Starting the thread
    newthread.start()
Is it possible to do the same[*] using ZeroMQ? I have seen some examples of
multithreading with ZeroMQ and Python, but in all of them a pool of threads is
created with a fixed number of threads at the beginning and it seems to be
more oriented to load balancing.
[*] Notice what I want is to keep the connection between a client and its
thread alive, as the thread is expecting multiple REQ messages from the client
and it will store information that must be kept between messages (i.e.: a
variable counter that increments its value on a new REQ message; so each
thread has its own variable and no other client should ever be able to access
that thread). New client = new thread.
Answer: ## Yes, ZeroMQ is a powerful Can-Do toolbox
However, the major surprise will be that ZeroMQ _socket_-s are by far more structured than the plain sockets you use in the sample.
## { aZmqContext -> aZmqSocket -> aBehavioralPrimitive }
ZeroMQ builds a remarkable, abstraction-rich framework, under a hood of a
"**singleton** " ZMQ-**`Context`** , which is (and shall remain) the only
thing used as "shared".
Threads shall not "share" any other "derived" objects, the less any their
state, as there is a strong distributed-responsibility framework architecture
implemented, both in the sake of clean-design and a high performance & low-
latency.
For all ZMQ-**`Socket`** -s one shall rather imagine a much smarter, layered
sub-structure, where one receives off-loaded worries about I/O-activities (
managed inside ZMQ-**`Context`** responsibility -- thus keep-alive issues,
timing issues and fair-queue buffering / select-polling issues **simply
cease** to be visible for you ... ), with one sort of a formal communication
pattern **behaviour** ( given by a chosen ZMQ-`Socket`-type archetype ).
## Finally
**ZeroMQ** and similarly **nanomsg** libraries are rather LEGO-alike projects,
that empower you, as an architect & designer, more, than one typically
realises at the very beginning.

One thus can focus on distributed-system behaviour, as opposed to lose time
and energy on solving just-another-socket-messaging-[nightmare].
( Definitely **worth to have a look** into both books from Pieter Hintjens,
co-father of the ZeroMQ. There you find plenty Aha!-moments on this great
subject. )
... and as a cherry on a cake -- you get all of this as a Transport-agnostic,
universal environment, whether passing some messages on **`inproc://`** ,
other over **`ipc://`** and also in parallel listening / speaking over
**`tcp://`** layers.
**`EDIT#1`** `2014-08-19 17:00 [UTC+0000]`
Kindly check comments below and further review your -- both elementary and advanced -- design-options for a _trivial-failure-prone_ spin-off processing, for a _load-balanced_ REP-worker queueing, for a _scale-able_ distributed processing and a _fault-resilient-mode_ REP-worker binary-start shaded processing.
## No heap of mock-up SLOC(s), no single code-sample will do a One-Size-Fits-
All.
This is exponentially valid in designing distributed messaging systems.
Sorry to say that.
**Hurts, but true.**
"""REQ/REP modified with QUEUE/ROUTER/DEALER add-on ---------------------------
Multithreaded Hello World server
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import time
import threading
import zmq
print "ZeroMQ version sanity-check: ", zmq.__version__
def aWorker_asRoutine( aWorker_URL, aContext = None ):
"""Worker routine"""
#Context to get inherited or create a new one trick------------------------------
aContext = aContext or zmq.Context.instance()
# Socket to talk to dispatcher --------------------------------------------------
socket = aContext.socket( zmq.REP )
socket.connect( aWorker_URL )
while True:
string = socket.recv()
print( "Received request: [ %s ]" % ( string ) )
# do some 'work' -----------------------------------------------------------
time.sleep(1)
#send reply back to client, who asked --------------------------------------
socket.send( b"World" )
def main():
"""Server routine"""
url_worker = "inproc://workers"
url_client = "tcp://*:5555"
# Prepare our context and sockets ------------------------------------------------
aLocalhostCentralContext = zmq.Context.instance()
# Socket to talk to clients ------------------------------------------------------
clients = aLocalhostCentralContext.socket( zmq.ROUTER )
clients.bind( url_client )
# Socket to talk to workers ------------------------------------------------------
workers = aLocalhostCentralContext.socket( zmq.DEALER )
workers.bind( url_worker )
# --------------------------------------------------------------------||||||||||||--
# Launch pool of worker threads --------------< or spin-off by one in OnDemandMODE >
for i in range(5):
thread = threading.Thread( target = aWorker_asRoutine, args = ( url_worker, ) )
thread.start()
zmq.device( zmq.QUEUE, clients, workers )
# ----------------------|||||||||||||||------------------------< a fair practice >--
# We never get here but clean up anyhow
clients.close()
workers.close()
aLocalhostCentralContext.term()
if __name__ == "__main__":
main()
|
exe generated from a python script with Py2exe does not work on xp
Question: I have a python script that works fine on my computer (Python 2.7 32 bit
installed). It has the following imports :
import mechanize
from bs4 import BeautifulSoup
from Tkinter import *
import json
import webbrowser
I wanted to distribute this to others so I found that we can create exe files
using py2exe. I wrote a script like this:
from distutils.core import setup
import py2exe
setup(console=['notification.py'],
      options = {'py2exe' : {
          'packages' : ['bs4', 'mechanize', 'Tkinter', 'json', 'webbrowser']
      }})
This works fine on my computer but when I run it on Windows XP, I get this
error -
Traceback (most recent call last):
File "notification.py", line 3, in
File "Tkinter.pyc", line 38, in
File "FixTk.pyc", line 65, in
File "_tkinter.pyc", line 12, in
File "_tkinter.pyc", line 10, in __load
ImportError: DLL load failed: %1 is not a valid Win32 application.
I tried searching other threads but found none that has the same problem. So
please help me fix this issue.
Answer: Maybe your Tkinter build is a 64-bit version, while the Windows XP machine you run it on is 32-bit.
Check it out and tell us if that's the case.
The reason I assume this is the line:

ImportError: DLL load failed: %1 is not a valid Win32 application.

combined with the fact that the Tkinter build is 64-bit.
Python itself can be 32-bit and works on both 32- and 64-bit operating systems. But Tkinter is a GUI toolkit, something different from the language itself, so including a 64-bit add-on in a 32-bit application can cause some trouble. :)
Suggestion: You can start by making the app work with a console interface if possible. Then you can use another GUI toolkit that can run in 32 bit. For instance, you can get a 32-bit version of [THIS](http://www.wxpython.org/what.php)
Edit: Added a suggestion.
|
ImportError: No module named ... after spyder install
Question: Anaconda Spyder is supposed to include numpy, scipy etc with the installation.
Someone has installed Spyder for me on Windows 7 but if I try to `import
numpy` or `scipy` , I get this error:
import numpy as np
ImportError: No module named numpy
I also can't run "conda" on the console.
What's wrong? What should I do to fix this? I tried adding `PYTHONPATH` in
environment variables but no difference.
How can I check if they're even installed? I searched for NumPy in the
computer, I only found the following:

Answer: It sounds like someone installed just spyder, not Anaconda, which is a
separate thing (Anaconda is a collection of several Python packages, including
Spyder, NumPy, and SciPy). Try downloading and installing
[Anaconda](http://continuum.io/downloads) and using the Spyder that comes with
that.
|
Parsing ip address with dpkt
Question: I am using dpkt to parse a pcap file, however I am confused about how to
extract the destination ip address. I am parsing the packets using `eth =
dpkt.ethernet.Ethernet(buf)` which returns an ethernet object which looks like
the following:
Ethernet(src='\x00\x1a\xa0kUf', dst='\x00\x13I\xae\x84,', data=IP(src='\xc0\xa8\n\n',
off=16384, dst='C\x17\x030', sum=25129, len=52, p=6, id=51105, data=TCP(seq=9632694,
off_x2=128, ack=3382015884, win=54, sum=65372, flags=17, dport=80, sport=56145)))
I am confused about 2 things.
1. Should I be grabbing the dst field in Ethernet, or the one in IP (Ethernet.data)?
2. How can I turn these weird strings into ip addresses of the form x.x.x.x where x is an integer from 0-255?
I tried a solution like [Convert "little endian" hex string to IP address in
Python](http://stackoverflow.com/questions/2197974/convert-little-endian-hex-
string-to-ip-address-in-python), but both dst fields seem to sometimes contain
data which seemingly cannot be parsed to an ip address such as `_daQ` (how is
_daQ parsed to address?) or `RT\x00\x125\x02` (what is RT?) or
`33\x00\x01\x00\x03` (what is the 33 at the beginning and why does this look
like 5 bytes not 4?)
Answer: 1. The `eth.dst` field will contain the destination MAC address (e.g. `01:23:45:67:89:ab`), not the destination IP address. You need the ip.dst field.
2. The strings are byte strings, rather than ASCII (or otherwise) encoded readable character strings.
Try this:
ip_hdr = eth.data
ip_hdr.dst # will contain your destination IP address in BINARY
# adapted from http://www.commercialventvac.com/dpkt.html#mozTocId303989
import socket
dst_ip_addr_str = socket.inet_ntoa(ip_hdr.dst)
|
Exchange Time for Memory in Python
Question: Challenge here is in evaluating multiple large files.
What coding will instruct Python to "load" a limited number of files into
memory, process them, garbage collect and then load the next set?
def main(directory):
    """
    Create AudioAnalysis Objects from directory and call object_analysis().
    """
    ff = os.listdir(directory)
    for f in ff:
        # can we limit the number we load at one time?
        audiofile = audio.LocalAudioFile(os.path.join(directory,f)) # hungry!
Tried adding `audiofile = 0` to the loop, but the memory allocation is the
same.
As I understand it, `Lazy Evaluation` "is an evaluation strategy which delays
the evaluation of an expression until its value is needed", but in this case I
need to _delay evaluation until there's memory available_.
I expect that a `decorator`, `descriptor` and/or use of Python's `property()` function may be involved, or possibly buffering or queueing the input.
Answer: Here's one solution: have Python spawn a process, run the function on one
file, then exit. The parent proc will collect results from each of the files.
This is in no way graceful, but if `LocalAudioFile` refuses to be dislodged
from memory, it allows some flexibility in getting results.
This code runs a function on each Python file in the current directory, returning a message to the parent process, which prints it out.
## source
import glob, multiprocessing, os

def proc(path):
    """
    Create AudioAnalysis Objects from directory and call object_analysis().
    """
    # audiofile = audio.LocalAudioFile(path) # hungry!
    return 'woot: {}'.format(path)

if __name__=='__main__': # required for Windows
    pool = multiprocessing.Pool() # one Process per CPU
    for output in pool.map(proc, [
            os.path.abspath(name) for name in glob.glob('q*.py')]):
        print 'output:', output
## output
output: woot: /home/johnm/src/johntellsall/karma/qpopen.py
output: woot: /home/johnm/src/johntellsall/karma/quotes.py
|
Using urlparse to remove a certain string?
Question: I have this URL:
www.domain.com/a/b/c/d,authorised=false.html
and I want to convert it into
www.domain.com/a/b/c/d.html
Please note I am using python 2.7.
from urlparse import urlparse
url = "www.domain.com/a/b/c/d,athorised=false.html_i_location=http%3A%2F%2Fwww.domain.com%2Fcms%2Fs%2F0%2Ff416e134-2484-11e4-ae78-00144feabdc0.html%3Fsiteedition%3Dintl&siteedition=intl&_i_referer=http%3A%2F%2Fwww.domain.com%2Fhome%2Fus"
o = urlparse(url)
url = o.hostname + o.path
print url
returns `www.domain.com/a/b/c/d,authorised=false.html` but I don't know how to
remove `authorised=false` part from the URL
Answer:
import re
print re.sub(r',.+\.', '.', 'www.domain.com/a/b/c/d,authorised=false.html')
# www.domain.com/a/b/c/d.html
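Tying it back to the `urlparse` step from the question (a sketch; the added `http://` scheme is an assumption so that `hostname`/`path` parse cleanly):

from urlparse import urlparse
import re

url = "www.domain.com/a/b/c/d,authorised=false.html"
o = urlparse('http://' + url)
print re.sub(r',.+\.', '.', o.hostname + o.path)
# www.domain.com/a/b/c/d.html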
|
parsing python to csv
Question: I'm trying to download some game stats to do some analysis; the only problem is that the data isn't 100% consistent from season to season.
I grab the JSON file from the site, then wish to save it to a CSV with the first line of the CSV containing the heading for each column, so each heading would essentially be the key from the Python data type.
#!/usr/bin/env python
import requests
import json
import csv
base_url = 'http://www.afl.com.au/api/cfs/afl/'
token_url = base_url + 'WMCTok'
player_url = base_url + 'matchItems/round'
def printPretty(data):
    print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

session = requests.Session() # session makes it simple to use the token across the requests
token = session.post(token_url).json()['token'] # get the token
session.headers.update({'X-media-mis-token': token}) # set the token

Season = 2014
Roundno = 4
if Roundno<10:
    strRoundno = '0'+str(Roundno)
else:
    strRoundno = str(Roundno)

# get some data (could easily be a for loop, might want to put in a delay using Sleep so that you don't get IP blocked)
data = session.get(player_url + '/CD_R'+str(Season)+'014'+strRoundno)

# print everything
printPretty(data.json())

with open('stats_game_test.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter="'",quotechar='|', quoting=csv.QUOTE_ALL)
    for profile in data.json()['items']:
        spamwriter.writerow(['%s' %(profile)])

#for key in data.json().keys():
#    print("key: %s , value: %s" % (key, data.json()[key]))
The above code grabs the json and writes it to a csv, but it puts the key in
each individual cell next to the value (eg 'venueId': 'CD_V190'), the key
needs to be just across the first row as a heading.
It gives me a csv file with data in the cells like this
Column A                  Column B
'tempInCelsius': 17.0     'totalScore': 32
'tempInCelsius': 16.0     'totalScore': 28
What I want is the data like this
tempInCelsius    totalScore
17               32
16               28
As I mentioned up the top, the data isn't always consistent so if I define
what fields to grab with spamwriter.writerow([profile['tempInCelsius'],
profile['totalScore']]) then it will error out on certain data grabs.
This is why I'm now trying the above method so it just grabs everything
regardless of what data is there.
Answer: You want to write a header row first:

spamwriter.writerow(['tempInCelsius', 'totalScore'])  # if you assume the column names; otherwise collect them from the keys

and then, inside the loop, write the values:

spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']])
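If the set of keys really does vary from row to row, one option (a sketch, assuming each item in `data.json()['items']` is a flat dict) is `csv.DictWriter`, which writes the header once and fills in missing fields for you:

import csv

rows = data.json()['items']   # list of dicts, as in the question
# every key seen in any row becomes a column
fieldnames = sorted(set().union(*(r.keys() for r in rows)))

with open('stats_game_test.csv', 'w') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames,
                            restval='', extrasaction='ignore')
    writer.writeheader()
    writer.writerows(rows)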
|
rPython using wrong python installation on Mac OSX
Question: I've installed python 2.7.8 alongside the 2.7.5 which comes with OSX 10.9.4.
Now how can I point `rPython` to python 2.7.8?
## Attempt #1
I've modified the OSX `.bash_profile` as follows to point everything to the
newer python installation.
export PATH=/usr/local/Cellar/python/2.7.8/bin/:$PATH:usr/local/bin:
And now when I run python from the terminal, it correctly runs the newer
version
mba:~ tommy$ which python
/usr/local/Cellar/python/2.7.8/bin//python
However, `rPython`, still sees 2.7.5.
> library(rPython)
Loading required package: RJSONIO
> python.exec("import sys; print(sys.version)")
2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
## Attempt #2
It looks like the `.bash_profile` doesn't get used by R at all... so I've
tried to modify the PATH within R. But still no luck.
> Sys.getenv("PATH")
[1] "/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"
> Sys.setenv(PATH = "usr/local/Cellar/python/2.7.8/bin")
> library(rPython)
Loading required package: RJSONIO
> python.exec("import sys; print(sys.version)")
2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
## Attempt #3
I tried removing and re-installing the `rPython` package thinking perhaps it
was using the version of Python that it found upon installation. No luck
either.
## Attempt #4
I've tried installing from source to see if that does anything... no luck.
# Update
Okay so it looks like the problem isn't anything to do with rPython itself.
<http://cran.r-project.org/web/packages/rPython/INSTALL>
> Package rPython depends on Python (>= 2.7).
>
> It requires both Python and its headers and libraries. These can be found in
> python and python-dev packages in Debian-like Linux distributions.
>
> In systems where several Python versions coexist, the user can choose the
> Python version to use at installation time. By default, the package will be
> installed using the Python version given by
>
> $ python --version
When I run that in the terminal..
mba:src tommy$ python --version
Python 2.7.8
But when I run it in R...
> system("python --version")
Python 2.7.5
So the problem is simply that R doesn't use OSX's `.bash_profile`. I'll need
to figure out how to change `PATH` outside of `.bash_profile`, or get R to use
`.bash_profile`.
What else can I try to get `rPython` working with `2.7.8`?
Answer: You can have a look at the rPython INSTALL file (sorry, perhaps I should make
it more explicit). It has a section on how to install rPython using the
desired Python version when several coexists. It says:
> In systems where several Python versions coexist, the user can choose the
> Python version to use at installation time. By default, the package will be
> installed using the Python version given by
>
>
> $ python --version
>
>
> but it is possible to select a different one if the RPYTHON_PYTHON_VERSION
> environment variable is appropriately set.
>
> For instance, if it is defined as
>
>
> RPYTHON_PYTHON_VERSION=3.2
>
>
> it will try to use Python 3.2 (looking for python3.2 and python3.2-config in
> the path). If set to
>
>
> RPYTHON_PYTHON_VERSION=3
>
>
> it will install against the "canonical" Python version in the system within
> the 3.x branch.
|
py2app: modulegraph missing scan_code
Question: For some reason I can't explain or google, py2app crashes on me even with the
simplest examples. I'm using a Python 3.4.1 virtual environment created as `Projects/Test/virtenv`, which has py2app installed via pip.
Here is the output of `$pip list`:
altgraph (0.12)
macholib (1.7)
modulegraph (0.12)
pip (1.5.6)
py2app (0.9)
setuptools (3.6)
foo.py is a hello world example file saved in Projects/Test/ and contains a
single line:
print('hello world')
setup.py is saved in Projects/Test as generated by `$py2applet --make-setup
foo.py`:
"""
This is a setup.py script generated by py2applet
Usage:
python setup.py py2app
"""
from setuptools import setup
APP = ['foo.py']
DATA_FILES = []
OPTIONS = {'argv_emulation': True}
setup(
app=APP,
data_files=DATA_FILES,
options={'py2app': OPTIONS},
setup_requires=['py2app'],
)
Here is the full output of running `$python setup.py py2app` (all pip and
python commands were done with the virtual enviroment activated) :
running py2app
creating /Users/mik/Desktop/Projects/Test/build
creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64
creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone
creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone/app
creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/collect
creating /Users/mik/Desktop/Projects/Test/build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/temp
creating /Users/mik/Desktop/Projects/Test/dist
creating build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/lib-dynload
creating build/bdist.macosx-10.8-x86_64/python3.4-standalone/app/Frameworks
*** using recipe: lxml ***
*** using recipe: ftplib ***
*** using recipe: sip ***
*** using recipe: ctypes ***
*** using recipe: xml ***
*** using recipe: pydoc ***
Traceback (most recent call last):
File "setup.py", line 18, in <module>
setup_requires=['py2app'],
File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/local/Cellar/python3/3.4.1/Frameworks/Python.framework/Versions/3.4/lib/python3.4/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 659, in run
self._run()
File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 865, in _run
self.run_normal()
File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 943, in run_normal
self.process_recipes(mf, filters, flatpackages, loader_files)
File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/build_app.py", line 824, in process_recipes
rval = check(self, mf)
File "/Users/mik/Desktop/Projects/Test/virtenv/lib/python3.4/site-packages/py2app/recipes/virtualenv.py", line 80, in check
mf.scan_code(co, m)
AttributeError: 'ModuleGraph' object has no attribute 'scan_code'
Can someone please explain whats going on and how to fix it?
EDIT:
[here](https://pythonhosted.org/modulegraph/modulegraph.html#modulegraph.modulegraph.scan_code)
is the documentation for scan_code in modulegraph.py, however the file found
in Projects/Test/virtenv/lib/python3.4/site-
packages/modulegraph/modulegraph.py contains a function called _scan_code with
a leading underscore. Is this some type of change that broke py2app?
EDIT: posted
[this](https://bitbucket.org/ronaldoussoren/modulegraph/issue/22/scan_code-in-
modulegraphpy-contains-a)
EDIT: Manually removing leading underscores from a couple function definitions
in the file mentioned allowed py2app to run without error. I'm still confused
regarding what happened
Answer: I had the same problem as you and solved it now.
I referred to this [post](http://www.marinamele.com/from-a-python-script-to-a-
portable-mac-application-with-py2app).
First, search for the path
$ /yourenv/lib/python2.7/site-packages/py2app/recipes/virtualenv.py
Next, open this file `virtualenv.py`, look for `scan_code` or `load_module`
and change it to `_scan_code` or `_load_module`.
Last, try to run your app
|
How do I perform some operation on the output of a function in Python?
Question: I am trying to make a function's output behave as if it's my input. The goal
is to make a new output from the old output.
I have some code that looks like this:
def func():
    BLOCK OF CODE

func()
There is no return statement in the function and no parameters within the
parenthesis. When I type `func()` to call my function as shown above, I get
the desired output, which is a bunch of printed statements. Now I want to do
something with that output to get another output. All I'm trying to do is
effectively "pipe" the output of one function into the input of another
function (or, if possible, not even worry about creating another function at
all, and instead doing something more direct). I looked into [Python 3 writing
to a pipe](http://stackoverflow.com/questions/5965874/python-3-writing-to-a-
pipe) but it did not help me. I also tried defining another function and using
the preceding function as a parameter, which did not work either:
def another_func(func):
    print another_statement

another_func(func)
I also tried making a closure (which "kind" of worked because at least it
printed the same thing that func() would print, but still not very
encouraging):
def func():
    def another_func():
        print another_statement
    BLOCK OF CODE
    another_func()
Finally, I tried designing both a decorator and a nested function to
accomplish this, but I have no parameters in my function, which really threw
off my code (didn't print anything at all).
Any advice on how to manipulate a function's output like as if it is your
input so that it's possible to create a new output?
Answer: You _could_ achieve this by redirecting stdout using a decorator:
from StringIO import StringIO
import sys

def pipe(f):
    def decorated(*args, **kwargs):
        old, sys.stdout = sys.stdout, StringIO()
        try:
            result = f(*args, **kwargs)
            output = sys.stdout.getvalue()
        finally:
            sys.stdout = old
        return result, output
    return decorated
You could then get the `result, output` pair from any decorated function, eg:
@pipe
def test(x):
    print x
    return 0

test(3) -> (0, '3\n')
* * *
However, I can't think of a good reason why you'd _want_ to do this.
(Actually, that's not _quite_ true; it is handy when writing unit tests for
user IO, such as when testing student assignments in a software engineering
course. I seriously doubt that that's what the OP is trying to do, though.)
|
How to run port lookup command in python subprocess
Question: I am using terminal command
while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done
in my commandline to lookup port in my remote machine.( it is working fine). I
want to do this operation inside my python script. I found subprocess and I
want to know that how can I do this with subprocess ?
from subprocess import call
call(["while xxxxxxxxxxxxxxxxxxxxxxxxxxx"])
Answer: `subprocess.call` does not by default use a shell to run its commands.
Therefore, things like `while` are unknown commands. Instead, you could pass
`shell=True` to `call` ([**security risk** with dynamic data and user
input](https://docs.python.org/2/library/subprocess.html#frequently-used-
arguments)*) or call the shell directly (the same advice applies):
from subprocess import call
call("while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done", shell="True")
or call the shell directly. This is (a) _less portable_ (because it assumes a
specific shell is available) and (b) _more secure_ (because you control exactly
which shell is used; shell syntax is not unified across shells, e.g. `csh` vs.
`bash`, so running the command under a different shell may lead to undefined or
unwanted behaviour):
from subprocess import call
call(["bash", "-c", "while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done"])
The exact argument to the shell to execute a command (here `-c`) depends on
your shell.
You may want to have a look at the [`subprocess`
docs](https://docs.python.org/dev/library/subprocess.html), especially for
other ways of invoking processes. See e.g. `check_call` as a way of checking
the return code for success, `check_output` to get the standard output of the
process and `Popen` for advanced input/output interaction with the process.
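For example, `check_call` makes the loop's exit status visible as an exception
instead of being silently ignored (a sketch using the same command as above):

    from subprocess import check_call, CalledProcessError

    try:
        # Raises CalledProcessError if the shell command exits non-zero.
        check_call(["bash", "-c",
                    "while ! echo exit | nc 10.0.2.11 9445; do sleep 10; done"])
    except CalledProcessError as err:
        print("port check failed with exit status %d" % err.returncode)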
Alternatively, you could use `os.system`, which implicitly launches a shell
and returns the return code (`subprocess.check_call` with `shell=True` is a
more flexible alternative to this)
* _This link is to the Python 2 docs instead of the Python 3 docs used otherwise because it better outlines the security problems_
|
Using pandas Combining/merging 2 different Excel files/sheets
Question: I am trying to combine 2 different Excel files. (thanks to the post [Import
multiple excel files into python pandas and concatenate them into one
dataframe](http://stackoverflow.com/questions/20908018/import-multiple-excel-
files-into-python-pandas-and-concatenate-them-into-one-dat))
The one I work out so far is:
import os
import pandas as pd
df = pd.DataFrame()
for f in ['c:\\file1.xls', 'c:\\ file2.xls']:
data = pd.read_excel(f, 'Sheet1')
df = df.append(data)
df.to_excel("c:\\all.xls")
Here is how they look like.

However I want to:
1. Exclude the last rows of each file (i.e. row4 and row5 in File1.xls; row7 and row8 in File2.xls).
  2. Add a column (or overwrite Column A) to indicate where the data is from.
For example:

Is it possible? Thanks.
Answer: For num. 1, you can specify `skip_footer` as explained
[here](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.read_excel.html#pandas.read_excel); or,
alternatively, do
data = data.iloc[:-2]
once you read the data.
For num. 2, you may do:
from os.path import basename
data.index = [basename(f)] * len(data)
Also, perhaps would be better to put all the data-frames in a list and then
`concat` them at the end; something like:
df = []
for f in ['c:\\file1.xls', 'c:\\ file2.xls']:
data = pd.read_excel(f, 'Sheet1').iloc[:-2]
data.index = [os.path.basename(f)] * len(data)
df.append(data)
df = pd.concat(df)
|
The number of tries never increments by more than one in Python - help please?
Question: Whenever it takes me several tries to beat the game, it always says the
number_of_guesses is 1, which isn't true. What have I done wrong?
My code:
import random
print("Welcome to Rock, Paper, Scissors. \nYou will be going up against the computer, who will randomly",
"choose an object to duel you with!")
user_win = False
while not user_win:
user_guess = input("\nChoose either Rock, Paper or Scissors: ")
user_guess = user_guess.lower()
if user_guess != "rock" and user_guess != "paper" and user_guess != "scissors":
print("You didn't enter a valid guess. Try again, please.")
user_guess = input("\nChoose either Rock, Paper or Scissors: ")
user_guess = user_guess.lower()
computer_guess = random.randint(1,3)
if computer_guess == 1:
computer_guess = "rock"
elif computer_guess == 2:
computer_guess = "paper"
else:
computer_guess = "scissors"
print("Your guess:", user_guess.capitalize(), "\nComputer guess:", computer_guess.capitalize())
number_of_guesses = 1
if user_guess == computer_guess:
print("\nThe game is a tie. You guessed", user_guess, "and so did the computer.")
number_of_guesses += 1
user_win = False
elif (user_guess == "rock" and computer_guess == "scissors") or (user_guess == "paper" and computer_guess == "rock"):
print("\nCongratulations! You have beaten the computer by playing", user_guess.capitalize(), "while the computer played", computer_guess.capitalize())
user_win = True
number_of_guesses += 1
else:
print("\nDamn! The computer won by playing", computer_guess.capitalize(), "while you played", user_guess.capitalize())
user_win = False
number_of_guesses += 1
if number_of_guesses == 1:
print("\nYou won, and it only took you %d try!" % number_of_guesses)
else:
print("\nYou won, and it only took you %d tries!" % number_of_guesses)
input("\nPress enter to exit the program.")
I think that's formatted correctly. It's not easy to put code in here. Thank
you!
Answer: The problem is that you initialize `number_of_guesses = 1` inside the while
loop, so the counter is reset to 1 on every round. Move the initialization above
the `while` loop and increment it once per round, as in the sketch below.
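A minimal sketch of the fix (only the lines relevant to the counter are shown;
the guess handling stays as in the question):

    number_of_guesses = 0          # initialize once, before the loop
    user_win = False
    while not user_win:
        # ... read user_guess and pick computer_guess exactly as before ...
        number_of_guesses += 1     # count every round exactly once
        if user_guess == computer_guess:
            user_win = False       # tie: play another round
        elif (user_guess == "rock" and computer_guess == "scissors") or \
             (user_guess == "paper" and computer_guess == "rock"):
            user_win = True
        else:
            user_win = False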
|
Migrate a database in Django
Question: I am getting started with Django through
[this](http://www.youtube.com/watch?v=3DccH9AMwFQ) beautiful video tutorial.On
Tutorial 15 of the video series, there is database migration using **south**.
But when I do `python manage.py migrate signups`, I got a whole lot of errors.
The first error was:
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 164, i
n _run_migration
for name, db in south.db.dbs.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'
I changed `iteritems()` to `items()` to fix that but there is a whole lot of
other errors popping up. My guess is that it has to do with the versions in
action- `South==1.0 Django == 1.6.5 and Python 3.4.1`
Here is the content of my _models.py_ and `for_you, timestamp, updated` are
the attributes added after migration. The commented out attributes were there
originally.
`from django.db import models
class SignUp(models.Model):
for_you = models.BooleanField(default = True)
first_name = models.CharField(max_length = 120, null=True, blank=True)
last_name = models.CharField(max_length = 120, null=True, blank=True)
email = models.EmailField()
timestamp = models.DateTimeField(auto_now_add = True, auto_now = False)
updated = models.DateTimeField(auto_now_add = False, auto_now = True, default=True)
#timestamp = models.DateTimeField(auto_now_add = False, auto_now = True)
#timestamp = models.DateTimeField(auto_now_add = True, auto_now = False)
def __str__(self):
return self.email`
The autogenerated
**migrations/0002_auto__add_field_signup_for_you__add_field_signup_updated.py**
looks like
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'SignUp.for_you'
db.add_column('signups_signup', 'for_you',
self.gf('django.db.models.fields.BooleanField')(default=True),
keep_default=False)
# Adding field 'SignUp.updated'
db.add_column('signups_signup', 'updated',
self.gf('django.db.models.fields.DateTimeField')(blank=True, default=True, auto_now=True),
keep_default=False)
def backwards(self, orm):
# Deleting field 'SignUp.for_you'
db.delete_column('signups_signup', 'for_you')
# Deleting field 'SignUp.updated'
db.delete_column('signups_signup', 'updated')
models = {
'signups.signup': {
'Meta': {'object_name': 'SignUp'},
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'first_name': ('django.db.models.fields.CharField', [], {'blank': 'True', 'null': 'True', 'max_length': '120'}),
'for_you': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'last_name': ('django.db.models.fields.CharField', [], {'blank': 'True', 'null': 'True', 'max_length': '120'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'blank': 'True', 'auto_now_add': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'blank': 'True', 'default': 'True', 'auto_now': 'True'})
}
}
complete_apps = ['signups']
And here is the complete error log:
Running migrations for signups:
- Migrating forwards to 0002_auto__add_field_signup_for_you__add_field_signup_u
pdated.
> signups:0002_auto__add_field_signup_for_you__add_field_signup_updated
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 175, i
n _run_migration
migration_function()
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 60, in
<lambda>
return (lambda: direction(orm))
File "D:\Projects\skillshare\src\signups\migrations\0002_auto__add_
field_signup_for_you__add_field_signup_updated.py", line 19, in forwards
keep_default=False)
File "C:\Python34\lib\site-packages\south\db\sqlite3.py", line 35, in add_colu
mn
field_default = "'%s'" % field.get_db_prep_save(default, connection=self._ge
t_connection())
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
350, in get_db_prep_save
prepared=False)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
911, in get_db_prep_value
value = self.get_prep_value(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
895, in get_prep_value
value = self.to_python(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
854, in to_python
parsed = parse_datetime(value)
File "C:\Python34\lib\site-packages\django\utils\dateparse.py", line 67, in pa
rse_datetime
match = datetime_re.match(value)
TypeError: expected string or buffer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line
399, in execute_from_command_line
utility.execute()
File "C:\Python34\lib\site-packages\django\core\management\__init__.py", line
392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python34\lib\site-packages\django\core\management\base.py", line 242,
in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python34\lib\site-packages\django\core\management\base.py", line 285,
in execute
output = self.handle(*args, **options)
File "C:\Python34\lib\site-packages\south\management\commands\migrate.py", lin
e 111, in handle
ignore_ghosts = ignore_ghosts,
File "C:\Python34\lib\site-packages\south\migration\__init__.py", line 220, in
migrate_app
success = migrator.migrate_many(target, workplan, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 256, i
n migrate_many
result = migrator.__class__.migrate_many(migrator, target, migrations, datab
ase)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 331, i
n migrate_many
result = self.migrate(migration, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 133, i
n migrate
result = self.run(migration, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 113, i
n run
dry_run.run_migration(migration, database)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 192, i
n run_migration
self._run_migration(migration)
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 178, i
n _run_migration
raise exceptions.FailedDryRun(migration, sys.exc_info())
south.exceptions.FailedDryRun: ! Error found during dry run of '0002_auto__add_
field_signup_for_you__add_field_signup_updated'! Aborting.
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 175, i
n _run_migration
migration_function()
File "C:\Python34\lib\site-packages\south\migration\migrators.py", line 60, in
<lambda>
return (lambda: direction(orm))
File "D:\Projects\skillshare\src\signups\migrations\0002_auto__add_
field_signup_for_you__add_field_signup_updated.py", line 19, in forwards
keep_default=False)
File "C:\Python34\lib\site-packages\south\db\sqlite3.py", line 35, in add_colu
mn
field_default = "'%s'" % field.get_db_prep_save(default, connection=self._ge
t_connection())
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
350, in get_db_prep_save
prepared=False)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
911, in get_db_prep_value
value = self.get_prep_value(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
895, in get_prep_value
value = self.to_python(value)
File "C:\Python34\lib\site-packages\django\db\models\fields\__init__.py", line
854, in to_python
parsed = parse_datetime(value)
File "C:\Python34\lib\site-packages\django\utils\dateparse.py", line 67, in pa
rse_datetime
match = datetime_re.match(value)
TypeError: expected string or buffer
Answer: The problem is that you use a boolean as the default value (see `default=True`
on line 19 in your migration) for a `DateTime` column. That won't work. Remove the
`default=True` from your model and regenerate your migration.
You will probably need `null=True` on that column or some time-based default value,
as in the sketch below.
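A sketch of the corrected field: `auto_now=True` already supplies a time-based value
on save, and `null=True` lets the migration add the column to existing rows without
needing a default:

    # models.py -- sketch of the fixed field
    updated = models.DateTimeField(auto_now=True, auto_now_add=False, null=True)

    # then regenerate and rerun the migration (South commands):
    #   python manage.py schemamigration signups --auto
    #   python manage.py migrate signups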
|
sphinx autosummary with toctree also lists imported members
Question: I use Sphinx and autosummary to produce the documentation of a Python
software. It works well but the produced .rst files also list the imported
functions and classes, which is not the behaviour I would like.
For example a package "packageex" with the docstring:
"""
Package Example (:mod:`packageex`)
==================================
.. currentmodule:: packageex
.. autosummary::
:toctree:
module0
module1
"""
would produce a file packageex.module0.rst with
Module0 (:mod:`packageex.module0`)
=================================
.. currentmodule:: packageex.module0
.. rubric:: Functions
.. autosummary::
f0
f1
f2_imported
f3_imported
.. rubric:: Classes
.. autosummary::
Class0
ClassImported
Is there a way to list only the functions and classes defined in the module
(and not those imported)?
In the doc of autodoc (<http://sphinx-doc.org/latest/ext/autodoc.html>), there
is "In an automodule directive with the members option set, only module
members whose `__module__` attribute is equal to the module name as given to
automodule will be documented. This is to prevent documentation of imported
classes or functions. Set the imported-members option if you want to prevent
this behaviour and document all available members. Note that attributes from
imported modules will not be documented, because attribute documentation is
discovered by parsing the source file of the current module." Is it possible
to get the same behaviour with autosummary?
Answer: As mentioned by mzjn, it seems to be a known strange behaviour of the
extension autosummary. In order to get the wanted behaviour (i.e. to prevent
the listing of imported objects), I have just modified the function
`get_members` (l. 166 of sphinx.ext.autosummary.generate) like so:
def get_members(obj, typ, include_public=[], imported=False):
items = []
for name in dir(obj):
try:
obj_name = safe_getattr(obj, name)
documenter = get_documenter(obj_name, obj)
except AttributeError:
continue
if documenter.objtype == typ:
try:
cond = (
imported or
obj_name.__module__ == obj.__name__
)
except AttributeError:
cond = True
if cond:
items.append(name)
public = [x for x in items
if x in include_public or not x.startswith('_')]
return public, items
|
Python SQLAlchemy: Mapping a PostGIS geom field
Question: SQLAlchemy maps DB columns to object members in Python. [For
example](http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html):
class User(Base):
__tablename__ = 'users'
id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
name = Column(String(50))
fullname = Column(String(50))
password = Column(String(12))
Which type should be used to map a [PostGIS geom
column](http://dba.stackexchange.com/a/71779/972) to a class member?
Answer: As far as I know, there's no built in way to do this with SQLAlchemy. However,
[GeoAlchemy2](https://github.com/geoalchemy/geoalchemy2) is an extension to
SQLAlchemy that adds support for geospatial fields (points, lines, polygons,
etc):
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Sequence
from geoalchemy2 import Geometry
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, Sequence('user_id_seq'), primary_key=True)
name = Column(String(50))
fullname = Column(String(50))
password = Column(String(12))
geom = Column(Geometry('POINT'))
GeoAlchemy2 also provides functionality for spatial queries. From the
[docs](https://geoalchemy-2.readthedocs.org/en/0.2.4/orm_tutorial.html#spatial-
query):
query = session.query(Lake).filter(Lake.geom.ST_Contains('POINT(4 1)'))
|
How to make a multithreaded system work on same dictionary in python
Question: I have a system designed to take data via a socket and store that into a
dictionary to serve as a database. Then all my other modules (GUI, analysis,
write_to_log_file, etc) will access the database and do what they need to do
with the dictionary, e.g. make widgets or copy the dictionary to a log file. But
since all these things happen at different rates, I chose to put each module on
its own thread so I can control the frequency.
In the main run function there's something like this:
from threading import Thread
import data_collector
import write_to_log_file
def main():
db = {}
        receive_data_thread = Thread(target=data_collector.main, args=(db,))
        receive_data_thread.start() # writes to dictionary @ 50 Hz
        log_data_thread = Thread(target=write_to_log_file.main, args=(db,))
        log_data_thread.start() # reads dictionary @ 1 Hz
But it seems that both modules aren't working on the same dictionary instance
because the log_data_thread just prints out the empty dictionary even when the
data_collector shows the data it's inserted into the dictionary.
There's only one writer to the dictionary so I don't have to worry about
threads stepping on each others toes, I just need to figure out a way for all
the modules to read the current database as it's being written.
Answer: [Use a Queue.Queue](http://pymotw.com/2/Queue/#using-queues-with-threads) to
pass values from the reader threads to a single writer thread. Pass the Queue
instance to each `data_collector.main` function. They can all call the Queue's
`put` method.
Meanwhile, `write_to_log_file.main` should also be passed the same Queue
instance, and it can call the Queue's `get` method. As items are pulled out of
the Queue, they can be added to the `dict` (see the sketch below).
See also: Alex Martelli, on why [Queue.Queue is the secret sauce of CPython
multithreading](http://stackoverflow.com/a/1192114/190597).
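A minimal sketch of that layout, reusing the names from the question (the
`(key, value)` message format and the body of `write_to_log_file.main` are
illustrative assumptions):

    from threading import Thread
    from Queue import Queue  # Python 2; on Python 3 use the `queue` module

    import data_collector
    import write_to_log_file

    def main():
        updates = Queue()

        # data_collector.main is assumed to call updates.put((key, value))
        # for every sample it receives (at 50 Hz).
        Thread(target=data_collector.main, args=(updates,)).start()

        # write_to_log_file.main owns the dict: it drains the queue, applies
        # each update, and writes the log at its own pace.
        Thread(target=write_to_log_file.main, args=(updates,)).start()

    # Inside write_to_log_file.main (sketch):
    #     db = {}
    #     while True:
    #         key, value = updates.get()  # blocks until the collector puts an item
    #         db[key] = value
    #         # ...periodically dump db to the log file...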
|
Memoizing an entire block in Python
Question: Say I have some code that creates several variables:
# Some code
# Beginning of the block to memoize
a = foo()
b = bar()
...
c =
# End of the block to memoize
# ... some more code
I would like to **memoize** the entire block above without having to be
explicit about every variable created/changed in the block or pickle them
manually. How can I do this in Python?
Ideally I would like to be able to wrap it with something (`if/else` or `with`
statement) and have a flag that forces a refresh if I want.
**Conceptually** speaking, it would be like:
# Some code
# Flag that I can set from outside to save or force a reset of the chache
refresh_cache = True
if refresh_cache == False
load_cache_of_block()
else:
# Beginning of the block to memoize
a = foo()
b = bar()
...
c = stuff()
# End of the block to memoize
save_cache_of_block()
# ... some more code
Is there any way to do this without having to explicitly pickle each variable
defined or changed in the code? (i.e. at the end of the first run we save, and
we later just reuse the values)
Answer: There are a lot of ways to go about this, but I think the way that's closest to
what you're describing would be to use Python's `module scope` as your memoized
store and `import` or `reload` it as needed. Something like this:
# a.py
import b
print b.a, b.b
b.func(5)
b.b = 'world'
print b.a, b.b
if b.needs_refresh():
reload(b)
print b.a, b.b
With your "variable scope" being the module `b`:
# b.py
a = 0
b = 'hello'
def func(i):
global a
a += i
def needs_refresh():
return a >= 5
Executing this results in what you'd expect:
0 hello
5 world
0 hello
Edit: to allow you to copy and save the entire scope, you could just use a
class scope:
memo_stack = list()
class MemoScope(object):
def __init__(self):
self.a = 0
self.b = 'hello'
memo = MemoScope()
memo.a = 2
memo.b = 3
memo_stack.append(memo)
memo_stack.append(MemoScope())
for i, o in enumerate(memo_stack):
print "memo t%i.a = %s" % (i, o.a)
print "memo t%i.b = %s" % (i, o.b)
if o.a == 2:
memo_stack[i] = MemoScope()
print "refreshed"
# memo t0.a = 2
# memo t0.b = 3
# refreshed
# memo t1.a = 0
# memo t1.b = hello
|
Connecting to ES using python script
Question: I am trying to connect and retrieve data from a ES engine.
I am using the following script:
from elasticsearch import Elasticsearch as ES
print "Setup connection..."
es=ES(['http://elasticsearch......com:9200/cuevents-2014.34,cuevents-2014.33/_search?pretty'])
print "Done!"
print "Count number of users..."
print es.count(index='cuevents-2014.34')
But I am getting the following message returned instead.
Setup connection...
No handlers could be found for logger "elasticsearch"
Done!
Count number of users...
Traceback (most recent call last):
File "/home/es.py", line 8, in <module>
print es.count(index='cuevents-2014.34')
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 68, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 622, in count
params=params, body=body)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 284, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 51, in perform_request
raise ConnectionError('N/A', str(e), e)
elasticsearch.exceptions.ConnectionError: ConnectionError(('Connection aborted.', gaierror(-2, 'Name or service not known'))) caused by: ProtocolError(('Connection aborted.', gaierror(-2, 'Name or service not known')))
I am trying to connect and return the number of documents in the index
cuevents-2014.34
Answer: Okay, I looked at your code again more closely. The Elasticsearch constructor
needs to be pointed to the root URL - you're not supposed to specify indexes
on the URL.
from elasticsearch import Elasticsearch as ES
print "Setup connection..."
es=ES(['http://elasticsearch......com:9200/'])
print "Done!"
print "Count number of users..."
print es.count(index='cuevents-2014.34')
See <http://elasticsearch-py.readthedocs.org/en/master/api.html#elasticsearch>
If you want to limit access to only certain indexes: Elasticsearch itself does not
provide access control at all, but you can achieve it via URL rewriting on a reverse
proxy.
|
My csv export download button is broken
Question: I am using cherrypy to run an interactive website and although the python
function to generate the CSV seems to be working (if you interact with it
directly, my browser downloads it), it does not seem to be giving the user
this CSV file when I embed it in a form request:
<form id="export_csv_left" action="/c/flex_export_csv" method="get">
<input type="hidden" name="datakey" value="8TZbmRZ54IL7" >
<button type="button">Export stories and data as CSV</button>
</form>
I'd like there to be a button that says "export CSV" and return the file. That
form generates a request to my cherrypy that looks like this:
djotjog.com/c/flex_export_csv?datakey=8TZbmRZ54IL7
The headers inside the cherrypy part are...
csv = make_csv(literal_eval(raw_data), filename)
cherrypy.response.headers['Content-Type'] = "application/x-download"
cherrypy.response.headers['Content-Disposition'] = ('attachment; filename= %s' % (filename,))
return csv
And loading that link in the browser DOES generate the CSV. So what's up with
the form stuff?
Here are some potentially relevant javascript console messages I don't really
understand:
Denying load of chrome-extension://ganlifbpkcplnldliibcbegplfmcfigp/scripts/vendor/jquery/jquery.min.map. Resources must be listed in the web_accessible_resources manifest key in order to be loaded by pages outside the extension.
In case that's related.
Answer: Your issue has nothing to do with CherryPy per se. Just make sure your _form
button_ `type` attribute is `submit` and response `content-type` header is
generic `application/octet-stream` (or `text/csv`). Like this.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import cherrypy
config = {
'global' : {
'server.socket_host' : '127.0.0.1',
'server.socket_port' : 8080,
'server.thread_pool' : 4
}
}
class App:
@cherrypy.expose
def index(self):
return '''<!DOCTYPE html>
<html>
<body>
<form action="/gimmefile" method="get">
<input type="hidden" name="key" value="8TZbmRZ54IL7"/>
<button type="submit">Export CSV</button>
</form>
</body>
</html>
'''
@cherrypy.expose
def gimmefile(self, key):
cherrypy.response.headers['Content-Type'] = 'application/octet-stream'
cherrypy.response.headers['Content-Disposition'] = 'attachment; filename=yourfile.csv'
return 'Your;file;content;and;{0}'.format(key)
if __name__ == '__main__':
cherrypy.quickstart(App(), '/', config)
|
loading javascript into ipython notebook
Question: I am trying to import the following three js libraries. They all work except
for crossfilter.js. Can anyone tell me what I am doing wrong?
import jinja2
from IPython.display import display, Javascript, HTML
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
crossfilter: '//cdnjs.cloudflare.com/ajax/libs/crossfilter/1.3.7/crossfilter.min',
dc: '//cdnjs.cloudflare.com/ajax/libs/dc/1.7.0/dc.min'
}
});
dc = jinja2.Template(
"""
require(["d3","crossfilter","dc"], function(d3,crossfilter,dc) {
console.log(d3);
console.log(crossfilter);
console.log(dc);
});
""")
display(Javascript(dc.render()))
## Output I am getting:
Object
Undefined
Object
Answer: Crossfilter does not (yet) support requireJS:
<https://github.com/square/crossfilter/issues/114>
The ticket shows how to use a shim config to load it.
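A sketch of that shim added to the config from the question (the `exports` value
assumes crossfilter attaches itself to the global `crossfilter` name, which is what
the linked ticket relies on):

    require.config({
        paths: {
            d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',
            crossfilter: '//cdnjs.cloudflare.com/ajax/libs/crossfilter/1.3.7/crossfilter.min',
            dc: '//cdnjs.cloudflare.com/ajax/libs/dc/1.7.0/dc.min'
        },
        shim: {
            crossfilter: {
                exports: 'crossfilter'  // expose the global as an AMD module
            }
        }
    });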
|
How to create directory in specified path?
Question: I want to create directory. This is the code:
from os import mkdir
mkdir(*name of new directory*)
When run from the terminal, this creates the directory in the folder that contains
this Python script. How do I specify a path where I want to create the new directory
while keeping the possibility to name it?
Answer:
mkdir(*absolute path of new directory*)
So, for example:
mkdir("c:/users/me/desktop/new_dir") # for Windows
mkdir("/home/users/me/Desktop/new_dir") # for Linux
|
(Python, pandas) adding an extra column, to the combined/merged Excel files
Question: Hello all…I am wondering how it is possible to add an extra column, to the
combined/merged Excel files. (thanks to the help to my [previous
question](http://stackoverflow.com/questions/25400240/using-pandas-combining-
merging-2-different-excel-files-sheets))
The Excel files look like:

What I have is:
import xlwt
import os
import pandas as pd
from os.path import basename
files = os.listdir('C:\\Kids')
files_xls = []
for f in files:
if f[-3:] == 'xls':
fff = 'C:\\Kids\\' + f
files_xls.append(fff)
df = pd.DataFrame()
for f in files_xls:
        data = pd.read_excel(f)
team_name = basename(f)[0:basename(f).find(' ')-1]
team_code = basename(f)[basename(f).find(' ')+1 : basename(f).find('.xls')]
data.index = [team_code] * len(data) # I can have either ‘team_code’ or
# ‘team_name’ added into the 1st column,
# but I want them to be added into 2
# different columns
df = df.append(data)
df.to_excel("C:\\Kids\\combined2.xls")
As a result, only one column can be added to the output, but I need both the
‘team_name’ and ‘team_code’ listed in separate columns. Like:

How can I do that? I tried something like:
df.insert(0, 'new_col', team_code)
but it doesn't work. Thanks.
Answer: You have to use a multiIndex.
data.index = pd.MultiIndex.from_tuples([(team_name,team_code)] * len(data))
Also, to keep the team_name cells from being merged in the output, you can use
df.to_excel("C:\\Kids\\combined2.xls",merge_cells = False)
|
Using weak pointers that evaluate to a None type
Question: In my implementation for python integration into a c++ application, I am
adding support for nodes that might or might not be valid. Internally these
are stored as weak pointers, so I was thinking of having an isValid() method
that users can use before calling the exposed methods. If they call an exposed
method on an invalid node it would throw an exception.
However, I was wondering if it's possible to be a bit more pythonic than that.
Is it possible to internally check whether the pointer is valid before calling
the exposed method, and if it isn't making the python object None?
An example of what I want is here:
>>> my_valid_node = corelibrary.getNode("valid_node")
>>> my_valid_node.printName()
valid_node
Now however, something somewhere else in the system might invalidate the node,
but from python's point of view, I want the node to become None.
>>> my_valid_node.printName()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'printName'
Can anyone think of a way to do this?
Answer: There is no _clean_ way to make a reference to an object become a reference to
`None` when an external event occurs. Nevertheless, when working towards a
Pythonic interface, one could:
* Implement the [`__nonzero__`](https://docs.python.org/2/reference/datamodel.html#object.__nonzero__) method to allow the object to be evaluated in a boolean context.
* Throw a Python exception when the `weak_ptr` fails to lock. One simple solution, would be to access a member attribute on a default constructed `boost::python::object`, as it references `None`.
Note that attribute lookup customization points, such as
[`__getattr__`](https://docs.python.org/2/reference/datamodel.html#object.__getattr__),
will not be sufficient enough as the object pointed to by `weak_ptr` may
expire between attribute access and dispatch to the C++ member functions.
* * *
Here is a complete minimal example based on the above details. In this
example, `spam` and `spam_factory`, a factory that instantiates `spam` objects
managed by `shared_ptr`, are considered legacy types. A `spam_proxy` auxiliary
class that references `spam` via `weak_ptr` and additionally auxiliary
functions help adapt the legacy types to Python.
#include <string>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/smart_ptr/weak_ptr.hpp>
#include <boost/python.hpp>
/// Assume legacy APIs.
// Mockup class containing data.
class spam
{
public:
explicit spam(const char* name)
: name_(name)
{}
std::string name() { return name_; }
private:
std::string name_;
};
// Factory for creating and destroying the mockup class.
class spam_factory
{
public:
boost::shared_ptr<spam> create(const char* name)
{
instance_ = boost::make_shared<spam>(name);
return instance_;
}
void destroy()
{
instance_.reset();
}
private:
boost::shared_ptr<spam> instance_;
};
/// Auxiliary classes and functions to help obtain Pythonic semantics.
// Helper function used to cause a Python AttributeError exception to
// be thrown on None.
void throw_none_has_no_attribute(const char* attr)
{
// Attempt to extract the specified attribute on a None object.
namespace python = boost::python;
python::object none;
python::extract<python::object>(none.attr(attr))();
}
// Mockup proxy that has weak-ownership.
class spam_proxy
{
public:
explicit spam_proxy(const boost::shared_ptr<spam>& impl)
: impl_(impl)
{}
std::string name() const { return lock("name")->name(); }
bool is_valid() const { return !impl_.expired(); }
boost::shared_ptr<spam> lock(const char* attr) const
{
// Attempt to obtain a shared pointer from the weak pointer.
boost::shared_ptr<spam> impl = impl_.lock();
// If the objects lifetime has ended, then throw.
if (!impl) throw_none_has_no_attribute(attr);
return impl;
}
private:
boost::weak_ptr<spam> impl_;
};
// Use a factory to create a spam instance, but wrap it in the proxy.
spam_proxy spam_factory_create(
spam_factory& self,
const char* name)
{
return spam_proxy(self.create(name));
}
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Expose the proxy class as if it was the actual class.
python::class_<spam_proxy>("Spam", python::no_init)
.def("__nonzero__", &spam_proxy::is_valid)
.add_property("name", &spam_proxy::name)
;
python::class_<spam_factory>("SpamFactory")
.def("create", &spam_factory_create) // expose auxiliary method
.def("destroy", &spam_factory::destroy)
;
}
Interactive usage:
>>> import example
>>> factory = example.SpamFactory()
>>> spam = factory.create("test")
>>> assert(spam.name == "test")
>>> assert(bool(spam) == True)
>>> if spam:
... assert(bool(spam) == True)
... factory.destroy() # Maybe occurring from a C++ thread.
... assert(bool(spam) == False) # Confusing semantics.
... assert(spam.name == "test") # Confusing exception.
...
Traceback (most recent call last):
File "<stdin>", line 5, in <module>
AttributeError: 'NoneType' object has no attribute 'name'
>>> assert(spam is not None) # Confusing type.
One could argue that while the interface is Pythonic, the object's semantics
are not. With `weak_ptr` semantics not being too common in Python, one
generally does not expect the object referenced by a local variable to be
destructed. If `weak_ptr` semantics are necessary, then consider introducing a
means to allow the user to obtain shared ownership within a specific context
via the [context manager
protocol](https://docs.python.org/2/library/stdtypes.html#context-manager-
types). For example, the following pattern allows an object's validity to be
checked once, then be guaranteed within a limited scope:
>>> with spam: # Attempt to acquire shared ownership.
... if spam: # Verify ownership was obtained.
... spam.name # Valid within the context's scope.
... factory.destroy() # spam is still valid.
... spam.name # Still valid.
... # spam destroyed once context's scope is exited.
Here is a complete extension of the previous example, where in the
`spam_proxy` implements the context manager protocol:
#include <string>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/smart_ptr/weak_ptr.hpp>
#include <boost/python.hpp>
/// Assume legacy APIs.
// Mockup class containing data.
class spam
{
public:
explicit spam(const char* name)
: name_(name)
{}
std::string name() { return name_; }
private:
std::string name_;
};
// Factory for creating and destroying the mockup class.
class spam_factory
{
public:
boost::shared_ptr<spam> create(const char* name)
{
instance_ = boost::make_shared<spam>(name);
return instance_;
}
void destroy()
{
instance_.reset();
}
private:
boost::shared_ptr<spam> instance_;
};
/// Auxiliary classes and functions to help obtain Pythonic semantics.
// Helper function used to cause a Python AttributeError exception to
// be thrown on None.
void throw_none_has_no_attribute(const char* attr)
{
// Attempt to extract the specified attribute on a None object.
namespace python = boost::python;
python::object none;
python::extract<python::object>(none.attr(attr))();
}
// Mockup proxy that has weak-ownership and optional shared ownership.
class spam_proxy
{
public:
explicit spam_proxy(const boost::shared_ptr<spam>& impl)
: shared_impl_(),
impl_(impl)
{}
std::string name() const { return lock("name")->name(); }
bool is_valid() const { return !impl_.expired(); }
boost::shared_ptr<spam> lock(const char* attr) const
{
// If shared ownership exists, return it.
if (shared_impl_) return shared_impl_;
// Attempt to obtain a shared pointer from the weak pointer.
boost::shared_ptr<spam> impl = impl_.lock();
// If the objects lifetime has ended, then throw.
if (!impl) throw_none_has_no_attribute(attr);
return impl;
}
void enter()
{
// Upon entering the runtime context, guarantee the lifetime of the
// object remains until the runtime context exits if the object is
// alive during this call.
shared_impl_ = impl_.lock();
}
bool exit(boost::python::object type,
boost::python::object value,
boost::python::object traceback)
{
shared_impl_.reset();
return false; // Do not suppress the exception.
}
private:
boost::shared_ptr<spam> shared_impl_;
boost::weak_ptr<spam> impl_;
};
// Use a factory to create a spam instance, but wrap it in the proxy.
spam_proxy spam_factory_create(
spam_factory& self,
const char* name)
{
return spam_proxy(self.create(name));
}
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Expose the proxy class as if it was the actual class.
python::class_<spam_proxy>("Spam", python::no_init)
.def("__nonzero__", &spam_proxy::is_valid)
// Support context manager protocol.
.def("__enter__", &spam_proxy::enter)
.def("__exit__", &spam_proxy::exit)
.add_property("name", &spam_proxy::name)
;
python::class_<spam_factory>("SpamFactory")
.def("create", &spam_factory_create) // expose auxiliary method
.def("destroy", &spam_factory::destroy)
;
}
Interactive usage:
>>> import example
>>> factory = example.SpamFactory()
>>> spam = factory.create("test")
>>> with spam:
... assert(bool(spam) == True)
... if spam:
... assert(spam.name == "test")
... factory.destroy()
... assert(bool(spam) == True)
... assert(spam.name == "test")
...
>>> assert(bool(spam) == False)
The exact pattern may not be the most Pythonic, but it provides a clean way to
guarantee the object's lifetime within a limited scope.
|
Scrapy tutorial only crawling one page - tried everything in the current SO answer pages
Question: I'm having a problem with the Scrapy tutorial. No matter what solutions I
enter from others' questions about Scrapy only crawling one page that I've
found on the web (including all of them on SO), it only crawls one page. I'm
experienced enough with Python that I'm almost completely sure it's not an
indentation problem. Here is my current code, somewhat modified from the
tutorial to reflect others' solutions that I've found. Note that I've also
tried parse_item, and that didn't work either.
Basically, for my spider I am going to need to start with an .html page so I
did so for my own tutorial purposes. Is this the problem - does it need to be
a directory only?
import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
class DmozSpider(CrawlSpider):
name = "dmoz"
allowed_domains = ["mollydesjardin.com"]
start_urls = [
"http://www.mollydesjardin.com/"
]
rules = [Rule(SgmlLinkExtractor(allow=()), callback='parse_item')]
def parse(self, response):
filename = response.url.split("/")[-2]
with open(filename, 'wb') as f:
f.write(response.body)
Answer: It's a common trap for `CrawlSpider`: you should not override the built-in
`parse()` method (where all `CrawlSpider` magic happens), unless you know what
you're doing.
[The warning is in the
docs](http://doc.scrapy.org/en/latest/topics/spiders.html#crawling-rules) but
many users miss it.
Rename your `parse()` method to `parse_item()` (as you declare in your rules).
Depending on what you need, you may want to add a `follow=True` to your rule,
so that links found in grabbed pages (after the start URL) are also crawled.
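A sketch of the spider with those two changes applied (the rename plus `follow=True`);
everything else is taken from the question:

    import scrapy
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

    class DmozSpider(CrawlSpider):
        name = "dmoz"
        allowed_domains = ["mollydesjardin.com"]
        start_urls = ["http://www.mollydesjardin.com/"]

        # follow=True tells CrawlSpider to keep extracting links from the
        # pages it grabs, not just from the start URL
        rules = [Rule(SgmlLinkExtractor(allow=()), callback='parse_item', follow=True)]

        def parse_item(self, response):  # renamed: do not override parse()
            filename = response.url.split("/")[-2]
            with open(filename, 'wb') as f:
                f.write(response.body)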
|
Gtk-WARNING: Unable to locate theme engine in module_path: "wimp"
Question: I developed a Gtk application that works on Windows. I use pyInstaller to
build a single exe file. Everything would be OK if not for the theme, which does not
load.
**So, I have my spec file:**
# -*- mode: python -*-
directory = 'C:\\my_project'
a = Analysis(['main.py'],
pathex=[directory],
hiddenimports=None,
hookspath=None)
more_datas = []
more_binaries = []
more_datas.append(('gtkrc', os.path.join(directory, 'gtkrc'), 'DATA'))
more_binaries.append(('libwimp.dll', os.path.join(directory, 'libwimp.dll'), 'BINARY'))
pyz = PYZ(a.pure)
exe = EXE(pyz,
a.scripts,
a.binaries + more_binaries,
a.zipfiles,
a.datas + more_datas,
name='main.exe',
debug=False,
strip=None,
upx=True,
console=True)
In the DATA section, I include the gtkrc file And in BINARIES section, I
include the libwimp.dll. Both files are in my application directory.
**In my project python code, I have:**
def resource_path(relative):
directory = getattr(sys, '_MEIPASS', os.getcwd())
return os.path.join(directory, relative)
theme = resource_path("gtkrc")
gtk.rc_set_default_files([theme])
gtk.rc_reparse_all_for_settings(gtk.settings_get_default(), True)
gtk.rc_reset_styles(gtk.settings_get_for_screen(window.get_screen()))
When I run the application, a temporary directory is created ("_MEIXXXXXX") and
the two files are included there.
**But the console shows the message:**
> Gtk-WARNING: Unable to locate theme engine in module_path: "wimp"
And the theme does not load. This warning appears after the code below is called:
gtk.rc_reparse_all_for_settings(gtk.settings_get_default(), True)
What I'm missing?
Answer: I found where I was going wrong.
The problem was that I copied the theme engine into the top level of my application
directory. For GTK to find it inside the temporary folder, libwimp.dll has to be
bundled into the specific subdirectory where GTK looks for theme engines.
This tip helped me solve it:
[Add hooks for PyGTK themes](http://www.pyinstaller.org/ticket/14). A sketch of the
spec change follows.
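A hedged sketch of the spec change: the first element of the binary tuple is the
destination path inside the bundle, and it should point at GTK's theme-engine
directory. The `2.10.0` segment below is an assumption and must match your GTK
installation:

    # In the .spec file -- the destination path is an assumption, adjust to your GTK
    engine_dest = os.path.join('lib', 'gtk-2.0', '2.10.0', 'engines', 'libwimp.dll')
    more_binaries.append((engine_dest, os.path.join(directory, 'libwimp.dll'), 'BINARY'))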
|
Detail AttributeError: 'module' object has no attribute 'workbook'
Question: I keep getting the Detail `AttributeError: 'module' object has no attribute
'workbook'` error.
below is my code
import xlwt
workbook = xlwt.workbook()
sheet = workbook.add_sheet('Eswar')
sheet.write (4,4,'Test passed')
workbook.save("D:\resultsLatest.xls")
What have I done wrong?
I am using python 2.7
Answer: In the source [code on GitHub](https://github.com/python-
excel/xlwt/blob/master/xlwt/__init__.py#L3) you can see the correct spelling:
`Workbook` (with a capital "W"). Your code should be:
import xlwt
workbook = xlwt.Workbook()
sheet = workbook.add_sheet('Eswar')
sheet.write(4,4,'Test passed')
workbook.save("D:\\resultsLatest.xls")enter code here
|
Python embedded in C++ try_rich_compare error on types
Question: I have an error in my program, which appears to be something to do with
comparing two object types in python. Here is the error from gdb
Program received signal SIGSEGV, Segmentation fault.
0x00007fffc3acd35c in try_rich_compare (v=0x7fffcc433ec0 <UTOPIA::PyNodeType>, w=0x7fffc3a06ec0 <UTOPIA::PyNodeType>, op=3) at ../Objects/object.c:621
621 ../Objects/object.c: No such file or directory.
(gdb) bt
#0 0x00007fffc3acd35c in try_rich_compare (v=0x7fffcc433ec0 <UTOPIA::PyNodeType>, w=0x7fffc3a06ec0 <UTOPIA::PyNodeType>, op=3) at ../Objects/object.c:621
#1 0x00007fffc3acded7 in do_richcmp (v=0x7fffcc433ec0 <UTOPIA::PyNodeType>, w=0x7fffc3a06ec0 <UTOPIA::PyNodeType>, op=3) at ../Objects/object.c:930
#2 0x00007fffc3ace164 in PyObject_RichCompare (v=0x7fffcc433ec0 <UTOPIA::PyNodeType>, w=0x7fffc3a06ec0 <UTOPIA::PyNodeType>, op=3) at ../Objects/object.c:982
#3 0x00007fffc3b74a24 in cmp_outcome (op=3, v=0x7fffcc433ec0 <UTOPIA::PyNodeType>, w=0x7fffc3a06ec0 <UTOPIA::PyNodeType>) at ../Python/ceval.c:4525
#4 0x00007fffc3b6bbbe in PyEval_EvalFrameEx (f=0x157e990, throwflag=0) at ../Python/ceval.c:2287
#5 0x00007fffc3b6ff3e in PyEval_EvalCodeEx (co=0x7fffc2553510, globals=0x7fffc254b1a8, locals=0x0, args=0x7fffc257c6a0, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0)
at ../Python/ceval.c:3252
#6 0x00007fffc3aa643b in function_call (func=0x7fffc2552300, arg=0x7fffc257c678, kw=0x0) at ../Objects/funcobject.c:526
#7 0x00007fffc3a64c79 in PyObject_Call (func=0x7fffc2552300, arg=0x7fffc257c678, kw=0x0) at ../Objects/abstract.c:2529
#8 0x00007fffc3a810a1 in instancemethod_call (func=0x7fffc2552300, arg=0x7fffc257c678, kw=0x0) at ../Objects/classobject.c:2602
#9 0x00007fffc3a64c79 in PyObject_Call (func=0x7fffd80df5e0, arg=0x7fffc2550370, kw=0x0) at ../Objects/abstract.c:2529
#10 0x00007fffc3a64de1 in call_function_tail (callable=0x7fffd80df5e0, args=0x7fffc2550370) at ../Objects/abstract.c:2561
#11 0x00007fffc3a651f3 in PyObject_CallMethod (o=0x7fffc255ced0, name=0x7fffcc4659a4 "invoke", format=0x7fffcc4659a2 "O") at ../Objects/abstract.c:2638
#12 0x00007fffcc45d556 in UTOPIA::Python_service_interface::invoke (this=0x14ed610, invocation_=0x16aa6f0, input_=...)
at /home/oni/Projects/utopia/components/libutopia/plugins/python/service_interface.cpp:134
My program contains a library of objects. Some of these objects are wrapped
inside a python object wrapper. My main C++ program loads this python library
in order to get the definitions of these types. In this case, the offending
type looks like this:
// Node class
static PyTypeObject PyNodeType =
{
PyObject_HEAD_INIT(0)
0, /* ob_size */
"utopia.Node", /* tp_name */
sizeof(PyNode), /* tp_basicsize */
0, /* tp_itemsize */
(destructor) PyNode_dealloc, /* tp_dealloc */
0, /* tp_print */
0, /* tp_getattr */
0, /* tp_setattr */
0, /* tp_compare */
(reprfunc) PyNode_repr, /* tp_repr */
0, /* tp_as_number */
0, /* tp_as_sequence */
&PyNode_as_mapping, /* tp_as_mapping */
0, /* tp_hash */
0, /* tp_call */
0, /* tp_str */
0, /* tp_getattro */
0, /* tp_setattro */
0, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT, /* tp_flags */
"UTOPIA::GenericNode class", /* tp_doc */
0, /* tp_traverse */
0, /* tp_clear */
0, /* tp_richcompare */
0, /* tp_weaklistoffset */
0, /* tp_iter */
0, /* tp_iternext */
PyNode_methods, /* tp_methods */
0, /* tp_members */
0, /* tp_getset */
0, /* tp_base */
0, /* tp_dict */
0, /* tp_descr_get */
0, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc) PyNode_init, /* tp_init */
0, /* tp_alloc */
PyNode_new, /* tp_new */
0, /* tp_free */
0, /* tp_is_gc */
0, /* tp_bases */
0, /* tp_mro */
0, /* tp_cache */
0, /* tp_subclasses */
0, /* tp_weaklist */
0, /* tp_del */
};
Now, later on, the C++ program loads the python2.7 library and launches python
inside itself. It then imports this wrapper python library.
This means this PyNodeType appears in the C++ program and also inside the
python instance running inside the c++ program. At some point, these two
things are compared and the program blows up! :S
Not really sure how to get around this one as the definition is needed in both
places.
Further inspection reveals that, while the type is somehow deduced, one of
these parameters is full of null pointers
(gdb) print v
$4 = (PyObject *) 0x7fffcc433ec0 <UTOPIA::PyNodeType>
(gdb) print *v
$5 = {_ob_next = 0x0, _ob_prev = 0x0, ob_refcnt = 2, ob_type = 0x0}
(gdb) print *w
$6 = {_ob_next = 0x7fffd80b3310, _ob_prev = 0x7fffe00aca70, ob_refcnt = 43, ob_type = 0x7fffc3efc0c0 <PyType_Type>}
*****UPDATE*****
So When I create an object I take a look at the memory locations
PyNode* newPyNode = PyObject_New(PyNode, &PyNodeType);
(gdb) print &PyNodeType
$4 = (PyTypeObject *) 0x7fffe353cea0 <UTOPIA::PyNodeType>
But then, if I look at the ob_type field in my newPyNode object
(gdb) print *newPyNode
$7 = {_ob_next = 0x7fffe12fd1c8, _ob_prev = 0x7fffe32024e0 <refchain>, ob_refcnt = 1, ob_type = 0x7fffe2a10ec0 <UTOPIA::PyNodeType>, node = 0xef2cc0}
ob_type does not match. What gives? Looking at the comparison functions like
PyObject_TypeCheck
... these memory locations should be the same.
Answer: Looks like it's fixed. I've joined the C++ .so part of the project together with
the Python library .so, which seems to have solved the problem. The former .so
existed so the main project could load Python scripts, while the second allowed a
Python program access to all the utopia functionality (the opposite in effect).
Joining the two libraries into one appears to have fixed it.
|
Python last 60 end-of-months
Question: This is my first post here so please let me know if I'm doing this wrong...
I'm looking to take the last day of each month for the last 60 months, from a
reference date.
For example, if the reference date is today (Aug 21 2014), then the last end
of month is July 31st, 2014, and the one before that is June 30th, 2014...
Does anyone know how to do this in python?
Thanks!
Answer: Here's an easier/cleaner way that makes use of the
[`datetime`](https://docs.python.org/2/library/datetime.html) module:
>>> import datetime
>>> def prevMonthEnd(startDate):
... ''' given a datetime object, return another datetime object that
... is set to the last day of the prevoius month '''
... firstOfMonth = startDate.replace(day=1)
... return firstOfMonth - datetime.timedelta(days=1)
...
>>> theDate = datetime.date.today()
>>> for i in range(60):
... theDate = prevMonthEnd(theDate)
... print theDate
...
2014-07-31
2014-06-30
2014-05-31
2014-04-30
2014-03-31
2014-02-28
2014-01-31
2013-12-31
(etc.)
|
Passing a list of indices to another list in Python. Correct syntax?
Question: So I have the following code from sklearn:
>>> from sklearn import cross_validation
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4])
>>> kf = cross_validation.KFold(4, n_folds=2)
>>> len(kf)
2
>>> print(kf)
sklearn.cross_validation.KFold(n=4, n_folds=2, shuffle=False,
random_state=None)
>>> for train_index, test_index in kf:
... print("TRAIN:", train_index, "TEST:", test_index)
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
TRAIN: [2 3] TEST: [0 1]
TRAIN: [0 1] TEST: [2 3]
.. automethod:: __init__
It gives me an error when I pass on the train_index and the test_index in
these lines of code (IndexError: indices are out-of-bounds):
... X_train, X_test = X[train_index], X[test_index]
... y_train, y_test = y[train_index], y[test_index]
Why can't I pass a list of indices to a list? What is the correct syntax to
pass a list of indices to another list to get those elements of that list?
I am using Python 2.7.
Thanks.
Answer: Unlike Numpy arrays, python lists don't support accessing by multiple indexes.
It's easy to solve using list comprehensions, though:
l= range(10)
indexes= [1,3,5]
result= [l[i] for i in indexes]
Or the slightly less readable (but more useful on some occasions) map:
result= map(l.__getitem__, indexes)
However, as _Ashwini Chaudhary_ noted, `X` and `y` **are** numpy arrays in
your example, so you either entered the wrong example code or your particular
indexes indeed are out of range.
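For completeness, numpy arrays do accept a list of indices directly (fancy indexing),
so no comprehension is needed there; a quick sketch:

    import numpy as np

    a = np.arange(10)
    indexes = [1, 3, 5]
    print(a[indexes])  # [1 3 5] -- fancy indexing on an ndarray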
|
Trying to run django with gunicorn (Worker failed to boot)
Question: I have been trying to deploy Django using nginx and gunicorn, and I'm
running into an issue right off the bat. I followed the advice suggested [on
SO](http://stackoverflow.com/questions/24488891/gunicorn-errors-haltserver-
haltserver-worker-failed-to-boot-3-django) and tried [this
way](http://docs.gunicorn.org/en/latest/run.html?highlight=django#django),
suggested in the gunicorn docs, but it still has not worked...
(env)nathann@localhost:~/ipals$ ls -l
total 12
drwxrwxr-x 14 nathann nathann 4096 Aug 21 17:32 apps
-rw-rw-r-- 1 nathann nathann 1590 Aug 21 17:55 ipals_wsgi.py
-rw-rw-r-- 1 nathann nathann 1091 Aug 21 17:32 README.md
(env)nathann@localhost:~/ipals$ gunicorn ipals:application -b 127.0.0.1:8001
Traceback (most recent call last):
File "/home/nathann/env/bin/gunicorn", line 11, in <module>
sys.exit(run())
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 74, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 185, in run
super(Application, self).run()
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 71, in run
Arbiter(self).run()
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 169, in run
self.manage_workers()
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 477, in manage_workers
self.spawn_workers()
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 542, in spawn_workers
time.sleep(0.1 * random.random())
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 209, in handle_chld
self.reap_workers()
File "/home/nathann/env/local/lib/python2.7/site-packages/gunicorn/arbiter.py", line 459, in reap_workers
raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>
And here is the contents of `ipals_wsgi.py`
import sys
import os
import os.path
# assume we(this file) exist as a sibling to the CODE_DIR
OUR_DIR = os.path.abspath(os.path.dirname(__file__))
# apps dir is our sibling. That's where our apps are.
APPS_DIR = os.path.join(OUR_DIR, 'apps')
# env dir is also a sibling to us and ipals
ENV_DIR = os.path.join(OUR_DIR, '../env')
# activate the virtualenv
activate_this = os.path.join(ENV_DIR, 'bin', 'activate_this.py')
execfile(activate_this, dict(__file__=activate_this))
# add the apps directory to the python path
sys.path.insert(0, APPS_DIR)
# load up django
# from django.core.management import execute_manager
from django.core.handlers.wsgi import WSGIHandler
# tell django to find settings at APPS_DIR/mainsite/settings.py'
#os.environ['DJANGO_SETTINGS_MODULE'] = 'ipals.settings_production'
os.environ['DJANGO_SETTINGS_MODULE'] = 'ipals.settings'
# hand off to the wsgi application
application = WSGIHandler()
Answer: I'm not sure why you've got a file called `ipals_wsgi.py`, and that it appears
to be in the top level of the project, or where your settings file is. Usually
Django's `startproject` command creates a subdirectory with the same name as
the project, and puts a file called `wsgi.py` alongside `settings.py` in that
subdirectory. You seem to have modified that, but I can't understand why.
In any case, the fact that your wsgi file is called that points to what the
problem is: you are telling gunicorn to look in a file called `ipals`, but as
noted the file is called `ipals_wsgi`. Either rename the file or change the
command: but even more preferable, revert to Django's default app layout - I
suspect that even once you have fixed the current issue, you will encounter
problems with the settings not being found, which wouldn't happen with the
default.
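For example, keeping the current file name, the matching command would be (the bind
address is taken from the question):

    gunicorn ipals_wsgi:application -b 127.0.0.1:8001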
|
python regex for operating system name
Question: I am trying to write a regex for operating system names: Windows XP, Windows Vista,
Windows 7, Windows 8, Windows 8.1 and Windows Server 2008 R2.
Normally the operating system name appears in a line such as: Operating
System: Windows 7 Professional 64-bit (6.1, Build 7601) Service Pack 1
(7601.win7sp1_gdr.140303-2144). I tried this regex `:\s*(\w+\s*\w*\.\w|\s])`,
which only works up to Windows 8.1. What would be a good regex for all the
operating system names above? I have been trying for hours; thank you for your help.
Answer: You don't need regex. Use this:
>>> import os
>>> os.name
'posix'
>>> import platform
>>> platform.system()
'Linux'
>>> platform.release()
'2.6.22-15-generic'
This works for all operating systems.
|
How to read in a file with a mixture of different delimiters using Python csv module?
Question: Input:
A B C
D E F
This file is NOT exclusively tab-delimited, some entries are space-delimited
to look like they were tab-delimited (which is annoying). I tried reading in
the file with the `csv` module using the canonical tab delimited option hoping
it wouldn't mind a few spaces (needless to say, my output came out botched
with this code):
with open('file.txt') as f:
input = csv.reader(f, delimiter='\t')
for row in input:
print row
I then tried replacing the second line with `csv.reader('\t'.join(f.split()))`
to try to take advantage of [Remove whitespace in Python using
string.whitespace](http://stackoverflow.com/questions/1898656/remove-
whitespace-in-python-using-string-whitespace/1898835#1898835) but my error
was: `AttributeError: 'file' object has no attribute 'split'`.
I also tried examining [python: import csv file (delimiter ";" or
",")](http://stackoverflow.com/questions/16312104/python-import-csv-file-
delimiter-or) but here the OP imported either semicolon-delimited or comma-
delimited files, but not a file which was a random mixture of both kinds of
delimiters.
Was wondering if the `csv` module can handle reading in files with a mix of
various delimiters or whether I should try a different approach (e.g., not use
the `csv` module)?
I am hoping that there exists a way to read in a file with a mixture of
delimiters and automatically turn this file into a tab-delimited file.
Answer: Just use .split():
csv='''\
A\tB\tC
D E F
'''
data=[]
for line in csv.splitlines():
data.append(line.split())
print data
# [['A', 'B', 'C'], ['D', 'E', 'F']]
Or, more succinctly:
>>> [line.split() for line in csv.splitlines()]
[['A', 'B', 'C'], ['D', 'E', 'F']]
For a file, something like:
with open(fn, 'r') as fin:
data=[line.split() for line in fin]
It works because
[str.split()](https://docs.python.org/2/library/stdtypes.html#str.split) will
split on any run of whitespace between data elements, even when it is longer
than one character or a mixture of tabs and spaces:
>>> '1\t\t\t2 3\t \t \t4'.split()
['1', '2', '3', '4']
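Since the question also asks about producing a tab-delimited file, here is a minimal sketch (the file names are assumptions) that normalizes the delimiters on the way out:

    # rewrite the mixed space/tab file as a strictly tab-delimited file
    with open('file.txt') as fin, open('file_tabs.txt', 'w') as fout:
        for line in fin:
            fout.write('\t'.join(line.split()) + '\n')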
|
Django project is very slow
Question: I have a problem with my Django project. Currently I'm using Django 1.6,
Python 3.3.3 and sorl-thumbnail 12.0, and everything is really slow. I've spent
the last 3 days trying to change that, but everything I've tried has had only
a very minor effect. Below are the numbers from django-debug-toolbar:
User CPU time - 1976.123 msec
System CPU time - 176.011 msec
Total CPU time - 2152.134 msec
Elapsed time - 3671.669 msec
SQL - 25.95 ms (62 queries)
CACHE - 76 in 7.409811019897461 ms
Haystack query - 0.031ms
And the time needed for the code in the view to execute is
0.04816937446594238, calculated in the following way:
import time
...
def base(request):
start_time = time.time()
#do something
end_time = time.time()
print(end_time - start_time)
return render(request, 'service/service.html', { 'services': services })
Can you please give me some advice on this? The db, static files, media files
and elasticsearch are installed on my local machine. The DEBUG flag is True.
Edit 1: According to Tommaso's answer, I have measured the time needed for
template rendering, and the result was awful - 3407.9 ms (using
template_timings_panel for django-debug-toolbar). Also, when I run a test with
ab on the same page, the time is not very different from the number above. Is
that normal? What can I do to optimize it?
Answer: 1. On a production server, static files would be served by nginx (or similar).
2. Thumbnails via sorl would also be generated only once per image/size, then served by nginx.
3. You would have cached views/calculations via memcached.
4. Your code is compiled only once, when the server process starts, so subsequent requests are much faster.
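For the template rendering cost specifically, one thing worth trying (a sketch for Django 1.6; adapt it to your settings file) is the cached template loader, so templates are compiled once per process instead of on every request:

    TEMPLATE_LOADERS = (
        ('django.template.loaders.cached.Loader', (
            'django.template.loaders.filesystem.Loader',
            'django.template.loaders.app_directories.Loader',
        )),
    )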
|
from pylab import * fails on RHEL 6 due to need for GLIBC 2.14
Question: How might I get a pylab import that works... I'm willing to recompile for my
box, but need to know where to put everything this works for all the users.
Note that GLIB2.14 is required by libpng16.so.16, which was the original
"trip-up" point. But, I found a version of that, and am now stuck here. Here's
the error:
In [2]:
from pylab import *
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-2-de1a8241b951> in <module>()
----> 1 from pylab import *
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pylab.py in <module>()
----> 1 from matplotlib.pylab import *
2 import matplotlib.pylab
3 __doc__ = matplotlib.pylab.__doc__
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/pylab.py in <module>()
224 # make mpl.finance module available for backwards compatability, in case folks
225 # using pylab interface depended on not having to import it
--> 226 import matplotlib.finance
227
228 from matplotlib.dates import date2num, num2date,\
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/finance.py in <module>()
21 from matplotlib.dates import date2num
22 from matplotlib.cbook import iterable, mkdirs
---> 23 from matplotlib.collections import LineCollection, PolyCollection
24 from matplotlib.colors import colorConverter
25 from matplotlib.lines import Line2D, TICKLEFT, TICKRIGHT
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/collections.py in <module>()
21 import matplotlib.artist as artist
22 from matplotlib.artist import allow_rasterization
---> 23 import matplotlib.backend_bases as backend_bases
24 import matplotlib.path as mpath
25 from matplotlib import _path
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backend_bases.py in <module>()
48
49 import matplotlib.tight_bbox as tight_bbox
---> 50 import matplotlib.textpath as textpath
51 from matplotlib.path import Path
52 from matplotlib.cbook import mplDeprecation
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/textpath.py in <module>()
9 from matplotlib.path import Path
10 from matplotlib import rcParams
---> 11 import matplotlib.font_manager as font_manager
12 from matplotlib.ft2font import FT2Font, KERNING_DEFAULT, LOAD_NO_HINTING
13 from matplotlib.ft2font import LOAD_TARGET_LIGHT
/users/p/c/pclemins/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/font_manager.py in <module>()
51 import matplotlib
52 from matplotlib import afm
---> 53 from matplotlib import ft2font
54 from matplotlib import rcParams, get_cachedir
55 from matplotlib.cbook import is_string_like
ImportError: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /usr/lib64/libpng16.so.16)
Answer: Apparently, you installed a wrong version of libpng16, perhaps from an rpm or
compiled for a different system, not the OS you are using.
The Canopy package manager has the correct version of libpng available; if you
install that, things should work fine. You can install it from the Canopy
package manager GUI or via the `enpkg libpng` command. (Unfortunately it
appears libpng is missing from the default Canopy install, whereas it should
have been included.)
|
Python login to my cpanel with python
Question: I want to create a Python script that tests whether my combination (username
and password) is correct, but I always get a 401 HTTP response, so I think the
script can't submit the login data. (The cPanel login isn't a traditional
login form, so I will use the demo login panel as our example site.)
import urllib, urllib2, os, sys, re
site = 'http://cpanel.demo.cpanel.net/'
username = 'demo'
password = 'demo'
headers = {
"User-Agent" : "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0",
"Accept" : "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Accept-Encoding" : "gzip, deflate",
"Accept-Charset" : "ISO-8859-1,utf-8;q=0.7,*;q=0.7"}
data = [
("user",username),
("pass",password),
("testcookie",1),
("submit","Log In"),
("redirect_to",'http://cpanel.demo.cpanel.net/'),
("rememberme","forever")]
req = urllib2.Request(site, urllib.urlencode(dict(data)), dict(headers))
response = urllib2.urlopen(req)
if any('index.html' in v for v in response.headers.values()) == True :
print "Correct login"
else :
print "incorrect"
I get this error :
Traceback (most recent call last):
File "C:\Python27\cp\cp4.py", line 19, in <module>
response = urllib2.urlopen(req)
File "C:\Python27\lib\urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 400, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 513, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 438, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 372, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 521, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 401: Access Denied
Any ideas to solve the problem and test the login details?
Answer: Consider using [Requests](http://python-requests.org), a much more user-
friendly HTTP client library for Python.
import requests
url = 'http://cpanel.demo.cpanel.net/login/'
username = 'demo'
password = 'demo'
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0',
}
data = {
'user': username,
'pass': password,
}
response = requests.post(url, headers=headers, data=data)
if response.status_code == 200:
print "Successfully logged in as {username}".format(username=username)
else:
print "Login unsuccessful: HTTP/{status_code}".format(status_code=response.status_code)
Edited to check for HTTP/200, as CPanel does throw an HTTP/401 if the login is
incorrect.
|
Error on slicing array in Python
Question: I have a problem with slicing an array. The problem is that if I do some
operation inside a function and return the modified array, my array outside of
the function is also changed. I can't really understand this behaviour of numpy
slicing operations on arrays.
Here is the code:
import numpy as np
def do_smth(a):
x = np.concatenate((a[..., 1], a[..., 3]))
y = np.concatenate((a[..., 2], a[..., 4]))
xmin, xmax = np.floor(min(x)), np.ceil(max(x))
ymin, ymax = np.floor(min(y)), np.ceil(max(y))
a[..., 1:3] = a[...,1:3] - np.array([xmin, ymin])
a[..., 3:5] = a[...,3:5] - np.array([xmin, ymin])
return a
def main():
old_a = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
new_a = do_smth(old_a)
print "new_a:\n", new_a, '\n\n'
print "old_a:\n", old_a
gives output:
new_a:
[[ 0 0 0 2 2]
[ 5 5 5 7 7]
[10 10 10 12 12]]
old_a:
[[ 0 0 0 2 2]
[ 5 5 5 7 7]
[10 10 10 12 12]]
Can anyone tell why `old_a` has been changed? And how can I make `old_a`
unchanged? Thank you
Answer: Although values in Python are passed by assignment, the object you are sending
in is a reference to the same underlying array in memory. So when you change the
object inside the function, you are altering the very array you passed in.
You will want to copy the array first, and then do your operations on the
copy.
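A minimal sketch of that approach (the arithmetic here is just a stand-in for the original adjustments):

    import numpy as np

    def do_smth_safe(a):
        a = a.copy()        # work on a private copy; the caller's array is untouched
        a[..., 1:3] -= 1    # stand-in for the original slice assignments
        return a

    old_a = np.arange(15).reshape(3, 5)
    new_a = do_smth_safe(old_a)   # old_a keeps its original values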
|
Django amazon s3 SuspiciousOperation
Question: So when I try accessing a certain image on S3 from my browser, everything works
fine. But when Python does it, I get a `SuspiciousOperation` error. My static
folder on S3 is public, so I really have no idea where this is coming from.
Publication.objects.get(id=4039).cover.url
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/vagrant/.pyenv/versions/blook/lib/python2.7/site-packages/django/db/models/fields/files.py", line 64, in _get_url
return self.storage.url(self.name)
File "/home/vagrant/.pyenv/versions/blook/lib/python2.7/site-packages/queued_storage/backends.py", line 291, in url
return self.get_storage(name).url(name)
File "/home/vagrant/.pyenv/versions/blook/lib/python2.7/site-packages/queued_storage/backends.py", line 115, in get_storage
elif cache_result is None and self.remote.exists(name):
File "/home/vagrant/.pyenv/versions/blook/lib/python2.7/site-packages/storages/backends/s3boto.py", line 410, in exists
name = self._normalize_name(self._clean_name(name))
File "/home/vagrant/.pyenv/versions/blook/lib/python2.7/site-packages/storages/backends/s3boto.py", line 341, in _normalize_name
name)
SuspiciousOperation: Attempted access to 'http:/s3-eu-west-1.amazonaws.com/xpto/static/images/default-image.png' denied.
My settings:
AWS_S3_SECURE_URLS = True # use http instead of https
S3_URL = 'http://s3-eu-west-1.amazonaws.com/%s' % AWS_STORAGE_BUCKET_NAME
MEDIA_ROOT = 'media/'
STATIC_ROOT = '/static/'
STATIC_URL = S3_URL + STATIC_ROOT
MEDIA_URL = S3_URL + '/' + MEDIA_ROOT
For now i can work around it, but that is not a long term solution. any ideas?
Answer: Danigosa's answer in this thread solves it: [django-storages and amazon s3
- suspiciousoperation](http://stackoverflow.com/questions/12535123/django-
storages-and-amazon-s3-suspiciousoperation)
Create a special storage class as outlined in this post:
<https://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/>
Then override _normalize_name like this:
from django.conf import settings
from storages.backends.s3boto3 import S3Boto3Storage
# custom_storages
class StaticStorage(S3Boto3Storage):
location = settings.STATICFILES_LOCATION
def _clean_name(self, name):
return name
def _normalize_name(self, name):
if not name.endswith('/'):
name += "/"
name += self.location
return name
# media storages
class MediaStorage(S3Boto3Storage):
location = settings.MEDIAFILES_LOCATION
def _clean_name(self, name):
return name
def _normalize_name(self, name):
if not name.endswith('/'):
name += "/"
name += self.location
return name
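For completeness, the two classes are then referenced from settings; the `custom_storages` module name is an assumption about where you save them, and the two `*_LOCATION` values are the ones the classes read:

    STATICFILES_LOCATION = 'static'
    MEDIAFILES_LOCATION = 'media'
    STATICFILES_STORAGE = 'custom_storages.StaticStorage'
    DEFAULT_FILE_STORAGE = 'custom_storages.MediaStorage'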
Finally - (on Python 3 at least) DON'T use
{% load static from staticfiles %}
in your templates.
Stick with:
{% load static %}
|
Convert time to string with milliseconds
Question: Python datetime.now() gives me current time including milliseconds like this:
2014-08-22 19:23:40.630000
How do I convert the above datetime object to a string that includes the
milliseconds?
I looked into `time.strftime()` but it does not provide an option for
milliseconds.
Ideas?
Thanks.
Answer: Use datetime.strftime. In Python 3.4.1:
from datetime import datetime
mytime= datetime.now()
s= mytime.strftime("%Y-%b-%d %H:%M:%S.%f")
print(s)
'2014-Aug-23 08:51:32.911271'
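Note that `%f` gives microseconds (six digits); if you want exactly three digits of milliseconds, one common trick is to trim the last three characters:

    s = mytime.strftime("%Y-%b-%d %H:%M:%S.%f")[:-3]
    print(s)
    # '2014-Aug-23 08:51:32.911'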
|
Use cases of parallel programming using multiprocessing module in python
Question: I'm a novice even in Python and I'm trying to write fast code with the
multiprocessing module of Python. Actually my question is very general: I'd
like to know different ways of using multiprocessing, and I'm very confused
because I'm not sure exactly how this code works, so I can't make correct
generalizations from it
import numpy as np
from multiprocessing import Process, Pool
def sqd(x):
return x*x.T
A = np.random.random((10000, 10000))
if __name__ == '__main__':
pool = Pool(processes = 4)
result = pool.apply_async(sqd, [A])
print result.get(timeout = 1)
print len(pool.map(sqd, A))
However, when I made the following generalization in order to accelerate
the random matrix generation, things did not go so well
import numpy as np
from multiprocessing import Pool
def sqd(d):
x = np.random.random((d, d))
return x*x.T
D=100
if __name__ == '__main__':
pool = Pool(processes = 4)
result = pool.apply_async(sqd, [D])
print result.get(timeout = 1)
print pool.map(sqd, D)
So the output is:
$ python prueba2.py
[[ 0.50770071 0.36508745 0.02447127 ..., 0.12122494 0.72641019
0.68209404]
[ 0.19470934 0.89260293 0.58143287 ..., 0.25042778 0.05046485
0.50856362]
[ 0.67367326 0.76929582 0.4232229 ..., 0.72910757 0.56047056
0.11873254]
...,
[ 0.91234565 0.20216969 0.2961842 ..., 0.57539533 0.99836323
0.79875158]
[ 0.85407066 0.99905665 0.12948157 ..., 0.58411818 0.06688349
0.71026483]
[ 0.0599241 0.82759421 0.9532148 ..., 0.22463593 0.0859876
0.41072156]]
Traceback (most recent call last):
File "prueba2.py", line 14, in <module>
print pool.map(sqd, D)
File "/home/nacho/anaconda/lib/python2.7/multiprocessing/pool.py", line 251, in map
return self.map_async(func, iterable, chunksize).get()
File "/home/nacho/anaconda/lib/python2.7/multiprocessing/pool.py", line 304, in map_async
iterable = list(iterable)
TypeError: 'int' object is not iterable
In this case, I know that I'm passing incorrect arguments to "something", but I'm
not sure why, or what I can and can't do in this specific case and in others,
beyond passing lists or ranges to the multiprocessing module. I'd also like to
know how to free memory after this, given that I once let it run through without
a memory error...
I'd like to add some details. Besides wanting to know different use cases for
multiprocessing, the motivation underlying this question is that I took a
snapshot of my processors while they were working, and there was an isolated
process working on a single processor, which I suppose is due to random(), so
I'd like to parallelize the complete task.
I hope I'm not being too ambiguous. Thank you in advance...
Answer: You cannot define the argument of the function that you want to compute with
`multiprocessing` inside the function itself. What `pool.map` does is chop your `A`
array into chunks and distribute them between the different processes; the job is
shared between the number of processes you have requested until it finishes.
However, in your second code you just gave the dimension of your input array as
an argument to `pool.map`: `apply_async` computed the function once, but `map`
raises an error because it needs your function plus an _iterable_ argument, and
an integer is not iterable.
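A minimal sketch of the iterable version (the list of dimensions is just an example; any iterable of sizes works):

    import numpy as np
    from multiprocessing import Pool

    def sqd(d):
        x = np.random.random((d, d))
        return x * x.T

    if __name__ == '__main__':
        pool = Pool(processes=4)
        results = pool.map(sqd, [100, 100, 100, 100])  # one matrix per element
        print len(results)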
|
Fastest way to append a string to a text file w/o impact(effect) on web site load time
Question: Suppose my web site needs to run this python code
for everytime it loads..
f = open('/aaa', 'a')
f.write('test' + '\n')
All it does is append "test" to the /aaa file,
but perhaps there is a way to run this command without having an impact on the
web site's load time.
Perhaps it can be forked?
But the only way to do that would be to use
import os
os.popen()
which sounds like it could make things even heavier.
Based on my understanding, things such as Apache can append lines to files such
as error_log at amazing speeds, and I think I read somewhere that Apache
actually appends lines to error_log and access_log without even opening those
files.
Is there a way to do things that way from a Python script?
Answer: I think you can put these file I/O operations into another thread in your main
process, so that the write happens in parallel with handling the request. If
your main process is written in Python, try the `threading` module. Hope it
helps.
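A minimal sketch of that idea (the path is just an example):

    import threading

    def append_line(path, text):
        # runs in a background thread so the request handler does not block on disk I/O
        with open(path, 'a') as f:
            f.write(text + '\n')

    t = threading.Thread(target=append_line, args=('/aaa', 'test'))
    t.daemon = True
    t.start()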
|
Showing test count in buildbot
Question: I am not particularly happy about the [stats that Buildbot
provides](http://buildbot.scons.org/builders/centos7-python-2.7/builds/5). I
understand that it is for building and not testing - that's why it has a
concept of Steps, but no concept of Test. Still there are many cases when you
need test statistics from build results. For example when comparing skipped
and failed tests on different platforms to estimate the impact of a change.
So, what is needed to make Buildbot display test count in results?
What is the most simple way, so that a person who don't know anything about
Buildbot can do this in 15 minutes?
Answer: Depending on how you want to process the test results and how they are
presented, Buildbot does provide a `Test` step,
`buildbot.steps.shell.Test`
An example of how I use it for my build environment:
from buildbot.steps import shell
class CustomStepResult(shell.Test):
description = 'Analyzing results'
descriptionDone = 'Results analyzed'
def __init__(self, log_file = None, *args, **kwargs):
self._log_file = log_file
shell.Test.__init__(self, *args, **kwargs)
self.addFactoryArguments(log_file = log_file)
def start(self):
if not os.path.exists(self._log_file):
self.finished(results.FAILURE)
self.step_status.setText('TestResult XML file not found !')
else:
import xml.etree.ElementTree as etree
tree = etree.parse(self._log_file)
root = tree.getroot()
passing = len(root.findall('./testsuite/testcase/success'))
skipped = len(root.findall('./testsuite/testcase/skip'))
fails = len(root.findall('./testsuite/error')) + len(root.findall('./testsuite/testcase/error')) + len(root.findall('./testsuite/testcase/failure'))
self.setTestResults(total = fails+passing+skipped, failed = fails, passed = passing)
## the final status for WARNINGS is green but the step itself will be orange
self.finished(results.SUCCESS if fails == 0 else results.WARNINGS)
self.step_status.setText(self.describe(True))
And in the configuration factory I create a step as below:
factory.addStep(CustomStepResult(log_file = log_file))
Basically I override the default `Test` shell step and pass a custom XML file
which contains my test results. I then look for the `pass/fail/skip` result
nodes and accordingly display the results in the waterfall.

|
Python: subprocess call with shell=False not working
Question: I am using Python script to invoke a Java virtual machine. The following
command works:
subprocess.call(["./rvm"], shell=False) # works
subprocess.call(["./rvm xyz"], shell=True) # works
But,
subprocess.call(["./rvm xyz"], shell=False) # not working
does not work. [Python
documentation](https://docs.python.org/2/library/subprocess.html) advices to
avoid `shell=True`.
Answer: You need to split the commands into separate strings:
subprocess.call(["./rvm", "xyz"], shell=False)
A string will work when `shell=True` but you need a list of args when
`shell=False`
The [shlex](https://docs.python.org/2/library/shlex.html) module is more useful
for complicated commands and for dealing with user input, but it is good to
learn about:
import shlex
cmd = "python foo.py"
subprocess.call(shlex.split(cmd), shell=False)
[shlex tut](http://pymotw.com/2/shlex/)
|
Basic Tkinter/GUI Questions
Question: I'm dirtying my hands with Python's Tkinter, specifically learning OOP for
GUIs and have a few questions.
1. When constructing a class like so:
class Keylogger(Frame):
def __init__(self, master=None):
frame = Frame.__init__(self, master)
text = Text(frame, width=20, height=5)
...
test = KeyLogger()
test.mainloop()
* What's the purpose of the default master argument being `None`? Normally I've been learning to specify a master widget like `root = Tk()`. Are there times a master widget is not needed/wanted?
* I also noticed that I did not need to `pack()` my KeyLogger object, regardless if it had a master or not. I did not even pack the my Frame object. Why is that?
2. I then noticed that if I changed the code in my `__init__` method like so:
def __init__(self, master=None):
Frame.__init__(self, master) # no more assignment statement
text = Text(self, width=20, height=5) # first constructor arg. changed
...
test = KeyLogger()
test.mainloop()
* I now have to `pack()` my KeyLogger object, regardless if a master widget was present. What changed when I removed the assignment statement and just called `Frame.__init__()`?
* Should I call the `pack()` method inside `__init__()` or outside the class? It works both ways but I'm guessing there are some pros/cons.
**Thank you!**
Answer: 1\. In this code, since `Frame.__init__` returns nothing, `frame` is `None`, so
you just initialize a Text with None, like
text = Text(None, width=20, height=5)
And you need to call `text.pack()` inside `__init__()` to make it take effect.
The master is used as the parent widget; a widget created without one becomes
its own master widget. That's why there are times we don't give it a master.
2\. In this code, you initialize the Text with the Frame instance itself, by
passing `self` as its parent. You need to pack the Text inside `__init__()` and
also pack the KeyLogger instance in the main program to make them both take
effect. Like
from Tkinter import *
class KeyLogger(Frame):
def __init__(self, master=None):
# frame = Frame.__init__(self, master)
Frame.__init__(self, master)
# text = Text(frame, width=20, height=5)
text = Text(self, width=20, height=5)
text.pack()
test = KeyLogger()
test.pack()
test.mainloop()
If this helps, please confirm it as an answer so that someone else could find
it quickly. Thanks!
|
Python/Numpy: How to avoid storing Boolean-indexed sub-arrays?
Question: In using Numpy I often have the need to use Boolean indexing to access parts
of an array. To makes things easier to read and type, I often store these sub-
arrays into new variables, for example:
n = 10000
X = np.random.rand((n, n))
W = np.random.random_integers(0, 1, n)
X0 = X[W==0]
X1 = X[W==1]
However, as I deal with larger and larger data sets, this seems very wasteful.
What is the recommended practice in this case? Should I just write (in the
example above) X[W==0] and X[W==1] every time instead?
Answer: If you have only a limited number of elements to index, you might want to
store the indices as a list of indices. For this purpose the `numpy.nonzero`
is very useful. However, if your number of elements to index is large, you use
less memory by storing the boolean array (1 byte per element).
So, there are four possibilities:
1. store a boolean array
2. store indices of non-zeros
3. always make the comparison separately
4. masked arrays
From the memory storage point of view, the index solution (#2) takes 8 bytes per
indexed element per dimension. (Of course, the "per dimension" can be avoided by
using flat indices.) The boolean approach (#1) takes 1 byte per element, so if
you have more than 1/8 of the elements with a `True` in the boolean table, it is
the more space-saving solution. Solution #3 probably takes the same space as the
boolean solution.
(I do not know enough of the internals of NumPy to say much about the masked
arrays. I suspect they behave in a similar way to the boolean indexing.)
Performance-wise the situation is similar. The boolean solution is efficient,
if you have a lot of elements to pick, but if you have only a few elements,
then the index solution is better.
Just to give some benchmarking idea:
import numpy as np
import time
def create_indices(prob):
data = np.random.random(100000000) < prob
return data, np.nonzero(data)
def bool_index(data):
return data[data]
def list_index(data, indices):
return data[indices]
By using timeit with different probabilities, the results are:
p boolean list
0.01 0.206 0.012
0.10 0.415 0.099
0.20 0.405 0.146
0.50 0.786 0.373
0.75 0.539 0.555
1.00 0.214 0.723
This is actually quite interesting: Using boolean indices is worst when half
of the elements are `True`. The use of list indices behaves as expected.
This benchmark must not be taken as the whole truth. It may be that the type
of the array to be indexed changes the situation (here `bool` which is the
same as `uint8`), etc. However, it seems that performance-wise list indices
are very good in most cases.
|
Python getting all links from a google search result page
Question: I want to create a script that returns all the URLs found in a page, a Google
results page for example, so I created this script (using BeautifulSoup):
import urllib2
from BeautifulSoup import BeautifulSoup
page = urllib2.urlopen("https://www.google.dz/search?q=see")
soup = BeautifulSoup(page.read())
links = soup.findAll("a")
for link in links:
print link["href"]
and it return this 403 forbidden result :
Traceback (most recent call last):
File "C:\Python27\sql\sql.py", line 3, in <module>
page = urllib2.urlopen("https://www.google.dz/search?q=see")
File "C:\Python27\lib\urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 400, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 513, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 438, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 372, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 521, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
Any idea how to avoid this error, or another method to get the URLs from the
search results?
Answer: No problem using [requests](http://docs.python-requests.org/en/latest/)
import requests
from BeautifulSoup import BeautifulSoup
page = requests.get("https://www.google.dz/search?q=see")
soup = BeautifulSoup(page.content)
links = soup.findAll("a")
Some of the hrefs look like `search%...:http://...`, where the end of one URL
joins the start of another, so we need to split them using re:
from bs4 import BeautifulSoup
page = requests.get("https://www.google.dz/search?q=see")
soup = BeautifulSoup(page.content)
import re
links = soup.findAll("a")
for link in soup.find_all("a",href=re.compile("(?<=/url\?q=)(htt.*://.*)")):
print re.split(":(?=http)",link["href"].replace("/url?q=",""))
['https://www.see.asso.fr/&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CBIQFjAA&usg=AFQjCNF2_I8jB98JwR3jcKniLZekSrRO7Q']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:f7M8NX1XmDsJ', 'https://www.see.asso.fr/%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CBUQIDAA&usg=AFQjCNF8WJButjMNXQXvXBbtyXnF1SgiOg']
['https://www.see.asso.fr/3ei&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CBgQ0gIoADAA&usg=AFQjCNGnPL1RiX5TekI_yMUc-w_f2oVXtw']
['https://www.see.asso.fr/node/9587&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CBkQ0gIoATAA&usg=AFQjCNHX-6AzBgLQUF0s8TxFcZjIhxz_Hw']
['https://www.see.asso.fr/ree&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CBoQ0gIoAjAA&usg=AFQjCNGkkd8e1JjiNrhSM4HQYE-M6g6j-w']
['https://www.see.asso.fr/node/130&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CBsQ0gIoAzAA&usg=AFQjCNEkVdpcbXDz5-cV9u2NNYoV6aM8VA']
['http://www.wordreference.com/enfr/see&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CB0QFjAB&usg=AFQjCNHQGwcsGpro26dhxFP6q-fQvwbB0Q']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:ooK-I_HuCkwJ', 'http://www.wordreference.com/enfr/see%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CCAQIDAB&usg=AFQjCNFRlV5Zv_n48Wivr4LeOkTQsA0D1Q']
['http://fr.wikipedia.org/wiki/S%25C3%25A9e&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CCMQFjAC&usg=AFQjCNGmtqmcXPqYZ_nwa0RWL0uYf5PMJw']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:GjcgkyzsUigJ', 'http://fr.wikipedia.org/wiki/S%2525C3%2525A9e%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CCYQIDAC&usg=AFQjCNHesOIBU3OXBspARcONbK_k_8-gnw']
['http://fr.wikipedia.org/wiki/Camille_S%25C3%25A9e&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CCkQFjAD&usg=AFQjCNGO-WIDl4TrBeo88WY9QsopWmsMyQ']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:izhQjC85nOoJ', 'http://fr.wikipedia.org/wiki/Camille_S%2525C3%2525A9e%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CCwQIDAD&usg=AFQjCNEfcIKsKbf026xgWT7NkrAueZvL0A']
['http://de.wikipedia.org/wiki/Zugersee&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CDEQ9QEwBA&usg=AFQjCNHpfJW5-XdsgpFUSP-jEmHjXQUWHQ']
['http://commons.wikimedia.org/wiki/File:Champex_See.jpg&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CDMQ9QEwBQ&usg=AFQjCNEordFWr2QIaob45WlR5Yi-ZvZSiA']
['http://www.all-free-photos.com/show/showphotop.php%3Fidtop%3D4%26lang%3Dfr&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CDUQ9QEwBg&usg=AFQjCNEC24FOIE5cvF4zmEDgq5-5xubM3w']
['http://www.allbestwallpapers.com/travel-zell_am_see,_kaprun,_austria_wallpapers.html&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CDcQ9QEwBw&usg=AFQjCNFkzMZDuthZHvnF-JvyksNUqjt1dQ']
['http://www.see-swe.org/&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CDkQFjAI&usg=AFQjCNF1zbcLfjanxgCXtHoOQXOdMgh_AQ']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:lzh6JxvKUTIJ', 'http://www.see-swe.org/%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CDwQIDAI&usg=AFQjCNFYN6tzzVaHsAc5aOvYNql3Zy4m3A']
['http://fr.wiktionary.org/wiki/see&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CD8QFjAJ&usg=AFQjCNFWYIGc1gj0prytowzqI-0LDFRvZA']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:G9v8lXWRCyQJ', 'http://fr.wiktionary.org/wiki/see%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CEIQIDAJ&usg=AFQjCNENzi4E1n-9qHYsNahY6lQzaW5Xvg']
['http://en.wiktionary.org/wiki/see&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CEUQFjAK&usg=AFQjCNECGZjw-rBUALO43WaTh2yB9BUhDg']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:ywc4URuPdIQJ', 'http://en.wiktionary.org/wiki/see%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CEgQIDAK&usg=AFQjCNE0pykIqXXRl08E-uTtoj03QEpnbg']
['http://see-concept.com/&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CEsQFjAL&usg=AFQjCNGFWjhiH7dEBhITJt01ob_JENlz1Q']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:jHTkOVEoRsAJ', 'http://see-concept.com/%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CE4QIDAL&usg=AFQjCNECPgxt9ZSFmZzK_ker9Hw_FoCi_A']
['http://www.theconjugator.com/la/conjugaison/du/verbe/see.html&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CFEQFjAM&usg=AFQjCNETCTQ0vPDIdV_2Q57qq11dyN0d8Q']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:xD7_Qo7roS8J', 'http://www.theconjugator.com/la/conjugaison/du/verbe/see.html%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CFQQIDAM&usg=AFQjCNF_hBCyDZncivYGnL7je5kYme9hEg']
['http://www.zellamsee-kaprun.com/fr&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CFcQFjAN&usg=AFQjCNFVDeBWrZMDSjK9jKYF4AQlIXa9lA']
['http://webcache.googleusercontent.com/search%3Fq%3Dcache:BFBEUp05w7YJ', 'http://www.zellamsee-kaprun.com/fr%252Bsee%26hl%3Dfr%26%26ct%3Dclnk&sa=U&ei=ryv6U6PvEKzA7AaB4ICwCA&ved=0CFoQIDAN&usg=AFQjCNHtrOeEpYWqvT3f0M1p-gxUkYT1IA']
|
Python-OpenCv error-261
Question: I'm new to Python-OpenCV programming. I'm using Python IDLE to load and
display an image from a folder. It shows the following error:
**Traceback (most recent call last):
File "C:\Python27\a.py", line 4, in <module>
cv2.imshow("abc",img)
error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow**
my code is:
import cv2
import numpy as np
img = cv2.imread("C:\Users\Mayur\Desktop\ab.bmp",0)
cv2.imshow("abc",img)
cv2.waitKey()
I have searched for solutions, but they are all for MATLAB and I'm using Python
IDLE.
Answer: You may have found the solution already, but I'll give an answer anyway.
This is a common mistake when trying to read files in Python: in an ordinary
string literal the "\" starts an escape sequence, so Python does not see the
path you typed. In my experience the simplest fix is to use "/" instead of "\"
in the file location. Here is what it should look like:
img = cv2.imread("C:/Users/Mayur/Desktop/ab.bmp",0)
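Alternatively, a raw string stops the backslashes from being treated as escapes (in your original path, `\a` silently turns into a bell character):

    img = cv2.imread(r"C:\Users\Mayur\Desktop\ab.bmp", 0)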
|
Python hadoop mapreduce job using mrjob subprocess.CalledProcessError
Question: I am trying to run the basic example from the mrjob website on my own data. I
have run Hadoop MapReduce successfully using streaming, and I have also
successfully tried the script without Hadoop, but now I am trying to run it on
Hadoop via mrjob with the following command.
./mapred.py -r hadoop --hadoop-bin /usr/bin/hadoop -o hdfs:///user/cloudera/wc_result_mrjob hdfs:///user/cloudera/books
Source code of mapred.py is following:
#! /usr/bin/env python
from mrjob.job import MRJob
class MRWordFrequencyCount(MRJob):
def mapper(self, _, line):
yield "chars", len(line)
yield "words", len(line.split())
yield "lines", 1
def reducer(self, key, values):
yield key, sum(values)
if __name__ == '__main__':
MRWordFrequencyCount.run()
Unfortunately I am getting following error:
no configs found; falling back on auto-configuration
no configs found; falling back on auto-configuration
creating tmp directory /tmp/mapred.cloudera.20140824.195414.420162
writing wrapper script to /tmp/mapred.cloudera.20140824.195414.420162/setup-wrapper.sh
STDERR: mkdir: `hdfs:///user/cloudera/tmp/mrjob/mapred.cloudera.20140824.195414.420162/files/': No such file or directory
Traceback (most recent call last):
File "./mapred.py", line 18, in <module>
MRWordFrequencyCount.run()
File "/usr/lib/python2.6/site-packages/mrjob/job.py", line 494, in run
mr_job.execute()
File "/usr/lib/python2.6/site-packages/mrjob/job.py", line 512, in execute
super(MRJob, self).execute()
File "/usr/lib/python2.6/site-packages/mrjob/launch.py", line 147, in execute
self.run_job()
File "/usr/lib/python2.6/site-packages/mrjob/launch.py", line 208, in run_job
runner.run()
File "/usr/lib/python2.6/site-packages/mrjob/runner.py", line 458, in run
self._run()
File "/usr/lib/python2.6/site-packages/mrjob/hadoop.py", line 238, in _run
self._upload_local_files_to_hdfs()
File "/usr/lib/python2.6/site-packages/mrjob/hadoop.py", line 265, in _upload_local_files_to_hdfs
self._mkdir_on_hdfs(self._upload_mgr.prefix)
File "/usr/lib/python2.6/site-packages/mrjob/hadoop.py", line 273, in _mkdir_on_hdfs
self.invoke_hadoop(['fs', '-mkdir', path])
File "/usr/lib/python2.6/site-packages/mrjob/fs/hadoop.py", line 109, in invoke_hadoop
raise CalledProcessError(proc.returncode, args)
subprocess.CalledProcessError: Command '['/usr/bin/hadoop', 'fs', '-mkdir', 'hdfs:///user/cloudera/tmp/mrjob/mapred.cloudera.20140824.195414.420162/files/']' returned non-zero exit status 1
It seems to me that mrjob is not able to create a directory in HDFS, but
unfortunately I don't know how to fix this problem.
My Hadoop is the Cloudera CDH 5.1 quickstart VM.
Thank you in advance for any suggestions and advice.
**EDIT:**
I have tried to run the same code on the Cloudera CDH 4.7 quickstart VM and it
worked well. So my modified question is: is Cloudera CDH 5.1 supported by the
mrjob framework? If so, how do I run it?
Answer: I encountered the same error; the workaround I used was to change:
`self.invoke_hadoop(['fs', '-mkdir', path])`
to
`self.invoke_hadoop(['fs', '-mkdir', '-p', path])`
The file modified was: /usr/lib/python2.6/site-packages/mrjob/hadoop.py
My mrjob jobs have been running for a couple of months now without any trouble,
so this looks fine to me.
I am myself keen to know of an alternative for the same.
|
Python subprocess calling
Question: I have a problem with running files through python. This is my code :
def report1(self):
str="/Users/Apple/Desktop/Report1.exe"
subprocess.call(str)
This is the error i am getting :
File "./DBMS.py", line 427, in <module>
Main().run();
File "./DBMS.py", line 415, in run
self.report1()
File "./DBMS.py", line 383, in report1
subprocess.call(str)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 13] Permission denied
PS: I tried changing permission rights on the folders and I tried using
subprocess.Popen. I also tried adding shell=True. I don't understand why it's
still not working.
Any help is really appreciated; I have to submit in 24 hours :(
Answer: For all its merits, `subprocess` doesn't make it abundantly clear when an
error has occurred while _trying_ to execute a command.
If the deepest frame in your traceback (the one right before the actual
exception) is `raise child_exception` from `subprocess.py`, that means there
was some issue inclusively between the `fork(2)` and `exec*(2)` calls -- in
other words, that an error occurred while trying to run the command you
requested.
The actual exception you pasted was `OSError: [Errno 13] Permission denied`.
An `errno` of `13` corresponds to `EACCES`:
>>> import errno; print errno.errorcode[13]
EACCES
If you've never used `fork(2)` or `exec(2)` things will be pretty inscrutable,
because `subprocess` has dropped the real traceback. However, I can tell you
that this `OSError` almost certainly came from the `exec*` call. It turns out
`execve` raises this under the following conditions:
[EACCES] Search permission is denied for a component of the
path prefix.
[EACCES] The new process file is not an ordinary file.
[EACCES] The new process file mode denies execute permission.
[EACCES] The new process file is on a filesystem mounted with
execution disabled (MNT_NOEXEC in <sys/mount.h>).
(Courtesy of
[Apple](https://developer.apple.com/library/ios/documentation/System/Conceptual/ManPages_iPhoneOS/man2/execve.2.html
"Apple"))
If I had to guess, you encountered this this exception because the command
you're trying to run isn't marked executable (with something like `chmod
u+x`).
Now, it's unlikely that your `.exe` file will run on your Mac after solving
this, but at least it'll be a different error!
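If you want to check for, and set, the executable bit from Python before calling the file (the path is taken from your example), a small sketch:

    import os
    import stat

    path = "/Users/Apple/Desktop/Report1.exe"
    if not os.access(path, os.X_OK):
        # roughly the equivalent of running `chmod u+x` on the file
        os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)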
|
How can I restrict the scope of a multiprocessing process?
Question: Using python's
[`multiprocessing`](https://docs.python.org/2/library/multiprocessing.html)
module, the following contrived example runs with minimal memory requirements:
import multiprocessing
# completely_unrelated_array = range(2**25)
def foo(x):
for x in xrange(2**28):pass
print x**2
P = multiprocessing.Pool()
for x in range(8):
multiprocessing.Process(target=foo, args=(x,)).start()
Uncomment the creation of the `completely_unrelated_array` and you'll find
that each spawned process allocates the memory for a copy of the
`completely_unrelated_array`! This is a minimal example of a much larger
project that I can't figure out how to workaround; multiprocessing seems to
make a copy of everything that is global. I **don't** need a shared memory
object, I simply need to pass in `x`, and process it _without_ the memory
overhead of the entire program.
_Side observation_ : What's interesting is that `print
id(completely_unrelated_array)` inside `foo` gives the same value, suggesting
that somehow that might not be copies...
Answer: Because of the nature of `os.fork()`, any variables in the global namespace of
your `__main__` module will be inherited by the child processes (assuming
you're on a Posix platform), so you'll see the memory usage in the children
reflect that as soon as they're created. I'm not sure all that memory is really
being allocated, though; as far as I know the pages are shared copy-on-write
until you actually try to change them in the child, at which point a new copy is
made.
Windows, on the other hand, doesn't use `os.fork()` \- it re-imports the main
module in each child, and pickles any local variables you want sent to the
children. So, using Windows you can actually avoid the large global ending up
copied in the child by only defining it inside an `if __name__ == "__main__":`
guard, because everything inside that guard will only run in the parent
process:
import time
import multiprocessing
def foo(x):
for x in range(2**28):pass
print(x**2)
if __name__ == "__main__":
completely_unrelated_array = list(range(2**25)) # This will only be defined in the parent on Windows
P = multiprocessing.Pool()
for x in range(8):
multiprocessing.Process(target=foo, args=(x,)).start()
Now, in Python 2.x, you can only create new `multiprocessing.Process` objects
by forking if you're using a Posix platform. But on Python 3.4, you can
specify how the new processes are created, by using contexts. So, we can
specify the
[`"spawn"`](https://docs.python.org/3/library/multiprocessing.html?highlight=multiprocessing#contexts-
and-start-methods) context, which is the one Windows uses, to create our new
processes, and use the same trick:
# Note that this is Python 3.4+ only
import time
import multiprocessing
def foo(x):
for x in range(2**28):pass
print(x**2)
if __name__ == "__main__":
completely_unrelated_array = list(range(2**23)) # Again, this only exists in the parent
ctx = multiprocessing.get_context("spawn") # Use process spawning instead of fork
P = ctx.Pool()
for x in range(8):
ctx.Process(target=foo, args=(x,)).start()
If you need 2.x support, or want to stick with using `os.fork()` to create new
`Process` objects, I think the best you can do to get the reported memory
usage down is immediately delete the offending object in the child:
import time
import multiprocessing
import gc
def foo(x):
init()
for x in range(2**28):pass
print(x**2)
def init():
global completely_unrelated_array
completely_unrelated_array = None
del completely_unrelated_array
gc.collect()
if __name__ == "__main__":
completely_unrelated_array = list(range(2**23))
P = multiprocessing.Pool(initializer=init)
for x in range(8):
multiprocessing.Process(target=foo, args=(x,)).start()
time.sleep(100)
|
How To Fetch data in Python from mysql database
Question: I am new to Python and I am facing a problem while fetching data from a MySQL
DB: when I pass parameters into the MySQL query, I think my MySQL syntax is
incorrect.
**Here is the error displayed on screen:**
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to
complete your request.
Please contact the server administrator at webmaster@localhost to inform them
of the time this error occurred, and the actions you performed just before
this error.
More information about this error may be available in the server error log.
Apache/2.4.6 (Ubuntu) Server at localhost Port 80
Here is my code for the SELECT query, in which I want to fetch data based on the
GET parameters of the URL.
#!/usr/bin/python2.7
import cgi
import cgitb
cgitb.enable()
print "Content-type: text/html\n\n"
print "<h1>Hello Python</h1>"
#!/usr/bin/python
import MySQLdb
# Create instance of FieldStorage
form = cgi.FieldStorage()
# Get data from fields
first_name = form.getvalue('first_name')
last_name = form.getvalue('last_name')
# Open database connection
db = MySQLdb.connect("localhost","root","123456789","testdrive" )
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Prepare SQL query to INSERT a record into the database
sqlstmt = "SELECT * FROM EMPLOYEE WHERE FIRST_NAME = %(first_name)s AND LAST_NAME = %(last_name)s"
try:
# Execute the SQL command
cursor.execute(sqlstmt, {'first_name': first_name, 'last_name': last_name})
# Fetch all the rows in a list of lists.
results = cursor.fetchall()
for row in results:
fname = row[0]
lname = row[1]
age = row[2]
sex = row[3]
income = row[4]
# Now print fetched result
print "fname=%s,lname=%s,age=%d,sex=%s,income=%d" % \
(fname, lname, age, sex, income )
except:
print "Error: unable to fecth data"
# disconnect from server
db.close()
Answer: First off, you should try abstracting all that into a single function you can
call outside of CGI, but that's a whole other exercise. If you had done that, you
could see the stack trace much more easily (note that your bare `except:` also
hides the real error). In any case, I can see a syntax error in the code you
helpfully included
sql = "SELECT * FROM EMPLOYEE WHERE FIRST_NAME = '".first_name."' AND LAST_NAME = '".last_name."'"
Python string concatenation uses the `+` operator, not `.` like it is in PHP.
Second, this code is not secure. See <http://xkcd.com/327/>
To fix this, the [`cursor.execute`](http://dev.mysql.com/doc/connector-
python/en/connector-python-api-mysqlcursor-execute.html) method provides a
second argument to fill out the tokens, this is what you should do
sqlstmt = "SELECT * FROM EMPLOYEE WHERE FIRST_NAME = %(first_name)s AND LAST_NAME = %(last_name)s"
try:
cursor.execute(sqlstmt, {'first_name': first_name, 'last_name': last_name})
...
|
How do I set an Argparse argument's default value to a positional argument's value?
Question: I have a python script that sends a GET request. It uses Argparse to take
three arguments:
1. **Address** : where to send the GET request
2. **Host** : the host to declare in the GET request
3. **Resource** : which resource is being requested
An example usage might be:
`$ python get.py 198.252.206.16 stackoverflow.com /questions/ask`
In most cases, however, only the host and the resource need to be given as the
host will resolve to the address:
$ host -t a stackoverflow.com
stackoverflow.com has address 198.252.206.16
So desired usage might be:
$ python get.py stackoverflow.com /questions/ask
How do I set up Argparse so that the default value of the Address argument is
the value of the Host argument?
* * *
I've been asked to show the code that currently parses the arguments. Here it
is:
import argparse
parser = argparse.ArgumentParser(description=
'Send a GET request and obtain the HTTP response header and/or body.')
parser.add_argument("-v", "--verbose",
help="Turn on verbose mode.",
action="store_true")
parser.add_argument("-p", "--port",
type=int,
default=80,
help="Which port to use when sending the GET request."
parser.add_argument("address",
help="Where to send the GET request to.")
parser.add_argument("host",
help="Which Host to declare in the GET request.")
parser.add_argument("resource",
help="Which resource to request.")
parser.parse_args()
Answer: If you are trying to achieve the behaviour you described using ONLY positional
arguments you can use a list argument (nargs) like this:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("args", nargs="+")
parsed = parser.parse_args()
args = parsed.args
if len(args) == 3:
ip, host, address = args
print ip, host, address
elif len(args) == 2:
ip, host, address = args[0], args[0], args[1]
print ip, host, address
else:
print "Invalid args"
But not only is this hacky, you also lose the benefits argparse provides (you
have to verify the arguments manually). I recommend you use optional
arguments instead. Perhaps like this:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-host", required=True)
parser.add_argument("-res", required=True)
parser.add_argument("-ip", required=False)
args = parser.parse_args()
ip = args.ip if args.ip else args.host
print args.host, args.res, ip
And execute it like this:
python2.7 test.py -host hello -res world
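Another option, if you are happy to put the address last, is an optional trailing positional plus a small post-processing step (a sketch; it is not tested against your full option set):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("host", help="Which Host to declare in the GET request.")
    parser.add_argument("resource", help="Which resource to request.")
    parser.add_argument("address", nargs="?", default=None,
                        help="Where to send the GET request to (defaults to the host).")
    args = parser.parse_args()
    if args.address is None:
        args.address = args.host
    print args.host, args.resource, args.address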
|
Can't find matching form
Question: I am trying to use Python's mechanize module to submit a form value and
download a subsequent file. However, I keep getting an error saying the script
can't find the form.
The website I'm using is
[here](https://ccmis.dhs.state.ia.us/clientportal/providersearch.aspx).
I'm trying to select by County = 'Linn'.
Below is the script I have up to selecting the form...
import mechanize
url = 'https://ccmis.dhs.state.ia.us/clientportal/providersearch.aspx'
br = mechanize.Browser()
br.open(url)
br.select_form(name="ctl00$MainContent$ddlSearchByLocationCounty")
I keep getting an error that there is no form with a matching name. When I use
developer tools, this is the name that shows for the element. Below is a snippet
of the HTML...
<select name="ctl00$MainContent$ddlSearchByLocationCounty" id="ctl00_MainContent_ddlSearchByLocationCounty" style="width:150px;">
<option value="">Select County</option>
<option value="Adair">Adair</option>
<option value="Adams">Adams</option>
<option value="Allamakee">Allamakee</option>
Answer: You need to first select the form by name, ID, etc before you can select an
input and set its value. Here is the updated code which sets the country to
Linn. I recommend reviewing the cheat sheet at
<http://www.pythonforbeginners.com/cheatsheet/python-mechanize-cheat-sheet>.
import mechanize
url = 'https://ccmis.dhs.state.ia.us/clientportal/providersearch.aspx'
br = mechanize.Browser()
br.open(url)
br.select_form(name="aspnetForm")
country = br.form.find_control("ctl00$MainContent$ddlSearchByLocationCounty")
country.value = ['Linn']
print country.value
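To actually run the search you would then submit the form and read the returned page; a brief, untested sketch:

    response = br.submit()
    html = response.read()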
|
Python 2.7 and Flask - using functions from "Random" module returns "500 Internal Server Error" when using venv
Question: I'm testing a web service written with python and flask. Within that service I
want to generate a random number. None of the solutions posted on the web seem
to work, **as if the "random" module does not work.** I checked, and there's
no random.py or random.pyc within the folders around the webservice.py
Here's how I start the web service:
$ . venv/bin/activate
$ python webservice.py
**How can I fix random module not being available when running in a virtual
environment?**
import random
from random import randint
x = random.random()*100
x = randint(0,10)
print 'r' + x #does not work
Here's what's in the venv folder:

Answer: Check to see if `help("random")` will give a listing of the module.
* Then reinstall the module in that venv. Also, what is in the lib folder?
A link to the docs for reference:
<https://docs.python.org/2/library/random.html>
|
how to use python to execute a curl command
Question: I'm new to Python and I want to execute a curl command in Python.
Usually, I just need to enter the command in a terminal and press the return
key. However, I don't know how it works in Python.
The command shows below:
curl -d @request.json --header "Content-Type: application/json" https://www.googleapis.com/qpxExpress/v1/trips/search?key=mykeyhere
There is a request.json file to be sent in order to get a response.
I searched a lot and got confused. I tried to write a piece of code, although I
could not fully understand it, and it didn't work.
import pycurl
import StringIO
response = StringIO.StringIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://www.googleapis.com/qpxExpress/v1/trips/search?key=mykeyhere')
c.setopt(c.WRITEFUNCTION, response.write)
c.setopt(c.HTTPHEADER, ['Content-Type: application/json','Accept-Charset: UTF-8'])
c.setopt(c.POSTFIELDS, '@request.json')
c.perform()
c.close()
print response.getvalue()
response.close()
The error message is 'Parse Error'. Can anyone tell me how to fix it, or how to
get the response from the server correctly? Thanks a lot!
Answer: For the sake of simplicity, maybe you should consider using the
[Requests](http://docs.python-requests.org/) library, a much more user-friendly
HTTP client for Python.
An example with json response content would be something like:
import requests
r = requests.get('https://github.com/timeline.json')
r.json()
If you look for further information, in the [Quickstart](http://docs.python-
requests.org/en/latest/user/quickstart/) section, they have lots of working
examples.
**EDIT:**
For your specific curl translation:
import requests
import json
url = 'https://www.googleapis.com/qpxExpress/v1/trips/search?key=mykeyhere'
payload = json.load(open("request.json"))
headers = {'content-type': 'application/json', 'Accept-Charset': 'UTF-8'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
|
Time-delayed grequests without using grequests.map()
Question: This is the first time I've tried to use a library with less-than-ideal levels
of documentation and example code, so bear with me. I have a tiny bit of
experience with the Requests library, but I need to send separate requests to
a specific address every second:
* Without waiting for the first request to complete, handling the individual responses as they come in
* The responses' content need to be parsed separately
* While limiting the total number of connections
I can't figure out how to satisfy these conditions simultaneously.
`grequests.map()` will give me the responses' content that I want, but only in
a batch after they've all completed. `grequests.send()` seems to only return a
response object that doesn't contain the html text of the web page. (I may be
wrong about `grequests.send()`, but I haven't yet found an example that pulls
content from that object)
Here's the code that I have so far:
import grequests
from time import sleep
def print_res(res, **kwargs):
print res
print kwargs
headers = {'User-Agent':'Python'}
req = grequests.get('http://stackoverflow.com', headers=headers, hooks=dict(response=print_res), verify=False)
for i in range(3):
job = grequests.send(req, grequests.Pool(10))
sleep(1)
The response I get:
1
<Response [200]>
{'verify': False, 'cert': None, 'proxies': {'http': 'http://127.0.0.1:8888', 'ht
tps': 'https://127.0.0.1:8888'}, 'stream': False, 'timeout': None}
2
<Response [200]>
{'verify': False, 'cert': None, 'proxies': {'http': 'http://127.0.0.1:8888', 'ht
tps': 'https://127.0.0.1:8888'}, 'stream': False, 'timeout': None}
3
<Response [200]>
{'verify': False, 'cert': None, 'proxies': {'http': 'http://127.0.0.1:8888', 'ht
tps': 'https://127.0.0.1:8888'}, 'stream': False, 'timeout': None}
I've tried accessing the html response with `req.content`, and `job.content`,
but neither work.
Answer: Of course, while writing up this question I realized that I hadn't tried to
access `res.content`, which turns out to be exactly what I needed.
Lesson learned: The object that is returned to the response hook in the
`grequests.get()` statement has a `content` attribute which contains the text
of the response sent from the server.
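For example, the hook from the question could read the body directly (truncated here only to keep the output short):

    def print_res(res, **kwargs):
        print res.content[:200]  # res.content holds the body of the response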
|
pkg_resources.ExtractionError: Can't extract file(s) to egg cache: [Errno 13] Permission denied:
Question: So, I've been battling this ongoing problem for days and here's my latest
dead-end:
I wanted to uninstall cx_Oracle to start fresh. I found
[this](http://stackoverflow.com/questions/922323/how-do-i-deactivate-an-egg)
that said the way to do that was to delete the egg file.
So I did:
rm -rf /Users/xxx/.python-eggs/cx_Oracle-5.1.3-py2.7-macosx-10.8-x86_64.egg-tmp/
Which I now see was a total reading comprehension failure on my part, but
what's done is done.
Then I tried to run the script and got:
File "time_reporting.py", line 31, in <module>
import cx_Oracle
File "build/bdist.macosx-10.8-x86_64/egg/cx_Oracle.py", line 7, in <module>
File "build/bdist.macosx-10.8-x86_64/egg/cx_Oracle.py", line 4, in __bootstrap__
File "build/bdist.macosx-10.8-x86_64/egg/pkg_resources.py", line 951, in resource_filename
err.manager = self
File "build/bdist.macosx-10.8-x86_64/egg/pkg_resources.py", line 1647, in get_resource_filename
def get_importer(path_item):
File "build/bdist.macosx-10.8-x86_64/egg/pkg_resources.py", line 1677, in _extract_resource
except ImportError:
File "build/bdist.macosx-10.8-x86_64/egg/pkg_resources.py", line 1017, in get_cache_path
Resource providers should call this method ONLY after successfully
File "build/bdist.macosx-10.8-x86_64/egg/pkg_resources.py", line 997, in extraction_error
pkg_resources.ExtractionError: Can't extract file(s) to egg cache
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 13] Permission denied: '/Users/xxx/.python-eggs/cx_Oracle-5.1.3-py2.7-macosx-10.8-x86_64.egg-tmp'
The Python egg cache directory is currently set to:
/Users/xxx/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
Now I have no idea what to do. Python 2.7.8, OSX 10.8.5
Answer: UNIX file permissions and ownerships of the target directories are incorrect.
Most likely you have installed stuff as root, making those directories root
owned.
[Do not install packages as root in
Python](http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-
installation-of-python-software-with-virtualenv/), unless you really know what
you are doing.
[Use UNIX chmod and chown commands to fix directory
permissions](http://www.cyberciti.biz/faq/how-to-use-chmod-and-chown-command/)
... or remove directories with sudo and after this reinstall the packages
without sudo:
sudo rm -rf /Users/xxx/.python-eggs/
|
python's negative threshold, the lowest non-infinity negative number?
Question: What is python's threshold of representable negative numbers? What's the
lowest number below which Python will call any other value a - negative
inifinity?
Answer: There is no most negative integer, as Python integers have arbitrary
precision. The most negative finite float (the smallest value greater than
negative infinity, which itself can be written as `-float('inf')`) can be
derived from `sys.float_info`.
>>> import sys
>>> sys.float_info.max
1.7976931348623157e+308
The actual values depend on the actual implementation, but typically uses your
C library's `double` type. Since floating-point values typically use a sign
bit, the smallest negative value is simply the inverse of the largest positive
value. Also, because of how floating point values are stored (separate
mantissa and exponent), you can't simply subtract a small value from the
"minimum" value and get back negative infinity. Subtracting 1, for example,
simply returns the same value due to limited precision.
(In other words, the possible `float` values are a small subset of the actual
real numbers, and an operation on two `float` values is not necessarily
equivalent to the same operation on the "equivalent" reals.)
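For example, a quick interpreter session (exact values depend on the platform's C `double`):
>>> import sys
>>> -sys.float_info.max            # most negative finite float
-1.7976931348623157e+308
>>> -sys.float_info.max - 1        # unchanged: the 1 is lost to limited precision
-1.7976931348623157e+308
>>> -sys.float_info.max * 2        # overflows to negative infinity
-inf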
|
Python (nltk) - UnicodeDecodeError: 'ascii' codec can't decode byte
Question: I'm new to NLTK. I'm getting this error and I've searched around for
encoding/decoding and specifically the UnicodeDecodeError but this error seems
specific to the NLTK source code.
Here's the error:
Traceback (most recent call last):
File "A:\Python\Projects\Test\main.py", line 2, in <module>
print(pos_tag(word_tokenize("John's big idea isn't all that bad.")))
File "A:\Python\Python\lib\site-packages\nltk\tag\__init__.py", line 100, in pos_tag
tagger = load(_POS_TAGGER)
File "A:\Python\Python\lib\site-packages\nltk\data.py", line 779, in load
resource_val = pickle.load(opened_resource)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xcb in position 0: ordinal not in range(128)
How do I go around fixing this error?
Here's what causes the error:
from nltk import pos_tag, word_tokenize
print(pos_tag(word_tokenize("John's big idea isn't all that bad.")))
Answer: I had the same problem as you. I use Python 3.4 on Windows 7.
I had installed "nltk-3.0.0.win32.exe" (from
[here](https://pypi.python.org/pypi/nltk)). But when I installed
"nltk-3.0a4.win32.exe" (from [here](http://www.nltk.org/nltk3-alpha/)), my
problem with nltk.pos_tag was solved. Check it.
EDIT: If the second link doesn't work, you can look
[here](https://github.com/nltk/nltk/releases/tag/3.0a4).
|
Downloading and Appending PDFs
Question: I'm trying to download all the pdf urls from a website and append all the pdfs
into a single file. At the moment, I have a list of all the urls containing
pdfs. How can I download all the pdfs and append them together? I've attached
my code below. I'm using Python 2.7.8.
# Download and merge pdfs
url_list = listofurl
for url in listofurl:
outfile = os.path.basename(url)
with open(outfile, 'w') as out:
out.write(urllib2.urlopen(url).read())
Answer: For me the download works but it throws an exception at one point where a file
is not found
HTTPError: HTTP Error 404: Not Found
I am not sure if python itself is able to merge the files. I would recommend
using "pdftk" and call it via the 'subprocess' module once your files are on
your harddisk.
On a linux system it works like this once 'pdftk' (an external and very
practical pdf merger for the command line) is installed:
from subprocess import call
from glob import glob
# build the file list in Python; a literal '*.pdf' is not expanded without a shell
call(['pdftk'] + sorted(glob('*.pdf')) + ['cat', 'output', 'combined.pdf'])
It is not quite the most pythonic way but the easiest I can think of at the
moment. Hope it helps.
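If it helps, a small variation of the question's download loop that simply skips URLs returning 404 (a sketch; `listofurl` stands in for the question's list of pdf urls):
import os
import urllib2
listofurl = ['http://example.com/a.pdf']   # placeholder for the question's url list
for url in listofurl:
    outfile = os.path.basename(url)
    try:
        data = urllib2.urlopen(url).read()
    except urllib2.HTTPError as e:
        print 'skipping %s (%s)' % (url, e)   # e.g. the 404 mentioned above
        continue
    with open(outfile, 'wb') as out:
        out.write(data)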
|
Need help filtering out incomplete entries in a .csv [Python]
Question: I have a set of data points, which look like this:
ID 70014 1940 1 1 26.8 1 Y
ID 70014 1940 1 2 29.8 1 Y
ID 70014 1940 1 3 34.3 1 Y
ID 70014 1940 1 4 35.7 1 Y
ID 70014 1940 1 5 34.1 1 Y
but some of the entries have missing values, like the ones below:
ID 70014 1940 6 30
ID 70014 1940 7 1 14 1 N
I need to define a function that sets the parameters for legitimate entries and
then filters them out, but I'm not exactly sure how to go about that. I'm
getting confused about where it should be located in my code; I'm fairly
certain I know how to describe the parameters, but not how to connect this
code to the rest of my program.
If I've left anything out, let me know, I'm happy to describe the issue more
thoroughly :)
Cheers.
Answer: The simplest thing you could do is split the string and check whether there are
enough fields:
def is_valid_ID(Id):
if len(Id.split()) == 8:
return True
else:
return False
>>> is_valid_ID("ID 70014 1940 1 1 26.8 1 Y")
True
>>> is_valid_ID("ID 70014 1940 1 1 26.8 1")
False
However, this won't check whether the individual elements are valid (you could
use regexps for that).
Or the same thing in more compact notation
def is_valid_ID(Id):
return (True if len(Id.split()) == 8 else False)
If you want to use regexps you could try
import re
def accept_entry(entry):
try:
return re.search("ID [0-9]{5} [0-9]{4} [0-9] [0-9] [0-9.]* [0-9] Y",entry).group()
except:
return False
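To connect either check to the rest of a program, one straightforward sketch (the file names here are placeholders) is to copy only the lines that pass it:
# assuming is_valid_ID() (or accept_entry()) defined as above,
# and hypothetical input/output file names
with open('data.txt') as infile, open('clean.txt', 'w') as outfile:
    for line in infile:
        if is_valid_ID(line):
            outfile.write(line)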
|
Find the difference of the two csv files
Question: I have two csv files named `x.csv` and `y.csv`. `x.csv` has only one row -
`Column A:0, Column B:1, Column C:2, Column D:3`. In `y.csv`, only one row -
`Column A:2, Column B:3, Column C:4`. I need to find the difference of the two
csv files using python and output to a third csv file.
So far I have tried `with open('x.csv','rb') as f1, open('y.csv','rb') as f2`
Answer: From what I understand from the question, this should compare the files and
create a third csv with the differences for each cell. Personally I don't
think this is a very elegant solution and will break down in a range of
scenarios but it should at least get you started. This was partially based off
the linked Q/A given in the comments.
import csv
def csv_get(csvfile):
with open(csvfile) as f:
for row in csv.reader(f):
for i in row:
yield i
def csv_cmp(csvfile1, csvfile2, output):
row = []
read_file_2 = csv_get(csvfile2)
for value_1 in csv_get(csvfile1):
value_2 = read_file_2.next()
print("File 1: {} File 2: {}").format(value_1, value_2)
difference = int(value_1) - int(value_2)
row.append(difference)
with open(output, "w") as output_file:
csv.writer(output_file).writerow(row)
read_file_2.close()
csv_cmp(csvfile1="C:\\...\\a.csv",
csvfile2="C:\\...\\b.csv",
output="C:\\...\\c.csv")
|
How do you access a gmail inbox and manage emails using python?
Question: I've done a program to send mails, wich have a numbers code in the subject.
Now I need to write another one to be able to read through all the mail
subjects searching for one specific, and returns the contents of the mail I
was searching for.
Thank you for your help.
Answer: To access your gmail inbox and manage emails you will have to use a module
called "imaplib". Here is a short example of accessing your inbox:
import imaplib
mail = imaplib.IMAP4_SSL('imap.gmail.com')
mail.login('[email protected]', 'yourpassword')
mail.list()
# Out: list of "folders" aka labels in gmail.
mail.select("inbox") # connect to inbox.
I'm sure you can continue from here.
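Since the question also asks about finding one specific subject, a rough sketch of the next step (the subject string is only an example):
# search the selected mailbox for messages whose subject contains the code
typ, msg_ids = mail.search(None, '(SUBJECT "123456")')
for num in msg_ids[0].split():
    typ, msg_data = mail.fetch(num, '(RFC822)')
    raw_email = msg_data[0][1]   # the full message source as a string
    print raw_email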
|
Add BullsEye to Image (changing pixels)
Question: I am trying to add a target (bullseye) to an image without importing any
Python modules. It is proving to be rather difficult, but I believe I need to
define a circle through code. It should be done by changing the pixels as
opposed to importing functions.
Thanks
Needs to be in the centre of an image
Answer: You will need to add more information here - how is your image represented in
Python? Normally for production code, dealing with images is done through 3rd
party modules, each of which has a way to draw or change individual pixels. If
you are using none, you have to define your own image reading and writing code
(or a way to display the image on the screen).
Anyway, doing all of that without any "importing" will be quite artificial,
though feasible. Maybe you should use pnm files, which require a minimum of
encoding.
That said, you could represent the image in memory as a bytearray object, and
use math.sin and math.cos (you will have to import those, or resort to a
"raytracing" approach which can render the circle based on x**2 + y**2 == r**2)
to draw your circle.
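For illustration only, a minimal sketch of that idea over an in-memory greyscale image (the canvas size and the 10-pixel ring width are arbitrary assumptions):
width, height = 200, 200
image = [[255 for x in range(width)] for y in range(height)]  # white canvas
cx, cy = width // 2, height // 2                              # centre of the bullseye
for y in range(height):
    for x in range(width):
        dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5   # distance from the centre
        if dist <= 50 and int(dist) // 10 % 2 == 0:     # alternate 10-pixel-wide rings
            image[y][x] = 0                             # paint the ring black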
|
Python Matplotlib: Remove black background when rasterizing part of plot in EPS?
Question: This question is related to [a comment on another
question](http://stackoverflow.com/questions/19638773/matplotlib-plots-lose-
transparency-when-saving-as-ps-eps#comment29161375_19640319).
In matplotlib/python, I am trying to rasterize a specific element in my image
and save it to eps. The issue is that when saving to EPS (but not SVG or PDF),
a black background appears behind the rasterized element.
Saving to PDF and converting to EPS does not seem to be a reasonable solution,
as there are weird pdf2ps and pdftops conversion issues that make
understanding bounding boxes very ... scary (or worse, seemingly
inconsistent). My current work around involves a convoluted process of saving
in svg and export in Inkscape, but this should also not be required.
Here is the sample code needed to reproduce the problem. Matplotlib and Numpy
will be needed. If the file is saved to `mpl-issue.py`, then it can be run
with:
`python mpl-issue.py`
#!/usr/bin/env python
# mpl-issue.py
import numpy as np
import matplotlib as mpl
# change backend to agg
# must be done prior to importing pyplot
mpl.use('agg')
import matplotlib.pyplot as plt
def transparencytest():
# create a figure and some axes
f = plt.figure()
a = {
'top': f.add_subplot(211),
'bottom': f.add_subplot(212),
}
# create some test data
# obviously different data on the subfigures
# just for demonstration
x = np.arange(100)
y = np.random.rand(len(x))
y_lower = y - 0.1
y_upper = y + 0.1
# a rasterized version with alpha
a['top'].fill_between(x, y_lower, y_upper, facecolor='yellow', alpha=0.5, rasterized=True)
# a rasterized whole axis, just for comparison
a['bottom'].set_rasterized(True)
a['bottom'].plot(x, y)
# save the figure, with the rasterized part at 300 dpi
f.savefig('testing.eps', dpi=300)
f.savefig('testing.png', dpi=300)
plt.close(f)
if __name__ == '__main__':
print plt.get_backend()
transparencytest()
The `testing.png` image looks like this:

The `testing.eps` image ends up looking like this (in converted pdf versions and
the figure-rasterized png): 
The black backgrounds behind the rasterized elements are not supposed to be
there. How can I remove the black backgrounds when saving an eps figure with
rasterized elements in it?
This has been tested with a bunch of other mpl backends, so it does not appear
to be a specific problem with agg. Mac OS X 10.9.4, Python 2.7.8 built from
MacPorts, Matplotlib 1.3.1.
Answer: This was a known bug which has been fixed.
This is due to the fact that eps does not know about transparency and the
default background color for the rasterization was (0, 0, 0, 0) (black which
is fully transparent).
I also had this problem
(<https://github.com/matplotlib/matplotlib/issues/2473>) and it is fixed
(<https://github.com/matplotlib/matplotlib/pull/2479>) in matplotlib 1.4.0
which was released last night.
|
Python Pandas Append List of Dataframes
Question: This is a sort of simple question but I don't think it has been asked before.
If I have a list of dataframes (they need to be in this format because of
multiprocessing),
df_list=[df1,df2,...,dfn]
Is there an elegant way to append all of them? A one liner would be even
better.
Answer: Following parallel processing example works in [IPython](http://ipython.org)
by using concat method:
from IPython import parallel
clients = parallel.Client() #a lightweight handle on all the engines of a cluster
clients.block = True # use synchronous computations
print(clients.ids)
dview = clients[:] #dview = clients.direct_view()
dview.block = True
dview.scatter("experiment", myDataFrame) # <myDataFrame> scattered as <experiment> to the engines
dview["wlist_ptrn"] = wlist_ptrn
dview.execute("experiment['allFeats'] = experiment.ttext.str.findall(wlist_ptrn)")
return pd.concat(dview.gather("experiment")) # gather method returns a list of data frames
I hope it can be useful for the multiprocessing module output.
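If the frames end up in a plain Python list (for example gathered from a multiprocessing pool), the one-liner is simply `pandas.concat`; a tiny sketch:
import pandas as pd
df_list = [pd.DataFrame({'a': [1, 2]}), pd.DataFrame({'a': [3, 4]})]
combined = pd.concat(df_list, ignore_index=True)   # one frame, index renumbered 0..3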
|
How to capture special keys for python console application
Question: Python 2.7
I'm trying to capture key presses in an application I'm writing using the
getch() function in the msvcrt module. Some are easy, enter is 13, backspace
is 8, .> is 46 etc. Some keys, such as Home, I can't work out.
From the docstring for getch():
"If the pressed key was a special function key, this will return ‘000’ or
‘xe0’; the next call will return the keycode."
I've tried testing for a return value of '000' or 'xe0' but this is not
returned. What happens is I get 224 returned and on the next call of getch() I
get another code, so for Home it's 71. Other special keys behave this way too,
End is 224 79, Insert is 224 82, Page Up is 224 73 etc. I can't explain this
behaviour; I've tried seeing if adding the two values together and then taking
off a power of two helps (i.e. 224 + 73 - 256) but it doesn't produce anything
useful.
Does anyone understand this behaviour and/or does anyone have any advice about
how to capture these keys (I didn't want to hard code the 224 + x pattern
values as I'm not confident these are consistent with other users)?
Thank you.
EDIT: code if anyone wants to try it out
import msvcrt
while True:
key = msvcrt.getch()
print ord(key)
Answer: The mentioned value returned by `getch()` is not `'xe0'`, it's **`'\xe0'`** \-
note the backslash indicating an [escape
sequence](http://legacy.python.org/dev/peps/pep-0223/). `224` is just the
decimal value of that byte:
ord('\xe0') == 224
So in your case, this should work:
while True:
key = msvcrt.getch()
if key in ('\000', '\xe0'):
# special key, handle accordingly
# ...
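Putting it together, a small sketch that reads the second byte for extended keys, using only the keycodes observed in the question (other codes can be added as they are discovered):
import msvcrt
SPECIAL = {71: 'Home', 79: 'End', 73: 'Page Up', 82: 'Insert'}  # from the question's tests
while True:
    key = msvcrt.getch()
    if key in ('\000', '\xe0'):
        code = ord(msvcrt.getch())               # second call returns the real keycode
        print 'special key:', SPECIAL.get(code, code)
    else:
        print 'ordinary key:', ord(key)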
|
Indexing Error with HDU FITS file format with numpy
Question: I'm attempting to read the data from a FITS file using the `astropy` module
`fits` and then standard numpy array handling. However, for some reason I am
receiving the following error:
IndexError: too many indices
This is the code that I am using:
from astropy.io import fits
import matplotlib.pyplot as plt
hdulist = fits.open('/Users/iMacHome/Downloads/spec-1959-53440-0605.fits')
hdu = hdulist[1]
data = hdu.data
flux = data[:, 1]
^ Error Traceback to the `flux = data[:, 1]` line.
loglam = data[:, 2]
This may be a question that perhaps astronomers could answer (or,
specifically, astronomers familiar with `.fits` files from the SDSS), but I
welcome the input from numpy and python users!
Answer: I have just had the following answer from the SDSS help desk:
Replace:
flux = data[:,0]
loglam = data[:,1]
with
flux = data['flux']
loglam = data['loglam']
This is the correct way to access fields in a Numpy record array.
|
How do I separate out unique rows in a list that has both a datetime and float column?
Question: I'm relatively new to Python, and I am having trouble separating out unique
rows from a data set that I had recently converted into lists. I separated out
the data's unixtime recordings and converted them into datetime.
Then when I recombined the data into a list I tried to separate out the unique
rows of data. But instead I get the error.
[[[datetime.datetime(2014, 6, 20, 0, 0) -16.0]
[datetime.datetime(2014, 6, 20, 0, 0) -16.0]........
Traceback (most recent call last):
File "C:\Users\lenovo\Favorites\Microsoft 网站\Downloads\OTdataparser.py", line 33, in <module>
indicies = np.unique(okdat, return_index = True) #<-- NOT WORKING
File "C:\Python27\lib\site-packages\numpy\lib\arraysetops.py", line 180, in unique
perm = ar.argsort(kind='mergesort')
TypeError: can't compare datetime.datetime to float
My script is below.
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import math
ds5 = np.genfromtxt("gpsdata.dat.140620", delimiter = '',
usecols = (2,4,5), dtype = object)
print ds5
ds = np.array([x for x in ds5 if x[0] == "06/20/2014"])
dot = ds[:,2].astype(float)
print ds
rndsht = np.genfromtxt(ds[:,1], delimiter = ".", dtype = float) #Rm decimal
print rndsht
dutc = np.array([datetime.utcfromtimestamp(x) for x in rndsht[:,0]])
print dutc
#dutc = np.array([datetime.utcfromtimestamp(x) for x in ds[:,1].astype(float)])
okdat = np.dstack((dutc,dot))
#okdat.astype(object)
print okdat
#indicies = np.unique(dutc, return_index=True) #<-- WORKS! BUT okdat??
#print indicies
indicies = np.unique(okdat, return_index = True) #<-- NOT WORKING
print indicies
#Can't figure out how to use indicies to limit dot
Answer: You could write your own unique function.
Here is a quick example (you can probably do better). Note that it doesn't
preserve order, but you could use insert and do that.
def unique(data):
x = 0
while x < len(data):
i = data[x]
c = 0
while (i in data):
c += 1
data.remove(i)
data.append(i)
if (c <= 1):
x += 1
return data
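Alternatively, a small order-preserving sketch that builds a new list instead of mutating the input:
def unique_ordered(data):
    result = []
    for item in data:
        if item not in result:   # linear scan, fine for small lists
            result.append(item)
    return result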
|
Error in if statement at the colon
Question: I am helping my friend with his new idea about prime numbers. I am writing a
simple python script. I've encountered a SyntaxError and I don't know why. Thanks
for any help.
File "./prime.py", line 9 if isPrime(n) and isPrime(m) and isPrime(p):^ SyntaxError: invalid syntax
Code:
#!/usr/bin/env python
##from __future__ import print_function
from math import floor, ceil, sqrt
def main():
n = int(raw_input('Nhap so thu nhat: ')) #input number
m = int(raw_input('Nhap so thu hai: ')) #input number
p = int(raw_input('Nhap so thu ba: ') #input number
if isPrime(n) and isPrime(m) and isPrime(p):
step = 2
q = n
while q != n*n:
if isPrime(q):
print(q)
q += step
step += 2
def isPrime(n):
if n <= 3:
if n > 1:
return True
else:
return False
if n%2 == 0 or n%3 == 0:
return False
sqroot = int(n**.5)
for i in range(5, sqroot + 1, 6):
if n%i == 0 or n%(i+2) == 0:
return False
return True
if __name__ == '__main__':
main()
Answer: You have a missing `)` in the line
p = int(raw_input('Nhap so thu ba: ')
It should be
p = int(raw_input('Nhap so thu ba: '))
^
|
Python Pandas read_csv with nrows=1
Question: I have this code that reads a text file with headers and appends another file
with the same headers to it. As the main file is very huge, I only want to
read in part of it and get the column headers. I get the error below if the
only line in the file is the header. And I do not know how many rows the
file has. What I would like to achieve is to read in the file and get the
column header of the file. Because I want to append another file to it, I am
trying to ensure that the columns are correct.
import pandas as pd
main = pd.read_csv(main_input, nrows=1)
data = pd.read_csv(file_input)
data = data.reindex_axis(main.columns, axis=1)
data.to_csv(main_input,
quoting=csv.QUOTE_ALL,
mode='a', header=False, index=False)
Examine the stack trace:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\gohm\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 420, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\gohm\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 221, in _read
return parser.read(nrows)
File "C:\Users\gohm\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 626, in read
ret = self._engine.read(nrows)
File "C:\Users\gohm\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 1070, in read
data = self._reader.read(nrows)
File "parser.pyx", line 727, in pandas.parser.TextReader.read (pandas\parser.c:7110)
File "parser.pyx", line 774, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:7671)
StopIteration
Answer: It seems that the whole file may be being read into memory. You can specify a
`chunksize=` in `read_csv(...)` [as discussed in the docs
here.](http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-
files-chunk-by-chunk)
I think that `read_csv`s memory usage had been overhauled in version 0.10. So
pandas your version makes a difference too see [this
answer](http://stackoverflow.com/questions/11622652/large-persistent-
dataframe-in-pandas) from @WesMcKinney and the associated comments. The
changes were also discussed a while ago on [Wes'
blog](http://wesmckinney.com/blog/?p=543)
import pandas as pd
from cStringIO import StringIO
csv_data = """\
header, I want
0.47094534, 0.40249001,
0.45562164, 0.37275901,
0.05431775, 0.69727892,
0.24307614, 0.92250565,
0.85728819, 0.31775839,
0.61310243, 0.24324426,
0.669575 , 0.14386658,
0.57515449, 0.68280618,
0.58448533, 0.51793506,
0.0791515 , 0.33833041,
0.34361147, 0.77419739,
0.53552098, 0.47761297,
0.3584255 , 0.40719249,
0.61492079, 0.44656684,
0.77277236, 0.68667805,
0.89155627, 0.88422355,
0.00214914, 0.90743799
"""
tfr = pd.read_csv(StringIO(csv_data), header=None, chunksize=1)
main = tfr.get_chunk()
|
rename a log file in python using a logging module
Question: I need to rename a logging file but continue logging to it afterwards. I'm
running Python 3.x and using a `logging` module to create logs.
logger = logging.getLogger(r'mylogger')
handler = logging.FileHandler(r'mylog.txt', mode = r'w')
logger.addHandler(handler)
logger.info(r'msg 1')
logger.info(r'msg 2')
handler.renameFile(r'my_newlog.txt') # <--- Is something like this possible?
logger.info(r'msg 3')
Does anyone know if something like this is possible? If no, how would I go
about doing it? One idea that I had was to close the file (`handler.close()`),
remove the handler from the logger, rename the old file, create a new handler
with the new file name and add it to the logger. The issue however is that I
would need to create a completely identical handler, i.e it would need to have
the same attributes as the old one (with exception of the file name of
course). Is there a simple way to store old attributes and then use them to
set the attributes of the new handler?
Any help is appreciated.
Answer:
import glob
import logging
import logging.handlers
LOG_FILENAME = 'logging_rotatingfile_example.out'
# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)
# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(
LOG_FILENAME, maxBytes=20, backupCount=5)
my_logger.addHandler(handler)
# Log some messages
for i in range(20):
my_logger.debug('i = %d' % i)
# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)
for filename in logfiles:
print(filename)
The result should be 6 separate files, each with part of the log history for
the application:
logging_rotatingfile_example.out
logging_rotatingfile_example.out.1
logging_rotatingfile_example.out.2
logging_rotatingfile_example.out.3
logging_rotatingfile_example.out.4
logging_rotatingfile_example.out.5
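If you really do need to rename the current file in place rather than rotate, a sketch of the close/rename/re-attach idea from the question, carrying over the old handler's formatter and level (untested):
import os
import logging
def rename_log(logger, handler, new_name):
    handler.close()                                  # flush and release the file
    logger.removeHandler(handler)
    os.rename(handler.baseFilename, new_name)
    new_handler = logging.FileHandler(new_name, mode='a')
    new_handler.setFormatter(handler.formatter)      # keep the old formatting
    new_handler.setLevel(handler.level)              # keep the old level
    logger.addHandler(new_handler)
    return new_handler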
|
How to pull a file with python code in Unix env
Question: I have the file in this place
/home/unica/app/Affinium/Campaign/partitions/partition1/scripts/runscripts/campaigns/cnyr/dev
I want to call it here.like.
with open('/home/unica/app/Affinium/Campaign/partitions/partition1/scripts/runscripts/campaigns/cnyr/dev/CNYR_DM_TM_CAMPAIGN_WAVES.csv','rb') as csvfile
But it is throwing a syntax error. Also, how can I simplify the path name
into some alias?
Answer: Try this:
fileName = '/home/unica/app/Affinium/Campaign/partitions/partition1/scripts/runscripts/campaigns/cnyr/dev/CNYR_DM_TM_CAMPAIGN_WAVES.csv'
with open(fileName, 'rb') as csvfile: # notice that the line must end with a ':'
for line in csvfile:
# do something
Or even better, use the [`csv`](https://docs.python.org/2/library/csv.html)
module:
import csv
with open(fileName, 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='|') # specify delimiter, etc.
for row in reader:
# do something
|
Python CSV module handling comma within quote inside a field
Question: I am using Python's csv module to parse data from a CSV file in my
application. While testing the application, my colleague entered a piece of
sample text copy-pasted from random website.
The sample text has double quotes inside the field and a comma within the
double quotes. The commas outside of double quotes are correctly handled by
the csv module but the comma inside the double quote is split into next
column. I looked at the csv specification and the field does comply to the
specification by escaping the double quotes by another set of double quotes.
I checked the file in libreoffice and it is handled correctly.
Here's one line from the csv data where I'm having a problem:
company_name,company_revenue,company_start_year,company_website,company_description,company_email
Acme Inc,80000000000000,2004,http://google.com,"The company is never clearly defined in Road Runner cartoons but appears to be a conglomerate which produces every product type imaginable, no matter how elaborate or extravagant - most of which never work as desired or expected. In the Road Runner cartoon Beep, Beep, it was referred to as ""Acme Rocket-Powered Products, Inc."" based in Fairfield, New Jersey. Many of its products appear to be produced specifically for Wile E. Coyote; for example, the Acme Giant Rubber Band, subtitled ""(For Tripping Road Runners)"".
Sometimes, Acme can also send living creatures through the mail, though that isn't done very often. Two examples of this are the Acme Wild-Cat, which had been used on Elmer Fudd and Sam Sheepdog (which doesn't maul its intended victim); and Acme Bumblebees in one-fifth bottles (which sting Wile E. Coyote). The Wild Cat was used in the shorts Don't Give Up the Sheep and A Mutt in a Rut, while the bees were used in the short Zoom and Bored.
While their products leave much to be desired, Acme delivery service is second to none; Wile E. can merely drop an order into a mailbox (or enter an order on a website, as in the Looney Tunes: Back in Action movie), and have the product in his hands within seconds.",[email protected]
Here's what it looks like in the debug log:
2014-08-27 21:35:53,922 - DEBUG: company_website=http://google.com
2014-08-27 21:35:53,923 - DEBUG: company_revenue=80000000000000
2014-08-27 21:35:53,923 - DEBUG: company_start_year=2004
2014-08-27 21:35:53,923 - DEBUG: account_description=The company is never clearly defined in Road Runner cartoons but appears to be a conglomerate which produces every product type imaginable, no matter how elaborate or extravagant - most of which never work as desired or expected. In the Road Runner cartoon Beep, Beep, it was referred to as "Acme Rocket-Powered Products
2014-08-27 21:35:53,924 - DEBUG: company_name=Acme Inc
2014-08-27 21:35:53,925 - DEBUG: company_email=Inc."" based in Fairfield
The relevant piece of code to handle csv parsing:
with open(csvfile, 'rU') as contactsfile:
# sniff for dialect of csvfile so we can automatically determine
# what delimiters to use
try:
dialect = csv.Sniffer().sniff(contactsfile.read(2048))
except:
dialect = 'excel'
get_total_jobs(contactsfile, dialect)
contacts = csv.DictReader(contactsfile, dialect=dialect, skipinitialspace=True, quoting=csv.QUOTE_MINIMAL)
# Start reading the rows
for row in contacts:
process_job()
for key, value in row.iteritems():
logging.debug("{}={}".format(key,value))
I understand that this is just junk data and we'll likely never encounter such
data, but the csv files we receive are not within our control and we can have
such an edge case. And since it's a valid csv file, which is handled correctly
by libreoffice, it makes sense for me to handle it correctly as well.
I have searched for other questions on csv handling where people have had
problems with either handling of quotes or commas within a field. I have both
of these working fine; my problem is when a comma is nested within quotes
within a field. There is a question with the same problem which does solve the
issue [Comma in DoubleDouble Quotes in CSV
File](http://stackoverflow.com/questions/20990472/comma-in-doubledouble-
quotes-in-csv-file) but it's a hackish way where I am not preserving the
contents as they are given to me, which is a valid way as per RFC4180.
Answer: The [Dialect.doublequote
attribute](https://docs.python.org/3.4/library/csv.html#csv.Dialect.doublequote)
> controls how instances of quotechar appearing inside a field should be
> themselves be quoted. When True, the character is doubled. When False, the
> escapechar is used as a prefix to the quotechar. It defaults to True.
The sniffer is setting the doublequote attribute to False, but the CSV you
posted should be parsed with `doublequote = True`:
import csv
with open(csvfile, 'rb') as contactsfile:
# sniff for dialect of csvfile so we can automatically determine
# what delimiters to use
try:
dialect = csv.Sniffer().sniff(contactsfile.read(2048))
except:
dialect = 'excel'
# get_total_jobs(contactsfile, dialect)
contactsfile.seek(0)
contacts = csv.DictReader(contactsfile, dialect=dialect, skipinitialspace=True,
quoting=csv.QUOTE_MINIMAL, doublequote=True)
# Start reading the rows
for row in contacts:
for key, value in row.iteritems():
print("{}={}".format(key,value))
yields
company_description=The company is never clearly defined in Road Runner cartoons but appears to be a conglomerate which produces every product type imaginable, no matter how elaborate or extravagant - most of which never work as desired or expected. In the Road Runner cartoon Beep, Beep, it was referred to as "Acme Rocket-Powered Products, Inc." based in Fairfield, New Jersey. Many of its products appear to be produced specifically for Wile E. Coyote; for example, the Acme Giant Rubber Band, subtitled "(For Tripping Road Runners)".
Sometimes, Acme can also send living creatures through the mail, though that isn't done very often. Two examples of this are the Acme Wild-Cat, which had been used on Elmer Fudd and Sam Sheepdog (which doesn't maul its intended victim); and Acme Bumblebees in one-fifth bottles (which sting Wile E. Coyote). The Wild Cat was used in the shorts Don't Give Up the Sheep and A Mutt in a Rut, while the bees were used in the short Zoom and Bored.
While their products leave much to be desired, Acme delivery service is second to none; Wile E. can merely drop an order into a mailbox (or enter an order on a website, as in the Looney Tunes: Back in Action movie), and have the product in his hands within seconds.
company_website=http://google.com
company_start_year=2004
company_name=Acme Inc
company_revenue=80000000000000
[email protected]
* * *
Also, [per the docs](https://docs.python.org/2.7/library/csv.html#csv.reader),
in Python2 the filehandle should be opened in 'rb' mode, not 'rU' mode:
> If csvfile is a file object, it must be opened with the ‘b’ flag on
> platforms where that makes a difference.
|
Set Input Variable at the Start of a Python Script
Question: I'm working on a python script in Red Hat that connects to a SQL database and
runs queries on a specific row. What I am attempting to do is to set the row
that I want to query as an input variable that gets passed into the script
when I start it. For example, when I start the script I would like to just run
the command:
./SQLQuery.py Row_To_Query = Row_Name
For a similar script that I have in VBScript I would run the wscript command
like this:
wscript SQLQuery.vbs /name:Row_Name
Then the row name would get passed into the script and the relevant data used
as the script progresses.
My end goal is to run the python script as a task every 10 minutes or so and
pull the relevant data from a specific row, but also have the ability to
specify which row the SQL queries are run against without having to edit the
script each time I want to run it.
Answer: You can use [`argparse`](https://docs.python.org/dev/library/argparse.html) to
parse the commandline arguments given in a standard unix-like format:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--row-to-query', type=str, help='row to query')
namespace = parser.parse_args()
row_name = namespace.row_to_query
...
You'd then run the program like:
./SQLQuery.py --row-to-query=some_row_name
or
./SQLQuery.py --row-to-query some_row_name
both invocations should be equivalent.
|
ListRasters, TypeError: 'NoneType' object is not iterable
Question: Hi I have very minimal python experience and am not sure why I am getting this
type error. I am trying to perform a raster to polygon conversion with the
rasters from a different workspace than the initial env.workspace. Is this
possible? And how can there be a no-data error in the raster2 ListRasters()?
The reclassify command works fine and creates the output in the defined folder
but the raster to polygon tool is what is signaling the error.
Thanks for the help I need this done for work as soon as possible.
Here is the error:
Traceback (most recent call last):
File "C:\Users\mkelly\Documents\Namibia\Raster_Water\Script_try2.py", line 30, in <module>
for raster2 in arcpy.ListRasters():
TypeError: 'NoneType' object is not iterable
Here is the code:
# Import arcpy module
import arcpy
from arcpy import env
arcpy.env.overwriteOutput = True
# Check out any necessary licenses
arcpy.CheckOutExtension("3D")
#Set the workplace
arcpy.env.workspace = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993"
#for all files in 1993, reclassify to water only rasters
for raster in arcpy.ListRasters():
folder = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Reclass" + "\\"
outraster = folder + raster
arcpy.Reclassify_3d(raster, "Value", "1 1", outraster, "NODATA")
#Can I set up a new env workspace to get reclassified rasters from "Reclass" folder?
arcpy.env.workspace = r"C:Users\mkelly\Documents\Namibia\Raster_Water\1993\Reclass"
#for all files in 1993\Reclass, perform RastertoPolygon
for raster2 in arcpy.ListRasters():
folder2 = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Polygons" + "\\"
outraster2 = folder2 + raster2
arcpy.RasterToPolygon_conversion(raster2, outraster2, "NO_SIMPLIFY", "VALUE")
print "end Processing..."`
Thanks in advance to anyone that can give guidance or suggestions!
Answer: `arcpy.ListRasters()` doesn't take any mandatory argument, see the [help
page](http://resources.arcgis.com/en/help/main/10.2/#/ListRasters/03q30000005m000000/).
Are you sure there are any rasters in the Reclass folder? Are they
successfully created by `Reclassify_3d`? My guess is that `outraster` is not
well understood as you combine single and double backslashes in the path.
Instead, write `outraster = os.path.join(folder, raster)` and `import os` in
the beginning of your script.
Also, the script will have trouble creating the polygons, because `raster2` is
probably something like raster.tiff or raster.jpg. You're using this to name
your output shapefile. If your rasters have an extension, you should trim it
using e.g. `arcpy.Describe(raster).baseName`. And in any case, add `.shp` as
you're saving the output in a folder.
**EDIT:** There is a typo in the second workspace, you've forgotten the `\`
after the C in
`r"C:Users\mkelly\Documents\Namibia\Raster_Water\1993\Reclass"`. The workspace
is wrong so your list of rasters is empty.
What format are your rasters? The extension (e.g. `'.tiff'`) will be used in
the name of the output shapefile so you have to remove it. That's what I mean
by 'trim'. And you should add `'.shp'`.
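Putting those suggestions together, the second loop might look roughly like this (an untested sketch; the paths mirror the ones in the question):
import os
import arcpy
arcpy.env.workspace = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Reclass"
folder2 = r"C:\Users\mkelly\Documents\Namibia\Raster_Water\1993\Polygons"
for raster2 in arcpy.ListRasters():
    base = arcpy.Describe(raster2).baseName              # name without the extension
    out_shp = os.path.join(folder2, base + ".shp")
    arcpy.RasterToPolygon_conversion(raster2, out_shp, "NO_SIMPLIFY", "VALUE")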
|
Run statement in terminal if user enters a certain key otherwise continue with raw_input in python
Question: I would like the terminal to ask the user questions, but I also want the user
to be able to enter an alternate character that prints something out, and I was
wondering how to do this across several user inputs. Thanks; this is a chunk of
my code.
user_choice = []
if user_choice == "b":
print "we have: strawberry, chocolate, vanilla, and mint."
else:
# run the code like normal
user_choice.append( raw_input("would you like some ice cream?: "))
if user_choice[-1] =="yes":
print "Here you go."
user_choice.append( raw_input("would you like some cake?: "))
if user_choice[-1] =="yes":
print "Ooh it looks like we just ran out of cake."
Answer: Personally, I would make a function that computes your responses based on the
current question and the user input. You can also have a separate function
which handles the convoluted process of asking questions so you don't need to
repeat your code much. It makes it easy to extend your code this way.
import sys
def getResponse(question, userInput):
# User giving input that doesn't answer the question, but still
# invokes an appropriate response
if userInput == 'b':
return ("We have: strawberry, chocolate, vanilla, and mint.", False)
# Finally, if the user gives input that answers your question
# default case for unhandled input is included
else:
return {
("would you like some ice cream?: ", "yes") :
("Here you go.", True),
("would you like some cake?: ", "yes") :
("Ooh it looks like we just ran out of cake.", True)
}.setdefault((question, userInput), ("", False))
def askQuestion(question):
sys.stdout.write(question)
answered = False
while not answered:
response, answered = getResponse(question, raw_input())
if response: print response
# Ask questions
askQuestion("would you like some ice cream?: ")
askQuestion("would you like some cake?: ")
The functionality you ask for can be easily handled through the use of flow
control based on conditionals. Notice how I only used an if statement and a
while loop to find the solution. I also used an anonymous dictionary, but this
is not necessary, another if statement could have easily done the trick. if
statements and while loops are examples of flow control.
You can find more information on flow control in Chapter 3 of the Python
tutorial.
|
python transitivity between dictionaries
Question: I have a list like the following in python (the real one is huge and I cannot
do this only by looking at it):
original1=[['email', 'tel', 'fecha', 'descripcion', 'categ'],
['[email protected]', '1', '2014-08-06 00:00:06', 'MySpace a', 'animales'],
['[email protected]', '1', '2014-08-01 00:00:06', 'My Space a', 'ropa'],
['[email protected]', '2', '2014-08-06 00:00:06', 'My Space b', 'electronica'],
['[email protected]', '3', '2014-08-10 00:00:06', 'Myace c', 'animales'],
['[email protected]', '4', '2014-08-10 00:00:06', 'Myace c', 'animales']]
I split it between data and names to work with data:
datos=original1[-(len(original1)-1):len(original1)]
I need to build a dictionary that groups all the duplicates together, considering
email and tel, but I need to apply transitivity: line 0 = line 2 if we
consider email, but line 0 also matches line 1 if we consider tel, and line 1 =
line 3 if we consider email again, so all candidates in this case are 0, 1, 2
and 3, while 4 is alone.
I created the following code:
from collections import defaultdict
email_to_indices = defaultdict(list)
phone_to_indices = defaultdict(list)
for idx, row in enumerate(datos):
email = row[0].lower()
phone = row[1]
email_to_indices[email].append(idx)
phone_to_indices[phone].append(idx)
So now I need to apply transitivity rules to group 0 to 3 together, with 4 alone.
If you print
print 'email', email_to_indices
print 'phone', phone_to_indices
You get:
> email defaultdict(<type 'list'>, {'[email protected]': [0, 2],'[email protected]': [1, 3],
> '[email protected]': [4]})
>
> phone defaultdict(<type 'list'>, {'1': [0, 1], '3': [3], '2': [2], '4': [4]})
Don't know how to get the union of those considering the transitive property.
I need to get something like:
> first_group: [0, 1, 2 , 3]
> second_group: [4]
Thanks!
Answer: Here you have a graph, or [Bipartite
graph](https://en.wikipedia.org/wiki/Bipartite_graph) to be more precise. The
nodes are of two types: emails and phones. Two nodes are connected if there
exists a record with that email and phone. Or we even can say that the record
itself is the edge which connects two nodes.
The task is to find [Connected
components](https://en.wikipedia.org/wiki/Connected_component_\(graph_theory\))
of this graph. By following the links you can find algorithms which can do it
in linear time.
Of course some quick and dirty solutions can also be invented, and even may be
deemed appropriate if your dataset is small enough.
You can find some Python implementations here: [Python connected
components](http://stackoverflow.com/questions/10301000/python-connected-
components)
**UPDATE:** Here is an example of how you can construct the graph:
graph = {};
EMAIL = "email";
PHONE = "phone";
for rec in datos:
graph.setdefault((EMAIL, rec[0]), set()).add((PHONE, rec[1]));
graph.setdefault((PHONE, rec[1]), set()).add((EMAIL, rec[0]));
print "\n".join("%s: %s" % (str(node), str(linkedNodes)) for (node, linkedNodes) in graph.iteritems());
So every node has a type (`EMAIL` or `PHONE`, they can actually be just
integers, for example 0 and 1, I made them strings only for nice printing) and
a value. Graph is a dictionary with nodes as keys and sets of connected nodes
as values.
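For a small dataset, a plain traversal over that `graph` dict is enough to pull out the connected components (a sketch); each resulting component holds the email and phone nodes that belong together, and the question's `email_to_indices`/`phone_to_indices` dicts can then map them back to row numbers:
def connected_components(graph):
    seen = set()
    components = []
    for start in graph:
        if start in seen:
            continue
        comp = set()
        stack = [start]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)   # follow links we have not visited yet
        seen |= comp
        components.append(comp)
    return components
print connected_components(graph)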
|
How to get three integers from a specific line in a file with Python?
Question: I have a (ASCII) file, `foo.txt`, which has a lot of stuff in it but I only
care about the three numbers on line 2 (separated by white space). For your
information (I don't know if it will be relevant) the number of columns on the
lines before and after this line aren't the same as for line 2.
I want to take these three numbers from line 2 and store them as three
separate integers (whether three separate variables or a list of length 3 I
don't care).
I was using `linecache.getline()` to get that line specifically from the file
but it pulls the line out as one long string (rather than having each number
on the line be its own string) and I don't know how to extract the three
numbers from the resultant string.
Here is my code:
import linecache
linetemp = linecache.getline('foo.txt',2)
#nr = [int(i) for i in line.split(linetemp)]
print nr
The commented line was my attempt at extracting the numbers in linetemp as
integers but since linetemp is one string (rather than a list of strings) it
doesn't work.
If you can improve on what I have above using `linecache.getline()` or if you
have another method to pull the three numbers from line 2 of `foo.txt` I will
be happy either way.
Answer: Try
nr = [int(i) for i in linetemp.split()]
You need to call the `split()` function on the string you want to split.
Example:
In [1]: linetemp = ' 12 27 435'
In [2]: nr = [int(i) for i in linetemp.split()]
In [3]: nr
Out[3]: [12, 27, 435]
|
IPython Failing to Run Code
Question: I was trying out this Python code from [a training
website](http://www.pythonforbeginners.com/python-on-the-web/web-scraping-
with-beautifulsoup/) in IPython:
from bs4 import BeautifulSoup
import requests
url = raw_input("www.google.com")
r = requests.get("http://" +url)
data = r.text
soup = BeautifulSoup(data)
for link in soup.find_all('a'):
print(link.get('href'))
and found that it ran fine on the first try. I've now tried simply restarting
the kernel, opening a new notebook, and generally returning the settings to
how they were when I first ran the program with no luck. Why might IPython be
failing to run the code and giving no response at all (as though I haven't
clicked anything)?
Answer: Seems like `raw_input` is not supported by IPython. So it's probably just
hanging there. If you change:
`url = raw_input("www.google.com")`
to
`url = "www.google.com"`
it should work.
|
No such column error message
Question: I read the tutorial from the django site and now I hit the wall. It seems very
simple but it is giving me a problem. Everytime I run populate_rango.py script
I get an error.
I have modified my script and my models.py file based on the example from
Tango with Django site but still getting this not creating one of the column.
error :
Starting Rango population script...
Traceback (most recent call last):
File "populate_rango.py", line 67, in <module>
populate()
File "populate_rango.py", line 8, in populate
url="http://www.google.com")
File "populate_rango.py", line 55, in add_page
p = Page.objects.get_or_create(category=cat, title=title, url=url, likes=likes, views=views)[0]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/django/db/models/manager.py", line 154, in get_or_create
return self.get_queryset().get_or_create(**kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/query.py", line 376, in get_or_create
return self.get(**lookup), False
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/query.py", line 304, in get
num = len(clone)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/query.py", line 77, in __len__
self._fetch_all()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/query.py", line 857, in _fetch_all
self._result_cache = list(self.iterator())
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/query.py", line 220, in iterator
for row in compiler.results_iter():
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 713, in results_iter
for rows in self.execute_sql(MULTI):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/django/db/backends/util.py", line 69, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/django/db/backends/util.py", line 53, in execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site- packages/django/db/backends/sqlite3/base.py", line 451, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such column: rango_page.likes
populate_rango.py
import os
def populate():
search_cat = add_cat('Search Engins')
add_page(cat=search_cat,
title="Google",
url="http://www.google.com")
add_page(cat=search_cat,
title="Yahoo !",
url="http://www.yahoo.com")
add_page(cat=search_cat,
title="Bing",
url="http://www.bing.com")
social_cat = add_cat("Social Media")
add_page(cat=social_cat,
title="Facebook",
url="http://www.facebook.com")
add_page(cat=social_cat,
title="LinkedIn",
url="http://www.linkedin.com")
add_page(cat=social_cat,
title="Twitter",
url="http://www.twitter.com/")
news_cat = add_cat("News Sites")
add_page(cat=news_cat,
title="CNN",
url="http://www.cnn.com/")
comme_cat = add_cat("Commerce")
add_page(cat=comme_cat,
title="Amazon",
url="http://www.amazon.com")
add_page(cat=comme_cat,
title="eBay",
url="http://www.ebay.com")
for c in Category.objects.all():
for p in Page.objects.filter(category=c):
print "- {0} - {1}".format(str(c), str(p))
def add_page(cat, title, url, views=0, likes=0):
p = Page.objects.get_or_create(category=cat, title=title, url=url, likes=likes, views=views)[0]
return p
def add_cat(name):
c = Category.objects.get_or_create(name=name)[0]
return c
if __name__ == '__main__':
print "Starting Rango population script..."
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tangorango.settings')
from rango.models import Category, Page
populate()
models.py:
from django.db import models
class Category(models.Model):
name = models.CharField(max_length=128, unique=True)
def __unicode__(self):
return self.name
class Page(models.Model):
category = models.ForeignKey(Category)
title = models.CharField(max_length=128)
url = models.URLField()
likes = models.IntegerField(default=0)
views = models.IntegerField(default=0)
def __unicode__(self):
return self.title
Answer: run this command :
python manage.py sqlall <your_app>
see the result, then if the `likes` column isn't among them, run this
command and add it manually:
python manage.py dbshell
you can add it with this code:
ALTER TABLE "rango_page" ADD COLUMN "likes" integer NOT NULL DEFAULT 0;
**but note that syncdb doesn't integrate schema changes once the tables
are created. You have to delete the database manually and do syncdb again.**
For more information read
[THIS](https://docs.djangoproject.com/en/dev/ref/django-admin/#syncdb) section
on `syncdb` in the django docs!
|
Can you use GQL / Google DataStore outside of google appengine?
Question: Can I use GQL/DataStore in my local python app? I know I can use them locally
when running dev_appserver.py for a webapp, but now I don't want to do anything
web related and want to keep it all local with just my normal pythonfile.py,
with no HTML interfacing. Is it possible to do this?
Answer: > The Remote API can also be used within local applications. This will allow
> you to write local applications that use App Engine services and access
> datastore. It is important to note that using the Remote API will incur
> quota usage on the application you are accessing.
>
> Before beginning, make sure the App Engine SDK is added to your Python path
> and Remote API is enabled in your App Engine application. The following
> example shows the use of OAuth 2.0 credentials to configure the Remote API:
from google.appengine.ext.remote_api import remote_api_stub
from helloworld import helloworld
remote_api_stub.ConfigureRemoteApiForOAuth('your_app_id.appspot.com',
'/_ah/remote_api')
# Fetch the most recent 10 guestbook entries
entries = helloworld.Greeting.all().order("-date").fetch(10)
# Create our own guestbook entry
helloworld.Greeting(content="A greeting").put()
Reference: <https://cloud.google.com/appengine/docs/python/tools/remoteapi>
|
Change tick frequency on X (time, not number) axis in matplotlib
Question: My `python` plot only shows 2 tick points on the x axis.
I would like to have more, but don't know how.
x = [ datetime.datetime(1900,1,1,0,1,2),
datetime.datetime(1900,1,1,0,1,3),
...
] # ( more than 1000 elements )
y = [ 34, 33, 23, ............ ]
plt.plot( x, y )
The X axis only shows 2 tick intervals. I tried to use `.xticks`, but it
didn't work for the X axis and gave the below error:
TypeError: object of type 'datetime.datetime' has no len()
Answer: Whatever reason it is you are getting 2 ticks only by default, you can fix it
(customise it) by changing the ticker locator using a date locator.
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
x = [ datetime.datetime(1900,1,1,0,1,2),
datetime.datetime(1900,1,1,0,1,3),
...
] # ( more than 1000 elements )
y = [ 34, 33, 23, ............ ]
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
plt.plot( x, y )
ax.xaxis.set_major_locator(mdates.MinuteLocator(interval=15)) #to get a tick every 15 minutes
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M')) #optional formatting
You have several locators (for example: DayLocator, WeekdayLocator,
MonthLocator, etc.) read about it in the documentation:
<http://matplotlib.org/api/dates_api.html>
But maybe this example will help more:
<http://matplotlib.org/examples/api/date_demo.html>
|
In Python how can I create an organized dictionary through the process of reading a CSV file
Question: I am attempting to read a CSV file into Python and create an organized
dictionary from the data retrieved. The CSV has a similar format to the one
shown below.
Time A B C D
0 1 2 4 5
.1 3 3 5 7
.2 4 5 7 9
When complete I would like the dictionary to look something like this Dict =
{'0':[1 2 3 4], '.1':[3 3 5 7], '.2':[4 5 7 9]}
The code below is all I can think to do up to this point. It is not giving me
the correct dictionary. I am new to python so any help I can get is greatly
appreciated!
import csv
counter = 0
inp = open('Nodal_Quardnets.csv', 'rb')
Xcoord = {}
k = []
v = []
for line in inp.readlines():
sd = line.strip().split(',')
for value in range(len(sd)):
if counter == 0:
k.append(sd)
counter = counter + 1
else:
v.append(sd)
Xcoord = dict(zip(k,v))
Answer:
d = dict()
with open('Nodal_Quardnets.csv', 'r') as f:
f.readline() # disregard headers
for line in f:
values = [s.strip() for s in line.split(',')]
d[values[0]] = map(int,values[1:]) # convert list values to int
print d
Output
{'0': [1, 2, 4, 5],
'.2': [4, 5, 7, 9],
'.1': [3, 3, 5, 7]}
|
Python Concurrency: Hangs when using apply_async()
Question: I am trying to learn about Python concurrency. As an experiment, I have the
following program that uses a process pool and calls workers via
apply_async(). To share information between the processes (work and results),
I use a queue from multiprocessing.Manager().
However, this code hangs -- sometimes -- when all items in the work queue have
been processed, and I'm not sure why. I have to run the program a few times to
observe the behavior.
As a side note, I _can_ make this work correctly: I have found a design
pattern that people sometimes call the "poison pill" method, and it seems to
work. (In my worker() method, I enter an infinite loop and break out of the
loop when my work queue contains a sentinel value. I create as many sentinel
values on the work queue as I have running processes).
But I'm still interested in finding out why this code hangs. I am getting
output like the following (the process ID is in brackets):
Found 8 CPUs.
Operation queue has 20 items.
Will start 2 processes.
Joining pool...
[5885] entering worker() with work_queue size of 20
[5885] processed work item 0
[5885] worker() still running because work_queue has size 19
[5885] processed work item 1
[5885] worker() still running because work_queue has size 18
[5885] processed work item 2
[5885] worker() still running because work_queue has size 17
[5885] processed work item 3
[5885] worker() still running because work_queue has size 16
[5885] processed work item 4
[5885] worker() still running because work_queue has size 15
[5885] processed work item 5
[5886] entering worker() with work_queue size of 14
[5885] worker() still running because work_queue has size 14
[5886] processed work item 6
[5886] worker() still running because work_queue has size 13
[5885] processed work item 7
[5886] processed work item 8
[5885] worker() still running because work_queue has size 11
[5886] worker() still running because work_queue has size 11
[5885] processed work item 9
[5886] processed work item 10
[5885] worker() still running because work_queue has size 9
[5886] worker() still running because work_queue has size 9
[5885] processed work item 11
[5886] processed work item 12
[5885] worker() still running because work_queue has size 7
[5886] worker() still running because work_queue has size 7
[5885] processed work item 13
[5886] processed work item 14
[5885] worker() still running because work_queue has size 5
[5886] worker() still running because work_queue has size 5
[5885] processed work item 15
[5886] processed work item 16
[5885] worker() still running because work_queue has size 3
[5886] worker() still running because work_queue has size 3
[5885] processed work item 17
[5886] processed work item 18
[5885] worker() still running because work_queue has size 1
[5886] worker() still running because work_queue has size 1
[5885] processed work item 19
[5885] worker() still running because work_queue has size 0
[5885] worker() is finished; returning results of size 20
(The program hangs on the last line. The other process -- 5886 -- doesn't seem
to finish.)
import multiprocessing
from multiprocessing import Pool
import os

# Python 2.7.6 on Linux

# Worker (consumer) process
def worker(work_queue, results_queue):
    print "[%d] entering worker() with work_queue size of %d" % (os.getpid(), work_queue.qsize())
    while not work_queue.empty():
        item = work_queue.get()
        print "[%d] processed work item %s" % (os.getpid(), item)
        s = '[%d] has processed %s.' % (os.getpid(), item)
        results_queue.put(s)
        work_queue.task_done()
        print "[%d] worker() still running because work_queue has size %d" % (os.getpid(), work_queue.qsize())
    print "[%d] worker() is finished; returning results of size %d" % (os.getpid(), results_queue.qsize())

if __name__ == '__main__':
    MAX_PROCESS = 2   # Max number of processes to create
    MAX_ITEMS = 20    # Max work items to process

    m = multiprocessing.Manager()
    results_queue = m.Queue()
    work_queue = m.Queue()

    # Assign work
    for x in xrange(MAX_ITEMS):
        work_queue.put(x)

    print "Found %d CPUs." % multiprocessing.cpu_count()
    print "Operation queue has %d items." % work_queue.qsize()
    print "Will start %d processes." % MAX_PROCESS

    # Pool method
    pool = Pool(processes=MAX_PROCESS)
    for n in range(0, MAX_PROCESS):
        pool.apply_async(worker, args=(work_queue, results_queue))
    pool.close()

    print "Joining pool..."
    pool.join()
    print "Joining pool finished..."
    print "--- After pool completion ---"
    print "Work queue has %d items." % work_queue.qsize()
    print "Results queue has %d items." % results_queue.qsize()
    print "Results are:"

    while not results_queue.empty():
        item = results_queue.get()
        print str(item)
        results_queue.task_done()
    print "--End---"
Thanks for the help.
Answer: You're hitting a race condition - process `5886` sees that `work_queue` has an
item in it:
[5886] worker() still running because work_queue has size 1
So it loops back around to the blocking `get` call:
while not work_queue.empty():  # It sees it's not empty here
    item = work_queue.get()    # But there's no item left by the time it gets here!
However, after it calls `work_queue.empty()`, but _before_ it calls
`work_queue.get()`, the other worker (`5885`) consumes the last item in the
queue:
[5885] processed work item 19
[5885] worker() still running because work_queue has size 0
[5885] worker() is finished; returning results of size 20
So now `5886` will block forever on the `get`. In general, you shouldn't use
the `empty()` method to decide whether or not to make a blocking `get()` call
if there are multiple consumers of a queue, because it's susceptible to this
kind of race condition. Using the "poison pill"/sentinel method you mentioned
is the right way to handle this scenario; alternatively, use a non-blocking
`get` call and catch the `Empty` exception if it occurs:
try:
    item = work_queue.get_nowait()
    print "[%d] processed work item %s" % (os.getpid(), item)
    s = '[%d] has processed %s.' % (os.getpid(), item)
    results_queue.put(s)
    work_queue.task_done()
    print "[%d] worker() still running because work_queue has size %d" % (os.getpid(), work_queue.qsize())
except Queue.Empty:
    print "[%d] worker() is finished; returning results of size %d" % (os.getpid(), results_queue.qsize())
Note that you can only use this approach if you know the queue won't ever grow
in size once the workers have begun to consume from it. Otherwise a worker might
decide it is finished while there are still more items on their way to the
queue.
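For completeness, a minimal sketch of the sentinel ("poison pill") approach the question alluded to might look like the following. The names are illustrative rather than taken from the original post, and it again assumes the imports from the question: the main process puts one sentinel per worker on the queue after the real work items, so a blocking `get` is safe because every worker is guaranteed to eventually receive a sentinel.

SENTINEL = None  # any unique marker value works

def worker(work_queue, results_queue):
    while True:
        item = work_queue.get()  # blocking get is safe with sentinels
        if item is SENTINEL:
            work_queue.task_done()
            break  # poison pill received: stop consuming
        results_queue.put('[%d] has processed %s.' % (os.getpid(), item))
        work_queue.task_done()

# In the __main__ block, after queueing the real work items:
#     for _ in range(MAX_PROCESS):
#         work_queue.put(SENTINEL)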
|