Python class not working after copying it
Question: I copied some code from `nltk` directly into my project (with crediting
sources) because my school computers don't allow me to install libraries.
I copied the `PorterStemmer` class and `StemmerI` interface from
[here](http://www.nltk.org/_modules/nltk/stem/porter.html#PorterStemmer)
However, when I run the code, I get
NotImplementedError
Why is this happening?
How I'm running + Stacktrace:
python
>>> from nltk_functions.stemmer import PorterStemmer as ps1
>>> stem1 = ps1()
>>> stem1.stem("operation")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "nltk_functions\stemmer.py", line 13, in stem
        """
    NotImplementedError
* * *
Here is all the code:
from __future__ import print_function, unicode_literals
import re
class StemmerI(object):
""" A processing interface for removing morphological affixes from
words. This process is known as stemming."""
def stem(self, token):
"""
Strip affixes from the token and return the stem.
:param token: The token that should be stemmed.
:type token: str
"""
raise NotImplementedError()
class PorterStemmer(StemmerI):
## --NLTK--
## Add a module docstring
"""
A word stemmer based on the Porter stemming algorithm.
Porter, M. \"An algorithm for suffix stripping.\"
Program 14.3 (1980): 130-137.
A few minor modifications have been made to Porter's basic
algorithm. See the source code of this module for more
information.
The Porter Stemmer requires that all tokens have string types.
"""
# The main part of the stemming algorithm starts here.
# Note that only lower case sequences are stemmed. Forcing to lower case
# should be done before stem(...) is called.
def __init__(self):
## --NEW--
## This is a table of irregular forms. It is quite short, but still
## reflects the errors actually drawn to Martin Porter's attention over
## a 20 year period!
##
## Extend it as necessary.
##
## The form of the table is:
## {
## "p1" : ["s11","s12","s13", ... ],
## "p2" : ["s21","s22","s23", ... ],
## ...
## "pn" : ["sn1","sn2","sn3", ... ]
## }
##
## String sij is mapped to paradigm form pi, and the main stemming
## process is then bypassed.
irregular_forms = {
"sky" : ["sky", "skies"],
"die" : ["dying"],
"lie" : ["lying"],
"tie" : ["tying"],
"news" : ["news"],
"inning" : ["innings", "inning"],
"outing" : ["outings", "outing"],
"canning" : ["cannings", "canning"],
"howe" : ["howe"],
# --NEW--
"proceed" : ["proceed"],
"exceed" : ["exceed"],
"succeed" : ["succeed"], # Hiranmay Ghosh
}
self.pool = {}
for key in irregular_forms:
for val in irregular_forms[key]:
self.pool[val] = key
self.vowels = frozenset(['a', 'e', 'i', 'o', 'u'])
def _cons(self, word, i):
"""cons(i) -is TRUE <=> b[i] is a consonant."""
if word[i] in self.vowels:
return False
if word[i] == 'y':
if i == 0:
return True
else:
return (not self._cons(word, i - 1))
return True
def _m(self, word, j):
"""m() measures the number of consonant sequences between k0 and j.
if c is a consonant sequence and v a vowel sequence, and <..>
indicates arbitrary presence,
<c><v> gives 0
<c>vc<v> gives 1
<c>vcvc<v> gives 2
<c>vcvcvc<v> gives 3
....
"""
n = 0
i = 0
while True:
if i > j:
return n
if not self._cons(word, i):
break
i = i + 1
i = i + 1
while True:
while True:
if i > j:
return n
if self._cons(word, i):
break
i = i + 1
i = i + 1
n = n + 1
while True:
if i > j:
return n
if not self._cons(word, i):
break
i = i + 1
i = i + 1
def _vowelinstem(self, stem):
"""vowelinstem(stem) is TRUE <=> stem contains a vowel"""
for i in range(len(stem)):
if not self._cons(stem, i):
return True
return False
def _doublec(self, word):
"""doublec(word) is TRUE <=> word ends with a double consonant"""
if len(word) < 2:
return False
if (word[-1] != word[-2]):
return False
return self._cons(word, len(word)-1)
def _cvc(self, word, i):
"""cvc(i) is TRUE <=>
a) ( --NEW--) i == 1, and word[0] word[1] is vowel consonant, or
b) word[i - 2], word[i - 1], word[i] has the form consonant -
vowel - consonant and also if the second c is not w, x or y. this
is used when trying to restore an e at the end of a short word.
e.g.
cav(e), lov(e), hop(e), crim(e), but
snow, box, tray.
"""
if i == 0: return False # i == 0 never happens perhaps
if i == 1: return (not self._cons(word, 0) and self._cons(word, 1))
if not self._cons(word, i) or self._cons(word, i-1) or not self._cons(word, i-2): return False
ch = word[i]
if ch == 'w' or ch == 'x' or ch == 'y':
return False
return True
def _step1ab(self, word):
"""step1ab() gets rid of plurals and -ed or -ing. e.g.
caresses -> caress
ponies -> poni
sties -> sti
tie -> tie (--NEW--: see below)
caress -> caress
cats -> cat
feed -> feed
agreed -> agree
disabled -> disable
matting -> mat
mating -> mate
meeting -> meet
milling -> mill
messing -> mess
meetings -> meet
"""
if word[-1] == 's':
if word.endswith("sses"):
word = word[:-2]
elif word.endswith("ies"):
if len(word) == 4:
word = word[:-1]
# this line extends the original algorithm, so that
# 'flies'->'fli' but 'dies'->'die' etc
else:
word = word[:-2]
elif word[-2] != 's':
word = word[:-1]
ed_or_ing_trimmed = False
if word.endswith("ied"):
if len(word) == 4:
word = word[:-1]
else:
word = word[:-2]
# this line extends the original algorithm, so that
# 'spied'->'spi' but 'died'->'die' etc
elif word.endswith("eed"):
if self._m(word, len(word)-4) > 0:
word = word[:-1]
elif word.endswith("ed") and self._vowelinstem(word[:-2]):
word = word[:-2]
ed_or_ing_trimmed = True
elif word.endswith("ing") and self._vowelinstem(word[:-3]):
word = word[:-3]
ed_or_ing_trimmed = True
if ed_or_ing_trimmed:
if word.endswith("at") or word.endswith("bl") or word.endswith("iz"):
word += 'e'
elif self._doublec(word):
if word[-1] not in ['l', 's', 'z']:
word = word[:-1]
elif (self._m(word, len(word)-1) == 1 and self._cvc(word, len(word)-1)):
word += 'e'
return word
def _step1c(self, word):
"""step1c() turns terminal y to i when there is another vowel in the stem.
--NEW--: This has been modified from the original Porter algorithm so that y->i
is only done when y is preceded by a consonant, but not if the stem
is only a single consonant, i.e.
(*c and not c) Y -> I
So 'happy' -> 'happi', but
'enjoy' -> 'enjoy' etc
This is a much better rule. Formerly 'enjoy'->'enjoi' and 'enjoyment'->
'enjoy'. Step 1c is perhaps done too soon; but with this modification that
no longer really matters.
Also, the removal of the vowelinstem(z) condition means that 'spy', 'fly',
'try' ... stem to 'spi', 'fli', 'tri' and conflate with 'spied', 'tried',
'flies' ...
"""
if word[-1] == 'y' and len(word) > 2 and self._cons(word, len(word) - 2):
return word[:-1] + 'i'
else:
return word
def _step2(self, word):
"""step2() maps double suffices to single ones.
so -ization ( = -ize plus -ation) maps to -ize etc. note that the
string before the suffix must give m() > 0.
"""
if len(word) <= 1: # Only possible at this stage given unusual inputs to stem_word like 'oed'
return word
ch = word[-2]
if ch == 'a':
if word.endswith("ational"):
return word[:-7] + "ate" if self._m(word, len(word)-8) > 0 else word
elif word.endswith("tional"):
return word[:-2] if self._m(word, len(word)-7) > 0 else word
else:
return word
elif ch == 'c':
if word.endswith("enci"):
return word[:-4] + "ence" if self._m(word, len(word)-5) > 0 else word
elif word.endswith("anci"):
return word[:-4] + "ance" if self._m(word, len(word)-5) > 0 else word
else:
return word
elif ch == 'e':
if word.endswith("izer"):
return word[:-1] if self._m(word, len(word)-5) > 0 else word
else:
return word
elif ch == 'l':
if word.endswith("bli"):
return word[:-3] + "ble" if self._m(word, len(word)-4) > 0 else word # --DEPARTURE--
# To match the published algorithm, replace "bli" with "abli" and "ble" with "able"
elif word.endswith("alli"):
# --NEW--
if self._m(word, len(word)-5) > 0:
word = word[:-2]
return self._step2(word)
else:
return word
elif word.endswith("fulli"):
return word[:-2] if self._m(word, len(word)-6) else word # --NEW--
elif word.endswith("entli"):
return word[:-2] if self._m(word, len(word)-6) else word
elif word.endswith("eli"):
return word[:-2] if self._m(word, len(word)-4) else word
elif word.endswith("ousli"):
return word[:-2] if self._m(word, len(word)-6) else word
else:
return word
elif ch == 'o':
if word.endswith("ization"):
return word[:-7] + "ize" if self._m(word, len(word)-8) else word
elif word.endswith("ation"):
return word[:-5] + "ate" if self._m(word, len(word)-6) else word
elif word.endswith("ator"):
return word[:-4] + "ate" if self._m(word, len(word)-5) else word
else:
return word
elif ch == 's':
if word.endswith("alism"):
return word[:-3] if self._m(word, len(word)-6) else word
elif word.endswith("ness"):
if word.endswith("iveness"):
return word[:-4] if self._m(word, len(word)-8) else word
elif word.endswith("fulness"):
return word[:-4] if self._m(word, len(word)-8) else word
elif word.endswith("ousness"):
return word[:-4] if self._m(word, len(word)-8) else word
else:
return word
else:
return word
elif ch == 't':
if word.endswith("aliti"):
return word[:-3] if self._m(word, len(word)-6) else word
elif word.endswith("iviti"):
return word[:-5] + "ive" if self._m(word, len(word)-6) else word
elif word.endswith("biliti"):
return word[:-6] + "ble" if self._m(word, len(word)-7) else word
else:
return word
elif ch == 'g': # --DEPARTURE--
if word.endswith("logi"):
return word[:-1] if self._m(word, len(word) - 4) else word # --NEW-- (Barry Wilkins)
# To match the published algorithm, pass len(word)-5 to _m instead of len(word)-4
else:
return word
else:
return word
def _step3(self, word):
"""step3() deals with -ic-, -full, -ness etc. similar strategy to step2."""
ch = word[-1]
if ch == 'e':
if word.endswith("icate"):
return word[:-3] if self._m(word, len(word)-6) else word
elif word.endswith("ative"):
return word[:-5] if self._m(word, len(word)-6) else word
elif word.endswith("alize"):
return word[:-3] if self._m(word, len(word)-6) else word
else:
return word
elif ch == 'i':
if word.endswith("iciti"):
return word[:-3] if self._m(word, len(word)-6) else word
else:
return word
elif ch == 'l':
if word.endswith("ical"):
return word[:-2] if self._m(word, len(word)-5) else word
elif word.endswith("ful"):
return word[:-3] if self._m(word, len(word)-4) else word
else:
return word
elif ch == 's':
if word.endswith("ness"):
return word[:-4] if self._m(word, len(word)-5) else word
else:
return word
else:
return word
def _step4(self, word):
"""step4() takes off -ant, -ence etc., in context <c>vcvc<v>."""
if len(word) <= 1: # Only possible at this stage given unusual inputs to stem_word like 'oed'
return word
ch = word[-2]
if ch == 'a':
if word.endswith("al"):
return word[:-2] if self._m(word, len(word)-3) > 1 else word
else:
return word
elif ch == 'c':
if word.endswith("ance"):
return word[:-4] if self._m(word, len(word)-5) > 1 else word
elif word.endswith("ence"):
return word[:-4] if self._m(word, len(word)-5) > 1 else word
else:
return word
elif ch == 'e':
if word.endswith("er"):
return word[:-2] if self._m(word, len(word)-3) > 1 else word
else:
return word
elif ch == 'i':
if word.endswith("ic"):
return word[:-2] if self._m(word, len(word)-3) > 1 else word
else:
return word
elif ch == 'l':
if word.endswith("able"):
return word[:-4] if self._m(word, len(word)-5) > 1 else word
elif word.endswith("ible"):
return word[:-4] if self._m(word, len(word)-5) > 1 else word
else:
return word
elif ch == 'n':
if word.endswith("ant"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
elif word.endswith("ement"):
return word[:-5] if self._m(word, len(word)-6) > 1 else word
elif word.endswith("ment"):
return word[:-4] if self._m(word, len(word)-5) > 1 else word
elif word.endswith("ent"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
else:
return word
elif ch == 'o':
if word.endswith("sion") or word.endswith("tion"): # slightly different logic to all the other cases
return word[:-3] if self._m(word, len(word)-4) > 1 else word
elif word.endswith("ou"):
return word[:-2] if self._m(word, len(word)-3) > 1 else word
else:
return word
elif ch == 's':
if word.endswith("ism"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
else:
return word
elif ch == 't':
if word.endswith("ate"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
elif word.endswith("iti"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
else:
return word
elif ch == 'u':
if word.endswith("ous"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
else:
return word
elif ch == 'v':
if word.endswith("ive"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
else:
return word
elif ch == 'z':
if word.endswith("ize"):
return word[:-3] if self._m(word, len(word)-4) > 1 else word
else:
return word
else:
return word
def _step5(self, word):
"""step5() removes a final -e if m() > 1, and changes -ll to -l if
m() > 1.
"""
if word[-1] == 'e':
a = self._m(word, len(word)-1)
if a > 1 or (a == 1 and not self._cvc(word, len(word)-2)):
word = word[:-1]
if word.endswith('ll') and self._m(word, len(word)-1) > 1:
word = word[:-1]
return word
def stem_word(self, p, i=0, j=None):
"""Returns the stem of p, or, if i and j are given, the stem of p[i:j+1]."""
## --NLTK--
if j is None and i == 0:
word = p
else:
if j is None:
j = len(p) - 1
word = p[i:j+1]
if word in self.pool:
return self.pool[word]
if len(word) <= 2:
return word # --DEPARTURE--
# With this line, strings of length 1 or 2 don't go through the
# stemming process, although no mention is made of this in the
# published algorithm. Remove the line to match the published
# algorithm.
word = self._step1ab(word)
word = self._step1c(word)
word = self._step2(word)
word = self._step3(word)
word = self._step4(word)
word = self._step5(word)
return word
def _adjust_case(self, word, stem):
lower = word.lower()
ret = ""
for x in range(len(stem)):
if lower[x] == stem[x]:
ret += word[x]
else:
ret += stem[x]
return ret
## --NLTK--
## Don't use this procedure; we want to work with individual
## tokens, instead. (commented out the following procedure)
#def stem(self, text):
# parts = re.split("(\W+)", text)
# numWords = (len(parts) + 1)/2
#
# ret = ""
# for i in xrange(numWords):
# word = parts[2 * i]
# separator = ""
# if ((2 * i) + 1) < len(parts):
# separator = parts[(2 * i) + 1]
#
# stem = self.stem_word(string.lower(word), 0, len(word) - 1)
# ret = ret + self.adjust_case(word, stem)
# ret = ret + separator
# return ret
## --NLTK--
## Define a stem() method that implements the StemmerI interface.
def stem(self, word):
print("stem called")
stem = self.stem_word(word.lower(), 0, len(word) - 1)
return self._adjust_case(word, stem)
## --NLTK--
## Add a string representation function
def __repr__(self):
return '<PorterStemmer>'
## --NLTK--
## This test procedure isn't applicable.
#if __name__ == '__main__':
# p = PorterStemmer()
# if len(sys.argv) > 1:
# for f in sys.argv[1:]:
# with open(f, 'r') as infile:
# while 1:
# w = infile.readline()
# if w == '':
# break
# w = w[:-1]
# print(p.stem(w))
##--NLTK--
## Added a demo() function
def demo():
"""
A demonstration of the porter stemmer on a sample from
the Penn Treebank corpus.
"""
from nltk.corpus import treebank
from nltk import stem
stemmer = stem.PorterStemmer()
orig = []
stemmed = []
for item in treebank.files()[:3]:
for (word, tag) in treebank.tagged_words(item):
orig.append(word)
stemmed.append(stemmer.stem(word))
# Convert the results to a string, and word-wrap them.
results = ' '.join(stemmed)
results = re.sub(r"(.{,70})\s", r'\1\n', results+' ').rstrip()
# Convert the original to a string, and word wrap it.
original = ' '.join(orig)
original = re.sub(r"(.{,70})\s", r'\1\n', original+' ').rstrip()
# Print the results.
print('-Original-'.center(70).replace(' ', '*').replace('-', ' '))
print(original)
print('-Results-'.center(70).replace(' ', '*').replace('-', ' '))
print(results)
print('*'*70)
##--NLTK--
Answer: My best guess is that some of the lines you copied are not being treated as
part of the class due to indentation issues, so `PorterStemmer` never overrides
`stem` and the call falls through to `StemmerI.stem`, which raises
`NotImplementedError`. As a first debugging step, try adding a class property
definition at the tail end of `PorterStemmer` and verify that it appears on
the class.
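A minimal check of that theory (assuming the module imports as in the question;
names are taken from the traceback):

    from nltk_functions.stemmer import PorterStemmer, StemmerI

    # If the methods were accidentally de-indented out of the class body,
    # PorterStemmer defines no stem of its own and inherits the stub:
    print('stem' in PorterStemmer.__dict__)     # should be True
    print(PorterStemmer.stem is StemmerI.stem)  # should be False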
* * *
Cannot use os functions after having imported it
Question: I'm using Python (actually IronPython) with Visual Studio 2015 to make a WPF
application. I imported os but I cannot access its methods.
This is what I did:
    import os

    class Utils(object):
        def fcn(self, arg):
            if os.path.exists(arg):
                print 'Exists!.'
            else:
                print "Doesn't exist... :/"
                raise
I call this class from the view model file after pressing a button in the GUI
    class ViewModel(ViewModelBase):
        def __init__(self):
            ViewModelBase.__init__(self)
            self.RunCommand = Command(self.RunMethod)
            self.utils = Utils()

        def RunMethod(self):
            self.utils.fcn("C:\path")
If I set a breakpoint after "if os.path.exists(arg)" the program freezes, if I
set it before (or on that line) it stops normally.
Any ideas?
Thank you.
Answer: Submodules need to be imported explicitly:
import os.path # not just import os
In the standard Python implementation, `import os` would probably work on its
own due to the weird way `os.path` is implemented, but it should still be
`import os.path` if you want to use `os.path`.
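For example, a quick check (the path here is just an illustration; note also
that a raw string avoids backslash escapes biting you in paths like `"C:\path"`):

    import os.path

    print os.path.exists(r"C:\Users")  # explicit submodule import, raw-string path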
* * *
Convert byte string to base64-encoded string (output not being a byte string)
Question: I was wondering if it is possible to convert a byte string which I got from
reading a file to a string (so `type(output) == str`). All I've found on
Google so far has been answers like [How do you base-64 encode a PNG image for
use in a data-uri in a CSS file?](http://stackoverflow.com/q/6375942/1256925),
which does seem like it would work in python 2 (where, if I'm not mistaken,
strings were byte strings anyway), but which doesn't work in python 3.4
anymore.
The reason I want to convert this resulting byte string to a normal string is
that I want to use this base64-encoded data to store in a JSON object, but I
keep getting an error similar to:
TypeError: b'Zm9v' is not JSON serializable
Here's a minimal example of where it goes wrong:
import base64
import json
data = b'foo'
myObj = [base64.b64encode(data)]
json_str = json.dumps(myObj)
So my question is: is there a way to convert this object of type `bytes` to an
object of type `str` while still keeping the base64 encoding (so in this
example, I want the result to be `["Zm9v"]`)? Is this possible?
Answer: Decode the output of `b64encode`, not the input. In Python 3,
`base64.b64encode` takes bytes and returns bytes, and since base64 output is
plain ASCII it can always be decoded to a `str`:

    myObj = [base64.b64encode(data).decode('ascii')]

Calling `.decode()` on `data` itself doesn't help: that just hands a `str` to
`b64encode`, which raises a `TypeError` in Python 3.
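A full round trip of the minimal example then looks like this:

    import base64
    import json

    data = b'foo'
    myObj = [base64.b64encode(data).decode('ascii')]
    json_str = json.dumps(myObj)
    print(json_str)  # ["Zm9v"]
    # and back again:
    print(base64.b64decode(json.loads(json_str)[0]))  # b'foo'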
* * *
Install NCurses on python3 for Ubuntu
Question: I'm having issues installing `ncurses` for `Python3`. When I did the normal
`sudo apt-get install ncurses-dev`, it appeared to install for `Python2` but
when I try to run my script with `Python3`, it says:
ImportError: No module named curses
How would you get `ncurses` to work for `Python3`?
Answer: Try this:
import curses
curses is ncurses. It's also built into Python; there's nothing to install.
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-65-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Mon Oct 19 19:06:03 2015 from xxx.xxx.xxx.xxx
me@ubuntu:~$ python3
Python 3.4.0 (default, Jun 19 2015, 14:20:21)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import curses
>>>
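If the import succeeds, a minimal smoke test (a sketch using only documented
stdlib calls) confirms the terminal side works too:

    import curses

    def main(stdscr):
        stdscr.addstr(0, 0, "curses is working; press any key to exit")
        stdscr.refresh()
        stdscr.getkey()

    curses.wrapper(main)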
* * *
select based on timestamp and update timestamp with zero
Question: How do I select records whose date field has a nonzero time component
(HH:MM:SS.millisecond) from a MongoDB collection, and update them so the time
becomes zero while the date value stays the same, from a Python script?
The current data looks like this:
1) "createdDate" : ISODate("2015-10-10T00:00:00Z")
2) "createdDate" : ISODate("2015-10-11T00:00:00Z")
3) "createdDate" : ISODate("2015-10-12T00:00:00Z")
4) "createdDate" : ISODate("2015-10-13T01:04:30.515Z")
5) "createdDate" : ISODate("2015-10-14T02:05:50.516Z")
6) "createdDate" : ISODate("2015-10-15T03:06:60.517Z")
7) "createdDate" : ISODate("2015-10-16T04:07:80.518Z")
How can I select only rows 4, 5, 6 and 7 with a MongoDB query, and update them
with the time set to zero, in a Python script?
After the update, the data should look like this:
1) "createdDate" : ISODate("2015-10-10T00:00:00Z")
2) "createdDate" : ISODate("2015-10-11T00:00:00Z")
3) "createdDate" : ISODate("2015-10-12T00:00:00Z")
4) "createdDate" : ISODate("2015-10-13T00:00:00Z")
5) "createdDate" : ISODate("2015-10-14T00:00:00Z")
6) "createdDate" : ISODate("2015-10-15T00:00:00Z")
7) "createdDate" : ISODate("2015-10-16T00:00:00Z")
Answer: `ISODate()` is represented as a `datetime` object by PyMongo. MongoDB assumes
that dates and times are in UTC. There are several ways to get midnight (start
of a day) for a given UTC time `d`:
>>> from datetime import datetime, time, timedelta
>>> d = datetime(2015, 10, 13, 1, 4, 30, 515000)
>>> datetime(d.year, d.month, d.day) # @user3100115' answer
datetime.datetime(2015, 10, 13, 0, 0) # 369 ns
>>> datetime.fromordinal(d.toordinal()) # 451 ns
datetime.datetime(2015, 10, 13, 0, 0)
>>> datetime.combine(d, time.min) # 609 ns
datetime.datetime(2015, 10, 13, 0, 0)
>>> d - (d - d.min) % timedelta(days=1) # Python 3
datetime.datetime(2015, 10, 13, 0, 0) # 1.87 µs
>>> datetime(*d.timetuple()[:3])
datetime.datetime(2015, 10, 13, 0, 0) # 2.34 µs
>>> from calendar import timegm
>>> datetime.utcfromtimestamp((timegm(d.timetuple()) // 86400) * 86400) # POSIX
datetime.datetime(2015, 10, 13, 0, 0) # 4.72 µs
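Tying this to the update part of the question, here is a sketch using PyMongo
(it assumes a `pymongo` `Collection` named `coll` and the `createdDate` field
from the question; `update_one` is the PyMongo 3.x name):

    from datetime import datetime

    for doc in coll.find({"createdDate": {"$exists": True}}):
        d = doc["createdDate"]
        midnight = datetime(d.year, d.month, d.day)
        if d != midnight:  # only rows with a nonzero time component
            coll.update_one({"_id": doc["_id"]},
                            {"$set": {"createdDate": midnight}})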
* * *
How to create a VM with a custom image using azure-sdk-for-python?
Question: I'm using the "new" Azure SDK for Python:
<https://github.com/Azure/azure-sdk-for-python>
Linked is a usage example to serve as documentation:
<https://azure-sdk-for-python.readthedocs.org/en/latest/resourcemanagementcomputenetwork.html>
In this example, they create an instance from a public image, providing an
image publisher, offer, SKU and version. I'd like to create an instance from a
custom image (present in "My Images" on the azure portal), for which I only
have an image name, no publisher or SKU.
Is this supported? How should I proceed?
Note: I'd like to avoid using the azure CLI command if possible, only relying
on the python library.
Thanks!
Answer: In case anyone else runs into this issue, the SourceImage is actually for the
older method (ASM). For ARM, the following will initialize the StorageProfile
to provide a reference to a custom image:
    storage_profile = azure.mgmt.compute.StorageProfile(
        os_disk=azure.mgmt.compute.OSDisk(
            caching=azure.mgmt.compute.CachingTypes.none,
            create_option=azure.mgmt.compute.DiskCreateOptionTypes.from_image,
            name=OS_DISK_NAME,
            virtual_hard_disk=azure.mgmt.compute.VirtualHardDisk(
                uri='https://{0}.blob.core.windows.net/vhds/{1}.vhd'.format(
                    STORAGE_NAME, OS_DISK_NAME),
            ),
            operating_system_type='Linux',
            source_image=azure.mgmt.compute.VirtualHardDisk(
                uri='https://{0}.blob.core.windows.net/{1}/{2}'.format(
                    STORAGE_NAME, CUSTOM_IMAGE_PATH, CUSTOM_IMAGE_VHD),
            ),
        ),
    )
The two very important things above are 'operating_system_type' and the way
the source_image is created.
* * *
SSH paramiko Azure
Question: I usually have no problems ssh'ing in Python with paramiko (version
paramiko==1.15.3). Doing:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('mylinodebox.com', key_filename='/home/me/.ssh/id_rsa_linode', port=2222)
stdin, stdout, stderr = ssh.exec_command('ls')
print stdout.readlines()
ssh.close()
works absolutely fine for me for this linode machine. But for another, Azure,
machine if I try the same only replacing the connect line with
ssh.connect('myazurebox.net', key_filename='/home/me/.ssh/the-azure-ssh.key', port=22)
I get
AuthenticationException: Authentication failed.
This is despite the fact that from the linux terminal I have no issues at all
ssh'ing into the Azure machine with the keyfile ( I do `ssh myazurebox` and
have the ssh config below), so I know the creds are good.
My ssh config file looks like
    Host *
        ForwardAgent yes
        ServerAliveInterval 15
        ServerAliveCountMax 3
        PermitLocalCommand yes
        ControlPath ~/.ssh/master-%r@%h:%p
        ControlMaster auto

    Host myazurebox
        HostName myazurebox.net
        IdentityFile ~/.ssh/the-azure-ssh.key
        User <azureuser>

    Host mylinodebox
        HostName mylinodebox.com
        IdentityFile ~/.ssh/id_rsa_linode
        Port 2222
Does anyone know why this wouldn't be working?
Adding the line
`paramiko.common.logging.basicConfig(level=paramiko.common.DEBUG)` after the
import doesn't show much more:
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Adding ssh-rsa host key for myazurebox.net: 8d596885f13b8e45c1edd7d94bbfa817
DEBUG:paramiko.transport:Trying SSH agent key d403d1c6bec787e486548a3e0fbfa373
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying SSH agent key 12e9db4c2cd2be32193s78b0b13cb5eb
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying SSH agent key 1906e3debc819c0f5f40080d43de587d
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Answer: I examined the logs on the server at `/var/log/auth.log` after setting
`LogLevel DEBUG` in `/etc/ssh/sshd_config`. It turned out that paramiko was
trying the keys whose fingerprints came from my local
[ssh-agent](http://linux.die.net/man/1/ssh-agent) (they can be listed with
`ssh-add -l`) but never actually trying the keyfile I provided in the arguments
to `ssh.connect`. OpenSSH from the terminal also tried the keys from ssh-agent
first, but then finally tried the keyfile.
I added the azure keyfile to the ssh-agent:
ssh-add /home/lee/.ssh/the-azure-ssh.key
and ran the script above again. This time paramiko established the connection
successfully.
(Does anyone know why adding the key to the agent works with paramiko but not
the keyfile alone?)
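One likely explanation (an educated guess, not verified against this particular
server): paramiko offers every agent key before the explicit keyfile, so the
server's `MaxAuthTries` limit can be exhausted before the right key is ever
offered. Telling paramiko to skip the agent sidesteps that:

    ssh.connect('myazurebox.net',
                key_filename='/home/me/.ssh/the-azure-ssh.key',
                port=22,
                allow_agent=False,    # do not offer ssh-agent keys
                look_for_keys=False)  # do not scan ~/.ssh for other keys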
* * *
Tkinter entry getting text entered by user
Question: I am very new to Tkinter ( I find it very difficult to learn). I have a python
script working based on user input. I would like to wrap a GUI around it and
eventually put it on web. In any case for user input I would like to get this
from the GUI with a combination of Entry widgets and some buttons. First,
I read that some people recommend using a class, so I have the following.
I have a few questions:
1. I would like to check to see if indeed the users entered a value before he hits the GO button. How do I do this?
2. I would like the value entered to be made accessible by the rest of the program in the main body. How do I do this?
Thanks,
    from Tkinter import *

    class MainWindow():
        def get_fc(self):
            a = self.fc_gui.get()
            return a

        def __init__(self, master):
            self.master = master
            self.master.title('TEST')
            self.fc_gui = DoubleVar(self.master, value = 500.00)
            self.fclabel1 = Label(self.master, text = 'Please Enter a value', fg = 'black', bg = 'yellow')
            self.fclabel1.grid(row = 0, column = 0)
            self.fcedit1 = Entry(self.master, textvariable = self.fc_gui, bd = 5 )
            self.fcedit1.grid(row = 1, column = 0)
            fcbutton1 = Button(self.master, text='GO', command = self.get_fc)
            fcbutton1.grid(row = 1, column = 1)

    master = Tk()
    MainWindow(master)
    master.mainloop()
Answer: 1. It doesn't make sense to `return` to a `Button`. The `Button` can't do anything with the value. Instead, save the value as an instance variable.
2. You don't have a `mainloop()`.
3. You can't really check if the user entered a value _before_ they hit "Go" - at the start of the program, of course they haven't entered anything yet. If you needed to track the contents of this field, there are ways to do that, but it's not necessary for a simple validation. Just check the value when they hit the button.
    from Tkinter import *

    class MainWindow():
        def get_fc(self):
            a = self.fc_gui.get()
            if a:  # this block will execute if a has content
                self.a = a  # save it for future use

        def __init__(self, master):
            self.master = master
            self.master.title('TEST')
            self.fc_gui = DoubleVar(self.master, value = 500.00)
            self.fclabel1 = Label(self.master, text='Please Enter a value',
                                  fg = 'black', bg = 'yellow')
            self.fclabel1.grid(row = 0, column = 0)
            self.fcedit1 = Entry(self.master, textvariable = self.fc_gui, bd = 5 )
            self.fcedit1.grid(row = 1, column = 0)
            fcbutton1 = Button(self.master, text='GO', command = self.get_fc)
            fcbutton1.grid(row = 1, column = 1)

    master = Tk()
    MainWindow(master)
    master.mainloop()  # don't forget mainloop()
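To make the entered value usable by the rest of the program (the second part of
the question), keep a reference to the instance; a sketch:

    mw = MainWindow(master)
    master.mainloop()
    # after the window is closed, the last value saved by get_fc is available:
    print getattr(mw, 'a', None)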
* * *
Understanding data types in Python
Question: Today I started learning about reverse engineering. I came across
`struct.pack()`, but I don't know what escapes like `\x13` mean.

    from struct import pack
    pack('>I', 0x1337)
    '\x00\x00\x137'

So `'\x137'` is equal to 0x1337 (hex) in big-endian?
Answer: `'\x137'` is not a single byte; it is actually two different bytes -
`\x13` and `\x37` (the latter displayed as the character `'7'`). The
hexadecimal value for the ASCII character `'7'` is `0x37`, hence you see
`\x137`. Example:

    >>> hex(ord('7'))
    '0x37'
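Unpacking it again shows the round trip:

    >>> from struct import pack, unpack
    >>> packed = pack('>I', 0x1337)
    >>> packed
    '\x00\x00\x137'
    >>> hex(unpack('>I', packed)[0])
    '0x1337'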
* * *
ipython 'no module named' error
Question: I am trying to use a Python module called **Essentia** which is used for audio
analysis. In order to use that, it has to be built in Ubuntu Environment as
explained [here](http://essentia.upf.edu/documentation/installing.html). I did
all the things to install `Essentia` in a folder in desktop.
Then in `IPython`, I am trying to import the installed and built `Essentia`
module. I am running `IPython` in the folder where my module is located. It is
not in `/usr/lib/python2.7`. It is located in my desktop as mentioned above.
But when I import Essentia module in IPython, it tells me
> ImportError: No module named essentia
What is the problem here? Do I have to build Essentia inside
`/usr/lib/python2.7`, and if so, how do I do that? Or has some other thing
gone wrong?
Answer: I had the exact same problem and was able to fix it.
From your question I can't be 100% sure what your problem is - however, these
are a couple of possible culprits which you - or others - may be having.
I, too, am using Python 2.7, and want to use Essentia in an IPython/Jupyter
Notebook environment.
## 1. Essentia location
_This is my first guess as to what your problem is._
If you were able to successfully configure and install Essentia (otherwise see
below), the path where the Essentia Python files were likely installed is
`/usr/local/lib/python2.7/site-packages` or similar, and that Python isn't
looking there. To make sure it does, you could add
import sys
sys.path.append("/usr/local/lib/python2.7/site-packages")
to the start of your Python script.
This solved it for me.
You could also add the following line to your `~/.bash_profile`:
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/site-packages/
to avoid having to add this path to every Python file/Notebook where you would
like to use Essentia.
## 2. Configuration and installation
_Skip this if you were able to successfully configure and install Essentia.
These were other notable issues I had before I finally got the `install
finished successfully` message._
The main instructions, as OP noted, are
[here](http://essentia.upf.edu/documentation/installing.html).
### fftw3f or taglib not found
I resolved this using MacPorts instead:
sudo port install fftw-3-single
sudo port install taglib
### Failed installation
I should note I had some issues during installation which made me get rid of
the C++ tests, Gaia, and Vamp plugin support (none of which I needed) by
removing these and some others from the config line ([as this has helped other
users in the past](https://github.com/MTG/essentia/issues/210)):
./waf configure --mode=release --with-python --with-examples
instead of
./waf configure --mode=release --build-static --with-python --with-cpptests --with-examples --with-vamp --with-gaia
This made the following error message go away:
Build failed
-> task in 'standard_fadedetection' failed (exit status 1):
{task 4417706448: cxxprogram standard_fadedetection.cpp.5.o -> standard_fadedetection}
['clang++', '-stdlib=libc++', 'src/examples/standard_fadedetection.cpp.5.o', '-o', '/Users/Brecht/Downloads/essentia-2.0.1/build/src/examples/standard_fadedetection', '-Lsrc', '-lessentia', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-lfftw3f', '-lyaml', '-lavformat', '-lavcodec', '-lavutil', '-lswresample', '-lsamplerate', '-ltag']
-> task in 'streaming_extractor_freesound' failed (exit status 1):
{task 4417783952: cxxprogram FreesoundExtractor.cpp.22.o,FreesoundLowlevelDescriptors.cpp.22.o,FreesoundRhythmDescriptors.cpp.22.o,FreesoundSfxDescriptors.cpp.22.o,FreesoundTonalDescriptors.cpp.22.o,streaming_extractor_freesound.cpp.22.o -> streaming_extractor_freesound}
['clang++', '-stdlib=libc++', 'src/examples/freesound/FreesoundExtractor.cpp.22.o', 'src/examples/freesound/FreesoundLowlevelDescriptors.cpp.22.o', 'src/examples/freesound/FreesoundRhythmDescriptors.cpp.22.o', 'src/examples/freesound/FreesoundSfxDescriptors.cpp.22.o', 'src/examples/freesound/FreesoundTonalDescriptors.cpp.22.o', 'src/examples/streaming_extractor_freesound.cpp.22.o', '-o', '/Users/Brecht/Downloads/essentia-2.0.1/build/src/examples/streaming_extractor_freesound', '-Lsrc', '-lessentia', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-L/opt/local/lib', '-lfftw3f', '-lyaml', '-lavformat', '-lavcodec', '-lavutil', '-lswresample', '-lsamplerate', '-ltag']
Let me know how that works out - I've got a feeling I've had about all the
errors you could encounter.
(**Acknowledgement:** _The main reason I was able to solve this so quickly is
[this thread](https://github.com/MTG/essentia/issues/113) - also thanks to
[@djmoffat](https://twitter.com/djmoffat) and
[@justin_salamon](https://twitter.com/justin_salamon)._)
* * *
Python - Transliterate German Umlauts to Diacritic
Question: I have a list of unicode file paths in which I need to replace all
umlauts with their ASCII transliterations. For example, I would replace ü with
ue, ä with ae, and so on. I have defined a dictionary of umlauts (keys) and
their replacements (values). So I need to compare each key to each file path
and, where the key is found, replace it with the value. This seems like it
should be simple, but I can't get it to work. Does anyone out there have any
ideas? Any feedback is greatly appreciated!
Code so far:
    # -*- coding: utf-8 -*-
    import os

    def GetFilepaths(directory):
        """
        This function will generate all file names in a directory tree using os.walk.
        It returns a list of file paths.
        """
        file_paths = []
        for root, directories, files in os.walk(directory):
            for filename in files:
                filepath = os.path.join(root, filename)
                file_paths.append(filepath)
        return file_paths

    # dictionary of umlaut unicode representations (keys) and their replacements (values)
    umlautDictionary = {u'Ä': 'Ae',
                        u'Ö': 'Oe',
                        u'Ü': 'Ue',
                        u'ä': 'ae',
                        u'ö': 'oe',
                        u'ü': 'ue'
                        }

    # get file paths in root directory and subfolders
    filePathsList = GetFilepaths(u'C:\\Scripts\\Replace Characters\\Umlauts')

    for file in filePathsList:
        for key, value in umlautDictionary.iteritems():
            if key in file:
                file.replace(key, value)  # does not work -- umlauts still in file path!
        print file
Answer: The `replace` method returns a new string; it does not modify the
original string.
So you would need
file = file.replace(key, value)
instead of just `file.replace(key, value)`.
* * *
Note also that you could use [the `translate`
method](https://docs.python.org/2/library/stdtypes.html#str.translate) to do
all the replacements at once, instead of using a `for-loop`:
In [20]: umap = {ord(key):unicode(val) for key, val in umlautDictionary.items()}
In [21]: umap
Out[21]: {196: u'Ae', 214: u'Oe', 220: u'Ue', 228: u'ae', 246: u'oe', 252: u'ue'}
In [22]: print(u'ÄÖ'.translate(umap))
AeOe
So you could use
    umap = {ord(key): unicode(val) for key, val in umlautDictionary.items()}

    for filename in filePathsList:
        filename = filename.translate(umap)
        print(filename)
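If the end goal is to rename the files on disk (an assumption; the question
only prints the converted paths), the translated path can be passed to
`os.rename`:

    for filename in filePathsList:
        newname = filename.translate(umap)
        if newname != filename:
            os.rename(filename, newname)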
* * *
right way to use eval statement in pandas dataframe map function
Question: I have a pandas dataframe where one column is 'organization', and the
content of that column is a string with a list inside it:
data['organization'][0]
Out[6] "['loony tunes']"
data['organization'][1]
Out[7] "['the three stooges']"
I want to substitute the string with the list which is inside the string. I
try to use map, where the function inside map is eval:
data['organization'] = data['organization'].map(eval)
But what I get is:
    Traceback (most recent call last):
      File "C:\Users\xxx\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3035, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-7-3dbc0abf8c2e>", line 1, in <module>
        data['organization'] = data['organization'].map(eval)
      File "C:\Users\xxx\Anaconda3\lib\site-packages\pandas\core\series.py", line 2015, in map
        mapped = map_f(values, arg)
      File "pandas\src\inference.pyx", line 1046, in pandas.lib.map_infer (pandas\lib.c:56983)
    TypeError: eval() arg 1 must be a string, bytes or code object
Thus I resorted to the following block of code, which is extremely
inefficient:
    for index, line in data['organization'].iteritems():
        print(index)
        if type(line) != str:
            data['organization'][index] = []
        try:
            data['organization'][index] = eval(data['organization'][index])
        except:
            continue
What am I doing wrong? How can I use eval (or a vectorized implementation)
instead of the clumsy loop above?
I thought the problem might be that some elements in the pd.Series
`data['organization']` are not strings, so I implemented the following:
    def is_string(x):
        if type(x) != str:
            x = ''

    data['organization'] = data['organization'].map(is_string)
But I still get the same error when I try:
data['organization'] = data['organization'].map(eval)
Thanks in advance.
Answer: Using eval is generally frowned upon as it **allows arbitrary python code to
be run**. So you should **strongly** prefer not to use it if possible.
In this case, you don't need to evaluate an expression, you just need to parse
the value. This means that you can use ast's
[`literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval):
In [11]: s = pd.Series(["['loony tunes']", "['the three stooges']"])
In [12]: from ast import literal_eval
In [13]: s.apply(literal_eval)
Out[13]:
0 [loony tunes]
1 [the three stooges]
dtype: object
In [14]: s.apply(literal_eval)[0] # look, it works!
Out[14]: ['loony tunes']
From the [docs](https://docs.python.org/2/library/ast.html#ast.literal_eval):
>
> ast.literal_eval(node_or_string)
>
>
> _Safely_ evaluate an expression node or a Unicode or Latin-1 encoded string
> containing a Python literal or container display. The string or node
> provided may only consist of the following Python literal structures:
> strings, numbers, tuples, lists, dicts, booleans, and None.
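If some entries in the column are not strings at all (e.g. NaN, which is what
makes a plain `map(eval)` raise the `TypeError` above), guard the parse; a
sketch:

    from ast import literal_eval

    def parse(value):
        # assumption: non-string entries should become empty lists
        return literal_eval(value) if isinstance(value, str) else []

    data['organization'] = data['organization'].apply(parse)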
* * *
Python 2.7 Invalid syntax when running script from .csv file using pandas
Question: I am running a script with Python 2.7 using pandas to read from 2 csv files. I
keep getting "invalid syntax" error messages, particularly on line 6 and 8. I
can't figure out where is the problem, since line 6 is almost identical to
line 5 and there I don't get any error. Thanks very much for your help !
import numpy as np
import csv as csv
import pandas as pd
da = pd.read_csv('snp_rs.csv', index_col=(0,1), usecols=(0, 1), header=None, converters = dict.fromkeys([0,1])
db = pd.read_csv('chl.map.csv', index_col=(0,1), usecols=(0,1), header=None, converters = dict.fromkeys([0,1])
result = da.join(db, how='inner')
x = result.to_csv('snp_rs_out.csv', header=None) # write as csv
print x
Answer: As commented you need to close the parentheses around you `read_csv` call:
da = pd.read_csv('snp_rs.csv', index_col=(0,1), usecols=(0, 1), header=None, converters = dict.fromkeys([0,1])
It's missing a closing paren.
I find it a lot easier to write/read these if you split the lines:
    da = pd.read_csv('snp_rs.csv',
                     index_col=(0,1),
                     usecols=(0, 1),
                     header=None,
                     converters=dict.fromkeys([0,1])
then it's much clearer that a final `)` is missing.
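For completeness, the corrected calls with the closing paren added (everything
else kept exactly as in the question):

    da = pd.read_csv('snp_rs.csv', index_col=(0, 1), usecols=(0, 1),
                     header=None, converters=dict.fromkeys([0, 1]))
    db = pd.read_csv('chl.map.csv', index_col=(0, 1), usecols=(0, 1),
                     header=None, converters=dict.fromkeys([0, 1]))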
* * *
Sending a file from local to server using python - the correct way
Question: Probably a simple question for those who used to play with `socket` module.
But I didn't get to understand so far why I can't send a simple file.
As far as I know there are four important steps when we send an info over a
socket:
* open a socket
* bind
* listen
* accept ( possibly needed multiple times ).
Now, what I want to do is create a file on my local machine and fill it with
some info (been there, done that - all good so far).
Then, I create my client flow, which is the following:
    s = socket.socket()             # create a socket
    s.connect(("localhost", 8081))  # trying to connect over port 8081
    f = open("logs.txt", "rb+")     # I'm opening the file that contains what I want
    l = f.read(1024)                # I'm reading that file

    # I'm sending all the info from the file
    while l:
        s.send(l)
        l = f.read(1024)
    s.close()
Of course, firstly, I'm creating a server (at first, on my localhost) which
will open that port and basically create the connection which will allow the
byte-chunked data to be sent.
    import socket
    import sys

    s = socket.socket()          # create the socket
    s.bind(("localhost", 8081))  # bind
    s.listen(10)

    while True:
        sc, address = s.accept()
        print sc, address
        # trying to open a new file on the server
        # (which in this case is also my localhost)
        f = open('logs_1.txt', 'wb+')
        while True:  # write all the data to the file and close everything
            l = sc.recv(1024)
            f.write(l)
            l = sc.recv(1024)
            while l:
                f.write(l)
                l = sc.recv(1024)
            f.close()
            sc.close()
    s.close()
Now, what doesn't work on my Ubuntu 14.10 machine:
* the server part runs without error when doing `python server.py`
* after the client script finishes writing some data in `logs.txt` and connects to the server, I get the following response on the server:
<socket._socketobject object at 0x7fcdb3cf4750> ('127.0.0.1', 56821)
What am I doing wrong? The port is also different from the one that I set (I
also know that the port is not otherwise in use - verified with `nc`).
Can anybody explain how to treat this problem correctly?
Answer: The line your server printed is not an error: it is just the output of
`print sc, address`, and `56821` is the client's ephemeral source port, not
your listening port. Beyond that, I'm not sure what your second `while True`
loop is for. Remove it, and it works as you expect:
    import socket
    import sys

    s = socket.socket()          # create the socket
    s.bind(("localhost", 8081))  # bind
    s.listen(10)

    while True:
        sc, address = s.accept()
        print sc, address
        f = open('logs_1.txt', 'wb+')
        l = sc.recv(1024)
        while l:
            f.write(l)
            l = sc.recv(1024)
        f.close()
        sc.close()
    s.close()
* * *
emacs 24.5 python-mode (stock version Vs 6.2.1)
Question: I have discovered an issue with python-mode in Emacs. I generally
develop in C++ and seldom do Python. I have recently discovered this issue:

I run `emacs -Q` and open a Python file containing:

    import re as myre
    Var = [
    %

The % represents the cursor location. Then, at that location, I try to tab and
get this error:
    Debugger entered--Lisp error: (wrong-type-argument number-or-marker-p nil)
      python-indent-context()
      python-indent--calculate-indentation()
      python-indent-calculate-indentation(nil)
      python-indent-line(nil)
      python-indent-line-function()
      indent-for-tab-command(nil)
      call-interactively(indent-for-tab-command nil nil)
      command-execute(indent-for-tab-command)
I have not developed in Python for a month or so, but I cannot remember this
being an issue. I am using Emacs 24.5.1 on Windows 7 64-bit, Python 2.7.3 and -
of course - `-Q`, so no configuration.

Now I try to apply python-mode 6.2.1 by running:

    emacs -Q
    ;; then, in *scratch*:
    (setq load-path (append load-path (list "~/.emacs.d/python-mode.el-6.2.1")))
    (require 'python-mode)

I open up a Python file (the same as above) and then I CAN indent. This is all
well and good, so loading the python-mode 6.2.1 el file in my NORMAL
configuration solves the issue, BUT now with the new 6.2.1 I do not get the
same theme coloring as before (it is now bland, and variables are just the same
colour as other text rather than standing out). Also, which-function-mode seems
to be broken (again) and developing in Python is sluggish (when you open a
large file) - I remember python-mode and which-function-mode not being friendly
with each other in 24.3, but that was solved with the STOCK el in 24.5.

For me, unfortunately, 6.2.1 solves one issue but creates others - INCLUDING
regressions.

If, instead, I could just have the patch that solves the indentation issue,
that would be great.

Thank you.
Answer: python-mode.el's `py-variable-name-face` inherits the default face. Use
`M-x customize-face RET` ...

Please file bug reports at
<https://gitlab.com/python-mode-devs/python-mode/issues>
or
<https://bugs.launchpad.net/python-mode>
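For a persistent tweak from your init file rather than the customize UI, a
sketch (the face name is from above; the color choice is arbitrary):

    ;; after python-mode 6.2.1 is loaded:
    (set-face-attribute 'py-variable-name-face nil :foreground "DarkOrange")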
* * *
Reading gzipped text file line-by-line for processing in python 3.2.6
Question: I'm a complete newbie when it comes to python, but I've been tasked with
trying to get a piece of code running on a machine which has a different
version of python (3.2.6) than that which the code was originally built for.
I've come across an issue with reading in a gzipped-text file line-by-line
(and processing it depending on the first character). The code (which
obviously is written for python > 3.2.6) is:
    for line in gzip.open(input[0], 'rt'):
        if line[:1] != '>':
            out.write(line)
            continue
        chromname = match2chrom(line[1:-1])
        seqname = line[1:].split()[0]
        print('>{}'.format(chromname), file=out)
        print('{}\t{}'.format(seqname, chromname), file=mappingout)
(for those who know, this strips gzipped FASTA genome files into headers (with
">" at start) and sequences, and processes the lines into two different files
depending on this)
I have found <https://bugs.python.org/issue13989>, which states that mode 'rt'
cannot be used for gzip.open in python-3.2 and to use something along the
lines of:
    import io

    with io.TextIOWrapper(gzip.open(input[0], "r")) as fin:
        for line in fin:
            if line[:1] != '>':
                out.write(line)
                continue
            chromname = match2chrom(line[1:-1])
            seqname = line[1:].split()[0]
            print('>{}'.format(chromname), file=out)
            print('{}\t{}'.format(seqname, chromname), file=mappingout)
but the above code does not work:
UnsupportedOperation in line <4> of /path/to/python_file.py:
read1
How can I rewrite this routine to give out exactly what I want - reading the
gzip file line-by-line into the variable "line" and processing based on the
first character?
EDIT: traceback from the first version of this routine is (python 3.2.6):

    Mode rt not supported
      File "/path/to/python_file.py", line 79, in __process_genome_sequences
      File "/opt/python-3.2.6/lib/python3.2/gzip.py", line 46, in open
      File "/opt/python-3.2.6/lib/python3.2/gzip.py", line 157, in __init__

Traceback from the second version is:

    UnsupportedOperation in line 81 of /path/to/python_file.py:
    read1
      File "/path/to/python_file.py", line 81, in __process_genome_sequences

with no further traceback (the extra two lines in the line count are the
`import io` and `with io.TextIOWrapper(gzip.open(input[0], "r")) as fin:`
lines).
Answer: It appears I have actually solved the problem.

In the end I had to use `shell("gunzip {input[0]}")` to ensure that the
gunzipped file could be read in text mode, and then read the resulting file
using:

    for line in open('<resulting file>', 'r'):
        if line[:1] != '>':
            out.write(line)
            continue
        chromname = match2chrom(line[1:-1])
        seqname = line[1:].split()[0]
        print('>{}'.format(chromname), file=out)
        print('{}\t{}'.format(seqname, chromname), file=mappingout)
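For reference, there is also a workaround that avoids shelling out: on Python
3.2, `GzipFile` lacks the `read1()` method that `TextIOWrapper` calls (hence
the `UnsupportedOperation: read1`), and wrapping it in `io.BufferedReader`
supplies that method. A sketch:

    import gzip
    import io

    with io.TextIOWrapper(io.BufferedReader(gzip.open(input[0], 'r'))) as fin:
        for line in fin:
            ...  # process as before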
* * *
UnsupportedOperation: fileno - How to fix this Python dependency mess?
Question: I'm building quite an extensive Python backend, and things were
working quite well on server A. I then installed the system on a new
(development) server B, on which I simply installed all pip packages again from
scratch. Things seemed to work fine, so I did a `pip freeze`. I then took that
list and upgraded the packages on server A.
But, as you might expect, I should have known better. I didn't test things on
machine B well enough, so I ran into a problem with Pillow version 3.0.0. So I
downgraded to version 1.7.8. That solves that single problem, but gives me
another one:
File "/home/kramer65/theproject/app/models/FilterResult.py", line 26, in to_json
self.image.save(b, 'JPEG')
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 1437, in save
save_handler(self, fp, filename)
File "/usr/local/lib/python2.7/dist-packages/PIL/JpegImagePlugin.py", line 471, in _save
ImageFile._save(im, fp, [("jpeg", (0,0)+im.size, 0, rawmode)])
File "/usr/local/lib/python2.7/dist-packages/PIL/ImageFile.py", line 476, in _save
fh = fp.fileno()
UnsupportedOperation: fileno
And here I'm kinda lost. As far as I know this is a problem in Pillow itself,
so I wouldn't know why it used to work and why it doesn't work anymore.
I searched around on the internet, but I couldn't find any solution.
Does anybody know what I could do to solve this?
ps. PIL is not installed, so it's not a collision between PIL and Pillow
[EDIT]
I just tested an `import Image` in Python (which would suggest that PIL is
still installed). To my surprise that succeeds, even though pip tells me that
it is not installed:
$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Image
>>> exit()
$ sudo pip uninstall PIL
The directory '/home/hielke/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Cannot uninstall requirement PIL, not installed
Answer: `BytesIO` objects raise `UnsupportedOperation` (rather than the
`AttributeError` that `StringIO` raises) when their `fileno` method is called,
and in Pillow 1.7.8 that exception wasn't handled as it should be.
This was fixed in Pillow 3.0.0 by
<https://github.com/python-pillow/Pillow/commit/197885164b22f82653af514e66c76f4b778c0b1b>,
which catches the exception. The following is the fix; the rest of that commit
is changes to the test suite.
the test suite.
In `PIL/ImageFile.py`:
    @@ -29,6 +29,7 @@
     import Image
     import traceback, os
    +import io

     MAXBLOCK = 65536

    @@ -475,7 +476,7 @@ def _save(im, fp, tile):
         try:
             fh = fp.fileno()
             fp.flush()
    -    except AttributeError:
    +    except (AttributeError, io.UnsupportedOperation):
             # compress to Python file-compatible object
             for e, b, o, a in tile:
                 e = Image._getencoder(im.mode, e, a, im.encoderconfig)
You could simply patch 1.7.8 to handle the exception.
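If patching the installed package is not an option, here is a sketch of a
runtime workaround (assuming, as in the traceback, the goal is JPEG bytes in
memory from `self.image`): save to a real temporary file, which has a working
`fileno()`:

    import tempfile

    with tempfile.TemporaryFile() as tmp:
        self.image.save(tmp, 'JPEG')
        tmp.seek(0)
        jpeg_bytes = tmp.read()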
* * *
Concatenate .txt files. Write content in one .txt file
Question: I have some .txt files in a folder. I need to collect their content all in one
.txt file. I'm working with Python and tried:
    import os

    rootdir = "\\path_to_folder\\"
    for files in os.walk(rootdir):
        with open("out.txt", 'w') as outfile:
            for fname in files:
                with open(fname) as infile:
                    for line in infile:
                        outfile.write(line)
but did not work. The 'out.txt' is generated but the code never ends. Any
advice? Thanks in advance.
Answer: `os.walk` returns tuples, not filenames:
with open ("out.txt", 'w') as outfile:
for root, dirs, files in os.walk(rootdir):
for fname in files:
with open(os.path.join(root, fname)) as infile:
for line in infile:
outfile.write(line)
Also you should open outfile in the beginning, not in each loop.
* * *
Python comparing XML output to a list
Question: I have an XML that looks something like this:
    <Import>
        <spId>1234</spId>
        <GroupFlag>false</GroupFlag>
    </Import>
I want to extract the value of spId and compare it with a list and I have the
following script:
    import xml.etree.ElementTree as ET

    xml_file = "c:/somefile.xml"
    sp_id_list = ['1234']
    tree = ET.parse(xml_file)
    root = tree.getroot()
    for sp_id in root.findall('./spId'):
        if sp_id.text in sp_id_list:
            print sp_id.text
This doesn't work for spId (numeric) but works for comparing GroupFlag
(string) with a list. Why is this happening and how can I rectify this
problem?
Sorry for the stupid question, I am a noob to this.
Answer: Your code example works correctly if the XML sample posted here is
given as the input XML file.

However, you want to find all elements, so I assume that your real document has
many `<Import>` items. If a list of items is not wrapped by some parent tag, it
is not valid XML; in that case you would get an
`xml.etree.ElementTree.ParseError`. So I assume that in your real document
`<Import>` is not the root element and the `<Import>` elements are somewhere
deeper in the document, for example:
    <Parent>
        <Import>
            <spId>1234</spId>
            <GroupFlag>false</GroupFlag>
        </Import>
        <Import>
            <spId>1234</spId>
            <GroupFlag>false</GroupFlag>
        </Import>
    </Parent>
In that case the search pattern `'./spId'` cannot find those tags, since that
pattern matches only direct children of the root element. So you can use
[XPath](https://docs.python.org/2/library/xml.etree.elementtree.html#elementtree-xpath)
matching tags at all levels beneath, or, even better, point a direct path from
the root to the level where `spId` is located:

    # all spId subelements, on all levels beneath the current element
    root.findall('.//spId')

    # all spId elements directly in Import tags that are directly
    # beneath the root element (as in the above XML example)
    root.findall('./Import/spId')
* * *
Extracting URL parameters into Pandas DataFrame
Question: There is a list containing URL addresses with parameters:
    http://example.com/?param1=apple&param2=tomato&param3=carrot
    http://sample.com/?param1=banana&param3=potato&param4=berry
    http://example.org/?param2=apple&param3=tomato&param4=carrot
Each URL may contain any of 4 parameters.
I want to extract URL parameters and add them to a Pandas DataFrame. The
DataFrame should have a URL column and 4 columns with parameters. If a
parameter is not present in the URL, the cell is empty:
URL param1 param2 param3 param4
... apple tomato carrot
... banana potato berry
... apple tomato carrot
I was planning to use Python's built-in _urlparse_ module, which makes
extracting parameters easy:
import urlparse
url = 'http://example.com/?param1=apple&param2=tomato&param3=carrot'
par = urlparse.parse_qs(urlparse.urlparse(url).query)
print par['param1'], par['param2']
Out: ['apple'] ['tomato']
With _urlparse_ I can get the list of parameters in URLs:
import pandas as pd
    urls = ['http://example.com/?param1=apple&param2=tomato&param3=carrot',
            'http://sample.com/?param1=banana&param3=potato&param4=berry',
            'http://example.org/?param2=apple&param3=tomato&param4=carrot']
df = pd.DataFrame(urls, columns=['url'])
params = [urlparse.parse_qs(urlparse.urlparse(url).query) for url in urls]
print params
Out: [{'param1': ['apple'], 'param2': ['tomato'], 'param3': ['carrot']},
{'param1': ['banana'], 'param3': ['potato'], 'param4': ['berry']},
{'param2': ['apple'], 'param3': ['tomato'], 'param4': ['carrot']}]
...
I don't know how to add extracted parameters into the DataFrame. Maybe there
is a better way of doing it? The original file is ~1m URLs.
Answer: You can use a dictionary comprehension to extract the data in the parameters
per parameter. I'm not sure if you wanted the final values in list form. If
not, it would be easy to extract it.
    >>> pd.DataFrame({p: [d.get(p) for d in params]
    ...               for p in ['param1', 'param2', 'param3', 'param4']})
param1 param2 param3 param4
0 [apple] [tomato] [carrot] None
1 [banana] None [potato] [berry]
2 None [apple] [tomato] [carrot]
or...
    >>> pd.DataFrame({p: [d[p][0] if p in d else None for d in params]
    ...               for p in ['param1', 'param2', 'param3', 'param4']})
param1 param2 param3 param4
0 apple tomato carrot None
1 banana None potato berry
2 None apple tomato carrot
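A simpler alternative worth knowing (an aside, not from the original answer):
pandas will build the frame straight from the list of dicts, filling missing
parameters with NaN, after which the one-element lists can be unwrapped:

    params_df = pd.DataFrame(params)  # one column per parameter, NaN where absent
    params_df = params_df.applymap(lambda v: v[0] if isinstance(v, list) else v)
    result = df.join(params_df)       # attach back to the URL column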
* * *
Python: Save pickled object in the python script
Question: I have a class instance which I dump in a .pickle file using
`pickle.dump(instance,outputfile)` . I can distribute the script and the
pickle file and ask users to run the python script with the pickle file as an
argument, and then I can load that instance using
`pickle.load(pickle_file_passed_as_argument)`
Can I instead "embed" the pickle file inside the script itself, and then just
pass the script around? Then, when users run the script I can load the
instance of the "embedded" object and use all the object's member functions?
My question is similar to
[this question](http://stackoverflow.com/questions/1887968/embed-pickle-or-arbitrary-data-in-python-script).
I didn't understand any of the answers given there, as they're abstract
descriptions of what to do without code examples. I'm not sure how to use the
triple-quote trick to embed objects (though all the answers in that question
mention it). I've not used triple-quote strings like that before.
One of the answers mentions using `s = pickle.dumps(objectInstance)` followed by
`pickle.loads(s)` and combining that with the triple quotes to embed the object.
How exactly do I combine `dumps`/`loads` with the triple-quote trick? I don't
get that part.
Answer: What this [answer](http://stackoverflow.com/questions/1887968/embed-pickle-or-
arbitrary-data-in-python-script) means is to encode the data to be included
with `pickle` and `encode`:
import pickle
data = dict(a=1,b=2)
pickle.dumps(data).encode('base64')
> 'KGRwMApTJ2EnCnAxCkkxCnNTJ2InCnAyCkkyCnMu\n'
and to decode it accordingly in your file to be shared from the string being
written in the source code:
import pickle
# data encoded as a string
encoded_data = """KGRwMApTJ2EnCnAxCkkxCnNTJ2InCnAyCkkyCnMu\n"""
# decoding of the data
data = pickle.loads(encoded_data.decode('base64'))
print data
> {'a': 1, 'b': 2}
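Note that `str.encode('base64')` exists only on Python 2; on Python 3 an
equivalent sketch uses the `base64` module explicitly:

    import base64
    import pickle

    data = dict(a=1, b=2)
    # pickle to bytes, then base64 to an ASCII-safe string for embedding
    encoded_data = base64.b64encode(pickle.dumps(data)).decode('ascii')
    # reverse: base64 string back to bytes, then unpickle
    restored = pickle.loads(base64.b64decode(encoded_data))
    print(restored)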
|
Asynchronous cmd or readline in Python
Question: I would like to write a simple program that both (1) produces lines of output
simultaneously, and (2) accepts input from the user via a command line (via
readline). (Think of a text-mode chat client, for example. I want to be able
to compose my chat messages while still seeing incoming chat messages as they
are received.) To accomplish this, I would like to be able to call readline
asynchronously.
The Readline library explicitly supports this, via its [callback
interface](http://www.delorie.com/gnu/docs/readline/rlman_41.html):
> An alternate interface is available to plain readline(). Some applications
> need to interleave keyboard I/O with file, device, or window system I/O,
> typically by using a main loop to select() on various file descriptors. To
> accommodate this need, readline can also be invoked as a `callback' function
> from an event loop. There are functions available to make this easy.
* **Is this functionality available via Python?**
* **Is it possible to use the [Cmd](https://docs.python.org/2/library/cmd.html) class for such a purpose?**
Answer: Some installations of Python support the `readline` interface, some do
not. The only way to find out is by testing: does `import readline` work, or
does it raise an `ImportError`?
The Python `cmd` module is intended for a totally different purpose, namely
making your own shells, and is not helpful in this context.
Even if the `readline` module is not available, a similar task could be
accomplished with threads, as in the sketch below.
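A rough, thread-based sketch (output will still interleave with the prompt;
redrawing a partially typed line cleanly needs readline's callback interface
or curses):

    import threading
    import time

    def reader():
        # blocking reads happen here, off the main thread
        while True:
            line = raw_input('> ')
            print('You typed: %r' % line)

    t = threading.Thread(target=reader)
    t.daemon = True  # don't keep the interpreter alive for this thread
    t.start()

    for i in range(5):
        print('incoming message %d' % i)  # simulated asynchronous output
        time.sleep(1)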
|
Error while converting webp image file to jpg in python
Question: I have written a small program to convert **webp** to jpg in Python
import imghdr
from PIL import Image
im = Image.open("unnamed.webp").convert("RGB")
im.save("test.jpg","jpeg")
when executing it gives me following error
No handlers could be found for logger "PIL.ImageFile"
Traceback (most recent call last):
File "webptopng.py", line 3, in <module>
im = Image.open("unnamed.webp").convert("RGB")
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 2286, in open
% (filename if filename else fp))
IOError: cannot identify image file 'unnamed.webp'
I have installed pillow with webp capability. Here is my pillow installation
output
--------------------------------------------------------------------
PIL SETUP SUMMARY
--------------------------------------------------------------------
version Pillow 3.0.0
platform linux2 2.7.3 (default, Jun 22 2015, 19:33:41)
[GCC 4.6.3]
--------------------------------------------------------------------
--- TKINTER support available
--- JPEG support available
*** OPENJPEG (JPEG2000) support not available
--- ZLIB (PNG/ZIP) support available
*** LIBTIFF support not available
--- FREETYPE2 support available
*** LITTLECMS2 support not available
--- WEBP support available
*** WEBPMUX support not available
--------------------------------------------------------------------
Please help me figure out how to proceed.
Answer: This issue has been resolved now. I installed the latest libwebp
library, i.e. libwebp-0.4.3, and reinstalled Pillow.
[Here](https://github.com/python-pillow/Pillow/issues/1502#issuecomment-150771825)
is the GitHub issue thread in case someone faces the same issue.
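A quick way to verify that the reinstalled Pillow build can actually decode
WebP (the `features` helper is assumed to be present, as it is in recent
Pillow versions):

    from PIL import features

    # True only if this Pillow build was compiled against libwebp
    print(features.check('webp'))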
|
UnboundLocalError: local variable 'event' referenced before assignment [PYGAME]
Question: I am currently trying to create a game for a school assignment setting up
timers for some events in one of my levels, but the "UnboundLocalError" keeps
appearing and I'm not sure how to fix it. I've read some other posts where you
can set the variable as global but I've tried that in a few places and it
still gives me the same error. We are using python 3.4.3.
Here is my code:
import pygame
import random
import sys
import time
import os
#Global Colours and Variables
BLACK = (0, 0, 0)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
DARKGREEN = (0, 155, 0)
DARKGRAY = (40, 40, 40)
BLUE = (23, 176, 199)
WHITE = (255, 255, 255)
WIDTH = 640
HEIGHT = 480
TILE_SIZE = 32
NUM_TILES_WIDTH = WIDTH//TILE_SIZE
NUM_TILES_HEIGHT = HEIGHT//TILE_SIZE
COUNT = 10
COOKIEVENT = pygame.USEREVENT
pygame.time.set_timer(COOKIEVENT, 3000)
REVENT = pygame.USEREVENT + 3
pygame.time.set_timer(REVENT, 5000)
PEVENT = pygame.USEREVENT + 2
pygame.time.set_timer(PEVENT ,5000)
# - level 1 -
MAXHEALTH = 3
SCORE = 0
grave = pygame.sprite.Sprite()
#Global Definitions
candies = pygame.sprite.OrderedUpdates()
def add_candy(candies):
candy = pygame.sprite.Sprite()
candy.image = pygame.image.load('cookie.png')
candy.rect = candy.image.get_rect()
candy.rect.left = random.randint(1, 19)*32
candy.rect.top = random.randint(1, 13)*32
candies.add(candy)
for i in range(10):
add_candy(candies)
# OPENING SCREEN DEFINITIONS
def showStartScreen():
screen.fill((DARKGRAY))
StartFont = pygame.font.SysFont('Browallia New', 20, bold = False, italic = False)
first_image = StartFont.render("Instructions: Eat all the good cookies!", True, (255, 255, 255))
second_image = StartFont.render("Raccoon and purple cookies will kill you!", True, (255, 255, 255))
third_image = StartFont.render("Collect potions to regain HP!", True, (255, 255, 255))
first_rect = first_image.get_rect(centerx=WIDTH/2, centery=100)
second_rect = second_image.get_rect(centerx=WIDTH/2, centery=120)
third_rect = third_image.get_rect(centerx=WIDTH/2, centery=140)
screen.blit(first_image, first_rect)
screen.blit(second_image, second_rect)
screen.blit(third_image, third_rect)
while True:
drawPressKeyMsg()
if checkForKeyPress():
pygame.event.get()
return
pygame.display.update()
def showlevel2StartScreen():
screen.fill((DARKGRAY))
StartFont = pygame.font.SysFont('Browallia New', 20, bold = False, italic = False)
title_image = StartFont.render("Instructions: Eat all the cookies before the timer runs out!", True, (255, 255, 255))
title_rect = title_image.get_rect(centerx=WIDTH/2, centery=100)
screen.blit(title_image, title_rect)
while True:
drawPressKeyMsg()
if checkForKeyPress():
pygame.event.get()
return
pygame.display.update()
def drawPressKeyMsg():
StartFont = pygame.font.SysFont('Browallia New', 20, bold = False, italic = False)
pressKeyScreen = StartFont.render('Press any key to play.', True, WHITE)
pressKeyRect = pressKeyScreen.get_rect(centerx=WIDTH/2, centery=160)
screen.blit(pressKeyScreen, pressKeyRect)
def checkForKeyPress():
if len(pygame.event.get(pygame.QUIT)) > 0:
terminate()
keyUpEvents = pygame.event.get(pygame.KEYUP)
if len(keyUpEvents) == 0:
return None
if keyUpEvents[0] == pygame.K_ESCAPE:
terminate()
return keyUpEvents[0].key
def getRandomLocation():
return {'x': random.randint(0, TILE_SIZE - 1), 'y': random.randint(0, TILE_SIZE - 1)}
def terminate():
pygame.quit()
sys.exit()
# LEVEL 1 DEFINITIONS
pcandies = pygame.sprite.OrderedUpdates()
def add_candie(pcandies):
candy = pygame.sprite.Sprite()
candy.image = pygame.image.load('cookie.png')
candy.rect = candy.image.get_rect()
pcandy = pygame.sprite.Sprite()
pcandy.image = pygame.image.load('pcookie.png')
pcandy.rect = pcandy.image.get_rect()
pcandy.rect.left = random.randint(1, 19)*32
pcandy.rect.top = random.randint(1, 13)*32
candycollides = pygame.sprite.groupcollide(pcandies, candies, False, True)
while len(candycollides) > 0:
pcandies.remove(pcandy)
pcandy.rect.left = random.randint(1, 19)*32
pcandy.rect.top = random.randint(1, 13)*32
pcandies.add(pcandy)
for i in range (5):
add_candie(pcandies)
raccoons = pygame.sprite.GroupSingle()
def add_raccoon(raccoon):
raccoon = pygame.sprite.Sprite()
raccoon.image = pygame.image.load('enemy.gif')
raccoon.rect = raccoon.image.get_rect()
raccoon.rect.left = random.randint(1, 19)*32
raccoon.rect.top = random.randint(1, 13)*32
raccoon.add(raccoons)
potions = pygame.sprite.GroupSingle()
def add_potion(potion):
potion = pygame.sprite.Sprite()
potion.image = pygame.image.load('potion.gif')
potion.rect = potion.image.get_rect()
potion.rect.left = random.randint(1, 20)*32
potion.rect.top = random.randint(1, 13)*32
potion.add(potions)
#Classes
class Wall(pygame.sprite.Sprite):
def __init__(self, x, y, width, height, color):
""" Constructor function """
#Call the parent's constructor
super().__init__()
self.image = pygame.screen([width, height])
self.image.fill(BLUE)
self.rect = self.image.get_rect()
self.rect.y = y
self.rect.x = x
class Room():
#Each room has a list of walls, and of enemy sprites.
wall_list = None
candies = None
def __init__(self):
self.wall_list = pygame.sprite.Group()
self.candies = pygame.sprite.Group
class Player(pygame.sprite.Sprite):
# Set speed vector
change_x = 0
change_y = 0
def __init__(self, x, y):
super().__init__()
#Setting up main character + Adding image/properties to hero!
player = pygame.sprite.Sprite()
player.image = pygame.image.load('rabbit.png')
player_group = pygame.sprite.GroupSingle(hero)
player.rect = player.image.get_rect()
player.rect.y = y
player.rect.x = x
def changespeed(self, x, y):
""" Change the speed of the player. Called with a keypress. """
player.change_x += x
player.change_y += y
def move(self, walls):
""" Find a new position for the player """
# Move left/right
player.rect.x += player.change_x
# Did this update cause us to hit a wall?
block_hit_list = pygame.sprite.spritecollide(player, walls, False)
for block in block_hit_list:
# If we are moving right, set our right side to the left side of
# the item we hit
if player.change_x > 0:
player.rect.right = block.rect.left
else:
# Otherwise if we are moving left, do the opposite.
player.rect.left = block.rect.right
# Move up/down
player.rect.y += player.change_y
# Check and see if we hit anything
block_hit_list = pygame.sprite.spritecollide(self, walls, False)
for block in block_hit_list:
# Reset our position based on the top/bottom of the object.
if player.change_y > 0:
player.rect.bottom = block.rect.top
else:
player.rect.top = block.rect.bottom
class levelwalls(Room):
def __init__(self):
Room.__init__(self)
#Make the walls. (x_pos, y_pos, width, height)
# This is a list of walls. Each is in the form [x, y, width, height]
walls = [[0, 0, 20, 250, WHITE],
[0, 350, 20, 250, WHITE],
[780, 0, 20, 250, WHITE],
[780, 350, 20, 250, WHITE],
[20, 0, 760, 20, WHITE],
[20, 580, 760, 20, WHITE],
[390, 50, 20, 500, BLUE]
]
# Loop through the list. Create the wall, add it to the list
for item in walls:
wall = Wall(item[0], item[1], item[2], item[3], item[4])
self.wall_list.add(wall)
def main():
pygame.init()
global FPSCLOCK, screen, my_font, munch_sound, bunny_sound, potion_sound, COOKIEVENT, PEVENT, REVENT
FPSCLOCK = pygame.time.Clock()
munch_sound = pygame.mixer.Sound('crunch.wav')
bunny_sound = pygame.mixer.Sound('sneeze.wav')
screen = pygame.display.set_mode((WIDTH,HEIGHT))
my_font = pygame.font.SysFont('Browallia New', 34, bold = False, italic = False)
pygame.display.set_caption('GRABBIT')
#Sounds
pygame.mixer.music.load('Music2.mp3')
pygame.mixer.music.play(-1,0.0)
potion_sound = pygame.mixer.Sound('Correct.wav')
COOKIEVENT = pygame.USEREVENT
pygame.time.set_timer(COOKIEVENT, 3000)
REVENT = pygame.USEREVENT + 3
pygame.time.set_timer(REVENT, 5000)
PEVENT = pygame.USEREVENT + 2
pygame.time.set_timer(PEVENT ,5000)
showStartScreen()
while True:
level1_init()
showGameOverScreen
def level1_init():
global COOKIEVENT, PEVENT, REVENT
finish = False
win = False
gameOverMode = False
move = True
MAXHEALTH = 3
SCORE = 0
count = 10
COOKIEVENT = pygame.USEREVENT
pygame.time.set_timer(COOKIEVENT, 3000)
REVENT = pygame.USEREVENT + 3
pygame.time.set_timer(REVENT, 5000)
PEVENT = pygame.USEREVENT + 2
pygame.time.set_timer(PEVENT ,5000)
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
terminate()
if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
terminate()
def drawHealthMeter(currentHealth):
for i in range(currentHealth): # draw health bars
pygame.draw.rect(screen, BLUE, (15, 5 + (10 * MAXHEALTH) - i * 10, 20, 10))
for i in range(MAXHEALTH): # draw the white outlines
pygame.draw.rect(screen, WHITE, (15, 5 + (10 * MAXHEALTH) - i * 10, 20, 10), 1)
if event.type == COOKIEVENT:
if win == False and gameOverMode == False:
add_candy(candies)
if event.type == PEVENT:
if win == False and gameOverMode == False:
add_potion(potions)
if event.type == REVENT:
if win == False and gameOverMode == False:
add_raccoon(raccoons)
if move == True:
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
hero.rect.top -= TILE_SIZE
elif event.key == pygame.K_DOWN:
hero.rect.top += TILE_SIZE
elif event.key == pygame.K_RIGHT:
hero.rect.right += TILE_SIZE
elif event.key == pygame.K_LEFT:
hero.rect.right -= TILE_SIZE
elif event.key == pygame.K_ESCAPE:
terminate()
screen.fill((DARKGRAY))
grass = pygame.image.load('grass.jpg')
for x in range(int(WIDTH/grass.get_width()+3)):
for y in range(int(HEIGHT/grass.get_height()+3)):
screen.blit(grass,(x*100,y*100))
candies.draw(screen)
pcandies.draw(screen)
potions.draw(screen)
hero_group.draw(screen)
raccoons.draw(screen)
playerObj = {'health': MAXHEALTH}
drawHealthMeter(playerObj['health'])
#Collision with Raccoon
instantdeath = pygame.sprite.groupcollide(hero_group, raccoons, False, True)
if len(instantdeath) > 0:
bunny_sound.play()
MAXHEALTH = 0
#Health Potions
morehealth = pygame.sprite.groupcollide(hero_group, potions, False, True)
if len(morehealth) > 0:
potion_sound.play()
MAXHEALTH = MAXHEALTH + 1
#Collision with Bad Cookies
bad = pygame.sprite.groupcollide(hero_group, pcandies, False, True)
if len(bad) > 0:
bunny_sound.play()
MAXHEALTH = MAXHEALTH - 1
if playerObj['health'] == 0:
gameOverMode = True
move = False
grave.image = pygame.image.load('grave.png')
grave.rect = grave.image.get_rect(left = hero.rect.left, top = hero.rect.top)
screen.blit(grave.image, grave.rect)
#Collision with Good Cookies
collides = pygame.sprite.groupcollide(hero_group, candies, False, True)
if len(collides) > 0:
munch_sound.play()
SCORE += 1
if len(candies) == 0:
win = True
scoretext = my_font.render("Score = "+str(SCORE), 1, (255, 255, 255))
screen.blit(scoretext, (520, 5))
#If you collide with Racoon
if gameOverMode == True:
font = pygame.font.SysFont('Browallia New', 36, bold = False, italic = False)
text_image = font.render("You Lose. Game Over!", True, (255, 255, 255))
text_rect = text_image.get_rect(centerx=WIDTH/2, centery=100)
screen.blit(text_image, text_rect)
if win:
move = False
CEVENT = pygame.USEREVENT + 5
pygame.time.set_timer(CEVENT, 1000)
if count > 0:
if event.type == CEVENT:
count -= 1
text_image = my_font.render('You won! Next level will begin in ' + str(count) + ' seconds', True, (255, 255, 255))
text_rect = text_image.get_rect(centerx=WIDTH/2, centery=100)
screen.blit(text_image, text_rect)
score_text_image = my_font.render("You achieved a score of " + str(SCORE), True, (255, 255, 255))
score_text_rect = score_text_image.get_rect(centerx = WIDTH/2, centery = 150)
screen.blit(score_text_image, score_text_rect)
if count == 0:
showlevel2StartScreen()
win = False
level2_init()
pygame.display.update()
main()
pygame.quit()
The error comes up as:
Traceback (most recent call last):
File "F:\Year 10\IST\Programming\Pygame\Final Game\Final Game - Grabbit.py", line 426, in <module>
main()
File "F:\Year 10\IST\Programming\Pygame\Final Game\Final Game - Grabbit.py", line 284, in main
level1_init()
File "F:\Year 10\IST\Programming\Pygame\Final Game\Final Game - Grabbit.py", line 324, in level1_init
if event.type == COOKIEVENT:
UnboundLocalError: local variable 'event' referenced before assignment
Is anyone able to suggest what I could do to fix this? Would also appreciate
any tips for improvements in my code. I'm a beginner at python and this is the
first project I've undertaken using this coding language. Thank you.
Answer: The `event` variable is not initialized in the line
if event.type == COOKIEVENT:
You may need to indent that part of the code so that it goes inside the loop
for event in pygame.event.get():
....
    if event.type == COOKIEVENT:
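A minimal sketch of how the body of `level1_init` could be restructured, with
the timer-event checks moved inside the `for event` loop, where `event` is
actually bound:

    while True:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                terminate()
            # timer events are only visible inside this loop,
            # where `event` is bound
            if event.type == COOKIEVENT and not (win or gameOverMode):
                add_candy(candies)
            if event.type == PEVENT and not (win or gameOverMode):
                add_potion(potions)
            if event.type == REVENT and not (win or gameOverMode):
                add_raccoon(raccoons)
        # drawing, movement and collision logic goes here,
        # running once per frame rather than once per event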
|
Error when migrating: django.db.utils.IntegrityError: column "primary_language_id" contains null values
Question: I am working on a Django project and made a model with several instances of a
models.ForeignKey with the same Model.
class Country(models.Model):
name = models.CharField(max_length=100)
primary_language = models.ForeignKey('Language', related_name='primary_language', default="")
secondary_language = models.ForeignKey('Language', related_name='secondary_language', default="")
tertiary_language = models.ForeignKey('Language', related_name='tertiary_language', default="")
def __str__(self):
return self.name
This is the Language model:
class Language(models.Model):
name = models.CharField(max_length=50)
abbreviation = models.CharField(max_length=2)
def __str__(self):
return self.name
When running `$ python3 manage.py makemigrations base` it works fine, no
errors. I have included the 2 migration files I think are the most important.
class Migration(migrations.Migration):
dependencies = [
('base', '0002_country_country_code'),
]
operations = [
migrations.CreateModel(
name='Currency',
fields=[
('id', models.AutoField(serialize=False, auto_created=True, verbose_name='ID', primary_key=True)),
('name', models.CharField(max_length=50)),
('abbreviation', models.CharField(max_length=3)),
],
),
migrations.CreateModel(
name='Language',
fields=[
('id', models.AutoField(serialize=False, auto_created=True, verbose_name='ID', primary_key=True)),
('name', models.CharField(max_length=50)),
('abbreviation', models.CharField(max_length=2)),
],
),
migrations.AddField(
model_name='country',
name='phone_country_code',
field=models.CharField(default='', max_length=7),
),
migrations.AlterField(
model_name='country',
name='country_code',
field=models.CharField(default='', max_length=2),
),
migrations.AddField(
model_name='country',
name='primary_language',
field=models.ForeignKey(to='base.Language', default=''),
),
migrations.AddField(
model_name='country',
name='secondary_language',
field=models.ForeignKey(related_name='secondary_language', to='base.Language', default=''),
),
migrations.AddField(
model_name='country',
name='tertiary_language',
field=models.ForeignKey(related_name='tertiary_language', to='base.Language', default=''),
),
]
class Migration(migrations.Migration):
dependencies = [
('base', '0006_auto_20151023_0918'),
]
operations = [
migrations.AddField(
model_name='country',
name='primary_language',
field=models.ForeignKey(default='', related_name='primary_language', to='base.Language'),
),
migrations.AddField(
model_name='country',
name='secondary_language',
field=models.ForeignKey(default='', related_name='secondary_language', to='base.Language'),
),
migrations.AddField(
model_name='country',
name='tertiary_language',
field=models.ForeignKey(default='', related_name='tertiary_language', to='base.Language'),
),
migrations.AlterField(
model_name='language',
name='abbreviation',
field=models.CharField(max_length=2),
),
migrations.AlterField(
model_name='language',
name='name',
field=models.CharField(max_length=50),
),
]
Now when running the migration I get an error message I can't figure out. I
think these are the lines that matter in the stacktrace:
johan@johan-pc:~/sdp/gezelligehotelletjes_com$ python3 manage.py migrate
Operations to perform:
Synchronize unmigrated apps: staticfiles, messages
Apply all migrations: auth, base, sessions, admin, contenttypes, hotel
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying base.0003_auto_20151023_0912...Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: column "primary_language_id" contains null values
The above exception was the direct cause of the following exception:
django.db.utils.IntegrityError: column "primary_language_id" contains null values
First of all I do not have a column "primary_language_id" but I guess this is
created by Django. Even when deleting the entire Language model and the lines
for the languages in the Country model, I still get this error.
Could someone help me with figuring this out?
Answer: You already have `Country` objects in your database.
When you add the `primary_language_id` column to them (which represents the
`primary_language` `ForeignKey`), those countries end up with an empty
`primary_language` (the `default=""` you specified is not a valid `Language`
primary key, so it cannot be used to fill the column), which throws an
error (because you didn't allow empty values for `primary_language` either).
The solution depends on how you want that migration to work. You can add
`null = True` to the `primary_language` `ForeignKey` definition (note that
`blank = True` alone only affects form validation, not the database column),
add a proper default, or break your migration down into 3 migrations (add the
column with `null = True`, set values, then remove `null = True`).
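A minimal sketch of the first option, assuming it is acceptable for a
`Country` to have no language set:

    class Country(models.Model):
        name = models.CharField(max_length=100)
        # null=True lets existing rows keep an empty value at the database
        # level, so the migration no longer violates the NOT NULL constraint
        primary_language = models.ForeignKey(
            'Language', related_name='primary_language',
            null=True, blank=True)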
|
How to obtain current instance ID from boto3?
Question: Is there an equivalent of
curl http://169.254.169.254/latest/meta-data/instance-id
with boto3 to obtain the instance-id of the currently running instance in Python?
Answer: There is no api for it, no. There is
[`InstanceMetadataFetcher`](https://github.com/boto/botocore/blob/develop/botocore/utils.py#L157),
but it is currently only used to fetch IAM roles for authentication.
Any sort of `GET` should serve you though. Botocore uses the python
[`requests`](http://docs.python-requests.org/en/latest/) library which is
quite nice.
import requests
response = requests.get('http://169.254.169.254/latest/meta-data/instance-id')
instance_id = response.text
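One practical note: outside EC2 the metadata address is unreachable and the
request will hang, so a short timeout is a common safeguard:

    import requests

    # the timeout avoids hanging when not running on an EC2 instance
    response = requests.get(
        'http://169.254.169.254/latest/meta-data/instance-id', timeout=2)
    instance_id = response.text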
|
Python multiprocessing.Pool.map dying silently
Question: I have tried to put a for loop in parallel to speed up some code. consider
this:
from multiprocessing import Pool
results = []
def do_stuff(str):
print str
results.append(str)
p = Pool(4)
p.map(do_stuff, ['str1','str2','str3',...]) # many strings here ~ 2000
p.close()
print results
I have some debug messages showing from `do_stuff` to keep track of how far
the program gets before dying. It seems to die at different points each time
through. For example, it will print 'str297' and then just stop running; I
will see all the CPUs stop working and the program just sits there. There
should be some error occurring, but there is no error message showing. Does
anyone know how to debug this problem?
**UPDATE**
I tried re-working the code a little bit. Instead of using the `map` function
I tried the `apply_async` function like this:
pool = Pool(5)
results = pool.map(do_sym, underlyings[0::10])
results = []
for sym in underlyings[0::10]:
r = pool.apply_async(do_sym, [sym])
results.append(r)
pool.close()
pool.join()
for result in results:
print result.get(timeout=1000)
This worked just as well as the `map` function, but ended up hanging in the
same way. It would never get to the for loop where it prints the results.
After working on this a little more, and trying some debug logging as
suggested in unutbu's answer, I will give some more info here. The problem
is very strange. It seems like the pool is just hanging there and unable to
close and continue the program. I use the PyDev environment for testing my
programs, but I thought I would try just running python in the console. In the
console I get the same behavior, but when I press control+C to kill the
program, I get some output which might explain where the problem is:
> KeyboardInterrupt ^CProcess PoolWorker-47: Traceback (most recent call
> last): File "/usr/lib/python2.7/multiprocessing/process.py", line
> 258, in _bootstrap Process PoolWorker-48: Traceback (most recent call
> last): File "/usr/lib/python2.7/multiprocessing/process.py", line
> 258, in _bootstrap Process PoolWorker-45: Process PoolWorker-46:
> Process PoolWorker-44:
> self.run() File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
> self._target(*self._args, **self._kwargs) File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
> Traceback (most recent call last): Traceback (most recent call last):
> Traceback (most recent call last): File
> "/usr/lib/python2.7/multiprocessing/process.py", line 258, in
> _bootstrap File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap File
> "/usr/lib/python2.7/multiprocessing/process.py", line 258, in
> _bootstrap
> task = get() File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
> self.run() File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
> racquire()
> self._target(*self._args, **self._kwargs) File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
> KeyboardInterrupt
> task = get() File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
> self.run()
> self.run() File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
> self.run() File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run File
> "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
> self._target(*self._args, **self._kwargs) File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
> self._target(*self._args, **self._kwargs)
> self._target(*self._args, **self._kwargs) File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
> racquire() File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker KeyboardInterrupt
> task = get() File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
> task = get()
> task = get() File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
> File "/usr/lib/python2.7/multiprocessing/queues.py", line 374, in get
> racquire()
> return recv()
> racquire() KeyboardInterrupt KeyboardInterrupt KeyboardInterrupt
Then actually the program never dies. I end up having to close the terminal
window to kill it.
**UPDATE 2**
I narrowed the problem down to the function that is running in the pool; it
was a MySQL database transaction that was causing it. I was using the
`MySQLdb` package before. I switched to the `pandas.read_sql` function for the
transaction, and it is working now.
Answer: `pool.map` returns the results in a list. So instead of calling
`results.append` in the concurrent processes (which will not work since each
process will have its own independent copy of `results`), assign `results` to
the value returned by `pool.map` in the main process:
import multiprocessing as mp
def do_stuff(text):
return text
if __name__ == '__main__':
p = mp.Pool(4)
tasks = ['str{}'.format(i) for i in range(2000)]
results = p.map(do_stuff, tasks)
p.close()
print(results)
yields
['str0', 'str1', 'str2', 'str3', ...]
* * *
One method of debugging scripts that use multiprocessing is to add logging
statements. The `multiprocessing` module provides a helper function,
[`mp.log_to_stderr`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.log_to_stderr),
for this purpose. For example,
import multiprocessing as mp
import logging
logger = mp.log_to_stderr(logging.DEBUG)
def do_stuff(text):
logger.info('Received {}'.format(text))
return text
if __name__ == '__main__':
p = mp.Pool(4)
tasks = ['str{}'.format(i) for i in range(2000)]
results = p.map(do_stuff, tasks)
p.close()
logger.info(results)
which yields logging output like:
[DEBUG/MainProcess] created semlock with handle 139824443588608
[DEBUG/MainProcess] created semlock with handle 139824443584512
[DEBUG/MainProcess] created semlock with handle 139824443580416
[DEBUG/MainProcess] created semlock with handle 139824443576320
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-1] child process calling self.run()
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-2] child process calling self.run()
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-3] child process calling self.run()
[DEBUG/MainProcess] added worker
[INFO/PoolWorker-4] child process calling self.run()
[INFO/PoolWorker-1] Received str0
[INFO/PoolWorker-2] Received str125
[INFO/PoolWorker-3] Received str250
[INFO/PoolWorker-4] Received str375
[INFO/PoolWorker-3] Received str251
...
[INFO/PoolWorker-4] Received str1997
[INFO/PoolWorker-4] Received str1998
[INFO/PoolWorker-4] Received str1999
[DEBUG/MainProcess] closing pool
[INFO/MainProcess] ['str0', 'str1', 'str2', 'str3', ...]
[DEBUG/MainProcess] worker handler exiting
[DEBUG/MainProcess] task handler got sentinel
[INFO/MainProcess] process shutting down
[DEBUG/MainProcess] task handler sending sentinel to result handler
[DEBUG/MainProcess] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] finalizing pool
[DEBUG/MainProcess] task handler sending sentinel to workers
[DEBUG/MainProcess] helping task handler/workers to finish
[DEBUG/MainProcess] result handler got sentinel
[DEBUG/PoolWorker-3] worker got sentinel -- exiting
[DEBUG/MainProcess] removing tasks from inqueue until task handler finished
[DEBUG/MainProcess] ensuring that outqueue is not full
[DEBUG/MainProcess] task handler exiting
[DEBUG/PoolWorker-3] worker exiting after 2 tasks
[INFO/PoolWorker-3] process shutting down
[DEBUG/MainProcess] result handler exiting: len(cache)=0, thread._state=0
[DEBUG/PoolWorker-3] running all "atexit" finalizers with priority >= 0
[DEBUG/MainProcess] joining worker handler
[DEBUG/MainProcess] terminating workers
[DEBUG/PoolWorker-3] running the remaining "atexit" finalizers
[DEBUG/MainProcess] joining task handler
[DEBUG/MainProcess] joining result handler
[DEBUG/MainProcess] joining pool workers
[DEBUG/MainProcess] cleaning up worker 4811
[DEBUG/MainProcess] running the remaining "atexit" finalizers
Notice that each line indicates which process emitted the logging record. So
the output to some extent serializes the order of events from amongst your
concurrent processes.
By judicious placement of `logging.info` calls you should be able to narrow
down where and maybe why your script is "dying silently" (or, at least it
won't be quite so silent as it dies).
|
Python Twitter Api: AttributeError: module 'twitter' has no attribute 'trends'
Question:
import twitter
import json
OAUTH_TOKEN='aaa'
OAUTH_SECRET='bbb'
CONSUMER_KEY='ccc'
CONSUMER_SECRET='ddd'
auth=twitter.oauth.OAuth(OAUTH_TOKEN,OAUTH_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
twitter_api=twitter.Twitter(auth=auth)
print(twitter_api)
WORLD_WOE_ID=1
world_trends=twitter.trends.place(_id=WORLD_WOE_ID)
print(world_trends)
I always get the error: `AttributeError: module 'twitter' has no attribute
'trends'`
Answer: You have to call `.trends` on an instance of `Twitter` instead of the module.
This should work for you:
import twitter
import json
OAUTH_TOKEN='aaa'
OAUTH_SECRET='bbb'
CONSUMER_KEY='ccc'
CONSUMER_SECRET='ddd'
auth=twitter.oauth.OAuth(OAUTH_TOKEN,OAUTH_SECRET,CONSUMER_KEY,CONSUMER_SECRET)
twitter_api=twitter.Twitter(auth=auth)
print(twitter_api)
WORLD_WOE_ID=1
world_trends=twitter_api.trends.place(_id=WORLD_WOE_ID)
print(world_trends)
|
Training a new Stanford part-of-speech tagger from within the NLTK
Question: I've trained a part-of-speech tagger for an uncommon language (Uyghur) using
the Stanford POS tagger and some self-collected training data. I've been using
the NLTK's `nltk.tag.stanford.POSTagger` interface to tag individual sentences
in Python. This works well for most of my purposes: (running from
`/usr/share/stanford-postagger`)
>>> from nltk.tag.stanford import POSTagger
>>> uy = POSTagger('models/uyghur.tagger', 'stanford-postagger.jar')
>>> uy.tag('Men méning mantini yégenlikimni bilimen .'.split())
[[(u'Men', u'PN1s'), (u'm\xe9ning', u'PN1s.GEN'), (u'mantini', u'N-ACC'), (u'y\xe9genlikimni', u'Vt-PST.dir-1s2'), (u'bilimen', u'Vt-PRS-1s1'), (u'.', u'PUNCT')]]
I would like to do a ten-fold cross-validation to get a better sense of the
accuracy of this tagger, i.e., use each tenth of my complete training data as
test data for a tagger trained on the other nine-tenths of the data. Splitting
the data set ten ways is no problem in Python, but I don't know if there's a
way to train a new tagger from within Python. When I've done it, it's been
from the command line using `java -mx1g -classpath /usr/share/stanford-
postagger/stanford-postagger.jar edu.stanford.nlp.tagger.maxent.MaxentTagger
-props uyghurtagger.props`.
Is it possible to train a new Stanford tagger from within Python using the
NLTK interface, or will I need to create it manually via the command line each
time?
Answer: This was much simpler to implement than I had thought.
import subprocess32
subprocess32.call(
['java', '-mx1g', '-classpath',
'/usr/share/stanford-postagger/stanford-postagger.jar',
'edu.stanford.nlp.tagger.maxent.MaxentTagger', '-props',
'uyghurtagger.props'])
It's really just as simple as passing a list of the command line arguments to
`subprocess32.call()`. (I am using `subprocess32` instead of `subprocess` per
the recommendation in the [`subprocess`
docs](https://docs.python.org/2/library/subprocess.html)).
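For the ten-fold cross-validation itself, the same call can simply go in a
loop; a hedged sketch, assuming one hypothetical `.props` file per fold that
points at that fold's train/test split:

    import subprocess32

    for i in range(10):
        # the 'uyghurtagger_fold{i}.props' files are an assumption: each
        # one is expected to reference the i-th train/test split
        subprocess32.call(
            ['java', '-mx1g', '-classpath',
             '/usr/share/stanford-postagger/stanford-postagger.jar',
             'edu.stanford.nlp.tagger.maxent.MaxentTagger',
             '-props', 'uyghurtagger_fold{}.props'.format(i)])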
|
Create new screen buffer with win32api in Python
Question: I want to draw a specific image to my second screen in Windows 7 using Python
3.4. I can get the handle and screen dimensions using pywin32:
import win32api
screens = win32api.EnumDisplayMonitors()
I get the handles and dimensions of my screens:
[(<PyHANDLE:393217>, <PyHANDLE:0>, (0, 0, 1280, 720)),
(<PyHANDLE:7472233>, <PyHANDLE:0>, (1920, 0, 3360, 900))]
I thought of creating a new buffer with the dimensions of my screen, writing
my data/image to the new buffer, and setting it as the active screen.
I don't think I can do that with the pywin32 module, so I thought of accessing
the Windows API through ctypes. But I cannot find the functions of the API as
described here: <https://msdn.microsoft.com/en-us/library/windows/desktop/ms685032(v=vs.85).aspx>
How can I do that? Thank you!
Answer: You can use the win32console module:
=> python
Python 3.4.3 |Anaconda 2.3.0 (64-bit)| (default, Mar 6 2015, 12:06:10) [MSC v.1600 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import win32console
>>> help(win32console.CreateConsoleScreenBuffer)
Help on built-in function CreateConsoleScreenBuffer in module win32console:
CreateConsoleScreenBuffer(...)
Creates a new console screen buffer
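A hedged sketch of creating and switching buffers with pywin32 (console text
only; drawing an image to a second monitor would instead need GDI via
win32gui/win32ui, and the default arguments of `CreateConsoleScreenBuffer`
are assumed here):

    import win32console

    # create a fresh buffer and make it the one the console displays
    buf = win32console.CreateConsoleScreenBuffer()
    buf.SetConsoleActiveScreenBuffer()
    buf.WriteConsole('Hello from the new buffer\n')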
|
How to prevent automatic assignment of values to missing data imported from SPSS
Question: Let's say I have an spss file named "ab.sav" which looks like this:
gender value value2
F 433 329
. . 787
. . .
M 121 .
F 311 120
. . 899
M 341 .
In spss (Variable View) I defined the labels of `gender` with the values `1`
and `2` for `M` and `F` respectively.
When I load this in python using the following commands:
>>> from rpy2.robjects.packages import importr
>>> from rpy2.robjects import pandas2ri
>>> foreign=importr("foreign")
>>> data=foreign.read_spss("ab.sav", to_data_frame=True, use_value_labels=True)
>>> pandas2ri.activate()
>>> data2=pandas2ri.ri2py(data)
I get the following dataframe:
>>> data2
gender value value2
0 F 433 329
1 M NaN 787
2 M NaN NaN
3 M 121 NaN
4 F 311 120
5 M NaN 899
6 M 341 NaN
So the missing values in the `gender` column are filled in with the next known
value from a subsequent case. Is there a simple way to prevent this?
When I change `use_value_labels` to `False` I get the expected result though:
>>> data2
gender value value2
0 2 433 329
1 NaN NaN 787
2 NaN NaN NaN
3 1 121 NaN
4 2 311 120
5 NaN NaN 899
6 1 341 NaN
However I'd like to be able to use the labels instead of numeric values for
`gender` as above. Ideally the output should be:
>>> data2
gender value value2
0 F 433 329
1 NaN NaN 787
2 NaN NaN NaN
3 M 121 NaN
4 F 311 120
5 NaN NaN 899
6 M 341 NaN
Answer: Assuming `data2` is a pandas DataFrame, and there's a 1-to-1 mapping between
nulls in `value` and `gender`, you can do the following:
    import numpy as np
    import pandas

    nulls = pandas.isnull(data2['value'])
    data2.loc[nulls, 'gender'] = np.nan
And that turns it into:
gender value value2
0 F 433 329
1 NaN NaN 787
2 NaN NaN NaN
3 M 121 NaN
4 F 311 120
5 NaN NaN 899
6 M 341 NaN
|
Image subtraction using opencv and python
Question: I want to subtract one image from another.
This is what I have done so far.
import cv2
import numpy as np
img1 = cv2.imread('template.jpg',0)
img2 = cv2.imread('shot_one.jpg',0)
img3 = img1-img2
cv2.imshow('result',img3)
cv2.waitKey()
cv2.destroyAllWindows()
I'm getting error saying:
Traceback (most recent call last):
File "E:/Python Programme/ATSS/subtra.py", line 7, in <module>
img3 = img1-img2
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
I have followed [this
question](http://stackoverflow.com/questions/21425992/how-to-subtract-two-
images-using-python-opencv2-to-get-the-foreground-object). Any suggestions?
Answer: The error line `unsupported operand type(s) for -: 'NoneType' and 'int'`
clearly states that one of your image paths is invalid. OpenCV doesn't throw an
error when the path is invalid; instead it returns a `None` object.
    import cv2
    import numpy as np

    img1 = cv2.imread('template.jpg', 0)
    img2 = cv2.imread('shot_one.jpg', 0)

    # imread returns None on failure, and the truth value of a loaded
    # numpy array is ambiguous, so compare against None explicitly
    if img1 is not None and img2 is not None:
        img3 = img1 - img2
        cv2.imshow('result', img3)
        cv2.waitKey()
        cv2.destroyAllWindows()
    else:
        print "Sorry the images weren't loaded properly."
|
Python printing inline with a sleep command
Question: Why does the following code
from __future__ import print_function
from time import sleep
def print_inline():
print("Hello ", end='')
sleep(5)
print("World")
print_inline()
wait until the sleep is done to print `Hello World`? Shouldn't it print
`Hello`, then wait for 5 seconds, and print `World` on the same line?
Answer: No, it shouldn't. "Hello" sits in the output buffer until there's a reason to
flush it to the output device. In this case, that "reason" is the end of the
program. If you want the delayed effect, add
import sys
sys.stdout.flush()
just before your sleep statement.
See also a more complete discussion
[here](http://stackoverflow.com/questions/230751/how-to-flush-output-of-
python-print).
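On Python 3.3 and later, `print` itself accepts a `flush` keyword (the
Python 2 `print_function` backport does not), so the explicit flush can be
folded into the call; a minimal sketch:

    from time import sleep

    def print_inline():
        # flush=True pushes "Hello " to the terminal immediately
        print("Hello ", end='', flush=True)
        sleep(5)
        print("World")

    print_inline()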
|
How to check the classpath where Jython ScriptEngine looks for python module?
Question: I have this `Java` code which I am using to run a `python script` using
`Jython` `ScriptEngine`:
StringWriter writer = new StringWriter();
ScriptEngineManager manager = new ScriptEngineManager();
ScriptContext context = new SimpleScriptContext();
context.setWriter(writer);
ScriptEngine engine = manager.getEngineByName("python");
engine.eval(new FileReader("/Users/folder1/test.py"), context);
In my `python script` there are several module import statements, and when I
run the `Java` code I get the error `javax.script.ScriptException: ImportError:
No module named psycopg2`. All the modules are installed on my machine, and
when I run the `python script` normally through CLI it executes. So my
understanding is that `Jython` `classpath` is looking somewhere else for the
`python` modules.
How can I check where the `Jython` `ScriptEngine` looks for modules, and then
modify it to include where my `python` modules are actually present? I am
new to this so please forgive any lack of understanding.
**Note:** I have `CentOS` and `python 2.7.5` installed on my machine
Answer: [`sys.path`](https://docs.python.org/2/tutorial/modules.html#the-module-
search-path) is a list of strings that specifies where Jython (and Python)
searches for modules. You can check its value like so:
engine.eval("import sys; print sys.path");
To add a directory to `sys.path`, use the
[`JYTHONPATH`](http://www.jython.org/docs/using/cmdline.html#jython-launcher-
options) environment variable. If `yourmodule` is installed in
`/path/to/modules/yourmodule`, it would look like this:
export JYTHONPATH=/path/to/modules
Another way is to use the
[`python.path`](https://wiki.python.org/jython/UserGuide#registry-properties)
property.
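You can also extend the path from inside the script itself (or via a one-line
`engine.eval` before loading it); a minimal sketch with a hypothetical module
directory:

    import sys
    # '/path/to/modules' is a placeholder for wherever your modules live
    sys.path.append('/path/to/modules')
    import yourmodule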
* * *
Unfortunately, in the case of
[psycopg2](http://initd.org/psycopg/docs/install.html) the above won't help
since that package is a C extension and therefore only compatible with
CPython. Perhaps you can use the port of Psycopg for Ctypes instead.
|
How can I draw diagrams from database in python
Question: How can I visually model items in a database using python?
I have a [Django](https://www.djangoproject.com/) project that currently
models my home network in the admin views. It currently describes what devices
there are and what they are connected to. For example:
Devices:
Computer1
Computer2
Laptop1
Mobile1
Router1
ROuter2
ConnectionTypes:
Wireless24ghz
Wireless5ghz
cat5
cat5e
cat6
Connections:
host1:
src:Computer1
dst:Router1
con_type:cat5e
trunk1:
src:Router1
dst:Router2
con_type:cat6
host2:
src:Mobile1
dst:Router1
con_type:Wireless24ghz
The database is a bit more complex than this, however I'm keeping it simple
for now as it's not that important.
What I am wondering is how I can graphically model my network using Python
code that looks at the database tables. By graphically model I mean something
similar to a Visio diagram in that I can see it and (not necessary but a HUGE
bonus) interact with it, either via webpage or application.
Are there any existing python libraries that provide this sort of
functionality? I understand JavaScript is good for this kind of modelling but
I'm completely unsure of how I would go about doing it.
It's worth noting I'm not after anything fancy, simply drawing devices as
rectangles and connections as lines going between rectangles is good enough.
Answer: I am pretty sure there is no ready-made solution for this. Look at the
graphviz library and write a management command to create a DOT graph. Here is
a graphviz tutorial article: <http://matthiaseisen.com/articles/graphviz/>
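A rough sketch with the `graphviz` package, using the device and connection
names from the question; `render` writes a PNG that a Django view could serve:

    import graphviz

    dot = graphviz.Graph(comment='Home network')
    # devices become boxes
    for device in ('Computer1', 'Router1', 'Router2', 'Mobile1'):
        dot.node(device, shape='box')
    # connections become labelled edges
    dot.edge('Computer1', 'Router1', label='cat5e')
    dot.edge('Router1', 'Router2', label='cat6')
    dot.edge('Mobile1', 'Router1', label='Wireless24ghz')
    dot.format = 'png'
    dot.render('network')  # writes 'network' (DOT source) and 'network.png'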
|
Can't move up and down while holding left or right
Question: I am starting an RPG and when the character is running against a wall to
either side, I can't get the player to move up and down smoothly. Also, the
character can't move left or right smoothly while holding down the up and down
key and running against a wall to the north or south.
I've tried different configurations of the 'move' function with no success. I
know why the algorithm doesn't work, I just can't figure out how to construct
the branch so that when running to the right, it will set 'self.rect.right =
p.rect.left' without setting 'self.rect.bottom = p.rect.top' on the next
invocation of 'move()' when I start to press down while running against a
wall.
def move(self, x, y, platforms):
self.rect.left += x
self.rect.top += y
for p in platforms:
if pygame.sprite.collide_rect(self, p):
if isinstance(p, ExitBlock):
pygame.event.post(pygame.event.Event(QUIT))
if self.x > 0: # Moving right
self.rect.right = p.rect.left
if self.x < 0: # Moving left
self.rect.left = p.rect.right
if self.y > 0: # Moving down
self.rect.bottom = p.rect.top
if self.y < 0: # Moving up
self.rect.top = p.rect.bottom
Here's the complete code you can run to see the unwanted behavior:
#! /usr/bin/python
import pygame, platform, sys
platform.architecture()
from pygame import *
import spritesheet
from sprite_strip_anim import SpriteStripAnim
WIN_W = 1400
WIN_H = 800
HALF_W = int(WIN_W / 2)
HALF_H = int(WIN_H / 2)
DEPTH = 32
FLAGS = 0
CAMERA_SLACK = 30
class Entity(pygame.sprite.Sprite):
def __init__(self):
pygame.sprite.Sprite.__init__(self)
class Player(Entity):
def __init__(self, x, y):
Entity.__init__(self)
self.x = 0
self.y = 0
self.onGround = False
self.image = Surface((32,32))
self.image.fill(Color("#0000FF"))
self.image.convert()
self.rect = Rect(x, y, 32, 32)
def update(self, up, down, left, right, running, platforms):
if up:
self.y = -5
if down:
self.y = 5
if left:
self.x = -5
if right:
self.x = 5
if not(left or right):
self.x = 0
if not(up or down):
self.y = 0
self.move(self.x, 0, platforms)
self.move(0, self.y, platforms)
def move(self, x, y, platforms):
self.rect.left += x
self.rect.top += y
for p in platforms:
if pygame.sprite.collide_rect(self, p):
if isinstance(p, ExitBlock):
pygame.event.post(pygame.event.Event(QUIT))
if self.x > 0: # Moving right
self.rect.right = p.rect.left
if self.x < 0: # Moving left
self.rect.left = p.rect.right
if self.y > 0: # Moving down
self.rect.bottom = p.rect.top
if self.y < 0: # Moving up
self.rect.top = p.rect.bottom
class Platform(Entity):
def __init__(self, x, y):
Entity.__init__(self)
self.image = Surface((32, 32))
self.image.convert()
self.image.fill(Color("#DDDDDD"))
self.rect = Rect(x, y, 32, 32)
def update(self):
pass
class ExitBlock(Platform):
def __init__(self, x, y):
Platform.__init__(self, x, y)
self.image.fill(Color("#0033FF"))
def main():
pygame.init
screen = pygame.display.set_mode((WIN_W, WIN_H), FLAGS, DEPTH)
pygame.display.set_caption("Use arrows to move!")
timer = pygame.time.Clock()
up = down = left = right = running = False
bg = Surface((32,32))
bg.convert()
bg.fill(Color("#000000"))
entities = pygame.sprite.Group()
player = Player(32, 32)
platforms = []
x = y = 0
level = [
"PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP",
"P P",
"P P",
"P P",
"P PPPPPPPPPPP P",
"P P",
"P P",
"P P",
"P PPPPPPPP P",
"P P",
"P PPPPPPP P",
"P PPPPPP P",
"P P",
"P PPPPPPP P",
"P P",
"P PPPPPP P",
"P P",
"P PPPPPPPPPPP P",
"P P",
"P PPPPPPPPPPP P",
"P P",
"P P",
"P P",
"P P",
"PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP",]
# build the level
for row in level:
for col in row:
if col == "P":
p = Platform(x, y)
platforms.append(p)
entities.add(p)
if col == "E":
e = ExitBlock(x, y)
platforms.append(e)
entities.add(e)
x += 32
y += 32
x = 0
total_level_width = len(level[0])*32
total_level_height = len(level)*32
entities.add(player)
while 1:
timer.tick(60)
# draw background
for y in range(32):
for x in range(32):
screen.blit(bg, (x * 32, y * 32))
for e in pygame.event.get():
if e.type == QUIT: raise SystemExit, "QUIT"
if e.type == KEYDOWN and e.key == K_ESCAPE:
raise SystemExit, "ESCAPE"
if e.type == KEYDOWN and e.key == K_w:
up = True
down = False
if e.type == KEYDOWN and e.key == K_s:
down = True
up = False
if e.type == KEYDOWN and e.key == K_a:
left = True
right = False
if e.type == KEYDOWN and e.key == K_d:
right = True
left = False
if e.type == KEYUP and e.key == K_w:
up = False
if e.type == KEYUP and e.key == K_s:
down = False
if e.type == KEYUP and e.key == K_d:
right = False
if e.type == KEYUP and e.key == K_a:
left = False
# update player, draw everything else
player.update(up, down, left, right, running, platforms)
for e in entities:
screen.blit(e.image, e.rect)
pygame.display.update()
if __name__ == "__main__":
main()
Answer: Found an algorithm that works at
<http://pygame.org/project-Rect+Collision+Response-1061-.html>. The key
difference is that the collision response checks the per-call `dx`/`dy`
arguments (the axis actually being moved) rather than the sprite's persistent
`self.x`/`self.y` velocity, so a horizontal move can never trigger a vertical
correction:
def move(self, dx, dy):
# Move each axis separately. Note that this checks for collisions both times.
if dx != 0:
self.move_single_axis(dx, 0)
if dy != 0:
self.move_single_axis(0, dy)
def move_single_axis(self, dx, dy):
# Move the rect
self.rect.x += dx
self.rect.y += dy
# If you collide with a wall, move out based on velocity
for wall in walls:
if self.rect.colliderect(wall.rect):
if dx > 0: # Moving right; Hit the left side of the wall
self.rect.right = wall.rect.left
if dx < 0: # Moving left; Hit the right side of the wall
self.rect.left = wall.rect.right
if dy > 0: # Moving down; Hit the top side of the wall
self.rect.bottom = wall.rect.top
if dy < 0: # Moving up; Hit the bottom side of the wall
self.rect.top = wall.rect.bottom
|
Difference between Bytearray and List in Python
Question: I am curious to know how memory management differs between Bytearray and list
in Python.
I have found a few questions like [Difference between bytearray and
list](http://stackoverflow.com/questions/30145490/difference-between-
bytearray-and-list) but not exactly answering my question.
My question precisely ...
>>> from array import array
>>> x = array("B", (1,2,3,4))
>>> x.__sizeof__()
36
>>> y = bytearray((1,2,3,4))
>>> y.__sizeof__()
32
>>> z = [1,2,3,4]
>>> z.__sizeof__()
36
As we can see there is a difference in sizes between list/array.array (36
bytes for 4 elements) and a byte array (32 bytes for 4 elements). Can someone
explain to me why this is? It makes sense for a bytearray to be occupying
`32` bytes of memory for `4` elements `( 4 * 8 == 32 )`, but how can this be
interpreted for list and array.array?
# Let's take the case of bytearray (which makes more sense to me at least :p)
for i in y:
print(i, ": ", id(i))
1 : 499962320
2 : 499962336 #diff is 16 units
3 : 499962352 #diff is 16 units
4 : 499962368 #diff is 16 units
Why do the addresses of two contiguous elements differ by `16` units here,
when each element occupies only `8` bytes? Does that mean each memory address
pointer points to a nibble?
Also, what are the criteria for memory allocation for an integer? I read that
Python will assign more memory based on the value of the integer (correct me
if I am wrong): the larger the number, the more memory.
_Eg:_
>>> y = 10
>>> y.__sizeof__()
14
>>> y = 1000000
>>> y.__sizeof__()
16
>>> y = 10000000000000
>>> y.__sizeof__()
18
What are the criteria by which Python allocates memory?
And why does Python occupy so much more memory while `C` only occupies 8
bytes (mine is a 64-bit machine), when the values are perfectly within the
range of a 64-bit integer `(2 ** 64)`?
_Metadata :_
_Python version :_ `'3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:43:06) [MSC
v.1600 32 bit (Intel)]'`
_Machine arch :_ 64-bit
**P.S.**: Kindly guide me to a good article where Python memory management is
explained better. I spent almost an hour trying to figure these things out and
ended up asking this question on SO. :(
Answer: I'm not claiming this is a complete answer, but there are some hints to
understanding this.
`bytearray` is a sequence of bytes and `list` is a sequence of object
references. So `[1,2,3]` actually holds memory pointers to those integers,
which are stored in memory elsewhere. To calculate the total memory consumption
of a list structure, we can do this (I'm using `sys.getsizeof` from here on;
it calls `__sizeof__` plus GC overhead):
>>> from sys import getsizeof
>>> x = [1,2,3]
>>> sum(map(getsizeof, x)) + getsizeof(x)
172
Result may be different on different machines.
Also, look at this:
>>> getsizeof([])
64
That's because lists are mutable. To be fast, this structure allocates a
memory **range** to store references to objects (plus some storage for
metadata, such as the length of the list). When you append items, the next
memory cells are filled with references to those items. When there is no room
to store new items, a new, larger range is allocated, the existing data is
copied there, and the old one is released. This is called a dynamic array.
You can observe this behaviour by running this code:
import sys
data=[]
n=15
for k in range(n):
a = len(data)
b = sys.getsizeof(data)
print('Length: {0:3d}; Size in bytes: {1:4d}'.format(a, b))
data.append(None)
My results:
Length: 0; Size in bytes: 64
Length: 1; Size in bytes: 96
Length: 2; Size in bytes: 96
Length: 3; Size in bytes: 96
Length: 4; Size in bytes: 96
Length: 5; Size in bytes: 128
Length: 6; Size in bytes: 128
Length: 7; Size in bytes: 128
Length: 8; Size in bytes: 128
Length: 9; Size in bytes: 192
Length: 10; Size in bytes: 192
Length: 11; Size in bytes: 192
Length: 12; Size in bytes: 192
Length: 13; Size in bytes: 192
Length: 14; Size in bytes: 192
We can see that there are 64 bytes was used to store 8 memory addresses
(64-bit each).
Almost the same goes for `bytearray()` (change the second line to `data =
bytearray()` and append `1` in the last one).
Length: 0; Size in bytes: 56
Length: 1; Size in bytes: 58
Length: 2; Size in bytes: 61
Length: 3; Size in bytes: 61
Length: 4; Size in bytes: 63
Length: 5; Size in bytes: 63
Length: 6; Size in bytes: 65
Length: 7; Size in bytes: 65
Length: 8; Size in bytes: 68
Length: 9; Size in bytes: 68
Length: 10; Size in bytes: 68
Length: 11; Size in bytes: 74
Length: 12; Size in bytes: 74
Length: 13; Size in bytes: 74
Length: 14; Size in bytes: 74
The difference is that memory is now used to hold actual byte values, not
pointers. Hope that helps you to investigate further.
|
Python error - setting an array element with a sequence
Question: I have been trying to run the provided code to make a color map.
The data set has `x` and `y` coordinates, and each coordinate is to have its
own color.
However, when I run the code, I get an error saying `setting an array element
with a sequence`.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from math import pi, sin
x, y, c = np.loadtxt('finaltheta.txt', unpack = True)
N = int(len(c)**0.5)
c = c.reshape(N,N)
plt.figure()
plt.imshow(c, extent = (np.amin(x), np.amax(x), np.amin(y), np.amax(y)), cmap = cm.binary, vmin = 0.0, vmax = 1.0)
cbar = plt.colorbar()
plt.show()
I have deduced that the error is stemming from the `np.loadtxt` line.
Answer: What is the delimiter in the file?
I can simulate this kind of load with:
In [223]: txt=b"1 2 3\n4 5 6".splitlines()
In [224]: a,b,c=np.loadtxt(txt,unpack=True)
In [225]: a
Out[225]: array([ 1., 4.])
In [226]: b
Out[226]: array([ 2., 5.])
In [227]: c
Out[227]: array([ 3., 6.])
Or with a comma-delimited text:
In [228]: txt=b"1,2,3\n4,5,6".splitlines()
In [229]: a,b,c=np.loadtxt(txt,unpack=True,delimiter=',')
How do you deduce that the error is in the loadtxt? Normally an error gives
you a stacktrace that indicates clearly where the error occurs. It may be in
`loadtxt`, but if so the trace will show that it is indeed the loadtxt call.
I can't think, offhand, of a text that would produce this error in loadtxt.
The error means that something/someone is doing something like
In [236]: a[0]=[1,2,3]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-236-c47480f4cd6d> in <module>()
----> 1 a[0]=[1,2,3]
ValueError: setting an array element with a sequence.
* * *
There's a scattering of SO questions with `loadtxt sequence`, e.g.
[CDF Cumulative Distribution Function
Error](http://stackoverflow.com/questions/25791634/cdf-cumulative-
distribution-function-error)
reports the error with stacktrace:
Traceback (most recent call last):
File "cum_graph.py", line 7, in <module>
data = np.loadtxt('e_p_USC_30_days.txt')
File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 804, in loadtxt
X = np.array(X, dtype)
ValueError: setting an array element with a sequence.
There is also a file sample. Unfortunately the accepted answer is just
flailing around trying to suggest a cause. I can't reproduce the error either.
That suggests that there are characters in the file that aren't reproduced in
the cut-and-paste, and/or the user has an older numpy version, and/or there's
an OS issue (I'm on Linux).
It would be interesting to see what the inputs to that line 804 look like, but
that would require a reproducible case, and then hacking loadtxt or running it
with the debugger.
I'd try `loadtxt` on a simpler, purpose-built file, or try it on a portion of
the problem file. I'd also try `genfromtxt`. In most cases it does the
same thing as loadtxt, but it takes a sufficiently different approach that it
could bypass whatever problems loadtxt has.
|
load txt file containing string in python as matrix
Question: I have a .txt file containing ints, strings and floats. How can I
import this .txt file as a matrix while keeping the strings?
Dataset contains:
16 disk 11 10.29 4.63 30.22 nan
79 table 11 20.49 60.60 20.22 nan
17 disk 11 22.17 0.71 10.37 nan
I used:
data=np.loadtxt("/home/Desktop/dataset.txt", delimiter=',')
the result is:
items = [conv(val) for (conv, val) in zip(converters, vals)]
ValueError: could not convert string to float: disk
In another try I used:
data = np.genfromtxt('/home/Desktop/dataset.txt', delimiter=",")
The result is:
16.0 nan 11 10.29 4.63 30.22
79.0 nan 11 20.49 60.60 20.22
17.0 nan 11 22.17 0.71 10.37
Answer: There is no way to load values of different types (e.g. str and float)
into a regular, homogeneous numpy array. You could use the
[read_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)
function from the pandas package instead:
    import pandas as pd
    # the sample data is whitespace-separated and has no header row
    data = pd.read_csv("/home/Desktop/dataset.txt", delim_whitespace=True, header=None)
Pandas will load the data into a DataFrame and you will be able to access
columns and rows by their names. You can read more about pandas
[here](http://pandas.pydata.org/pandas-docs/stable/index.html)
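If you'd rather stay in numpy, a hedged sketch using a structured array: with
`dtype=None`, `genfromtxt` infers a per-column type, so the string column
survives (this assumes the file is whitespace-delimited, as the sample
suggests):

    import numpy as np

    # each row becomes a record with one field per column (f0, f1, ...)
    data = np.genfromtxt('/home/Desktop/dataset.txt', dtype=None)
    print(data['f1'])  # the string column: 'disk', 'table', 'disk'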
|
error: command 'x86_64-linux-gnu-gcc' when installing mysqlclient
Question: I installed django 1.8.5 in a virtualenv using python 3.4.3 and it
worked: it displayed the **it works** page when using sqlite.
I wanted to use mysql and I'm trying to install mysqlclient using
`pip install mysqlclient`
and I'm getting the following message
----------------------------------------
Failed building wheel for mysqlclient
Failed to build mysqlclient
Installing collected packages: mysqlclient
Running setup.py install for mysqlclient
Complete output from command /home/sasidhar/django/env/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-5lj39q67/mysqlclient/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-da2_35zs-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/sasidhar/django/env/include/site/python3.4/mysqlclient:
running install
running build
running build_py
copying MySQLdb/release.py -> build/lib.linux-x86_64-3.4/MySQLdb
running build_ext
building '_mysql' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -Dversion_info=(1,3,6,'final',1) -D__version__=1.3.6 -I/usr/include/mysql -I/usr/include/python3.4m -I/home/sasidhar/django/env/include/python3.4m -c _mysql.c -o build/temp.linux-x86_64-3.4/_mysql.o -DBIG_JOINS=1 -fno-strict-aliasing -g -DNDEBUG
_mysql.c:40:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/home/sasidhar/django/env/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip-build-5lj39q67/mysqlclient/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-da2_35zs-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/sasidhar/django/env/include/site/python3.4/mysqlclient" failed with error code 1 in /tmp/pip-build-5lj39q67/mysqlclient
I did try installing libraries suggested in [error: Setup script exited with
error: command 'x86_64-linux-gnu-gcc' failed with exit status
1](http://stackoverflow.com/questions/26053982/error-setup-script-exited-with-
error-command-x86-64-linux-gnu-gcc-failed-wit)
and still the problem persists. Please help me solve this problem.
Thanks guys!!
Answer: You need to install python-dev:
sudo apt-get install python-dev
And, since you are using python3:
sudo apt-get install python3-dev
This command should help you.
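If the build then fails on missing MySQL headers instead (e.g. `mysql_config`
or `my_config.h` not found), you will likely also need the MySQL client
development package; on Debian/Ubuntu that would be:

    sudo apt-get install libmysqlclient-dev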
If you are using mac os you might try:
brew update && brew rm python3 && brew install python3
This assumes brew has been installed already; otherwise you can install it. It
is very useful for getting packages on Mac OS. <http://brew.sh/>
|
Matplotlib even frequency binned by month bar chart
Question: I want to create a bar chart where the `x-axis` represents months and the
height of the bars are proportional to the amount of days entered into a list
of dates that fall in this month. I want to dynamically update the list and
the program should then update the graph as this list is extended.
I am able to create a list of dates as follows:
import matplotlib.pyplot as plt
import datetime
dates = [datetime.date(2015,01,05),
datetime.date(2015,01,18),
datetime.date(2015,01,25),
datetime.date(2015,02,18),
datetime.date(2015,03,07),
datetime.date(2015,03,27),]
If I run the script I would like to see something like:
[](http://i.stack.imgur.com/tmiGW.png)
Which I plotted here manually.
I know it will be possible to use a loop to run through the list of dates and
manually sum the dates using if statements if the dates' months correspond
etc. But I am hoping there is some more automated method in python/matplotlib.
Answer: You could use pandas for this:
import pandas as pd
import datetime
dates = [datetime.date(2015,01,05),
datetime.date(2015,01,18),
datetime.date(2015,01,25),
datetime.date(2015,02,18),
datetime.date(2015,03,07),
datetime.date(2015,03,27),]
df = pd.DataFrame({'dates':dates})
df.dates = pd.to_datetime(df.dates)
df.groupby(df.dates.dt.month).count().plot(kind='bar')
Gives:
[](http://i.stack.imgur.com/2t9Bl.png)
The `df.dates.dt.month` is getting the month for each. You could group by day,
year, etc, in the same way.
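If you would rather avoid the pandas dependency, a rough standard-library
equivalent (a sketch, reusing the `dates` list from the question) would be:

    from collections import Counter
    import matplotlib.pyplot as plt

    # count how many dates fall in each month
    counts = Counter(d.month for d in dates)
    months = sorted(counts)
    plt.bar(range(len(months)), [counts[m] for m in months], align='center')
    plt.xticks(range(len(months)), months)
    plt.show()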
|
Read out definitions of a text file based dictionary
Question: I'm trying to write a Python function that takes as an input a text file based
dictionary, for example Webster's free dictionary. The function
"webster_definition" will then search through the text file and print the
definition for a specific word, e.g. "Canada".
Here is what I've got so far:
import re
import sys
def webster_definition(word):
word = word.upper()
output = ""
doc = open("webster.txt",'r')
for line in doc:
if re.match(word,line):
print line
return output
print webster_definition("Canada")
This will output the word I looked for. But the definition starts three lines
later with "Defn:" and is of variable length, e.g.:
CANADA
Can"a*da, n.
Defn: A British province in North America, giving its name to various
plants and animals. Canada balsam. See under Balsam.
-- Canada goose. (Zoöl.) See Whisky Jack.
-- Canada lynx. (Zoöl.) See Lynx.
-- Canada porcupine (Zoöl.) See Porcupine, and Urson.
-- Canada rice (Bot.) See under Rick.
-- Canada robin (Zoöl.), the cedar bird.
The desired output should look like:
CANADA
Defn: A British province in North America, giving its name to various
plants and animals. Canada balsam. See under Balsam.
-- Canada goose. (Zoöl.) See Whisky Jack.
-- Canada lynx. (Zoöl.) See Lynx.
-- Canada porcupine (Zoöl.) See Porcupine, and Urson.
-- Canada rice (Bot.) See under Rick.
-- Canada robin (Zoöl.), the cedar bird.
Can anyone help me with the output of the definition?
Answer: Given a file laid out like this:
CANADA
Can"a*da, n.
Defn: A British province in North America, giving its name to various
plants and animals. Canada balsam. See under Balsam.
-- Canada goose. (Zoöl.) See Whisky Jack.
-- Canada lynx. (Zoöl.) See Lynx.
-- Canada porcupine (Zoöl.) See Porcupine, and Urson.
-- Canada rice (Bot.) See under Rick.
-- Canada robin (Zoöl.), the cedar bird.
ANOTHER DEFINITION
another definition
Defn.. some words
more words
......
import re

with open('webster_file', 'r') as f:
    # read the whole file into a string
    data = f.read()

# uppercase word to search for
word = 'canada'.upper()

# match from the headword line, non-greedily through the first empty
# line, and on to the next empty line (re.M makes ^ and $ match at line
# boundaries, re.DOTALL lets . cross newlines)
pattern = '^' + word + '.*?\n^$\n.*?^$'
mo = re.search(pattern, data, re.M | re.DOTALL)
if mo:
    print(mo.group(0))
CANADA
Can"a*da, n.
Defn: A British province in North America, giving its name to various
plants and animals. Canada balsam. See under Balsam.
-- Canada goose. (Zoöl.) See Whisky Jack.
-- Canada lynx. (Zoöl.) See Lynx.
-- Canada porcupine (Zoöl.) See Porcupine, and Urson.
-- Canada rice (Bot.) See under Rick.
-- Canada robin (Zoöl.), the cedar bird
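If you only want the headword plus the definition block (as in the desired
output), one option is to drop the lines between them; a sketch building on
the match above (assuming the entry contains a line starting with `Defn:`):

    if mo:
        lines = mo.group(0).split('\n')
        # keep the headword line plus everything from the "Defn:" line on
        defn_start = next(i for i, l in enumerate(lines)
                          if l.startswith('Defn:'))
        print('\n'.join([lines[0]] + lines[defn_start:]))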
|
Python lxml's XPath not finding <ul> in <p> tags
Question: I have a problem with the XPath function of pythons lxml. A minimal example is
the following python code:
from lxml import html, etree
text = """
<p class="goal">
<strong>Goal</strong> <br />
<ul><li>test</li></ul>
</p>
"""
tree = html.fromstring(text)
thesis_goal = tree.xpath('//p[@class="goal"]')[0]
print etree.tostring(thesis_goal)
Running the code produces
<p class="goal">
<strong>Goal</strong> <br/>
</p>
As you can see, the entire `<ul>` block is lost. This also means that it is
not possible to address the `<ul>` with an XPath along the lines of
`//p[@class="goal"]/ul`, as the `<ul>` is not counted as a child of the `<p>`.
Is this a bug or a feature of lxml, and if it is the latter, how can I get
access to the entire contents of the `<p>`? The thing is embedded in a larger
website, and it is not guaranteed that there will even _be_ a `<ul>` tag
(there may be another `<p>` inside, or anything else, for that matter).
**Update** : Updated title after answer was received to make finding this
question easier for people with the same problem.
Answer: `ul` elements (or more generally [flow
content](http://www.w3.org/TR/html5/dom.html#flow-content-1)) are [not allowed
inside `p` elements](http://stackoverflow.com/q/5681481/190597) (which can
only contain [phrasing content](http://www.w3.org/TR/html5/dom.html#phrasing-
content-1)). Therefore `lxml.html` parses `text` as
In [45]: print(html.tostring(tree))
<div><p class="goal">
<strong>Goal</strong> <br>
</p><ul><li>test</li></ul>
</div>
The `ul` follows the `p` element. So you could find the `ul` element using the
XPath
In [47]: print(html.tostring(tree.xpath('//p[@class="goal"]/following::ul')[0]))
<ul><li>test</li></ul>
|
Converting a nested loop calculation to Numpy for speedup
Question: Part of my Python program contains the follow piece of code, where a new grid
is calculated based on data found in the old grid.
The grid is a two-dimensional list of floats. The code uses three for-loops:
for t in xrange(0, t, step):
for h in xrange(1, height-1):
for w in xrange(1, width-1):
new_gr[h][w] = gr[h][w] + gr[h][w-1] + gr[h-1][w] + t * gr[h+1][w-1]-2 * (gr[h][w-1] + t * gr[h-1][w])
gr = new_gr
return gr
The code is extremely slow for a large grid and a large time _t_.
I've tried to use Numpy to speed up this code, by substituting the inner loop
with:
J = np.arange(1, width-1)
new_gr[h][J] = gr[h][J] + gr[h][J-1] ...
But the results produced (the floats in the array) are about 10% smaller than
their list-calculation counterparts.
* What loss of accuracy is to be expected when converting lists of floats to Numpy array of floats using _np.array(pylist)_ and then doing a calculation?
* How should I go about converting a triple for-loop to pretty and fast Numpy code? (or are there other suggestions for speeding up the code significantly?)
Answer: If `gr` is a list of floats, the first step if you are looking to vectorize
with NumPy would be to convert `gr` to a NumPy array with
[`np.array()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html).
Next up, I am assuming that you have `new_gr` initialized with zeros of shape
`(height,width)`. The calculations being performed in the two innermost loops
basically represent `2D convolution`. So, you can use
[`signal.convolve2d`](http://docs.scipy.org/doc/scipy-0.16.0/reference/generated/scipy.signal.convolve2d.html)
with an appropriate `kernel`. To decide on the `kernel`, we need to look at
the scaling factors and make a `3 x 3` kernel out of them and negate them to
simulate the calculations we are doing with each iteration. Thus, you would
have a vectorized solution with the two innermost loops being removed for
better performance, like so -
import numpy as np
from scipy import signal
# Get the scaling factors and negate them to get kernel
kernel = -np.array([[0,1-2*t,0],[-1,1,0,],[t,0,0]])
# Initialize output array and run 2D convolution and set values into it
out = np.zeros((height,width))
out[1:-1,1:-1] = signal.convolve2d(gr, kernel, mode='same')[1:-1,:-2]
**Verify output and runtime tests**
Define functions :
def org_app(gr,t):
new_gr = np.zeros((height,width))
for h in xrange(1, height-1):
for w in xrange(1, width-1):
new_gr[h][w] = gr[h][w] + gr[h][w-1] + gr[h-1][w] + t * gr[h+1][w-1]-2 * (gr[h][w-1] + t * gr[h-1][w])
return new_gr
def proposed_app(gr,t):
kernel = -np.array([[0,1-2*t,0],[-1,1,0,],[t,0,0]])
out = np.zeros((height,width))
out[1:-1,1:-1] = signal.convolve2d(gr, kernel, mode='same')[1:-1,:-2]
return out
Verify -
In [244]: # Inputs
...: gr = np.random.rand(40,50)
...: height,width = gr.shape
...: t = 1
...:
In [245]: np.allclose(org_app(gr,t),proposed_app(gr,t))
Out[245]: True
Timings -
In [246]: # Inputs
...: gr = np.random.rand(400,500)
...: height,width = gr.shape
...: t = 1
...:
In [247]: %timeit org_app(gr,t)
1 loops, best of 3: 2.13 s per loop
In [248]: %timeit proposed_app(gr,t)
10 loops, best of 3: 19.4 ms per loop
|
Python: Have an action happen within every single function of a python file
Question: Sorry the title isn't very clear but it is kind of hard to explain. So I am
wondering how you can have a certain action happen within every single
function of a python file. I want a user to type 'paper' inside any function
in the entire python file and I cannot figure out how to do it. Here is an
example:
def a():
raw_input()
print "Test"
def b():
raw_input()
print "Test 2"
How can I have it setup so a user can type 'paper' in these two functions
(realistically more than 30+) and then it would print the statement "you have
a paper"
Thank you so much for the help!
Answer: If you have something you want to apply to every function, you should use a
decorator. I'm not exactly sure what you're trying to accomplish, but this
should demonstrate:
>>> def youhaveapaper(function):
def wrapper(*args):
response = input('whatchu got?\n>>> ')
if response == 'paper':
print('you have a paper')
return function(*args)
import functools
functools.update_wrapper(wrapper, function)
return wrapper
>>> @youhaveapaper
def somefunction(x):
"""return the square of x"""
return x**2
>>> y = somefunction(5)
whatchu got?
>>> paper
you have a paper
>>> y
25
As you can see, `somefunction` did not need to be changed, it just needed
`@youhaveapaper` placed before the definition.
|
How to make tkintertable Table resizable
Question: I am creating a GUI using Python's Tkinter (I am using Python 2.7, if it makes
a difference). I wanted to add a table and so am using the tkintertable package
as well. My code for the table is:
import Tkinter as tk
from tkintertable.Tables import TableCanvas
class createTable(tk.Frame):
def __init__(self, master=None):
tk.Frame.__init__(self, master)
self.grid()
self.F = tk.Frame(self)
self.F.grid(sticky=tk.N+tk.S+tk.E+tk.W)
self.createWidgets()
def createWidgets(self):
self.table = TableCanvas(self.F,rows=30,cols=30)
self.table.createTableFrame()
app = createTable()
app.master.title('Sample Table')
app.mainloop()
I would like to make the number of rows and columns seen change when I resize
the frame. Currently there are 13 rows and 4 columns showing. I would like
more to be seen when I make the window bigger. Any advice on how to achieve
this would be greatly appreciated! Thank you so much for your help
Answer: To achieve what you want to do, not much is needed.
The keywords here are `grid_rowconfigure` and `grid_columnconfigure`. By
default, grid rows do not expand after creation when the window size is
changed. With `tk.Frame().grid_rowconfigure(row_id, weight=1)` this behavior
changes.
The second thing you missed was that your `createTable` class (please consider
renaming it as it sounds like a function) is not set sticky.
import Tkinter as tk
from tkintertable.Tables import TableCanvas
class createTable(tk.Frame):
def __init__(self, master=None):
tk.Frame.__init__(self, master)
#########################################
self.master.grid_rowconfigure(0, weight=1)
self.master.grid_columnconfigure(0, weight=1)
self.grid_rowconfigure(0, weight=1)
self.grid_columnconfigure(0, weight=1)
self.grid(sticky=tk.NW+tk.SE)
#########################################
self.F = tk.Frame(self)
self.F.grid(row=0, column=0, sticky=tk.NW+tk.SE)
self.createWidgets()
def createWidgets(self):
self.table = TableCanvas(self.F,rows=30,cols=30)
self.table.createTableFrame()
app = createTable()
app.master.title('Sample Table')
app.mainloop()
This should do the trick for you.
|
Python Test inheritance with multiple subclasses
Question: I would like to write a Python test suite in a way that allows me to inherit
from a single TestBaseClass and subclass it multiple times, every time
changing some small detail in its member variables.
Something like:
import unittest
class TestBaseClass(unittest.TestCase):
def setUp(self):
self.var1 = "exampleone"
class DetailedTestOne(TestBaseClass):
def setUp(self):
self.var2 = "exampletwo"
def runTest(self):
self.assertEqual(self.var1, "exampleone")
self.assertEqual(self.var2, "exampletwo")
class DetailedTestOneA(DetailedTestOne):
def setUp(self):
self.var3 = "examplethree"
def runTest(self):
self.assertEqual(self.var1, "exampleone")
self.assertEqual(self.var2, "exampletwo")
self.assertEqual(self.var3, "examplethree")
... continue to subclass at wish ...
In this example, DetailedTestOne inherits from TestBaseClass and
DetailedTestOneA inherits from DetailedTestOne.
With the code above, I get:
AttributeError: 'DetailedTestOne' object has no attribute 'var1'
for DetailedTestOne and:
AttributeError: 'DetailedTestOneA' object has no attribute 'var1'
for DetailedTestOneA
Of course, var1, var2 and var3 could be members of a single variable declared
in the first instance in the TestBaseClass.
Any ideas on how to achieve such behaviour?
Answer: You need to call the superclass implementation in your subclasses by doing,
e.g., `super(DetailedTestOne, self).setUp()` from inside your
`DetailedTestOne.setUp` method.
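A minimal sketch of what that looks like for the classes above:

    import unittest

    class TestBaseClass(unittest.TestCase):
        def setUp(self):
            self.var1 = "exampleone"

    class DetailedTestOne(TestBaseClass):
        def setUp(self):
            super(DetailedTestOne, self).setUp()  # sets self.var1
            self.var2 = "exampletwo"

        def runTest(self):
            self.assertEqual(self.var1, "exampleone")
            self.assertEqual(self.var2, "exampletwo")

    class DetailedTestOneA(DetailedTestOne):
        def setUp(self):
            super(DetailedTestOneA, self).setUp()  # sets var1 and var2
            self.var3 = "examplethree"

        def runTest(self):
            self.assertEqual(self.var3, "examplethree")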
|
Python : sklearn svm, providing a custom loss function
Question: The way I use sklearn's svm module now is to use its defaults. However, it's
not doing particularly well for my dataset. Is it possible to provide a custom
loss function, or a custom kernel? If so, what is the way to write such a
function so that it matches what sklearn's svm expects, and how do I pass such
a function to the trainer?
There is this example of how to do it:
[SVM custom kernel](http://scikit-
learn.org/stable/auto_examples/svm/plot_custom_kernel.html)
code cited here:
def my_kernel(x, y):
"""
We create a custom kernel:
(2 0)
k(x, y) = x ( ) y.T
(0 1)
"""
M = np.array([[2, 0], [0, 1.0]])
return np.dot(np.dot(x, M), y.T)
I'd like to understand the logic behind this kernel. How to choose the kernel
matrix? And what exactly is `y.T` ?
Answer: To answer your question, unless you have a very good idea of _why_ you want to
define a custom kernel, I'd stick with the built-ins. They are very fast,
flexible, and powerful, and are well-suited to most applications.
That being said, let's go into a bit more detail:
A [Kernel Function](http://scikit-learn.org/stable/modules/svm.html#svm-
kernels) is a special kind of measure of similarity between two points.
Basically a larger value of the similarity means the points are more similar.
The scikit-learn SVM is designed to be able to work with any kernel function.
Several kernels are built in (e.g. linear, radial basis function, polynomial,
sigmoid), but you can also define your own.
Your custom kernel function should look something like this:
def my_kernel(x, y):
"""Compute My Kernel
Parameters
----------
x : array, shape=(N, D)
y : array, shape=(M, D)
input vectors for kernel similarity
Returns
-------
K : array, shape=(N, M)
matrix of similarities between x and y
"""
# ... compute something here ...
return similarity_matrix
The most basic kernel, a linear kernel, would look like this:
def linear_kernel(x, y):
return np.dot(x, y.T)
Equivalently, you can write
def linear_kernel_2(x, y):
M = np.array([[1, 0],
[0, 1]])
return np.dot(x, np.dot(M, y.T))
The matrix `M` here defines the so-called [inner product
space](https://en.wikipedia.org/wiki/Inner_product_space) in which the kernel
acts. This matrix can be modified to define a new inner product space; the
custom function from the example you linked to just modifies `M` to
effectively double the importance of the first dimension in determining the
similarity.
More complicated non-linear modifications are possible as well, but you have
to be careful: kernel functions must meet [certain
requirements](https://en.wikipedia.org/wiki/Kernel_method#Mathematics) (they
must satisfy the properties of an inner-product space) or the SVM algorithm
will not work correctly.
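As for passing such a function to the trainer: `sklearn.svm.SVC` (as in the
linked example; `LinearSVC` does not take a kernel) accepts a callable as its
`kernel` argument. A minimal sketch, with `X_train`, `y_train` and `X_test` as
placeholder data:

    from sklearn import svm

    clf = svm.SVC(kernel=my_kernel)
    clf.fit(X_train, y_train)
    predictions = clf.predict(X_test)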
|
Python - Using a List, Dict Comprehension, and Mapping to Change Plot Order
Question: I am relatively new to Python, Pandas, and plotting. I am looking to make a
custom sort order in a pandas plot using a list, mapping, and sending them
through to the plot function.
I am not "solid" on mapping or dict comprehensions. I've looked around a bit
on Google and haven't found anything really clear - so any direction to
helpful references would be much appreciated.
I have a dataframe that is the result of a groupby:
Exchange
AMEX 267
NYSE 2517
Nasdaq 2747
Name: Symbol, dtype: int64
The numerical column is 'Symbol' and the exchange listing is the index.
When I do a straightforward pandas plot
my_plot = Exchange['Symbol'].plot(kind='bar')
I get this:
[](http://i.stack.imgur.com/hcNlq.jpg)
The columns are in the order of the rows in the dataframe (Amex, NYSE, Nasdaq)
but I would like to present, left to right, NYSE, Nasdaq, and Amex. So a
"sort" won't work.
There is another post:
[Sorting the Order of
Bars](http://stackoverflow.com/questions/22635110/sorting-the-order-of-bars-
in-pandas-matplotlib-bar-plots)
that gets at this - but I just couldn't figure it out.
I feel like the solution is one step out of my reach. I think this is a very
important concept to get down as it would help me considerably in visualizing
data where the not-infrequent case of a custom row presentation in a chart is
needed. I'm also hoping discussion here could help me better understand
mapping as that seems to be very useful in many instances but I just can't
seem to find the right on-line resource to explain it clearly.
Thank you in advance.
Answer: The solution to your problem is putting your output dataframe into the desired
order:
order = [1,2,0] # the desired order
Exchange['Symbol'].iloc[order]
NYSE 2517
Nasdaq 2747
AMEX 267
Name: Symbol, dtype: int64
As soon as you have the rightly ordered data you can plot it:
Exchange['Symbol'].iloc[order].plot(kind='bar');
[](http://i.stack.imgur.com/aY8Zn.png)
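If you would rather not rely on integer positions, you can also reorder by
index label with `reindex`:

    order_by_name = ['NYSE', 'Nasdaq', 'AMEX']  # desired left-to-right order
    Exchange['Symbol'].reindex(order_by_name).plot(kind='bar')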
|
converting a recursion to iteration in python
Question: I wrote the below python script that sorts the elements of an array using
divide-and-conquer (recursive calls). One of my friends suggested that
recursion is slower than iteration. Is there a way to convert the below
program to a 'for' loop and still leverage the divide-and-conquer strategy?
Will iteration beat recursion even if the list contains a lot of elements?
### Using recursion
import random
from datetime import datetime
start = str(datetime.now().time()).split(':')
def quicksort(A,first,last):
print "calling parameters",A,first,last
if first >= last:
return
i , j = first, last
pivot = A[random.randint(first,last)]
#pivot = A[last]
while i <= j:
while A[i] < pivot:
i+=1
#print "i:",i
while A[j] > pivot:
j-=1
#print "i,j",i,j
if i <= j:
A[i],A[j] = A[j],A[i]
i,j = i+1, j-1
#print "intermediate",A
#print "loop i,j",i,j
# Using Recursion here
quicksort(A,first,j)
quicksort(A,i,last)
A = [2,8,7,1,3,5,6,4]
#A = [1,1,1,1,1,1,1,1]
quicksort(A,0,len(A)-1)
print A
stop = str(datetime.now().time()).split(':')
print "time taken",float(stop[2]) - float(start[2])
Answer: You can always change a _tail recursive_ algorithm (that is, one where the
recursive step is the _very last statement_ in the function) into an iterative
one. In Python, iteration is almost always faster than an equivalent tail
recursion because Python (deliberately) lacks a feature called tail call
optimization, because Guido van Rossum sees the debugging information lost in
that optimization as being more important than the speed gained. Other
languages have made the opposite tradeoff, so in some of those, the recursive
version might be preferred.
_However_ , quicksort is not (only) tail recursive: it does recurse as the
very last thing it does, but it _also_ recurses as the second last thing it
does. The only general way to convert this kind of algorithm into an
iterative one is to store a lot of state on a stack - essentially,
reimplementing how function calls work. This pollutes your code with
"housekeeping" that is normally done behind-the-scenes, and usually makes
things considerably slower (since the stack management is done with, well,
function calls, so the behind-the-scenes work has to be done anyway and you're
duplicating it).
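For illustration, a rough sketch of what the explicit-stack version of the
question's quicksort looks like (same partition logic, but with a fixed middle
pivot instead of the random one, and the recursion replaced by pushed ranges):

    def quicksort_iterative(A):
        stack = [(0, len(A) - 1)]          # ranges still to be sorted
        while stack:
            first, last = stack.pop()
            if first >= last:
                continue
            i, j = first, last
            pivot = A[(first + last) // 2]
            while i <= j:
                while A[i] < pivot:
                    i += 1
                while A[j] > pivot:
                    j -= 1
                if i <= j:
                    A[i], A[j] = A[j], A[i]
                    i, j = i + 1, j - 1
            stack.append((first, j))       # instead of the two recursive calls
            stack.append((i, last))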
For some particular algorithms, there may be a way to convert cleanly from
non-tail recursion to iteration, but often what you end up with will be a
_different algorithm_ with different performance characteristics, and so it
doesn't end up being a comparison between iterative and recursive performance.
For quicksort in particular, the recursive version is preferable, and you will
rarely if ever see an iterative version of it "in the wild" except to
demonstrate how it is possible using a stack. For example, see [this blog post
about recursive and iterative quicksort](http://codexpi.com/quicksort-python-
iterative-recursive-implementations/) \- the iterative version uses a stack,
and it gives this summary of results:
[](//i.stack.imgur.com/4VCwX.png)
You can see that this analysis claims that the iterative version is slower for
every element count, although the difference appears to get smaller in
relative terms as the list gets larger. Also note that the (highly optimized)
Python builtin `list.sort` outperforms both quicksort implementations by an
order of magnitude - if you care particularly about the speed (rather than the
learning experience of coding your own quicksort), use the builtin every time.
|
'LinearSVC' object has no attribute 'classes_'
Question: I have several samples of images and I would like to predict if those images
contain text/characters.
I get an error when I try running my code at this step :
model = cPickle.load(f)
is_text = model.predict(image_samples)
image_samples are my samples and model looks like this :
Pipeline(steps=[
('hog', HOGFeatures(cells_per_block=(2, 2), orientations=10,
pixels_per_cell=(5, 5), size=(20, 20))),
('clf', LinearSVC(C=2.0, class_weight=None, dual=True,
fit_intercept=True,
intercept_scaling=1, loss='l2', max_iter=None,
multi_class='ovr', penalty='l2',
random_state=None, tol=0.0001, verbose=0))
])
The error message I get is :
File "/home/parallels/Desktop/Python/ImageTextRecognition-master/userimageski.py", line 104, in select_text_among_candidates
is_text = model.predict(self.candidates['flattened'])
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/metaestimators.py", line 37, in <lambda>
out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/sklearn/pipeline.py", line 180, in predict
return self.steps[-1][-1].predict(Xt)
File "/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/base.py", line 228, in predict
return self.classes_[indices]
AttributeError: 'LinearSVC' object has no attribute 'classes_'
Could anyone help me ? Thank you very much
Answer: I had the same problem. It is actually an issue with sklearn versions. Run
python2.7 in the terminal and check:
    >>> import sklearn
    >>> sklearn.__version__
If you have an older version than the current one (0.17.1 as of now), you need
to upgrade sklearn. Just do `sudo pip install -U scikit-learn` to upgrade and
re-run the code.
|
Finding letter bigrams in text using Python regex
Question: I am trying to use `re.findall` to find all the sets of two letters following
each other in a text (letter bigrams). How do I get the regex not to consume
the last letter of the previously found bigram, so that it can be used again
in the following match?
The following doesn't work, as the regex consumes:
>>> re.findall(r'[a-zA-z]{2}', 'find bigrams here')
['fi', 'nd', 'bi', 'gr', 'am', 'he', 're']
Using a positive lookahead was a second attempt. It doesn't consume the second
letter of the previous bigram, but doesn't catch it either...
>>> re.findall(r'([a-zA-z](?=[a-zA-Z]))', 'find bigrams here')
['f', 'i', 'n', 'b', 'i', 'g', 'r', 'a', 'm', 'h', 'e', 'r']
Thanks for any help.
Answer:
import re
print re.findall(r'(?=([a-zA-Z]{2}))', 'find bigrams here')
Output:`['fi', 'in', 'nd', 'bi', 'ig', 'gr', 'ra', 'am', 'ms', 'he', 'er',
're']`
This is what you need: putting the capturing group inside a lookahead lets
`findall` record each bigram without consuming it, so matching resumes just
one character further on.
|
Python GUI - 2.7 to 3.5
Question:
from tkinter import *
#Create the window
root = Tk()
#Modify root window
root.title("Simple GUI")
root.geometry("200x50")
app = frame(root)
label = Label(app, text = "This is a label")
label.grid()
#kick of the event loop
root.mainloop()
I am following a tutorial of YouTube to learn about Python tkinter GUI. But
when I run the above code it comes with an error.
Traceback (most recent call last):
File "C:/Users/Nathan/Desktop/Python/Python GUI/Simple GUI.py", line 14, in <module>
app = frame(root)
NameError: name 'frame' is not defined
I know it is something to do with `frame`, I tried `Frame` and it doesn't
work. Can you please help me make it work, Thanks!
I am currently using Python 3.5 and the tutorial is in 2.7
Answer: You did get the fact that the 2.x module is named `Tkinter` while in 3.x it is
named `tkinter`. However, the `Frame` class did not change its first letter to
lower case. It is still `Frame`.
app = Frame(root)
One way to overcome the import difference is in [ImportError when importing
Tkinter in Python](http://stackoverflow.com/questions/7498658/importerror-
when-importing-tkinter-in-python)
|
ImportError: No module named bs4 in Windows
Question: I am trying to create a script to download the captcha from my website. I
think the code works except for that error; when I run it in cmd (I am using
Windows, not Linux) I receive the following:
from bs4 import BeautifulSoup
ImportError: No module named bs4
I tried using `pip install BeautifulSoup4` but then I receive syntax error at
install.
Here is the script:
from bs4 import BeautifulSoup
import urllib2
import urllib
import urlparse
url = "https://example.com"
content = urllib2.urlopen(url)
soup = BeautifulSoup(content)
img = soup.find('img',id ='imgCaptcha')
print img
urllib.urlretrieve(urlparse.urljoin(url, img['src']), 'captcha.bmp')
The problem, according to this
[answer](http://stackoverflow.com/a/11784778/3065448), must be due to the fact
that I have not activated the virtualenv and THEN installed BeautifulSoup4.
Also, I don't think this information will be of any help, but I save my python
text in a notepad .py file and then run it using cmd.
Answer: I had the same problem until a moment ago. Thanks for the post and comments!
Following @Martin Vseticka's suggestion, I checked whether I have the pip.exe
file in my python folders. I run python 2.7 and 3.7 simultaneously. pip.exe
wasn't in the python 2.7 folder, but it was in the 3.7 one. So in the command
line I changed the directory to where the pip.exe file was located. Then I ran
"pip install BeautifulSoup4" and it worked. See the enclosed screenshot.
[](http://i.stack.imgur.com/nFmDF.png)
|
Django NoReverseMatch error with namespacing, Error during template rendering
Question: I have been looking at this all day now and I am not able to figure this out.
When loading hotel/index.html at this moment I get an error:
NoReverseMatch at /hotel/
Reverse for 'activities' with arguments '(2,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['hotel/activities/']
Request Method: GET
Request URL: http://127.0.0.1:8000/hotel/
Django Version: 1.8.5
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'activities' with arguments '(2,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['hotel/activities/']
Exception Location: /usr/local/lib/python3.4/dist-packages/django/core/urlresolvers.py in _reverse_with_prefix, line 495
Python Executable: /usr/bin/python3
Python Version: 3.4.3
Python Path:
['/home/johan/sdp/gezelligehotelletjes_com',
'/usr/lib/python3.4',
'/usr/lib/python3.4/plat-x86_64-linux-gnu',
'/usr/lib/python3.4/lib-dynload',
'/usr/local/lib/python3.4/dist-packages',
'/usr/lib/python3/dist-packages']
Server time: Sun, 25 Oct 2015 16:18:00 +0000
Error during template rendering
In template /home/johan/sdp/gezelligehotelletjes_com/hotel/templates/hotel/index.html, error at line 8
Reverse for 'activities' with arguments '(2,)' and keyword arguments '{}' not found. 1 pattern(s) tried: ['hotel/activities/']
1 {% load staticfiles %}
2
3 <link rel="stylesheet" type="text/css" href="{% static 'hotel/style.css' %}" />
4
5 {% if netherlands_city_list %}
6 <ul>
7 {% for city in netherlands_city_list %}
8
<li><a href="
{% url 'hotel:activities' city.id %}
">{{ city.name }}</a></ul>
9 {% endfor %}
10 </ul>
11 {% else %}
12 <p>No polls are available.</p>
13 {% endif %}
Here is the code that I think relates to this error.
site/urls.py
from django.conf.urls import include, url
from django.contrib import admin
import hotel.views
urlpatterns = [
url(r'^hotel/', include('hotel.urls', namespace='hotel')),
url(r'^admin/', include(admin.site.urls)),
]
hotel/urls.py
from django.conf.urls import include, url
from django.contrib import admin
from hotel import views
urlpatterns = [
url(r'^$', views.IndexView.as_view(), name='index'),
url(r'^activities/$', views.ActivitiesView.as_view(), name='activities'),
]
hotel/index.html
{% load staticfiles %}
<link rel="stylesheet" type="text/css" href="{% static 'hotel/style.css' %}" />
{% if netherlands_city_list %}
<ul>
{% for city in netherlands_city_list %}
<li><a href="{% url 'hotel:activities' city.id %}">{{ city.name }}</a></ul>
{% endfor %}
</ul>
{% else %}
<p>No cities are available.</p>
{% endif %}
hotels/activities.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title></title>
</head>
<body>
</body>
</html>
and hotel/views.py
from django.shortcuts import render
from django.views import generic
from django.core.urlresolvers import reverse
from .models import Hotel
from base.models import City
class IndexView(generic.ListView):
template_name = "hotel/index.html"
context_object_name = "netherlands_city_list"
def get_queryset(self):
return City.objects.filter(state__country__name__exact="Netherlands").order_by('name')[:5]
class ActivitiesView(generic.ListView):
template_name = "hotel/activities.html"
I have been looking around all day now but can't figure this out. (While it is
probably one of those smaller things.)
I hope someone can help with this issue.
Thanks in advance.
Answer: Your problem is in your URLs:
url(r'^activities/$', views.ActivitiesView.as_view(), name='activities'),
Your template calls the arguments activities:
<a href="{% url 'hotel:activities' city.id %}">
But this argument isn't passed as a parameter. The solution is:
url(r'^activities/(?P<city>[0-9]+)/$', views.ActivitiesView.as_view(), name='activities'),
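With that URL pattern, the captured value is then available in the view
through `self.kwargs`. A sketch of how the view could use it (assuming `Hotel`
has a foreign key to `City`):

    class ActivitiesView(generic.ListView):
        template_name = "hotel/activities.html"

        def get_queryset(self):
            # 'city' is the name of the captured group in the URL pattern
            return Hotel.objects.filter(city_id=self.kwargs['city'])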
|
Python throws an Attribute error
Question: I simply can't get my code to work in python;
it gives me this error:
Traceback (most recent call last):
File "C:/Users/Patrick/Desktop/SummonerGui/__main__.py", line 10, in <module>
main()
File "C:/Users/Patrick/Desktop/SummonerGui/__main__.py", line 6, in main
r = api.get_summoner_by_name('ArcadeRuven')
AttributeError: 'RiotAPI' object has no attribute 'get_summoner_by_name'
Here are my 3 files:
main.py
from RiotAPI import RiotAPI
def main():
api = RiotAPI('API_KEY')
r = api.get_summoner_by_name('ArcadeRuven')
print r
if __name__ == "__main__":
main()
RiotAPI.py
import requests
import RiotConsts as Consts
class RiotAPI(object):
def __init__(self, api_key, region=Consts.REGIONS['europe_west']):
self.api_key = api_key
self.region = region
def _requests(self, api_url, params={}):
args = {'api_key': self.api_key}
for key, value in params.items():
if key not in args:
args[key] = value
response = requests.get(
Consts.URL['base'].format(
proxy=self.region,
region=self.region,
url=api_url
),
params=args
)
print response.url
return respons.json()
def get_summoner_by_name(self, name):
api_url = Consts.URL['summoner_by_name'].format(
version=Consts.API_VERSIONS['summoner'],
names=name
)
return self._request(api_url)
and here is the RiotConsts.py
URL = {
'base' : 'https://{proxy}.api.pvp.net/api/lol/{region}/{url}',
'summoner_by_name' : 'v{version}/summoner/by-name/{names}'
}
API_VERSIONS = {
'summoner' : '1.4'
}
REGIONS = {
'europe_west' : 'euw'
}
Why is there an AttributeError? I thought the attribute existed in
RiotAPI.py.
Answer: The indentation of `get_summoner_by_name` and `_requests` is incorrect if you
want them to be methods of the object. They need to have the same level of
indentation as `__init__`.
Later in your code you have a call to `self._request` but the method is named
`_requests` (with a trailing "s"). There is also a typo inside `_requests`
itself: `return respons.json()` should be `return response.json()`.
|
Extract Numbers and Size Information (KB, MB, etc) from a String in Python
Question: I have a string like this
"44MB\n" (it can be anything ranging from 44mb, 44 MB, 44 kb, 44 B)
I want to separate `44` and `MB` from the above string. I have written this
code to extract the number
import re
mystring = "44MB\n"
re.findall(r'\d+', mystring)
For extracting the size I want to avoid using `if` statements like
    if "kb" in mystring.lower():
        # Do stuffs
    if .......
How can I extract the size info using regex?
Answer: This script:
import re
test_string = '44MB\n12b\n6GB\n12pb'
regex = re.compile(r'(\d+)\s*([kmgtp]?b)', re.IGNORECASE)
order = ['b', 'kb', 'mb', 'gb', 'tb', 'pb']
for value, unit in regex.findall(test_string):
print(int(value) * (1024**order.index(unit.lower())))
Will print:
46137344
12
6442450944
13510798882111488
Which is the sizes it found in bytes.
|
Reverse a string in Python but dont reverse alphanumerics
Question: I want to reverse the words in a string while leaving the non-alphanumeric
characters and spaces where they are. How can we achieve it?
input : "This is Testing! The email-id is [email protected]"
output : "sihT si gnitseT! ehT di-liame si [email protected]"
how can I achieve it without re?
Answer: Through `re.sub`.
>>> import re
>>> s = "This is Testing! The email is [email protected]"
>>> re.sub(r'\w+', lambda m: m.group()[::-1], s)
'sihT si gnitseT! ehT liame si [email protected]'
We all know that `\w+` matches one or more word characters. Note that we also
pass an anonymous function as the second parameter of `re.sub`, which in turn
lets us do string operations on the matched characters.
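Since the question asks how to achieve it without `re`, here is a rough
equivalent using `itertools.groupby` (note that `\w` also matches underscores
while `str.isalnum` does not, so the two can differ on some inputs):

    from itertools import groupby

    s = "This is Testing! The email is [email protected]"
    out = ''.join(
        ''.join(group)[::-1] if is_word else ''.join(group)
        for is_word, group in groupby(s, key=lambda c: c.isalnum())
    )
    print(out)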
|
Can a regular expression be used as a key in a dictionary?
Question: I want to create a dictionary where the keys are regular expressions:
d = {'a.*': some_value1, 'b.*': some_value2}
Then, when I look into the dictionary:
d['apple']
I want apple `'apple'` to be matched against the keys which are regular
expressions. If there is a complete match with a key/regular-expression then
the corresponding value should be returned.
For example `'apple'` matches with the regular expression `'a.*'` completely,
and so, `some_value1` should be returned.
Of course, all of this assumes that the regular expression keys do not
conflict (i.e. two keys should not both match the same string exactly). Let's
say I can manually take care of this requirement when building my keys.
Is this possible in Python? If so, it would be quite an elegant and powerful
construct!
Answer: You can use a `re.compile`d pattern object as a dictionary key:
>>> import re
>>> regex = re.compile('a.*')
>>> d = {regex: 'foo'}
>>> d[re.compile('a.*')]
'foo'
Note that recompiling the same regex gives you an equal key (the same object,
in fact: `re.compile('a.*') is d.keys()[0]`), so you can get back whatever you
stored against it.
However:
* As pointed out in the comments, _multiple regular expressions can match the same string_ ;
* Dictionaries aren't ordered, so you might get a different matching regex first each time you run the program; and
* There's no `O(1)` way to ask a dictionary `{regex: result, ...}` for a `result` value given a string that might match one or more `regex` keys.
It's therefore difficult to see what utility you'd find for this.
* * *
If you _can_ come up with a way to ensure that no two keys can match the same
string, you could create a
[`MutableMapping`](https://docs.python.org/3/library/collections.abc.html#collections.abc.MutableMapping)
subclass that applies this check when you add new keys and implements
`__getitem__` to scan through the key-value pairs and return the first value
where the argument matches the key regex. Again, though, this would be `O(n)`.
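For illustration, a rough sketch of such a mapping (Python 3.4+ for
`re.fullmatch`; note the linear scan in `__getitem__`, and that it does not
enforce the non-overlap requirement):

    import re
    from collections.abc import MutableMapping

    class RegexDict(MutableMapping):
        def __init__(self, *args, **kwargs):
            self._store = {}  # compiled pattern -> value
            self.update(dict(*args, **kwargs))

        def __setitem__(self, pattern, value):
            self._store[re.compile(pattern)] = value

        def __getitem__(self, string):
            # O(n): try every stored pattern for a complete match
            for regex, value in self._store.items():
                if regex.fullmatch(string):
                    return value
            raise KeyError(string)

        def __delitem__(self, pattern):
            del self._store[re.compile(pattern)]

        def __iter__(self):
            return iter(self._store)

        def __len__(self):
            return len(self._store)

    d = RegexDict({'a.*': 'some_value1', 'b.*': 'some_value2'})
    print(d['apple'])  # -> 'some_value1'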
|
Optimal framework to distribute a Python program across n cores
Question: I'm new to distributed systems and have been tasked with the objective of
distributing a piece of existing Python code. The goal is to treat the code as
a binary or a library and author two different kinds of wrappers:
* **Wrapper 1:** Receives large datastreams from the environment, _invokes the Python code_ to perform some computation on it and then breaks it up and sends the chunks of data (and some other things) to the worker nodes. Runs in the master node.
* **Wrapper 2:** Receives those chunks of data, _invokes the Python code_ to do some computation on them and when a particular condition is met, sends data back to the master node.
The process is repeated until no further data comes to the master node. It can
be exemplified by the following figure:[](http://i.stack.imgur.com/vH5Uq.png)
So there exists both
(1) The need for communication between the workers and the master node as well
as
(2) The need for invocation of existing Python code.
It is also important that the entire framework views the notion of "node"
agnostically, since it needs to run either on a personal computer, where nodes
equate to cores (physical or virtual), or on a cluster, where nodes can be
entire computers with a number of cores each. I'm therefore looking for a
technology that can help me achieve this plethora of goals. I'm already
studying up on Apache Spark, yet I'm not entirely sure whether Spark will
allow me to execute Python code in a streamlined fashion, and was looking for
ideas.
Answer: Check out [`celery`](http://www.celeryproject.org/) as an easy option:
> **Celery: Distributed Task Queue**
>
> Celery is an asynchronous task queue/job queue based on distributed message
> passing. It is focused on real-time operation, but supports scheduling as
> well. The execution units, called tasks, are executed concurrently on a
> single or more worker servers using multiprocessing, Eventlet, or gevent.
> Tasks can execute asynchronously (in the background) or synchronously (wait
> until ready).
>
> Celery is used in production systems to process millions of tasks a day.
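For a flavor of what that looks like, a minimal celery task module (the broker
URL and the computation body are placeholders):

    # tasks.py
    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//', backend='rpc://')

    @app.task
    def process_chunk(chunk):
        # invoke the existing Python code on one chunk of data here
        return len(chunk)  # placeholder computation

The master node would then fan out work with calls like
`process_chunk.delay(chunk)` and gather the returned `AsyncResult` objects,
while worker nodes (cores or whole machines alike) are started with
`celery -A tasks worker`.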
|
Empirical cdf in python similar to matlab's one
Question: I have some code in matlab that I would like to rewrite into python. It's a
simple program that computes some distribution and plots it in double-log
scale.
The problem I occured is with computing cdf. Here is matlab code:
for D = 1:10
delta = D / 10;
for k = 1:n
N_delta = poissrnd(delta^-alpha,1);
Y_k_delta = ( (1 - randn(N_delta)) / (delta.^alpha) ).^(-1/alpha);
Y_k_delta = Y_k_delta(Y_k_delta > delta);
X(k) = sum(Y_k_delta);
%disp(X(k))
end
[f,x] = ecdf(X);
plot(log(x), log(1-f))
hold on
end
In matlab I can simply use:
[f,x] = ecdf(X);
to get cdf (f) at points x.
[Here](http://www.mathworks.com/help/stats/ecdf.html) is documentation for it.
In python it is more complicated:
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
from statsmodels.distributions.empirical_distribution import ECDF
alpha = 1.5
n = 1000
X = []
for delta in range(1,5):
delta = delta/10.0
for k in range(1,n + 1):
N_delta = np.random.poisson(delta**(-alpha), 1)
Y_k_delta = ( (1 - np.random.random(N_delta)) / (delta**alpha) )**(-1/alpha)
Y_k_delta = [i for i in Y_k_delta if i > delta]
X.append(np.sum(Y_k_delta))
ecdf = ECDF(X)
x = np.linspace(min(X), max(X))
f = ecdf(x)
plt.plot(np.log(f), np.log(1-f))
plt.show()
It makes my plot look very strange, definitely not as smooth as matlab's.
I think the problem is that I do not understand the `ECDF` function, or that
it works differently than in matlab.
I implemented [this](http://stackoverflow.com/questions/3209362/how-to-plot-
empirical-cdf-in-matplotlib-in-python) solution (the most upvoted one) in my
python code, but it looks like it doesn't work correctly.
Answer: Once you have your sample, you can easily compute the ECDF using a combination
of
[`np.unique`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html)*
and
[`np.cumsum`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html):
import numpy as np
def ecdf(sample):
# convert sample to a numpy array, if it isn't already
sample = np.atleast_1d(sample)
# find the unique values and their corresponding counts
quantiles, counts = np.unique(sample, return_counts=True)
# take the cumulative sum of the counts and divide by the sample size to
# get the cumulative probabilities between 0 and 1
cumprob = np.cumsum(counts).astype(np.double) / sample.size
return quantiles, cumprob
For example:
from scipy import stats
from matplotlib import pyplot as plt
# a normal distribution with a mean of 0 and standard deviation of 1
n = stats.norm(loc=0, scale=1)
# draw some random samples from it
sample = n.rvs(100)
# compute the ECDF of the samples
qe, pe = ecdf(sample)
# evaluate the theoretical CDF over the same range
q = np.linspace(qe[0], qe[-1], 1000)
p = n.cdf(q)
# plot
fig, ax = plt.subplots(1, 1)
ax.hold(True)
ax.plot(q, p, '-k', lw=2, label='Theoretical CDF')
ax.plot(qe, pe, '-r', lw=2, label='Empirical CDF')
ax.set_xlabel('Quantile')
ax.set_ylabel('Cumulative probability')
ax.legend(fancybox=True, loc='right')
plt.show()
[](http://i.stack.imgur.com/92VBM.png)
* * *
* If you're using a version of numpy older than 1.9.0 then `np.unique` won't accept the `return_counts` keyword argument, and you'll get a `TypeError`:
TypeError: unique() got an unexpected keyword argument 'return_counts'
In that case, a workaround would be to get the set of "inverse" indices and
use `np.bincount` to count the occurrences:
quantiles, idx = np.unique(sample, return_inverse=True)
counts = np.bincount(idx)
|
how can i run RandomRowFilter in happybase
Question: I want to sample row keys in hbase with happybase (because of a memory
limit). So I searched and implemented:
import happybase
"""~ """"
table = connection.table('drivers')
a = list(table.scan(filter="RandomRowFilter (chance=0.1f)"))
or
a = list(table.scan(filter="RandomRowFilter ('chance',=,'0.1')"))
print a
but it always says `thrift.Thrift.TApplicationException: Internal error
processing scannerOpenWithScan`.
Is there any example code for RandomRowFilter in python?
The version check is all right, according to [thrift hbase client - support
filters and coprocessors](http://stackoverflow.com/questions/14070303/thrift-
hbase-client-support-filters-and-coprocessors). I ran this code...
Please help me
Answer: I think happybase has no filter related to RandomRowFilter. I mistakenly
thought that happybase uses the HBase Java classes related to filters, but
happybase has its own filters. So I need to make a Java application to sample
the hbase row keys.
|
Using ExtractMsg in a loop?
Question: I am trying to write a script that will extract details from Outlook .msg
files and append them to a .csv file. ExtractMsg
(<https://github.com/mattgwwalker/msg-extractor>) will process the messages
one at a time, at the command line with 'python ExtractMsg.py message', but I
can't work out how to use this to loop through all the messages in the
directory.
import ExtractMsg
import glob
for message in glob.glob('*.msg'):
print 'Reading', message
ExtractMsg(message)
This gives "'module' object is not callable". I have tried to look at the
ExtractMsg module but the structure of it is beyond me at the moment. How can
I make the module callable?
Answer:
ExtractMsg(message)
You are trying to call a module object - exactly what the error message is
telling you.
Perhaps you need to use ExtractMsg.Message class instead
msg = ExtractMsg.Message(message)
In the next link on the very bottom you will find example of usage
<https://github.com/mattgwwalker/msg-extractor/blob/master/ExtractMsg.py>
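Putting that into the question's loop would look roughly like this (check the
linked source for the exact fields the `Message` class exposes):

    import glob
    import ExtractMsg

    for message in glob.glob('*.msg'):
        print 'Reading', message
        msg = ExtractMsg.Message(message)
        # msg now wraps the parsed .msg file; pull the details to append
        # to your .csv from its attributes (see the linked ExtractMsg.py)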
|
Python read a txt file into a list of lists of numbers
Question: My txt file looks like this:
[[1,3,5],[1,4,4]]
[[1,4,7],[1,4,8],[2,4,5]]
And I was trying to convert it into a list, which include all the lists in the
txt file. So the desired output for my example would be:
[[[1,3,5],[1,4,4]], [[1,4,7],[1,4,8],[2,4,5]]]
It seems like an easy task, except that I could only get this:
['[[1,3,5],[1,4,4]]', '[[1,4,7],[1,4,8],[2,4,5]]']
Is there an easy way to convert the string type into a list?
* * *
The code I used :
input_list = []
with open("./the_file.txt", 'r') as f:
lines = f.read().split('\n')
for each_line in lines:
input_list.append(each_line)
f.close()
Answer: You really want to evaluate each line in your file as actual python code.
However, doing so can be problematic (e.g.: what happens if one line says
`import os; os.system('rm -rf /')`).
[So you don't want to use something like
`eval`](http://stackoverflow.com/q/15197673/198633) for this
Instead, you might consider using `ast.literal_eval`, which has a few
safeguards against this sort of behavior:
import ast

with open("./the_file.txt", 'r') as f:
    answer = [ast.literal_eval(line.strip()) for line in f]
|
How do I import Zbar into my Python 3.4 script?
Question: I am pretty new to programming, and have never used Zbar before. I am trying
to write a simple script that will allow me to import Zbar and use it to
decode a barcode image. I already have a script set up to decode text from
images that uses Pytesseract and Tesseract OCR, but I need to be able to
decode barcodes as well. I have Windows 7 32 bit, and and am using Python 3.4.
I have already installed Zbar and have used it from the command line
successfully to decode their barcode sample. I have tried using >pip install
zbar, but I keep getting the error:
"fatal error C1083: Cannot open include file: 'zbar.h': No such file or
directory error: command 'C:\Program Files\Microsoft Visual Studio
10.0\VC\BIN\cl.exe' failed with exit status 2"
Getting the pytesseract OCR was painless but I have wasted a lot of time on
this barcode portion of it, any help or alternatives would be much
appreciated.
Answer: Forget wrestling with all of the wrappers. The easiest solution for me was to
simply use:
    import os
    os.system(r'D:\Winapps\Zbar\bin\zbarimg.exe -d d:\Winapps\Zbar\Examples\barcode.png')
Worked instantly. Hope this helps anyone else struggling with that issue.
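If you need the decoded text back in your script rather than just printed to
the console, `subprocess` is one way (a sketch using the same paths as above):

    import subprocess

    output = subprocess.check_output(
        [r'D:\Winapps\Zbar\bin\zbarimg.exe',
         r'D:\Winapps\Zbar\Examples\barcode.png'])
    print(output)  # zbarimg prints the symbology and decoded data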
|
User Defined Function breaks pyspark dataframe
Question: My spark version is 1.3, I am using pyspark.
I have a large dataframe called df.
from pyspark import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.parquetFile("events.parquet")
I then select a few columns of the dataframe and try to count the number of
rows. This works fine.
df3 = df.select("start", "end", "mrt")
print(type(df3))
print(df3.count())
I then apply a user defined function to convert one of the columns from a
string to a number, this also works fine
from pyspark.sql.functions import UserDefinedFunction
from pyspark.sql.types import LongType
CtI = UserDefinedFunction(lambda i: int(i), LongType())
df4 = df2.withColumn("mrt-2", CtI(df2.mrt))
However if I try to count the number of rows I get an exception even though
the type shows that it is a dataframe just like df3.
print(type(df4))
print(df4.count())
My Error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-10-53941e183807> in <module>()
8 df4 = df2.withColumn("mrt-2", CtI(df2.mrt))
9 print(type(df4))
---> 10 print(df4.count())
11 df3 = df4.select("start", "end", "mrt-2").withColumnRenamed("mrt-2", "mrt")
/data/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/python/pyspark/sql/dataframe.py in count(self)
299 2L
300 """
--> 301 return self._jdf.count()
302
303 def collect(self):
/data/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
536 answer = self.gateway_client.send_command(command)
537 return_value = get_return_value(answer, self.gateway_client,
--> 538 self.target_id, self.name)
539
540 for temp_arg in temp_args:
/data/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling o152.count.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1379 in stage 12.0 failed 4 times, most recent failure: Lost task 1379.3 in stage 12.0 (TID 27021, va1ccogbds01.lab.ctllabs.io): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/data/0/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/spark-assembly-1.3.0-cdh5.4.7-hadoop2.6.0-cdh5.4.7.jar/pyspark/worker.py", line 101, in main
process()
File "/data/0/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/spark-assembly-1.3.0-cdh5.4.7-hadoop2.6.0-cdh5.4.7.jar/pyspark/worker.py", line 96, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/data/0/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/spark-assembly-1.3.0-cdh5.4.7-hadoop2.6.0-cdh5.4.7.jar/pyspark/serializers.py", line 236, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/data/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/lib/spark/python/pyspark/sql/functions.py", line 119, in <lambda>
File "<ipython-input-10-53941e183807>", line 7, in <lambda>
TypeError: int() argument must be a string or a number, not 'NoneType'
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:135)
at org.apache.spark.api.python.PythonRDD$$anon$1.next(PythonRDD.scala:98)
at org.apache.spark.api.python.PythonRDD$$anon$1.next(PythonRDD.scala:94)
at org.apache.spark.InterruptibleIterator.next(InterruptibleIterator.scala:43)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.rdd.RDD$$anonfun$zip$1$$anon$1.hasNext(RDD.scala:743)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$6.apply(Aggregate.scala:127)
at org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$6.apply(Aggregate.scala:124)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:634)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1210)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1199)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1198)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1198)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1400)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1361)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
---------------------------------------------------------------------------
Am I using the user defined function correctly? Any idea why the data frame
functions don't work on the data frame?
Answer: From the stack trace, it looks like your column contains a `None` value which
is breaking the `int` cast; you could try changing your lambda function to
`lambda i: int(i) if i else None`, to handle this situation.
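In code, that change would be:

    CtI = UserDefinedFunction(lambda i: int(i) if i else None, LongType())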
Note that just because `df2.withColumn("mrt-2", CtI(df2.mrt))` didn't throw an
error doesn't mean that your code is fine: Spark has lazy evaluation, so it
won't actually try and run your code until you call `count`, `collect` or
something like that.
|
Python multiprocessing with arrays and multiple arguments
Question: So I am trying to read in a bunch of very large data files and each one takes
quite some time to load. I am trying to figure out how to load them in the
quickest way and without running into memory problems. Once the data files are
loaded into the array the correct way I do not need to write to them, but just
need to read. I've been trying to parallelize this for some time, but can't
figure it out.
Let's say I have 400 time files. Each of these files is tab separated and has
30 variables each with 40,000 data points. I would like to create a
400x30x40000 array so that I can access the points easily. The data files are
set up so that the first 40k points are for variable 1, the second 40k for
variable 2, and so on.
I have written a function that loads in a time file correctly and stores it in
my array correctly. What I'm having trouble with is parallelizing it. This
does work if I put it in a for loop and iterate over i.
    import h5py
    import pandas as pd

    h5file = h5py.File('data.h5', 'a')
    data = h5file.create_dataset("default", (len(files), len(header), numPts))
    # is shape 400x30x40000

    def loadTimes(files, i, header, numPts, data):
        # files has 400 elements
        # header has 30 elements
        # numPts is an integer
        allData = pd.read_csv(files[i], delimiter="\t", skiprows=2, header=None).T
        for j in range(0, len(header)):
            data[i, j, :] = allData[0][j*numPts:(j+1)*numPts]
        del allData
files is the list of time files loaded by `subprocess.check_output` (has about
400 elements), header is the list of variables, loaded from another file (has
30 elements in it). numPts is the number of points per variable (so around
40k).
I've tried using `pool.map` to load the data but found it didn't like multiple
arguments. I also tried using partial, zip, and lambda functions, but none of
those seemed to like my arrays.
I am not set on this method; if there is a better way to do it I
will greatly appreciate it. It will just take too long to load in all this
data one file at a time. My calculations show that it would take ~3 hrs on
my computer using one core, and it will use up a lot of my memory. I have
access to another machine with a lot more cores, which is actually where I
will be doing this, and I'd like to utilize them properly.
Answer: So how I solved this was by using the h5 file format. What I did was rewrite the
loops so that they only take the iteration index as an argument:
    def LoadTimeFiles(i):
        from pandas import read_csv
        import h5py as h5
        dataFile = h5.File('data.h5', 'r+')
        rFile = dataFile['files'][i]
        data = dataFile['data']
        lheader = len(data[0, :, 0])
        numPts = len(data[0, 0, :])
        allData = read_csv(rFile, delimiter="\t", skiprows=2, header=None, low_memory=False).T
        for j in range(0, lheader):
            data[i, j, :] = allData[0][j*numPts:(j+1)*numPts]
        del allData
        dataFile.close()

    def LoadTimeFilesParallel(np):
        from multiprocessing import Pool, freeze_support
        import h5py as h5
        files = h5.File('data.h5', 'r')
        numFiles = len(files['data'][:, 0, 0])
        files.close()
        freeze_support()  # note: needs the call parentheses, and only matters on Windows
        pool = Pool(np)
        pool.map(LoadTimeFiles, range(numFiles))

    if __name__ == '__main__':
        np = 5
        LoadTimeFilesParallel(np)
So since I was storing the data in the h5 format anyway, I thought I'd be
tricky and open it up inside each loop (I can see no time delays in reading the h5
files). I added the option `low_memory=False` to the `read_csv` command
because it made it go faster. The j loop was really fast so I didn't need to
speed it up.
Now each `LoadTimeFiles` call takes about 20-30 secs and we do 5 at once
without order mattering. My RAM never goes above 3.5 GB (total system usage)
and drops back to under a gig after runs.
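For reference, the multiple-argument problem with `pool.map` can also be handled
with `functools.partial`, provided every fixed argument is picklable; the open
h5py dataset is not, which is exactly why it is reopened inside each worker
above. A sketch with a hypothetical reordered signature:

    from functools import partial
    from multiprocessing import Pool

    def loadTimes(i, files, header, numPts):
        # open data.h5 inside the worker, as in LoadTimeFiles above
        pass

    pool = Pool(5)
    pool.map(partial(loadTimes, files=files, header=header, numPts=numPts),
             range(len(files)))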
|
Convert string to integer python CGI
Question: I'm stuck on a part of my code where I need to convert the value of a radio
button from a string into an int, because the function the value goes into takes
an integer. When the radio button is selected and the user presses submit, I
get a string of that value when I need an integer. I've tried the basic
conversion tactics in Python like int(), but I get a TypeError: int() argument
must be a string, a bytes-like object or a number, not 'list'. Are there any
other ways to convert this value to an int? For example, when I print it out I
get ['14'] when I need 14.
My location_table script
    import cgi
    import cgitb
    cgitb.enable()
    from A3 import db_access
    from area_selection import page

    fs = cgi.FieldStorage()
    area_id = fs.getlist('area')
    if len(area_id) == 0:
        page("Exactly 1 selected")
        quit()
    try:
        area_id = int(area_id[0])
    except Exception as exc:
        page("area id should be valid int: " + str(area_id))
        quit()
    location = db_access.get_locations_for_area(area_id)
    if not location:
        page("area with id {} does not exist".format(area_id))
        quit()

    def page(*messages):
        print("Content-Type: text/html; charset=UTF-8")
        print("")
        print('''
        <!DOCTYPE html>
        <html>
        <head>
        <title>{}</title>
        <link rel="stylesheet" type="text/css" href="/style1.css"/>
        </head>
        <body>
        '''.format("Areas"))

        def name_key(x):
            return x['location_id']

        print("<form method='get' action='measurement_table.py'>")
        print("<name='area'>")
        print("<table class='grid'>")
        row_template = "<tr>" \
                       "<td>{}</td>" \
                       "<td>{}</td>" \
                       "<td>{}</td>" \
                       "<td>{}</td>" \
                       "</tr>"
        print("<tr><th>Select</th><th>ID</th><th>Name</th><th>Altitude</th></tr>")
        for x in sorted(location, key=name_key):
            location_id = x['location_id']
            name = x['name']
            alt = x['altitude']
            radio = "<input type='radio' name='location_id' value={:.0f}>".format(location_id)
            print(row_template.format(radio, location_id, name, alt))
        print("</input>")
        print("</table>")
        print("<p>"
              "<input type='submit' value='Get Measurement Information'>"
              "</p>")
        print('''
        </body>
        </html>
        ''')

    if __name__ == "__main__":
        page()
My measurements_table script, which location_table links to when the user
presses submit:
    import cgi
    import cgitb
    cgitb.enable()
    from A3 import db_access
    from area_selection import page

    fs = cgi.FieldStorage()
    location_id = fs.getlist('location_id')
    byte = int(location_id[0])
    if len(location_id) == 0:
        page("Exactly 1 selected")
        quit()
    try:
        area_id = int(location_id[0])
    except Exception as exc:
        page("area id should be valid int: " + str(location_id))
        quit()
    mea = db_access.get_measurements_for_location(location_id)
The problem I'm having in the location_table code:

    radio = "<input type='radio' name='location_id' value={}>".format

And in measurement_table, where I'm having the conversion error:

    mea = db_access.get_measurements_for_location(location_id)

The error: [In the URL address, you can see the location_id number I'm trying
to convert](http://i.stack.imgur.com/JNP3t.png)
Answer: You have a list. Presumably the first item of that list is the value of the
selected radio button. You can get the first item of a list using list
indexing, like this:
location_id = ['14']
location_id = int(location_id[0])
Note also that in your measurements_table script you convert `location_id[0]`
to an int but then pass the original list to
`db_access.get_measurements_for_location`; pass the converted integer instead.
If this hasn't helped, add the rest of your CGI code to your question, as that
is where the problem really is.
|
how to convert a text file (with unneeded double quotes) into pandas DataFrame?
Question: I need to import web-based data (as posted below) into Python. I used
`urllib2.urlopen` ([data available
here](https://raw.githubusercontent.com/QuantEcon/QuantEcon.py/master/data/test_pwt.csv)).
However, the data was imported as string lines. How can I convert them into a
pandas `DataFrame` while stripping away the double-quotes `"`? Thank you for
your help.
"country","country isocode","year","POP","XRAT","tcgdp","cc","cg"
"Argentina","ARG","2000","37335.653","0.9995","295072.21869","75.716805379","5.5788042896"
"Australia","AUS","2000","19053.186","1.72483","541804.6521","67.759025993","6.7200975332"
"India","IND","2000","1006300.297","44.9416","1728144.3748","64.575551328","14.072205773"
"Israel","ISR","2000","6114.57","4.07733","129253.89423","64.436450847","10.266688415"
"Malawi","MWI","2000","11801.505","59.543808333","5026.2217836","74.707624181","11.658954494"
"South Africa","ZAF","2000","45064.098","6.93983","227242.36949","72.718710427","5.7265463933"
"United States","USA","2000","282171.957","1","9898700","72.347054303","6.0324539789"
"Uruguay","URY","2000","3219.793","12.099591667","25255.961693","78.978740282","5.108067988"
Answer: You can do:
>>> import pandas as pd
>>> df=pd.read_csv('https://raw.githubusercontent.com/QuantEcon/QuantEcon.py/master/data/test_pwt.csv')
>>> df
country country isocode year POP XRAT \
0 Argentina ARG 2000 37335.653 0.999500
1 Australia AUS 2000 19053.186 1.724830
2 India IND 2000 1006300.297 44.941600
3 Israel ISR 2000 6114.570 4.077330
4 Malawi MWI 2000 11801.505 59.543808
5 South Africa ZAF 2000 45064.098 6.939830
6 United States USA 2000 282171.957 1.000000
7 Uruguay URY 2000 3219.793 12.099592
tcgdp cc cg
0 295072.218690 75.716805 5.578804
1 541804.652100 67.759026 6.720098
2 1728144.374800 64.575551 14.072206
3 129253.894230 64.436451 10.266688
4 5026.221784 74.707624 11.658954
5 227242.369490 72.718710 5.726546
6 9898700.000000 72.347054 6.032454
7 25255.961693 78.978740 5.108068
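If you have already fetched the bytes with `urllib2.urlopen`, pandas will accept
a file-like buffer just as happily (a sketch; the CSV parser strips the double
quotes automatically):

    import urllib2
    from io import BytesIO
    import pandas as pd

    url = ('https://raw.githubusercontent.com/QuantEcon/QuantEcon.py/'
           'master/data/test_pwt.csv')
    raw = urllib2.urlopen(url).read()
    df = pd.read_csv(BytesIO(raw))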
|
Export data from Google App Engine to csv
Question: This [old answer](http://stackoverflow.com/questions/2810394/export-import-
datastore-from-to-google-app-engine) points to a link in the [Google App Engine
documentation](http://code.google.com/appengine/docs/python/tools/uploadingdata.html),
but that link is now about backing up your GAE data, not downloading it.
So how do I download all the data into a csv? The data is small, i.e. < 1 GB.
Answer: You can use `appcfg.py` to download `Kind` data in csv format.
> $ appcfg.py download_data --help
>
> Usage: appcfg.py [options] download_data
>
> Download entities from datastore.
>
> The 'download_data' command downloads datastore entities and writes them to
> file as CSV or developer defined format.
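A sketch of the sort of invocation that help text describes (app id and kind are
placeholders; depending on the SDK version you may also need a `--config_file`
with an exporter definition to control the CSV columns):

    $ appcfg.py download_data --application=your-app-id --kind=YourKind \
        --url=http://your-app-id.appspot.com/_ah/remote_api --filename=yourkind.csv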
|
Python implementing Singleton as metaclass , but for abstract classes
Question: I have an abstract class and I would like to implement Singleton pattern for
all classes that inherit from my abstract class. I know that my code won't
work because there will be metaclass attribute conflict. Any ideas how to
solve this?
    from abc import ABCMeta, abstractmethod, abstractproperty

    class Singleton(type):
        _instances = {}
        def __call__(cls, *args, **kwargs):
            if cls not in cls._instances:
                cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
            return cls._instances[cls]

    class GenericLogger(object):
        __metaclass__ = ABCMeta

        @abstractproperty
        def SearchLink(self): pass

    class Logger(GenericLogger):
        __metaclass__ = Singleton

        @property
        def SearchLink(self): return ''

    a = Logger()
Answer: Create a subclass of `ABCMeta`:
    class SingletonABCMeta(ABCMeta):
        _instances = {}
        def __call__(cls, *args, **kwargs):
            if cls not in cls._instances:
                cls._instances[cls] = super(SingletonABCMeta, cls).__call__(*args, **kwargs)
            return cls._instances[cls]

    class GenericLogger(object):
        __metaclass__ = SingletonABCMeta

        @abstractproperty
        def SearchLink(self): pass

    class Logger(GenericLogger):
        @property
        def SearchLink(self): return ''
Metaclasses work just like regular classes; you can still create subclasses
and extend their functionality. `ABCMeta` doesn't itself define a `__call__`
method, so it is safe to add one.
Demo:
>>> from abc import ABCMeta, abstractproperty
>>> class SingletonABCMeta(ABCMeta):
... _instances = {}
... def __call__(cls, *args, **kwargs):
... if cls not in cls._instances:
... cls._instances[cls] = super(SingletonABCMeta, cls).__call__(*args, **kwargs)
... return cls._instances[cls]
...
>>> class GenericLogger(object):
... __metaclass__ = SingletonABCMeta
... @abstractproperty
... def SearchLink(self): pass
...
>>> class Logger(GenericLogger):
... @property
... def SearchLink(self): return ''
...
>>> Logger()
<__main__.Logger object at 0x1012ace90>
>>> Logger()
<__main__.Logger object at 0x1012ace90>
>>> class IncompleteLogger(GenericLogger):
... pass
...
>>> IncompleteLogger()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 5, in __call__
TypeError: Can't instantiate abstract class IncompleteLogger with abstract methods SearchLink
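Note that the `__metaclass__` class attribute only has an effect on Python 2. On
Python 3 the same idea would be spelled with the `metaclass` keyword; a sketch:

    from abc import ABCMeta, abstractmethod

    class SingletonABCMeta(ABCMeta):
        _instances = {}
        def __call__(cls, *args, **kwargs):
            if cls not in cls._instances:
                cls._instances[cls] = super(SingletonABCMeta, cls).__call__(*args, **kwargs)
            return cls._instances[cls]

    class GenericLogger(metaclass=SingletonABCMeta):
        @property
        @abstractmethod
        def SearchLink(self): pass

    class Logger(GenericLogger):
        @property
        def SearchLink(self): return ''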
|
How to add chain id in pdb
Question: Using the biopython library, I would like to add chain IDs to my pdb file. I'm
using

    p = PDBParser()
    structure = p.get_structure('mypdb', 'mypdb.pdb')
    model = structure[0]
    model.child_list = ["A", "B"]
But I got this error:
Traceback (most recent call last):
File "../../principal_axis_v3.py", line 319, in <module>
main()
File "../../principal_axis_v3.py", line 310, in main
protA=read_PDB(struct,ch1,s1,e1)
File "../../principal_axis_v3.py", line 104, in read_PDB
chain=model[ch]
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Bio/PDB/Entity.py", line 38, in __getitem__
return self.child_dict[id]
KeyError: 'A'
I tried to change the keys in the child_dict, but I got another error:
Traceback (most recent call last):
File "../../principal_axis_v3.py", line 319, in <module>
main()
File "../../principal_axis_v3.py", line 310, in main
protA=read_PDB(struct,ch1,s1,e1)
File "../../principal_axis_v3.py", line 102, in read_PDB
model.child_dict.keys=["A","B"]
AttributeError: 'dict' object attribute 'keys' is read-only
How can I add chains ids ?
Answer: Your error is that `child_list` is not a list of chain IDs, but of `Chain`
objects (`Bio.PDB.Chain.Chain`). You have to create `Chain` objects and then
add them to the structure. A lame example:
from Bio.PDB.Chain import Chain
my_chain = Chain("C")
model.add(my_chain)
Now you can access the model `child_dict`:
>>> model.child_dict
{'A': <Chain id=A>, 'C': <Chain id=C>}
>>> model.child_dict["C"]
<Chain id=C>
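If the end goal is a PDB file containing the new chain IDs, the modified
structure can be written back out with Bio.PDB's `PDBIO` (a sketch; the output
filename is arbitrary):

    from Bio.PDB import PDBIO

    io = PDBIO()
    io.set_structure(structure)
    io.save('mypdb_with_chains.pdb')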
|
How to correctly load Flask app module in uWSGI?
Question: [EDIT]
I managed to load the flask app module by starting uwsgi from within the
project folder. I now have a problem with nginx not having permission to the
socket file though (scroll down to the end of the question). If anybody can
help with that..?
[/EDIT]
Following [this tutorial](http://vladikk.com/2013/09/12/serving-flask-with-
nginx-on-ubuntu/) I'm trying to run my Flask website with
[uWSGI](http://uwsgi-docs.readthedocs.org/en/latest/) and
[nginx](http://nginx.org/). When doing exactly as the tutorial says it works
fine. I now want to run my own website though. And the structure of my own
website project looks as follows:
myownproject
|-app
| -__init__.py
|-run.py
|-myownproject_nginx.conf
|-myownproject_uwsgi.ini
in which `app` is loaded in `__init__.py` like this:
app = Flask(__name__)
and myownproject_uwsgi.ini looks like this:
[uwsgi]
#application's base folder
base = /home/kramer65/myownproject
#python module to import
app = app
module = %(app)
# I disabled these lines below because I don't use a venv (please don't ask)
# home = %(base)/venv
# pythonpath = %(base)
#socket file's location
socket = /home/kramer65/myownproject/%n.sock
#permissions for the socket file
chmod-socket = 666
#the variable that holds the flask application inside the imported module
callable = app
#location of log files
logto = /var/log/uwsgi/%n.log
But when I run this:
$ uwsgi --ini /home/kramer65/myownproject/myownproject_uwsgi.ini
[uWSGI] getting INI configuration from /home/kramer65/myownproject/myownproject_uwsgi.ini
I get the following logs in `/var/log/uwsgi/myownproject_uwsgi.log`:
*** Operational MODE: single process ***
ImportError: No module named app
unable to load app 0 (mountpoint='') (callable not found or import error)
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
Why doesn't uwsgi find my callable? And why is the `mountpoint` empty (`=''`)?
What am I doing wrong here?
Does anybody know how I can get this to work properly?
[EDIT]
Okay, I tried running `uwsgi --ini
/home/kramer65/myownproject/myownproject_uwsgi.ini` from within the
`myownproject` project folder, which solves this problem; it now finds the
callable and that seems to work fine.
I still get a 502 though, the next problem seems to be a permission problem
with nginx not having permission to the socket file.
`/var/log/nginx/error.log` says:
> 2015/10/27 11:40:36 [crit] 14276#0: *1 connect() to
> unix:/home/kramer65/myownproject/myownproject_uwsgi.sock failed (13:
> Permission denied) while connecting to upstream, client: 80.xxx.xxx.xxx,
> server: localhost, request: "GET / HTTP/1.1", upstream:
> "uwsgi://unix:/home/kramer65/myownproject/myownproject_uwsgi.sock:", host:
> "52.xx.xx.xxx"
So I changed the `chmod-socket = 666` to `chmod-socket = 777`. When doing an
`ls -l` I actually see the socket file having full permissions, but I still
get the error I pasted above.
Any ideas to get this working?
Answer: The `base` config is just an internal variable. The part you commented out
caused your problem.
If you don't want to use a virtualenv and set your pythonpath, change the `base`
config to `chdir`:

    chdir = /home/kramer65/myownproject

Internally, uWSGI will run from `chdir` instead of from the current directory.
About the socket permission problem: the nginx user (probably `www-data`) does
not have access to your personal folder (`/home/kramer65/`), and a permissive
mode on the socket file does not help while nginx cannot traverse the parent
directories. You must put the socket in a folder where both nginx and uwsgi
have access.
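Putting both changes together, a sketch of the adjusted ini (paths taken from
the question; `/tmp` is only an example of a directory both processes can
traverse, since `chmod-socket = 777` does not help while nginx lacks execute
permission on `/home/kramer65` itself):

    [uwsgi]
    chdir = /home/kramer65/myownproject
    module = app
    callable = app
    socket = /tmp/myownproject_uwsgi.sock
    chmod-socket = 666
    logto = /var/log/uwsgi/%n.log

Remember to point the `uwsgi_pass` in myownproject_nginx.conf at the new socket
path.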
|
email address not recognised in XML-RPC interface to Neos Server
Question: I am using the XML-RPC submission API to the NEOS Server (optimization, AMPL,
MILP, CPLEX) and am receiving an error message saying that "CPLEX will not run
unless you provide a valid email address."
Am I misinterpreting what I should do with the provided Python template found
on the NEOS site [here] and [here](http://www.neos-server.org/neos/NEOS-
API.html)?
The relevant snippet of the Neos-provided .py [file](http://www.neos-
server.org/neos/NeosClient.py) that I edited is below
import sys
import xmlrpclib
import time
NEOS_HOST="www.neos-server.org"
NEOS_PORT=3332
CONTACT_EMAIL = '[email protected]'
INTERFACE = 'XML-RPC'
neos=xmlrpclib.Server("http://%s:%d" % (NEOS_HOST, NEOS_PORT))
...
(jobNumber, password) = neos.submitJob(xml, CONTACT_EMAIL, INTERFACE)
sys.stdout.write("JobNumber = %d \n" % jobNumber)
Besides the email error, my code works. I know because sometimes other solvers
will return a result (it seems some solvers, though not CPLEX, don't require
an email address).
**An unrelated question** For folks who are using this Neos server interface,
what are the alternatives to using regex to parse the returned output file?
Thanks!
Answer: The NEOS server team responded:
Add your email address into the xml that you are submitting. In your xml, add
a line
<email> [email protected] </email>
along with the fields like
<model></model>
<data></data>
etc
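In the Python snippet above, that element can be spliced in before submitting; a
sketch assuming the usual `<document>` root element of a NEOS job file:

    # hypothetical splice: put the email element right inside the root tag
    xml = xml.replace('<document>',
                      '<document>\n<email>[email protected]</email>')
    (jobNumber, password) = neos.submitJob(xml, CONTACT_EMAIL, INTERFACE)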
|
What does newArray = myNumpyArray[:,0] mean?
Question: I'm not too familiar with Python and need to translate some code. Here is the
gist of what I am having a problem with:
import numpy
myNumpyArray = numpy.array([1,2,3,4])
newArray = myNumpyArray[:,0]
I don't know what `myNumpyArray[:,0]` means, and I get the error `IndexError:
too many indices`.
Answer:
    myNumpyArray[:,0]

means "every row, column 0" of myNumpyArray; since your array is 1-dimensional,
there is no second axis to index, so this doesn't work.
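For illustration, compare the 2-D case, where the syntax is valid, with the 1-D
case from the question:

    import numpy as np

    a2 = np.array([[1, 2], [3, 4]])  # 2-D
    a2[:, 0]                         # array([1, 3]): every row, column 0

    a1 = np.array([1, 2, 3, 4])      # 1-D, like myNumpyArray
    a1[0]                            # 1 -- a single index is all it takes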
|
Pair strings in list based on containing text in Python
Question: I'm looking to take a list of strings and create a list of tuples that groups
items based on whether they contain the same text.
For example, say I have the following list:
MyList=['Apple1','Pear1','Apple3','Pear2']
I want to pair them based on all but the last character of their string, so
that I would get:
ListIWant=[('Apple1','Apple3'),('Pear1','Pear2')]
We can assume that only the last character of the string is used to identify.
Meaning I'm looking to group the strings by the following unique values:
>>> list(set([x[:-1] for x in MyList]))
['Pear', 'Apple']
Answer:
In [69]: from itertools import groupby
In [70]: MyList=['Apple1','Pear1','Apple3','Pear2']
In [71]: [tuple(v) for k, v in groupby(sorted(MyList, key=lambda x: x[:-1]), lambda x: x[:-1])]
Out[71]: [('Apple1', 'Apple3'), ('Pear1', 'Pear2')]
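Note that `groupby` only merges runs of consecutive items with equal keys, which
is why the list is sorted by the same key function first. If the order of the
output tuples does not matter, a dictionary-based sketch avoids the sort:

    from collections import defaultdict

    MyList = ['Apple1', 'Pear1', 'Apple3', 'Pear2']
    groups = defaultdict(list)
    for s in MyList:
        groups[s[:-1]].append(s)  # key on everything but the last character
    ListIWant = [tuple(v) for v in groups.values()]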
|
Python - Using Fabric with Sudo
Question: I'm pretty new to python and fabric and I am trying to do a simple code where
I can get the output on two hosts that uses sudo, although I keep getting an
error.... Can anyone help me out with what I might be missing ?
My code:
from fabric.api import *
from getpass import getpass
from fabric.decorators import runs_once
env.hosts = ['host1','host2']
env.port = '22'
env.user = 'username'
env.password="password"
    def sudo_dsmc(cmd):
        sudo("-l")
When I run `fab sudo_dsmc:"-l"`:
MacBookPRO:PYTHON username$ fab sudo_dsmc:"-l"
[host1] Executing task 'sudo_dsmc'
[host1] sudo: -l
[host1] out: sudo password:
[host1] out: Sorry, user username is not allowed to execute '/bin/bash -l -c - l' as root on host1.
[host1] out:
Fatal error: sudo() received nonzero return code 1 while executing!
Requested: -l Executed: sudo -S -p 'sudo password:' /bin/bash -l -c "-l"
Aborting. Disconnecting from host1... done.
Although I can run the apt-get update with my below function fine without any
errors:
    def sudo_command(cmd):
        sudo("apt-get update")
        # run like: fab sudo_command:"apt-get-update"
Answer: It looks like your sudoers file is preventing you from running that command as
sudo. Check your /etc/sudoers file and read the sudo documentation.
Also, "-l" isn't a valid command: sudo takes -l as an optional flag (which
lists the commands allowed for the user). But Fabric's `sudo()` wraps whatever
string it is given in `/bin/bash -l -c "..."` (as the error output shows)
instead of passing it directly as parameters to sudo.
|
How to store predicted classes matching the pre-vectorized X in Python Scikit-learn?
Question: I would like to use names to predict gender, and not just the raw name but
features derived from it, such as the last name. My code's flow is: get data
into df > specify lr classifier and dv dictVectorizer > use functions to create
features > perform dictVectorization > training. I would like to do the
following but can't find any resources on how.
1) I would like to add the predicted classes (0 and 1) back into the original
data set, or into a data set where I can see both the names and the predicted
gender classes. Currently my y_test_predictions correspond only to X_test,
which is a sparse matrix.
2) How can I retain the trained classifier and use it to predict genders for a
different data set with a bunch of names? And how can I just insert a name
"Rick Grime" and have the classifier tell me what gender it predicts?
I have done something like this with nltk, but can't find any examples or
references for doing it in Scikit-learn.
Codes:
import pandas as pd
from pandas import DataFrame, Series
import numpy as np
import re
import random
import time
from random import randint
import csv
import sys
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction import DictVectorizer
from sklearn.metrics import confusion_matrix as sk_confusion_matrix
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve
from sklearn import cross_validation
data = pd.read_csv("file.csv", header=0, encoding="utf-8")
df = DataFrame(data)
dv = DictVectorizer()
lr = LogisticRegression()
X = df.raw_name.values
X2 = df.name.values
y = df.gender.values
    def feature_full_name(nameString):
        try:
            full_name = nameString
            if len(full_name) > 1:  # do not accept a name with only 1 character
                return full_name
            else:
                return '?'
        except:
            return '?'

    def feature_full_last_name(nameString):
        try:
            last_name = nameString.rsplit(None, 1)[-1]
            if len(last_name) > 1:  # do not accept a name with only 1 character
                return last_name
            else:
                return '?'
        except:
            return '?'

    def feature_name_entity(nameString2):
        space = 0
        try:
            for i in nameString2:
                if i == ' ':
                    space += 1
            return space + 1
        except:
            return 0

    my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
    my_dict2 = [{'name-entity': feature_name_entity(feature_full_name(i))} for i in X2]

    all_dict = []
    for i in range(0, len(my_dict)):
        temp_dict = dict(
            my_dict[i].items() + my_dict2[i].items()
        )
        all_dict.append(temp_dict)

    newX = dv.fit_transform(all_dict)
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=0.3)
    lr.fit(X_train, y_train)
    y_test_predictions = lr.predict(X_test)
Answer: I would use some of scikit-learn's built-in tools to split the dataframe,
vectorize the names, and predict the results. Then you can add the predicted
results back into the test dataframe. For example, using a small set of names
as an example:
data = {'Bruce Lee': 'Male',
'Bruce Banner': 'Male',
'Bruce Springsteen': 'Male',
'Bruce Willis': 'Male',
'Sarah McLaughlin': 'Female',
'Sarah Silverman': 'Female',
'Sarah Palin': 'Female',
'Sarah Hyland': 'Female'}
import pandas as pd
df = pd.DataFrame.from_dict(data, orient='index').reset_index()
df.columns = ['name', 'gender']
print(df)
name gender
0 Sarah Silverman Female
1 Sarah Palin Female
2 Bruce Springsteen Male
3 Bruce Banner Male
4 Bruce Lee Male
5 Sarah Hyland Female
6 Sarah McLaughlin Female
7 Bruce Willis Male
Now we can use scikit-learn's `CountVectorizer` to count the words in the
names; this produces essentially the same output as what you've done above,
except it doesn't filter on length of name, etc. For ease of use, we'll put
this in a pipeline with a cross-validated logistic regression:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
clf = make_pipeline(CountVectorizer(), LogisticRegressionCV(cv=2))
Now we can split our data into a train/test set, fit the pipeline, and then
assign the results:
from sklearn.cross_validation import train_test_split
df_train, df_test = train_test_split(df, train_size=0.5, random_state=0)
clf.fit(df_train['name'], df_train['gender'])
df_test = df_test.copy() # so we can modify it
df_test['predicted'] = clf.predict(df_test['name'])
print(df_test)
name gender predicted
6 Sarah McLaughlin Female Female
2 Bruce Springsteen Male Male
1 Sarah Palin Female Female
7 Bruce Willis Male Male
Similarly, we can just pass a list of names to the pipeline and get a
prediction:
>>> clf.predict(['Bruce Campbell', 'Sarah Roemer'])
array(['Male', 'Female'], dtype=object)
If you want to do more sophisticated logic in your text vectorization, you can
create a custom transformer for your input data: a web search for "scikit-
learn custom transformer" should give you a decent set of examples to work
from.
* * *
Edit: here's an example of using a custom transformer to generate dicts from
the input names:
    from sklearn.base import TransformerMixin

    class ExtractNames(TransformerMixin):
        def transform(self, X, *args):
            return [{'first': name.split()[0],
                     'last': name.split()[-1]}
                    for name in X]

        def fit(self, *args):
            return self
trans = ExtractNames()
>>> trans.fit_transform(df['name'])
[{'first': 'Bruce', 'last': 'Springsteen'},
{'first': 'Bruce', 'last': 'Banner'},
{'first': 'Sarah', 'last': 'Hyland'},
{'first': 'Sarah', 'last': 'Silverman'},
{'first': 'Sarah', 'last': 'Palin'},
{'first': 'Bruce', 'last': 'Lee'},
{'first': 'Bruce', 'last': 'Willis'},
{'first': 'Sarah', 'last': 'McLaughlin'}]
Now you can put this in a pipeline with a `DictVectorizer` to generate sparse
features:
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(ExtractNames(), DictVectorizer())
>>> pipe.fit_transform(df['name'])
<8x10 sparse matrix of type '<class 'numpy.float64'>'
with 16 stored elements in Compressed Sparse Row format>
Finally, you could make a pipeline which combines these with a cross-validated
logistic regression and proceed as above:
clf = make_pipeline(ExtractNames(), DictVectorizer(), LogisticRegressionCV())
clf.fit(df_train['name'], df_train['gender'])
df_test['predicted'] = clf.predict(df_test['name'])
From here, if you wish, you could modify the `ExtractNames` transformer to do
more sophisticated extraction of features (using some of your code from
above), and you'd end up with a pipeline implementation of your procedure that
lets you simply call `predict()` on an input list of strings. Hope that helps!
|
Histogram with Boxplot above in Python
Question: Hi, I wanted to draw a histogram with a boxplot appearing at the top of the
histogram, showing Q1, Q2 and Q3 as well as the outliers. An example image is
below. (I am using Python and Pandas) [](http://i.stack.imgur.com/7o1zE.jpg)
I have checked several examples using `matplotlib.pyplot` but hardly came up
with a good one. I also wanted to have the histogram curve appearing
like in the image below. [](http://i.stack.imgur.com/Y1DFU.jpg)
I also tried `seaborn`, which provided the shape line along with the
histogram, but I didn't find a way to incorporate the boxplot above it.
Can anyone help me do this with `matplotlib.pyplot` or `pyplot`?
Answer:
    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt

    sns.set(style="ticks")

    x = np.random.randn(100)

    f, (ax_box, ax_hist) = plt.subplots(2, sharex=True,
                                        gridspec_kw={"height_ratios": (.15, .85)})

    sns.boxplot(x, ax=ax_box)
    sns.distplot(x, ax=ax_hist)

    ax_box.set(yticks=[])
    sns.despine(ax=ax_hist)
    sns.despine(ax=ax_box, left=True)
[](http://i.stack.imgur.com/Qhk0H.png)
|
python set seems to hold two identical objects
Question: I have two sets with custom objects in them. I take the objects from one set
and add them to the other set with set.update.
Afterwards, it appears that one set contains two identical objects: their hashes
are identical, they are == to each other and not != to each other. If I cast
the set to a list and back to a set, I only have one object in the new set. I
have no other threads or processes running which might somehow mutate the state
of any object in the middle.
I could post my `__hash__` and `__eq__`, but they call the `__hash__` and
`__eq__` of multiple sub-objects and it would be a lot of code to include.
    print('old hash', map(hash, node.incoming_edges))
    print('new hash', map(hash, new_node.incoming_edges))
    if len(new_node.incoming_edges) > 1:
        node1, node2 = list(new_node.incoming_edges)
        print('eq', node1 == node2)
        print('ne', node1 != node2)
    print('type', type(node.incoming_edges))
    print('type', type(new_node.incoming_edges))

    new_node.incoming_edges.update(node.incoming_edges)
    print('combined')

    if len(new_node.incoming_edges) > 1:
        node1, node2 = list(new_node.incoming_edges)
        print('eq', node1 == node2)
        print('ne', node1 != node2)

    print('combined hash', map(hash, new_node.incoming_edges))
    print('len', len(new_node.incoming_edges))
    new_node.incoming_edges = set(list(new_node.incoming_edges))
    print('len', len(new_node.incoming_edges))
and the relevant output:
old hash [5805087492197093178]
new hash [5805087492197093178]
type <type 'set'>
type <type 'set'>
combined
eq True
ne False
combined hash [5805087492197093178, 5805087492197093178]
len 2
len 1
I was thinking that since my objects are recursive graphs, the hash might be
changed by adding them to the sets; however, I'm printing the hash before and
after the operation to confirm that the hash is not changing.
How can this possibly be happening? I would be happy to introduce more debug
output, I can reproduce easily.
P.S. Here is some info from pdb while I was trying to understand what is
happening:
57 print('type', type(new_node.incoming_edges))
58
59 import pdb; pdb.set_trace()
60
61 new_node.incoming_edges.update(node.incoming_edges)
62 -> new_node.outgoing_edges.update(node.outgoing_edges)
63 # new_node.incoming_edges = set(list(new_node.incoming_edges))
64
65 print('combined')
66 if len(new_node.incoming_edges) > 1:
67 node1, node2 = list(new_node.incoming_edges)
(Pdb) !len(new_node.incoming_edges)
2
(Pdb) !x, y = new_node.incoming_edges
(Pdb) x
<Edge user.id getters={<SQLQuery tables:users; selects:users.last_name; where:{} input_mapping:{'id': 'users.id'}, <SQLQuery tables:users; selects:users.first_name; where:{} input_mapping:{'id': 'users.id'}} setter=None out=False>
(Pdb) y
<Edge user.id getters={<SQLQuery tables:users; selects:users.last_name; where:{} input_mapping:{'id': 'users.id'}, <SQLQuery tables:users; selects:users.first_name; where:{} input_mapping:{'id': 'users.id'}} setter=None out=False>
(Pdb) hash(x)
-8545778292158950550
(Pdb) hash(y)
-8545778292158950550
(Pdb) x == y
True
(Pdb) x != y
False
(Pdb) len(set(list(new_node.incoming_edges)))
1
(Pdb) len(new_node.incoming_edges)
2
Answer: Psychic debugging: You've got `set` members in `node` that were added before
this code begins, then mutated in a way that alters their hashes. `set` caches
the hash value of each object on insertion and does not rehash under normal
conditions; in fact, copying or updating from `set` to `set` can avoid
rehashing as well, because it can copy the cached hash value directly instead
of rehashing.
You "fixed" the problem when you convert to `list` (which effectively strips
the cached hash values), then back to `set` (which must now rehash all the
elements, which causes it to fix the duplication).
Of course, that's not a real fix. The real fix is to _NEVER_ make mutable
items hashable; follow Python's lead and either use immutable types only, or
allow conversion from mutable to immutable variants, where only the immutable
variants define `__hash__`. If you define `__hash__`, you're signing up for
the rules for hashable types, which include logical immutability (and
immutability for all attributes which are involved in the hash calculation
too).
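A minimal sketch that reproduces the effect with a deliberately mutable
hashable (do not do this in real code):

    class Bad(object):
        def __init__(self, x):
            self.x = x
        def __hash__(self):
            return hash(self.x)
        def __eq__(self, other):
            return self.x == other.x

    a = Bad(1)
    s = {a}
    a.x = 2                    # mutated after insertion: s still files a under hash(1)
    s.update({Bad(2)})         # equal object, but probed under hash(2): "duplicate"
    print(len(s))              # 2
    print(len(set(list(s))))   # 1 -- rebuilding rehashes and deduplicates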
|
Using cross-correlation to detect an audio signal within another signal
Question: I am trying to write a script in python to detect the existence of a simple
alarm sound in any given input audio file. I explain my solution and I
appreciate it if anyone can confirm it is a good solution. Any other solution
implementable in python is appreciated.
The way I do this is calculating cross correlation of the two signals by
calculating FFT of both signals (one is reversed), and multiplying them
together and then calculating IFFT of the result. Then finding the peak of the
result and comparing it with a pre-specified threshold would determine if the
alarm sound is detected or not.
This is my code:
    import scipy.fftpack as fftpack

    def similarity(template, test):
        corr = fftpack.irfft(fftpack.rfft(test, 2 * test.size) * \
                             fftpack.rfft(template[:-1], 2 * template.size))
        return max(abs(corr))
`template` and `test` are 1-D lists of signal data. The second argument to
`rfft` is used to pad zeros for calculating the FFT; however, I am not sure how
many zeros should be added. Also, should I do any normalisation of the given
signal before applying the FFT? For example, normalizing it based on the peak
of the template signal?
Answer: Solved! I just needed to use scipy.signal.fftconvolve, which takes care of zero
padding itself. No normalization was required. So the working code for me is:

    from scipy.signal import fftconvolve

    def similarity(template, test):
        # note: strictly speaking, cross-correlation is convolution with one
        # signal time-reversed, i.e. fftconvolve(test, template[::-1], mode='same')
        corr = fftconvolve(template, test, mode='same')
        return max(abs(corr))
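A usage sketch (file names are hypothetical), reading both signals with
`scipy.io.wavfile` and applying a threshold as described in the question:

    from scipy.io import wavfile

    rate_t, template = wavfile.read('alarm.wav')
    rate_x, test = wavfile.read('recording.wav')
    THRESHOLD = 1e7  # placeholder: calibrate on known positive/negative clips
    print(similarity(template, test) > THRESHOLD)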
|
python call parent method from child widget
Question: I am trying to call the parent method `printName` from the child widget
`treeView`, but I get errors like:
1. AttributeError: 'QSplitter' object has no attribute 'printName'
2. QObject::startTimer: QTimer can only be used with threads started with QThread
Why is parent() referring to QSplitter? The parent of `TreeView` is supposed to
be `compositeWidget`, since `TreeView` was created in `compositeWidget`.
**CODE:**
    from PyQt4.QtCore import *
    from PyQt4.QtGui import *
    import sys

    class MainExample(QMainWindow):
        def __init__(self, parent=None):
            super(MainExample, self).__init__(parent)
            self.initUI()

        def initUI(self):
            self.mainWidget = compositeWidget(self)
            self.setCentralWidget(self.mainWidget)
            self.mainWidget.treeView.setPath('D:\DATA')
            self.setGeometry(300, 300, 300, 200)

    class TreeView(QTreeView):
        def __init__(self, parent):
            super(TreeView, self).__init__(parent)
            self.clicked.connect(self.on_treeView_clicked)

        @pyqtSlot(QModelIndex)
        def on_treeView_clicked(self, index):
            indexItem = self.FileSystemModel.index(index.row(), 0, index.parent())
            filePath = self.FileSystemModel.filePath(indexItem)
            self.parent().printName(filePath)

        def setPath(self, path):
            self.FileSystemModel = QFileSystemModel()
            self.FileSystemModel.setFilter(QDir.Dirs | QDir.NoDotAndDotDot)
            self.FileSystemModel.setRootPath(path)
            self.setModel(self.FileSystemModel)
            index = self.FileSystemModel.index(path)
            self.setRootIndex(index)

    class compositeWidget(QWidget):
        def __init__(self, parent):
            super(compositeWidget, self).__init__(parent)
            self.treeView = TreeView(self)
            self.frame = QFrame()
            splitterHorizontal = QSplitter(Qt.Horizontal)
            splitterHorizontal.addWidget(self.treeView)
            splitterHorizontal.addWidget(self.frame)
            splitterHorizontal.setSizes([10, 190])
            self.layout = QHBoxLayout(self)
            self.layout.addWidget(splitterHorizontal)
            self.setLayout(self.layout)

        def printName(self):
            print 'fileName'

    def main():
        app = QApplication(sys.argv)
        ex = MainExample()
        ex.show()
        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()
Answer: The QTreeView is under the QSplitter, which is under compositeWidget, so you
need to go up two levels:

    self.parent().parent().printName(filePath)

Note also that `printName` as written takes no argument besides `self`; change
its signature to `def printName(self, fileName):` for this call to work.
|
Building a Tilemap in Python/Pygame and testing for mouse position
Question: Hey, I appreciate any help you can provide.
I am creating a tile-map for a test of a possible project. I found a
tutorial which produced the tile-map effectively. I then tried to implement my
own code by making it loop through each X and Y coordinate, testing if the
mouse is in the position of the block. If the mouse is positioned on top of
a tile, a box is drawn on it to create a visual for where the mouse
is. The problem I have is that the grid is drawn to look like this:
####
####
####
####
But the mouse detection only works diagonally on these tiles:
####
###
##
#
The code is below:
    from pygame.locals import *
    import pygame, sys

    green = (40, 255, 30)
    brown = (40, 60, 90)
    red = (155, 20, 30)
    yellow = (0, 155, 155)

    grass = 0
    dirt = 1
    lava = 2

    colours = {
        grass: green,
        dirt: brown,
        lava: red,
    }

    tilemap = [
        [grass, dirt, dirt, dirt, lava],
        [dirt, lava, dirt, dirt, dirt],
        [lava, grass, dirt, dirt, lava],
        [lava, grass, dirt, dirt, grass],
        [dirt, dirt, dirt, dirt, grass]
    ]

    TILESIZE = 50
    MAPWIDTH = 5
    MAPHEIGHT = 5

    pygame.init()
    DISPLAYSURF = pygame.display.set_mode((MAPWIDTH*TILESIZE, MAPHEIGHT*TILESIZE))

    while True:
        mouse_x = pygame.mouse.get_pos()[0]
        mouse_y = pygame.mouse.get_pos()[1]
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()  # note: the original was missing the call parentheses
        for row in range(MAPWIDTH):
            print
            for column in range(MAPHEIGHT):
                pygame.draw.rect(DISPLAYSURF, colours[tilemap[row][column]], (column*TILESIZE, row*TILESIZE, TILESIZE, TILESIZE))
                if mouse_x >= (row * TILESIZE) and mouse_x <= (row * TILESIZE) + TILESIZE:
                    if mouse_y >= (column * TILESIZE) and mouse_y <= (column * TILESIZE) + TILESIZE:
                        print(str(row) + " " + str(column))
                        pygame.draw.rect(DISPLAYSURF, yellow, (row * TILESIZE, column*TILESIZE, TILESIZE, TILESIZE))
        pygame.display.update()
Answer: First, you're not clearing the screen. Second, your draw code checks the wrong
axes (the if checks are swapped): x goes with columns and y goes with rows.
I hope this helps! :)
    from pygame.locals import *
    import pygame, sys

    green = (40, 255, 30)
    brown = (40, 60, 90)
    red = (155, 20, 30)
    yellow = (0, 155, 155)

    grass = 0
    dirt = 1
    lava = 2

    colours = {
        grass: green,
        dirt: brown,
        lava: red,
    }

    tilemap = [
        [grass, dirt, dirt, dirt, lava],
        [dirt, lava, dirt, dirt, dirt],
        [lava, grass, dirt, dirt, lava],
        [lava, grass, dirt, dirt, grass],
        [dirt, dirt, dirt, dirt, grass]
    ]

    TILESIZE = 50
    MAPWIDTH = 5
    MAPHEIGHT = 5

    pygame.init()
    DISPLAYSURF = pygame.display.set_mode((MAPWIDTH*TILESIZE, MAPHEIGHT*TILESIZE))

    while True:
        mouse_x = pygame.mouse.get_pos()[0]
        mouse_y = pygame.mouse.get_pos()[1]
        print mouse_x, mouse_y
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
        DISPLAYSURF.fill((0, 0, 0))
        for row in range(MAPWIDTH):
            print
            for column in range(MAPHEIGHT):
                color = colours[tilemap[row][column]]
                if mouse_x >= (column * TILESIZE) and mouse_x <= (column * TILESIZE) + TILESIZE:
                    if mouse_y >= (row * TILESIZE) and mouse_y <= (row * TILESIZE) + TILESIZE:
                        print (str(row) + " " + str(column))
                        color = yellow
                pygame.draw.rect(DISPLAYSURF, color, (column*TILESIZE, row*TILESIZE, TILESIZE, TILESIZE))
        pygame.display.update()
|
jQuery AJAX call works if and only if debugging in FF/Chrome
Question: I'm facing a strange issue with a Flask single app and a jQuery AJAX call I'm
making via a form in the view. Basically, the endpoint (/register) is called
correctly when I debug the JS code, but when I try to run normally, the
endpoint is never called, I can see in the Network view that the request is
sent, but it seems that it never reaches Flask.
That said, this is my html code (the relevant parts concerning the form and
the JS code):
<form action="" method="" id="newsletter-form" role="form">
<div class="input-group">
<input type="email" name="user_email" class="newsletter-email form-control" placeholder="ENTER EMAIL ADDRESS">
<span class="input-group-btn">
<button value="" type="submit" class="btn btn-green waves-effect waves-light newsletter-submit">Get More Info</button>
</span>
</div>
<div id="ajax-panel"></div>
</form>
Here is the JS code, based on this [SO
answer](http://stackoverflow.com/a/6203142/526801), I've added the
event.preventDefault() stuff:
    function makeRequest(event){
        event.preventDefault(); // stop the form from being submitted
        $.getJSON($SCRIPT_ROOT + '/register', {
            user_email: $('input[name="user_email"]').val()
        },
        function(data) {
            $('#ajax-panel').empty();
            if (data.status == 'OK'){
                $('#ajax-panel').append('<div class="success"><strong>' + data.message + '</div>');
            }
            else{
                $('#ajax-panel').append('<div class="error"><strong>' + data.message + '</div>');
            }
        });
        return false;
    }

    $('#newsletter-form').submit(function(event){
        makeRequest(event);
    });
And here is the Flask code for that endpoint:
    @app.route('/register', methods=['GET', 'POST'])
    def register():
        error = None
        user_email = request.args.get('user_email', 0, type=str)
        try:
            form = RegisterForm(email=user_email)
            if form.email.data and valid_mail(form.email.data):
                from models import RegisteredMail
                new_rm = RegisteredMail(
                    form.email.data,
                    datetime.datetime.utcnow(),
                )
                try:
                    db.session.add(new_rm)
                    db.session.commit()
                    message = 'Thanks for subscribing, we will keep you posted!'
                    return json.dumps({'status': 'OK', 'message': message})
                except IntegrityError:
                    message = 'Provided email is already used, please use a different one'
                    return json.dumps({'status': 'ERROR', 'message': message})
            else:
                message = 'Provided email is invalid, please use a different one'
                return json.dumps({'status': 'ERROR', 'message': message})
        except Exception as e:
            print e
        return render_template('index.html')
As I said, this works perfectly fine if I debug, but when running it does
not... I'm scratching my head here. It looks like the request never reaches
the server, because if I Ctrl-C the server, I get the error on the JS side,
which looks like something is "blocking" the form submission... any clues?
**EDIT**
I've found the issue, and it had nothing to do with a wrong request or anything
on the server side...
I had a video tag that was used to stream a video that I had stored in my
static/video folder, about 50 MB in size. I started to comment out big
chunks of the view code doing trial & error. It looks like that piece of
code was the one causing trouble...
<video class="img_responsive" controls>
<source src="{{ url_for('static', filename='video/movie.mp4') }}" type="video/mp4">
Your browser does not support the video tag.
</video>
I'm still not 100% sure, but I was getting random exceptions using the above
code:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
        self.process_request(request, client_address)
      File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
        self.finish_request(request, client_address)
      File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
        self.RequestHandlerClass(request, client_address, self)
      File "/usr/lib/python2.7/SocketServer.py", line 657, in __init__
        self.finish()
      File "/usr/lib/python2.7/SocketServer.py", line 711, in finish
        self.wfile.flush()
      File "/usr/lib/python2.7/SocketServer.py", line 303, in flush
        self._sock.sendall(view[write_offset:write_offset+buffer_size])
I really have no clue why this is happening. I've changed to Dropbox to act as
a CDN/streaming server for that video and everything works great... if anyone
has an explanation for this, I'd like to hear it :), otherwise this goes to the
X-Files section.
Answer: I know this shouldn't make much difference since it is basically the same
code, but you could try using `$.ajax` like this to make your GET request:
    $.ajax({
        url: "/register?user_email=" + $('input[name="user_email"]').val(),
        type: "GET",
        success: function(data){
            if(data.status == "OK"){
                // All is good
            } else {
                // There was an error
            }
        },
        error: function(xhr, ajaxOptions, thrownError){
            // If something breaks
            alert(xhr.status);
            alert(thrownError);
        }
    });
|
Python equivalent to Perls END block to cleanup after exit
Question: I have a script that may take a while to run. I would like it to save some
details to a file if it exits with an error.
In Perl, the END block would be the place to do something like that.
What is the Python way to clean up after exiting?
Answer: It can be done using the atexit module, as described here:
<https://docs.python.org/2/library/atexit.html>
    def savecounter():
        open("counter", "w").write("%d" % _count)

    import atexit
    atexit.register(savecounter)
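Since the question asks specifically about exiting with an error, here is a
sketch that saves details only for unhandled exceptions, via `sys.excepthook`
(the log filename is arbitrary):

    import sys
    import traceback

    def on_error(exc_type, exc_value, tb):
        with open("crash.log", "w") as f:
            traceback.print_exception(exc_type, exc_value, tb, file=f)
        sys.__excepthook__(exc_type, exc_value, tb)  # keep the default behaviour

    sys.excepthook = on_error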
|
Getting Spark, Python, and MongoDB to work together
Question: I'm having difficulty getting these components to knit together properly. I
have Spark installed and working successfully; I can run jobs locally,
standalone, and also via YARN. I have followed the steps advised (to the best
of my knowledge) [here](https://github.com/mongodb/mongo-hadoop/wiki/Spark-
Usage) and [here](https://github.com/mongodb/mongo-
hadoop/blob/master/spark/src/main/python/README.rst)
I'm working on Ubuntu and the various component versions I have are
* **Spark** spark-1.5.1-bin-hadoop2.6
* **Hadoop** hadoop-2.6.1
* **Mongo** 2.6.10
* **Mongo-Hadoop connector** cloned from <https://github.com/mongodb/mongo-hadoop.git>
* **Python** 2.7.10
I had some difficulty following the various steps such as which jars to add to
which path, so what I have added are
* in `/usr/local/share/hadoop-2.6.1/share/hadoop/mapreduce` **I have added** `mongo-hadoop-core-1.5.0-SNAPSHOT.jar`
* the following **environment variables**
* `export HADOOP_HOME="/usr/local/share/hadoop-2.6.1"`
* `export PATH=$PATH:$HADOOP_HOME/bin`
* `export SPARK_HOME="/usr/local/share/spark-1.5.1-bin-hadoop2.6"`
* `export PYTHONPATH="/usr/local/share/mongo-hadoop/spark/src/main/python"`
* `export PATH=$PATH:$SPARK_HOME/bin`
My Python program is basic
    from pyspark import SparkContext, SparkConf
    import pymongo_spark

    pymongo_spark.activate()

    def main():
        conf = SparkConf().setAppName("pyspark test")
        sc = SparkContext(conf=conf)
        rdd = sc.mongoRDD(
            'mongodb://username:password@localhost:27017/mydb.mycollection')

    if __name__ == '__main__':
        main()
I am running it using the command
$SPARK_HOME/bin/spark-submit --driver-class-path /usr/local/share/mongo-hadoop/spark/build/libs/ --master local[4] ~/sparkPythonExample/SparkPythonExample.py
and I am getting the following output as a result
Traceback (most recent call last):
File "/home/me/sparkPythonExample/SparkPythonExample.py", line 24, in <module>
main()
File "/home/me/sparkPythonExample/SparkPythonExample.py", line 17, in main
rdd = sc.mongoRDD('mongodb://username:password@localhost:27017/mydb.mycollection')
File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 161, in mongoRDD
return self.mongoPairRDD(connection_string, config).values()
File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 143, in mongoPairRDD
_ensure_pickles(self)
File "/usr/local/share/mongo-hadoop/spark/src/main/python/pymongo_spark.py", line 80, in _ensure_pickles
orig_tb)
py4j.protocol.Py4JError
According to
[here](https://github.com/bartdag/py4j/blob/master/py4j-web/advanced_topics.rst#id19)
> This exception is raised when an exception occurs in the Java client code.
> For example, if you try to pop an element from an empty stack. The instance
> of the Java exception thrown is stored in the java_exception member.
Looking at the source code for `pymongo_spark.py` and the line throwing the
error, it says
> "Error while communicating with the JVM. Is the MongoDB Spark jar on Spark's
> CLASSPATH? : "
So in response I have tried to be sure the right jars are being passed, but I
might be doing this all wrong, see below
$SPARK_HOME/bin/spark-submit --jars /usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-hadoop-spark-1.5.0-SNAPSHOT.jar,/usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-java-driver-3.0.4.jar --driver-class-path /usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-java-driver-3.0.4.jar,/usr/local/share/spark-1.5.1-bin-hadoop2.6/lib/mongo-hadoop-spark-1.5.0-SNAPSHOT.jar --master local[4] ~/sparkPythonExample/SparkPythonExample.py
I have imported `pymongo` to the same python program to verify that I can at
least access MongoDB using that, and I can.
I know there are quite a few moving parts here so if I can provide any more
useful information please let me know.
Answer: **Updates** :
_2016-07-04_
Since the last update [MongoDB Spark
Connector](https://github.com/mongodb/mongo-spark) matured quite a lot. It
provides [up-to-date
binaries](https://search.maven.org/#search|ga|1|g%3Aorg.mongodb.spark) and
data source based API but it is using `SparkConf` configuration so it is
subjectively less flexible than the Stratio/Spark-MongoDB.
_2016-03-30_
Since the original answer I found two different ways to connect to MongoDB
from Spark:
* [mongodb/mongo-spark](https://github.com/mongodb/mongo-spark)
* [Stratio/Spark-MongoDB](https://github.com/Stratio/Spark-MongoDB)
While the former one seems to be relatively immature, the latter one looks like
a much better choice than the Mongo-Hadoop connector and provides a Spark SQL
API.
# Adjust Scala and package version according to your setup
# although officially 0.11 supports only Spark 1.5
# I haven't encountered any issues on 1.6.1
bin/pyspark --packages com.stratio.datasource:spark-mongodb_2.11:0.11.0
    df = (sqlContext.read
          .format("com.stratio.datasource.mongodb")
          .options(host="mongo:27017", database="foo", collection="bar")
          .load())

    df.show()
## +---+----+--------------------+
## | x| y| _id|
## +---+----+--------------------+
## |1.0|-1.0|56fbe6f6e4120712c...|
## |0.0| 4.0|56fbe701e4120712c...|
## +---+----+--------------------+
It seems to be much more stable than `mongo-hadoop-spark`, supports predicate
pushdown without static configuration and simply works.
**The original answer** :
Indeed, there are quite a few moving parts here. I tried to make it a little
bit more manageable by building a simple Docker image which roughly matches
described configuration (I've omitted Hadoop libraries for brevity though).
You can find [complete source on `GitHub`](https://github.com/zero323/docker-
mongo-spark) ([DOI 10.5281/zenodo.47882](https://zenodo.org/record/47882)) and
build it from scratch:
git clone https://github.com/zero323/docker-mongo-spark.git
cd docker-mongo-spark
docker build -t zero323/mongo-spark .
or download an image I've [pushed to Docker
Hub](https://hub.docker.com/r/zero323/mongo-spark/) so you can simply `docker
pull zero323/mongo-spark`:
Start images:
docker run -d --name mongo mongo:2.6
docker run -i -t --link mongo:mongo zero323/mongo-spark /bin/bash
Start PySpark shell passing `--jars` and `--driver-class-path`:
pyspark --jars ${JARS} --driver-class-path ${SPARK_DRIVER_EXTRA_CLASSPATH}
And finally see how it works:
    import pymongo
    import pymongo_spark

    mongo_url = 'mongodb://mongo:27017/'

    client = pymongo.MongoClient(mongo_url)
    client.foo.bar.insert_many([
        {"x": 1.0, "y": -1.0}, {"x": 0.0, "y": 4.0}])
    client.close()

    pymongo_spark.activate()
    rdd = (sc.mongoRDD('{0}foo.bar'.format(mongo_url))
           .map(lambda doc: (doc.get('x'), doc.get('y'))))
    rdd.collect()

    ## [(1.0, -1.0), (0.0, 4.0)]
Please note that mongo-hadoop seems to close the connection after the first
action. So calling for example `rdd.count()` after the collect will throw an
exception.
Based on different problems I've encountered creating this image I tend to
believe that **passing** `mongo-hadoop-1.5.0-SNAPSHOT.jar` and `mongo-hadoop-
spark-1.5.0-SNAPSHOT.jar` **to both** `--jars` and `--driver-class-path` **is
the only hard requirement**.
**Notes** :
* This image is loosely based on [jaceklaskowski/docker-spark ](https://github.com/jaceklaskowski/docker-spark) so please be sure to send some good karma to [@jacek-laskowski](http://stackoverflow.com/users/1305344/jacek-laskowski) if it helps.
  * If you don't require a development version including the [new API](https://github.com/mongodb/mongo-hadoop/wiki/Spark-Usage#python-example-unreleasedin-master-branch), then using `--packages` is most likely a better option.
|
Getting the path to changed file with QFileSystemWatcher?
Question: From the snippet in [How do I watch a file for changes using
Python?](http://stackoverflow.com/questions/182197/how-do-i-watch-a-file-for-
changes-using-python/5339877#5339877):
...
@QtCore.pyqtSlot(str)
def file_changed(path):
print('File Changed!!!')
...
I've assumed that the argument `path` of the handler would be, well, the path
of the file that changed (<http://doc.qt.io/qt-4.8/qfilesystemwatcher.html>
~~doesn't really say what should be expected~~ says "`fileChanged` ... signal
is emitted when the file at the specified _path_ is modified, renamed or
removed from disk."). But, then I run the following example (ubuntu 14.04,
python 2.7.6, corresponding py-qt4):
    import sys, os
    from PyQt4 import QtGui, QtCore
    from PyQt4.QtCore import SIGNAL
    from PyQt4 import Qt

    mydir = os.path.dirname(os.path.realpath(__file__))
    myfile = os.path.join(mydir, "file.dat")
    print("myfile", myfile)
    with open(myfile, "a+") as f:
        f.write("line 1\n")

    class MyWindow(QtGui.QWidget):
        def __init__(self):
            global myfile
            QtGui.QWidget.__init__(self)
            self.button = QtGui.QPushButton('Test', self)
            self.button.clicked.connect(self.handleButton)
            layout = QtGui.QVBoxLayout(self)
            layout.addWidget(self.button)
            self.fs_watcher = QtCore.QFileSystemWatcher([myfile])
            self.fs_watcher.connect(self.fs_watcher, QtCore.SIGNAL('fileChanged(const QString &)'), self.file_changed)  # or just 'fileChanged(QString)'

        def handleButton(self):
            print('Hello World')

        @QtCore.pyqtSlot(str)
        def file_changed(path):
            print('File Changed!!!' + str(path))

    if __name__ == '__main__':
        app = QtGui.QApplication(sys.argv)
        window = MyWindow()
        # try to trigger file change here:
        with open(myfile, "a+") as f:
            f.write("line 2\n")
        window.show()
        sys.exit(app.exec_())
... and it outputs:
...
File Changed!!!<__main__.MyWindow object at 0xb5432b24>
...
So, that argument seems not to receive the path of the changed file, but
instead a reference to the encompassing class?!
So how do I get the path to the file that changed?
Answer: The `file_changed` slot needs a `self` parameter if it is to be a method of
`MyWindow`:
    @QtCore.pyqtSlot(str)
    def file_changed(self, path):
        print('File Changed!!!' + str(path))
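As an aside, the new-style signal syntax avoids the `SIGNAL()` string entirely;
a sketch of the equivalent connection:

    self.fs_watcher = QtCore.QFileSystemWatcher([myfile])
    self.fs_watcher.fileChanged.connect(self.file_changed)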
|
Consume Redis messages with a pool of workers
Question: I have a Redis list where a publisher pushes some messages (JSON serialized).
On the other side a subscriber can fetch each JSON blob and do something with
it. The simplest way is to do this serially, but I'd like to make it a little
bit faster: I'd like to maintain a pool of worker processes (multiple
consumers) and, whenever a new message arrives, check if there is a "free"
process in the pool that can start processing. I'm looking for a Pool-based
version of the following:
    while not False:
        _, new_user = conn.blpop('queue:users')
        if not new_user:
            continue
        try:
            process_new_user(new_user, conn)
        except Exception as e:
            print e
        else:
            pass
However I can't translate this into code that uses Python's
multiprocessing.Pool class; the documentation doesn't help either.
Answer:
    from multiprocessing import Pool

    pool = Pool()

    while 1:
        _, new_user = conn.blpop('queue:users')
        if not new_user:
            continue
        pool.apply_async(process_new_user, args=(new_user, conn))
If you want to handle exceptions you need to collect the `AsyncResult` objects
returned by `pool.apply_async` and check their status.
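A sketch of one way to do that, reaping finished jobs as the loop runs
(`.get()` re-raises any exception from the worker):

    results = []
    while 1:
        _, new_user = conn.blpop('queue:users')
        if not new_user:
            continue
        results.append(pool.apply_async(process_new_user, args=(new_user, conn)))
        for r in [r for r in results if r.ready()]:
            results.remove(r)
            try:
                r.get()
            except Exception as e:
                print e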
If you can use Python 3, [concurrent.futures](https://docs.python.org/3.4/library/concurrent.futures.html?highlight=concurrent.futures#module-concurrent.futures) pools allow you to handle results in asynchronous callbacks, making it easier to check each job's exit status.
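A minimal sketch of that callback approach (assuming `conn` and `process_new_user` from the question, and that the job arguments are picklable):

    from concurrent.futures import ProcessPoolExecutor

    def on_done(future):
        # runs asynchronously once the job finishes
        error = future.exception()
        if error is not None:
            print(error)

    with ProcessPoolExecutor() as pool:
        while True:
            _, new_user = conn.blpop('queue:users')
            if not new_user:
                continue
            future = pool.submit(process_new_user, new_user, conn)
            future.add_done_callback(on_done)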
|
Embed "Bokeh created html file" into Flask "template.html" file
Question: I have a web application written in Python - Flask. When the user fills out some settings in one of the pages (POST request), my controller calculates some functions and plots the output using Bokeh with the following commands, and then I redirect to the HTML page created by Bokeh.
output_file("templates\\" + idx[j]['name'] + ".html", title = "line plots")
TOOLS="resize,crosshair,pan,wheel_zoom,box_zoom,reset,box_select,lasso_select"
p = figure(tools=TOOLS, x_axis_label = 'time', y_axis_label = 'L', plot_width = 1400, plot_height = 900)
All of my HTML pages extend my "template.html" file except the Bokeh-generated ones. My question is: how can I automatically modify the Bokeh-generated HTML files to also extend my template.html file? That way I'd have my nav-bar & jumbotron on top of the Bokeh HTML files.
{% extends "template.html" %}
{% block content %}
<Bokeh.html file>
{% endblock %}
Answer: You don't want to use `output_file` in this situation. Bokeh has a function
specifically for embedding into HTML templates in web apps,
`bokeh.embed.components`, demonstrated in the
[quickstart](http://bokeh.pydata.org/en/latest/docs/user_guide/embed.html#components)
and [tutorial](http://nbviewer.ipython.org/github/bokeh/bokeh-
notebooks/blob/master/tutorial/05%20-%20sharing.ipynb).
from bokeh.embed import components
script, div = components(plot)
return render_template('page.html', script=script, div=div)
<body>
{{ div|safe }}
{{ script|safe }}
</body>
[Here is a complete, runnable example that shows how to use this with Flask.](https://github.com/bokeh/bokeh/tree/master/examples/embed/simple)
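Pieced together, a minimal sketch of the Flask side (assuming a `page.html` template containing the `{{ div|safe }}` and `{{ script|safe }}` placeholders above, plus the BokehJS CSS/JS resources):

    from flask import Flask, render_template
    from bokeh.embed import components
    from bokeh.plotting import figure

    app = Flask(__name__)

    @app.route('/')
    def index():
        # build the figure server-side, then hand its parts to the template
        p = figure(x_axis_label='time', y_axis_label='L')
        p.line([0, 1, 2, 3], [1, 3, 2, 4])
        script, div = components(p)
        return render_template('page.html', script=script, div=div)

    if __name__ == '__main__':
        app.run(debug=True)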
|
Is it possible to make a "pyc only" "distribution"?
Question: Say I have a python code base organized like so:
./mod/:
./__init__.py
./main/main.py
./main/__init__.py
./mytest/__init__.py
The file
mod/main/__init__.py
is empty. And
$ cat mod/main/main.py
import sys
import mytest
def main(argv):
mytest.test()
return
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
And
$ cat mod/mytest/__init__.py
def test():
print('test worked!')
As expected, this works (from within the directory "mod"):
$ python3 -m main.main
test worked!
Now, I want to remove all the .py files, and still be able to run the command
that I have above - or something very similar. "Very similar" is defined by
not having to change my code structure at all - or, if that cannot be done, as
little as possible.
How can I achieve this?
Answer: The `compileall` utility with the `-b` option did the trick for me.
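For example (a sketch run from the directory containing `mod`; `-b` writes each `.pyc` next to its source instead of into `__pycache__`, so imports keep working after the `.py` files are removed):

    $ python3 -m compileall -b mod
    $ find mod -name '*.py' -delete
    $ cd mod && python3 -m main.main
    test worked!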
|
Python while loop not stopping?
Question: I'm trying to make a dice 21 game (look it up if you need to; it's too long to explain here) in Python. It's not finished yet, but for now I'm going through and fixing any mistakes I've made. I'm having some issues with a while loop that won't stop. After the player chooses to stick in the diceroll function, it should set playeraddrolls to False and exit the while loop, moving on to the computeroll function. However, it just cycles back. Immediate help is needed because this is a school project due Monday, and I still have to finish the code. It would also help a lot if you could point out any additional errors I will come across later, and how to fix them.
import random
stick=0
winner=[""]
def diceroll(addedrolls,stick,playagain,playagain1or2,playeraddedrolls,computeraddedrolls,playeraddrolls):
while playeraddedrolls<21 or playeraddrolls is True:
stick=0
die1=random.randint(1,6)
die2=random.randint(1,6)
print("You rolled ",die1,"and ",die2,".")
playeraddedrolls=die1+die2
if playeraddedrolls>21:
print("You rolled over 21. Computer wins by default!")
computeraddedrolls(playeraddedrolls,playagain,playagain1or2,computeraddedrolls)
else:
while stick>2 or stick<1:
stick=int(input("Press 1 to stick or 2 to roll again. "))
if stick==1:
print("You chose to stick at", playeraddedrolls,". The computer will now roll.")
playeraddrolls=False
computeroll(playeraddedrolls,playagain,playagain1or2,computeraddedrolls)
elif stick==2:
print("You chose to roll again. Producing numbers now.")
else:
print("I'm sorry, that's not a valid command.")
def computeroll(playeraddedrolls,playagain,playagain1or2,computeraddedrolls):
while computeroll<17:
die3=random.randint(1,6)
die4=random.randint(1,6)
print("The comoputer rolled ",die3,"and ",die4,".")
computeraddedrolls=die3+die4
if playeraddedrolls>21:
winningtally(playeraddedrolls,computeraddedrolls,playagain,playagain1or2)
else:
if computeraddedrolls<17:
print("The computer chose to roll again!")
elif computeraddedrolls>21:
print("The computer rolled over 21, you win by default!")
winningtally(playeraddedrolls,computeraddedrolls,playagain,playagain1or2)
else:
print("Overall, the computer scored ", computeraddedrolls,".")
winningtally(playeraddedrolls,computeraddedrolls,playagain,playagain1or2)
def winningtally(PAR,CAR,playagain,playagain1or2):
if playeraddedrolls>21 or computeraddedrolls>playeraddedrolls:
print("I have added the computers win to the tally. Here is the new set of wins:")
append(computer)
print(winner)
playagain(PAR,CAR,playagain,playagain1or2)
elif computeraddedrolls>21 or playeraddedrolls>computeraddedrolls:
print("I have added your win to the tally. Here is the new set of wins:")
append(player)
print(winner)
playagain(PAR,CAR,playagain,playagain1or2)
def playagain(PAR,CAR,playagain,playagain1or2):
while playagain1or2<1 or playagain1or2>2:
playagain1or2=int(input("Press 1 to play again, or 2 to view the final result."))
if playagain1or2==1:
print("Okay, rerunning...")
return
elif playagain1or2==2:
computerwins=(winner).count(computer)
playerwins=(winner).count(player)
if computerwins>playerwins:
print("Sorry, the computer won. Better luck next time!")
else:
print("Congratulations, you won! Thank you for playing!")
else:
print("I'm sorry, ",playagain1or2," is not a valid command.")
playeraddrolls=True
playeraddedrolls=2
computeraddedrolls=2
playagain1or2=0
playagain=True
while playagain==True:
stick=0
addedrolls=3
diceroll(addedrolls,stick,playagain,playagain1or2,playeraddedrolls,computeraddedrolls,playeraddrolls)
Answer: Assuming that your code works as expressed (it's a lot of code to check), your problem is that `False < 21` evaluates to `True`.
Here is your `while` condition:
    while playeraddedrolls<21 or playeraddrolls is True:
Remember that `or` short-circuits. It only needs one thing to be true for a whole string of `x or y or z or ...` to be true logically, so as soon as the first thing checked is true, `or` stops looking.
Setting `playeraddrolls = False` therefore never breaks the loop: the first clause is checked first, and since `playeraddedrolls` holds the sum of two dice (at most 12), `playeraddedrolls < 21` is always true. Even setting `playeraddedrolls = False` wouldn't help, because the check would become `False < 21`, which is still true (`False` behaves as `0`) and short-circuits.
# Practical Solutions -- Alternate Conditions
Rather than setting a flag and _implicitly_ breaking, you could explicitly add `break` there. However, this isn't recommended because `break` statements can quite easily get buried and thus can be difficult to debug.
Perhaps better still would be to change the while condition to this:
while 0 < playeraddedrolls < 21:
This allows you to set `playeraddedrolls = -1` for the desired implicit break.
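A toy sketch of that sentinel pattern (hypothetical, not your full game):

    import random

    total = 2
    while 0 < total < 21:
        total += random.randint(1, 6) + random.randint(1, 6)
        print("Running total:", total)
        if total < 21 and input("Press 1 to stick: ") == "1":
            total = -1  # fails the 0 < total check, so the loop exits cleanly
    print("Loop finished")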
# But WHY does Python do this?
As explained in my comment, booleans are a subclass of integers, because True
and False can be thought of as special cases of the numbers 0 and 1. This lets
you do some perhaps surprising numerical things to booleans.
>>> True + False
1
>>> True - False
1
>>> True * False
0
>>> True % False
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero
>>> False % True
0
You can see above that all the numerical operators simply coerce the booleans
to 0 and 1 and function quite happily.
|
Most efficient way to convert a multidimensional numpy array to ctypes array
Question: Hello, I am using the ctypes module in Python to run some image-processing C code from Python, in order to optimise my code and reduce execution time. For this purpose, I am reading an image into a numpy array and then applying a 2D convolution to the image with a kernel, producing a filtered image. I want to achieve the same in C, to save some execution time. So, the first part of the problem is converting the numpy image array to a ctypes array, so that I can perform the convolution in C. Here is my C code, which does nothing right now, but I need it just to have the function definition:
    #include <math.h>

    void convolution(int *array, int *kernel, int array_height, int array_width,
                     int kernel_height, int kernel_width) {
        int i = 0;
    }
Here is my Python code, which adds a wrapper for this C function:
_convolution_ = ctypes.cdll.LoadLibrary(working_directory + 'libconvolution.so')
class two_dimensional_matrix_() :
def from_param(self,param) :
typename = type(param).__name__
if hasattr(self,'from_'+typename) :
return getattr(self,'from_'+typename)(param)
else :
raise TypeError('cant convert %s' %typename)
#for a list
def from_list(self,param) :
c_array = ((ctypes.c_int * len(param))*len(param[0]))()
for i in range(len(param)) :
for j in range(len(param[i])) :
c_array[i][j] = ctypes.c_int(param[i][j])
return c_array
#for a tuple
def from_tuple(self,param) :
return self.from_list(param)
#for a numpy array
def from_ndarray(self,param) :
c_array = ((ctypes.c_int * len(param))*len(param[0]))()
for i in range(len(param)) :
for j in range(len(param[i])) :
c_array[i][j] = ctypes.c_int(param[i][j])
return c_array
two_dimensional_matrix = two_dimensional_matrix_()
_convolution_.convolution.argtypes = [
two_dimensional_matrix, two_dimensional_matrix,
ctypes.c_int,ctypes.c_int,ctypes.c_int,ctypes.c_int
]
_convolution_.convolution.restypes = ctypes.c_void_p
Even though this code works perfectly, what I want to know is: is there a more efficient way to perform the conversion from a numpy array or a list to a ctypes array? Because I am using C extensions in Python to save execution time, I want this conversion to take as little time as possible.
**EDIT** :
As suggested by Daniel, I used [`numpy.ascontiguousarray`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ascontiguousarray.html), and it works well for a 1D numpy array; here is what I did:
c_array = numpy.ascontiguousarray(array,dtype=numpy.uint32)
but when I try a similar method for 2D arrays, it does not work; here is what I tried:
c_array = numpy.ascontiguousarray(array,dtype=numpy.ndarray)
When I use this, Python crashes. What am I doing wrong here?
Answer: The fastest way is to not convert the array at all, if that isn't necessary. This code also works for lists and tuples:
c_array = numpy.ascontiguousarray(param, dtype=int)
pointer = c_array.ctypes.data_as(ctypes.c_void_p)
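For the 2D case in the question, a sketch along the same lines (assuming the `convolution` signature shown above and that the platform's C `int` is 32-bit):

    import ctypes
    import numpy

    image = numpy.random.randint(0, 256, size=(480, 640))
    kernel = numpy.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])

    # one contiguous int32 buffer per array; no per-element copying
    c_image = numpy.ascontiguousarray(image, dtype=numpy.int32)
    c_kernel = numpy.ascontiguousarray(kernel, dtype=numpy.int32)

    lib = ctypes.cdll.LoadLibrary('./libconvolution.so')
    lib.convolution(c_image.ctypes.data_as(ctypes.POINTER(ctypes.c_int)),
                    c_kernel.ctypes.data_as(ctypes.POINTER(ctypes.c_int)),
                    ctypes.c_int(c_image.shape[0]), ctypes.c_int(c_image.shape[1]),
                    ctypes.c_int(c_kernel.shape[0]), ctypes.c_int(c_kernel.shape[1]))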
|
Python dictionary keys to csv file with column match
Question: I'm trying to push multiple dictionaries (keys and values) to a csv file by
matching the key to a column in the csv header. Example:
import csv
d1 = {'a':1, 'b':2, 'c': 3}
d2 = {'d':4, 'e':5, 'f': 6}
with open('my_data.csv','wb') as f:
w = csv.writer(f)
w.writerow(['a', 'b', 'c', 'd', 'e', 'f'])
#iterate through all keys in d1,d2,dn
#if key matches column:
#write value of key to the bottom of column
#else:
#error key not found in header
expected result in mydata.csv
a,b,c,d,e,f
1,2,3,4,5,6
Answer: The answer is: don't just pass the column names to `writerow()`. Put them in a variable `columns` and then use that to control the order in which the values are written out. Python dictionaries have no order, so you have to use a little bit of code to sort the values into the order you want.
The last line of code, which writes the values out to the CSV, uses a Python feature called a list comprehension. It's a shortcut that saves 3-4 lines of code. Look it up; list comprehensions are very handy.
import csv
d1 = {'a':1, 'b':2, 'c': 3}
d2 = {'d':4, 'e':5, 'f': 6}
columns = ['a', 'b', 'c', 'd', 'e', 'f']
# combine d1 and d2 into data.. there are other ways but this is easy to understand
data = dict(d1)
data.update(d2)
with open('my_data.csv','wb') as f:
w = csv.writer(f)
w.writerow(columns)
# make a list of values in the order of columns and write them
w.writerow([data.get(col, None) for col in columns])
Here is what it would look like without the list comprehension:
row = []
for col in columns:
row.append(data.get(col, None))
w.writerow(row)
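Alternatively, the standard library's `csv.DictWriter` does this column matching for you, and by default it raises a `ValueError` when a dictionary contains a key that is not in the header (your "key not found" case):

    import csv

    d1 = {'a': 1, 'b': 2, 'c': 3}
    d2 = {'d': 4, 'e': 5, 'f': 6}
    columns = ['a', 'b', 'c', 'd', 'e', 'f']

    data = dict(d1)
    data.update(d2)

    with open('my_data.csv', 'wb') as f:
        w = csv.DictWriter(f, fieldnames=columns)  # extrasaction='raise' is the default
        w.writeheader()
        w.writerow(data)  # ValueError if data has a key missing from columns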
|
Python Lottery number and checker
Question: I am attempting to create a random number generator for any number of digits in a line, and then repeat those random numbers until a "target" sequence is reached. The user will enter both the number of digits in the sequence and the sequence they are shooting for. The program will run the random numbers over and over until the target sequence is hit, then print the number of repetitions it took. My problem is that it keeps going on forever, seemingly never hitting the break statement because num never equals target. So far I have this, and I think I am pretty close:
#Module to get amount of numbers in the sequence
def getRange():
Range =int(input("How many digits in your number?"))
return Range
#Target Sequence input
def getTarget():
Target= []
Target =input("What is your target sequence?")
return Target
def lotteryNo(Range):
import random
integer = []
for number in range(0 , Range):
integer.append(random.randint(0, 9))
return integer
def printDigits(Range,Target):
print("Your target list is",Target)
for rep in range(10000000):
num=(lotteryNo(Range))
print(num)
if num == Target:
rep=rep + 1
print("The number of repetitions is",rep)
break
else:
rep=rep+1
def main():
Range=getRange()
Target=getTarget()
printDigits(Range,Target)
main()
#End
Answer: The issue with your comparison is that you're testing `Target` which is a
string against `num` which is a list of integers. That will never match, no
matter what integers and what string you're dealing with. You need to compare
two like-types to get a meaningful result.
It looks like you wanted your `getTarget` function to return a list, since you're initializing `Target` to an empty list. However, when you overwrite it with `Target = input(...)`, that list is discarded and replaced with a string.
You probably want something like `Target = list(map(int, input()))`, which
converts each character of the input string to an integer and then packs them
all into a list. Another way of writing that would be `Target = [int(digit)
for digit in input()]`.
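A sketch of the fixed function:

    def getTarget():
        # "123" -> [1, 2, 3], matching the list of ints that lotteryNo returns
        return [int(digit) for digit in input("What is your target sequence?")]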
One further suggestion, unrelated to your current issue: Python's common
naming convention is to use `lower_case_names_with_underscores` for variable
and function names, reserving `UpperCaseNames` for classes. You don't have to
follow this convention, but it's probably a good idea if you're going to share
your code with anyone. You can read [PEP
8](https://www.python.org/dev/peps/pep-0008/) for more style suggestions.
|
Bokeh dynamically changing BoxAnnotation
Question: Is it possible to update a bokeh figure's renderers in IPython's interact function? I have code which looks like:
x = [0, 1, 2, 3, 4]
y = [0, 1, 2, 3, 4]
    source = ColumnDataSource(data=dict(x=x, y=y))
f = figure()
f.line(x, y, source=source)
show(f)
def update_func(selected_data):
source.data['y'] = ...
source.push_notebook()
<here I would like to add BoxAnnotation to figure f, and rerender it>
interactive(update_func, selected_data=[0,1,2])
Answer: You could use CustomJS to insert some JavaScript code that will be used to
change the bottom and top values of the BoxAnnotation. I'm using the Slider
from Bokeh in this example:
from bokeh.io import vform
from bokeh.models import CustomJS, Slider
from bokeh.plotting import figure, show
from bokeh.models import BoxAnnotation
plot = figure(plot_width=300, plot_height=300)
plot.line([0,1],[0,1], line_width=3, line_alpha=0.6)
box_l = BoxAnnotation(plot=plot, top=0.4,
fill_alpha=0.1, fill_color='red')
box_m = BoxAnnotation(plot=plot, bottom = 0.4,top=0.6,
fill_alpha=0.1, fill_color='green')
box_h = BoxAnnotation(plot=plot, bottom=0.6,
fill_alpha=0.1, fill_color='red')
plot.renderers.extend([box_l, box_m, box_h])
callb_low = CustomJS(args=dict(box_l=box_l,box_m=box_m,plot=plot),
code="""
var level = cb_obj.get('value')
box_l.set({"top":level})
box_m.set({"bottom":level})
plot.trigger('change');
""")
callb_high = CustomJS(args=dict(box_m=box_m,box_h=box_h,plot=plot),
code="""
var level = cb_obj.get('value')
box_m.set({"top":level})
box_h.set({"bottom":level})
plot.trigger('change');
""")
slider1 = Slider(start=0.1, end=1, value=0.4, step=.01, title="low",
callback=callb_low)
slider2 = Slider(start=0.1, end=1, value=0.6, step=.01, title="high",
callback=callb_high)
layout = vform(slider1,slider2, plot)
show(layout)
The output will look like: [](http://i.stack.imgur.com/kCKVM.png)
Based on the suggestions by bigreddot, you can do the following in IPython notebooks:
from bokeh.io import push_notebook
from bokeh.plotting import figure, show
from bokeh.models import BoxAnnotation
from ipywidgets import interact
p = figure(x_range=(0,1), y_range=(0,1),plot_width=300, plot_height=300)
box_L = BoxAnnotation(plot=p, top=0.4,
fill_alpha=0.1, fill_color='red')
box_M = BoxAnnotation(plot=p, bottom = 0.4,top=0.6,
fill_alpha=0.1, fill_color='green')
box_H = BoxAnnotation(plot=p, bottom=0.6,
fill_alpha=0.1, fill_color='red')
p.renderers.extend([box_L, box_M, box_H])
def update_func(thresh_L=0.4, thresh_H=0.6):
box_L.top = box_M.bottom = thresh_L;
box_M.top = box_H.bottom=thresh_H;
p.renderers.extend([box_L, box_M, box_H])
push_notebook() # note, just a function, not a method on "source"
show(p)
Then in a separate cell you start your sliders like:
interact(update_func, thresh_L=(0, 1, 0.1),thresh_H=(0, 1, 0.1))
[](http://i.stack.imgur.com/stre5.png)
|
Rendering csv data line-by-line without writing file
Question: I want to change a large CSV file and write the result into a new file. My Python script `run.py`:
import csv
writer = csv.writer(open(..., 'w'))
for l in csv.reader(open(...)):
l[0] = 'foo' if l[1] else 'bar'
writer.writerow(l)
I've realized that it's more suitable for me to write the file using a Linux pipe:
run.py > new-csv-file.csv
I've written:
import csv
import io
s = io.StringIO()
writer = csv.writer(s)
for l in csv.reader(open(...)):
l[0] = 'foo' if l[1] else 'bar'
writer.writerow(l)
print(s.getvalue())
This code is wrong: each iteration appends a new line to `s`, so the whole accumulated content is printed on every iteration, not line by line.
How can I clear `s` on each iteration? Or how can I render a csv string line by line?
for l in csv.reader(open(...)):
l[0] = 'foo' if l[1] else 'bar'
s = my_imaginary_csv_render(l)
print(s)
Answer: If I understand you correctly, you want your code to print to stdout instead of writing to a file? Here is a solution:
import csv
for l in csv.reader(open(...)):
l[0] = 'foo' if l[1] else 'bar'
print(", ".join(l))
This solution does not work as expected if data fields contain commas or line separators. To avoid duplicating the csv module's logic for escaping special characters, you can instead use a csv writer bound to stdout:
csv.writer(sys.stdout)
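Put together, a minimal sketch (the input filename is a placeholder):

    import csv
    import sys

    writer = csv.writer(sys.stdout)
    for l in csv.reader(open('input.csv')):
        l[0] = 'foo' if l[1] else 'bar'
        writer.writerow(l)

Run it as `python run.py > new-csv-file.csv`, as before.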
|
Execute shell command and retrieve stdout in Python
Question: In Perl, if I want to execute a shell command such as `foo`, I'll do this:
#!/usr/bin/perl
$stdout = `foo`
In Python I found this very complex solution:
#!/usr/bin/python
import subprocess
p = subprocess.Popen('foo', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout = p.stdout.readlines()
retval = p.wait()
Is there any better solution?
Notice that I don't want to use `call` or `os.system`. I would like to place `stdout` in a variable.
Answer: An easy way is to use the [`sh`](http://amoffat.github.io/sh/) package. Some examples:
import sh
print(sh.ls("/"))
# same thing as above
from sh import ls
print(ls("/"))
|
Django AppRegistryNotReady Error
Question: Migrating my project from 1.8.5 to 1.9b1 causes the following traceback:
Traceback (most recent call last):
File "/Users/.../manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/.../env3/lib/python3.5/site-packages/django/core/management/__init__.py", line 350, in execute_from_command_line
utility.execute()
File "/Users/.../env3/lib/python3.5/site-packages/django/core/management/__init__.py", line 342, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/.../env3/lib/python3.5/site-packages/django/core/management/__init__.py", line 176, in fetch_command
commands = get_commands()
File "/Users/.../env3/lib/python3.5/site-packages/django/core/management/__init__.py", line 71, in get_commands
for app_config in reversed(list(apps.get_app_configs())):
File "/Users/.../env3/lib/python3.5/site-packages/django/apps/registry.py", line 137, in get_app_configs
self.check_apps_ready()
File "/Users/.../env3/lib/python3.5/site-packages/django/apps/registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
Process finished with exit code 1
And my manage.py is simple:
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings.base")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
I checked the Django 1.9 update guide, but it doesn't provide any info on app loading.
Answer: I normally see a similar stack of error messages when the settings file can't
be found. So in this case your `settings\base.py` file either can't be found
or is missing some settings needed for django 1.9.
As this is a migration, I am assuming that you are using virtualenv, have pip-installed django 1.9b1 into the new environment, and are running the application there.
If this is the case then try creating a new "dummy" project to see what a
clean django 1.9 settings file looks like.
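For example:

    django-admin startproject dummy

Then diff the generated `dummy/dummy/settings.py` against your `settings/base.py` for anything missing.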
|
Iterate over a column containing keys from a dict. Return matched keys from second dict keeping order of keys from first dict
Question: I have been stuck on a problem for a couple of days with Python (2.7). I
have 2 data sets, A and B, from 2 different populations, containing ordered
positions along the chromosomes (defined by a name, e.g. rs4957684) and their
corresponding frequencies in the 2 populations. Most of the positions in B
match those in A. I need to get the frequencies in A and B of only those
positions that match between A and B, and in the corresponding order along the
chromosomes.
I created a csv file (df.csv) with 4 columns: keys from A (c1), values from A
(c2), keys from B (c3), values from B (c4).
First I created 2 dicts, dA and dB, with keys and values (positions and
frequencies respectively) from A and B, and looked for the keys that match
between A and B. From the matched keys I generated 2 new dicts for A and B
(dA2 and dB2). The problem is that, since they are dicts, I cannot get the
order of the matched positions in the chromosomes so I figured out another
strategy:
Iterate along c1 and see whether any key from c3 matches the ordered keys in
c1. If yes, return an ordered list with the values (of A and B) of the matched
keys.
I wrote this code:
import csv
from collections import OrderedDict
with open('df.csv', mode='r') as infile: # input file
# to open the file in universal-newline mode
reader = csv.reader(open('df.csv', 'rU'), quotechar='"', delimiter = ',')
dA= dict((rows[1],rows[2]) for rows in reader)
dB= dict((rows[3],rows[4]) for rows in reader)
import sys
sys.stdout = open("df2.csv", "w")
for key, value in dB:
if rows[3] in dA.key():
print rows[2], rows[4]
Here the script seems to run, but I get no output.
# I also tried this:
for row in reader:
if row[3] in dA.key():
print row[4]
...and I have the same problem.
Answer: As I see it, you imported `OrderedDict` but didn't use it. You should build an `OrderedDict` to preserve key order. Note also that a csv reader can only be consumed once, so read the rows into a list before building both dicts, and use a membership test to find the matching keys:

    rows = list(reader)  # the reader is exhausted after one pass
    dict_a = OrderedDict((row[1], row[2]) for row in rows)
    dict_b = dict((row[3], row[4]) for row in rows)
    for key, value in dict_a.iteritems():
        if key in dict_b:
            print value, dict_b[key]
|
Why is Parsimonious rejecting my input with an IncompleteParseError?
Question: I've been trying to work out the basic skeleton for a language I've been
designing, and I'm _attempting_ to use
[Parsimonious](https://github.com/erikrose/parsimonious) to do the parsing for
me. As of right now, I've, declared the following grammar:
grammar = Grammar(
"""
program = expr*
expr = _ "{" lvalue (rvalue / expr)* "}" _
lvalue = _ ~"[a-z0-9\\-]+" _
rvalue = _ ~".+" _
_ = ~"[\\n\\s]*"
"""
)
When I try to output the resulting AST of a simple input string like `"{ do-
something some-argument }"`:
>
> print(grammar.parse("{ do-something some-argument }"))
>
Parsimonious decides to flat-out reject it, and then gives me this somewhat
cryptic error:
>
> Traceback (most recent call last):
> File "tests.py", line 13, in <module>
> print(grammar.parse("{ do-something some-argument }"))
> File "/usr/local/lib/python2.7/dist-packages/parsimonious/grammar.py",
> line 112, in parse
> return self.default_rule.parse(text, pos=pos)
> File "/usr/local/lib/python2.7/dist-
> packages/parsimonious/expressions.py", line 109, in parse
> raise IncompleteParseError(text, node.end, self)
> parsimonious.exceptions.IncompleteParseError: Rule 'program' matched in
> its entirety, but it didn't consume all the text. The non-matching portion
> of the text begins with '{ do-something some-' (line 1, column 1).
>
At first I thought this might be an issue related to my whitespace rule, `_`,
but after a few failed attempts at removing the whitespace rule in certain
places, I was still coming up with the same error.
I've tried searching online, but all I've found that seems to be remotely
related, is [this
question](http://stackoverflow.com/questions/18966779/parsimonious-parser-
error-trying-to-parse-assignment-grammar), which didn't help me in any way.
Am I doing something wrong with my grammar? Am I not parsing the input in the
correct way? If anyone has a possible solution to this, it'd be greatly
appreciated.
Answer: I am very far from an expert on Parsimonious, but I believe the problem is
that `~".+"` is greedily matching the whole remainder of the input string,
leaving nothing to match the rest of the production. I initially tested that
idea by changing the regex for `rvalue` to `~"[a-z0-9\\-]+"`, same as the one
you have for `lvalue`. Now it parses, and (awesomely) distinguishes by context
between the two identically defined tokens `lvalue` and `rvalue`.
from parsimonious.grammar import Grammar
grammar = Grammar(
"""
program = expr*
expr = _ "{" lvalue (rvalue / expr)* "}" _
lvalue = _ ~"[a-z0-9\\-]+" _
rvalue = _ ~"[a-z0-9\\-]+" _
_ = ~"[\\n\\s]*"
"""
)
print(grammar.parse( "{ do-something some-argument }"))
If you mean for `rvalue` to match any sequence of non-whitespace characters,
you want something more like this:
rvalue = _ ~"[^\\s\\n]+" _
But whoops!
{ foo bar }
`"}"` is a closing curly brace, but it's also a sequence of one or more non-
whitespace characters. Is it `"}"` or `rvalue`? The grammar says the next
token can be either of those. One of those interpretations is parsable and the
other isn't, but Parsimonious just says it's spinach and the hell with it. I
don't know if a parsing maven would consider that a legitimate way to resolve
the ambiguity (e.g. maybe such a grammar may result in cases with two possible
interpretations that _both_ parse), or how practical that would be to
implement. In any case Parsimonious doesn't make that call.
So we need to repel boarders on the curly brace issue. I think this grammar
does what you want:
from parsimonious.grammar import Grammar
grammar = Grammar(
"""
program = expr*
expr = _ "{" lvalue (expr / rvalue)* "}" _
lvalue = _ ~"[a-z0-9\\-]+" _
rvalue = _ ~"[^{}\\n\\s]+" _
_ = ~"[\\n\\s]*"
"""
)
print(grammar.match( "{ do-something some-argument 23423 {foo bar} &^%$ }"))
I excluded open curly brace as well, because how would you expect this string
to tokenize?
{foo bar{baz poo}}
I would expect
"{" "foo" "bar" "{" "baz" "poo" "}" "}"
...because if `"poo}"` is expected to tokenize as `"poo"` `"}"`, and `"{foo"`
is expected to tokenize as `"{"` `"foo"`, then treating `bar{baz` as
`"bar{baz"` or `"bar{"` `"baz"` is ~~deranged~~ counterintuitive.
Now I remember how my bitter hatred of yacc drove me to an obsession with it.
|