ttk.Entry.select_range() works with Button, but not ttk.Button in python / tkinter?

Question: Can someone please explain why entry.select_range() works with a Button, but not a ttk.Button?

    from tkinter import *
    from tkinter import ttk

    root = Tk()
    entry = ttk.Entry(root)
    entry.pack()

    #This works
    button = Button(root, text="Select your text",
                    command=lambda: entry.select_range(0, END))
    #but this doesn't
    ##button = ttk.Button(root, text="Select your text",
    ##                    command=lambda: entry.select_range(0, END))
    button.pack()

    root.mainloop()

Answer: This answer from a [Google Group](https://groups.google.com/d/msg/comp.lang.tcl/SplAe3PFLUM/4edvWpGqeyMJ) says,

> However, _on Windows_ (only) the selection will only become visible when the entry gets the focus.

and this page about [ttk button](http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/ttk-Button.html) says,

> By default, a ttk.Button will be included in focus traversal ... To remove the widget from focus traversal, use takefocus=False

So you need to add the takefocus option to `ttk.Button`:

    button = ttk.Button(root, takefocus=False, text=...)
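Putting it together, a minimal runnable sketch of the fixed program, using only the widgets already in the question:

    from tkinter import *
    from tkinter import ttk

    root = Tk()
    entry = ttk.Entry(root)
    entry.pack()

    # takefocus=False keeps the ttk.Button out of focus traversal, so the
    # entry keeps focus and the selection stays visible on Windows
    button = ttk.Button(root, takefocus=False, text="Select your text",
                        command=lambda: entry.select_range(0, END))
    button.pack()

    root.mainloop()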
Py2exe EOF error when compiling

Question: Below is a simple file rename program that runs and works great when launched from Python. Whenever I try compiling this program into a single .exe, it won't launch and gives this error:

    File "UserInputRenameReplace.py", line 12, in
    EOFError: EOF when reading line.

What could this error mean, and why does it run fine in Python but not in Py2exe?

    import os

    path = os.getcwd()  #Working/active directory
    filenames = os.listdir(path)

    print "**Rename Active Directory File(s)**\n"

    cur_Name = raw_input("Current Name: ")
    new_Name = raw_input("New Name: ")

    for filename in filenames:
        os.rename(os.path.join(path, filename),
                  os.path.join(path, filename.replace(cur_Name, new_Name)))

Answer: In your `setup.py`, make sure you are specifying `console=['myscript.py']` and not `windows=['myscript.py']`. The "EOF when reading line" error can result from `stdin` being closed, which is what happens in a windowed (GUI) build: `raw_input` has no console to read from.
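For reference, a minimal py2exe `setup.py` sketch along those lines (the script name mirrors the one in the traceback):

    from distutils.core import setup
    import py2exe

    # console= (not windows=) keeps stdin/stdout attached, so raw_input works
    setup(console=['UserInputRenameReplace.py'])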
Python33, Flask: Lots of errors starting Hello World

Question: I have just installed flask and HTML5 with pip on my Windows system. When I start the Hello World program from IDLE, I get the correct text in a new tab of Firefox, but also a lot of error messages in the Python shell:

    Traceback (most recent call last):
      File "<frozen importlib._bootstrap>", line 1519, in _find_and_load_unlocked
    AttributeError: 'module' object has no attribute '__path__'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Python33\lib\site-packages\werkzeug\utils.py", line 18, in <module>
        from html.entities import name2codepoint
    ImportError: No module named 'html.entities'; html is not a package

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:/Users/Public/python/testflask.py", line 13, in <module>
        from flask import Flask
      File "C:\Python33\lib\site-packages\flask\__init__.py", line 17, in <module>
        from werkzeug.exceptions import abort
      File "C:\Python33\lib\site-packages\werkzeug\__init__.py", line 154, in <module>
        __import__('werkzeug.exceptions')
      File "C:\Python33\lib\site-packages\werkzeug\exceptions.py", line 71, in <module>
        from werkzeug.wrappers import Response
      File "C:\Python33\lib\site-packages\werkzeug\wrappers.py", line 36, in <module>
        from werkzeug.utils import cached_property, environ_property, \
      File "C:\Python33\lib\site-packages\werkzeug\utils.py", line 20, in <module>
        from htmlentitydefs import name2codepoint
    ImportError: No module named 'htmlentitydefs'

Help! Help! Help!

Answer: You have a _local_ `html.py` module or `html` package that is masking the built-in library. Rename it, as it is breaking software that relies on the standard library version. You can find the file you need to rename or move aside with:

    import html
    print(html)

Rename that file to something else. Take into account that there might be a `.pyc` file as well; remove the `.pyc` bytecode cache altogether if present.
Python axhline, title and axis labels

Question: I am trying to plot a horizontal line over another plot using matplotlib. Everything works except that the title and axis labels never show up. How does this work?

Edit: sorry, the code looks sort of like this:

    from matplotlib import pyplot as plt

    n = 100
    plt.axhline(y=n, label='Old')
    plt.plot([5, 6, 7, 8], [100, 110, 115, 150], 'ro', label='New')
    plt.xlabel=('Example x')
    plt.ylabel=('Example y')
    plt.title=('Example Title')
    plt.legend()
    plt.axis([0,10,50,150])
    plt.show()

Everything shows up normally, just no title and no axis labels. The legend is there.

Answer: The problem is that `plt.xlabel=('Example x')` _assigns_ a string to `plt.xlabel` instead of _calling_ the function, so the label is never drawn (and the function itself gets clobbered). Use the object-oriented interface and call the setters:

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.axhline(y=n, label='Old')
    ax.plot([5, 6, 7, 8], [100, 110, 115, 150], 'ro', label='New')
    ax.set_xlabel('Example x')
    ax.set_ylabel('Example y')
    ax.set_title('Example Title')
    ax.legend()
    ax.set_xlim(0, 10)    # equivalent of plt.axis([0,10,50,150])
    ax.set_ylim(50, 150)
    plt.show()
beautifulsoup for subreddits

Question: I have been trying to learn some HTML parsing with BeautifulSoup and tried to get it working for reddit. Here is my code:

    #!/usr/bin/python
    import BeautifulSoup
    from BeautifulSoup import BeautifulSoup as BSoup
    import os, re, sys, math, os.path, urllib, string, random, time

    url = urllib.urlopen(sys.argv[1]).read()
    soup = BSoup(url)
    links = []

    for link in soup.findAll('a', attrs={'class': 'comments may-blank'}):
        links.append(link.get("href"))
    print links

I have tested the code successfully for r/gaming and r/worldnews, but the code fails for r/gifs. I have also verified that the same class is used for all the subreddits. I have also tried with just

    for link in soup.findAll('a'):

but the code still fails to find the hyperlinks. Any suggestions on why this happens and how to make the code work with all subreddits?

Answer: If you are doing this too often you will be met with this:

> As a reminder to developers, we recommend that clients make no more than [one request every two seconds](http://github.com/reddit/reddit/wiki/API) to avoid seeing this message.

Reddit does this to prevent abuse by spiders and crawlers. Either space out your requests or, much more preferably, use their Python API [PRAW](https://praw.readthedocs.org/en/v2.1.16/).
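A rough sketch against the PRAW 2.x API linked above (the user_agent string is an arbitrary placeholder):

    import praw

    r = praw.Reddit(user_agent='comment-link collector by /u/yourname')
    # grab the comment-page links of the current hot submissions in r/gifs
    for submission in r.get_subreddit('gifs').get_hot(limit=25):
        print submission.permalink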
Changing default home directory for Python to start in a different directory

Question: Every time I start iPython, the first thing I **always** do is immediately change the directory from the default directory, and I would like to know of a way to edit this default starting directory. Currently, iPython starts in the following directory:

    In [2]: os.getcwd()
    Out[2]: 'C:\\Users\\Curtis\\Documents\\Python Scripts'

And the first thing I always do is:

    os.chdir('D:')

Once in the D: drive, I then use `cd` to navigate my various files and folders. There must be a way to change the default directory that python starts in so I don't have to do this every time I start iPython. One thing to note is that I cannot change where python is installed (if that would be an option).

Answer: Try this:

1. Set the PYTHONSTARTUP environment variable to a desired script, for example "c:\startup.py".
2. Within startup.py, add these lines:

       import os
       os.chdir('yourdirectory')  # example: os.chdir("D:\\")
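If you only want this behavior in IPython (PYTHONSTARTUP affects plain `python` sessions too), IPython also executes every script it finds in the active profile's startup directory, so a file like this works as well; the filename is arbitrary:

    # ~/.ipython/profile_default/startup/00-chdir.py
    import os
    os.chdir('D:\\')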
Error Installing JPype - Permission Denied after jni.h not found

Question: This is my first post on StackOverflow. I'm trying to install JPype. I have looked at the tutorials and forums about this online, but I am not able to complete the installation successfully. This is the error I'm getting:

    In file included from src/native/common/jp_array.cpp:17:
    src/native/common/include/jpype.h:45:10: fatal error: 'jni.h' file not found

I'm running Mac OSX 10.9.3. I will be using a remote server.

EDIT: I managed to get past the jni.h not found error. However, now I have this error:

    error: /Library/Python/2.7/site-packages/_jpype.so: Permission denied

Below is the setup.py code from JPype:

    from distutils.core import setup as distSetup, Extension
    import os, os.path, sys

    class JPypeSetup(object):
        def __init__(self):
            self.extra_compile_args = []
            self.macros = []

        def setupFiles(self):
            cpp_files = [
                map(lambda x: "src/native/common/" + x, os.listdir("src/native/common")),
                map(lambda x: "src/native/python/" + x, os.listdir("src/native/python")),
            ]
            all_src = []
            for i in cpp_files:
                all_src += i
            self.cpp = filter(lambda x: x[-4:] == '.cpp', all_src)
            self.objc = filter(lambda x: x[-2:] == '.m', all_src)

        def setupWindows(self):
            print 'Choosing the Windows profile'
            self.javaHome = os.getenv("JAVA_HOME")
            if self.javaHome is None:
                print "environment variable JAVA_HOME must be set"
                sys.exit(-1)
            self.jdkInclude = "win32"
            self.libraries = ["Advapi32"]
            self.libraryDir = [self.javaHome + "/lib"]
            self.macros = [("WIN32", 1)]
            self.extra_compile_args = ['/EHsc']

        def setupMacOSX(self):
            #self.javaHome = '/System/Library/Frameworks/JavaVM.framework/Versions/A/Headers/jni.h'
            #self.javaHome = '/System/Library/Frameworks/JavaVM.framework/Versions/'
            self.javaHome = '/Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/'
            self.jdkInclude = ""
            self.libraries = ["dl"]
            #self.libraryDir = [self.javaHome + "/System/Library"]
            self.libraryDir = [self.javaHome + "/Libraries"]
            self.macros = [('MACOSX', 1)]

        def setupLinux(self):
            self.javaHome = os.getenv("JAVA_HOME")
            if self.javaHome is None:
                self.javaHome = '/usr/lib/jvm/java-1.5.0-sun-1.5.0.08'  # Ubuntu linux
                # self.javaHome = '/usr/java/jdk1.5.0_05'
            self.jdkInclude = "linux"
            self.libraries = ["dl"]
            self.libraryDir = [self.javaHome + "/lib"]

        def setupPlatform(self):
            if sys.platform == 'win32':
                self.setupWindows()
            elif sys.platform == 'darwin':
                self.setupMacOSX()
            else:
                self.setupLinux()

        def setupInclusion(self):
            self.includeDirs = [
                self.javaHome + "/Headers",
                self.javaHome + "/Headers/" + self.jdkInclude,
                "src/native/common/include",
                "src/native/python/include",
            ]

        def setup(self):
            self.setupFiles()
            self.setupPlatform()
            self.setupInclusion()
            jpypeLib = Extension("_jpype", self.cpp,
                                 libraries=self.libraries,
                                 define_macros=self.macros,
                                 include_dirs=self.includeDirs,
                                 library_dirs=self.libraryDir,
                                 extra_compile_args=self.extra_compile_args)
            distSetup(
                name="JPype",
                version="0.5.4.2",
                description="Python-Java bridge",
                author="Steve Menard",
                author_email="[email protected]",
                url="http://jpype.sourceforge.net/",
                packages=["jpype", 'jpype.awt', 'jpype.awt.event', 'jpypex', 'jpypex.swing'],
                package_dir={"jpype": "src/python/jpype", 'jpypex': 'src/python/jpypex'},
                ext_modules=[jpypeLib]
            )

    JPypeSetup().setup()

Answer: I figured out my errors. For the permission error, I just needed to put the sudo command before running the file, like this:

    sudo python setup.py install

As for the jni.h error, below is the code.
This website helped me a lot: <http://blog.y3xz.com/blog/2011/04/29/installing-jpype-on-mac-os-x>

Here is a copy of the setup.py for mac installation. I changed some code in the two functions setupMacOSX() and setupInclusion().

    from distutils.core import setup as distSetup, Extension
    import os, os.path, sys

    class JPypeSetup(object):
        def __init__(self):
            self.extra_compile_args = []
            self.macros = []

        def setupFiles(self):
            cpp_files = [
                map(lambda x: "src/native/common/" + x, os.listdir("src/native/common")),
                map(lambda x: "src/native/python/" + x, os.listdir("src/native/python")),
            ]
            all_src = []
            for i in cpp_files:
                all_src += i
            self.cpp = filter(lambda x: x[-4:] == '.cpp', all_src)
            self.objc = filter(lambda x: x[-2:] == '.m', all_src)

        def setupWindows(self):
            print 'Choosing the Windows profile'
            self.javaHome = os.getenv("JAVA_HOME")
            if self.javaHome is None:
                print "environment variable JAVA_HOME must be set"
                sys.exit(-1)
            self.jdkInclude = "win32"
            self.libraries = ["Advapi32"]
            self.libraryDir = [self.javaHome + "/lib"]
            self.macros = [("WIN32", 1)]
            self.extra_compile_args = ['/EHsc']

        def setupMacOSX(self):
            #I changed the line below. This is where my JDK root file was located.
            self.javaHome = '/Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/'
            self.jdkInclude = ""
            self.libraries = ["dl"]
            #I changed the line below to this:
            self.libraryDir = [self.javaHome + "/Libraries"]
            self.macros = [('MACOSX', 1)]

        def setupLinux(self):
            self.javaHome = os.getenv("JAVA_HOME")
            if self.javaHome is None:
                self.javaHome = '/usr/lib/jvm/java-1.5.0-sun-1.5.0.08'  # Ubuntu linux
                # self.javaHome = '/usr/java/jdk1.5.0_05'
            self.jdkInclude = "linux"
            self.libraries = ["dl"]
            self.libraryDir = [self.javaHome + "/lib"]

        def setupPlatform(self):
            if sys.platform == 'win32':
                self.setupWindows()
            elif sys.platform == 'darwin':
                self.setupMacOSX()
            else:
                self.setupLinux()

        def setupInclusion(self):
            self.includeDirs = [
                #I changed the line below to Headers.
                self.javaHome + "/Headers",
                #I changed the line below to Headers.
                self.javaHome + "/Headers/" + self.jdkInclude,
                "src/native/common/include",
                "src/native/python/include",
            ]

        def setup(self):
            self.setupFiles()
            self.setupPlatform()
            self.setupInclusion()
            jpypeLib = Extension("_jpype", self.cpp,
                                 libraries=self.libraries,
                                 define_macros=self.macros,
                                 include_dirs=self.includeDirs,
                                 library_dirs=self.libraryDir,
                                 extra_compile_args=self.extra_compile_args)
            distSetup(
                name="JPype",
                version="0.5.4.2",
                description="Python-Java bridge",
                author="Steve Menard",
                author_email="[email protected]",
                url="http://jpype.sourceforge.net/",
                packages=["jpype", 'jpype.awt', 'jpype.awt.event', 'jpypex', 'jpypex.swing'],
                package_dir={"jpype": "src/python/jpype", 'jpypex': 'src/python/jpypex'},
                ext_modules=[jpypeLib]
            )

    JPypeSetup().setup()
Is it possible to print python statements that appear in the middle of the program?

Question: I have a question regarding print statements and whether it is possible to print them so that they appear in the middle of the line. I know that you can just punch the space bar until the word is in the middle and then print it, but I am sure there is a more logical way of doing it. For example, a normal print statement would be like this:

    print("Hello World")

This would result in the output:

    Hello World

However, how can I make it come out in the middle, like this:

                                       Hello World

I COULD add spaces to my print statement, but I would have to eye it and it wouldn't truly be in the middle. Is there a function of some sort that will help me achieve this simple print problem?

Answer: Try this:

    import subprocess

    def printcenter(s):
        cols = int(subprocess.check_output('stty size', shell=True).split()[1])
        pos = (cols - len(s)) // 2
        print(pos * " " + s)

This will work, at least, with most Linux distributions and OS X. It depends on "stty size" returning the rows and columns of the terminal, which may not work depending on your situation: let us know if you're not in a standard shell environment (e.g., an iPython notebook or something similar). If you have some other way of determining the number of columns, you can swap out the `cols =` line, and the code should still work. Note of course that perfect centering is not necessarily possible in a shell: it depends on having an equal number of spaces on each side of the string, which means that (cols - len(s)) must be even.
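On Python 3.3+, a portable variant of the same idea is possible without shelling out, using `shutil.get_terminal_size()` (which falls back to 80x24 when the size can't be determined):

    import shutil

    def printcenter(s):
        cols = shutil.get_terminal_size().columns
        print(s.center(cols))  # str.center pads both sides with spaces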
Random Output from a list - Python

Question:

    Stores = ["alpha","bravo","charlie","delta"]
    KnownName = Stores.pop()
    KnownName2 = Stores.pop()
    KnownName3 = Stores.pop()
    print KnownName+" "+Stores.pop()
    print KnownName2+" "+KnownName
    print KnownName3+" "+KnownName2
    print KnownName3

Output:

    delta alpha
    charlie delta
    bravo charlie
    bravo

So if I have a list of m1, m2, m3 ... mn, I am trying to get random output such as this:

    m1
    m2 m3
    m4 m2
    m5
    m6 m7
    m8 m9
    m10 m8
    m11 m10

If you look at the outputs, there is a maximum of two values in a line, and only the first value can repeat. How can I improve my code?

    foo = ['m1', 'm2', 'm3', 'm4', 'm5','m6','m7','m8']
    foo.reverse()
    while foo:
        try:
            if random.randrange(0,2) == 0:
                print foo.pop()
            else:
                Value2 = foo.pop()
                print Value2 + " " + foo.pop()
                if random.randrange(0,2) == 0:
                    Value3 = foo.pop()
                    print Value3 + " " + Value2
                else:
                    foo.pop() + foo.pop()
        except:
            break

Answer: You may try this:

    from random import randint

    foo = ['m1', 'm2', 'm3', 'm4', 'm5','m6','m7','m8']

    while foo:
        x = randint(0, len(foo) - 1)
        y = randint(0, len(foo) - 1)
        while x == y and len(foo) > 1:
            y = randint(0, len(foo) - 1)
        if randint(0, 1) == 0 or len(foo) <= 1:
            print foo[x]
        else:
            print foo[y], foo[x]
        foo = foo[:x] + foo[x+1:]
Eclipse with Python - having difficulty with the python version being picked up for egg file creation

Question: I am using CentOS with Python 2.6 (/usr/bin/python2.6) but I installed Python 2.7.8 (/usr/local/lib/python2.7). The egg files (on running a script in Eclipse) get created under /usr/bin/python2.6/..., for the wrong version. I want them to get created in /usr/local/bin/python2.7/...

    [Desktop]$ which python
    alias python='python2.7'
    /usr/local/bin/python2.7

The site-packages are present in /usr/local/lib/python2.7/site-packages. I have set the .bashrc file and PYTHONPATH to point to Python 2.7 and checked the output of "python -V" and "which python", which seems correct. Is there something else that I could be missing? As a result of all this, I always keep getting an error saying `"no module named pkg_resources"`.

Thanks Lafada: yum install python-setuptools gives

    There was a problem importing one of the Python modules required to run yum. The error leading to this problem was:

        /usr/local/lib/python2.7/site-packages/cStringIO.so: undefined symbol: PyCapsule_New

    Please install a package which provides this module, or verify that the module is installed correctly.

    It's possible that the above module doesn't match the current version of Python, which is:
    2.6.6 (r266:84292, Jan 22 2014, 09:42:36) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]

    If you cannot solve this problem yourself, please go to the yum faq at:
    http://yum.baseurl.org/wiki/Faq

This clearly explains that there is some version issue/mix-up... would you know about this?

Update: I found something on stackoverflow which helped me with 2 packages but not the others. I see the following in my Python interpreters:

    /usr/local/lib/python2.7/site-packages/setuptools-5.4.1-py2.7.egg
    /usr/local/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg
    /usr/lib/python2.6/site-packages/nose-1.3.3-py2.6.egg
    /usr/lib/python2.6/site-packages/six-1.3.0-py2.6.egg
    /usr/local/bin/python2.7
    /usr/local/lib/python2.7/site-packages
    /usr/lib64/python2.6
    /usr/lib64/python2.6/plat-linux2
    /usr/lib64/python2.6/lib-dynload
    /usr/lib64/python2.6/site-packages
    /usr/lib64/python2.6/site-packages/gtk-2.0
    /usr/lib64/python2.6/site-packages/webkit-1.0
    /usr/lib/python2.6/site-packages
    /usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info

I need the packages referencing py2.6 to refer to py2.7 and create egg files for 2.7.

Answer: You have to install `python-setuptools`:

    apt-get install python-setuptools

(on CentOS: yum install python-setuptools)

This will install the `pkg_resources` module.

Hi Lafada: I have responded to your comment by editing my question.
Just getting started on Google App Engine and being greeted with a blank page

Question: I'm currently in Udacity's CS253 (Web Development) course, and I'm getting through the second homework project (the ROT13 website). I paid attention to the lecture and I think I have a good grip on the code in the file `naisho.py`, which is here:

    import webapp2
    import codecs
    import cgi

    form = """
    <form method="post">
        Tell me a secret...
        <br>
        <input type="text" name="secret" value="%(secret)s">
        <div style="color: red">%(error)s</div>
        <br>
        <br>
        <input type="submit">
    </form>
    """

    class MainPage(webapp2.RequestHandler):
        def process(s):
            return codecs.encode(s, "rot_13")

        def escape(s):
            return cgi.escape(s, quote = True)

        def write_form(self, error="", secret=""):
            self.response.out.write(form % {"error": error,
                                            "secret": escape(secret)})

        def get(self):
            #self.response.headers['Content-Type'] = 'text/plain'
            self.write_form()
            #self.response.headers['Content-Type'] = 'text/plain'
            #self.response.write(self.request)

        def post(self):
            self.redirect('/thanks')

    class ThanksHandler(webapp2.RequestHandler):
        def get(self):
            n = process(secret)
            self.response.out.write(n)

    def escape(s):
        return cgi.escape(s, quote = True)

    application = webapp2.WSGIApplication([
        ('/', MainPage),
        ('/thanks', ThanksHandler)], debug=True)

and the file `app.yaml` provided:

    application: naisho
    version: 1
    runtime: python27
    api_version: 1
    threadsafe: true

    handlers:
    - url: /.*
      script: naisho.application

I navigate to the folder in the terminal (I use OSX 10.9.3) and enter `dev_appserver.py --port=9999 .` to serve on `localhost:9999`, but I get a blank page when I load it in my browser. I am absolutely puzzled as to why. Could someone point me in the right direction, and possibly also point me to a reference for diagnosing these sorts of things? Thank you folks kindly in advance.

Answer: First, your code contains some bugs and indentation problems. The bugs are that `escape` and `process` are defined as methods of the class, so they should take `self` as their first argument and be called as `self.escape` or `self.process`. An alternative would be to move those methods out of the class as plain functions. Once you do that, you shouldn't see any error hidden in stack traces in the console except for an annoying 303 error. For whatever reason, I also get those 303 errors when running on the dev server, but if you deploy it to an App Engine instance, it works smoothly. You could try deploying to App Engine first and looking at its logs to catch your bugs/indentation problems, as that makes them easier to spot.
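To make that concrete, here is a sketch of the relevant handlers with `process`/`escape` moved to module level; passing the secret to /thanks via a query parameter is my own assumption for illustration (the course may intend a different mechanism):

    import urllib

    def process(s):
        return codecs.encode(s, "rot_13")

    def escape(s):
        return cgi.escape(s, quote=True)

    class MainPage(webapp2.RequestHandler):
        def get(self):
            self.response.out.write(form % {"error": "", "secret": ""})

        def post(self):
            secret = self.request.get("secret")
            # hand the secret to the /thanks handler in the URL
            self.redirect('/thanks?secret=' + urllib.quote(secret))

    class ThanksHandler(webapp2.RequestHandler):
        def get(self):
            secret = self.request.get("secret")
            self.response.out.write(escape(process(secret)))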
use pickle to save dictionary in python

Question: I have a file `hash_db.pickle` in which I saved a dictionary when I created it:

    v = {hash_value:{"file name":file_name,"file size":file_size,"last scanned time":scanned_time}}

    {123dfre345:{"file name":calc.pdf,"file size":234,"last scanned time":12:23 24/12/2013}}
    {3gcdshj754:{"file name":star.pdf,"file size":10,"last scanned time":10:30 10/10/2013}}

If I want to change only the `last scanned time` for `3gcdshj754` in the file, how could I do that?

Answer: You can use `pickle`:

    import pickle

    d = pickle.load(open('hash_db.pickle', 'rb'))
    d['3gcdshj754']['last scanned time'] = '11:30 11/10/2015'
    pickle.dump(d, open('hash_db.pickle', 'wb'))

But you might find the [`shelve`](https://docs.python.org/2/library/shelve.html) module a little more convenient than direct use of pickle. It provides a persistent dictionary, which seems to be exactly what you want. Sample usage:

    import shelve
    from datetime import datetime, timedelta

    # create a "shelf"
    shelf = shelve.open('hash_db.shelve')
    shelf['123dfre345'] = {"file name": 'calc.pdf', "file size": 234,
                           "last scanned time": datetime(2013, 12, 24, 12, 23)}
    shelf['3gcdshj754'] = {"file name": 'star.pdf', "file size": 10,
                           "last scanned time": datetime(2013, 10, 10, 10, 30)}
    shelf.close()

    # open, update and close
    shelf = shelve.open('hash_db.shelve')
    file_info = shelf['3gcdshj754']
    file_info['last scanned time'] += timedelta(hours=1, minutes=12)
    shelf['3gcdshj754'] = file_info
    shelf.close()

And that's it.
Inserting text into a stream using Python OpenCV

Question: Good day. I need a program which adds text to a video stream and sends it back into the stream. The idea is that the program listens to a stream on a certain IP address and port, uses the OpenCV library to split the stream into frames, inserts text into each frame, and then re-inserts the frames into an output stream. I need to do this in Python. The input and output streams will use the H.264 codec. I found this Python code which can add text to a video file, but I need to do it with a stream. Please advise.

    import numpy as np
    import cv2

    capture = cv2.VideoCapture(0)
    capture = cv2.VideoCapture("simpsnovi,prilis_drsne_pro_tv_03.avi")
    flag, frame = capture.read()

    width = np.size(frame, 1)
    height = np.size(frame, 0)

    #fourcc=cv2.cv.CV_FOURCC('I', 'Y', 'U', 'V'),  #this is the codec that works for me
    writer = cv2.VideoWriter(filename="your_writing_file.avi",
                             fourcc=cv2.cv.CV_FOURCC('X','V','I','D'),  #this is the codec that works for me
                             fps=25,  #frames per second, I suggest 15 as a rough initial estimate
                             frameSize=(width, height))

    while True:
        flag, frame = capture.read()
        if flag == 0:
            break
        x = width/2
        y = height/2
        text_color = (255,0,0)
        cv2.putText(frame, "your_string", (x,y), cv2.FONT_HERSHEY_PLAIN, 1.0,
                    text_color, thickness=1, lineType=cv2.CV_AA)
        writer.write(frame)

Answer: First of all you need a **UDP** socket server! That server will listen on some IP and PORT for you. Then you can work with OpenCV.
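Since cv2.VideoWriter only writes to files, one common workaround is to pipe the stamped frames into an ffmpeg process that handles the H.264 encoding and UDP output. A rough, untested sketch (it assumes an OpenCV build with FFmpeg support so VideoCapture can open network URLs; all addresses and ports are placeholders):

    import subprocess
    import cv2

    capture = cv2.VideoCapture('udp://127.0.0.1:5000')  # incoming stream
    flag, frame = capture.read()
    height, width = frame.shape[:2]

    # ffmpeg reads raw BGR frames on stdin and streams H.264 over UDP
    ffmpeg = subprocess.Popen(
        ['ffmpeg', '-f', 'rawvideo', '-pix_fmt', 'bgr24',
         '-s', '%dx%d' % (width, height), '-r', '25', '-i', '-',
         '-c:v', 'libx264', '-f', 'mpegts', 'udp://127.0.0.1:5001'],
        stdin=subprocess.PIPE)

    while flag:
        cv2.putText(frame, "your_string", (width / 2, height / 2),
                    cv2.FONT_HERSHEY_PLAIN, 1.0, (255, 0, 0), thickness=1)
        ffmpeg.stdin.write(frame.tostring())
        flag, frame = capture.read()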
High precision multidimensional Newtons method with mpmath.findroot in Python

Question: I am trying to solve a system of equations numerically, to high precision, using the multidimensional Newton's method of mpmath.findroot. Here's an example system:

    def f(x_0, x_1, x_2, x_3, x_4, x_5, y_0, y_1, y_2, y_3, y_4, y_5,
          l_0, l_1, l_2, l_3, l_4, l_5, l_6, l_7, l_8, l_9, l_10, l_11, l_12, l_13, l_14):
        return (-x_0,
                -y_0 + 1,
                -x_5,
                -y_5,
                -x_1 - x_2,
                -y_1 + y_2,
                -x_3 - x_4,
                -y_3 + y_4,
                -(x_1 - x_5)^2 - (y_1 - y_5)^2 + 1,
                -(x_1 - x_4)^2 - (y_1 - y_4)^2 + 1,
                -(x_0 - x_5)^2 - (y_0 - y_5)^2 + 1,
                -(x_3 - x_4)^2 - (y_3 - y_4)^2 + 1,
                -(x_2 - x_3)^2 - (y_2 - y_3)^2 + 1,
                -(x_2 - x_5)^2 - (y_2 - y_5)^2 + 1,
                -(x_0 - x_5)^2 - (y_0 - y_5)^2 + 1,
                2*l_10*(x_0 - x_5) + 2*l_14*(x_0 - x_5) + l_0 - .5*y_1 + .5*y_2,
                2*l_9*(x_1 - x_4) + 2*l_8*(x_1 - x_5) + l_4 + .5*y_0 - .5*y_3,
                2*l_12*(x_2 - x_3) + 2*l_13*(x_2 - x_5) - .5*y_0 + .5*y_4,
                -2*l_12*(x_2 - x_3) + 2*l_11*(x_3 - x_4) + l_6 + .5*y_1 - .5*y_5,
                -2*l_9*(x_1 - x_4) - 2*l_11*(x_3 - x_4) - .5*y_2 + .5*y_5,
                -2*l_10*(x_0 - x_5) - 2*l_14*(x_0 - x_5) - 2*l_8*(x_1 - x_5) - 2*l_13*(x_2 - x_5) + l_2 + .5*y_3 - .5*y_4,
                2*l_10*(y_0 - y_5) + 2*l_14*(y_0 - y_5) + l_1 + .5*x_1 - .5*x_2,
                2*l_9*(y_1 - y_4) + 2*l_8*(y_1 - y_5) + l_5 - .5*x_0 + .5*x_3,
                2*l_12*(y_2 - y_3) + 2*l_13*(y_2 - y_5) + .5*x_0 - .5*x_4,
                -2*l_12*(y_2 - y_3) + 2*l_11*(y_3 - y_4) + l_7 - .5*x_1 + .5*x_5,
                -2*l_9*(y_1 - y_4) - 2*l_11*(y_3 - y_4) + .5*x_2 - .5*x_5,
                -2*l_10*(y_0 - y_5) - 2*l_14*(y_0 - y_5) - 2*l_8*(y_1 - y_5) - 2*l_13*(y_2 - y_5) + l_3 - .5*x_3 + .5*x_4)

    import sage.libs.mpmath.all as mpmath

    startsol = [0, -0.345847373297000, 0.345847770952000, -0.499999005131000,
                0.500000393924000, 0, 0.999999726908000, 0.938291180327000,
                0.938290404618000, 0.404864576964000, 0.404866700310000, 0, 0,
                0.0205563218420000, 0.00287356421947000, -0.0200611112418000,
                0.00371301649518000, 0.00363938213640000, -0.00658839856658000,
                -0.00414258257348000, 0.0391290458552000, 0.162094656373000,
                0.0813226022151000, 0.0974643704937000, 0.158202488927000,
                0.0432804352887000, 0.0813226022151000]

    print f(*startsol)

    mpmath.mp.dps = 100
    ans = mpmath.findroot(f, startsol)

Unfortunately it does not give me a solution but throws an error:

> ZeroDivisionError: matrix is numerically singular

Is this happening because findroot is trying to calculate the Jacobian? Does this mean that the system is underdetermined? I found the starting point using scipy's fmin_cg, but I want to polish the solution to higher precision. As the objective for fmin_cg I minimized the sum of squares of the 27 entries of my function f. If the problem with mpmath.findroot cannot be avoided, is there a better way to solve this system to high precision?

Answer: Yes, the system is underdetermined; its Jacobian matrix has rank 24 (at most) instead of 27. You can check this using SymPy:

    import sympy as sp

    vars = sp.var('x_0, x_1, x_2, x_3, x_4, x_5, y_0, y_1, y_2, y_3, y_4, y_5, l_0, l_1, l_2, l_3, l_4, l_5, l_6, l_7, l_8, l_9, l_10, l_11, l_12, l_13, l_14')
    F = sp.Matrix([-x_0, -y_0 + 1, -x_5, -y_5, -x_1 - x_2, -y_1 + y_2,
                   -x_3 - x_4, -y_3 + y_4,
                   -(x_1 - x_5)**2 - (y_1 - y_5)**2 + 1,
                   ...])  # the remaining equations from f above, with ** instead of ^
    J = F.jacobian(vars)
    print(J.rank())

Looking more carefully at the Jacobian matrix shows what the redundant equations are. For example, these are equations numbered 0-3 and 10:

    -x_0 = 0
    -y_0 + 1 = 0
    -x_5 = 0
    -y_5 = 0
    -(x_0 - x_5)^2 - (y_0 - y_5)^2 + 1 = 0

Clearly, the last one is a consequence of the first four. This redundancy needs to go away.
### Using symbolic Jacobian

If mpmath.findroot is not given a function computing the Jacobian, it will use numeric differentiation, introducing additional errors. If available, it is better to provide a Jacobian function. Here is a small code sample for completeness:

    import sympy as sp
    import mpmath as mp

    vars = sp.var('x, y, z')
    F = [x**2 + y**3 + z**4 - 6, x + y + z - 2, 3*x**2 + y - 5]
    J = sp.Matrix(F).jacobian(vars)

    f = lambda x0, y0, z0: [Fc.subs(list(zip(vars, [x0, y0, z0]))) for Fc in F]
    Jac = lambda x0, y0, z0: J.subs(list(zip(vars, [x0, y0, z0]))).tolist()

    start = [1, 1, 1]
    print(mp.findroot(f, start, J=Jac))
Python/bs4: Span inside div tag - text extraction

Question: I'm battling with text extraction from a div tag. The point is that there is a closing tag without an opening pair inside the div tag, so if I do this:

    raw = soup.find('div', class_='inside').text

I get only the text before that tag. An example:

    <div class='inside'><div>sth0</div><div>sth1</div></span><div>sth2<div></div>

    soup.find('div', class_='inside').text
    >>> sth0 sth1

Do you have an idea how to get the whole text from the div tag? Thanks

EDIT (according to Tanmaya Meher, the code above should work, but for me it doesn't, so I'm attaching the exact problem). When I run this code:

    raw = firmHtml.find('div', class_='inside').text
    print raw

I get

    Katalóg   Obchody a veľkoobchod

instead of:

    Katalóg Obchody a veľkoobchod Stavebniny Izolačný materiál...

Here is a cut from my code:

    <div class="inside"><div class="inside2"><a href="/katalog/" style="font-size:12px" title="Katalóg"><span>Katalóg</span></a> <span class="sipka s1">&nbsp;</span> <a href="/katalog/obchody-a-velkoobchod/" style="font-size:12px" itemprop="url" title="Obchody a veľkoobchod"><span itemprop="title" >Obchody a veľkoobchod</span></a></span> <span class="sipka s1">&nbsp;</span> <span itemprop="child" itemscope itemtype="http://data-vocabulary.org/Breadcrumb" ><a href="/katalog/stavebniny_1/" style="font-size:12px" itemprop="url" title="Stavebniny"><span itemprop="title" >Stavebniny</span></a></span> <span class="sipka s1">&nbsp;</span> <span itemprop="child" itemscope itemtype="http://data-vocabulary.org/Breadcrumb" ><a href="/katalog/izolacny-material/" style="font-size:12px" itemprop="url" title="Izolačný materiál"><span itemprop="title" >Izolačný materiál</span></a></span> <span class="sipka s1">&nbsp;</span> <span itemprop="child" itemscope itemtype="http://data-vocabulary.org/Breadcrumb" ><a href="/katalog/protipoziarne-izolacie/" style="font-size:12px" itemprop="url" title="Protipožiarne izolácie"><span itemprop="title" >Protipožiarne izolácie</span></a></span> <span class="sipka s1">&nbsp;</span> Ing. Milan Kalafut</div></div></div><div id="main"><div id="content"><div itemscope itemtype="http://schema.org/LocalBusiness" class="business-container"><div id="lavy"><div class="foto s3"><img src="http://s.aimg.sk/katalog/css/images/nologo.gif" alt="Logo nieje k dispozícii" /></div><div id="moznosti">

Maybe I can't see something.
Answer:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-

    from bs4 import BeautifulSoup as BS

    html_text = '<div class="inside"><div class="inside2"><a href="/katalog/" style="font-size:12px" title="Katalóg"><span>Katalóg</span></a> <span class="sipka s1">&nbsp;</span> <a href="/katalog/obchody-a-velkoobchod/" style="font-size:12px" itemprop="url" title="Obchody a veľkoobchod"><span itemprop="title" >Obchody a veľkoobchod</span></a></span> <span class="sipka s1">&nbsp;</span> <span itemprop="child" itemscope itemtype="http://data-vocabulary.org/Breadcrumb" ><a href="/katalog/stavebniny_1/" style="font-size:12px" itemprop="url" title="Stavebniny"><span itemprop="title" >Stavebniny</span></a></span> <span class="sipka s1">&nbsp;</span> <span itemprop="child" itemscope itemtype="http://data-vocabulary.org/Breadcrumb" ><a href="/katalog/izolacny-material/" style="font-size:12px" itemprop="url" title="Izolačný materiál"><span itemprop="title" >Izolačný materiál</span></a></span> <span class="sipka s1">&nbsp;</span> <span itemprop="child" itemscope itemtype="http://data-vocabulary.org/Breadcrumb" ><a href="/katalog/protipoziarne-izolacie/" style="font-size:12px" itemprop="url" title="Protipožiarne izolácie"><span itemprop="title" >Protipožiarne izolácie</span></a></span> <span class="sipka s1">&nbsp;</span> Ing. Milan Kalafut</div></div></div><div id="main"><div id="content"><div itemscope itemtype="http://schema.org/LocalBusiness" class="business-container"><div id="lavy"><div class="foto s3"><img src="http://s.aimg.sk/katalog/css/images/nologo.gif" alt="Logo nieje k dispozícii" /></div><div id="moznosti">'

    #html_text = open("a.html",'r').read()  #I have commented this, you can do it like this too; the a.html file contains the same html code as above

    firmHtml = BS(html_text)
    raw = firmHtml.find('div', class_='inside').text
    print (raw)

Output (with both Python 2.7.5 and Python 3.3.2 on Linux):

    Katalóg   Obchody a veľkoobchod   Stavebniny   Izolačný materiál   Protipožiarne izolácie   Ing. Milan Kalafut
Using pandas to read downloaded html file

Question: As titled, I tried using `read_html` but it gives me the following error:

    In [17]: temp = pd.read_html('C:/age0.html', flavor='lxml')

    File "<string>", line unknown
    XMLSyntaxError: htmlParseStartTag: misplaced <html> tag, line 65, column 6

What have I done wrong?

### update 01

The HTML contains some javascript on top and then an html table. I used R to process it, parsing the html with the XML package to give me a dataframe. I want to do this in python; should I use something else like beautifulsoup before giving it to pandas?

Answer: I think you are on the right track by using an html parser like Beautiful Soup. pandas.read_html() reads an html table, not an html page. You would want to do something like this:

    from bs4 import BeautifulSoup
    import pandas as pd

    table = BeautifulSoup(open('C:/age0.html', 'r').read()).find('table')
    df = pd.read_html(table)  #I think it accepts a BeautifulSoup object
                              #otherwise try str(table) as input
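One caveat worth hedging on that last comment: as far as I know, `pandas.read_html` expects a URL, file path, or raw HTML string rather than a BeautifulSoup tag, and it returns a *list* of DataFrames, so the safer call is:

    df = pd.read_html(str(table))[0]  # str() hands read_html the raw HTML of the table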
Unable to connect cassandra with python

Question: I am trying to connect to Cassandra from Python. I installed the client with `pip install pycassa`. When I try to connect to Cassandra I get the following exception:

    from pycassa.pool import ConnectionPool
    pool = ConnectionPool('Keyspace1')

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/site-packages/pycassa/pool.py", line 382, in __init__
        self.fill()
      File "/usr/lib/python2.7/site-packages/pycassa/pool.py", line 442, in fill
        conn = self._create_connection()
      File "/usr/lib/python2.7/site-packages/pycassa/pool.py", line 431, in _create_connection
        (exc.__class__.__name__, exc))
    pycassa.pool.AllServersUnavailable: An attempt was made to connect to each of the servers twice, but none of the attempts succeeded. The last failure was TTransportException: Could not connect to localhost:9160

I am using python 2.7. What is the problem? Any help would be appreciated.

Answer: Perhaps try specifying the host:

    pool = ConnectionPool('Keyspace1', ['server_node_here:9160'])
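If the node really is remote (or Thrift listens on a non-default interface), a fuller sketch might look like this; the host and column family name are placeholders, and the server must have the Thrift RPC port (9160) open and enabled:

    import pycassa
    from pycassa.pool import ConnectionPool

    pool = ConnectionPool('Keyspace1', ['192.168.1.10:9160'])
    cf = pycassa.ColumnFamily(pool, 'ColumnFamily1')
    cf.insert('row_key', {'column': 'value'})
    print cf.get('row_key')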
setup.py bdist_egg is not putting files into the egg

Question: I have a confusing issue with python packaging. I have a setup.py that looks like this:

    import os

    try:
        from setuptools import setup
    except ImportError:
        from ez_setup import use_setuptools
        use_setuptools()
        from setuptools import setup

    config = {
        "name": "mypackage",
        "version": "3.0.15",
        "include_package_data": True,
        "scripts": ["something.py"],
        "entry_points": {},
        "zip_safe": False,
    }

    setup(**config)

and a `MANIFEST.in` that looks like:

    recursive-include mypackage *.*
    recursive-exclude mypackage *.pyc .DS_Store

If I do `python setup.py sdist`, a tar.gz file is written which contains all the right files.

If I do `python setup.py bdist_egg` and then extract the contents of the egg using Stuffit Expander... I see a `scripts/something.py` file but none of the `mypackage` source files. However, in the extracted egg's `SOURCES.txt` the `mypackage` files are listed, so it seems like the manifest has been parsed; it just hasn't put them into the egg. What am I misunderstanding/doing wrong?

Answer: You'll need to list the files to include in your binary distribution in the [`package_data` entry](https://docs.python.org/2/distutils/setupscript.html#distutils-installing-package-data) passed to `setuptools.setup()`:

    config = {
        "name": "mypackage",
        "version": "3.0.15",
        "include_package_data": True,
        "scripts": ["something.py"],
        "entry_points": {},
        "package_data": {'': ['*.ext1', '*.ext2']},
        "zip_safe": False,
    }
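One additional point worth checking, since bdist_egg (unlike sdist) only bundles what setup() is told about: the config above never lists the package itself. A sketch combining both fixes, assuming `mypackage` is the package directory and the extensions are placeholders:

    config = {
        "name": "mypackage",
        "version": "3.0.15",
        "packages": ["mypackage"],   # without this, no package modules go into the egg
        "include_package_data": True,
        "package_data": {'': ['*.ext1', '*.ext2']},
        "scripts": ["something.py"],
        "entry_points": {},
        "zip_safe": False,
    }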
Writing to module-wide variable

Question: Let's say I have these 3 (tiny) python files:

**a.py**

    myvar = 'a'

**b.py**

    import a
    import c

    myvar = 'b'
    c.pr()

**c.py**

    from a import myvar

    def pr():
        print myvar

Now if I execute `b.py` I get the output

    a

But I really want the output to be

    b

So please tell me how I can restructure/modify the programs so that the _module-wide_ `myvar` can be assigned a different value.

Answer: In **b.py**, you are setting **b.py**'s myvar. You want to access a's value of myvar. So **b.py** might look like:

    import a
    import c

    a.myvar = 'b'  # belongs to a
    myvar = 'something different'  # belongs to b
    c.pr()

And you need to update **c.py** so that it uses **a.py**'s myvar, not its own local copy. So **c.py**:

    import a

    def pr():
        print a.myvar

**Why?** In your original **c.py**, when you call `from a import myvar`, this is equivalent to:

    import a
    myvar = a.myvar

This makes a local version of `myvar` in **c.py** upon import. If this is confusing, I found this [article](http://nedbatchelder.com/text/names.html) about how python variables work very informative.
how to match file names in a file using python

Question: How do I find out whether, for every file name pattern inside a file, two files exist with that pattern? If every prefix has a two-file set of filenames (csv.new and csv), then go ahead to the next step; otherwise exit with an error message. The prefix "abc_package" will have two files, one with extension "csv.new" and a second file with extension "csv". There could be many filenames inside "list_of_files.txt".

Ex: List_of_files.txt

    abc_package.1406728501.csv.new
    abc_package.1406728501.csv
    abc_package.1406724901.csv.new
    abc_package.1406724901.csv

Answer: For matching file names in python you can use the [fnmatch](https://docs.python.org/2/library/fnmatch.html) module. Here is a sample from the documentation:

    import fnmatch
    import os

    for file in os.listdir('.'):
        if fnmatch.fnmatch(file, '*.txt'):
            print file

The syntax is `fnmatch.fnmatch(filename, pattern)` (or `fnmatch.fnmatchcase` for a case-sensitive match). Please have a look [here](https://docs.python.org/2/library/fnmatch.html) for more examples.
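For the pairing check itself (every prefix must have both a .csv and a .csv.new), a plain-set sketch without fnmatch might look like this, assuming one filename per line in List_of_files.txt:

    import sys

    with open('List_of_files.txt') as f:
        names = set(line.strip() for line in f if line.strip())

    # collect every prefix seen with either extension
    prefixes = set(n[:-len('.csv.new')] for n in names if n.endswith('.csv.new'))
    prefixes |= set(n[:-len('.csv')] for n in names if n.endswith('.csv'))

    missing = [p for p in sorted(prefixes)
               if p + '.csv' not in names or p + '.csv.new' not in names]
    if missing:
        sys.exit('missing pair for: %s' % ', '.join(missing))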
python web page parsing when no HTML tags exist

Question: Got this lovely webpage with nothing but text, and I need some help getting certain lines formatted and also joining multiple lines onto one line. The web page is <http://www.srh.noaa.gov/data/PHI/SRFPHI>. I need the date of the report and everything between these 2 lines:

    .FOR THE BEACHES OF NEW JERSEY AND DELAWARE...   (include this)
    TIDE INFORMATION...                              (not this line, just prior)

Any ideas what to use?

Answer: I load all the lines into a list and go through that list with a for loop. I check if `.FOR THE BEACHES OF NEW JERSEY AND DELAWARE` is at that index in the list; if it is, I create a temporary variable called line and keep printing until line equals `TIDE INFORMATION...`, after which nothing more is printed because the if statement will never be true again.

    import urllib
    import sys

    url = 'http://www.srh.noaa.gov/data/PHI/SRFPHI'

    try:
        response = urllib.urlopen(url)
    except:
        print 'Invalid url.'
        sys.exit(1)

    data = response.readlines()

    for i in range(0, len(data)):
        if '.FOR THE BEACHES OF NEW JERSEY AND DELAWARE' in data[i]:
            date = data[i - 2]
            print date
            line = data[i]
            while line.strip('\n') != 'TIDE INFORMATION...':
                print line
                i = i + 1
                line = data[i]
loop prints an error message multiple times

Question: I am having a little problem with python. At this point the loop prints an error message multiple times, equal to the number of errors in the user's input. I would prefer that the user only receive a single message.

    input_seq = ""
    #raw_input() reads every input as a string
    input_seq = raw_input("\nPlease input the last 12 nucleotide of the target sequence before the PAM site.\n(The PAM site is by default \"NGG\"):\n12 nt = ")
    #print "raw_input =", input_seq

    for bases in input_seq:
        if not (bases in "ACTGactg"):
            print "\nYour input is wrong. Only A.T.C.G.a.t.c.g are allowed for the input!\n\n"

Thanks for your answers. I would like to add one more constraint to the user's input, namely that the input should be exactly 12 characters, no more, no less. What is the correct way to add this to the current if-not statement?

Answer: Use `break`. I am using [regular expressions](https://docs.python.org/2/library/re.html) to avoid a character-by-character loop:

    import re

    input_seq = ""
    while True:
        #raw_input() reads every input as a string
        input_seq = raw_input("\nPlease input the last 12 nucleotide of the target sequence before the PAM site.\n(The PAM site is by default \"NGG\"):\n12 nt = ")
        #print "raw_input =", input_seq
        if len(input_seq) == 12:
            match = re.match(r'[ATCGactg]{12}'.replace('actg', 'ACTGactg'.lower()), input_seq)
            if match:
                break
            else:
                print "\nYour input is wrong. Only A.T.C.G.a.t.c.g are allowed for the input!\n\n"
                continue
        else:
            print "\nYour input should be of 12 characters"
Python Tkinter / How to let OptionMenus share one item list?

Question: I'm trying to build multiple option menus sharing the same "base item list". A multiple selection of one item in different menus should not be possible, so all menus have to be updated when an item is selected in one of the available menus.

    from tkinter import *

    # for example 5 fields
    number_of_fields = 5

    starting_list = ["item1","item2","item3","item4","item5"]
    entry_list = []
    option_list = []
    option_var = []

    def quit():
        raise SystemExit()

    # if an item is selected in one of the
    # menus run this function
    def reset_menu(sel_item):
        # for each field
        for field in range(number_of_fields):
            new_list = []
            selection = option_var[field].get()

            # look for selected items in all menus
            # and build new list which contains all
            # items from the starting_list minus the
            # items which are already selected
            # keep the one selected (for a menu itself)
            for option in starting_list:
                marker = 0
                for j in range(number_of_fields):
                    if(str(option_var[j].get()) == str(option)):
                        marker = 1
                if(marker == 0):
                    new_list.append(str(option))
                else:
                    pass
                if(str(selection) == str(option)):
                    new_list.append(str(option))

            # print new generated item list
            # just to be sure it works so far
            print("field",field,"new list=",new_list)

            # NOW HERE SOMETHING IS WRONG I GUESS
            # empty menu
            option_list[field]["menu"].delete(0, "end")
            # add new menu items
            for item in new_list:
                option_list[field]['menu'].add_command(label=item,
                    command=lambda value=item: option_var[field].set(value))

    root = Tk()
    root.title("OptionMenu")

    # menu variable for each field
    for i in range(number_of_fields):
        option_var.append(StringVar(root))

    # initial value for each field
    for i in range(number_of_fields):
        option_var[i].set("")

    # create menu for each field
    for i in range(number_of_fields):
        option_list.append(OptionMenu(root, option_var[i], *starting_list,
                                      command=reset_menu))

    # create entry for each field
    for i in range(number_of_fields):
        entry_list.append(Entry(root))

    # build gui
    for i in range(number_of_fields):
        entry_list[i].grid(row=int(i), column=0, sticky=N+S+W+E)
        option_list[i].grid(row=int(i), column=1, sticky=N+S+W+E)

    button = Button(root, text="OK", command=quit)
    button.grid(row=number_of_fields, column=1, sticky=N+S+W+E)

    mainloop()

Now everything seems to be fine until I try to update the menus. The new menu item lists are generated correctly (see print statement) and the menus have the right items, but after selecting in one menu, the only menu that changes its selected state is the last one. Any ideas?

Regards, Spot

Answer: I found your question because I too was trying to complete the same task. After doing a bit of poking around in dir(tkinter), I have found a solution, which you have inspired me to create an account to post. I have left your original comments in the code for sections that I left unchanged.

First, your code for generating your options is unnecessarily cluttered. Instead of manually populating the list from empty, it seems cleaner to remove items from the full list.

You are currently using tkinter.OptionMenu(). If you instead use tkinter.ttk.OptionMenu(), it has a method called set_menu(*values) that takes any number of values as its arguments and sets the choices of that menu to be those arguments. If you make the switch, there is one thing to note: ttk's OptionMenu does not allow its default value to be chosen in the dropdown, so it's recommended to make that value blank, as I have done in the declaration for starting_list.
In order to persist the blank option, I added an additional blank option, so that it remains selectable. This way, if you mistakenly choose the wrong selection, you can revert your choice.

    from tkinter import *
    from tkinter.ttk import *

    # for example 5 fields
    number_of_fields = 5

    starting_list = ["","item1","item2","item3","item4","item5"]
    entry_list = []
    option_list = []
    option_var = []

    def quit():
        raise SystemExit()

    # if an item is selected in one of the
    # menus run this function
    def reset_menu(sel_item):
        # for each field
        for field in range(number_of_fields):
            new_list = [x for x in starting_list]
            selection = option_var[field].get()

            # look for selected items in all menus
            # and build new list which contains all
            # items from the starting_list minus the
            # items which are already selected
            # keep the one selected (for a menu itself)
            for option in starting_list[1:6]:
                # add selectable blank if option is selected
                if (str(selection) == str(option)):
                    new_list.insert(0, "")
                for j in range(number_of_fields):
                    if(str(selection) != str(option) and str(option_var[j].get()) == str(option)):
                        new_list.remove(option)

            # print new generated item list
            # just to be sure it works so far
            print("field", field, "new list=", new_list)

            # set new options
            option_list[field].set_menu(*new_list)

    root = Tk()
    root.title("OptionMenu")

    # menu variable for each field
    for i in range(number_of_fields):
        option_var.append(StringVar(root))

    # initial value for each field
    for i in range(number_of_fields):
        option_var[i].set("")

    # create menu for each field
    for i in range(number_of_fields):
        option_list.append(OptionMenu(root, option_var[i], *starting_list,
                                      command=reset_menu))

    # create entry for each field
    for i in range(number_of_fields):
        entry_list.append(Entry(root))

    # build gui
    for i in range(number_of_fields):
        entry_list[i].grid(row=int(i), column=0, sticky=N+S+W+E)
        option_list[i].grid(row=int(i), column=1, sticky=N+S+W+E)

    button = Button(root, text="OK", command=quit)
    button.grid(row=number_of_fields, column=1, sticky=N+S+W+E)

    mainloop()

Something you may want to look into is making your option generation a bit more efficient. Right now, for n options, you're looping through your menus n^2 times. I would suggest looking at passing the value that was just selected in the callback instead of searching each menu to see what was previously selected. As an additional minor note, your "OK" button causes a crash. I'm not sure if that was intentional behavior, a quirk in my system, or something else. I hope this helps!
Adding Matplotlib to Panel from wxglade

Question: Hi, I'm trying to add a matplotlib chart, as in the example from the link below:

[Embedding a matplotlib figure inside a WxPython panel](http://stackoverflow.com/questions/10737459/embedding-a-matplotlib-figure-inside-a-wxpython-panel)

to a panel within a wxPython frame generated using wxGlade. The panel that needs the chart is panel_22. Any advice?

    #!/usr/bin/env python
    # -*- coding: CP1252 -*-
    #
    # generated by wxGlade 0.6.8 (standalone edition) on Thu Jul 31 22:23:21 2014
    #

    import wx

    # begin wxGlade: dependencies
    import gettext
    # end wxGlade

    # begin wxGlade: extracode
    # end wxGlade

    class MyFrame1(wx.Frame):
        def __init__(self, *args, **kwds):
            # begin wxGlade: MyFrame1.__init__
            kwds["style"] = wx.DEFAULT_FRAME_STYLE
            wx.Frame.__init__(self, *args, **kwds)
            self.panel_22 = wx.Panel(self, wx.ID_ANY)

            self.__set_properties()
            self.__do_layout()
            # end wxGlade

        def __set_properties(self):
            # begin wxGlade: MyFrame1.__set_properties
            self.SetTitle(_("frame_2"))
            # end wxGlade

        def __do_layout(self):
            # begin wxGlade: MyFrame1.__do_layout
            sizer_21 = wx.BoxSizer(wx.VERTICAL)
            grid_sizer_4 = wx.GridSizer(3, 3, 0, 0)
            grid_sizer_4.Add(self.panel_22, 1, wx.EXPAND, 0)
            sizer_21.Add(grid_sizer_4, 1, wx.EXPAND, 0)
            self.SetSizer(sizer_21)
            sizer_21.Fit(self)
            self.Layout()
            # end wxGlade

    # end of class MyFrame1

    if __name__ == "__main__":
        gettext.install("app")  # replace with the appropriate catalog name

        app = wx.App(False)
        wx.InitAllImageHandlers()
        frame_2 = MyFrame1(None, wx.ID_ANY, "")
        app.SetTopWindow(frame_2)
        frame_2.Show()
        app.MainLoop()

Answer: It does not seem to matter whether the frame was generated in wxGlade. Try adding two imports first:

    from matplotlib.backends.backend_wxagg import FigureCanvasWxAgg as FigureCanvas
    from matplotlib.figure import Figure

and then add two lines at the beginning of __do_layout:

    def __do_layout(self):
        self.figure = Figure()
        self.canvas = FigureCanvas(self.panel_22, -1, self.figure)
        # begin wxGlade: MyFrame1.__do_layout
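To have the figure actually fill panel_22 when the frame resizes, it may also help to give the panel its own sizer; a sketch, assuming the canvas should occupy the whole panel:

    def __do_layout(self):
        self.figure = Figure()
        self.canvas = FigureCanvas(self.panel_22, -1, self.figure)
        canvas_sizer = wx.BoxSizer(wx.VERTICAL)
        canvas_sizer.Add(self.canvas, 1, wx.EXPAND)
        self.panel_22.SetSizer(canvas_sizer)
        # ... rest of the wxGlade-generated layout code as above ...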
Working with namespace while parsing XML using ElementTree

Question: This is a follow-on question to [Modify a XML using ElementTree](http://stackoverflow.com/questions/25068629/modify-a-xml-using-elementtree). I now have namespaces in my XML, and after reading the answer at [Parsing XML with namespace in Python ElementTree](http://stackoverflow.com/questions/14853243/parsing-xml-with-namespace-in-python-elementtree) I have the following.

XML file:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <grandParent>
            <parent>
                <child>Sam/Astronaut</child>
            </parent>
        </grandParent>
    </project>

My python code, after looking at the namespace answer linked above:

    import xml.etree.ElementTree as ET

    spaces = {'xmlns': 'http://maven.apache.org/POM/4.0.0',
              'schemaLocation': 'http://maven.apache.org/xsd/maven-4.0.0.xsd'}
    tree = ET.parse("test.xml")
    a = tree.find('parent')
    for b in a.findall('child', namespaces=spaces):
        if b.text.strip() == 'Jay/Doctor':
            print "child exists"
            break
    else:
        ET.SubElement(a, 'child').text = "Jay/Doctor"
    tree.write("test.xml")

I get the error:

    AttributeError: 'NoneType' object has no attribute 'findall'

Answer: There are two problems on this line:

    a = tree.find('parent')

First, `<parent>` is not an immediate child of the root element; `<parent>` is a grandchild of the root element. The path to parent looks like `/project/grandparent/parent`. To search for `<parent>`, try the XPath expression `*/parent` or possibly `//parent`.

Second, `<parent>` exists in the default namespace, so you won't be able to `.find()` it with just its simple name. You'll need to add the namespace. Here are two equally valid calls to `tree.find()`, each of which should find the `<parent>` node:

    a = tree.find('*/{http://maven.apache.org/POM/4.0.0}parent')
    a = tree.find('*/xmlns:parent', namespaces=spaces)

Next, the call to `findall()` needs a namespace qualifier:

    for b in a.findall('xmlns:child', namespaces=spaces):

Fourth, the call that creates the new child element needs a namespace qualifier. There may be a way to use the shortcut name, but I couldn't find it; I had to use the long form of the name:

    ET.SubElement(a, '{http://maven.apache.org/POM/4.0.0}child').text = "Jay/Doctor"

Finally, your XML output will look ugly unless you provide a default namespace:

    tree.write('test.xml', default_namespace=spaces['xmlns'])

Unrelated to the XML aspects, you copied my answer from the previous question incorrectly. The `else` lines up with the `for`, not with the `if`:

    for ...
        if ...
    else:
        ...
Reading a page and parsing it with minidom.parse or minidom.parseString in Python?

Question: I have either of these codes:

    import urllib
    from xml.dom import minidom

    res = urllib.urlopen('https://www.google.com/webhp#q=apple&start=10')
    dom = minidom.parse(res)

which gives me the error `xml.parsers.expat.ExpatError: syntax error: line 1, column 0`. Or this:

    import urllib
    from xml.dom import minidom

    res = urllib.urlopen('https://www.google.com/webhp#q=apple&start=10')
    dom = minidom.parseString(res.read())

which gives me the same error. `res.read()` reads fine and is a string. I would like to parse through the code later. How can I do this using `xml.dom.minidom`?

Answer: The reason you're getting this error is that the page isn't valid XML. It's HTML 5. The `doctype` right at the top tells you this, even if you ignore the content type. You can't parse HTML with an XML parser.*

If you want to stick with what's in the stdlib, you can use [`html.parser`](https://docs.python.org/3/library/html.parser.html) (Python 3.x) / [`HTMLParser`](https://docs.python.org/2.7/library/htmlparser.html) (2.x).** However, you may want to consider third-party libraries like `lxml` (which, despite the name, can parse HTML), `html5lib`, or `BeautifulSoup` (which wraps up a lower-level parser in a really nice interface).

\* Well, unless it's XHTML, or the XML output of HTML5, but that's not the case here.

\** Do not use `htmllib` unless you're using an old version of Python without a working `HTMLParser`. This module is deprecated for a reason.
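For completeness, a minimal stdlib-only sketch with Python 2's HTMLParser (matching the urllib usage in the question); it just collects the text of anchor tags, and the URL is the one from the question:

    import urllib
    from HTMLParser import HTMLParser

    class AnchorTextParser(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.in_anchor = False
            self.texts = []

        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                self.in_anchor = True

        def handle_endtag(self, tag):
            if tag == 'a':
                self.in_anchor = False

        def handle_data(self, data):
            if self.in_anchor:
                self.texts.append(data)

    parser = AnchorTextParser()
    parser.feed(urllib.urlopen('https://www.google.com/webhp#q=apple&start=10').read())
    print parser.texts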
multiprocessing.Pool: calling helper functions when using apply_async's callback option

Question: How does the flow of `apply_async` work between calling the worker function and the callback function?

Setup: I am reading some lines of all the files inside a 2000-file directory, some with millions of lines, some with only a few. Some header/formatting/date data is extracted to characterize each file. This is done on a 16-CPU machine, so it made sense to multiprocess it. Currently, the expected result is being sent to a list (`ahlala`) so I can print it out; later, this will be written to *.csv. This is a simplified version of my code, originally based off [this](http://stackoverflow.com/questions/12483512/python-multiprocessing-apply-async-only-uses-one-process) extremely helpful post:

    import multiprocessing as mp

    def dirwalker(directory):
        ahlala = []

        # X() reads files and grabs lines, calls helper function to calculate
        # info, and returns stuff to the callback function
        def X(f):
            fileinfo = Z(arr_of_lines)
            return fileinfo

        # Y() reads other types of files and does the same thing
        def Y(f):
            fileinfo = Z(arr_of_lines)
            return fileinfo

        # results() is the callback function
        def results(r):
            ahlala.extend(r)  # or .append, haven't yet decided

        # helper function
        def Z(arr):
            return fileinfo  # to X() or Y()!

        for _, _, files in os.walk(directory):
            pool = mp.Pool(mp.cpu_count())
            for f in files:
                if (filetype(f) == filetypeX):
                    pool.apply_async(X, args=(f,), callback=results)
                elif (filetype(f) == filetypeY):
                    pool.apply_async(Y, args=(f,), callback=results)

        pool.close(); pool.join()
        return ahlala

Note, the code works if I put all of `Z()`, the helper function, into either `X()`, `Y()`, or `results()`, but is this either repetitive or possibly slower than necessary? I know that the callback function is called for every function call, but when exactly is the callback function called? Is it after `pool.apply_async()` finishes all the jobs for the processes? Shouldn't it be faster if these helper functions were called within the scope of the first function `pool.apply_async()` takes (in this case, `X()`)? If not, should I just put the helper function in `results()`?

Other related ideas: Are daemon processes why nothing shows up? I am also very confused about how to queue things, and whether this is the problem. [This seems like a place to start learning it](http://hairycode.org/2013/07/23/first-steps-with-celery-how-to-not-trip/), but can queuing be safely ignored when using `apply_async`, or does it only cost a noticeable amount of time?

Answer: You're asking about a whole bunch of different things here, so I'll try to cover it all as best I can:

The function you pass to `callback` will be executed in the main process (not the worker) as soon as the worker process returns its result. It is executed in a thread that the `Pool` object creates internally. That thread consumes objects from a `result_queue`, which is used to get the results from all the worker processes. After the thread pulls the result off the queue, it executes the `callback`. While your callback is executing, no other results can be pulled from the queue, so it's important that the callback finishes quickly. With your example, as soon as one of the calls to `X` or `Y` you make via `apply_async` completes, the result will be placed into the `result_queue` by the worker process, and then the result-handling thread will pull the result off of the `result_queue`, and your `callback` will be executed.
Second, I suspect the reason you're not seeing anything happen with your example code is because all of your worker function calls are failing. If a worker function fails, `callback` will never be executed. The failure won't be reported at all unless you try to fetch the result from the [`AsyncResult`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult) object returned by the call to `apply_async`. However, since you're not saving any of those objects, you'll never know the failures occurred. If I were you, I'd try using `pool.apply` while you're testing so that you see errors as soon as they occur. The reason the workers are probably failing (at least in the example code you provided) is because `X` and `Y` are defined as functions inside another function. `multiprocessing` passes functions and objects to worker processes by pickling them in the main process, and unpickling them in the worker processes. Functions defined inside other functions are not picklable, which means `multiprocessing` won't be able to successfully unpickle them in the worker process. To fix this, define both functions at the top level of your module, rather than embedded inside the `dirwalker` function. You should definitely continue to call `Z` from `X` and `Y`, not in `results`. That way, `Z` can be run concurrently across all your worker processes, rather than having to be run one call at a time in your main process. And remember, your `callback` function is supposed to be as quick as possible, so you don't hold up the processing of results. Executing `Z` in there would slow things down. Here's some simple example code that's similar to what you're doing, that hopefully gives you an idea of what your code should look like: import multiprocessing as mp import os # X() reads files and grabs lines, calls helper function to calculate # info, and returns stuff to the callback function def X(f): fileinfo = Z(f) return fileinfo # Y() reads other types of files and does the same thing def Y(f): fileinfo = Z(f) return fileinfo # helper function def Z(arr): return arr + "zzz" def dirwalker(directory): ahlala = [] # results() is the callback function def results(r): ahlala.append(r) for _,_,files in os.walk(directory): pool = mp.Pool(mp.cpu_count()) for f in files: if len(f) > 5: # Just an arbitrary thing to split up the list with pool.apply_async(X, args=(f,), callback=results) # In Python 3 you can also pass error_callback=handle_error # to apply_async to handle errors. It's not available in # Python 2.7 though :( else: pool.apply_async(Y, args=(f,), callback=results) pool.close() pool.join() return ahlala if __name__ == "__main__": print(dirwalker("/usr/bin")) Output: ['ftpzzz', 'findhyphzzz', 'gcc-nm-4.8zzz', 'google-chromezzz' ... # lots more here ] **Edit:** You can create a dict object that's shared between your parent and child processes using the `multiprocessing.Manager` class: pool = mp.Pool(mp.cpu_count()) m = mp.Manager() helper_dict = m.dict() for f in files: if len(f) > 5: pool.apply_async(X, args=(f, helper_dict), callback=results) else: pool.apply_async(Y, args=(f, helper_dict), callback=results) Then make `X` and `Y` take a second argument called `helper_dict` (or whatever name you want), and you're all set. The caveat is that this works by creating a server process that contains a normal dict, and all your other processes talk to that one dict via a Proxy object. 
So every time you read or write to the dict, you're doing IPC. This makes it a lot slower than a real dict.
Parsing PubMed Central XML using Biopython Bio Entrez parse Question: I am trying to parse PubMed Central XML files using Biopython's Bio Entrez parse function. This is what I've tried so far: from Bio import Entrez for xmlfile in glob.glob ('samplepmcxml.xml'): print xmlfile fh = open (xmlfile, "r") read_xml (fh, outfp) fh.close() def read_xml (handle, outh): records = Entrez.parse(handle) for record in records: print record I am getting the following error: Traceback (most recent call last): File "3parse_info_from_pmc_nxml.py", line 78, in <module> read_xml (fh, outfp) File "3parse_info_from_pmc_nxml.py", line 10, in read_xml for record in records: File "/usr/lib/pymodules/python2.6/Bio/Entrez/Parser.py", line 137, in parse self.parser.Parse(text, False) File "/usr/lib/pymodules/python2.6/Bio/Entrez/Parser.py", line 165, in startNamespaceDeclHandler raise NotImplementedError("The Bio.Entrez parser cannot handle XML data that make use of XML namespaces") NotImplementedError: The Bio.Entrez parser cannot handle XML data that make use of XML namespaces I have already downloaded archivearticle.dtd file. Are there any other DTD files that need to be installed that would describe the schema of PMC files? Has anyone successfully used the Bio Entrez function or any other method to parse PMC articles? Thanks for your help! Answer: Use another parser, like the [minidom](https://docs.python.org/2/library/xml.dom.minidom.html) from xml.dom import minidom data = minidom.parse("pmc_full.xml") Now depending on what data do you want to extract, dive into the XML and have fun: for title in data.getElementsByTagName("article-title"): for node in title.childNodes: if node.nodeType == node.TEXT_NODE: print node.data
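To pull the text out of those nodes more conveniently, a small helper along these lines may be handy (a sketch using only the `minidom` API shown above; `get_text` is just an illustrative name):

    def get_text(nodes):
        """Concatenate the data of all text nodes in a node list."""
        return ''.join(n.data for n in nodes if n.nodeType == n.TEXT_NODE)

    for title in data.getElementsByTagName("article-title"):
        print get_text(title.childNodes)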
crypt does not work in osx, returns wrong value Question: I'm using OS X 10.9 (the latest version) with the latest Xcode. When I execute the following command: python -c 'import crypt; print crypt.crypt("test", "$6$random_salt")' I get this as an answer: $6asQOJRqB1i2 but if I execute the same on a Debian machine (same version of Python) I get the following: $6$random_salt$sJ0ZOQCUESBs9rYCOLCqGV93zg1cSDgZV/FF6ZBzpnvNUVODwaaVoPV2SiL0ur7Sexh02hMmXdSBOa216GWoh. What is wrong with my machine? Answer: The implementation of `crypt()` on Mac OS X does not support most of the "advanced" modes supported by the Linux glibc `crypt()`. It only supports "traditional `crypt()`", which is what you're getting here, and an "extended `crypt()`" mode which is not compatible with the Linux implementation either. If you need to create strong password hashes portably across Linux and Mac OS X, you'll need to use something other than `crypt()`.
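As one possible portable alternative, here is a sketch using `hashlib.pbkdf2_hmac` (available in Python 2.7.8+ and 3.4+; the iteration count and salt handling below are only illustrative):

    import hashlib
    import os
    import binascii

    password = b"test"
    salt = os.urandom(16)  # store the salt next to the hash so you can verify later
    dk = hashlib.pbkdf2_hmac('sha512', password, salt, 100000)
    print(binascii.hexlify(dk))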
Put javascript code from head tag into meteor js files Question: Okay I'm not really sure how to explain this clearly so the title might have been slightly confusing. I have some code for using the google maps api from w3 schools (<http://www.w3schools.com/googleAPI/google_maps_basic.asp>) which is shown as going in the head tag of the html file. But I am also trying to use Meteor.js for another part of the app and it separates the javascript code into another .js file. So basically I am wondering how/where to put the javascript code from the head tag into the meteor .js files. I tried to put it directly in but I think it needs the other script tag to go with it because it has some variables that weren't defined if I moved it. So how can I move the javascript code and make sure that the variables are still defined? Should I move the script src tag with it somehow? <script src="https://maps.googleapis.com/maps/api/js?key=myKey"> </script> <script> //javascript code I want to move is here </script> I've also read something about dynamically loading the javascript with jQuery or something but I'm not sure I understand how that would work? Here is the github repo of the code but I don't think you'll need it: <https://github.com/2016rshah/meteor/tree/master/FlightNews> Sorry for asking such a silly question I am new to web-dev and I've never really faced problems like this with Java or Python because you can just import wherever you need to. Thanks for any suggestions! Answer: It is not a silly question. Others have had the same problem, too. [This](http://stackoverflow.com/questions/16761042/meteor-js-and-google-maps) question has two good answers that will solve your problem. In the comments you'll also find a blog post about the issue. However, you may also want to consider using a Meteor package that was developed to ease integration of Google Maps into Meteor. One example is [`googlemaps`](http://atmospherejs.com/package/googlemaps) but there are also others. There's also an [example](https://github.com/drewjw81/meteor- googlemaps/tree/master/examples/basic) showing you how to use the package.
Add lots of data to Google App Engine NDB (Python) Question: I'm working on a project that requires me to import lots of data (on the scale of tens of thousands of entities) into the Google App Engine NDB. The data is stored in a text file, for which I wrote a parsing program that generates a list of the entities found in the file, which I then write into the database using the `put_multi()` method. When I apply this to testing data sets of say a couple hundred up to about a thousand entries, it works fine; applying it to the real data set (at about 30,000 entries right now, but it will grow), however, throws a `DeadlineExceededError`. I'm guessing that means the program is running too long and App Engine cuts it off. So my question is, how do I run code that takes a long time to complete while it iterates over a bunch of data? I've read something about a task queue, but that seems to be more aimed at a short task that needs to be repeated a bunch of times (while, here, I'm really just waiting for the `put_multi()` to complete). It's also a pretty bad UX having the page take this long to load. Is there any way for me to have the code that runs when the user initializes the data import just kickstart the actual parsing and writing behind the scenes on the server, while the user gets a page back that says the data is being processed in the background? Thanks Answer: There is a `put_multi_async()`, which takes around 10 ms to return and then does the put in the background. You're still limited by the datastore deadline: > The default deadline is 60 seconds > (<https://developers.google.com/appengine/docs/java/datastore/queries#Java_Data_consistency>) You might need to spread your async puts across several calls (maybe a couple of thousand entities each; write speed will depend on the exact size of your entities). EDIT: as per mgilson's comment, > You're also limited by memory restrictions. If you have enough memory to > pull in the entire file, then task queues might be the answer (put a bunch > of different pieces of the file into different tasks)
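A minimal sketch of that batching idea, assuming the entities are already built; the batch size of 500 is an assumption you would tune:

    from google.appengine.ext import ndb

    BATCH_SIZE = 500  # assumed; tune to stay well under the deadline

    def put_in_batches(entities):
        futures = []
        for i in range(0, len(entities), BATCH_SIZE):
            # each call returns a list of futures immediately
            futures.extend(ndb.put_multi_async(entities[i:i + BATCH_SIZE]))
        # block once at the end until every batch has been written
        ndb.Future.wait_all(futures)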
How to inherit matplotlib in python? Question: When I run the program below: import matplotlib class abc(matplotlib): def anything(self): print("anything") aa=abc() I got the following errors: > File "test.py", line 2, in class abc(matplotlib): TypeError: Error when > calling the metaclass bases module.__init__() takes at most 2 arguments (3 > given) How can I fix it? Answer: `matplotlib` is a module/package. You can't inherit from it since it isn't a class. The closest thing that I can think of that you _could_ do is something akin to: # my_matplotlib.py from matplotlib.pyplot import * import matplotlib.pyplot as pyplot def plot(*args, **kwargs): """Overrides matplotlib.pyplot.plot""" print "Hey Dude. I'm plotting now!" return pyplot.plot(*args, **kwargs) Note that the plotting functions actually live in `matplotlib.pyplot`, not in the top-level `matplotlib` package, so that is the module the wrapper has to shadow. Note that I really don't recommend doing this ...
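Usage would then look something like this (assuming the wrapper file above is importable as `my_matplotlib`):

    import my_matplotlib as plt

    plt.plot([1, 2, 3])  # prints the message, then delegates to pyplot's plot
    plt.show()           # everything else passes straight through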
Window scrolling and offest not working in Pygame Question: I'm working on a simple 2d tile based game in Python using Pygame. I've got the basics of a camera starting to work, it is bliting the tiles at an offset but the actual rects of the tiles aren't moving with the blitting. Here's the camera class, really simple : from Map import level import Globals as gb class Camera(object): def __init__(self, x, y, screen_width, screen_height): self.rect = gb.pygame.Rect(x, y, screen_width, screen_height) def update(self): self.rect.center = gb.player.rect.x, gb.player.rect.y and the blitting is done here as so: def render(fov, cam): for block in new_blocks: gb.window.screen.blit(block.image, (block.rect.x - cam.rect.x, block.rect.y - cam.rect.y)) the map is a simple 2d array, and I'm putting all the tiles into the list "new_blocks" like this: for row in level.current_map: for block in row: if block == '.': block = Blocks() block.pic = 'sand.png' block.rect.x = x block.rect.y = y x += 50 new_blocks.append(block) if block == '#': block = Blocks() block.ID = 1 block.is_wall = True newImage(block, 'img', 'wall.png') block.rect.x = x block.rect.y = y walls.append(block) x += 50 new_blocks.append(block) if x == len(level.current_map[0])*50: y += 50 x = 0 A lot of this is messy, and I'm planning to clean it up and use more constants and declare better variables after I get this working. Thanks for any help! Answer: Here's the thing: You don't want the camera to modify the actual coordinate values of the tiles themselves. The answer is actually quite simple, when you blit the character, use the camera to modify where the character is being blitted. Then for everything you draw to the screen, there will be two different sets of coordinates. 1) The actual position of the object and 2) The coordinates relative to the screen. The one thing you need to watch out for is that every single object is drawn relative to the screen, because if just one of them is not, it will mess up your collision detection.
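A sketch of that split between world coordinates and screen coordinates, reusing the `cam` object from the question (the function names are just illustrative):

    # the player keeps its true world position; only the blit is offset
    def render_player(screen, player, cam):
        screen.blit(player.image,
                    (player.rect.x - cam.rect.x,
                     player.rect.y - cam.rect.y))

    # collision detection keeps using the untouched world coordinates
    def hits_wall(player, walls):
        return any(player.rect.colliderect(wall.rect) for wall in walls)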
How to use/install pip Question: Noob question here on Python using pip. I've installed pip using setuptools on a Win7 32-bit machine. I am not using virtualenv. Now trying to install enum34 for use with dbf; I need enum34 as I have to work in Python 2.7 for use with ArcGIS 10.2. In PowerShell, I get the following returned no matter what command I try to use with pip, including the command 'list'. I've also tried with the basic Windows Command Prompt and got the same results. Did pip not install correctly? Is there some other error? Please let me know if I've included enough information. PS C:\python27\arcgis10.2\scripts> .\pip install enum34 Traceback (most recent call last): File "C:\python27\arcgis10.2\scripts\pip-script.py", line 9, in <module> load_entry_point('pip==1.5.6', 'console_scripts', 'pip')() File "build\bdist.win32\egg\pkg_resources.py", line 351, in load_entry_point File "build\bdist.win32\egg\pkg_resources.py", line 2363, in load_entry_point File "build\bdist.win32\egg\pkg_resources.py", line 2088, in load File "C:\Python27\ArcGIS10.2\lib\site-packages\pip\__init__.py", line 10, in <module> from pip.util import get_installed_distributions, get_prog ....etc. Answer: [Download `get-pip.py` and run it as Administrator.](https://pip.pypa.io/en/latest/installing.html) Pip should then be on your path, so you can just use `pip install enum34` instead of `.\pip install enum34`. If that doesn't work, tell us the error message you get.
Python: Whoosh seems to return incorrect results Question: This code is straight from Whoosh's [quickstart docs](http://pythonhosted.org/Whoosh/quickstart.html): import os.path from whoosh.index import create_in from whoosh.fields import Schema, STORED, ID, KEYWORD, TEXT from whoosh.index import open_dir from whoosh.query import * from whoosh.qparser import QueryParser #establish schema to be used in the index schema = Schema(title=TEXT(stored=True), content=TEXT, path=ID(stored=True), tags=KEYWORD, icon=STORED) #create index directory if not os.path.exists("index"): os.mkdir("index") #create the index using the schema specified above ix = create_in("index", schema) #instantiate the writer object writer = ix.writer() #add the docs to the index writer.add_document(title=u"My document", content=u"This is my document!", path=u"/a", tags=u"first short", icon=u"/icons/star.png") writer.add_document(title=u"Second try", content=u"This is the second example.", path=u"/b", tags=u"second short", icon=u"/icons/sheep.png") writer.add_document(title=u"Third time's the charm", content=u"Examples are many.", path=u"/c", tags=u"short", icon=u"/icons/book.png") #commit those changes writer.commit() #identify searcher with ix.searcher() as searcher: #specify parser parser = QueryParser("content", ix.schema) #specify query -- try also "second" myquery = parser.parse("is") #search for results results = searcher.search(myquery) #identify the number of matching documents print len(results) I have merely passed a value--namely, the verb "is"--to the parser.parse() call. When I run this, however, I get results of length zero, rather than the expected results of length two. If I replace "is" with "second", I get one result, as expected. Why doesn't the search using "is" yield a match, though? ## Edit As @Philippe points out, the default Whoosh indexer removes stop words, hence the behavior described above. If you want to retain stop words, you can specify which analyzer you wish to use when indexing a given field within an index, and you can pass your analyzer a parameter to refrain from stripping stop words; e.g.: schema = Schema(title=TEXT(stored=True, analyzer=analysis.StandardAnalyzer(stoplist=None))) Answer: A stop word filter is applied by the default text analyzer: <https://bitbucket.org/mchaput/whoosh/src/999cd5fb0d110ca955fab8377d358e98ba426527/src/whoosh/analysis/filters.py?at=default#cl-41> See also the doc: <http://whoosh.readthedocs.org/en/latest/api/analysis.html#whoosh.analysis.StopFilter>
Parsing arguments using argparse and mpi4py Question: I want to run a Python script in several parallel processes under MPI, and I need to pass command-line arguments. I'm using the argparse module in Python, but it's a little messy sometimes. If I don't specify the right arguments, all the processes complain, so I get many copies of the same error message. I tried making only process 0 parse the arguments and then broadcast the results to the other processes, but then the other processes hang when the parsing fails and nothing gets broadcast. How can I parse the command-line arguments, and print a readable message when the parsing fails? Answer: The extra piece I needed was to wrap a try/finally around the argument parsing step in process 0. In the finally block, broadcast _something_ to the other processes. If parsing failed, you will broadcast `None`, and they can all silently exit. from mpi4py import MPI from time import sleep import argparse def parseOptions(comm): parser = argparse.ArgumentParser( description='Print some messages.') parser.add_argument('iteration_count', help='How many times', type=int) parser.add_argument('message', help='What to say', nargs=argparse.OPTIONAL, default='Hello, World!') args = None try: if comm.Get_rank() == 0: args = parser.parse_args() finally: args = comm.bcast(args, root=0) if args is None: exit(0) return args def main(): comm = MPI.COMM_WORLD # @UndefinedVariable rank = comm.Get_rank() size = comm.Get_size() args = parseOptions(comm) if rank == 0: print args.message for i in range(args.iteration_count): if i%size == rank: print '{} in rank {} started.'.format(i, rank) sleep(.5) print '...' sleep(.5) print '{} in rank {} ended.'.format(i, rank) if __name__ == '__main__': main() I run the code with a command like this: mpirun -np 4 python scratch.py 13
Can I double-click a tkinter Listbox option to invoke function in Python? Question: I have a Listbox with an associated "Select" button. I want my GUI such that a double-click on any Listbox value invokes this button's command. My attempt (below) works when an option is selected and the user double-clicks ANYWHERE in the window. I want it to work only when the selection itself (blue highlighted row) is being double-clicked. What is the best way to do this? from tkinter import * def func1(): print("in func1") def func2(): print("in func2") def selection(): try: dictionary[listbox.selection_get()]() except: pass root = Tk() frame = Frame(root) frame.pack() dictionary = {"1":func1, "2":func2} items = StringVar(value=tuple(sorted(dictionary.keys()))) listbox = Listbox(frame, listvariable=items, width=15, height=5) listbox.grid(column=0, row=2, rowspan=6, sticky=("n", "w", "e", "s")) listbox.focus() selectButton = Button(frame, text='Select', underline = 0, command=selection) selectButton.grid(column=2, row=4, sticky="e", padx=50, pady=50) root.bind('<Double-1>', lambda x: selectButton.invoke()) root.mainloop() Answer: Change `root.bind(...)` to `listbox.bind(...)`
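That is, the last binding line becomes:

    listbox.bind('<Double-1>', lambda x: selectButton.invoke())

With the binding on the listbox itself rather than on `root`, the double-click only fires when the click actually lands inside the list widget.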
error when installing package tables for python Question: This is the problem when I try to build package tables-3.1.1 for python: sudo python setup.py build_ext --inplace dyld: DYLD_ environment variables being ignored because main executable (/usr/bin/sudo) is setuid or setgid * Using Python 2.7.2 (default, Oct 11 2012, 20:14:37) * Found numpy 1.6.1 package installed. * Found numexpr 2.4 package installed. * Found Cython 0.20.2 package installed. * Found HDF5 headers at ``/usr/local/include``, library at ``/usr/local/lib``. * Found LZO 2 headers at ``/usr/local/include``, library at ``/usr/local/lib``. * Skipping detection of LZO 1 since LZO 2 has already been found. * Found bzip2 headers at ``/usr/local/include``, library at ``/usr/lib``. /tmp/blosc_list_compressors0xPbk3.c:1:1: warning: type specifier missing, defaults to 'int' [-Wimplicit-int] main (int argc, char **argv) { ^~~~ /tmp/blosc_list_compressors0xPbk3.c:2:5: warning: implicit declaration of function 'blosc_list_compressors' is invalid in C99 [-Wimplicit-function-declaration] blosc_list_compressors(); ^ 2 warnings generated. ld: library not found for -lblosc clang: error: linker command failed with exit code 1 (use -v to see invocation) ld: library not found for -lblosc clang: error: linker command failed with exit code 1 (use -v to see invocation) * Could not find blosc headers and library; using internal sources. Setting compiler flag '-msse2' running build_ext I understand that it couldn't find blosc library. I tried to import blosc in python: >>> import blosc >>> help(blosc) ------ TypeError If packed_array is not of type bytes or string. Examples -------- >>> import numpy >>> a = numpy.arange(1e6) >>> parray = blosc.pack_array(a) >>> len(parray) < a.size*a.itemsize True >>> a2 = blosc.unpack_array(parray) >>> numpy.alltrue(a == a2) True DATA __all__ = ['compress', 'compress_ptr', 'decompress', 'decompress_ptr',... __version__ = '1.2.4' VERSION 1.2.4 So, python can find blosc package. Can someone help me to solve the problem in installing package tables, please? Thanks ahead. Answer: You need to install the blosc development libraries. If you're running either CentOS or Fedora, try `sudo yum install blosc-devel`. For Ubuntu, I could not find a similar package, which means you will likely have to build from source. [There are tarballs here](https://github.com/Blosc/c-blosc/releases), and you will have to follow [their build instructions](https://github.com/Blosc/c-blosc/blob/master/README.rst).
Django migration strategy for renaming a model and relationship fields Question: I'm planning to rename several models in an existing Django project where there are many other models that have foreign key relationships to the models I would like to rename. I'm fairly certain this will require multiple migrations, but I'm not sure of the exact procedure. Let's say I start out with the following models within a Django app called `myapp`: class Foo(models.Model): name = models.CharField(unique=True, max_length=32) description = models.TextField(null=True, blank=True) class AnotherModel(models.Model): foo = models.ForeignKey(Foo) is_awesome = models.BooleanField() class YetAnotherModel(models.Model): foo = models.ForeignKey(Foo) is_ridonkulous = models.BooleanField() I want to rename the `Foo` model because the name doesn't really make sense and is causing confusion in the code, and `Bar` would make for a much clearer name. From what I have read in the Django development documentation, I'm assuming the following migration strategy: # Step 1 Modify `models.py`: class Bar(models.Model): # <-- changed model name name = models.CharField(unique=True, max_length=32) description = models.TextField(null=True, blank=True) class AnotherModel(models.Model): foo = models.ForeignKey(Bar) # <-- changed relation, but not field name is_awesome = models.BooleanField() class YetAnotherModel(models.Model): foo = models.ForeignKey(Bar) # <-- changed relation, but not field name is_ridonkulous = models.BooleanField() Note the `AnotherModel` field name for `foo` doesn't change, but the relation is updated to the `Bar` model. My reasoning is that I shouldn't change too much at once and that if I changed this field name to `bar` I would risk losing the data in that column. # Step 2 Create an empty migration: python manage.py makemigrations --empty myapp # Step 3 Edit the `Migration` class in the migration file created in step 2 to add the `RenameModel` operation to the operations list: class Migration(migrations.Migration): dependencies = [ ('myapp', '0001_initial'), ] operations = [ migrations.RenameModel('Foo', 'Bar') ] # Step 4 Apply the migration: python manage.py migrate # Step 5 Edit the related field names in `models.py`: class Bar(models.Model): name = models.CharField(unique=True, max_length=32) description = models.TextField(null=True, blank=True) class AnotherModel(models.Model): bar = models.ForeignKey(Bar) # <-- changed field name is_awesome = models.BooleanField() class YetAnotherModel(models.Model): bar = models.ForeignKey(Bar) # <-- changed field name is_ridonkulous = models.BooleanField() # Step 6 Create another empty migration: python manage.py makemigrations --empty myapp # Step 7 Edit the `Migration` class in the migration file created in step 6 to add the `RenameField` operation(s) for any related field names to the operations list: class Migration(migrations.Migration): dependencies = [ ('myapp', '0002_rename_fields'), # <-- is this okay? ] operations = [ migrations.RenameField('AnotherModel', 'foo', 'bar'), migrations.RenameField('YetAnotherModel', 'foo', 'bar') ] # Step 8 Apply the 2nd migration: python manage.py migrate * * * Aside from updating the rest of the code (views, forms, etc.) to reflect the new variable names, is this basically how the new migration functionality would work? Also, this seems like a lot of steps. Can the migration operations be condensed in some way? Thanks! 
Answer: So when I tried this, it seems you can condense Steps 3-7: class Migration(migrations.Migration): dependencies = [ ('myapp', '0001_initial'), ] operations = [ migrations.RenameModel('Foo', 'Bar'), migrations.RenameField('AnotherModel', 'foo', 'bar'), migrations.RenameField('YetAnotherModel', 'foo', 'bar') ] You may get some errors if you don't update the names everywhere they're imported, e.g. in admin.py and even in older migration files (!)
Avoid 'MySQLConverter' object has no attribute '_timestamp_to_mysql' error with datetime64[ns] and MySQL Question: I'm reading a CSV file like this Date,Open,High,Low,Close,Volume,Adj Close 2000-12-29,30.88,31.31,28.69,29.06,31702200,27.57 2000-12-28,30.56,31.62,30.38,31.06,25053600,29.46 2000-12-27,30.38,31.06,29.38,30.69,26437500,29.11 2000-12-26,31.50,32.19,30.00,30.94,20589500,29.34 2000-12-22,30.38,31.98,30.00,31.88,35568200,30.23 2000-12-21,27.81,30.25,27.31,29.50,46719700,27.98 2000-12-20,28.06,29.81,27.50,28.50,54440500,27.03 2000-12-19,31.81,33.12,30.12,30.62,58653700,29.05 ... 2000-01-13,108.50,109.88,103.50,105.06,55779200,24.91 2000-01-12,112.25,112.25,103.69,105.62,83443600,25.05 2000-01-11,112.62,114.75,109.50,112.38,86585200,26.65 2000-01-10,108.00,116.00,105.50,115.75,91518000,27.45 2000-01-07,95.00,103.50,93.56,103.38,91755600,24.51 2000-01-06,100.16,105.00,94.69,96.00,109880000,22.76 2000-01-05,101.62,106.38,96.00,102.00,166054000,24.19 2000-01-04,115.50,118.62,105.00,107.69,116824800,25.54 2000-01-03,124.62,125.19,111.62,118.12,98114800,28.01 Full data can be download using python -c "from pyalgotrade.tools import yahoofinance; yahoofinance.download_daily_bars('orcl', 2000, 'orcl-2000.csv')" see <http://gbeced.github.io/pyalgotrade/docs/v0.15/html/tutorial.html> I try to put this CSV data to a MySQL database using Python, Pandas, SQLAlchemy, `read_csv` and `to_sql`: filename = "orcl-2000.csv" df = pd.read_csv(filename, sep=',') db_uri = "mysql+mysqlconnector://{user}:{password}@{host}:{port}/{db}" # or without mysqlconnector (need MySQLdb) db_uri = db_uri.format( user = "root", password = "123456", host = "127.0.0.1", db = "test", port = 3306 ) engine = sqlalchemy.create_engine(db_uri) df["Date"] = pd.to_datetime(df["Date"]) df = df.set_index("Date") print(df) print(df.dtypes) print(type(df.index), df.index.dtype) print(type(df.index[0])) df.to_sql("test_table", engine, flavor="mysql", if_exists="replace") (see [full code here](http://pastebin.com/GFt6D9r8)) I get the following output: $ python main.py Date Open High Low Close Volume Adj Close 0 2000-12-29 30.88 31.31 28.69 29.06 31702200 27.57 1 2000-12-28 30.56 31.62 30.38 31.06 25053600 29.46 2 2000-12-27 30.38 31.06 29.38 30.69 26437500 29.11 3 2000-12-26 31.50 32.19 30.00 30.94 20589500 29.34 4 2000-12-22 30.38 31.98 30.00 31.88 35568200 30.23 5 2000-12-21 27.81 30.25 27.31 29.50 46719700 27.98 6 2000-12-20 28.06 29.81 27.50 28.50 54440500 27.03 7 2000-12-19 31.81 33.12 30.12 30.62 58653700 29.05 8 2000-12-18 30.00 32.44 29.94 32.00 61640100 30.35 9 2000-12-15 29.44 30.08 28.19 28.56 120004000 27.09 10 2000-12-14 29.25 29.94 27.25 27.50 45894400 26.08 11 2000-12-13 31.94 32.00 28.25 28.38 37933600 26.91 12 2000-12-12 31.88 32.50 30.41 30.75 26481200 29.17 13 2000-12-11 30.50 32.25 30.00 31.94 50279700 30.29 14 2000-12-08 30.06 30.62 29.25 30.06 40052600 28.51 15 2000-12-07 29.62 29.94 28.12 28.31 41088300 26.85 16 2000-12-06 31.19 31.62 29.31 30.19 42125600 28.63 17 2000-12-05 29.44 31.50 28.88 31.50 59754700 29.88 18 2000-12-04 26.25 28.88 26.19 28.19 40710400 26.74 19 2000-12-01 26.38 27.88 25.50 26.44 48663500 25.08 20 2000-11-30 21.75 27.62 21.50 26.50 84386200 25.14 21 2000-11-29 23.19 23.62 21.81 22.88 75409600 21.70 22 2000-11-28 23.50 23.81 22.25 22.66 43075300 21.49 23 2000-11-27 25.44 25.81 22.88 23.12 45665200 21.93 24 2000-11-24 23.31 24.25 23.12 24.12 22443900 22.88 25 2000-11-22 23.62 24.06 22.06 22.31 53315300 21.16 26 2000-11-21 24.81 25.62 23.50 23.88 58647400 22.65 27 2000-11-20 24.31 25.88 24.00 
24.75 89778400 23.48 28 2000-11-17 26.94 29.25 25.25 28.81 59636000 27.33 29 2000-11-16 28.75 29.81 27.25 27.38 37986600 25.96 .. ... ... ... ... ... ... ... 222 2000-02-14 60.88 62.25 58.62 62.19 37599800 29.49 223 2000-02-11 62.50 64.75 58.75 59.69 55774000 28.31 224 2000-02-10 60.00 62.62 58.00 62.31 45288600 29.55 225 2000-02-09 60.06 61.31 58.81 59.94 52471600 28.43 226 2000-02-08 60.75 61.44 59.00 59.56 55718000 28.25 227 2000-02-07 59.31 60.00 58.88 59.94 44691200 28.43 228 2000-02-04 57.62 58.25 56.81 57.81 40916000 27.42 229 2000-02-03 55.38 57.00 54.25 56.69 55533200 26.88 230 2000-02-02 54.94 56.00 54.00 54.31 63933000 25.76 231 2000-02-01 51.25 54.31 50.00 54.00 57105600 25.61 232 2000-01-31 47.94 50.12 47.06 49.95 68148000 23.69 233 2000-01-28 51.50 51.94 46.62 47.38 86394000 22.47 234 2000-01-27 55.81 56.69 50.00 51.81 61054000 24.57 235 2000-01-26 56.75 58.94 55.00 55.06 47569200 26.11 236 2000-01-25 55.06 57.50 54.88 56.44 53059200 26.77 237 2000-01-24 60.25 60.38 54.00 54.19 50022400 25.70 238 2000-01-21 61.50 61.50 59.00 59.69 50891000 28.31 239 2000-01-20 59.00 60.25 58.12 59.25 54526800 28.10 240 2000-01-19 56.12 58.25 54.00 57.12 49198400 27.09 241 2000-01-18 107.88 114.50 105.62 111.25 66780000 26.38 242 2000-01-14 109.00 111.38 104.75 106.81 57078000 25.33 243 2000-01-13 108.50 109.88 103.50 105.06 55779200 24.91 244 2000-01-12 112.25 112.25 103.69 105.62 83443600 25.05 245 2000-01-11 112.62 114.75 109.50 112.38 86585200 26.65 246 2000-01-10 108.00 116.00 105.50 115.75 91518000 27.45 247 2000-01-07 95.00 103.50 93.56 103.38 91755600 24.51 248 2000-01-06 100.16 105.00 94.69 96.00 109880000 22.76 249 2000-01-05 101.62 106.38 96.00 102.00 166054000 24.19 250 2000-01-04 115.50 118.62 105.00 107.69 116824800 25.54 251 2000-01-03 124.62 125.19 111.62 118.12 98114800 28.01 [252 rows x 7 columns] Date datetime64[ns] Open float64 High float64 Low float64 Close float64 Volume int64 Adj Close float64 dtype: object (<class 'pandas.tseries.index.DatetimeIndex'>, dtype('<M8[ns]')) <class 'pandas.tslib.Timestamp'> Traceback (most recent call last): File "main.py", line 28, in <module> main() File "main.py", line 25, in main df.to_sql("test_table", engine, flavor="mysql", if_exists="replace") File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 950, in to_sql index_label=index_label) File "/usr/local/lib/python2.7/dist-packages/pandas/io/sql.py", line 475, in to_sql index_label=index_label) File "/usr/local/lib/python2.7/dist-packages/pandas/io/sql.py", line 842, in to_sql table.insert() File "/usr/local/lib/python2.7/dist-packages/pandas/io/sql.py", line 611, in insert self.pd_sql.execute(ins, data_list) File "/usr/local/lib/python2.7/dist-packages/pandas/io/sql.py", line 810, in execute return self.engine.execute(*args, **kwargs) File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1614, in execute return connection.execute(statement, *multiparams, **params) File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in execute params) File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in _execute_clauseelement compiled_sql, distilled_params File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in _execute_context context) File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in _handle_dbapi_exception exc_info File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in raise_from_cause reraise(type(exception), exception, tb=exc_tb) 
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 856, in _execute_context context) File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 321, in do_executemany cursor.executemany(statement, parameters) File "/usr/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 557, in executemany values.append(fmt % self._process_params(params)) File "/usr/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 344, in _process_params return self._process_params_dict(params) File "/usr/lib/python2.7/dist-packages/mysql/connector/cursor.py", line 335, in _process_params_dict "Failed processing pyformat-parameters; %s" % err) sqlalchemy.exc.ProgrammingError: (ProgrammingError) Failed processing pyformat-parameters; 'MySQLConverter' object has no attribute '_timestamp_to_mysql' u'INSERT INTO test_table (`index`, `Date`, `Open`, `High`, `Low`, `Close`, `Volume`, `Adj Close`) VALUES (%(index)s, %(Date)s, %(Open)s, %(High)s, %(Low)s, %(Close)s, %(Volume)s, %(Adj Close)s)' ({'index': 0, 'High': 31.31, 'Adj Close': 27.57, 'Volume': 31702200, 'Low': 28.69, 'Date': Timestamp('2000-12-29 00:00:00'), 'Close': 29.06, 'Open': 30.88}, {'index': 1, 'High': 31.62, 'Adj Close': 29.46, 'Volume': 25053600, 'Low': 30.38, 'Date': Timestamp('2000-12-28 00:00:00'), 'Close': 31.06, 'Open': 30.56}, {'index': 2, 'High': 31.06, 'Adj Close': 29.11, 'Volume': 26437500, 'Low': 29.38, 'Date': Timestamp('2000-12-27 00:00:00'), 'Close': 30.69, 'Open': 30.38}, {'index': 3, 'High': 32.19, 'Adj Close': 29.34, 'Volume': 20589500, 'Low': 30.0, 'Date': Timestamp('2000-12-26 00:00:00'), 'Close': 30.94, 'Open': 31.5}, {'index': 4, 'High': 31.98, 'Adj Close': 30.23, 'Volume': 35568200, 'Low': 30.0, 'Date': Timestamp('2000-12-22 00:00:00'), 'Close': 31.88, 'Open': 30.38}, {'index': 5, 'High': 30.25, 'Adj Close': 27.98, 'Volume': 46719700, 'Low': 27.31, 'Date': Timestamp('2000-12-21 00:00:00'), 'Close': 29.5, 'Open': 27.81}, {'index': 6, 'High': 29.81, 'Adj Close': 27.03, 'Volume': 54440500, 'Low': 27.5, 'Date': Timestamp('2000-12-20 00:00:00'), 'Close': 28.5, 'Open': 28.06}, {'index': 7, 'High': 33.12, 'Adj Close': 29.05, 'Volume': 58653700, 'Low': 30.12, 'Date': Timestamp('2000-12-19 00:00:00'), 'Close': 30.62, 'Open': 31.81} ... displaying 10 of 252 total bound parameter sets ... {'index': 250, 'High': 118.62, 'Adj Close': 25.54, 'Volume': 116824800, 'Low': 105.0, 'Date': Timestamp('2000-01-04 00:00:00'), 'Close': 107.69, 'Open': 115.5}, {'index': 251, 'High': 125.19, 'Adj Close': 28.01, 'Volume': 98114800, 'Low': 111.62, 'Date': Timestamp('2000-01-03 00:00:00'), 'Close': 118.12, 'Open': 124.62}) Column `Date` type is `datetime64[ns]`. SQLAlchemy doesn't seems to like this kind of Numpy type so it raises: sqlalchemy.exc.ProgrammingError: (ProgrammingError) Failed processing pyformat-parameters; 'MySQLConverter' object has no attribute '_timestamp_to_mysql' How can I cleanly avoid this kind of error ? 
Answer: I'm not very familiar with MySQL Connector, but [according to this](http://stackoverflow.com/a/19502805/190597), you should be able to add a datetime64 converter using something like import mysql.connector class Datetime64Converter(mysql.connector.conversion.MySQLConverter): """ A mysql.connector Converter that handles datetime64 types """ def _timestamp_to_mysql(self, value): return value.view('<i8') config = { 'user' : 'user', 'host' : 'localhost', 'password': 'xxx', 'database': 'db1'} conn = mysql.connector.connect(**config) conn.set_converter_class(Datetime64Converter) Since all datetime64 dtypes are 8 bytes, they can be viewed and stored as 8-byte integers. I'm not sure what facilities MySQL Connector provides for pulling the data back out as `datetime64`s. But if all else fails, you can convert the 8-byte integers back into `datetime64[ns]` like this: In [33]: s.view('<i8') Out[33]: 0 978307200000000000 1 978393600000000000 2 978480000000000000 3 978566400000000000 4 978652800000000000 5 978739200000000000 6 978825600000000000 7 978912000000000000 8 978998400000000000 9 979084800000000000 dtype: int64 In [34]: s.view('<i8').view('<M8[ns]') Out[34]: 0 2001-01-01 1 2001-01-02 2 2001-01-03 3 2001-01-04 4 2001-01-05 5 2001-01-06 6 2001-01-07 7 2001-01-08 8 2001-01-09 9 2001-01-10 dtype: datetime64[ns]
Pyramid ImportError: No module named registry Question: I have recently upgraded pyramid to 1.5.1 from 1.2 on my machine, when try to start the uwsgi server, now i am getting this error. File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 247, in loadapp return loadobj(APP, uri, name=name, **kw) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 271, in loadobj global_conf=global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 320, in _loadconfig return loader.get_context(object_type, name, global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 450, in get_context global_additions=global_additions) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 559, in _pipeline_app_context APP, pipeline[-1], global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 454, in get_context section) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 476, in _context_from_use object_type, name=use, global_conf=global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 406, in get_context global_conf=global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 328, in _loadegg return loader.get_context(object_type, name, global_conf) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 620, in get_context object_type, name=name) File "/usr/local/lib/python2.7/dist-packages/PasteDeploy-1.5.0-py2.7.egg/paste/deploy/loadwsgi.py", line 646, in find_egg_entry_point possible.append((entry.load(), protocol, entry.name)) File "build/bdist.linux-x86_64/egg/pkg_resources.py", line 2190, in load File "./xyz/__init__.py", line 1, in <module> from pyramid.config import Configurator File "/usr/local/lib/python2.7/dist-packages/pyramid/config/__init__.py", line 20, in <module> from pyramid.authorization import ACLAuthorizationPolicy File "/usr/local/lib/python2.7/dist-packages/pyramid/authorization.py", line 9, in <module> from pyramid.security import ( File "/usr/local/lib/python2.7/dist-packages/pyramid/security.py", line 13, in <module> from pyramid.threadlocal import get_current_registry File "/usr/local/lib/python2.7/dist-packages/pyramid/threadlocal.py", line 3, in <module> from pyramid.registry import global_registry File "/usr/local/lib/python2.7/dist-packages/pyramid/registry.py", line 5, in <module> from zope.interface.registry import Components ImportError: No module named registry How do i proceed to solve this error, I am using uwsgi to run server.I looked for solutions in the similar questions, but nothing helped me. Answer: You'll need to upgrade your `zope.interface` version too. You'll need to install version 3.8.0 or newer. 
Other minimal requirements have also been updated since 1.2: * `WebOb` must be 1.3.1 or newer * `repoze.lru` must be 0.4 or up * `zope.deprecation` 3.5.0 or newer is required * `venusian` must now be at least version 1.0a3 * `translationstring` must be `0.4` or newer. Take into account that each of these packages may have other dependencies too. If you are using a buildout, make sure you have a `[versions]` section and pin newer versions. If you have a virtualenv, you should investigate whether `bin/pip install -U <package>` will get you the correct versions. However, I would not make the jump straight from 1.2 to 1.5.1 in just one step. Follow the [upgrade advice](http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/upgrading.html), read the change log, and see if you can upgrade your application one version at a time: from 1.2 to 1.3.4 to 1.4.5 to 1.5.1 in controlled steps.
asyncio with map&reduce flavor and without flooding the event loop Question: I am trying to use asyncio in real applications and it doesn't go that easily; the help of asyncio gurus is badly needed. ## Tasks that spawn other tasks without flooding event loop (Success!) Consider a task like crawling the web starting from some "seeding" web pages. Each web page leads to the generation of new downloading tasks in exponential(!) progression. However, we want neither to flood the event loop nor to overload our network. We'd like to control the task flow. This is what I achieve well with a modification of Maxime's nice solution proposed here: <https://mail.python.org/pipermail/python-list/2014-July/687823.html> ## map & reduce (Fail) But I also need a very natural thing, a kind of map() & reduce(), or functools.reduce() if we are on Python 3 already. That is, I'd need to call a "summarizing" function for all the downloading tasks completed on links from a page. This is where I fail :( I'd propose an oversimplified but still nice test to model the use case: let's use the Fibonacci function in its inefficient form. That is, let coro_sum() be applied in reduce() and coro_fib be what we apply with map(). Something like this: @asyncio.coroutine def coro_sum(x): return sum(x) @asyncio.coroutine def coro_fib(x): if x < 2: return 1 res_coro = executor_pool.spawn_task_when_arg_list_of_coros_ready(coro=coro_sum, arg_coro_list=[coro_fib(x - 1), coro_fib(x - 2)]) return res_coro So that we could run the following tests. Test #1 on one worker: executor_pool = ExecutorPool(workers=1) executor_pool.as_completed( coro_fib(x) for x in range(20) ) Test #2 on two workers: executor_pool = ExecutorPool(workers=2) executor_pool.as_completed( coro_fib(x) for x in range(20) ) It would be very important that each of the coro_fib() and coro_sum() invocations is done via a Task on some worker, not just spawned implicitly and unmanaged! It would be cool to find asyncio gurus interested in this very natural goal. Your help and ideas would be very much appreciated. Best regards, Valery Answer: There are [multiple ways to compute the Fibonacci series asynchronously](https://github.com/zed/txfib/blob/master/fibonacci.py#L229). First, check that the explosive variant fails in your case: @asyncio.coroutine def coro_sum(summands): return sum(summands) @asyncio.coroutine def coro_fib(n): if n == 0: s = 0 elif n == 1: s = 1 else: summands, _ = yield from asyncio.wait([coro_fib(n-2), coro_fib(n-1)]) s = yield from coro_sum(f.result() for f in summands) return s You could replace `summands` with: a = yield from coro_fib(n-2) # don't return until it's ready b = yield from coro_fib(n-1) s = yield from coro_sum([a, b]) In general, to prevent the exponential growth, you could use the `asyncio.Queue` ([synchronization via communication](http://stackoverflow.com/a/26638490/4279)) and `asyncio.Semaphore` ([synchronization using a mutex](http://stackoverflow.com/a/20722204/4279)) primitives.
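For instance, here is a sketch of the semaphore approach on the pre-3.5 asyncio API used above; the limit of 10 concurrent coroutines is an arbitrary assumption:

    import asyncio

    sem = asyncio.Semaphore(10)  # assumed cap on concurrently running coroutines

    @asyncio.coroutine
    def bounded_fib(n):
        with (yield from sem):  # acquire, run, release
            return (yield from coro_fib(n))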
Import error with Python Django Question: (Note: I have found a workaround, as stated at the bottom, but it seems like the wrong way to go about this.) `views.py` and `forms.py` are both in `R:\\jeffy\\programming\\sandbox\\python\\django_files\\tutorial\\django_test\\django_test\\` and `R:\\jeffy\\programming\\sandbox\\python\\django_files\\tutorial\\django_test\\` is in Django's "Python Path" But when attempting from forms import MyRegistrationForm in `views.py`, it results in an `ImportError ... No module named 'forms'`. ~~I changed it to from django_test.forms import MyRegistrationForm but that results in `SyntaxError ... invalid syntax`.~~ Since `R:\jeffy\programming\sandbox\python\django_files\tutorial\django_test\django_test` is a package (it has the empty `__init__.py` file in it), I thought I could refer to `django_test` in this way. (It's also the Django project name.) I can manually add `R:\\jeffy\\programming\\sandbox\\python\\django_files\\tutorial\\django_test\\django_test\\` to the `PYTHONPATH` environment variable (Windows 7, 32 bit, Django 1.7c2), and get it to work. But since this implies that _every_ Django project must be added to the `PYTHONPATH`, it seems like it should be able to work without having to do this. Am I correct? Is there a way to import `forms.py` from `views.py` (which is in the same directory) without having to manually add its directory to the `PYTHONPATH`? Thanks. Answer: If it's in the same directory, try a relative import: from .forms import MyRegistrationForm _EDIT:_ Someone posted this same suggestion as a comment already. I didn't see the comment.
python Qt ibus error messages Question: Running the following Python program results in two error messages at startup. ubuntu:~/Desktop/testing$ python test.py Bus::open: Can not get ibus-daemon's address. IBusInputContext::createInputContext: no connection to ibus-daemon The errors occur in both Python 2 and Python 3. import sys from PyQt4 import QtGui class MyApp(QtGui.QMainWindow): def __init__(self): QtGui.QMainWindow.__init__(self) list = QtGui.QTableView(self) app = QtGui.QApplication(sys.argv) myApp = MyApp() myApp.show() sys.exit(app.exec_()) Can anyone tell me what I'm doing wrong? Answer: From <http://askubuntu.com/questions/360774/how-do-reactivate-ibus-after-upgrading-to-ubuntu-13-10> ibus restart ibus-setup Running the Python code after executing those commands makes the warnings go away. The second command prompted me with "ibus daemon not running. would you like to start it?". The fact that it asked that explains why my Python program could not connect.
How to change color of plotted curves when using Animations in Python MatPlotLib? Question: I have a piece of code that uses the FuncAnimation method in Python MatPlotLib to generate 50 random Exponential Decay Curves and updating the plot showing each one other the curves as they re generated.Each curves shows up with different colors. I would like to be able to gray out the previos curves as the new one is generated in a set color, say Blue. I hope someone can help. import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation import random fig = plt.figure() ax1 = fig.add_subplot(1,1,1) def main(i): # Actual parameters A0 = 10 K0 = random.uniform(-15,-1) C0 = random.uniform(0,10) # Generate some data based on these tmin, tmax = 0, 0.5 num = 20 t = np.linspace(tmin, tmax, num) y = model_func(t, A0, K0, C0) ax1.plot(t,y) def model_func(t, A, K, C): return A * np.exp(K * t) ani = animation.FuncAnimation(fig, main, interval=1000) plt.show() Answer: you have to store the line instance which `plot` returns and call `set_color(color)` before you draw again: import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation import random # an empty variable, whre we store the returned line of plot: line = None fig = plt.figure() ax1 = fig.add_subplot(1,1,1) def main(i): # we have to make line global: global line # Actual parameters A0 = 10 K0 = random.uniform(-15,-1) C0 = random.uniform(0,10) # Generate some data based on these tmin, tmax = 0, 0.5 num = 20 t = np.linspace(tmin, tmax, num) y = model_func(t, A0, K0, C0) # check if line already exists, if yes make it gray: if line is not None: line.set_color('gray') # plot returns a list with line instances, one for each line you draw, # the comma is used to unpack the one element list line, = ax1.plot(t,y, color='red') def model_func(t, A, K, C): return A * np.exp(K * t) ani = animation.FuncAnimation(fig, main, interval=1000) plt.show() ![plot result](http://i.stack.imgur.com/syY8n.gif)
python different result from IDLE and python script Question: I have tried the following in Python 2.7 shell: >>> from nltk.stem.isri import ISRIStemmer >>> st = ISRIStemmer() >>> string = u'\u062D\u064E\u062F\u0651\u064E\u062B\u064E\u0646\u064E\u0627' >>> st.stem(string) u'\u062d\u062f\u062b' So basically, I am trying to obtain: u'\u062d\u062f\u062b' from u'\u062D\u064E\u062F\u0651\u064E\u062B\u064E\u0646\u064E\u0627' using nltk's arabic stemmer, which works! However, when I try to accomplish the exact thing through a python script, it fails to stem any of the words in the list, `tokens` : #!/c/Python27/python # -*- coding: utf8 -*- import nltk import nltk.data from nltk.stem.isri import ISRIStemmer #In my script, I tokenize the following string commasection = '\u0623\u064E\u062E\u0652\u0628\u064E\u0631\u064E\u0646\u064E\u0627 \u0628\u0650\u0634\u0652\u0631\u064F \u0628\u0652\u0646\u064F \u0647\u0650\u0644\u0627\u064E\u0644\u064D' #The tokenizing works tokens = nltk.word_tokenize(commasection) st = ISRIStemmer() for word in tokens: #But the stemming of each word in tokens doesn't work???? print st.stem(word) #Should display #u'u0623\u062e\u0628\u0631' #u'\u0628\u0634\u0631' #u'\u0628\u0646' #u'\u0647\u0644\u0644' #But it just shows whatever is in commasection I need my python code to stem all words in `tokens`. But I don't get how the simpler example running in python shell works but not this script. I have noticed that in the shell scenario, there is that 'u' in front of the sequence of unicode, so I tried all sorts of encodings/decodings and read a lot about it all night long (pulled an all-nighter on this one), but this python script is just not stemming `word` from `tokens` like the python shell!!! If anyone can please help me make my script display the correct result I would be super super appreciative Answer: Unicode escapes only work in `unicode` literals. commasection = u'\u0623\u064E\u062E\u0652\u0628\u064E\u0631\u064E\u0646\u064E\u0627 \u0628\u0650\u0634\u0652\u0631\u064F \u0628\u0652\u0646\u064F \u0647\u0650\u0644\u0627\u064E\u0644\u064D'
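If editing every literal is impractical, decoding the escapes explicitly should work too in Python 2 (a sketch; it assumes the plain string really contains backslash escape sequences rather than the Arabic characters themselves):

    commasection = '\u0623\u064E\u062E\u0652\u0628\u064E\u0631\u064E\u0646\u064E\u0627'
    commasection = commasection.decode('unicode_escape')  # now a unicode object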
Match a pattern only when previous pattern matches Question: I have a situation where I have to match a pattern only when a previous regex pattern matches. The two patterns are different, and the match objects are on different lines. For example, Text: blah blah blah MyHost="xxxx" again blah blah blah MyIp= "x.x.x.x" I am only interested in what comes after `MyHost` and `MyIp`. I also have a requirement that `MyIp` should match only when there is a `match(MyHost="xxxx")` in the above line. I am able to match both the `MyHost` value and the `MyIp` value separately, but I am having a hard time finding a logic to match both as per the requirement. Please note I am fairly new to Python; I tried a lot of searching and ended up here. Answer: > `MyIp` should match only when there is a `match(MyHost="xxxx")` in the above > line. Grab the matched group at index 1, using a lazy quantifier. You already know what comes right after `MyHost`: \bMyHost="xxxx"\r?\n.*?MyIp=\s*\"([^"]*) Here is a [demo](http://regex101.com/r/sB6mR7/4) and sample code: import re p = re.compile(ur'\bMyHost="xxxx"\r?\n.*?MyIp=\s*\"([^"]*)', re.IGNORECASE) test_str = u"blah blah blah MyHost=\"xxxx\"\nagain blah blah blah MyIp= \"x.x.x.x\"" re.findall(p, test_str)
Python - 'numpy.float64' object is not callable using minimize function for alpha optimization for Simple Exponential Smoothing Question: I'm getting the TypeError: 'numpy.float64' object is not callable error for the following code: import numpy as np from scipy.optimize import minimize def ses(data, alpha): fit=[] fit.append(alpha*data[1] + (1-alpha)*data[0]) for i in range(2, len(data)): fit.append(data[i]*alpha + fit[i-2]*(1-alpha)) return fit def rmse(data, fit): se=[] for i in range(2,len(data)): se.append((data[i]-fit[i-2])*(data[i]-fit[i-2])) mse=np.mean(se) return np.sqrt(mse) alpha=0.1555 # starting value fit=ses(d[0], alpha) error=rmse(d[0], fit) result=minimize(error, alpha, (fit,), bounds=[(0,1)], method='SLSQP') I've tried many alternatives and its just not working. Changed the lists to arrays and made the multiplications involve no exponentials (np.sqrt() as opposed to ()**0.5) EDIT: def ses(data, alpha): fit=[] fit.append(alpha*data[1] + (1-alpha)*data[0]) for i in range(2, len(data)): fit.append(data[i]*alpha + fit[i-2]*(1-alpha)) return fit def rmse(data, alpha): fit=ses(data, alpha) se=[] for i in range(2,len(data)): print i, i-2 se.append((data[i]-fit[i-2])*(data[i]-fit[i-2])) mse=np.mean(se) return np.sqrt(mse) alpha=0.1555 # starting value data=d[0] result = minimize(rmse, alpha, (data,), bounds=[(0,1)], method='SLSQP') Ok guys, thanks. Have edited to this and I have stopped the error, however now I am getting an index out of bounds error, which is strange as without the minimize line, the code runs perfectly fine. EDIT 2: There was a series of silly errors, most of which I didn't know were problems, but were solved by trial and error. For some working code of optimized exponential smoothing: def ses(data, alpha): 'Simple exponential smoothing' fit=[] fit.append(data[0]) fit.append(data[1]) ## pads first two fit.append(alpha*data[1] + (1-alpha)*data[0]) for i in range(2, len(data)-1): fit.append(alpha*data[i] + (1-alpha)*fit[i]) return fit def rmse(alpha, data): fit=ses(data, alpha) se=[] for i in range(2,len(data)): se.append((data[i]-fit[i-2])*(data[i]-fit[i-2])) mse=np.mean(se) return np.sqrt(mse) alpha=0.5 data = d[0] result = minimize(rmse, alpha, (data,), bounds=[(0,1)], method='SLSQP') Answer: Its hard to tell exactly what the problem is here. I assume that `minimize` is actually [Scipy's minimize](http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.minimize.html). If so the first argument should be a function. Instead, you are passing the output of the `rmse` function, which is a double precision number. error=rmse(d[0], fit) # <--- returns a number You should have: result=minimize(<some function here>, alpha, (fit,), bounds=[(0,1)], method='SLSQP') When `minimize` is called, it attempts to call `error`, thus throwing a `TypeError: 'numpy.float64' object is not callable` There is a straightforward tutorial [here](http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html) that walks through exactly how to use `minimize` with the sequential least squares programming optimization algorithm. I would hazard a guess that you actually want to be passing `rmse` as the first argument: result=minimize(rmse, alpha, (fit,), bounds=[(0,1)], method='SLSQP') After all, the `rmse` function is giving you the error value and that is what you are minimising in such an optimisation.
Python irc bot input Question: I've been working on an IRC bot and have run into a halt: I can't figure out how to accept input for arguments to a function, specifically my function add(). Here's the function, def add(x1,x2): xtot = x1 + x2 ircsock.send("PRIVMSG "+ channel + " :Total = %s.\n" % (xtot)) and here's how it's called if ircmsg.find('add.' + botnick) != -1: add() I want to know how to attach input alongside just running the function, as in add.mybot 1,2 instead of just add.mybot Answer: I don't know which library you are using, but I will suppose that `ircmsg` contains the input (otherwise, replace `ircmsg` by `ircmsg.line`/`ircmsg.input`/…) import re if ircmsg.find('add.' + botnick) != -1: # you find a line with add.mybot try: # assumes the arguments arrive in parentheses, e.g. add.mybot(1,2) (x,y) = [int(i) for i in re.match(".*\((.*),(.*)\).*", ircmsg).groups()] # we try to retrieve what we want add(x, y) except: pass # or some logs However, I think that the conversion should not be done here but in your `add` function (e.g. if you want to be able to add floats too). For instance: import re if ircmsg.find('add.' + botnick) != -1: # you find a line with add.mybot try: (x,y) = re.match(".*\((.*),(.*)\).*", ircmsg).groups() # we try to retrieve what we want add(x, y) except: pass # or some logs and def add(x1,x2): if x1.isdigit() and x2.isdigit(): xtot = int(x1) + int(x2) ircsock.send("PRIVMSG "+ channel + " :Total = %s.\n" % (xtot)) else: try: xtot = float(x1) + float(x2) ircsock.send("PRIVMSG "+ channel + " :Total = %s.\n" % (xtot)) except: pass # or send the concatenation x1+x2 Hope I am not mistaken.
Python: unable connect with database Question: Python 3.4 with psycopg2 I used [this guide](https://wiki.postgresql.org/wiki/Using_psycopg2_with_PostgreSQL) to set up a basic `psycopg2` connection like so: #!/usr/bin/python import psycopg2 import sys import pprint def main(): conn_string = "dbname='CIBTST' host='XX.XX.XXX.XX' port='XXXX' user='XXXXX' password='XXXX'" conn = psycopg2.connect(conn_string) cursor = conn.cursor() cursor.execute("My_select") records = cursor.fetchall() pprint.pprint(records) if __name__ == "__main__": main() I get this traceback: Traceback (most recent call last): File "C:\Users\pi24926\Desktop\Python\doSMS.py", line 14, in <module> main() File "C:\Users\pi24926\Desktop\Python\doSMS.py", line 8, in main conn = psycopg2.connect(conn_string) File "C:\Python34\lib\site-packages\psycopg2\__init__.py", line 164, in connect conn = _connect(dsn, connection_factory=connection_factory, async=async) psycopg2.OperationalError When I try the same select statement in another client (Toad) it seems to operate correctly. Any suggestions will be great. Thank you! EDIT: The source of problem is: PORT. If I run: #!/usr/bin/python import psycopg2 import sys import pprint def main(): conn_string = "dbname='CIBTST' host='XX.XX.XXX.XX' user='XXXXX' password='XXXX'" conn = psycopg2.connect(conn_string) cursor = conn.cursor() cursor.execute("My_select") records = cursor.fetchall() pprint.pprint(records) if __name__ == "__main__": main() Then I get: Traceback (most recent call last): File "C:\Users\pi24926\Desktop\Python\doSMS.py", line 14, in <module> main() File "C:\Users\pi24926\Desktop\Python\doSMS.py", line 8, in main conn = psycopg2.connect(conn_string) File "C:\Python34\lib\site-packages\psycopg2\__init__.py", line 164, in connect conn = _connect(dsn, connection_factory=connection_factory, async=async) psycopg2.OperationalError: could not connect to server: Connection timed out (0x0000274C/10060) Is the server running on host "XXXXXXX" and accepting TCP/IP connections on port 5432? I can only connect under port 1522 for some reason. Any ideas? Answer: The problem was with database... It is an Oracle, so i should use cx_Oracle instead of psycopg2.
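For completeness, a minimal cx_Oracle sketch matching the connection details from the question; the credentials are placeholders, and note that older cx_Oracle versions take a SID as the third positional argument to makedsn() instead of the service_name keyword:

    import cx_Oracle

    dsn = cx_Oracle.makedsn('XX.XX.XXX.XX', 1522, service_name='CIBTST')
    conn = cx_Oracle.connect(user='XXXXX', password='XXXX', dsn=dsn)
    cursor = conn.cursor()
    cursor.execute("My_select")  # your select statement here
    print(cursor.fetchall())
    conn.close()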
Iterating over dictionaries in list - Python Question: Similar to my last question: Is there a one-liner/Pythonic (the former doesn't necessarily imply the latter, I know) way to write the following nested for loop?

    some_list = # list of dictionaries
    for i in some_list:
        for j in i['some_key']:
            if j in another_list:
                i['another_key'] = True

I've tried

    import itertools
    for i,j in itertools.product(some_list,i):
        if j in another_list:
            i['another_key'] = True

but I've been given a 'reference before assignment' error, which makes sense, I think. Any suggestions? Thanks! Answer: This isn't a one-liner, but achieves what you're trying to do neatly and clearly:

    for i in some_list:
        i['another_key'] = any(j in another_list for j in i['some_key'])

Alternatively, you can protect against `i['some_key']` not being present with:

    for i in some_list:
        i['another_key'] = any(j in another_list for j in i.get('some_key', []))

or against `i['some_key']` not being iterable with e.g.:

    try:
        i['another_key'] = any(j in another_list for j in i['some_key'])
    except TypeError:
        # whatever you want to do instead

On the other hand, if it turns out your dictionary doesn't have the appropriate key or the value isn't iterable, you might prefer to find out directly! I don't know what the data you're playing with represents, so I can't make suggestions, but better variable names would probably be helpful.
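A quick worked example with illustrative data shows the effect:

    some_list = [{'some_key': [1, 2]}, {'some_key': [5]}]
    another_list = [2, 3]

    for i in some_list:
        i['another_key'] = any(j in another_list for j in i['some_key'])

    print(some_list)
    # [{'some_key': [1, 2], 'another_key': True},
    #  {'some_key': [5], 'another_key': False}]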
Python: How to escape 'lambda' Question: `lambda` has a keyword function in Python: f = lambda x: x**2 + 2*x - 5 What if I want to use it as a variable name? Is there an escape sequence or another way? You may ask why I don't use another name. This is because I'd like to use [`argparse`](https://docs.python.org/2/library/argparse.html): parser = argparse.ArgumentParser("Calculate something with a quantity commonly called lambda.") parser.add_argument("-l","--lambda",help="Defines the quantity called lambda", type=float) args = parser.parse_args() print args.lambda # syntax error! Script called with `--help` option gives: ... optional arguments -h, --help show this help message and exit -l LAMBDA, --lambda LAMBDA Defines the quantity called lambda Because of that, I would like to stay with `lambda` as the variable name. Solutions may be `argparse`-related as well. Answer: You can use dynamic attribute access to access that specific attribute still: print getattr(args, 'lambda') Better still, tell `argparse` to use a different attribute name: parser.add_argument("-l", "--lambda", help="Defines the quantity called lambda", type=float, dest='lambda_', metavar='LAMBDA') Here the [`dest` argument](https://docs.python.org/2/library/argparse.html#dest) tells `argparse` to use `lambda_` as the attribute name: print args.lambda_ The help text still will show the argument as `--lambda`, of course; I set `metavar` explicitly as it otherwise would use `dest` in uppercase (so with the underscore): >>> import argparse >>> parser = argparse.ArgumentParser("Calculate something with a quantity commonly called lambda.") >>> parser.add_argument("-l", "--lambda", ... help="Defines the quantity called lambda", ... type=float, dest='lambda_', metavar='LAMBDA') _StoreAction(option_strings=['-l', '--lambda'], dest='lambda_', nargs=None, const=None, default=None, type=<type 'float'>, choices=None, help='Defines the quantity called lambda', metavar='LAMBDA') >>> parser.print_help() usage: Calculate something with a quantity commonly called lambda. [-h] [-l LAMBDA] optional arguments: -h, --help show this help message and exit -l LAMBDA, --lambda LAMBDA Defines the quantity called lambda >>> args = parser.parse_args(['--lambda', '4.2']) >>> args.lambda_ 4.2
Setting custom headers on a python-wordpress-xmlrpc request Question: I'm trying to connect to a Wordpress blog using XMLRPC. I'm using the latest library, v2.3 (<http://python-wordpress-xmlrpc.readthedocs.org/en/latest/>). I get the following exception when I try to initialize the client: ServerConnectionError: <ProtocolError for www.myblogaddress.com/xmlrpc.php: 403 Forbidden> I noticed that this happens before the username & password are checked, so it doesn't have anything to do with invalid credentials. I believe it might require some custom headers, like user agent, but I don't know how to set a custom transport param. I have copied the code from the python-wordpress-xmlrpc library and modified it so I could make tests. Here is what I have so far: from xmlrpclib import Transport class SpecialTransport(Transport): def send_content(self, connection, request_body): connection.putheader("Content-Type", "text/xml") connection.putheader("Content-Length", str(len(request_body))) connection.putheader('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11') connection.putheader('Accept','text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8') connection.putheader('Accept-Charset','ISO-8859-1,utf-8;q=0.7,*;q=0.3') connection.putheader('Accept-Encoding','none') connection.putheader('Accept-Language', 'en-US,en;q=0.8') connection.putheader('Connection', 'keep-alive') connection.endheaders() if request_body: connection.send(request_body) url = "{test_url_here}" try: server = xmlrpc_client.ServerProxy(url, allow_none=True, transport=SpecialTransport()) supported_methods = server.mt.supportedMethods() except xmlrpc_client.ProtocolError as err: print "A protocol error occurred" print "URL: %s" % err.url print "HTTP/HTTPS headers: %s" % err.headers print "Error code: %d" % err.errcode print "Error message: %s" % err.errmsg I should mention I have successfully connected to the same blog from a PHP script, so I believe it has something to do with the Python request. Any idea why this doesn't work? Thanks for the help! Answer: I figured that the "403 Forbidden" status was because the XMLRPC api didn't "like" the user agent. Like I said, the same request was working fine using a PHP script. In the [xmlrpclib Transport class](http://svn.python.org/projects/python/trunk/Lib/xmlrpclib.py), the user agent is set as: user_agent = "xmlrpclib.py/%s (by www.pythonware.com)" % __version__ My send_content() method was not overwriting it, instead my request ended up with 2 User-Agent headers. Maybe someone can shed some light on what this happens. So, I did the following: from xmlrpclib import Transport class SpecialTransport(Transport): user_agent = 'Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31' try: # Use the verbose flag for debugging server = xmlrpc_client.ServerProxy(url, transport=SpecialTransport(), verbose=True) except xmlrpc_client.ProtocolError as err: print "A protocol error occurred" print "URL: %s" % err.url print "HTTP/HTTPS headers: %s" % err.headers print "Error code: %d" % err.errcode print "Error message: %s" % err.errmsg It worked just fine after that. 
After debugging the xmlrpclib problem, my code ended up as: from xmlrpclib import Transport import wordpress_xmlrpc class SpecialTransport(Transport): user_agent = 'Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31' wordpress_client = wordpress_xmlrpc.Client( 'http://www.myblogaddress.com/xmlrpc.php', 'username', 'password', transport=SpecialTransport() )
Python 2.7: updating Tkinter Label widget content Question: I'm trying to make my Tkinter Label widget update but, where I thought it was straightforward, now I can't sort it out. My code is: import Tkinter as tk import json, htmllib, formatter, urllib2 from http_dict import http_status_dict from urllib2 import * from contextlib import closing class Application(tk.Frame): def __init__(self, master=None): tk.Frame.__init__(self, master) self.grid() self.createWidgets() def createWidgets(self): StatusTextVar = tk.StringVar() self.EntryText = tk.Entry(self) self.GetButton = tk.Button(self, command=self.GetURL) self.StatusLabel = tk.Label(self, textvariable=StatusTextVar) self.EntryText.grid(row=0, column=0) self.GetButton.grid(row=0, column=1, sticky=tk.E) self.StatusLabel.grid(row=1, column=0, sticky=tk.W) def GetURL(self): try: self.url_target = ("http://www." + self.EntryText.get()) self.req = urllib2.urlopen(self.url_target) StatusTextVar = "Success" except: self.StatusTextVar = "Wrong input. Retry" pass app = Application() app.mainloop() I've tried several ways but either the Label won't update, or the interpreter raises errors. Note: In the excerpt I deleted as much as code as possible to avoid confusion. Answer: You need to use the `StringVar` `set` method to change the label text. Also: StatusTextVar = "Success" is not referencing self and will not change any state. You should first change all `StatusTextVar` to `self.StatusTextVar` and then update the set calls: self.StatusTextVar = "Success" self.StatusTextVar = "Wrong input. Retry" to self.StatusTextVar.set("Success") self.StatusTextVar.set("Wrong input. Retry") Updating all `StatusTextVar` instances and using the `set` method, I get: import Tkinter as tk import json, htmllib, formatter, urllib2 from urllib2 import * from contextlib import closing class Application(tk.Frame): def __init__(self, master=None): tk.Frame.__init__(self, master) self.grid() self.createWidgets() def createWidgets(self): self.StatusTextVar = tk.StringVar() self.EntryText = tk.Entry(self) self.GetButton = tk.Button(self, command=self.GetURL) self.StatusLabel = tk.Label(self, textvariable=self.StatusTextVar) self.EntryText.grid(row=0, column=0) self.GetButton.grid(row=0, column=1, sticky=tk.E) self.StatusLabel.grid(row=1, column=0, sticky=tk.W) def GetURL(self): try: self.url_target = ("http://www." + self.EntryText.get()) self.req = urllib2.urlopen(self.url_target) self.StatusTextVar.set("Success") except: self.StatusTextVar.set("Wrong input. Retry") pass root = tk.Tk() app = Application(master=root) app.mainloop() It works as one would expect.
parallelization in python multiprocessing Question: I need to do parallelization in Python for my existing code and I am blocked on one for loop. I used `multiprocessing.Process` but it's freezing the computer.

    import multiprocessing

    def func(pos,r,h,grid):
        for i in arayb:
            l= L(r,g, p[i,:],h) #process need to be parallelized (L is function in another file)
            p = multiprocessing.Process(target=func)
            p.start()
            p.join()
            print('l',l)

    if __name__ == '__main__':
        lock = Lock()

When I am using `if __name__ == '__main__':` above like:

    import multiprocessing
    from source.RUN import* # imported main file

    def func(pos,r,h,grid):
        for i in arayb:
            l= L(r,g, p[i,:],h) #process need to be parallelized (L is function in another file)

    if __name__ == '__main__':
        lock = Lock()
        p = multiprocessing.Process(target=func)
        p.start()
        p.join()
        print('l',l)

Then it will not go to `multiprocessing.Process`. So please give me an appropriate way to run `l = L(r,g, p[i,:],h)` in parallel over the loop `for i in arayb`. I have 13 to 14 calls like `l = L(r,g, p[i,:],h)` that need to be parallelized, so give me guidance on one, then I can make the others parallel. Answer: Your first example is freezing the computer because you're recursively calling `func` in new processes. That's going to quickly use up all your CPU/memory. Your second example is just creating one process that iterates over `arayb` and calls `L` without any concurrency, so you're not getting any performance boost. What you want to do is execute `L` concurrently across multiple processes, but without spawning so many processes that you bog down your system. You can use [`multiprocessing.Pool`](https://docs.python.org/2.7/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool) for that:

    import multiprocessing

    # L must be imported or defined somewhere up here...

    if __name__ == '__main__':
        # use a name other than p for the pool, since p already holds your data array
        pool = multiprocessing.Pool() # creates a pool with as many workers as you have CPU cores
        results = []
        for i in arayb:
            results.append(pool.apply_async(L, (r, g, p[i,:], h)))
        pool.close()
        pool.join()
        for result in results:
            print('l', result.get())
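To see the corrected pattern run end to end, here is a self-contained toy version; L below is only a stand-in for the real function imported from the other file:

    import multiprocessing

    def L(r, g, row, h):
        # placeholder for the real computation
        return r * sum(row) + g - h

    if __name__ == '__main__':
        r, g, h = 2.0, 1.0, 0.5
        rows = [[1, 2], [3, 4], [5, 6]]  # stand-ins for the p[i,:] slices
        pool = multiprocessing.Pool()
        async_results = [pool.apply_async(L, (r, g, row, h)) for row in rows]
        pool.close()
        pool.join()
        print([res.get() for res in async_results])  # [6.5, 14.5, 22.5]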
how to stop after "cordova run ios" Question: What is the command to stop running after using `cordova run ios` in terminal? I found one topic about this with 1 answer saying it's `quit`, but that didn't work. Right now I close terminal every time, which is very time-consuming. If I press ctrl+c I get the following:

> (lldb) ^CTraceback (most recent call last): File "/private/tmp/fruitstrap_.py", line 17, in connect_command event = lldb.SBEvent() File "/Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Versions/A/Resources/Python/lldb/__init__.py", line 3395, in __init__ this = _lldb.new_SBEvent(*args) KeyboardInterrupt error: the platform is not currently connected Executing commands in '/tmp/fruitstrap-lldb-prep-cmds-'. (lldb) platform select remote-ios --sysroot '/Users/doekewartena/Library/Developer/Xcode/iOS DeviceSupport/7.1.2 (11D257)/Symbols' Platform: remote-ios Connected: no SDK Path: "/Users/doekewartena/Library/Developer/Xcode/iOS DeviceSupport/7.1.2 (11D257)/Symbols" (lldb) target create "/Users/doekewartena/Documents/jbc2014/platforms/ios/build/device/JBC2014.app" Current executable set to '/Users/doekewartena/Documents/jbc2014/platforms/ios/build/device/JBC2014.app' (armv7). (lldb) script fruitstrap_device_app="/private/var/mobile/Applications/E23498AF-29C5-4A9F-8AFB-6566631DB725/JBC2014.app" (lldb) script fruitstrap_connect_url="connect://127.0.0.1:12345" (lldb) command script import "/tmp/fruitstrap_.py" (lldb) command script add -f fruitstrap_.connect_command connect (lldb) command script add -s asynchronous -f fruitstrap_.run_command run (lldb) command script add -s asynchronous -f fruitstrap_.autoexit_command autoexit (lldb) connect (lldb) run

Answer: It is probably because of an earlier version of `ios-deploy`, as described [here](https://github.com/phonegap/ios-deploy/issues/38). Check your version of ios-deploy; mine was 1.0.8:

    $ ios-deploy --version

Check the npm version of ios-deploy; current is 1.1.0:

    $ npm info ios-deploy version

Update ios-deploy via npm:

    $ sudo npm update -g ios-deploy

After `$ cordova run` you can now type `exit` and press ENTER. After this, type `y` and press ENTER to exit the process.
Ensuring continuously updated image file is safe to open Question: So here's the situation: I've got two Python programs, one to control a uEye camera module, making use of the SimpleCV library, and another to do a bit of analysis on the image. The reason for them being separate is that SimpleCV is 2.7, while a few modules I need to use in the analysis stage are for 3.X only. The camera program will continuously capture and save an image to a location (rewriting the old image), which I've timed to be around every 30 ms. The analysis program takes in an image every 100 ms or so. Now, the issue I'm concerned with is that if the analysis program tries to read in the image while the camera program happens to be writing it, it will spring an error. I'm fairly certain placing an exception statement to catch the OSError and have it simply try again would suffice, but I feel that is a bit forceful. I've also thought about having the camera program write a number (say, 100) of images, to lesson the odds that the two will happen to be working on the same file at once, but that seems unreliable. In a perfect world, I could ditch SimpleCV and go with a 3.X module, allowing the writing and reading to happen in sequence only, but I've yet to find a suitable replacement that works with the camera. Any thoughts on the most efficient, robust way of avoiding this issue? Here is the (simplified) camera program: from SimpleCV import * cam = Camera(0) while True: img = cam.getImage() img.save("nav.jpg") And the important part of the analysis program: from PIL import Image img = Image.open("nav.jpg") Answer: The easiest way is to open the file with exclusive access so no-one can have it open for the duration of you working with it. See [What is the best way to open a file for exclusive access in Python?](http://stackoverflow.com/questions/186202/what-is-the-best-way-to- open-a-file-for-exclusive-access-in-python) for implementation details. Be sure to `file.close()` or `with <file_open> as f` to close the file as soon as you can to minimize interference with the agents that "continuously update" it. Yes, and be sure to handle the file locked case in those agents.
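An alternative to locking that avoids blocking either process is an atomic-rename handoff. A sketch, assuming both programs run on the same POSIX filesystem, where os.rename atomically replaces the destination (on Windows, os.rename fails if the target exists; Python 3's os.replace is the equivalent there):

    import os

    # Writer side (the SimpleCV loop): save under a temporary name first,
    # then rename over the real name in one atomic step, so the reader
    # always sees either the complete old image or the complete new one.
    img.save("nav.tmp.jpg")
    os.rename("nav.tmp.jpg", "nav.jpg")

    # Reader side needs no change:
    #   img = Image.open("nav.jpg")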
How to modify Qt/PyQt install on OSX Question: After hours of Qt/PyQt compiling and tedious installation there is a new `./PyQt4/` sub-folder under `/Library/Python/2.7/site-packages/`. Since I don't want to run the same compilation/installation process on every Mac machine, I thought I would just copy/paste that `PyQt4` folder and import it using:

    import sys
    sys.path.append('/Library/Python/2.7/site-packages/PyQt4')
    from PyQt4 import QtCore, QtGui

But I am getting the following `ImportError`:

    ImportError: dlopen(/Library/Python/2.7/site-packages/PyQt4/_qt.so, 2): Library not loaded: QtDesigner.framework/Versions/4/QtDesigner
      Referenced from: /Library/Python/2.7/site-packages/PyQt4/_qt.so
      Reason: image not found

Apparently it looks for some additional files. Where and what are they? Answer: The `.so` files are [shared objects](http://en.wikipedia.org/wiki/Library_\(computing\)#Shared_libraries) that were created when you compiled PyQt on the system. Qt references these files at runtime. That said, linking these files might be more trouble than just documenting and scripting your install process.
Using scipy.stats library or another method to generate data that follows a distribution in a specific boundary Question: I want to sample with the `scipy.stats` library, using an upper and a lower boundary for the sampled data. I am interested in using `scipy.stats.lognorm` and `scipy.stats.expon`, setting a constraint `(low<=x<=up)` on the limits of the generated data points, and also estimating `logp` taking these limits into account. For instance, I cannot do

    LogNormal=scipy.stats.lognorm(q=[0,5],scale=[0.25],loc=0.0) #q:upper and lower limits, scale=sigma, loc=mean
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/vol/anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.py", line 739, in __call__
        return self.freeze(*args, **kwds)
      File "/vol/anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.py", line 736, in freeze
        return rv_frozen(self, *args, **kwds)
      File "/vol/anaconda/lib/python2.7/site-packages/scipy/stats/_distn_infrastructure.py", line 434, in __init__
        shapes, _, _ = self.dist._parse_args(*args, **kwds)
    TypeError: _parse_args() got an unexpected keyword argument 'q'

The documentation is a bit confusing: which one is `sigma`, and which input parameter is `mean`? Could anybody give an example of how they should be set with boundaries? Answer: There are several problems in your implementation:

1, your pdf cannot be evaluated at x=0

2, `-log(1./sqrt(2*pi)/self.sigma*exp(-0.5*((log(value)-self.mu)/self.sigma)**2))` should be: `-log(1./sqrt(2*pi)/self.sigma/value*exp(-0.5*((log(value)-self.mu)/self.sigma)**2))`

(And there may be more)

Another consideration is that you may want to keep the parameterization the same as `scipy` to avoid future confusion. Therefore, a minimal implementation:

    In [112]:

    import scipy.stats as ss
    import scipy.optimize as so
    import numpy as np

    class bounded_distr(object):
        def __init__(self, parent_dist):
            self.parent = parent_dist
        def bnd_lpdf(self, x, limits=None, *args, **kwargs):
            if limits and np.diff(limits)<=0:
                return -np.inf #nan may be better idea
            else:
                _v = -np.log(self.parent.pdf(x, *args, **kwargs)) # np. prefix added; a bare log relied on a pylab session
                _v[x<=limits[0]] = -np.inf
                _v[x>=limits[1]] = -np.inf
                return _v
        def bnd_cdf(self, x, limits=None, *args, **kwargs):
            if limits and np.diff(limits)<=0:
                return 0 #nan may be better idea
            elif limits:
                _v1 = self.parent.cdf(x, *args, **kwargs)
                _v2 = self.parent.cdf(limits[0], *args, **kwargs)
                _v3 = self.parent.cdf(limits[1], *args, **kwargs)
                _v4 = (_v1-_v2)/(_v3-_v2)
                _v4[_v4<0] = np.nan
                _v4[_v4>1] = np.nan
                return _v4
            else:
                return self.parent.cdf(x, *args, **kwargs)
        def bnd_rvs(self, size, limits=None, *args, **kwargs):
            if limits and np.diff(limits)<=0:
                return np.repeat(np.nan, size) #nan may be better idea
            elif limits:
                low, high = limits
                rnd_cdf = np.random.uniform(self.parent.cdf(x=low, *args, **kwargs),
                                            self.parent.cdf(x=high, *args, **kwargs),
                                            size=size)
                return self.parent.ppf(q=rnd_cdf, *args, **kwargs)
            else:
                return self.parent.rvs(size=size, *args, **kwargs)

    In [113]:

    bnd_logn = bounded_distr(ss.lognorm)

    In [114]:

    bnd_logn.bnd_rvs(10, limits=(0.1, 0.9), s=1, loc=0)

    Out[114]:

    array([ 0.23167598,  0.43185726,  0.34763109,  0.71020467,  0.5216074 ,
            0.60883528,  0.34353607,  0.84530444,  0.64145739,  0.82082447])

    In [115]:

    bnd_logn.bnd_lpdf(np.linspace(0,1,10), limits=(0.1, 0.9), s=1, loc=0)

    Out[115]:

    array([        inf,  1.13561188,  0.54598554,  0.42380072,  0.43681222,
            0.50389845,  0.5956744 ,  0.69920358,  0.80809192,  0.91893853])

    In [116]:

    bnd_logn.bnd_cdf(np.linspace(0,1,10), limits=(0.1, 0.9), s=1, loc=0)

    Out[116]:

    array([        nan,  0.00749028,  0.12434152,  0.28010562,  0.44267888,
            0.59832448,  0.74188947,  0.87201574,  0.98899161,         nan])
Sending DHCP Discover using python scapy Question: I am new to Python and learning some network programming. I wish to send a DHCP packet through my tap interface to my DHCP server and am expecting some response from it. I tried several packet-building techniques such as structs and ctypes and ended up using scapy. I am able to send the DHCP packet but unable to get any response from the DHCP server (analyzed using Wireshark and tcpdump). My packet looks the same as an original DHCP packet but fails to get a response. Here is my code

    import socket
    from scapy.all import *

    def main():

        if len(sys.argv)<3:
            print " fewer arguments."
            sys.exit(1)
        else:
            tap_interface = sys.argv[1]
            src_mac_address = sys.argv[2]

        ethernet = Ether(dst='ff:ff:ff:ff:ff:ff',src=src_mac_address,type=0x800)
        ip = IP(src ='0.0.0.0',dst='255.255.255.255')
        udp =UDP (sport=68,dport=67)
        fam,hw = get_if_raw_hwaddr(tap_interface)
        bootp = BOOTP(chaddr = hw, ciaddr = '0.0.0.0',xid = 0x01020304,flags= 1)
        dhcp = DHCP(options=[("message-type","discover"),"end"])
        packet = ethernet / ip / udp / bootp / dhcp

        fd = open('/dev/net/tun','r+')
        TUNSETIFF = 0x400454ca
        IFF_TAP = 0x0002
        IFF_NO_PI = 0x1000
        mode = IFF_TAP | IFF_NO_PI
        ifr = struct.pack('16sH', tap_interface, IFF_TAP | IFF_NO_PI)
        fcntl.ioctl(fd,TUNSETIFF,ifr)

        while True:
            sendp(packet, iface = tap_interface)
            time.sleep(10)

    if __name__ == '__main__':
        main()

Are there any other ways of achieving this? If so, please do mention them as well. Thanks in advance. Answer: Solved! I had the same problem. **The problem, I think, was in the srp() function: it can't receive packets on port 68, so I created a new function with a new thread that sniffs BOOTP messages and displays the packet fields.** You can simulate it:

> sniff(iface=myiface, filter="port 68 and port 67")

then send the packet using the srp() or sendp() function :)

NOTE: I used a multithreading mechanism because my program sends messages and sniffs for a rogue DHCP server on the network.
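For reference, a minimal sketch of that sniff-in-a-thread approach, reusing tap_interface and the packet built in the question:

    from threading import Thread
    from scapy.all import sniff, sendp

    def listen():
        # print a summary of any BOOTP/DHCP traffic seen on ports 67/68
        sniff(iface=tap_interface, filter="udp and (port 67 or port 68)",
              prn=lambda pkt: pkt.summary(), timeout=10)

    listener = Thread(target=listen)
    listener.start()
    sendp(packet, iface=tap_interface)
    listener.join()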
MATLAB: which language interface to use when calling external library functions offered with various interfaces? Question: Still very new to MATLAB and to programming in general, I am stuck on understanding how to best profit from what is documented in the official MATLAB documentation in the chapter Calling External Functions. I am as yet unable to judge which of the many offered pathways might be the most effective one to go for, in terms of how easily a pathway can be learned by a beginner and which one can, in the long term, be applied over and over again in clear MATLAB code. To put it as a very practical question: since, for instance, the third-party image processing function libraries ITK or OpenCV provide Java, C++, C and Python interfaces, and MATLAB has functionality to address such interfaces, which interface should a beginner in programming choose? Is one of them laid out more clearly, and thus easier to get comfortable with and quicker to learn to apply? I am afraid I will now hear from everybody something like "well, it depends what you want to do", and my answer could only be "don't know yet, I am learning programming and prefer to first gain some general success by going for the more cleanly designed and easier to understand approach, and thus would like to get a recommendation where to start". Please let me add this to my question and concerns: the highly respected Yair Altman states on his internet page "undocumentedmatlab.com", in the commercial for his "Undocumented Secrets of Matlab-Java Programming" book, that the Matlab programming environment relies on Java for numerous tasks, including networking, data-processing algorithms and graphical user-interface (GUI) work. I derive from this statement that learning to connect MATLAB specifically to Java will have significant advantages; The MathWorks itself seems to have decided to take advantage of such a connection when implementing MATLAB. But I can also see that The MathWorks, by providing the MEX functionality for MATLAB, seems to lean towards tight C/C++ incorporation, offering MEX besides the other possibilities to call external C functions. For me as a beginner it is confusing to work out which route of connectivity to external languages could be taken as the "standard" or "first to be recommended" one. Would any of you experienced programmers have some arguments for me about which route to focus on first? It is a long journey to learn programming, and I would not like to waste time on poorly recommended pathways. Answer: This question sounds like: "I am still learning how to drive, still not a very experienced driver. Please give me your tips about how to change a flat tire: what is the best tire to get a flat on, passenger side rear? What are the best places to get a flat tire, the mall, my office parking lot or the middle of the street?" Let me give you some tips: * Changing a tire will not make you a more knowledgeable driver. You will learn very few things from doing it, it is a frustrating experience, and it is not worth your time right now. Learn how to drive. **Explanation:** Making MATLAB call Java/C++/C or whatever other language will not make you a better MATLAB programmer, and frankly is of secondary importance. Until the first sentence of your question isn't "I'm still new to MATLAB and programming in general", you're wasting your time.
Like changing a flat tire, connecting MATLAB to other languages is not something cool or interesting; in fact it is the opposite: it is frustrating, error-prone and boring. * The day will come when you will have a flat tire. That day, the location where you get it and which tire it is will become secondary. You will need to learn how to change it, and you will. Trust me, you will. **Explanation:** You don't get to decide in what language the code that solves the exact problem you have right now is written, just as you don't get to decide where you get the flat tire. The day will come when you already know C++ and need MATLAB to call into some C++ code (either your code or someone else's). That day you will need to learn how to write a mex file in C++, compile it for your platform and invoke your code. Or the day will come when you need to invoke Java, and then you will learn how to call into Java. Obsessing over this when you don't know what you need to do and you're clearly not technically equipped to do it is just a waste of time.
python pickle - dumping a very huge list Question: I have two directories, each of which contains about 50,000 images, which are mostly 240x180 sizes. I want to pickle their pixel infos as training, validation, and test sets, but this apparently turns out to be very, very large, and eventually cause the computer to either free or run out of disk spaces. When the computer froze, the pkl file in the middle of being generated was 28GB. I'm not sure if this is supposed to be this large. Am I doing something wrong? Or is there a more efficient way to do this? from PIL import Image import pickle import os indir1 = 'Positive' indir2 = 'Negative' trainimage = [] trainpixels = [] trainlabels = [] validimage = [] validpixels = [] validlabels = [] testimage = [] testpixels = [] testlabels = [] i=0 for (root, dirs, filenames) in os.walk(indir1): print 'hello' for f in filenames: try: im = Image.open(os.path.join(root,f)) if i<40000: trainpixels.append(im.tostring()) trainlabels.append(0) elif i<45000: validpixels.append(im.tostring()) validlabels.append(0) else: testpixels.append(im.tostring()) testlabels.append(0) print str(i)+'\t'+str(f) i+=1 except IOError: continue i=0 for (root, dirs, filenames) in os.walk(indir2): print 'hello' for f in filenames: try: im = Image.open(os.path.join(root,f)) if i<40000: trainpixels.append(im.tostring()) trainlabels.append(1) elif i<45000: validpixels.append(im.tostring()) validlabels.append(1) else: testpixels.append(im.tostring()) testlabels.append(1) print str(i)+'\t'+str(f) i+=1 except IOError: continue trainimage.append(trainpixels) trainimage.append(trainlabels) validimage.append(validpixels) validimage.append(validlabels) testimage.append(testpixels) testimage.append(testlabels) output=open('data.pkl','wb') pickle.dump(trainimage,output) pickle.dump(validimage,output) pickle.dump(testimage,output) Answer: The pickle file format isn't particularly efficient, especially not for images. Even if your pixels were stored as 1 byte per pixel, you would have > 50,000 × 240 × 180 = 2,160,000,000 so 2 GB. Your pixels undoubtedly take more space than that, I'm not sure what the PIL `tostring()` method actually does on an image. It's entirely plausible that your resulting file could be in the tens of gigabytes. You may want to consider a storage method other than pickle. For example, what would be wrong with simply storing the files on disk in their native image format, and pickling a list of the file names?
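A minimal sketch of that filename-based approach: pickle only paths and labels, and decode pixels lazily when an image is actually needed (trainfiles and trainlabels stand in for the parallel lists you would build in the os.walk loops):

    import pickle
    from PIL import Image

    # store paths and labels instead of raw pixel strings
    with open('data.pkl', 'wb') as output:
        pickle.dump((trainfiles, trainlabels), output, pickle.HIGHEST_PROTOCOL)

    # later: load images one at a time
    with open('data.pkl', 'rb') as source:
        trainfiles, trainlabels = pickle.load(source)
    for path, label in zip(trainfiles, trainlabels):
        im = Image.open(path)  # decoded only when needed
        # ... process im ...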
Make string length divisible by 6 in Python Question: I want the program to take a user's input and format its length so it is divisible by six, by adding the characters of the string onto the end of the string until `len(string)%6 == 0`:

    string = raw_input('Enter String')
    while len(string)<=6-len(string)%6 and len(string)%6 != 0:
        print 6-len(string)%6
        string = string + string
    while len(string)%6 != 0:
        string = string + string[0::6-(len(string)%6)]
    print string

Examples (using numbers as a stand-in for whatever character could be situated at that point):

    1 -> 111111
    12 -> 121212
    123 -> 123123
    1234 -> 123412
    12345 -> 123451
    123456 -> 123456
    1234567 -> 123456712345

I am getting stuff like this:

    12 -> 121211
    12345 -> 123451234513524142551545411111

Answer: You could try something like this (code written/tested in Python 3.4):

    import itertools

    lStr = input("Enter string:")

    def getStrMod(aStr, aMod):
        lIter = itertools.cycle(list(aStr))
        while len(aStr) % aMod != 0:
            aStr += next(lIter)
        return aStr

    print(getStrMod(lStr, 6))

Results:

    Enter string:1
    111111
    Enter string:12
    121212
    Enter string:12345
    123451
    Enter string:1234567
    123456712345
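Another way to get the same result without itertools is plain arithmetic: repeat the string enough times and slice to the next multiple of six:

    def pad_to_multiple(s, n=6):
        if len(s) % n == 0:
            return s
        target = len(s) + (n - len(s) % n)  # next multiple of n
        return (s * (target // len(s) + 1))[:target]

    print(pad_to_multiple("1"))        # 111111
    print(pad_to_multiple("12345"))    # 123451
    print(pad_to_multiple("1234567"))  # 123456712345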
Is this an efficient way to generate the Thue-Morse sequence in Python? Question: Is using a generator as in the code below, an efficient way to generate the [Thue-Morse sequence](http://en.wikipedia.org/wiki/Thue%E2%80%93Morse_sequence) in Python? # generate the Thue-Morse sequence def genThueMorse(): # initialize tms = '0' curr = 0 while True: # generate next sequence if curr == len(tms): tmp = '' for i in range(len(tms)): if tms[i] is '0': tmp += '1' else: tmp += '0' tms += tmp yield tms[curr] curr +=1 Here is code to test it: tms = koch.genThueMorse() while True: print(next(tms)) Answer: This is concise, is it "efficient"? import itertools def genThueMorse(): for n in itertools.count(): yield (1 if bin(n).count('1')%2 else 0)
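A quick check that the generator produces the expected prefix of the sequence:

    tms = genThueMorse()
    print([next(tms) for _ in range(16)])
    # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]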
Doctests fail with UnicodeDecodeError on C-extension and Python3 Question: I am having difficulty getting my testing framework to work for a C-extension module for both Python2 and Python3. I like to run my docstrings through `doctest` to make sure that I am not feeding my users bad information, so I want to run `doctest` as part of my testing. I don't believe that the source of my problem is the docstrings themselves, but rather how the `doctest` module is trying to read my extension module. If I run `doctest` with Python2 (on the module compiled against Python2), I get the output that I expect: $ python -m doctest myext.so -v ... 1 items passed all tests: 98 tests in myext.so 98 tests in 1 items. 98 passed and 0 failed. Test passed. However, when I do the same but with Python3, I get a `UnicodeDecodeError`: $ python3 -m doctest myext3.so -v Traceback (most recent call last): ... File "/usr/local/Cellar/python3/3.3.3/Frameworks/Python.framework/Versions/3.3/lib/python3.3/doctest.py", line 223, in _load_testfile return f.read(), filename File "/usr/local/Cellar/python3/3.3.3/Frameworks/Python.framework/Versions/3.3/lib/python3.3/codecs.py", line 301, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcf in position 0: invalid continuation byte To get some more info, I ran it through `pytest` with full traceback: $ python3 -m pytest --doctest-glob "*.so" --full-trace ... self = <encodings.utf_8.IncrementalDecoder object at 0x102ff5110> input = b'\xcf\xfa\xed\xfe\x07\x00\x00\x01\x03\x00\x00\x00\x08\x00\x00\x00\r\x00\x00\x00\xd0\x05\x00\x00\x85\x00\x00\x00\x00\x...edString\x00_PyUnicode_FromString\x00_Py_BuildValue\x00__Py_FalseStruct\x00__Py_TrueStruct\x00dyld_stub_binder\x00\x00' final = True def decode(self, input, final=False): # decode input (taking the buffer into account) data = self.buffer + input > (result, consumed) = self._buffer_decode(data, self.errors, final) E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcf in position 0: invalid continuation byte /usr/local/Cellar/python3/3.3.3/Frameworks/Python.framework/Versions/3.3/lib/python3.3/codecs.py:301: UnicodeDecodeError * * * It looks like `doctest` is actually _reading_ the `.so` file to get the docstrings (rather than importing the module), but Python3 doesn't know how to decode the input. I can confirm this by replicating the byte string and traceback by trying to read the `.so` file myself: $ python3 Python 3.3.3 (default, Dec 10 2013, 20:13:18) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> open('myext3.so').read() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/Cellar/python3/3.3.3/Frameworks/Python.framework/Versions/3.3/lib/python3.3/codecs.py", line 301, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcf in position 0: invalid continuation byte >>> open('myext3.so', 'rb').read() b'\xcf\xfa\xed\xfe\x07\x00\x00\x01\x03\x00\x00\x00\x08\x00\x00\x00\r\x00\x00\x00\xd0\x05...' * * * Has anyone else run into this problem before? Is there a standard (or not-so- standard) way to get `doctest` to execute tests on C extension modules on python3? **Update** : I should also add that I get identical results on Travis-CI ([see here](https://travis-ci.org/SethMMorton/fastnumbers/jobs/31680811)), so it's not specific to my local build. 
Answer: I have found a workaround to this problem so I will post it, but I find it rather unsatisfying. I am still looking for more elegant/less hacky solutions to this. * * * There are **three** problems with `doctest.py` that need to be overcome to make this work: **1) Get doctest to consider .so files as python modules.** If you look at the `doctest.py` source, you will notice in the test runner a block that looks similar to this (depending on the python version you are running): if filename.endswith(".py"): # It is a module -- insert its dir into sys.path and try to # import it. If it is part of a package, that possibly # won't work because of package imports. dirname, filename = os.path.split(filename) sys.path.insert(0, dirname) m = __import__(filename[:-3]) del sys.path[0] failures, _ = testmod(m) else: failures, _ = testfile(filename, module_relative=False) What is happening here is `doctest.py` is checking for the ".py" extension, and if so the file is loaded as a python module, but otherwise the file is read as if it were text (like a README.rst might be). We need to get `doctest.py` to acknowledge that a file with ".so" extension is a python module. To do this, simply add a check for the ".so" extension by modifying this `if` block to read if filename.endswith(".py") or filename.endswith(".so"): ... **2) Get doctest to identify the functions in the C-extension module** `doctest.py` uses the [inspect.isfunction](https://docs.python.org/3/library/inspect.html#inspect.isfunction) function to determine what objects are functions when recursively searching for docstrings within a module object. The problem with this function is that it only identifies functions written in python, not in C (python identifies C-extension functions as builtin). So, to identify our functions when recursing through the module, we need to use [inspect.isbuiltin](https://docs.python.org/3/library/inspect.html#inspect.isbuiltin) instead. To rectify this, we need to locate the `DocTestFinder._find` method in `doctest.py` and change how it looks for functions. I converted # Recurse to functions & classes. if ((inspect.isfunction(val) or inspect.isclass(val)) and self._from_module(module, val)): self._find(tests, val, valname, module, source_lines, globs, seen) to # Recurse to functions & classes. if ((inspect.isbuiltin(val) or inspect.isclass(val)) and self._from_module(module, val)): self._find(tests, val, valname, module, source_lines, globs, seen) **3) Properly remove the version tag on the .so file (Python3 only).** On Python3, C-extensions can be tagged with a version identifier (i.e. "myext.cpython-3mu.so", please see [PEP 3149](http://legacy.python.org/dev/peps/pep-3149/)). We need to know how to remove this when doing the initial import in the `doctest.py` test runner. To do this, I converted the line m = __import__(filename[:-3]) to from sysconfig import get_config_var m = __import__(filename[:-3] if filename.endswith(".py") else filename.replace(get_config_var("EXT_SUFFIX"), "")) This is only needed for Python3. * * * After making these modifications, I can get doctest to work as expected on both Python2 and Python3. Since these modifications are rather annoying, I have made a `patch_doctest.py` script that does this automatically and puts the patched `doctest.py` in your current directory. You can get this file [here](https://github.com/SethMMorton/fastnumbers/blob/develop/patch_doctest.py) if you want to use it. 
You can then run the tests on the extension modules like this $ python2 patch_doctest.py $ python2 -m doctest myext2.so $ rm doctest.py $ python3 patch_doctest.py $ python3 -m doctest myext3.so As evidence that this works, [here are the new Travis-CI results](https://travis-ci.org/SethMMorton/fastnumbers/builds/31772265).
ValueError: time data '24:00' does not match format '%H:%M' Question: I'm having serious trouble converting 24-hour time to 12-hour time.

    def standard_time(t):
        t = datetime.strptime(t, "%H:%M")
        return t

When fed in `'24:00'` we get

    ValueError: time data '24:00' does not match format '%H:%M'

I also tried converting using %I (12-hour) instead of %H, but I get an error whenever the hours go over 12:

    def standard_time(t):
        t = datetime.strptime(t, "%I:%M")
        return t

Sort of stuck...

    ValueError: time data '13:30' does not match format '%I:%M'

Does Python have a simple 24-hour to 12-hour converter? Ideally 23:00 should output 11:00 PM and 24:00 should not throw an error! Answer: You have to use 00:00 instead of 24:00. The last time in 24-hour format is 23:59; after that, the next value is 00:00. For example, if you have 23:59 and add one more minute:

    >>> a = datetime(1900, 1, 1, 23, 59)
    >>> from datetime import timedelta
    >>> a + timedelta(minutes=1)
    datetime.datetime(1900, 1, 2, 0, 0)

You get the next date, with time 00:00.
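If the input really can contain '24:00' and you want it treated as midnight of the following day, a small wrapper works. A sketch, assuming '24' only ever appears in the hour field:

    from datetime import datetime, timedelta

    def standard_time(t):
        if t.startswith('24:'):
            # parse as 00:MM and roll over to the next day
            parsed = datetime.strptime('00' + t[2:], "%H:%M") + timedelta(days=1)
        else:
            parsed = datetime.strptime(t, "%H:%M")
        return parsed.strftime("%I:%M %p")

    print(standard_time('23:00'))  # 11:00 PM
    print(standard_time('24:00'))  # 12:00 AM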
Hashed passcode matching stuck python Question: I am doing a fun little project, making a kind of secret code program with a GUI (Tkinter) to encrypt my text (not very securely). I am trying to make a password on this program linked to a txt file. The program has the default password 'abc', stored in a text file in sha-224. When the user enters 'abc', it will hash their input in sha-224 and compare it to the stored password in _hexutf2.txt. Once they have logged in, the user will have the option to choose a new password by clicking New Passcode, entering their previous passcode, clicking next, and then clicking new code. After clicking new code, the user enters a new passcode in the entrypassVariable, then clicks enter passcode, which will write the new passcode to the txt file. Instead, the program hangs on the first pressing of Enter passcode, despite the fact that I had entered 'abc', the default passcode. The program worked before I added the password element, so I will post only the password code here, but I will link to the entire program if anyone wants to see it.

***EDIT*** Posting just the essential code here. The problem in my main program is caused by this. For some reason, this program prints this:

    Stored Password: cd62248424c8057fea8fff161ec753d7a29f47a7d0af2036a2e79632

    Enter Password: Moo
    Password Attempt Hash: cd62248424c8057fea8fff161ec753d7a29f47a7d0af2036a2e79632
    Password Incorrect

<http://snipplr.com/view/75865/cryptographer/> <---- Entire program code

    import hashlib

    passfile = open('pass.txt','r')
    stored_password = str(passfile.read())
    print 'Stored Password: ' + stored_password

    password = raw_input('Enter Password: ')
    enter_pass_sha = hashlib.sha224(password)
    enter_password = str(enter_pass_sha.hexdigest())
    print 'Password Attempt Hash: ' + enter_password

    if stored_password == enter_password:
        print 'Password Correct'
    else:
        print 'Password Incorrect'

Answer: You should have noticed this:

    Stored Password: cd62248424c8057fea8fff161ec753d7a29f47a7d0af2036a2e79632
    # blank line?!
    Enter Password: Moo
    Password Attempt Hash: cd62248424c8057fea8fff161ec753d7a29f47a7d0af2036a2e79632
    Password Incorrect

When you `read` the `stored_password` in from the `passfile`, it comes with a newline character `'\n'` at the end. You need to do:

    with open('pass.txt') as passfile:
        stored_password = passfile.read().strip()
    print 'Stored Password: ' + stored_password

Note `str.strip`, called without arguments, removes all whitespace, including newlines, from the start and end of the string. Note also the use of the `with` context manager for file handling, which handles errors and closes the file for you.
Incorrect results with genetic algorithm image evolution Question: I'm attempting to implement a program originally created by [Roger Alsing](http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/). I've done quite a bit of research on what other people have implemented. I decided to write my program in Python, and use basic triangles as the shapes. When I run the program, it does not show improvement after more generations (the triangles tend to just disappear). I'm assuming something is wrong with my mutate function. Can anyone tell me why it's producing less than satisfactory results? My code:

    import random
    import copy
    from PIL import Image, ImageDraw

    optimal = Image.open("mona_lisa.png")
    optimal = optimal.convert("RGBA")

    size = width, height = optimal.size

    num_shapes = 128
    generations = 50000

    def random_genome():
        elements = []

        for i in range(num_shapes):
            x = (random.randint(0, width), random.randint(0, height))
            y = (random.randint(0, width), random.randint(0, height))
            z = (random.randint(0, width), random.randint(0, height))

            r = random.randint(0, 255)
            g = random.randint(0, 255)
            b = random.randint(0, 255)
            alpha = random.randint(10, 255)

            elements.append([x, y, z, r, g, b, alpha])

        return elements

    def render_daughter(dna):
        image = Image.new("RGBA", (width, height), "white")
        draw = ImageDraw.Draw(image)

        for item in dna:
            x = item[0]
            y = item[1]
            z = item[2]
            r = item[3]
            g = item[4]
            b = item[5]
            alpha = item[6]

            color = (r, g, b, alpha)

            draw.polygon([x, y, z], fill = color)

        return image

    def mutate(dna):
        dna_copy = copy.deepcopy(dna)

        shape_index = random.randint(0, len(dna) - 1)
        roulette = random.random() * 2

        if roulette < 1:
            if roulette < 0.25:
                dna_copy[shape_index][3] = int(random.triangular(255, dna_copy[shape_index][3]))
            elif roulette < 0.5:
                dna_copy[shape_index][4] = int(random.triangular(255, dna_copy[shape_index][4]))
            elif roulette < 0.75:
                dna_copy[shape_index][5] = int(random.triangular(255, dna_copy[shape_index][5]))
            elif roulette < 1.0:
                dna_copy[shape_index][6] = int(0.00390625 * random.triangular(255, dna_copy[shape_index][6] * 255))
        else:
            if roulette < 1.25:
                dna_copy[shape_index][0] = (int(random.triangular(width, dna_copy[shape_index][0][0])), int(random.triangular(height, dna_copy[shape_index][0][1])))
            elif roulette < 1.5:
                dna_copy[shape_index][2] = (int(random.triangular(width, dna_copy[shape_index][3][0])), int(random.triangular(height, dna_copy[shape_index][4][1])))
            elif roulette < 1.75:
                dna_copy[shape_index][3] = (int(random.triangular(width, dna_copy[shape_index][4][0])), int(random.triangular(height, dna_copy[shape_index][5][1])))

        return dna_copy

    def fitness(original, new):
        fitness = 0

        for x in range(0, width):
            for y in range(0, height):
                r1, g1, b1, a1 = original.getpixel((x, y))
                r2, g2, b2, a2 = new.getpixel((x, y))

                deltaRed = r1 - r2
                deltaGreen = g1 - g2
                deltaBlue = b1 - b2
                deltaAlpha = a1 - a2

                pixelFitness = deltaRed + deltaGreen + deltaBlue + deltaAlpha

                fitness += pixelFitness

        return fitness

    def generate():
        mother = random_genome()
        best_genome = mother
        best_fitness = fitness(optimal, render_daughter(best_genome))

        for i in range(generations):
            daughter = copy.deepcopy(best_genome)
            daughter = mutate(daughter)

            daughter_fitness = fitness(optimal, render_daughter(daughter))

            if daughter_fitness < best_fitness:
                best_genome = daughter
                best_fitness = daughter_fitness

            if i % 50 == 0:
                print i

            if i % 1000 == 0:
                render_daughter(best_genome).save("iterations/output_" + str(i) + ".png")

    if __name__ == "__main__":
        generate()

The beginning image I am using:

![Mona Lisa](http://i.stack.imgur.com/IclTk.png)

The output image after 1,000 generations:

![Output 1000](http://i.stack.imgur.com/5cWEC.png)

Output image after 5,000 generations:

![Output 5000](http://i.stack.imgur.com/bwdwm.png)

Answer: You're checking whether the new fitness is smaller than the current fitness:

    if daughter_fitness < best_fitness:

The fitness you calculate, however, can be negative:

    deltaRed = r1 - r2
    deltaGreen = g1 - g2
    deltaBlue = b1 - b2
    deltaAlpha = a1 - a2

    pixelFitness = deltaRed + deltaGreen + deltaBlue + deltaAlpha

    fitness += pixelFitness

The various `delta*` variables can be negative or positive; your test will favour negative deltas, increasing the whiteness of the "best" image (the higher the values of `r2`, `g2`, etc., the lower the fitness, and the whiter the image, until they are all at 255, 255, 255; I don't know if increasing alpha increases or decreases the transparency). Thus, you should take the absolute value of the differences:

    deltaRed = abs(r1 - r2)
    deltaGreen = abs(g1 - g2)
    deltaBlue = abs(b1 - b2)
    deltaAlpha = abs(a1 - a2)

You could also consider the sum of the squares, or the square root of the sum of squares (which, basically, turns it into a least-squares fitting routine):

    deltaRed = r1 - r2
    deltaGreen = g1 - g2
    deltaBlue = b1 - b2
    deltaAlpha = a1 - a2

    pixelFitness = math.sqrt(deltaRed**2 + deltaGreen**2 + deltaBlue**2 + deltaAlpha**2)

    fitness += pixelFitness

Finally, I noticed your program doesn't work for me. It's in the second half of your `mutate()` function, where you assign new values to x, y or z, but use indices above 2. `random_genome()` shows that you try to access colour values instead, which are integers, and even attempt to index those. This leads to exceptions, so I don't even know how you could get this program running. It either never ran in the first place, or you didn't properly copy-paste. I've changed that to

    if roulette < 1.25:
        dna_copy[shape_index][0] = (int(random.triangular(
            width, dna_copy[shape_index][0][0])), int(
            random.triangular(height, dna_copy[shape_index][0][1])))
    elif roulette < 1.5:
        dna_copy[shape_index][1] = (int(random.triangular(
            width, dna_copy[shape_index][1][0])), int(
            random.triangular(height, dna_copy[shape_index][1][1])))
    elif roulette < 1.75:
        dna_copy[shape_index][2] = (int(random.triangular(
            width, dna_copy[shape_index][2][0])), int(
            random.triangular(height, dna_copy[shape_index][2][1])))

which seems to do what you want.
python - PipeMapRed.waitOutputThreads(): subprocess failed with code 1 Question: Recently, I wanted to parse websites and then use BeautifulSoup to filter what I want and write it to a CSV file in HDFS. Now I am at the stage of filtering the website code with BeautifulSoup, and I want to use the MapReduce method to execute it:

    hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.3.0-mr1-cdh5.0.2.jar -mapper /pytemp/filter.py -input /user/root/py/input/ -output /user/root/py/output40/

The input file consists of key-value pairs (one per line):

    (key, value) = (url, content)

By content, I mean:

    <html><head><title>...</title></head><body>...</body></html>

filter.py file:

    #!/usr/bin/env python
    #!/usr/bin/python
    #coding:utf-8

    from bs4 import BeautifulSoup
    import sys

    for line in sys.stdin:
        line = line.strip()
        key, content = line.split(",")

        #if the following two lines do not exist, the program will execute successfully
        soup = BeautifulSoup(content)
        output = soup.find()

        print("Start-----------------")
        print("End------------------")

BTW, I think I do not need a reduce.py for this job. **However, I got this error message**:

    Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
        at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
        at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
        at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

Here is a reply that says it is a memory issue, but my input file is just 3 MB. <http://grokbase.com/t/gg/rhadoop/13924fs4as/972-getting-error-pipemapred-waitoutputthreads-while-running-mapreduce-program-for-40mb-of-sizedataset>

I have no idea what is wrong with my program. I have searched a lot for it, but it still does not work. My environment is:

1. CentOS6
2. Python2.7
3. Cloudera CDH5

I will appreciate your help with this situation.

**EDIT on 2016/06/24**

First of all, I checked the error log and found that the problem was _too many values to unpack_ (thanks also to @kynan's answer). Here is an example of why it happened:

    <font color="#0000FF">
        SomeText1
        <font color="#0000FF">
            SomeText2
        </font>
    </font>

If part of the _content_ looks like the above, and I call soup.find("font", color="#0000FF") and assign the result to _output_, then two _font_ tags are assigned to one _output_, and that is why the error _too many values to unpack_ occurs.

**Solution**

Just change `output = soup.find()` to `(Var1, Var2, ...) = soup.find_all("font", color="#0000FF", limit=AmountOfVar)` and it works well :) Answer: This error usually means that the mapper process died. To find out why, check the user logs in `$HADOOP_PREFIX/logs/userlogs`: there is one directory per job and inside it one directory per container. Each container directory contains a file `stderr` with the output sent to stderr, i.e. error messages.
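Putting the fix together with one extra guard, a corrected filter.py might look like this sketch; the split(",", 1) is an added guard against commas inside the HTML content, and find_all with an explicit filter replaces the bare find():

    #!/usr/bin/env python
    #coding:utf-8
    import sys
    from bs4 import BeautifulSoup

    for line in sys.stdin:
        line = line.strip()
        key, content = line.split(",", 1)  # split on the first comma only
        soup = BeautifulSoup(content)
        for tag in soup.find_all("font", color="#0000FF"):
            print("%s\t%s" % (key, tag.get_text()))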
py-bcrypt giving different hash results than flask-bcrypt - possible? Question: I am generating a hash in a Python app using the flask-bcrypt package on top of flask-security. For a password of test12 I got a result of '$2a$12$ibinoz7sTc76Vh09shUhruYD8CrJyUxPpu1m.kb6LmFmzvWBbb52a' (used a randomly generated salt, per bcrypt docs) but when I do the following in a Python terminal:

    import bcrypt  # the py-bcrypt package is imported as "bcrypt"

    print bcrypt.hashpw("test12", '$2a$12$ibinoz7sTc76Vh09shUhruYD8CrJyUxPpu1m.kb6LmFmzvWBbb52a')

I get:

    $2a$12$ibinoz7sTc76Vh09shUhru1wllZi3KqQEluhhInj5FAghM4uczmxe

when I think I should be getting back the original as a match. I also checked with .checkpw and it returns False. What am I doing wrong? Can they possibly yield different results? My app authenticates just fine.

UPDATE: when I manually call (in the Flask app) `print check_password_hash('$2a$12$ibinoz7sTc76Vh09shUhruYD8CrJyUxPpu1m.kb6LmFmzvWBbb52a', 'test12')` I also get False. Very strange indeed, considering 'test12' works to log in. If I generate a new password hash in the app and check it using the above, it passes.

UPDATE 2: I have learned that flask-security uses HMAC as well as the chosen password hashing backend (bcrypt, in my case), and I suspect that this may be the cause of the inconsistency. Assuming that is true, the question becomes: how does one verify a password hash that has both HMAC and bcrypt applied? My app is configured to provide a secret key as an HMAC salt (sha512), so I tried:

    result = hmac.new('...my apps secretkey...', 'test12', hashlib.sha512).hexdigest()

    print bcrypt.checkpw(result, '$2a$12$ibinoz7sTc76Vh09shUhruYD8CrJyUxPpu1m.kb6LmFmzvWBbb52a')

But that isn't working either. Answer: If you want to generate the same hash, you need _both_ the same password _and_ the same salt.
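One detail worth checking, since flask-security pre-hashes the password with HMAC before handing it to the bcrypt backend: if memory serves, its helper base64-encodes the raw HMAC digest rather than using hexdigest(). Treat the exact encoding as an assumption to verify against your flask-security version; a sketch of the matching pre-hash step would be:

    import base64
    import hashlib
    import hmac
    import bcrypt

    def get_hmac(password, salt):
        # assumption: b64 of the raw digest, not hexdigest()
        return base64.b64encode(hmac.new(salt, password, hashlib.sha512).digest())

    stored = '$2a$12$ibinoz7sTc76Vh09shUhruYD8CrJyUxPpu1m.kb6LmFmzvWBbb52a'
    candidate = get_hmac('test12', '...my apps secretkey...')
    print bcrypt.hashpw(candidate, stored) == stored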
JSON schema $ref does not work for relative path Question: I have 3 schemas: child schema: { "title": "child_schema", "type": "object", "properties": { "wyx":{ "type": "number" } }, "additionalProperties": false, "required": ["wyx"] } parent schema: { "title": "parent", "type": "object", "properties": { "x": { "type": "number" }, "y": { "type": "number" }, "child": { "$ref": "file:child.json" } } } grandpa schema: { "type": "object", "title": "grandpa", "properties": { "reason": { "$ref": "file:parent.json" } }, "additionalProperties": false } As you can see, gradpa has a ref to parent and parent has a ref to child. All these 3 files are inside the same folder. When I use python validator to validate grandpa schema, I will keep on getting an error called RefResolutionError. HOWEVER, if I do not have grandpa and I just use parent schema and child schema, everything worked!! So the issue is I cannot have a ref pointing to a ref(2 levels). But I can have a ref pointing to a schema(just 1 level.) I wonder why Answer: Your references are incorrect. If the referenced schemes are in the same folder, use simple relative path references like: "child": {"$ref": "child.json"}, "reason": {"$ref": "parent.json"} If you're using [jsonschema](https://github.com/Julian/jsonschema) for validation, don't forget to set a reference resolver in order to resolve the paths to referenced schemas: import os from jsonschema import validate, RefResolver instance = {} schema = grandpa # this is a directory name (root) where the 'grandpa' is located schema_path = 'file:///{0}/'.format( os.path.dirname(get_file_path(grandpa)).replace("\\", "/")) resolver = RefResolver(schema_path, schema) validate(instance, schema, resolver=resolver)
Parse REST API: Proper way to query class object with ACL (python) Question: There are three classes in my Parse database. For some reason, fetching data from the second class ContactsData always returns an empty result. Would be great if anyone could provide insights into the problem! **1\. The default "Users" class** No problems getting data from this class following examples at <https://parse.com/docs/rest#users> ACL column is empty **2\. A custom "ContactsData" class. A user is allowed to add name and phone of contacts through our iOS app** Custom columns: name (string), phone (number) ACL is write: true and read: true. What I want to do: Fetch all contacts of a given user using python REST API. **METHOD 1:** import json,httplib import apikeys import dataset MY_CLASS = '/1/classes/ContactsData' connection = httplib.HTTPSConnection('api.parse.com', 443) connection.connect() connection.request('GET', MY_CLASS, '', { "X-Parse-Application-Id": apikeys.PARSE_APP_ID, "X-Parse-REST-API-Key": apikeys.PARSE_REST_API_KEY,}) parseData = json.loads(connection.getresponse().read()) print parseData Output: {'results':[]} Keep getting this even when there are objects in this class **METHOD 2** Also tried MY_CLASS = '/1/classes/ContactsData/ABCDEFG' where ABCDEFG is the objectId of a particular object. Output: {'code': 101, 'error': 'requested resource was not found'} **3\. A custom "Testing" class** Exactly the same as MY_CLASS. Just that I did not use ACL, and created the object directly from the Parse dashboard. This time, I was able to get the data using MY_CLASS = '/1/classes/Testing' Answer: You're use REST calls without any logged in user/session-token, so you'll be using default permissions. Unless you've set the class to allow anonymous reads and the row ACL also doesn't restrict it, you wont get any data. If you want to do the requests as a User that does have read rights to the row you'll need to login and then use the session-token in the headers. More information can be found here: <https://parse.com/docs/rest#users>
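To make that concrete, here is a minimal sketch in the same httplib style as the question: first log in as the user who owns the ContactsData rows, then repeat the query with the returned session token (the username and password are placeholders; `apikeys` is the question's own module):

    import json, httplib, urllib
    import apikeys

    connection = httplib.HTTPSConnection('api.parse.com', 443)
    connection.connect()

    # Log in as the user whose contacts we want to read.
    params = urllib.urlencode({'username': 'someuser', 'password': 'somepassword'})
    connection.request('GET', '/1/login?%s' % params, '', {
           "X-Parse-Application-Id": apikeys.PARSE_APP_ID,
           "X-Parse-REST-API-Key": apikeys.PARSE_REST_API_KEY})
    session_token = json.loads(connection.getresponse().read())['sessionToken']

    # Query again, now authenticated via the session token.
    connection.request('GET', '/1/classes/ContactsData', '', {
           "X-Parse-Application-Id": apikeys.PARSE_APP_ID,
           "X-Parse-REST-API-Key": apikeys.PARSE_REST_API_KEY,
           "X-Parse-Session-Token": session_token})
    print json.loads(connection.getresponse().read())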
How to filter user's input (html, with PHP backend) with python? Question: There is a web application written in PHP and HTML. What I want is to filter a user's input for a variety of cases and sanitize it. For example, I want to compare the input from a form (a string) with a list of allowed strings and, depending on whether it is right or wrong, trigger the suitable PHP function to handle it. My question is: how do I pass the user input to the python script, and then use the outcome of that python script as input for PHP? Thanks.

Answer: You can call the Python script from your PHP file as a shell command, passing it JSON-formatted arguments. Then have the Python script output the response (also JSON encoded) and have the PHP file capture that. Here's an example I used recently, cobbled together from the links below:

**PHP file:**

    $py_input = ... // Your data goes here.

    // Call the Python script, passing it the JSON argument, and capturing the result.
    $py_output = shell_exec('python script.py ' . escapeshellarg(json_encode($py_input)));

    $py_result = json_decode($py_output);

**Python file:**

    import json
    import sys

    php_input = json.loads(sys.argv[1]) # The first command line argument.

    # Do your thing.

    php_output = ... # Whatever your output is.

    print json.dumps(php_output) # Print it out in JSON format.

[Passing a Python list to php](http://stackoverflow.com/questions/17235467/passing-a-python-list-to-php)

[executing Python script in PHP and exchanging data between the two](http://stackoverflow.com/questions/14047979/executing-python-script-in-php-and-exchanging-data-between-the-two)
How do I look up the value of a multi-level pointer inside a process in Python? Question: I have a process, and I want to look up the value at an address inside that process, but that address is a multi-level pointer and has a few offsets attached to it. How do I do this in Python?

Answer: I'm answering my own question to document a way of doing this in Python 3.

First you need some way to look up the pid of the process we are working on. I used the module psutil to do this, but there are other ways to do it too.

    import psutil

    def get_pid(process_name):
        pid = None
        for proc in psutil.process_iter():
            try:
                if (proc.name() == process_name):
                    pid = proc.pid
            except (PermissionError, psutil.AccessDenied):
                pass
        return pid

Now we have the pid of the process we want to work on. We'll use that later to get a handle on the process.

Now I said it's a multi-level pointer. How that works is that we have an initial address and a list of offsets. We first look up the value at our initial address, then apply the first offset to that value to get the next address. We look up the value at that address, apply the next offset, and so on, for as many offsets as the list contains. The last lookup gives us the final address, and the value at that address is the actual value we are after.

To do this programmatically, we need the pid (for example 4045), the address (for example 0x0163B4D8), the list of offsets (for example [0x37C, 0x3C]) and the size of the data (for example, an unsigned int is 4 bytes).

    from ctypes import *
    from ctypes.wintypes import *

    PROCESS_ALL_ACCESS = 0x1F0FFF

    def read_process_memory(pid, address, offsets, size_of_data):
        # Open the process and get the handle.
        process_handle = windll.kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
        read_buff = create_string_buffer(size_of_data)
        count = c_ulong(0)
        current_address = address
        # Append a sentinel so the final iteration reads the actual data;
        # copy the list first so the caller's list is not mutated.
        offsets = offsets + [None]
        for offset in offsets:
            if not windll.kernel32.ReadProcessMemory(process_handle, current_address, cast(read_buff, LPVOID), size_of_data, byref(count)):
                windll.kernel32.CloseHandle(process_handle)
                return -1 # Error, so we're quitting.
            else:
                val = read_buff.value
                result = int.from_bytes(val, byteorder='little')
                # Here that sentinel None comes into play.
                if(offset != None):
                    current_address = result+offset
                else:
                    windll.kernel32.CloseHandle(process_handle)
                    return result

That's the basic concept, and of course the code could be improved.
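A short usage sketch tying the two helpers together (the process name, base address and offsets are made-up values for illustration):

    # Hypothetical example values - substitute those of your target process.
    pid = get_pid("game.exe")
    if pid is not None:
        value = read_process_memory(pid, 0x0163B4D8, [0x37C, 0x3C], 4)
        print("read failed" if value == -1 else value)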
How does class datetime.timedelta work? Question: I have been writing a python script to delete files in a folder if they are older than x months. I am easily able to delete files older than a year old. I am also able to take two values, x (a file) and y (a num of months), and delete x if it is older than y months old, assuming they are in the same calendar year. The problem I ran into is when the year passes from one to the next, but the file x is still younger than y months. (e.g. if the current date was Jan. 2014 and the file was created Dec. 2013, it should not be deleted, but if the current date is Jan 2014 and the file was modified in March 2013, the file should be deleted). I was trying to come up with an algorithm for solving this based on month numbers (jan = 01, etc.), but I could not figure it out. I discovered I can use a datetime.timedelta instance (class datetime.timedelta) to represent the difference between two dates. I want to code this from scratch, though, or at least see how it works. I cannot find the code for the datetime.timedelta class in the python docs. I want to learn how to solve this problem without relying on a built in class that I don't understand. Can anyone help me understand this?

    import os
    import time
    import math

    def main():
        path = '/home/gabriel/Documents/bookmarks/'
        months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
        for root, dirs, files in os.walk(path):
            print "root:", root
            print "dirs:", dirs
            print "files:"
            for f in files:
                mtime = "%s" % time.ctime(os.path.getmtime('%s%s' % (path,f)))
                print "\tlast modified: %s %s" % (mtime, f)
                weekDay, month, day, HMS, year = mtime.split()
                if int(time.strftime("%Y")) - int(year) > 1:
                    print "\t... deleting %s" % f
                    os.remove('%s%s' % (path,f))
    #            elif int(time.strftime("%Y")) - int(year) == 1 & :
    # after the & in the elif expression above is what I cannot figure out.
                elif abs((months.index(month) + 1)-(int(time.strftime("%m")))) > 4:
                    print "\t... deleting %s" % f
                    os.remove('%s%s' % (path,f))

    main()

Answer: The simplest approach is not to use timedelta. Just compare your timestamp with now:

    tseconds = time.time() - os.path.getmtime(...)

That is the time elapsed, in seconds, since the last modification. Since you are talking about a timescale of months, I don't think you really care if the file was modified the 1st of February 2013 or the 31st of January: they belong to different months, but should be deleted anyway.

If you really want to use timedelta, use that number of seconds to construct it:

    tdelta = datetime.timedelta(seconds=tseconds)

But it will not automatically give you months. Again, since you are working on a large timescale, you can assume all months have 31 days and be safe.
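Building on that, a minimal sketch of the whole deletion loop using an elapsed-seconds threshold (approximating a month as 31 days, which is safe for this use case; the path and months values are placeholders):

    import os
    import time

    path = '/home/gabriel/Documents/bookmarks/'
    months = 4
    max_age_seconds = months * 31 * 24 * 60 * 60  # approximate a month as 31 days

    now = time.time()
    for f in os.listdir(path):
        full = os.path.join(path, f)
        if now - os.path.getmtime(full) > max_age_seconds:
            print "\t... deleting %s" % f
            os.remove(full)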
NLTK NaiveBayesClassifier input formatting Question: I am totally stumped with this issue. I am relatively new to python and NLTK. I am trying to make a naive Bayes classifier and I am not sure if the input should be a list of tuples, or a dictionary or a list which is a tuple of two lists. The following returns an error that `AttributeError: 'str' object has no attribute 'items'` [('maggie: just a push button. and the electric car uses sensors to drive itself. \n', 'notending')] The format below returns the following error `AttributeError: 'list' object has no attribute 'items'` [([['the', 'fire', 'chief', 'says', 'someone', 'started', 'the', 'blaze', 'on', 'purpose', 'as', 'a', 'controlled', 'burn', ',', 'but', 'it', 'quickly', 'got', 'out', 'of', 'hand', '.']], 'notending')] And if I use a dictionary, I get the following error `ValueError: too many values to unpack` {'everyone: bye!': 'ending'} I am calling the naive bayes classifier as `classifier = nltk.NaiveBayesClassifier.train(d_train)` I am not sure what is wrong here. Your help is greatly appreciated. Thanks. Answer: from nltk.classify import NaiveBayesClassifier from nltk.corpus import stopwords stopset = list(set(stopwords.words('english'))) def word_feats(words): return dict([(word, True) for word in words.split() if word not in stopset]) posids = ['I love this sandwich.', 'I feel very good about these beers.'] negids = ['I hate this sandwich.', 'I feel worst about these beers.'] pos_feats = [(word_feats(f), 'positive') for f in posids ] neg_feats = [(word_feats(f), 'negative') for f in negids ] print pos_feats print neg_feats trainfeats = pos_feats + neg_feats classifier = NaiveBayesClassifier.train(trainfeats) Take a look at the positive and negative feats [({'I': True, 'love': True, 'sandwich.': True}, 'positive'), ({'I': True, 'feel': True, 'good': True, 'beers.': True}, 'positive')] [({'I': True, 'hate': True, 'sandwich.': True}, 'negative'), ({'I': True, 'feel': True, 'beers.': True, 'worst': True}, 'negative')] So, If you give the sentence 'I hate everything' to classify print classifier.classify(word_feats('I hate everything')) you will get the result as 'negative'.
Using Mako Template in Javascript Question: I have a dashboard I am working on, using the `Python` `cherrypy` framework and the `Mako` template language. I had an `html` file for each of the dashboard pages. There, I used `Mako` to pass some data to the `html` and to inline `Javascript`. For example, to display the names of some processes when I only had the list of ids, I passed a `Python` `dict` that maps ids to their corresponding names, and then used the dict in `${}` tags. However, as I am now moving this `Javascript` code into a separate file with a `.js` extension, I found out that simply putting the same `Mako` code blocks in the `Javascript` code does not work. Is there any way I could use the `Mako` template language in an external `.js` file that is imported in an `html` file? Is it considered bad practice, and should I instead pass all this data to `Javascript` using `XMLHTTPRequest`s?

Answer: Ajax is the "right" way to do this. In order to insert the values into your separate Javascript file dynamically, it can no longer be served as a static file. Beyond that, it adds an extra layer of problems with security and maintainability as you have to deal with string escaping, possible script injection, and having Mako syntax in your Javascript. Not to mention losing the ability to host your `.js` files on a CDN or server configured for static files.
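Given the cherrypy backend mentioned in the question, the Ajax approach boils down to exposing the id-to-name mapping as a JSON endpoint instead of templating it into the page. A minimal sketch (the class name, URL, and data are placeholders):

    import cherrypy
    import json

    class Dashboard(object):
        @cherrypy.expose
        def process_names(self):
            # Placeholder data: the dict you previously passed into the Mako template.
            names = {1: "collector", 2: "renderer"}
            cherrypy.response.headers['Content-Type'] = 'application/json'
            return json.dumps(names)

    # The external .js file can then fetch /process_names with XMLHttpRequest
    # and build the display client-side, with no Mako syntax in the JavaScript.
    cherrypy.quickstart(Dashboard())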
Find and replace in multiple files and add incrementing number Question: I'm trying to replace a regex pattern in many `.cpp` files on my computer. I need to add an incrementing number at the end of each substitution, so I chose python to do this. This is what I have already, but it doesn't work yet:

    import os
    import re

    i = 0

    for file in os.listdir("C:\SeparatorTest"):
        if file.endswith(".cpp"):
            for line in open(file):
                line = re.sub(r'([^\s]*)\.Separator\((.*)\)\;', r'Separator \1\_' + i += 1 + '(\1,\2);')

Have I missed something?

Answer: Haven't tested because I didn't think of what you are trying to replace, but you shouldn't increment like that in the middle of your re.sub call; you need to replace your last line of code with this:

    line = re.sub(r'([^\s]*)\.Separator\((.*)\)\;', r'Separator \1\_' + str(i) + '(\1,\2);')
    i += 1

In C++ you'd just put i++ or ++i and the expression would evaluate to i before or after incrementing, but there is no ++ operator in Python, and here I wouldn't try fancy things: Python needs to be readable, and the next programmer reading your code might not guess what you did. Note that i is an int, so it has to be converted with str(i) before being concatenated into the replacement string.

Edit: Also, you are just reading your file - open(file) defaults to "r", which means reading, and you aren't writing anything. You need open(file, "w") for that, and you must not just store the re.sub() return value in a variable, but write it back to the file.

Edit2: Here is what I'm working on; it's not done yet, I'll edit as I find out how to get it to work:

    import os, re

    i = 0

    def replacement(i):
        i += 1  # note: this only increments the local copy of i
        return r'Separator \1\_' + str(i) + '(\1,\2);'

    for file in os.listdir("."):
        if file.endswith(".cpp"):
            for line in open(file):
                re.sub(r'([^\s]*)\.Separator\((.*)\)\;', replacement(i), line)

The idea is that the replacement text can be the result of a function that is called on each non-overlapping match of the pattern, according to [The python documentation](https://docs.python.org/2/library/re.html#re.sub).

Edit3: I think I'll stop there unless I get a response from you, because I have some regex and other problems I don't have time to address. Also, I'm unsure of best practice for text replacement; you should look into that, there should be help available. Glue the whole thing together (incrementation, correcting your re.sub() call, opening in writing mode, replacing the text that matches) and you should achieve what you were trying to do.
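Gluing those pieces together, a complete sketch (not the answerer's finished code): re.sub accepts a callable that is invoked once per non-overlapping match, so the counter advances per substitution, and each file is read fully and written back:

    import os
    import re
    import itertools

    folder = r"C:\SeparatorTest"
    counter = itertools.count()
    pattern = re.compile(r'([^\s]*)\.Separator\((.*)\);')

    def replacement(match):
        # called once per match, so the counter increments per substitution
        i = next(counter)
        return 'Separator %s_%d(%s,%s);' % (match.group(1), i, match.group(1), match.group(2))

    for filename in os.listdir(folder):
        if filename.endswith(".cpp"):
            path = os.path.join(folder, filename)
            with open(path) as f:
                text = f.read()
            with open(path, "w") as f:
                f.write(pattern.sub(replacement, text))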
Python pickle file strangely large Question: I made a pickle file, storing a grayscale value of each pixel in 100,000 80x80 sized images. (Plus an array of 100,000 integers whose values are one-digit). My approximation for the total size of the pickle is 4 bytes x 80 x 80 x 100000 = 2.56 GB, plus the array of integers, which shouldn't be that large. The generated pickle file however is over 16GB, so it's taking hours just to unpickle it and load it, and it eventually freezes after it consumes all available memory. Is there something wrong with my calculation, or is it the way I pickled it? I pickled the file in the following way:

    from PIL import Image
    import pickle
    import os
    import numpy
    import time

    trainpixels = numpy.empty([80000,6400])
    trainlabels = numpy.empty(80000)
    validpixels = numpy.empty([10000,6400])
    validlabels = numpy.empty(10000)
    testpixels = numpy.empty([10408,6400])
    testlabels = numpy.empty(10408)

    i=0
    tr=0
    va=0
    te=0
    for (root, dirs, filenames) in os.walk(indir1):
        print 'hello'
        for f in filenames:
            try:
                im = Image.open(os.path.join(root,f))
                Imv=im.load()
                x,y=im.size
                pixelv = numpy.empty(6400)
                ind=0
                for ii in range(x):
                    for j in range(y):
                        temp=float(Imv[j,ii])
                        temp=float(temp/255.0)
                        pixelv[ind]=temp
                        ind+=1
                if i<40000:
                    trainpixels[tr]=pixelv
                    tr+=1
                elif i<45000:
                    validpixels[va]=pixelv
                    va+=1
                else:
                    testpixels[te]=pixelv
                    te+=1
                print str(i)+'\t'+str(f)
                i+=1
            except IOError:
                continue

    trainimage=(trainpixels,trainlabels)
    validimage=(validpixels,validlabels)
    testimage=(testpixels,testlabels)

    output=open('data.pkl','wb')

    pickle.dump(trainimage,output)
    pickle.dump(validimage,output)
    pickle.dump(testimage,output)

Please let me know if you see something wrong with either my calculation or my code!

Answer: Python Pickles are not a thrifty mechanism for storing data as you're storing objects instead of "just the data." The following test case takes 24kb on my system and this is for a small, sparsely populated numpy array stored in a `pickle`:

    import os
    import sys
    import numpy
    import pickle

    testlabels = numpy.empty(1000)
    testlabels[0] = 1
    testlabels[99] = 0

    test_labels_size = sys.getsizeof(testlabels) #80

    output = open('/tmp/pickle', 'wb')
    test_labels_pickle = pickle.dump(testlabels, output)

    print os.path.getsize('/tmp/pickle')

Further, I'm not sure why you believe 4 bytes to be the size of a number in Python -- non-numpy ints are 24 bytes (`sys.getsizeof(1)`) and `numpy` arrays are a minimum of 80 bytes (`sys.getsizeof(numpy.array([0], float))`). As you stated in a response to my comment, you have reasons for staying with Pickle, so I won't try to convince you further not to store objects, but be aware of the overhead of storing objects. As an option: reduce the size of your training data/Pickle fewer objects.
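Two concrete points worth adding here. First, numpy.empty defaults to float64, so each value in the question's arrays is actually 8 bytes, not 4. Second, and likely the main culprit for the 16GB: in Python 2, pickle defaults to protocol 0, an ASCII format that writes every float out as text. Requesting the highest binary protocol should shrink the file dramatically - a sketch of the changed dump calls:

    import pickle

    output = open('data.pkl', 'wb')
    pickle.dump(trainimage, output, protocol=pickle.HIGHEST_PROTOCOL)
    pickle.dump(validimage, output, protocol=pickle.HIGHEST_PROTOCOL)
    pickle.dump(testimage, output, protocol=pickle.HIGHEST_PROTOCOL)
    output.close()

    # For plain numpy arrays, numpy's own binary format is a simpler fit:
    # numpy.save('trainpixels.npy', trainpixels)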
python source code organization and __init__.py Question: I have a python source tree which is organised as follows:

    >folder
       |-> utils
            |-> image.py
            |-> helper.py
            |-> __init__.py
       |-> core
            |-> vf.py
            |-> __init__.py

Now in vf.py, I have the following line to import utils:

    import utils

and subsequently I do something like:

    img = utils.Image()

Now, if I leave the `__init__.py` file empty in the utils directory, this does not work and I get an error:

    AttributeError: 'module' object has no attribute 'Image'

However, if I add the following lines to `__init__.py` in the utils directory, it works:

    from image import *
    from helper import *

So, I am guessing that when the top level script is called it parses this `__init__.py` file and imports all the methods and classes from this utils package. However, I have a feeling this is not such a good idea because I am doing a * import and this might pollute the namespace. So, I was wondering if someone can shed some light on the appropriate way to do this, i.e. if I have parallel directories, what is a good way to import the classes from one python package into another (if indeed this `__init__.py` approach is as unclean as I suspect).

Answer: Have you tried using "img = utils.image.Image()"? If the "Image" class is defined within the "image.py" file, I think this would work.
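If you want `utils.Image()` to keep working without the star imports, you can re-export just the names you intend to be public from `__init__.py`. A sketch (the Helper class name is hypothetical - use whatever helper.py actually defines):

    # utils/__init__.py
    from image import Image      # on Python 3 this would be: from .image import Image
    from helper import Helper    # hypothetical class name from helper.py

    __all__ = ['Image', 'Helper']

With that in place, `import utils` followed by `utils.Image()` works from vf.py, and the package's public surface is explicit instead of whatever the star import happens to drag in.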
Dynamically reformat tkinter scale value python 2.7 Question: I thought what I wanted to do would be simple enough, but evidently not. What I want to do is use a tkinter scale to control the range and value that a user can input. In my case I want to input a time value in seconds and display it in _minutes:seconds_ format so that anybody can understand that, say 330 seconds == 5:30. Because this is not a standard format for the scale widget what I want to do is make the time in mm:ss format appear beside the scale widget. In my example code I can see the scale value changing, but so far I can not get the mm:ss display to change as the scale is moved (I've got to click a button to get it to update). As I want the end result to be as idiot proof as possible I need the mm:ss display to change dynamically with the slider. At this stage I appear to have exhausted all the online examples I can find, and none of them appear to do what I want (extra button press required for conversion). I'm pretty sure I'll feel stupid when I find out how to do this, but right now my head hurts from trying to figure this out. Does anyone have an example of this behaviour that they can share? Answer: The scale widget has a `command` attribute which you can use to call a function whenever the value changes. This function can then reformat the value to whatever you want. import Tkinter as tk class Example(tk.Frame): def __init__(self, parent): tk.Frame.__init__(self, parent) self.scale = tk.Scale(self, orient="horizontal", from_=0, to=600, showvalue=False, command=self._on_scale) self.scale_label = tk.Label(self, text="") self.scale.pack(side="top", fill="x") self.scale_label.pack(side="top") def _on_scale(self, value): value = int(value) minutes = value/60 seconds = value%60 self.scale_label.configure(text="%2.2d:%2.2d" % (minutes, seconds)) if __name__ == "__main__": root = tk.Tk() Example(root).pack(fill="both", expand=True); root.mainloop()
zipline backtesting using non-US (European) intraday data Question: I'm trying to get zipline working with non-US, intraday data, that I've loaded into a pandas DataFrame: BARC HSBA LLOY STAN Date 2014-07-01 08:30:00 321.250 894.55 112.105 1777.25 2014-07-01 08:32:00 321.150 894.70 112.095 1777.00 2014-07-01 08:34:00 321.075 894.80 112.140 1776.50 2014-07-01 08:36:00 321.725 894.80 112.255 1777.00 2014-07-01 08:38:00 321.675 894.70 112.290 1777.00 I've followed moving-averages tutorial [here](http://nbviewer.ipython.org/github/quantopian/zipline/blob/master/docs/tutorial.ipynb), replacing "AAPL" with my own symbol code, and the historical calls with "1m" data instead of "1d". Then I do the final call using `algo_obj.run(DataFrameSource(mydf))`, where `mydf` is the dataframe above. However there are all sorts of problems arising related to [TradingEnvironment](https://github.com/quantopian/zipline/blob/master/zipline/finance/trading.py). According to the source code: # This module maintains a global variable, environment, which is # subsequently referenced directly by zipline financial # components. To set the environment, you can set the property on # the module directly: # from zipline.finance import trading # trading.environment = TradingEnvironment() # # or if you want to switch the environment for a limited context # you can use a TradingEnvironment in a with clause: # lse = TradingEnvironment(bm_index="^FTSE", exchange_tz="Europe/London") # with lse: # the code here will have lse as the global trading.environment # algo.run(start, end) However, using the context doesn't seem to fully work. I still get errors, for example stating that my timestamps are before the market open (and indeed, looking at `trading.environment.open_and_close` the times are for the US market. **My question is, has anybody managed to use zipline with non-US, intra-day data?** Could you point me to a resource and ideally example code on how to do this? n.b. I've seen there are some [tests](https://github.com/quantopian/zipline/blob/master/tests/test_tradingcalendar.py) on github that seem related to the trading calendars (tradincalendar_lse.py, tradingcalendar_tse.py , etc) - but this appears to only handle data at the daily level. I would need to fix: * open/close times * reference data for the benchmark * and probably more ... Answer: I've got this working after fiddling around with the tutorial notebook. Code sample below. It's using the DF `mid`, as described in the original question. A few points bear mentioning: 1. **Trading Calendar** I create one manually and assign to `trading.environment`, by using non_working_days in _tradingcalendar_lse.py_. Alternatively you could create one that fits your data exactly (however could be a problem for out-of-sample data). There are two fields that you need to define: `trading_days` and `open_and_closes`. 2. **sim_params** There is a problem with the default start/end values because they aren't timezone aware. So you _must_ create a sim_params object and pass start/end parameters with a timezone. 3. Also, `run()` must be called with the argument overwrite_sim_params=False as `calculate_first_open`/`close` raise timestamp errors. I should mention that it's also possible to pass pandas Panel data, with fields open,high,low,close,price and volume in the minor_axis. But in this case, the former fields are mandatory - otherwise errors are raised. Note that this code only produces a _daily_ summary of the performance. 
I'm sure there must be a way to get the result at a minute resolution (I thought this was set by `emission_rate`, but apparently it's not). If anybody knows please comment and I'll update the code. Also, not sure what the api call is to call 'analyze' (i.e. when using `%%zipline` magic in IPython, as in the tutorial, the `analyze()` method gets automatically called. How do I do this manually?) import pytz from datetime import datetime from zipline.algorithm import TradingAlgorithm from zipline.utils import tradingcalendar from zipline.utils import tradingcalendar_lse from zipline.finance.trading import TradingEnvironment from zipline.api import order_target, record, symbol, history, add_history from zipline.finance import trading def initialize(context): # Register 2 histories that track daily prices, # one with a 100 window and one with a 300 day window add_history(10, '1m', 'price') add_history(30, '1m', 'price') context.i = 0 def handle_data(context, data): # Skip first 30 mins to get full windows context.i += 1 if context.i < 30: return # Compute averages # history() has to be called with the same params # from above and returns a pandas dataframe. short_mavg = history(10, '1m', 'price').mean() long_mavg = history(30, '1m', 'price').mean() sym = symbol('BARC') # Trading logic if short_mavg[sym] > long_mavg[sym]: # order_target orders as many shares as needed to # achieve the desired number of shares. order_target(sym, 100) elif short_mavg[sym] < long_mavg[sym]: order_target(sym, 0) # Save values for later inspection record(BARC=data[sym].price, short_mavg=short_mavg[sym], long_mavg=long_mavg[sym]) def analyze(context,perf) : perf["pnl"].plot(title="Strategy P&L") # Create algorithm object passing in initialize and # handle_data functions # This is needed to handle the correct calendar. Assume that market data has the right index for tradeable days. # Passing in env_trading_calendar=tradingcalendar_lse doesn't appear to work, as it doesn't implement open_and_closes from zipline.utils import tradingcalendar_lse trading.environment = TradingEnvironment(bm_symbol='^FTSE', exchange_tz='Europe/London') #trading.environment.trading_days = mid.index.normalize().unique() trading.environment.trading_days = pd.date_range(start=mid.index.normalize()[0], end=mid.index.normalize()[-1], freq=pd.tseries.offsets.CDay(holidays=tradingcalendar_lse.non_trading_days)) trading.environment.open_and_closes = pd.DataFrame(index=trading.environment.trading_days,columns=["market_open","market_close"]) trading.environment.open_and_closes.market_open = (trading.environment.open_and_closes.index + pd.to_timedelta(60*7,unit="T")).to_pydatetime() trading.environment.open_and_closes.market_close = (trading.environment.open_and_closes.index + pd.to_timedelta(60*15+30,unit="T")).to_pydatetime() from zipline.utils.factory import create_simulation_parameters sim_params = create_simulation_parameters( start = pd.to_datetime("2014-07-01 08:30:00").tz_localize("Europe/London").tz_convert("UTC"), #Bug in code doesn't set tz if these are not specified (finance/trading.py:SimulationParameters.calculate_first_open[close]) end = pd.to_datetime("2014-07-24 16:30:00").tz_localize("Europe/London").tz_convert("UTC"), data_frequency = "minute", emission_rate = "minute", sids = ["BARC"]) algo_obj = TradingAlgorithm(initialize=initialize, handle_data=handle_data, sim_params=sim_params) # Run algorithm perf_manual = algo_obj.run(mid,overwrite_sim_params=False) # overwrite == True calls calculate_first_open[close] (see above)
Classification Based Chunking - NLTK Cookbook - Evaluate() not working Question: I've been following the NLTK cookbook for classification based chunking, and I came to the following error when trying to evaluate my classifier. all the code that leads up to this error is posted below the traceback --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-64-201b22386c9f> in <module>() 1 chunker = ClassifierChunker(train_chunks) ----> 2 score = chunker.evaluate(test_chunks) 3 score.accuracy() //anaconda/lib/python2.7/site-packages/nltk/chunk/api.pyc in evaluate(self, gold) 47 chunkscore = ChunkScore() 48 for correct in gold: ---> 49 chunkscore.score(correct, self.parse(correct.leaves())) 50 return chunkscore 51 //anaconda/lib/python2.7/site-packages/nltk/chunk/api.pyc in parse(self, tokens) 32 :rtype: Tree 33 """ ---> 34 raise NotImplementedError() 35 36 def evaluate(self, gold): NotImplementedError: #from chunkers import TagChunker from nltk.corpus import treebank_chunk train_chunks = treebank_chunk.chunked_sents()[:3000] test_chunks = treebank_chunk.chunked_sents()[3000:] import nltk.chunk from nltk.tag import ClassifierBasedTagger def chunk_trees2train_chunks(chunk_sents): tag_sents = [nltk.chunk.tree2conlltags(sent) for sent in chunk_sents] return [[((w,t),c) for (w,t,c) in sent] for sent in tag_sents] def prev_next_pos_iob(tokens, index, history): word, pos = tokens[index] if index == 0: prevword, prevpos, previob = ('<START>',)*3 else: prevword, prevpos = tokens[index-1] previob = history[index-1] if index == len(tokens) - 1: nextword, nextpos = ('<END>',)*2 else: nextword, nextpos = tokens[index+1] feats = { 'word': word, 'pos': pos, 'nextword': nextword, 'nextpos': nextpos, 'prevword': prevword, 'prevpos': prevpos, 'previob': previob } return feats class ClassifierChunker(nltk.chunk.ChunkParserI): def __init__(self, train_sents, feature_detector=prev_next_pos_iob, **kwargs): if not feature_detector: feature_detector = self.feature_detector train_chunks = chunk_trees2train_chunks(train_sents) self.tagger = ClassifierBasedTagger(train=train_chunks, feature_detector=feature_detector, **kwargs) def parse(self, tagged_sent): if not tagged_sent: return None chunks = self.tagger.tag(tagged_sent) return nltk.chunk.conlltags2tree([(w,t,c) for ((w,t),c) in chunks]) #the following is copy/pasted from chunkers.py import nltk.tag from nltk.chunk import ChunkParserI from nltk.chunk.util import conlltags2tree, tree2conlltags from nltk.tag import UnigramTagger, BigramTagger, ClassifierBasedTagger #from .transforms import node_label ##################### ## tree conversion ## ##################### def chunk_trees2train_chunks(chunk_sents): tag_sents = [tree2conlltags(sent) for sent in chunk_sents] return [[((w,t),c) for (w,t,c) in sent] for sent in tag_sents] def conll_tag_chunks(chunk_sents): '''Convert each chunked sentence to list of (tag, chunk_tag) tuples, so the final result is a list of lists of (tag, chunk_tag) tuples. >>> from nltk.tree import Tree >>> t = Tree('S', [Tree('NP', [('the', 'DT'), ('book', 'NN')])]) >>> conll_tag_chunks([t]) [[('DT', 'B-NP'), ('NN', 'I-NP')]] ''' tagged_sents = [tree2conlltags(tree) for tree in chunk_sents] return [[(t, c) for (w, t, c) in sent] for sent in tagged_sents] def ieertree2conlltags(tree, tag=nltk.tag.pos_tag): # tree.pos() flattens the tree and produces [(word, label)] where label is # from the word's parent tree label. 
words in a chunk therefore get the # chunk tag, while words outside a chunk get the same tag as the tree's # top label words, ents = zip(*tree.pos()) iobs = [] prev = None # construct iob tags from entity names for ent in ents: # any entity that is the same as the tree's top label is outside a chunk if ent == node_label(tree): iobs.append('O') prev = None # have a previous entity that is equal so this is inside the chunk elif prev == ent: iobs.append('I-%s' % ent) # no previous equal entity in the sequence, so this is the beginning of # an entity chunk else: iobs.append('B-%s' % ent) prev = ent # get tags for each word, then construct 3-tuple for conll tags words, tags = zip(*tag(words)) return zip(words, tags, iobs) ################# ## tag chunker ## ################# class TagChunker(ChunkParserI): '''Chunks tagged tokens using Ngram Tagging.''' def __init__(self, train_chunks, tagger_classes=[UnigramTagger, BigramTagger]): '''Train Ngram taggers on chunked sentences''' train_sents = conll_tag_chunks(train_chunks) self.tagger = None for cls in tagger_classes: self.tagger = cls(train_sents, backoff=self.tagger) def parse(self, tagged_sent): '''Parsed tagged tokens into parse Tree of chunks''' if not tagged_sent: return None (words, tags) = zip(*tagged_sent) chunks = self.tagger.tag(tags) # create conll str for tree parsing return conlltags2tree([(w,t,c) for (w,(t,c)) in zip(words, chunks)]) ######################## ## classifier chunker ## ######################## def prev_next_pos_iob(tokens, index, history): word, pos = tokens[index] if index == 0: prevword, prevpos, previob = ('<START>',)*3 else: prevword, prevpos = tokens[index-1] previob = history[index-1] if index == len(tokens) - 1: nextword, nextpos = ('<END>',)*2 else: nextword, nextpos = tokens[index+1] feats = { 'word': word, 'pos': pos, 'nextword': nextword, 'nextpos': nextpos, 'prevword': prevword, 'prevpos': prevpos, 'previob': previob } return feats class ClassifierChunker(ChunkParserI): def __init__(self, train_sents, feature_detector=prev_next_pos_iob, **kwargs): if not feature_detector: feature_detector = self.feature_detector train_chunks = chunk_trees2train_chunks(train_sents) self.tagger = ClassifierBasedTagger(train=train_chunks, feature_detector=feature_detector, **kwargs) def parse(self, tagged_sent): if not tagged_sent: return None chunks = self.tagger.tag(tagged_sent) return conlltags2tree([(w,t,c) for ((w,t),c) in chunks]) ############# ## pattern ## ############# class PatternChunker(ChunkParserI): def parse(self, tagged_sent): # don't import at top since don't want to fail if not installed from pattern.en import parse s = ' '.join([word for word, tag in tagged_sent]) # not tokenizing ensures that the number of tagged tokens returned is # the same as the number of input tokens sents = parse(s, tokenize=False).split() if not sents: return None return conlltags2tree([(w, t, c) for w, t, c, p in sents[0]]) Answer: You are meant to define a parse method yourself, you can see in the source that it is not implemented: class ChunkParserI(ParserI): """ A processing interface for identifying non-overlapping groups in unrestricted text. Typically, chunk parsers are used to find base syntactic constituents, such as base noun phrases. Unlike ``ParserI``, ``ChunkParserI`` guarantees that the ``parse()`` method will always generate a parse. """ def parse(self, tokens): """ Return the best chunk structure for the given tokens and return a tree. :param tokens: The list of (word, tag) tokens to be chunked. 
:type tokens: list(tuple) :rtype: Tree """ raise NotImplementedError() You actually have one defined, I think your indentation is the issue: class ClassifierChunker(nltk.chunk.ChunkParserI): def __init__(self, train_sents, feature_detector=prev_next_pos_iob, **kwargs): if not feature_detector: feature_detector = self.feature_detector train_chunks = chunk_trees2train_chunks(train_sents) self.tagger = ClassifierBasedTagger(train=train_chunks, feature_detector=feature_detector, **kwargs) def parse(self, tagged_sent): # indent inside the class if not tagged_sent: return None chunks = self.tagger.tag(tagged_sent) return nltk.chunk.conlltags2tree([(w,t,c) for ((w,t),c) in chunks]) You do not have it inside the `class` though so as far as `nltk.chunk.ChunkParserI` is concerned you have no `parse` method implemented There is no method `nltk.chunk.conlltags2tree` it is in `nltk.chunk.util` return nltk.chunk.util.conlltags2tree([(w,t,c) for ((w,t),c) in chunks])
How do you wrap C++ code for IronPython access Question: I have a simple example which I want to access from Ironpython (I am coming from 'regular/sane' python') so I am struggling importing my C++ code into Ironpython. Normally I just use SWIG, wrap my code, import and go on my merry way But with Ironpython being C# and not C based it makes this process much harder How do I wrap this class for ironpython (I also attached my swig file for this example, but that might not be useful) #include "minimal.h" double average(std::vector<int> v) { return std::accumulate(v.begin(), v.end(), 0.0) / v.size(); } std::vector<double> half(const std::vector<double>& v) { std::vector<double> w(v); for (unsigned int i = 0; i<w.size(); i++) w[i] /= 2.0; return w; } void halve_in_place(std::vector<double>& v) { std::transform(v.begin(), v.end(), v.begin(), std::bind2nd(std::divides<double>(), 2.0)); } with the header file #include <vector> #include <algorithm> #include <functional> #include <numeric> double average(std::vector<int> v); std::vector<double> half(const std::vector<double>& v); void halve_in_place(std::vector<double>& v); I had a swig i file minimal.i but realize there are numerous issues issuing a swig.exe -c++ -python "%(FullPath)" on this and having ironpython actually accept it on import. %module transfervector %{ #include "minimal.h" %} %include "std_vector.i" // Instantiate templates used by example namespace std { %template(IntVector) vector<int>; %template(DoubleVector) vector<double>; } // Include the header file with above prototypes %include "minimal.h" Answer: SWIG-python isn't going to work - it generates CPython extensions, and IronPython doesn't support those. Ultimately you'll need to wrap the C++ so that it's accessible from .NET. I think you can use SWIG to generate C# wrappers, which you could then import into IronPython. Otherwise, you might be able to compile with the C++/CLI compiler to generate a .NET assembly directly, which can also be used from IronPython.
Replace CR by CR LF Question: I'm on Windows and I have an odd text file containing mostly CR+LF line endings. A _few_ lines end with only CR. Which tool should I use to transform these odd lines into well formatted (e.g. CR+LF terminated) lines? I could use either GnuWin32 tools or Python to solve this. The main problem I have is that I cannot open the file as a text file, since Python (like most other text processors, such as awk) doesn't handle the mixed line endings. So I believe the solution must incorporate binary processing of the file. Then again, I cannot just replace CR by CR LF, since there are also existing CR LF line endings that must not be touched.

Answer: To replace the stray carriage returns you can use a regular expression:

* `\r(?!\n)` finds a CR that is not already followed by a LF
* `\r\n` is the text you want as replacement text.

Note that a plain `\r+` would also match the CR inside existing CR+LF pairs (and collapse runs of CRs), so the negative lookahead is needed to leave the well-formed lines untouched.

* * *

Regular Expressions in Python: [Regular Expression](https://docs.python.org/2/howto/regex.html)

* * *

    import re
    txt = 'text where you want to replace the linebreak'
    out = re.sub("\r(?!\n)", '\r\n', txt)
    print out
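Putting it together for the whole file, a sketch that does the binary processing the question asks for (filenames are placeholders):

    import re

    with open('odd.txt', 'rb') as f:
        data = f.read()

    # Replace a CR only when it is not already part of a CR+LF pair.
    fixed = re.sub('\r(?!\n)', '\r\n', data)

    with open('fixed.txt', 'wb') as f:
        f.write(fixed)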
Getting boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request exception Question: I am trying to **download the log files that are present in S3 bucket using boto**. The reason behind not using s3cmd and some other tools is that I don't want my code to be dependent on some kind of software/tool so that others can also use my code directly and don't have to worry about downloading some other dependencies. I am getting the following stack trace. I saw various related posts but none of them solved my problem. Traceback (most recent call last): File "/Library/Python/2.7/site-packages/fabric/main.py", line 743, in main *args, **kwargs File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 405, in execute results['<local-only>'] = task.run(*args, **new_kwargs) File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 171, in run return self.wrapped(*args, **kwargs) File "/pgbadger/pgbadger_html.py", line 86, in dlogs s3 = S3() File "/pgbadger/pgbadger_html.py", line 46, in __init__ self.bucket = self._get_bucket(self.log_bucket) File "/pgbadger/pgbadger_html.py", line 65, in _get_bucket return self.s3_conn.get_bucket(bucket) File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 471, in get_bucket return self.head_bucket(bucket_name, headers=headers) File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 518, in head_bucket response.status, response.reason, body) boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request I have seen the code and I don't know why I am getting this error. My code is as follows: from fabric.api import task from fabric.api import env S3_LOG_BUCKET = BUCKET-NAME class S3(object): s3_conn = None log_bucket = S3_LOG_BUCKET region = region bucket = None env.host_string = REGION-NAME def __init__(self): self._s3_connect() self.bucket = self._get_bucket(self.log_bucket) def _s3_connect(self): if not self.s3_conn: self.s3_conn = boto.s3.connect_to_region( self.region, aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY, ) if not self.s3_conn: raise ValueError('Invalid Region Name: {}'.format(region)) def download_s3_logs(self): for l in self.bucket.list(): key_string = str(l.key) l.get_contents_to_filename("/tempLogFiles/" + key_string) print l.key def _get_bucket(self, bucket): return self.s3_conn.get_bucket(bucket) @task def dlogs(): s3 = S3() s3.download_s3_logs() Answer: Problem Solved: In the S3-Log-Bucket, I was mentioning the entire path specific to my bucket.. Like I have my bucket and there are multiple folders in it. So I was mentioning the entire path to it but BOTO doesn't expect that to happen. Hence I just had to mention the bucket name only instead of mentioning the entire path. Previously I was doing--> log_bucket = Bucket/Inner Folder 1/Inner Folder 2/.../ which was wrong Correct way of doing it--> log_bucket = Bucket
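In boto terms, the bucket name and the key prefix are separate: get_bucket takes only the bucket name, and the inner folders belong in the listing call. A sketch of the corrected pattern (names are placeholders):

    # Wrong: the inner folders are part of the key, not the bucket name.
    # bucket = s3_conn.get_bucket('Bucket/Inner Folder 1/Inner Folder 2')

    bucket = s3_conn.get_bucket('Bucket')  # bucket name only
    for key in bucket.list(prefix='Inner Folder 1/Inner Folder 2/'):
        key.get_contents_to_filename('/tempLogFiles/' + key.name.replace('/', '_'))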
Can rpy2 code be run in parallel? Question: I have some Python code that passes a data frame to R via rpy2, whereupon R processes it and I pull the resulting data.frame back to R as a PANDAS data frame via `com.load_data`. The thing is, the call to `com.load_data` works fine in a single Python process but it crashes when the same bunch of code is run in several `multiprocessing.Process` processes concurrently. I get the following error message out of Python: File "C:\\Python27\\lib\\site-packages\\pandas\\rpy\\common.py", line 29, in load_data r.data(name) TypeError: 'DataFrame' object is not callable' So my question is, is rpy2 not actually _designed_ to be able to be run in parallel, or is it merely a bug in the `load_data` function? I just assumed that each Python process would get its own independent R session. As far as I can tell, the only workaround would be to have R write the output to a text file which the appropriate Python process can open and go on with its processing. But this is pretty clunky. Update with some code: from rpy2.robjects.packages import importr import rpy2.robjects as ro import pandas as pd import pandas.rpy.common as com # Load C50 library into R environment C50 = importr('C50') ... # PANDAS data frame containing test dataset testing = pd.DataFrame(testing) # Pass testing dataset to R rtesting = com.convert_to_r_dataframe(testing) ro.globalenv['test'] = rtesting # Strip "AsIs" from each column in the R data frame # so that predict.C5.0 will work for c in range(len(testing.columns)): ro.r('''class(test[,{0}])=class(test[,{0}])[-match("AsIs", class(test[,{0}]))]'''.format(c+1)) # Make predictions on test dataset (res is pre-existing C5.0 tree) ro.r('''preds=predict.C5.0(res, newdata=test)''') ro.r('''preds=as.data.frame(preds)''') # Get the predictions from R preds = com.load_data('preds') ### Crashes here when code is run on several processes concurrently #Further processing as necessary ... Answer: `rpy` works by running a Python process and an R process in parallel, and exchange information between them. It does not take into account that R calls are called in parallel using `multiprocess`. So in practice, each of the python processes connects to the same R process. This probably causes the issues you see. One way to circumvent this issue is to implement the parallel processing in R, and not in Python. You then send everything at once to R, this will process it in parallel, and the result will be sent back to Python.
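A minimal sketch of the "parallelize inside R" suggestion: send one batch of work through a single rpy2 call and let R's parallel package fan it out. The worker body and core count are placeholders, and mclapply relies on fork(), so this sketch is Unix-only:

    import rpy2.robjects as ro

    # One rpy2 call; R does the parallel work internally.
    ro.r('''
    library(parallel)
    results <- mclapply(1:8, function(i) {
        i * i  # placeholder worker: replace with your per-chunk processing
    }, mc.cores = 4)
    ''')
    print(ro.r('unlist(results)'))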
inserting path in the entry box in Python GUI Question: I'm new to GUI programming. I made a small GUI where the user browses to a file and then runs the functionality he wants on that file. Now I have an Entry box in the GUI, and whenever a user browses for the required file, the path to the file should appear in the Entry box once he finishes browsing. Basically I want it like this:

before browsing ![before browsing](http://i.stack.imgur.com/lomHa.png)

after browsing ![after browsing](http://i.stack.imgur.com/wFrWV.png)

How can I do this? My code for the GUI is:

    #!/usr/bin/python
    import Tkinter as tk
    from tkFileDialog import *
    import os

    class StageGui:

        prot=0
        log=0
        cwd=os.getcwd()

        def __init__(self,parent,tex):
            self.tex=tex
            self.fm=tk.Frame(remodelmain)
            self.l1=tk.Label(self.fm, text='Import .prot File').grid(row=0,column=0,sticky='w',padx=5,pady=10)
            self.l2=tk.Label(self.fm, text='Import .log File').grid(row=1,column=0,sticky='w',padx=5,pady=10)
            self.protentry = tk.Entry(self.fm, width = 50).grid(row=0,column=1,sticky='w',padx=5,pady=10)
            self.logentry = tk.Entry(self.fm, width = 50).grid(row=1,column=1,sticky='w',padx=5,pady=10)
            self.b1=tk.Button(self.fm, text='Browse',height=1,width=20,command=self.callback).grid(row=0,column=2,sticky='e',padx=5,pady=10)
            self.b2=tk.Button(self.fm, text='Browse',height=1,width=20,command=self.callback1).grid(row=1,column=2,sticky='e',padx=5,pady=10)
            self.fm.pack()
            self.fm0=tk.Frame(remodelmain,width=500,height=500)
            self.b3=tk.Button(self.fm0, text='Check',height=1,width=15,command=self.check).grid(row=4,column=2,sticky='e',padx=5,pady=10)
            self.b4=tk.Button(self.fm0, text='Inertia',height=1,width=15).grid(row=5,column=2,sticky='e',padx=5,pady=10)
            self.b5=tk.Button(self.fm0, text='Summary',height=1,width=15).grid(row=6,column=2,sticky='e',padx=5,pady=10)
            self.b6=tk.Button(self.fm0, text='Report',height=1,width=15).grid(row=7,column=2,sticky='e',padx=5,pady=10)
            self.fm0.pack(side='right')
            self.fm1=tk.Frame(remodelmain,width=200,height=200)
            self.tex1= tk.Text(self.fm1,width=130,height=10)
            self.l3=tk.Label(self.fm1, text='Status Box:').grid(row=6,column=0,sticky='nw')
            self.tex1.grid(row=6,column=1,sticky='s',padx=20,pady=10)
            self.fm1.pack(side='left',anchor='w')

        def callback(self):
            name= askopenfilename()
            StageGui.prot=name
            self.printstatements(StageGui.prot)

        def printstatements(self,name):
            self.tex1.insert('end','\nthe file has been imported \n')
            s='the path of the imported file is {}\n'.format(name)
            self.tex1.insert('end',s)
            self.tex1.see(tk.END)
            return

        def callback1(self):
            name1= askopenfilename()
            StageGui.log=name1
            self.printstatements(StageGui.log)

        def check(self):
            file=open(StageGui.prot,'r')
            a,b,c='|Checks|','|Status|','|Remarks|'
            mess='\n{0:10s} \t {1:10s} \t {2:100s}\n'.format(a,b,c)
            self.tex.insert('end',mess)
            count_string_occurance(file)

    remodelmain = tk.Tk()

    fmn1=tk.Frame(remodelmain,width=300,height=300)
    l3=tk.Label(fmn1, text='Message Box:').grid(row=6,column=0,sticky='nw')
    tex= tk.Text(fmn1,width=130,height=60)
    tex.grid(row=6,column=1,sticky='s',padx=20,pady=20)
    fmn1.pack(side='bottom',anchor='w')

    stagegui=StageGui(remodelmain,tex)

    remodelmain.title('prototype_remodel')
    remodelmain.geometry('1200x1200+300+300')
    remodelmain.mainloop()

Answer: Create a string variable that is associated with the Entry widget:

    self.pathVar = tk.StringVar(self.fm)
    self.protentry = tk.Entry(self.fm, width = 50, textvariable=self.pathVar).grid(row=0,column=1,sticky='w',padx=5,pady=10)

Then after getting the path, set the variable to that path:

    name = askopenfilename()
self.pathVar.set(name)
dispersion_plot not working in spite of installing matplotlib Question: I have installed `matplotlib` using pip on Ubuntu 14.04 LTS, but running `dispersion_plot` shows the following error:

    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/local/lib/python2.7/dist-packages/nltk/text.py", line 455, in dispersion_plot
        from nltk.draw import dispersion_plot
    ImportError: cannot import name dispersion_plot

I am new to python... can anyone suggest whether there is a better way of installing `matplotlib` for `nltk`?

Answer: The examples in the online book are not quite right. You may try this:

    from nltk.draw.dispersion import dispersion_plot

    words = ['Elinor', 'Marianne', 'Edward', 'Willoughby']
    dispersion_plot(gutenberg.words('austen-sense.txt'), words)

You may also call it from a text directly:

    from nltk.book import text1
    from nltk.draw.dispersion import dispersion_plot

    dispersion_plot(text1, ['monstrous'])

This way you import the function directly instead of calling it from the text object. I realized this by looking at the source code directly. Hope this works for you.
Python matrix query - must be simpler way Question: I have created a small Python program to multiply two 2 by 2 matrices but am wondering if it could be simplified in any way (particularly the creation of new arrays) The python code is below: matA=[0]*2 matB=[0]*2 matC=[0]*2 matC[0]=[0]*2 matC[1]=[0]*2 # creating new arrays for multiplying two 2 by 2 matrices # must be a more simple way def multiply2by2matrices(a,b): matC[0][0]=a[0][0]*b[0][0]+a[0][1]*b[1][0] matC[0][1]=a[0][0]*b[0][1]+a[0][1]*b[1][1] matC[1][0]=a[1][0]*b[0][0]+a[1][1]*b[1][0] matC[1][1]=a[1][0]*b[0][1]+a[1][1]*b[1][1] print ((matC[0][0]),(matC[0][1])) print ((matC[1][0]),(matC[1][1])) matA[0]=[4,3] matA[1]=[2,12] matB[0]=[5,-2] matB[1]=[6,3] multiply2by2matrices(matA, matB) Any thoughts will be greatly received. Answer: Don't implement by hand. You are reinventing the wheel and there very good wheels around already. Numpy is the answer. import numpy as np a = np.arange(20).reshape(5,4) b = (np.arange(20) + 10).reshape(4,5) np.dot(a,b) Docs: <http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html> Cheers, P
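Applied to the matrices from the question, the whole program collapses to a few lines:

    import numpy as np

    matA = np.array([[4, 3], [2, 12]])
    matB = np.array([[5, -2], [6, 3]])

    print np.dot(matA, matB)
    # [[38  1]
    #  [82 32]]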
Can't find "six", but it's installed Question: I have `six` installed (even reinstalled it). $ pip show six --- Name: six Version: 1.7.3 Location: /usr/lib/python2.6/site-packages Requires: But when I try to run `csvcut`, it can't find it. $ csvcut -n monster.csv Traceback (most recent call last): File "/usr/bin/csvcut", line 5, in <module> from pkg_resources import load_entry_point File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 2655, in <module> working_set.require(__requires__) File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 648, in require needed = self.resolve(parse_requirements(requirements)) File "/usr/lib/python2.6/site-packages/pkg_resources.py", line 546, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: six>=1.6.1 Here's the relevant but of `csvcut`: #!/usr/bin/python # EASY-INSTALL-ENTRY-SCRIPT: 'csvkit==0.8.0','console_scripts','csvcut' __requires__ = 'csvkit==0.8.0' import sys from pkg_resources import load_entry_point if __name__ == '__main__': sys.exit( load_entry_point('csvkit==0.8.0', 'console_scripts', 'csvcut')() ) This is on CentOS. Answer: Uninstalling and reinstalling `six` using pip _didn't_ work sudo pip uninstall six sudo pip install six However, I was able to solve the problem using `easy_install`: easy_install --upgrade six
Audible Errors - Custom non-blocking text-to-speech handler for Python's logging module Question: I am building a screen-less Raspberry Pi-based long term art installation. A common application of [Text-to-Speech on the Raspberry pi](http://elinux.org/RPi_Text_to_Speech_\(Speech_Synthesis\)) is folks configuring their Pis to speak their IP Address on boot to ease SSHing. I have grown to love the logging module. Never again commenting out uncountable print statements makes my heart glow. For my situation, it would be ideal to have a logging handler that speaks the error message. The StreamHandler and the FileHandler I am currently using are great for development and post-problem diagnosis, but horrible for in situ problem solving. Also, there is something delightfully SciFi about my robot shouting errors at me. [I found an example of a custom handler for SMS-based error logging,](http://pantburk.info/?blog=77) and tried to implement my own with e-speak. It speaks, but the entire program stops at the end of the first spoken sentence. I am looking for advice on how to implement a stream handler that does not block the execution of the program.

**My broken custom handler:**

    import logging
    import os
    #based on SMSHandler http://pantburk.info/?blog=77

    def speak(stringToSay):
        '''say whatever it is told to say, squelching annoying warnings'''
        stringToSay = "'"+stringToSay+"'"
        #English female voice, emphasis on capitals (-k), speaking slowly (-s) using direct text:-
        #the 2>/dev/null' is there because any calls to the rPi audio card result in a dozen warnings.
        # see: http://raspberrypi.stackexchange.com/questions/3412/errors-with-espeak
        os.system('espeak -ven+f3 -k5 -s150 '+stringToSay+' 2>/dev/null')

    class TALKHandler(logging.Handler): # Inherit from logging.Handler
        def __init__(self):
            # run the regular Handler __init__
            logging.Handler.__init__(self)

        def emit(self, record):
            # record.message is the log message
            speak(record.message)

**The snippet from the program that is logging:**

    logging.handlers.TALKHandler = speechHandler.TALKHandler

    # create the handler object
    talkingHandler = logging.handlers.TALKHandler()

    # Configure the handler to only send SMS for critical errors
    talkingHandler.setLevel(logging.CRITICAL)

    # and finally we add the handler to the logging object
    logger.addHandler(talkingHandler)

    ipAddress = [(s.connect(('8.8.8.8', 80)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]
    ipAddress = " ".join(ipAddress)
    ipAddress = ipAddress.replace(".","dot")
    logger.critical("Current IP Address is " + ipAddress )

Answer: I know this was asked ages ago - I missed it when it was posted and stumbled on it by chance. The following works for me:

    import logging
    import subprocess
    import sys

    class TalkHandler(logging.Handler):
        def emit(self, record):
            msg = self.format(record)
            cmd = ['espeak', '-ven+f3', '-s150', msg]
            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                 stderr=subprocess.STDOUT)
            p.communicate()

    def configure_logging():
        h = TalkHandler()
        root = logging.getLogger()
        root.addHandler(h)
        root.setLevel(logging.DEBUG)

    def main():
        logging.info('Hello')
        logging.debug('Goodbye')

    if __name__ == '__main__':
        configure_logging()
        sys.exit(main())

When I run it, I hear "Hello" and then "Goodbye" spoken.
Gammu how to load balance between two modems Question: I have two GSM USB modems connected to my server, and we use gammu to send SMS. We would like to load balance between the two modems, so that if 100 messages are sent in a day, 50 go from the first and 50 go from the second. We are using the standard Python code copied from the Gammu web site, as given below:

    import gammu

    sm = gammu.StateMachine()
    sm.ReadConfig()
    sm.Init()

    message = {
        'Text': 'python-gammu testing message',
        'SMSC': {'Location': 1},
        'Number': '+420800123465',
    }

    sm.SendSMS(message)

How do we do that? Currently it is very haphazard. We need this balancing to get the best out of the telco provider's offer of free texts per day. Help appreciated.

Answer: I have solved this. I created a configuration file for each modem, and then wrote a random file-picker function, something as trivial as below:

    def randfile():
        return random.sample([gammurc1_path, gammurc2_path], 1)[0]

Then I call sm.ReadConfig(FileName=randfile()) and it's working comfortably.
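Random selection only balances in expectation; if a strict 50/50 alternation is wanted, a round-robin iterator guarantees it in a long-running sender. A sketch (the config paths are placeholders, and the FileName keyword mirrors the call in the answer above):

    import itertools
    import gammu

    config_files = itertools.cycle(['/etc/gammurc-modem1', '/etc/gammurc-modem2'])

    def send_sms(message):
        sm = gammu.StateMachine()
        sm.ReadConfig(FileName=next(config_files))  # strictly alternate modems
        sm.Init()
        sm.SendSMS(message)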
Creating lowpass filter in SciPy - understanding methods and units Question: I am trying to filter a noisy heart rate signal with python. Because heart rates should never be above roughly 220 beats per minute, I want to filter out all noise above 220 bpm. I converted 220/minute into 3.66666666 Hertz and then converted that Hertz to rad/s to get 23.0383461 rad/sec. The sampling frequency of the chip that takes the data is 30 Hz, so I converted that to rad/s to get 188.495559 rad/s. After looking up some stuff online, I found some functions for a bandpass filter that I wanted to turn into a lowpass. [Here is the link to the bandpass code](http://wiki.scipy.org/Cookbook/ButterworthBandpass), so I converted it to this:

    from scipy.signal import butter, lfilter
    from scipy.signal import freqs

    def butter_lowpass(cutOff, fs, order=5):
        nyq = 0.5 * fs
        normalCutoff = cutOff / nyq
        b, a = butter(order, normalCutoff, btype='low', analog = True)
        return b, a

    def butter_lowpass_filter(data, cutOff, fs, order=4):
        b, a = butter_lowpass(cutOff, fs, order=order)
        y = lfilter(b, a, data)
        return y

    cutOff = 23.1 #cutoff frequency in rad/s
    fs = 188.495559 #sampling frequency in rad/s
    order = 20 #order of filter

    #print sticker_data.ps1_dxdt2
    y = butter_lowpass_filter(data, cutOff, fs, order)
    plt.plot(y)

I am very confused by this, though, because I am pretty sure the butter function takes the cutoff and sampling frequency in rad/s, but I seem to be getting a weird output. Is it actually in Hz? Secondly, what is the purpose of these two lines:

    nyq = 0.5 * fs
    normalCutoff = cutOff / nyq

I know it is something about normalization, but I thought the Nyquist frequency was 2 times the sampling frequency, not one half. And why is the Nyquist frequency used as a normalizer? Can someone explain more about how to create filters with these functions? I plotted the filter using

    w, h = signal.freqs(b, a)
    plt.plot(w, 20 * np.log10(abs(h)))
    plt.xscale('log')
    plt.title('Butterworth filter frequency response')
    plt.xlabel('Frequency [radians / second]')
    plt.ylabel('Amplitude [dB]')
    plt.margins(0, 0.1)
    plt.grid(which='both', axis='both')
    plt.axvline(100, color='green') # cutoff frequency
    plt.show()

and [got this](http://i.imgur.com/weqB1sw.png), which clearly does not cut off at 23 rad/s Answer: A few comments: * The [Nyquist frequency](http://en.wikipedia.org/wiki/Nyquist_frequency) is half the sampling rate. * You are working with regularly sampled data, so you want a digital filter, not an analog filter. This means you should not use `analog=True` in the call to `butter`, and you should use `scipy.signal.freqz` (not `freqs`) to generate the frequency response. * One goal of those short utility functions is to allow you to leave all your frequencies expressed in Hz. You shouldn't have to convert to rad/sec. As long as you express your frequencies with consistent units, the scaling in the utility functions takes care of the normalization for you. Here's my modified version of your script, followed by the plot that it generates. import numpy as np from scipy.signal import butter, lfilter, freqz import matplotlib.pyplot as plt def butter_lowpass(cutoff, fs, order=5): nyq = 0.5 * fs normal_cutoff = cutoff / nyq b, a = butter(order, normal_cutoff, btype='low', analog=False) return b, a def butter_lowpass_filter(data, cutoff, fs, order=5): b, a = butter_lowpass(cutoff, fs, order=order) y = lfilter(b, a, data) return y # Filter requirements.
    order = 6
    fs = 30.0       # sample rate, Hz
    cutoff = 3.667  # desired cutoff frequency of the filter, Hz

    # Get the filter coefficients so we can check its frequency response.
    b, a = butter_lowpass(cutoff, fs, order)

    # Plot the frequency response.
    w, h = freqz(b, a, worN=8000)
    plt.subplot(2, 1, 1)
    plt.plot(0.5*fs*w/np.pi, np.abs(h), 'b')
    plt.plot(cutoff, 0.5*np.sqrt(2), 'ko')
    plt.axvline(cutoff, color='k')
    plt.xlim(0, 0.5*fs)
    plt.title("Lowpass Filter Frequency Response")
    plt.xlabel('Frequency [Hz]')
    plt.grid()

    # Demonstrate the use of the filter.
    # First make some data to be filtered.
    T = 5.0          # seconds
    n = int(T * fs)  # total number of samples
    t = np.linspace(0, T, n, endpoint=False)
    # "Noisy" data. We want to recover the 1.2 Hz signal from this.
    data = np.sin(1.2*2*np.pi*t) + 1.5*np.cos(9*2*np.pi*t) + 0.5*np.sin(12.0*2*np.pi*t)

    # Filter the data, and plot both the original and filtered signals.
    y = butter_lowpass_filter(data, cutoff, fs, order)

    plt.subplot(2, 1, 2)
    plt.plot(t, data, 'b-', label='data')
    plt.plot(t, y, 'g-', linewidth=2, label='filtered data')
    plt.xlabel('Time [sec]')
    plt.grid()
    plt.legend()

    plt.subplots_adjust(hspace=0.35)
    plt.show()

![lowpass example](http://i.stack.imgur.com/NvTeN.png)
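One caveat, hedged: `lfilter` applies the filter causally, so the filtered output lags the input by the filter's group delay. If you are processing recorded data offline and a zero-phase result is acceptable, `scipy.signal.filtfilt` runs the same coefficients forward and then backward; a minimal sketch reusing the functions above:

    from scipy.signal import filtfilt

    # Zero-phase variant: filtering forward then backward cancels the
    # phase delay (at the cost of effectively doubling the order).
    b, a = butter_lowpass(cutoff, fs, order)
    y_nodelay = filtfilt(b, a, data)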
Python Requests gives different page text than Internet Explorer Question: Looking at my Stack Overflow user profile page: <http://stackoverflow.com/users/2683104/roberto>

The site indicates I have been a member for 316 days (screenshots at end of post). If I `view source` in my browser (IE11), I can see this data comes from a `days-visited` class. But if I look for this same `days-visited` information using Python Requests, the data does not appear anywhere. Why?

    from requests import Session
    from BeautifulSoup import BeautifulSoup

    s = Session()
    url = 'http://stackoverflow.com/users/2683104/roberto'
    page = s.get(url)
    soup = BeautifulSoup(page.text)
    print soup.prettify() #server response, prettified

    # following returns error
    # AttributeError: 'NoneType' object has no attribute 'getText'
    #days_visited = soup.find('span', attrs={'id':'days-visited'}).getText()

    s.close()

**screenshot** ![screenshot](http://i.stack.imgur.com/vvYoI.png)

**view source** ![view_source](http://i.stack.imgur.com/7uXB6.png)

**python Requests** ![python_requests](http://i.stack.imgur.com/RSMoA.png)

Answer: That field is not visible to your script (or other users). If you want to scrape that piece of information, you will need to have your script log in and store the appropriate cookies.

This is what is seen by users who aren't you:

![Profile](http://i.stack.imgur.com/BM6VA.png)

And the code block they see:

    <tbody>
      <tr>
        <th>visits</th>
        <td>member for</td>
        <td class="cool" title="2013-08-14 15:38:01Z">11 months</td>
      </tr>
      <tr>
        <th></th>
        <td>seen</td>
        <td class="supernova" title="2014-08-08 05:26:50Z">
          <span title="2014-08-08 05:26:50Z" class="relativetime">6 mins ago</span>
        </td>
      </tr>
    </tbody>

Normally, I'd recommend against scraping Stack Overflow for data and suggest using the [API](https://api.stackexchange.com/docs) instead, but this particular piece of information isn't returned as part of the [User](https://api.stackexchange.com/docs/types/user) object.
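If the account's age (rather than the days-visited counter itself) is enough, a sketch of the API route, following the endpoint layout documented at the links above; `creation_date` is part of the public User object even though days-visited is not:

    import requests

    # 2683104 is the user id taken from the profile URL in the question.
    resp = requests.get('https://api.stackexchange.com/2.2/users/2683104',
                        params={'site': 'stackoverflow'})
    user = resp.json()['items'][0]
    print user['creation_date']  # Unix timestamp; membership age = now - this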
Shortest way / Shorthand to declare variable in go Question: We can use the following syntax for Go variable declarations:

    var num int
    var str string

But is there any shorthand in Go for doing the same thing? For example, in Python we can simply say:

    num = 13
    strings = "Hello World"

or even

    num, strings = 13, "Hello World"

Answer: The [variable declaration](http://golang.org/ref/spec#Variable_declarations) can initialize multiple variables:

    var x, y float32 = -1, -2

Or (short variable declaration with `:=`):

    i, j := 0, 10

So this would work: [play.golang.org](http://play.golang.org/p/ytzTJRKQN5)

    package main

    import "fmt"

    func main() {
        a, b := 1, "e"
        fmt.Printf("Hello, playground %v %v", b, a)
    }

Output:

    Hello, playground e 1
Using all elements of a list as argument to a system command (netCDF operator) in a python code Question: I have a Python script that performs some operations on netCDF files. It has the names of the netCDF files in a list. I want to calculate the ensemble average of these netCDF files using the netCDF operator ncea (the netCDF ensemble average). However, to call NCO, I need to pass all list elements as arguments, as follows:

    filelist = [file1.ncf file2.ncf file3.ncf ........ file50.ncf]

    ncea file1.ncf file2.ncf ......file49.ncf file50.ncf output.cdf

Any idea how this can be achieved? Any help is greatly appreciated. Answer:

    import subprocess
    import shlex

    args = 'ncea file1.ncf file2.ncf ......file49.ncf file50.ncf output.cdf'
    args = shlex.split(args)
    p = subprocess.Popen(args, stdout=subprocess.PIPE)
    print p.stdout.read() # Print stdout if you need.
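Since the filenames are already held in a Python list, a sketch that skips the string-and-shlex step and builds the argument vector directly (the generated names below are placeholders for whatever the list actually contains):

    import subprocess

    filelist = ['file%d.ncf' % i for i in range(1, 51)]  # placeholder names

    # Command, then every input file, then the output file. No shell
    # quoting issues, because the list is passed straight to exec.
    cmd = ['ncea'] + filelist + ['output.cdf']
    subprocess.check_call(cmd)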
SWIG with preprocessor macro from boost preprocessor Question: I'm utilizing the enum with ToString implementation that was suggested here: [How to convert an enum type variable to a string?](http://stackoverflow.com/questions/5093460/how-to-convert-an-enum-type-variable-to-a-string)

It works fine as far as I can tell. My issues arise when I try to wrap and export the macro to a Python library wrapped with SWIG. Similar question: [SWIG errors because of preprocessor directive](http://stackoverflow.com/questions/10760287/swig-errors-because-of-preprocessor-directive)

There, the solution was to add headers / declarations to the SWIG interface. I haven't had success with this so far. Chances are that I just don't know what I have to add. Tried:

    %include <boost/preprocessor/config/config.hpp>
    %include <boost/preprocessor/stringize.hpp>
    %include <boost/preprocessor/seq/for_each.hpp>
    %include <boost/preprocessor/seq/enum.hpp>

MWE: minimal.h

    #ifndef MINIMAL_H
    #define MINIMAL_H

    #include <boost/preprocessor.hpp>

    //Found this here: http://stackoverflow.com/questions/5093460/how-to-convert-an-enum-type-variable-to-a-string
    #define X_DEFINE_ENUM_WITH_STRING_CONVERSIONS_TOSTRING_CASE(r, data, elem) \
        case elem : return BOOST_PP_STRINGIZE(elem);

    #define DEFINE_ENUM_WITH_STRING_CONVERSIONS(name, enumerators)             \
        enum name {                                                            \
            BOOST_PP_SEQ_ENUM(enumerators)                                     \
        };                                                                     \
                                                                               \
        inline const char* ToString(name v)                                    \
        {                                                                      \
            switch (v)                                                         \
            {                                                                  \
                BOOST_PP_SEQ_FOR_EACH(                                         \
                    X_DEFINE_ENUM_WITH_STRING_CONVERSIONS_TOSTRING_CASE,       \
                    name,                                                      \
                    enumerators                                                \
                )                                                              \
                default: return "[Unknown " BOOST_PP_STRINGIZE(name) "]";      \
            }                                                                  \
        }

    DEFINE_ENUM_WITH_STRING_CONVERSIONS(my_enum, (A)(B))

    #endif

minimal.cpp

    #include <iostream>
    #include "minimal.h"

    int main(){
        using namespace std;
        cout << A << ": " << ToString(A) << endl;
        cout << B << ": " << ToString(B) << endl;
    }

minimal.i

    %module minimal

    %{
    #include "minimal.h"
    %}

    %include "minimal.h"

The error is not very indicative. Line 29 is the actual definition of my_enum.

    matthias@rp3deb:~/dvl/swig_boost_minimal$ swig minimal.i
    minimal.h:29: Error: Syntax error in input(1).

Any advice on how I could wrap this? Answer: If you wanted to make SWIG read boost/preprocessor.hpp you'd do that with:

    %module minimal

    %{
    #include "minimal.h"
    %}

    %include <boost/preprocessor.hpp>
    %include "minimal.h"

since by default SWIG doesn't follow `#include` directives. (You could also use `-includeall` to make it follow them instead).

In this case though I think making the SWIG preprocessor make any kind of sense of the crazy magic that the Boost preprocessor library uses is a lost cause. Instead we can try to get something with equally nice, but "Pythonic", syntax. In essence what we're going to do is write a totally different version of `DEFINE_ENUM_WITH_STRING_CONVERSIONS` for SWIG wrappers only. It will be compatible with the definitions seen by C++ though.

To do this I'm going to start by splitting your file minimal.h into two files. One with the macro definition and one that uses it. (We could have done this different ways, for example by wrapping the macro definitions with `#ifndef DEFINE_ENUM_WITH_STRING_CONVERSIONS` or `#ifndef SWIG`, which would be equally valid solutions).
Thus we now have enum.h:

    #ifndef ENUM_H
    #define ENUM_H

    #include <boost/preprocessor.hpp>

    //Found this here: http://stackoverflow.com/questions/5093460/how-to-convert-an-enum-type-variable-to-a-string
    #define X_DEFINE_ENUM_WITH_STRING_CONVERSIONS_TOSTRING_CASE(r, data, elem) \
        case elem : return BOOST_PP_STRINGIZE(elem);

    #define DEFINE_ENUM_WITH_STRING_CONVERSIONS(name, enumerators)             \
        enum name {                                                            \
            BOOST_PP_SEQ_ENUM(enumerators)                                     \
        };                                                                     \
                                                                               \
        inline const char* ToString(name v)                                    \
        {                                                                      \
            switch (v)                                                         \
            {                                                                  \
                BOOST_PP_SEQ_FOR_EACH(                                         \
                    X_DEFINE_ENUM_WITH_STRING_CONVERSIONS_TOSTRING_CASE,       \
                    name,                                                      \
                    enumerators                                                \
                )                                                              \
                default: return "[Unknown " BOOST_PP_STRINGIZE(name) "]";      \
            }                                                                  \
        }

    #endif

And minimal.h:

    #ifndef MINIMAL_H
    #define MINIMAL_H

    #include "enum.h"

    DEFINE_ENUM_WITH_STRING_CONVERSIONS(my_enum, (A)(B))

    #endif

So your minimal.cpp continues to work as before, but now we can write a SWIG module that at least compiles, even if it doesn't do anything useful yet:

    %module minimal

    %{
    #include "minimal.h"
    %}

    %define DEFINE_ENUM_WITH_STRING_CONVERSIONS(name,enumerators)
    %enddef

    %include "minimal.h"

This currently has a stub, SWIG-specific macro that we're going to fill out. It's a little ugly how I've done this, simply because I'm trying to avoid changing the way the existing macro is defined/used at all. What I produced as a starting point is another file, enum.i:

    %include <std_vector.i>
    %include <std_string.i>

    %{
    #include <vector>
    #include <string>
    #include <tuple>
    %}

    %define DEFINE_ENUM_WITH_STRING_CONVERSIONS(name,enumerators)
    %{
    typedef std::tuple<name,std::string> name ## _entry;
    struct name ## _helper {
      std::vector<name ## _entry> list;
      name ## _helper(const name value) {
        list.push_back(std::make_tuple(value,ToString(value)));
      }
      name ## _helper operator()(const name value) {
        list.push_back(std::make_tuple(value,ToString(value)));
        return *this;
      }
    };

    static const std::vector<name ## _entry> name ## _list = name ## _helper enumerators . list;
    %}

    struct name ## _entry {
      %extend {
        const unsigned long value {
          return std::get<0>(*$self);
        }
        const std::string& label {
          return std::get<1>(*$self);
        }
      }
    };

    %template(name ## vec) std::vector<name ## _entry>;

    const std::vector<name ## _entry> name ## _list;

    %enddef

Such that minimal.i just needs to become:

    %module minimal

    %{
    #include "minimal.h"
    %}

    %include "enum.i"
    %include "minimal.h"

All that macro does is take the value of `enumerators`, which is going to be something like `(A)(B)`, and generate some code that's completely standard (if quirky) C++ that expands this into a `std::vector<std::tuple<my_enum,std::string>>`. That's done by mapping the first enum member onto a constructor call, and the rest onto an overloaded `operator()`. We use the `ToString()` supplied by enum.h to find the string representation. Finally our macro has enough information to wrap the vector of tuples in a way which makes sense from within Python.

With this in place we can do something like:

    import minimal
    print ", ".join(("%s(%d)" % (x.label,x.value) for x in minimal.my_enum_list))

Which, when compiled and run gives:

    A(0), B(1)

I.e. enough to start writing Python code that's aware of both the label and the value of a C++ enum. But let's not stop there! Why did I deliberately call the resulting vector `my_enum_list` instead of just `my_enum`? Because there's more we can do now. Python 2.7 doesn't have any default "enum-ish", but that doesn't prevent us from wrapping this as something both Pythonic and natural to people who know about enums.
I made my Python 2.7 enum support by reading [this other answer](http://stackoverflow.com/a/1695250/168175). To start with I added some generic enum support routines to the file using `%pythoncode` (labelled #1 in the final source), but outside the SWIG macro since there's no need to vary it. I also added a `%pythoncode` inside the SWIG macro (labelled #2) that invokes this once per actual enum. In order to make this work I had to convert the `const std::vector` from the previous version into a function, so that it was accessible in the right part of the generated Python. Finally I had to show SWIG a forward declaration of the real enum, in order to persuade it to actually accept that as an argument to functions. The final result is:

    %include <std_vector.i>
    %include <std_string.i>

    %{
    #include <vector>
    #include <string>
    #include <tuple>
    %}

    // #1
    %pythoncode %{
    class EnumValue(int):
        def __new__(cls,v,l):
            result = super(EnumValue,cls).__new__(cls,v)
            result._value = l
            return result
        def __str__(self):
            return self._value

    def make_enum(name,enums):
        return type(name, (), enums)
    %}

    %define DEFINE_ENUM_WITH_STRING_CONVERSIONS(name,enumerators)
    %{
    typedef std::tuple<name,std::string> name ## _entry;
    struct name ## _helper {
      std::vector<name ## _entry> list;
      name ## _helper(const name value) {
        list.push_back(std::make_tuple(value,ToString(value)));
      }
      name ## _helper operator()(const name value) {
        list.push_back(std::make_tuple(value,ToString(value)));
        return *this;
      }
    };

    static const std::vector<name ## _entry> name ## _list() {
      return name ## _helper enumerators . list;
    }
    %}

    struct name ## _entry {
      %extend {
        const unsigned long value {
          return std::get<0>(*$self);
        }
        const std::string& label {
          return std::get<1>(*$self);
        }
      }
    };

    %template(name ## vec) std::vector<name ## _entry>;

    const std::vector<name ## _entry> name ## _list();

    // #2
    %pythoncode %{
    name = make_enum('name', {x.label: EnumValue(x.value, x.label) for x in name ## _list()})
    %}

    enum name;

    %enddef

I added a function to minimal.i to prove it really does work:

    %module minimal

    %{
    #include "minimal.h"
    %}

    %include "enum.i"
    %include "minimal.h"

    %inline %{
    void foo(const my_enum& v) {
      std::cerr << "GOT: " << v << "\n";
    }
    %}

And finally test it with:

    import minimal

    print minimal.my_enum
    print minimal.my_enum.A
    print minimal.my_enum.B

    minimal.foo(minimal.my_enum.B)

Which you'll be pleased to see worked and resulted in:

    <class 'minimal.my_enum'>
    A
    B
    GOT: 1

If you're using Python 3 there's a possibly nicer way to [represent enums](https://docs.python.org/3/library/enum.html#module-enum), but I'll leave that as an exercise for the reader for now.
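For that Python 3 exercise, a hedged sketch of what `make_enum` from block #1 could become: the stdlib `enum` module's functional API accepts a mapping of names to values, so only that helper needs to change (untested against the generated wrapper here, but the API calls themselves are standard):

    import enum

    def make_enum(name, members):
        # members maps label -> EnumValue; plain ints work too, since
        # IntEnum's functional API only needs name/value pairs.
        return enum.IntEnum(name, {k: int(v) for k, v in members.items()})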
Flask SQLAlchemy Tables exist but still get no such table error Question: I'm having a problem with my Flask database. I have opened the db file and found that the tables exist, but when I try to create a new user in the database I get a "no such table" error. Here is my config.py file for the database:

    CSRF_ENABLED = True
    SECRET_KEY = 'asshat'

    import os
    basedir = os.path.abspath(os.path.dirname(__file__))

    SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'app.db')
    SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')

My app folder structure is:

    pineapple/
    -------app.db
    -------config.py
    -------xapp.wsgi
    -------venv/
    -------app/
    -------------app.py
    -------------views.py
    -------------models.py
    -------------forms.py
    -------------static/
    -------------templates/

Everything works fine when I open the site URL, but when I try to sign up it returns a 500 internal server error. So I check the Apache error logs and I get this:

    [Fri Aug 08 02:09:55.819330 2014] [:error] [pid 12452] ERROR:app:Exception on /signup [POST]
    [Fri Aug 08 02:09:55.819403 2014] [:error] [pid 12452] Traceback (most recent call last):
    [Fri Aug 08 02:09:55.819413 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    [Fri Aug 08 02:09:55.819422 2014] [:error] [pid 12452] response = self.full_dispatch_request()
    [Fri Aug 08 02:09:55.819430 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    [Fri Aug 08 02:09:55.819439 2014] [:error] [pid 12452] rv = self.handle_user_exception(e)
    [Fri Aug 08 02:09:55.819447 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    [Fri Aug 08 02:09:55.819456 2014] [:error] [pid 12452] reraise(exc_type, exc_value, tb)
    [Fri Aug 08 02:09:55.819464 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    [Fri Aug 08 02:09:55.819473 2014] [:error] [pid 12452] rv = self.dispatch_request()
    [Fri Aug 08 02:09:55.819481 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    [Fri Aug 08 02:09:55.819490 2014] [:error] [pid 12452] return self.view_functions[rule.endpoint](**req.view_args)
    [Fri Aug 08 02:09:55.819499 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/app/views.py", line 33, in signup
    [Fri Aug 08 02:09:55.819507 2014] [:error] [pid 12452] db.session.commit()
    [Fri Aug 08 02:09:55.819515 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 150, in do
    [Fri Aug 08 02:09:55.819524 2014] [:error] [pid 12452] return getattr(self.registry(), name)(*args, **kwargs)
    [Fri Aug 08 02:09:55.819532 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 776, in commit
    [Fri Aug 08 02:09:55.819540 2014] [:error] [pid 12452] self.transaction.commit()
    [Fri Aug 08 02:09:55.819549 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 377, in commit
    [Fri Aug 08 02:09:55.819583 2014] [:error] [pid 12452] self._prepare_impl()
    [Fri Aug 08 02:09:55.819591 2014] [:error] [pid 12452] File
"/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 357, in _prepare_impl [Fri Aug 08 02:09:55.819599 2014] [:error] [pid 12452] self.session.flush() [Fri Aug 08 02:09:55.819606 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1919, in flush [Fri Aug 08 02:09:55.819613 2014] [:error] [pid 12452] self._flush(objects) [Fri Aug 08 02:09:55.819621 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2037, in _flush [Fri Aug 08 02:09:55.819629 2014] [:error] [pid 12452] transaction.rollback(_capture_exception=True) [Fri Aug 08 02:09:55.819636 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__ [Fri Aug 08 02:09:55.819644 2014] [:error] [pid 12452] compat.reraise(exc_type, exc_value, exc_tb) [Fri Aug 08 02:09:55.819651 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2001, in _flush [Fri Aug 08 02:09:55.819659 2014] [:error] [pid 12452] flush_context.execute() [Fri Aug 08 02:09:55.819666 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 372, in execute [Fri Aug 08 02:09:55.819673 2014] [:error] [pid 12452] rec.execute(self) [Fri Aug 08 02:09:55.819681 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 526, in execute [Fri Aug 08 02:09:55.819688 2014] [:error] [pid 12452] uow [Fri Aug 08 02:09:55.819695 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 65, in save_obj [Fri Aug 08 02:09:55.819703 2014] [:error] [pid 12452] mapper, table, insert) [Fri Aug 08 02:09:55.819711 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 602, in _emit_insert_statements [Fri Aug 08 02:09:55.819719 2014] [:error] [pid 12452] execute(statement, params) [Fri Aug 08 02:09:55.819726 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute [Fri Aug 08 02:09:55.819734 2014] [:error] [pid 12452] return meth(self, multiparams, params) [Fri Aug 08 02:09:55.819742 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection [Fri Aug 08 02:09:55.819749 2014] [:error] [pid 12452] return connection._execute_clauseelement(self, multiparams, params) [Fri Aug 08 02:09:55.819757 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement [Fri Aug 08 02:09:55.819765 2014] [:error] [pid 12452] compiled_sql, distilled_params [Fri Aug 08 02:09:55.819772 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context [Fri Aug 08 02:09:55.819780 2014] [:error] [pid 12452] context) [Fri Aug 08 02:09:55.819787 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1160, in 
    _handle_dbapi_exception
    [Fri Aug 08 02:09:55.819795 2014] [:error] [pid 12452] exc_info
    [Fri Aug 08 02:09:55.819802 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    [Fri Aug 08 02:09:55.819819 2014] [:error] [pid 12452] reraise(type(exception), exception, tb=exc_tb)
    [Fri Aug 08 02:09:55.819828 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
    [Fri Aug 08 02:09:55.819835 2014] [:error] [pid 12452] context)
    [Fri Aug 08 02:09:55.819842 2014] [:error] [pid 12452] File "/var/www/html/pineapple-express/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
    [Fri Aug 08 02:09:55.819850 2014] [:error] [pid 12452] cursor.execute(statement, parameters)
    [Fri Aug 08 02:09:55.819858 2014] [:error] [pid 12452] OperationalError: (OperationalError) no such table: user u'INSERT INTO user (nickname, password, email, role) VALUES (?, ?, ?, ?)'

It tells me the tables don't exist, but from physically looking at the database using a database browser I know they are there. Can anyone tell me what I am doing wrong? If there is any info I have failed to provide, just ask and I'll add it to the post.

**EDIT** signup route from views:

    @app.route('/signup', methods=['GET', 'POST'])
    def signup():
        form = Signup(request.form)
        if form.validate_on_submit():
            user = User(form.usrnam.data, form.passw.data, form.email.data)
            db.session.add(user)
            db.session.commit()
            os.makedirs('app/static/%s' % form.usrnam.data)
            flash('You are now a registered user')
            return redirect(url_for('login'))
        return render_template('signup.html', title='Sign Up', form=form)

**EDIT** Here's my User model:

    class User(db.Model):
        id = db.Column(db.Integer, primary_key = True)
        nickname = db.Column(db.String(64), index = True, unique = True)
        password = db.Column(db.String(10), index = True, unique = True)
        email = db.Column(db.String(120), index = True, unique = True)
        role = db.Column(db.SmallInteger, default = ROLE_USER)
        uploads = db.relationship('Upload', backref = 'author', lazy = 'dynamic')
        #registered_on = db.Column('registered_on' , db.DateTime)

        def __init__(self, nickname, password, email):
            self.nickname = nickname
            self.password = password
            self.email = email
            #self.registered_on = datetime.utcnow()

        def is_authenticated(self):
            return True

        def is_active(self):
            return True

        def is_anonymous(self):
            return False

        def get_id(self):
            return unicode(self.id)

        def __repr__(self):
            return '<User %r>' % (self.nickname)

Answer: Could it be that you need to capitalize User when referring to your table in the code?
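A quick diagnostic sketch that complements the guess above, rather than a fix: log which database URI the running app actually resolved, since a process started by Apache can end up with a different working directory or config than the shell where you inspected app.db:

    # Temporary diagnostic -- add near app startup and check the Apache log
    # to confirm the app opened the same app.db you inspected.
    print app.config['SQLALCHEMY_DATABASE_URI']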
Walk two lists (no zip) Question: This is my implementation:

    def walk_two(a, b):
        for x in a:
            for y in b:
                yield x, y

    a = xrange(2)
    b = xrange(3)
    for x, y in walk_two(a, b):
        print x, y

With this output:

    0 0
    0 1
    0 2
    1 0
    1 1
    1 2

Is there a better (more Pythonic) way of doing that? A built-in? A more generic walkN? Answer: You are looking for `itertools.product`:

    from itertools import product

    a = xrange(2)
    b = xrange(3)
    for x, y in product(a, b):
        print x, y
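As for the "more generic walkN" part of the question: `product` already accepts any number of iterables, so the N-way version is just argument unpacking; a small sketch:

    from itertools import product

    def walk_n(*iterables):
        # Cartesian product over however many iterables are passed in.
        return product(*iterables)

    for x, y, z in walk_n(xrange(2), xrange(2), xrange(2)):
        print x, y, z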