How to encrypt a python module assuming the decryption key is well hidden Question: I have been searching the internet for solutions of how to encrypt a python script file and the answer is pretty much the same: "You can't". They all refer to the fact that the key will be exposed somehow. I am developing a standalone electrical system, which uses the Raspberry Pi as the main computer. All the python files are at the moment pretty much exposed if someone steals the SD card. Now I have a key stored in an external device and I'm using that to decrypt some text files. Now my question! Is it possible to somehow encrypt the vital parts of my .py modules' content and still be able to execute them using the external key, without exposing the decrypted .py modules on the SD card? That is to protect the code, if someone steals the SD card and wants to look at what is on it. I know if someone steals the entire system, it is a different story! Hope some of you know something about python module encryption. Best Regards Allan Answer: While, as you mention, it's impossible to protect your python files from _someone with logical access to the system_ (i.e.: someone who logs in as `root`), it's easy and commonplace to protect against _physical theft_ of the storage devices where they reside. You can do it on at least three different levels: * Have an [encrypted filesystem](https://en.wikipedia.org/wiki/Disk_encryption) on the SD card, such as ext4 over [LUKS](https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup), and have it mounted by Linux. [Here's a tutorial](http://realtechtalk.com/LUKScryptsetup_Tutorial_for_Linux_Hard_Drive_Partition_Encryption-1002-articles) * Keep a regular filesystem on the SD card, and use a [file-level encryption overlay](https://en.wikipedia.org/wiki/Filesystem-level_encryption) such as [encFS](https://en.wikipedia.org/wiki/EncFS) * Devise your own encryption scheme in python, possibly using `pycrypto`; decode the modules to strings and either write them to a ramdisk or `exec` the strings directly. Personally, I'd go with the first option, for a few reasons: * It's transparent to your program, so you can be sure it'll keep working and there's no need for `__import__` or `exec` black magic. * It's potentially safer. Devising your own encryption scheme is frowned upon by the vast majority of IT security experts. * It'll make data recovery easier, in case you ever need it. Please note that all these protections are moot if the attacker has full logical access to the system, **or** physical access to the place you store the encryption keys.
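For illustration, here is a minimal sketch of that third option, assuming a hypothetical `read_external_key()` that fetches the key from the external device; pycrypto's AES in CFB mode is used purely as an example of a cipher choice:

    from Crypto.Cipher import AES  # pycrypto

    def read_external_key():
        # Hypothetical: fetch the 16/24/32-byte AES key from the external device.
        raise NotImplementedError

    # Assume the encrypted module is stored as 16 IV bytes followed by ciphertext.
    with open('vital_module.enc', 'rb') as f:
        iv, ciphertext = f.read(16), f.read()

    source = AES.new(read_external_key(), AES.MODE_CFB, iv).decrypt(ciphertext)

    # Execute the decrypted source directly, without writing it to the SD card.
    namespace = {}
    exec(source, namespace)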
Can not import modules from gi.repository Question: I can not import modules from gi.repository. Specifically not Gtk and GObject. I experienced this error both on Ubuntu 14.04 LTS and after reinstall also on Linux Mint 17. from gi.repository import Gtk, GObject Results in the 'unresolved reference' warning for the respective modules. Interestingly enough my Gtk GUI can be compiled and works perfectly fine. Yet, GObject is entirely out of function. I tried to work around with altering import statements such as: from gi.repository.Gtk import* Even hard coding the import path via: sys.path.append('/usr/lib/python2.7/dist-packages/gi') _None of these approaches have solved this frustrating error so far._ I have not found any concluding help or basic info on this issue, neither anywhere on the web nor in Linux forums or here on stackoverflow. I am not sure whether this problem lies on the Python or the Linux side of things. **Can anybody suggest how to solve this issue?** What additional information do I need to provide eventually. Thanks! Answer: If on a Debian-based system such as Ubuntu, you probably need to install the `gir1.2-gtk-3.0` package and other similarly named packages for other libraries. I don't know what the name of the package would be on Mint, but it's probably something similar.
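On Ubuntu, for example, the install would look something like this (other libraries follow the same `gir1.2-<library>-<version>` naming pattern):

    sudo apt-get install gir1.2-gtk-3.0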
pythonic way to try (repeatedly) to catch exceptions Question: I have a script that runs on an unattended machine. If the script throws an error, I want to log the error and keep running as long as it's practical. Here's my first draft of the code, but it doesn't work for reasons explained below. (As a note: a more realistic `concede` function will return true if there are N errors captured within T seconds. Or something.) from contextlib import contextmanager @contextmanager def perseverance(concede = lambda: False): while (True): try: yield except Exception, e: if (concede()): log_exception(e, "conceding") raise else: log_exception(e, "retrying") This would allow me to do something like: def quit_after(n): n = [n] # make n mutable for the closure :P def quitter(): if (n[0] <= 0): return True n[0] -= 1 return False return quitter with perseverance(quit_after(3)): do_complex_script() This doesn't work because the @contextmanager is fussy about exception handling. This code will fail with: Traceback (most recent call last): File "<stdin>", line 2, in <module> File ".../python2.7/contextlib.py", line 36, in __exit__ raise RuntimeError("generator didn't stop after throw()") RuntimeError: generator didn't stop after throw() I could trivially rewrite perseverance() to not use context manager but without its elegant syntax, like this: def with_perseverance(fn, concede = lambda: False): while (True): try: fn() except Exception, e: if (concede()): log_exception(e, "conceding") raise else: log_exception(e, "retrying") and call it like this: with_perseverance(do_complex_script, quit_after(3)) ## the question At the risk of getting down-voted for being too subjective: Is there a better / more pythonic way to write this? Answer: The `contextmanager` isn't fussy about exception handling; it just wants you to only `yield` once. Context managers don't support reentry. If you want that, you have several options. One is to use a `with`-`for` combo: from contextlib import contextmanager class MutableValue: def __init__(self, value): self.value = value @contextmanager def null_context(): yield @contextmanager def catch_and_log(mutable_return_successful): try: yield except Exception as e: print("ERROR:", e) else: mutable_return_successful.value = True def quit_after(n): for _ in range(n-1): successful = MutableValue(False) yield catch_and_log(successful) if successful.value: return yield null_context() for ctx in quit_after(5): with ctx: 1/0 #>>> ERROR: division by zero #>>> ERROR: division by zero #>>> ERROR: division by zero #>>> ERROR: division by zero #>>> Traceback (most recent call last): #>>> File "", line 31, in <module> #>>> ZeroDivisionError: division by zero Because the `with` needs to communicate with the `for`, I had to do a bit of hackery to get the returns right. It's still more readable than a class-based solution. Another much easier option is to abuse decorators: def quit_after(n): def inner(f): for _ in range(n-1): try: f() except Exception as e: print("ERROR:", e) else: return f() return inner @quit_after(5) def _(): 1/0 #>>> ERROR: division by zero #>>> ERROR: division by zero #>>> ERROR: division by zero #>>> ERROR: division by zero #>>> Traceback (most recent call last): #>>> File "", line 44, in <module> #>>> File "", line 40, in inner #>>> File "", line 46, in _ #>>> ZeroDivisionError: division by zero
Python optimise nearest neighbour for large arrays Question: I have an array of points in 4D space (my array is roughly 1,000,000 long). For each value in my array I need to find the kth (where k = 81) nearest neighbours; I then run some further code based on that. _[in brief, I'm effectively trying to extend the SSIM algorithm but for a picture sitting in 4D - I have a list of values with len(my_values)=1,000,000 = len(my_coords). Although as an added complication my 'picture' is not made of points in a simple grid, but is more like a collection of 1,000,000 'point-stars' distributed pseudo-randomly in 4D space - that for different sets of 'my_values' have different brightnesses, but always the same location... That context may not be needed :) ]._ Roughly I need to do: from scipy.spatial import KDTree mytree = KDTree(my_coords) for i in range(len(my_coords)): dist, locs = mytree.query(my_coords[i], k=k) .... result.append(some_function(dist,locs,my_values_pic1[locs],my_values_pic2[locs])) I tried querying mytree with the full 'my_coords' but this caused a memory problem - and obviously that for loop takes forever... so it needs to be optimized. I'm trying to change my loop to do chunks of about 300 points at a time and then stitch together the result - I'm just wondering if I'm missing something obvious that could make this even faster? Answer: KNN only works efficiently for small datasets. In order to deal with larger ones efficiently, you'll have to perform data reduction by selecting prototype points that are capable of representing your classes accurately. [Wikipedia has a good overview of the process](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm#Data_reduction)
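If the full dataset has to stay in play, the chunking idea from the question is still sound, because scipy's tree query is vectorised: you can hand it a whole chunk of points per call instead of one point per loop iteration. A minimal sketch, assuming `my_coords` is an (N, 4) array and using `cKDTree` (the faster C implementation); the chunk size is illustrative:

    import numpy as np
    from scipy.spatial import cKDTree

    my_coords = np.random.rand(1000000, 4)  # stand-in for the real data
    mytree = cKDTree(my_coords)

    k = 81
    chunk = 10000  # tune to fit in memory
    for start in range(0, len(my_coords), chunk):
        batch = my_coords[start:start + chunk]
        # One vectorised query per chunk instead of one query per point.
        dist, locs = mytree.query(batch, k=k)
        # ... apply some_function to this chunk's results here ...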
Python script to alert on empty/missing logs Question: I am working on a project to check a file directory and automatically add log files as they are created. A file is being generated every five minutes, but some of the files are being created with a "0" filesize and I would like to alert when this happens. So the sequence of steps I would like to have are essentially: * Get time (MM:DD:YY HH:MM:SS) *Not sure if I need to do this... * CD to Folder Directory /Netflow/YY/MM/DD * Search for filename "nfcapd.YYYYMMDDHHMM" where MM increments by 5. * If filesize is 0, then email Johnny, Sally and Jimmy * Wait 6 minutes and repeat This is what I have pieced together thus far. How can I get the desired functionality? import os def is_non_zero_file(fpath): storage/Netflow/ return True if os.path.isfile(fpath) and os.path.getsize(fpath) > 0 else False # I need to check storage/Netflow for files named by time e.g 13_56_05.txt while True: time.sleep(360) Answer: In addition to enumerating the files in a given path, and subsequently filtering for the files which are zero-length, you probably want to maintain some type of state to ensure you aren't notified multiple times about the same zero-length file. That is, you probably don't want to get a notification that the same file is zero-length indefinitely (although you can modify the example below if you want said behavior). You may optionally want to do things like verify that the file name strictly meets your naming convention. You may also want to validate that the date-stamp string included in the file name is a valid datetime. The example below uses the glob module (itself leveraging `os.listdir()` and `fnmatch.fnmatch()`) to build up a set of possible files for inclusion. [1] The example is intentionally simple, and leverages a single class to store log sample 'state'. `KEEP_SAMPLES` samples are maintained (instances of `logState()` in the `log_states` list), achieved by using list slicing. A single `alert(msg)` function is supplied as a stub for something that might send mail, etc... 
References: [1] <https://docs.python.org/3.2/library/glob.html> #!/usr/bin/python3 import os import glob import re from datetime import datetime, timezone import time from pprint import pprint class logState(): def __init__(self, log_path, glob_patt, re_patt, dt_fmt): self.dt = datetime.now(timezone.utc) self.log_path = log_path self.glob_patt = glob_patt self.re_patt = re_patt self.dt_fmt = dt_fmt self.empty_logs = [] self.nonempty_logs = [] # Retrieve only files from glob self.files = [ f for f in glob.glob(self.log_path + self.glob_patt) if os.path.isfile(f) ] for f in self.files: unq_fname = f.split('/')[-1] if unq_fname == None: continue # Tighter pattern matching if re.match(re_patt, unq_fname) == None: continue # Get the datetime portion of the file name f_dtstamp = unq_fname.split('.')[-1] # Make sure the datetime stamp represents # a valid date if datetime.strptime(f_dtstamp, self.dt_fmt) == None: continue # Check file size, add to the appropriate # list if os.path.getsize(f) <= 0: self.empty_logs.append(f) else: self.nonempty_logs.append(f) def alert(msg): print("ALERT!: {0}".format(msg)) if __name__ == "__main__": # How long to sleep SLEEP_SECS = 5 # How many samples to keep KEEP_SAMPLES = 5 log_states = [] # Definition for what logs states we'll look for log_path = './' glob_patt = 'nfcapd.[0-9]*' re_patt = 'nfcapd.([0-9]{12})' dt_fmt = "%Y%m%d%H%M" print("-- Setup --") print("Sample files in '{0}'".format(log_path)) print("\t{0} samples kept:".format(KEEP_SAMPLES)) print("\tglob pattern: '{0}'".format(glob_patt)) print("\tregex pattern: '{0}'".format(re_patt)) print("\tdatetime string: '{0}'".format(dt_fmt)) print("") # Collect the initial state log_states.append(logState(log_path, glob_patt, re_patt, dt_fmt)) while True: # Print state inventory and current state detail print( "-- Log States Stored --") for i, log_state in enumerate(log_states): print("Log state {0} @ {1}".format(i, log_state.dt)) print(" -- Logs size > 0 --") pprint(log_states[-1].nonempty_logs) print(" -- Logs size <= 0 --") pprint(log_states[-1].empty_logs) print("") time.sleep(SLEEP_SECS) log_states = log_states[-KEEP_SAMPLES+1:] log_states.append(logState(log_path, glob_patt, re_patt, dt_fmt)) # p = previous sample, c = current p = set(log_states[-2].empty_logs) c = set(log_states[-1].empty_logs) # only report the items in the current sample # not in the last if len(c.difference(p)) > 0: alert("\nNew zero length logs: " + str(c.difference(p)) + "\n")
Image sorter by created date in python Question: I need to make a program that sorts all the .jpg files in a directory, or simply all the files in the directory (because they are all jpg, so it doesn't have to be specifically for .jpgs), and that, depending on the date they were made, puts them in two separate folders: say for example before 2012 in one folder, and after 2012 in the other. The thing that I don't know how to do is how to get the program to read through the properties of all of the files/.jpgs in the folder. After that I think I know what to do: store the dates in a variable, create the two folders, and then I was thinking of an if statement that transfers the files by comparing their dates. But how do I tell the program to do this for every file, no matter how many there are? Because I don't know how many files there are in the folder. Thanks!!! Answer: Here is some quick code to get you in the right direction. It goes through all files in a folder and moves them into two separate subfolders. You can modify this to use modify-date instead of create-date, or to get just .jpg files if you need. import os, datetime folder_name = "MY_PATH" for file_name in os.listdir(folder_name): file_name_full = os.path.join(folder_name, file_name) if not os.path.isfile(file_name_full): continue timestamp = os.path.getctime(file_name_full) dt = datetime.datetime.fromtimestamp(timestamp) # print out the datestamp print dt # if you now want to put them into subfolders based on the year 2012 # you can do something like the following yr = dt.year subfolder_name = "" if yr < 2012: subfolder_name = "before 2012" else: subfolder_name = "2012 and after" subfolder_name_full = os.path.join(folder_name, subfolder_name) new_file_name_full = os.path.join(subfolder_name_full, file_name) # I assume your folders are premade. If not, you can do a quick-and-dirty mkdir here print "Moving %s -> %s" % (file_name_full, new_file_name_full) os.rename(file_name_full, new_file_name_full)
GNU Radio filter design tool (gr_filter_design) Question: I am having some trouble getting the filter design tool to even start. When starting the application I get "This example requires a Numerical Python Extension, but failed to import either NumPy, or numarray, or Numeric. NumPy is available at http://sourceforge.net/projects/numpy". I have rebuilt GNU Radio a couple of times now, and I am fairly sure that I have everything installed that is required. I do have numpy installed, and I have tried a couple of versions just to be safe. Has anyone else had this problem? Answer: You are getting this error because from PyQt4.Qwt5.anynumpy import * in polezero_plot.py (/usr/lib/python2.7/site-packages/gnuradio/filter) is failing. Just try replacing from PyQt4.Qwt5.anynumpy import * (line 25) with from scipy import zeros or from numpy import zeros
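In other words, the edit would look something like this (path and line number as quoted above; yours may differ):

    # /usr/lib/python2.7/site-packages/gnuradio/filter/polezero_plot.py, line 25
    # Replace the failing import:
    from PyQt4.Qwt5.anynumpy import *
    # with either of:
    from numpy import zeros
    # or:
    from scipy import zeros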
Python.h not found using swig and Anaconda Python Question: I'm trying to compile a simple python/C example following this tutorial: <http://www.swig.org/tutorial.html> I'm on MacOS using Anaconda python. however, when I run gcc -c example.c example_wrap.c -I/Users/myuser/anaconda/include/ I get: example_wrap.c:130:11: fatal error: 'Python.h' file not found # include <Python.h> ^ It seems that this problem is reported in a number of questions: [Missing Python.h while trying to compile a C extension module](http://stackoverflow.com/questions/4097339/missing-python-h-while- trying-to-compile-a-c-extension-module) [Missing Python.h and impossible to find](http://stackoverflow.com/questions/23170678/missing-python-h-and- impossible-to-find) [Python.h: No such file or directory](http://stackoverflow.com/questions/11041299/python-h-no-such-file- or-directory) but none seem to provide an answer specific to Anaconda on MacOS Anyone solved this? Answer: Use the option `-I/Users/myuser/anaconda/include/python2.7` in the `gcc` command. (That's assuming you are using python 2.7. Change the name to match the version of python that you are using.) You can use the command `python- config --cflags` to get the full set of recommended compilation flags: $ python-config --cflags -I/Users/myuser/anaconda/include/python2.7 -I/Users/myuser/anaconda/include/python2.7 -fno-strict-aliasing -I/Users/myuser/anaconda/include -arch x86_64 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes However, to build the extension module, I recommend using a simple setup script, such as the following `setup.py`, and let `distutils` figure out all the compiling and linking options for you. # setup.py from distutils.core import setup, Extension example_module = Extension('_example', sources=['example_wrap.c', 'example.c']) setup(name='example', ext_modules=[example_module], py_modules=["example"]) Then you can run: $ swig -python example.i $ python setup.py build_ext --inplace (Take a look at the compiler commands that are echoed to the terminal when `setup.py` is run.) `distutils` knows about SWIG, so instead of including `example_wrap.c` in the list of source files, you can include `example.i`, and `swig` will be run automatically by the setup script: # setup.py from distutils.core import setup, Extension example_module = Extension('_example', sources=['example.c', 'example.i']) setup(name='example', ext_modules=[example_module], py_modules=["example"]) With the above version of `setup.py`, you can build the extension module with the single command $ python setup.py build_ext --inplace Once you've built the extension module, you should be able to use it in python: >>> import example >>> example.fact(5) 120 If you'd rather not use the script `setup.py`, here's a set of commands that worked for me: $ swig -python example.i $ gcc -c -I/Users/myuser/anaconda/include/python2.7 example.c example_wrap.c $ gcc -bundle -undefined dynamic_lookup -L/Users/myuser/anaconda/lib example.o example_wrap.o -o _example.so Note: I'm using Mac OS X 10.9.4: $ gcc --version Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn) Target: x86_64-apple-darwin13.3.0 Thread model: posix
Error in .vdisplay.start() - xvfbwrapper is not working Question: I installed the xvfbwrapper python module and start the display in a python selenium test on Mac: from xvfbwrapper import Xvfb class San(unittest.TestCase): def setUp(self): #display = Display(visible=0, size=(1480, 2000)) #display.start() self.vdisplay = Xvfb(width=1480, height=2000) self.vdisplay.start() self.driver = webdriver.Chrome() While running the test case, this error occurs: Traceback (most recent call last): File "san98.py", line 34, in setUp self.vdisplay.start() File "/Library/Python/2.7/site-packages/xvfbwrapper.py", line 53, in start stderr=open(os.devnull), File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__ errread, errwrite) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1308, in _execute_child raise child_exception OSError: [Errno 2] No such file or directory How do I fix this issue? Answer: try this sudo apt-get install xvfb
Appending np.arrays to a blank array in Python Question: I am trying to save the results from a loop in a np.array. import numpy as np p=np.array([]) points= np.array([[3,0,0],[-1,0,0]]) for i in points: for j in points: if j[0]!=0: n=i+j p= np.append(p,n) However the resulting array is a 1D array of 6 members. [2. 0. 0. -2. 0. 0.] Instead I am looking for, but have been unable to produce: [[2,0,0],[-2,0,0]] Is there any way to get the result above? Thank you. Answer: One possibility is to turn `p` into a list, and convert it into a NumPy array right at the end: p = [] for i in points: ... p.append(n) p = np.array(p)
Store IP address in a variable in python Question: Can an IP address be stored in a variable in python? As, ip = "123.45.67.89" Answer: The Python 3.x stdlib has an [ipaddress](https://docs.python.org/3.4/library/ipaddress.html) module that provides support for IP addresses. **Unfortunately this is not available for Python 2.x** and so you may wish to try [this backport](https://pypi.python.org/pypi/py2-ipaddress/), note that the backport does not provide the full range of features but may be a good place to start. # Python 3.4 In [35]: import ipaddress In [36]: ip = ipaddress.ip_address('192.168.0.1') In [37]: print(ip) 192.168.0.1 In [38]: ip Out[38]: IPv4Address('192.168.0.1')
Scipy stats.skew -- IndexError: tuple index out of range Question: I am getting a weird error.... this doesn't happen if I use np.random.random instead of np.random.randint >>> import numpy as np >>> import scipy.stats as stats >>> rdata = np.random.randint(5000) >>> skew = stats.skew(rdata) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/scipy/stats/stats.py", line 989, in skew n = a.shape[axis] IndexError: tuple index out of range Answer: Take a closer look at the docstrings for [`numpy.random.randint`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html) and [`numpy.random.random`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.random.html). `numpy.random.randint(5000)` returns a _single_ random integer between 0 and 4999 (inclusive). `numpy.random.random(5000)` returns an array of 5000 samples from the uniform distribution on 0 <= x < 1.
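So, assuming an array of samples was intended all along, the fix is simply to hand `skew` something with a shape to index:

    import numpy as np
    import scipy.stats as stats

    rdata = np.random.random(5000)  # 5000 samples, not a single integer
    skew = stats.skew(rdata)        # works now
    # For 5000 random *integers*, use np.random.randint(0, 5000, size=5000).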
Python - urllib2 how to know if our authorization is successful Question: When you would normally log in to a website, we can use urllib2.Request like so. import urllib2, base64 req = urllib2.Request("http://www.facebook.com/") base64string = base64.encodestring("%s:%s" % ("username", "password")).replace("\n", "") req.add_header("Authorization", "Basic %s" % base64string) requested = urllib2.urlopen(req) But how do we know if the authorization was successful? Because you could have just opened a URL with the wrong authorization. Answer: A `urllib2.HTTPError` exception is raised when you are not authorised and a 401 status is returned: >>> import urllib2, base64 >>> req = urllib2.Request('http://httpbin.org/basic-auth/foouser/barpw') >>> base64string = base64.encodestring("%s:%s" % ("username", "password")).replace("\n", "") >>> req.add_header("Authorization", "Basic %s" % base64string) >>> urllib2.urlopen(req) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/urllib2.py", line 127, in urlopen return _opener.open(url, data, timeout) File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/urllib2.py", line 410, in open response = meth(req, response) File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/urllib2.py", line 523, in http_response 'http', request, response, code, msg, hdrs) File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/urllib2.py", line 448, in error return self._call_chain(*args) File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/urllib2.py", line 382, in _call_chain result = func(*args) File "/Users/mpietre/Development/Library/buildout.python/parts/opt/lib/python2.7/urllib2.py", line 531, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 401: UNAUTHORIZED On successful authorisation you'll get a response object instead: >>> req = urllib2.Request('http://httpbin.org/basic-auth/foouser/barpw') >>> base64string = base64.encodestring("%s:%s" % ("foouser", "barpw")).replace("\n", "") >>> req.add_header("Authorization", "Basic %s" % base64string) >>> response = urllib2.urlopen(req) >>> response.getcode() 200
python urllib2 can't get correct webpage Question: I'm writing a spider using urllib2 and beautifulsoup, but I have come across some problems. 1. I can't download the webpage correctly. I tried `GET 'http://thesite.html'`, `wget 'http://thesite.html'`, and `curl -O 'http://thesite.html'` on a linux terminal, but got a lot of "mass", which seems like a wrong codec. 2. I then tried using `file_get_contents('http://thesite.html')` and also couldn't get the correct webpage. 3. Then I tried `urllib2.urlopen('http://thesite.html')`, which didn't work either. 4. I can't detect the encoding: ` s = urllib2.urlopen('http://thesite.html') print chardet.detect(s) ` outputs `{'confidence':0.0, 'encoding':None}` 5. I also tried python3 with urllib.request; I can get a byte string, but when I try to decode those bytes to utf-8, I get an error message. Can anyone help me with this? How can I get the correct webpage like a web browser does? Answer: What are you downloading? Is it text, or something binary, e.g. an image? A binary file might explain why standard tools such as wget and curl return a lot of "mass" (mess?), and `chardet.detect()` will return `{'confidence': 0.0, 'encoding': None}` in that case. >>> import urllib2 >>> import chardet >>> s = urllib2.urlopen('http://i.stack.imgur.com/uIM9Q.png?s=32&g=1').read() # your avatar >>> chardet.detect(s) {'confidence': 0.0, 'encoding': None} What does the byte string that you got in Python 3 look like? Otherwise, if you are too coy to post the URL, traceback, or other details, perhaps you can post a little of that string?
Dissable boundaries in a python snake game Question: > My problem is hard to explain so please run the code to see what's going on I'm trying to adapt the code of a snake game so that the snake cannot collide with the screen boundaries. I'm using existing code that can be found here [code on stackexchange](http://codereview.stackexchange.com/questions/24267/snake-game- made-with-python). I'm using the revised version by [Gareth Rees](http://codereview.stackexchange.com/users/11728/gareth-rees) (see the bottom part of his answer). I added the no boundaries option (global boolean `WORLD_BOUNDARIES = False`) but I have problems to get it right basically because I'm not used to pythons syntax. My adapted code looks like this. The changed/added code is between `'''****...'''` comments inside the snake class' update function). The code below can be safely executed but has some bugs related to the no boundaries. I did try to fix the bug in the down direction but it is not working so I commented it out. from collections import deque from itertools import * import pygame from random import randrange import sys from pygame.locals import * class Vector(tuple): """A tuple that supports some vector operations. >>> v, w = Vector((1, 2)), Vector((3, 4)) >>> v + w, w - v, v * 10, 100 * v, -v ((4, 6), (2, 2), (10, 20), (100, 200), (-1, -2)) """ def __add__(self, other): return Vector(v + w for v, w in zip(self, other)) def __radd__(self, other): return Vector(w + v for v, w in zip(self, other)) def __sub__(self, other): return Vector(v - w for v, w in zip(self, other)) def __rsub__(self, other): return Vector(w - v for v, w in zip(self, other)) def __mul__(self, s): return Vector(v * s for v in self) def __rmul__(self, s): return Vector(v * s for v in self) def __neg__(self): return -1 * self FPS = 60 # Game frames per second SEGMENT_SCORE = 50 # Score per segment SNAKE_SPEED_INITIAL = 4.0 # Initial snake speed (squares per second) SNAKE_SPEED_INCREMENT = 0.25 # Snake speeds up this much each time it grows SNAKE_START_LENGTH = 4 # Initial snake length in segments WORLD_SIZE = Vector((20, 20)) # World size, in blocks BLOCK_SIZE = 24 # Block size, in pixels WORLD_BOUNDARIES = False BACKGROUND_COLOR = 45, 45, 45 SNAKE_COLOR = 0, 255, 0 FOOD_COLOR = 255, 0, 0 DEATH_COLOR = 255, 0, 0 TEXT_COLOR = 255, 255, 255 DIRECTION_UP = Vector(( 0, -1)) DIRECTION_DOWN = Vector(( 0, 1)) DIRECTION_LEFT = Vector((-1, 0)) DIRECTION_RIGHT = Vector(( 1, 0)) DIRECTION_DR = DIRECTION_DOWN + DIRECTION_RIGHT # Map from PyGame key event to the corresponding direction. KEY_DIRECTION = { K_w: DIRECTION_UP, K_UP: DIRECTION_UP, K_s: DIRECTION_DOWN, K_DOWN: DIRECTION_DOWN, K_a: DIRECTION_LEFT, K_LEFT: DIRECTION_LEFT, K_d: DIRECTION_RIGHT, K_RIGHT: DIRECTION_RIGHT, } class Snake(object): def __init__(self, start, start_length): self.speed = SNAKE_SPEED_INITIAL # Speed in squares per second. self.timer = 1.0 / self.speed # Time remaining to next movement. self.growth_pending = 0 # Number of segments still to grow. self.direction = DIRECTION_UP # Current movement direction. self.segments = deque([start - self.direction * i for i in xrange(start_length)]) def __iter__(self): return iter(self.segments) def __len__(self): return len(self.segments) def change_direction(self, direction): """Update the direction of the snake.""" # Moving in the opposite direction of current movement is not allowed. 
if self.direction != -direction: self.direction = direction def head(self): """Return the position of the snake's head.""" return self.segments[0] def tail(self): """Return the tail of the snake.""" return deque(islice(self.segments, 1, None)) def update(self, dt, direction): """Update the snake by dt seconds and possibly set direction.""" self.timer -= dt if self.timer > 0: # Nothing to do yet. return # Moving in the opposite direction of current movement is not allowed. if self.direction != -direction: self.direction = direction self.timer += 1 / self.speed '''************************************************************************************ *************************************************************************************** ***************************************************************************************''' if not WORLD_BOUNDARIES: head = self.head() if self.direction == DIRECTION_DOWN and head[1] == WORLD_SIZE[1]-1: self.segments[0]=Vector((head[0], 0)) # new_head = (head[0], 0) # new_tail = deque([x+self.direction for x in self.tail()]) # self.segments = new_tail.extendleft([new_head]) # print new_head # print new_tail if self.direction == DIRECTION_UP and head[1] == 0: self.segments[0]=Vector((head[0], WORLD_SIZE[1]-1)) if self.direction == DIRECTION_RIGHT and head[0] == WORLD_SIZE[0]-1: self.segments[0]=Vector((0, head[1])) if self.direction == DIRECTION_LEFT and head[0] == 0: self.segments[0]=Vector((WORLD_SIZE[0]-1, head[1])) '''************************************************************************************ *************************************************************************************** ***************************************************************************************''' # Add a new head. self.segments.appendleft(self.head() + self.direction) if self.growth_pending > 0: self.growth_pending -= 1 else: # Remove tail. self.segments.pop() def grow(self): """Grow snake by one segment and speed up.""" self.growth_pending += 1 self.speed += SNAKE_SPEED_INCREMENT def self_intersecting(self): """Is the snake currently self-intersecting?""" it = iter(self) head = next(it) return head in it class SnakeGame(object): def __init__(self): pygame.display.set_caption('PyGame Snake') self.block_size = BLOCK_SIZE self.window = pygame.display.set_mode(WORLD_SIZE * self.block_size) self.screen = pygame.display.get_surface() self.clock = pygame.time.Clock() self.font = pygame.font.Font('freesansbold.ttf', 20) self.world = Rect((0, 0), WORLD_SIZE) self.reset() def reset(self): """Start a new game.""" self.playing = True self.next_direction = DIRECTION_UP self.score = 0 self.snake = Snake(self.world.center, SNAKE_START_LENGTH) self.food = set() self.add_food() def add_food(self): """Ensure that there is at least one piece of food. (And, with small probability, more than one.) """ while not (self.food and randrange(4)): food = Vector(map(randrange, self.world.bottomright)) if food not in self.food and food not in self.snake: self.food.add(food) def input(self, e): """Process keyboard event e.""" if e.key in KEY_DIRECTION: self.next_direction = KEY_DIRECTION[e.key] elif e.key == K_SPACE and not self.playing: self.reset() def update(self, dt): """Update the game by dt seconds.""" self.snake.update(dt, self.next_direction) # If snake hits a food block, then consume the food, add new # food and grow the snake. 
head = self.snake.head() if head in self.food: self.food.remove(head) self.add_food() self.snake.grow() self.score += len(self.snake) * SEGMENT_SCORE # If snake collides with self or the screen boundaries, then # it's game over. if self.snake.self_intersecting() or not self.world.collidepoint(self.snake.head()): self.playing = False def block(self, p): """Return the screen rectangle corresponding to the position p.""" return Rect(p * self.block_size, DIRECTION_DR * self.block_size) def draw_text(self, text, p): """Draw text at position p.""" self.screen.blit(self.font.render(text, 1, TEXT_COLOR), p) def draw(self): """Draw game (while playing).""" self.screen.fill(BACKGROUND_COLOR) for p in self.snake: pygame.draw.rect(self.screen, SNAKE_COLOR, self.block(p)) for f in self.food: pygame.draw.rect(self.screen, FOOD_COLOR, self.block(f)) self.draw_text("Score: {}".format(self.score), (20, 20)) def draw_death(self): """Draw game (after game over).""" self.screen.fill(DEATH_COLOR) self.draw_text("Game over! Press Space to start a new game", (20, 150)) self.draw_text("Your score is: {}".format(self.score), (140, 180)) def play(self): """Play game until the QUIT event is received.""" while True: dt = self.clock.tick(FPS) / 1000.0 # convert to seconds for e in pygame.event.get(): if e.type == QUIT: return elif e.type == KEYUP: self.input(e) if self.playing: self.update(dt) self.draw() else: self.draw_death() pygame.display.flip() if __name__ == '__main__': pygame.init() SnakeGame().play() pygame.quit() Answer: after some thinking I managed to adapt the code so the snake can cross the screen boundaries without bugs. It wasn't actually that difficult after all. Those who are interested can see the code below: from collections import deque from itertools import * import pygame from random import randrange import sys from pygame.locals import * class Vector(tuple): """A tuple that supports some vector operations. >>> v, w = Vector((1, 2)), Vector((3, 4)) >>> v + w, w - v, v * 10, 100 * v, -v ((4, 6), (2, 2), (10, 20), (100, 200), (-1, -2)) """ def __add__(self, other): return Vector(v + w for v, w in zip(self, other)) def __radd__(self, other): return Vector(w + v for v, w in zip(self, other)) def __sub__(self, other): return Vector(v - w for v, w in zip(self, other)) def __rsub__(self, other): return Vector(w - v for v, w in zip(self, other)) def __mul__(self, s): return Vector(v * s for v in self) def __rmul__(self, s): return Vector(v * s for v in self) def __neg__(self): return -1 * self FPS = 60 # Game frames per second SEGMENT_SCORE = 50 # Score per segment SNAKE_SPEED_INITIAL = 4.0 # Initial snake speed (squares per second) SNAKE_SPEED_INCREMENT = 0.25 # Snake speeds up this much each time it grows SNAKE_START_LENGTH = 4 # Initial snake length in segments WORLD_SIZE = Vector((20, 20)) # World size, in blocks BLOCK_SIZE = 24 # Block size, in pixels WORLD_BOUNDARIES = False BACKGROUND_COLOR = 45, 45, 45 SNAKE_COLOR = 0, 255, 0 FOOD_COLOR = 255, 0, 0 DEATH_COLOR = 255, 0, 0 TEXT_COLOR = 255, 255, 255 DIRECTION_UP = Vector(( 0, -1)) DIRECTION_DOWN = Vector(( 0, 1)) DIRECTION_LEFT = Vector((-1, 0)) DIRECTION_RIGHT = Vector(( 1, 0)) DIRECTION_DR = DIRECTION_DOWN + DIRECTION_RIGHT # Map from PyGame key event to the corresponding direction. 
KEY_DIRECTION = { K_w: DIRECTION_UP, K_UP: DIRECTION_UP, K_s: DIRECTION_DOWN, K_DOWN: DIRECTION_DOWN, K_a: DIRECTION_LEFT, K_LEFT: DIRECTION_LEFT, K_d: DIRECTION_RIGHT, K_RIGHT: DIRECTION_RIGHT, } class Snake(object): def __init__(self, start, start_length): self.speed = SNAKE_SPEED_INITIAL # Speed in squares per second. self.timer = 1.0 / self.speed # Time remaining to next movement. self.growth_pending = 0 # Number of segments still to grow. self.direction = DIRECTION_UP # Current movement direction. self.segments = deque([start - self.direction * i for i in xrange(start_length)]) def __iter__(self): return iter(self.segments) def __len__(self): return len(self.segments) def change_direction(self, direction): """Update the direction of the snake.""" # Moving in the opposite direction of current movement is not allowed. if self.direction != -direction: self.direction = direction def head(self): """Return the position of the snake's head.""" return self.segments[0] def tail(self): """Return the tail of the snake.""" return deque(islice(self.segments, 1, None)) def update(self, dt, direction): """Update the snake by dt seconds and possibly set direction.""" self.timer -= dt if self.timer > 0: # Nothing to do yet. return # Moving in the opposite direction of current movement is not allowed. if self.direction != -direction: self.direction = direction self.timer += 1 / self.speed '''************************************************************************************ *************************************************************************************** ***************************************************************************************''' # Add a new head. if not WORLD_BOUNDARIES: head = self.head() if self.direction == DIRECTION_DOWN and head[1] == WORLD_SIZE[1]-1: self.segments.appendleft(Vector((head[0], 0))) elif self.direction == DIRECTION_UP and head[1] == 0: self.segments.appendleft(Vector((head[0], WORLD_SIZE[1]-1))) elif self.direction == DIRECTION_RIGHT and head[0] == WORLD_SIZE[0]-1: self.segments.appendleft(Vector((0, head[1]))) elif self.direction == DIRECTION_LEFT and head[0] == 0: self.segments.appendleft(Vector((WORLD_SIZE[0]-1, head[1]))) else: self.segments.appendleft(self.head() + self.direction) else: self.segments.appendleft(self.head() + self.direction) '''************************************************************************************ *************************************************************************************** ***************************************************************************************''' if self.growth_pending > 0: self.growth_pending -= 1 else: # Remove tail. 
self.segments.pop() def grow(self): """Grow snake by one segment and speed up.""" self.growth_pending += 1 self.speed += SNAKE_SPEED_INCREMENT def self_intersecting(self): """Is the snake currently self-intersecting?""" it = iter(self) head = next(it) return head in it class SnakeGame(object): def __init__(self): pygame.display.set_caption('PyGame Snake') self.block_size = BLOCK_SIZE self.window = pygame.display.set_mode(WORLD_SIZE * self.block_size) self.screen = pygame.display.get_surface() self.clock = pygame.time.Clock() self.font = pygame.font.Font('freesansbold.ttf', 20) self.world = Rect((0, 0), WORLD_SIZE) self.reset() def reset(self): """Start a new game.""" self.playing = True self.next_direction = DIRECTION_UP self.score = 0 self.snake = Snake(self.world.center, SNAKE_START_LENGTH) self.food = set() self.add_food() def add_food(self): """Ensure that there is at least one piece of food. (And, with small probability, more than one.) """ while not (self.food and randrange(4)): food = Vector(map(randrange, self.world.bottomright)) if food not in self.food and food not in self.snake: self.food.add(food) def input(self, e): """Process keyboard event e.""" if e.key in KEY_DIRECTION: self.next_direction = KEY_DIRECTION[e.key] elif e.key == K_SPACE and not self.playing: self.reset() def update(self, dt): """Update the game by dt seconds.""" self.snake.update(dt, self.next_direction) # If snake hits a food block, then consume the food, add new # food and grow the snake. head = self.snake.head() if head in self.food: self.food.remove(head) self.add_food() self.snake.grow() self.score += len(self.snake) * SEGMENT_SCORE # If snake collides with self or the screen boundaries, then # it's game over. if self.snake.self_intersecting() or not self.world.collidepoint(self.snake.head()): self.playing = False def block(self, p): """Return the screen rectangle corresponding to the position p.""" return Rect(p * self.block_size, DIRECTION_DR * self.block_size) def draw_text(self, text, p): """Draw text at position p.""" self.screen.blit(self.font.render(text, 1, TEXT_COLOR), p) def draw(self): """Draw game (while playing).""" self.screen.fill(BACKGROUND_COLOR) for p in self.snake: pygame.draw.rect(self.screen, SNAKE_COLOR, self.block(p)) for f in self.food: pygame.draw.rect(self.screen, FOOD_COLOR, self.block(f)) self.draw_text("Score: {}".format(self.score), (20, 20)) def draw_death(self): """Draw game (after game over).""" self.screen.fill(DEATH_COLOR) self.draw_text("Game over! Press Space to start a new game", (20, 150)) self.draw_text("Your score is: {}".format(self.score), (140, 180)) def play(self): """Play game until the QUIT event is received.""" while True: dt = self.clock.tick(FPS) / 1000.0 # convert to seconds for e in pygame.event.get(): if e.type == QUIT: return elif e.type == KEYUP: self.input(e) if self.playing: self.update(dt) self.draw() else: self.draw_death() pygame.display.flip() if __name__ == '__main__': pygame.init() SnakeGame().play() pygame.quit()
Beginner issues with python script output Question: The below Python code isn't acting as intended. The final output is missing all lines that I'd expect to be outputted where `output_data[1] == "S"`. It also appears my lazy attempt to obfuscate client ID is failing. import csv TID = 0 ACCT_TYPE = 1 PRODUCT_TYPE = 2 NUM_ACCT_OWNERS = 3 NUM_ACCT = 4 COMBINED_BALANCES = 5 TID_BALANCES = 6 INSURED_BALANCES = 7 ESTIMATED_UNINSURED = 8 FOREIGN_IND = 9 #output_data = [None] * 10 header_row_processed = False with open("U:\exampletext.txt", 'r') as csvfile: CIN = csv.reader(csvfile, delimiter="\t") file_output_data = [] for row in CIN: if not header_row_processed: header_row_processed = True continue if not row[ACCT_TYPE] == "I" or row[ACCT_TYPE] == "S": continue if not row[PRODUCT_TYPE] == "Total": continue output_data = [None] * 6 client_id = row[TID] client_id.replace("0", "x") client_id.replace("1", "d") client_id.replace("2", "h") client_id.replace("3", "g") client_id.replace("4", "v") client_id.replace("5", "n") client_id.replace("6", "m") client_id.replace("7", "q") client_id.replace("8", "w") client_id.replace("9", "r") client_id.replace("-", "u") output_data[0] = client_id output_data[1] = row[ACCT_TYPE] output_data[2] = row[PRODUCT_TYPE] output_data[3] = row[COMBINED_BALANCES] output_data[4] = str(min(float(row[COMBINED_BALANCES])/float(row[NUM_ACCT_OWNERS]), 250000*float(row[NUM_ACCT_OWNERS]))) output_data[5] = str(max(0, (float(row[COMBINED_BALANCES])/float(row[NUM_ACCT_OWNERS]))-(250000*float(row[NUM_ACCT_OWNERS])))*(float(row[NUM_ACCT_OWNERS]))) file_output_data.append("\t".join(output_data)) #file_output_data = ','.join(str(v) for v in value_list) if output_data[1] == "I": out_file = open( 'U:\path_one.txt', 'w') out_file.write("\n".join(file_output_data)) out_file.close() if output_data[1] == "S": out_file = open( 'U:\path_two.txt', 'w') out_file.write("\n".join(file_output_data)) out_file.close() Answer: Your error is in the part: client_id = row[TID] client_id.replace("0", "x") client_id.replace("1", "d") client_id.replace("2", "h") client_id.replace("3", "g") client_id.replace("4", "v") client_id.replace("5", "n") client_id.replace("6", "m") client_id.replace("7", "q") client_id.replace("8", "w") client_id.replace("9", "r") client_id.replace("-", "u") output_data[0] = client_id Go write an unit test for it and see what the code does. replace doesn't do what you think it does.
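A minimal sketch of the fix: `str.replace` returns a new string rather than modifying the string in place, so each call's result must be reassigned, for example:

    # replace() never mutates the string, so reassign its result each time.
    client_id = row[TID]
    for old, new in zip("0123456789-", "xdhgvnmqwru"):
        client_id = client_id.replace(old, new)
    output_data[0] = client_id

(A translation table via `str.translate` would do the same in a single call.) Incidentally, the missing "S" lines likely trace to operator precedence in `if not row[ACCT_TYPE] == "I" or row[ACCT_TYPE] == "S":`, which parses as `(not row[ACCT_TYPE] == "I") or (row[ACCT_TYPE] == "S")` and therefore skips every "S" row; `if row[ACCT_TYPE] not in ("I", "S"): continue` says what was probably meant.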
Selecting a column from an array in Python Question: In trying to select the first (when counting from zero) column in a 2D 4x4 array, I wrote the following script: import numpy a4x4=[list(range(4*i,4*(i+1))) for i in list(range(4))] print(a4x4) print(a4x4[:,1]) The array seems to be alright: [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]] but instead of > [1, 5, 9, 13] for the second print, I get this error: > TypeError: list indices must be integers, not tuple Why does this error appear? What is going wrong? Answer: You've imported numpy, but you aren't using it. What you have instead is a list of lists, and Python doesn't support multidimensional slicing for that (i.e., you'd need `[a4x4[i][1] for i in range(4)]` to get the result you expect, but really you should be using numpy). Here's an example: import numpy a4x4=numpy.array([list(range(4*i,4*(i+1))) for i in list(range(4))]) print(a4x4) print(a4x4[:,1]) By the way, in numpy you can also build the array you want directly, like this: numpy.arange(4*4).reshape((4,4)) (And also in Python one doesn't need the `list` calls I have above; I'm just trying to keep the code as similar to yours as possible to show the key thing, which is converting the list of lists into a numpy array.)
How to catch the clipboard event (onChangeClipboard equivalent) from any application in Python Question: I am working on a clipboard manager. My current issue is catching modifications of the clipboard from any application. For instance: * From a ctrl-c * From a right click and copy to clipboard The idea is that a python script runs in the background, like a daemon, and catches every change of the clipboard. Thank you a lot :) PS: For people who know autohotkey, I am looking for an onClipboardChange equivalent Answer: I found a solution on the web using GTK. It works :) import sys from gi.repository import Gtk, Gdk def callBack(*args): print("Clipboard changed. New value = " + clip.wait_for_text()) clip = Gtk.Clipboard.get(Gdk.SELECTION_CLIPBOARD) clip.connect('owner-change',callBack) Gtk.main() Does anybody have a solution with Qt, or a more native solution?
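For what it's worth, a sketch of a Qt-based equivalent, assuming PyQt4 is installed: `QClipboard` emits a `dataChanged` signal whenever the clipboard contents change, so the same daemon pattern applies:

    import sys
    from PyQt4 import QtGui

    app = QtGui.QApplication(sys.argv)
    clipboard = app.clipboard()

    def on_change():
        # dataChanged fires on clipboard changes from any application.
        print("Clipboard changed. New value = " + str(clipboard.text()))

    clipboard.dataChanged.connect(on_change)
    app.exec_()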
Can't import/install python libraries for another python package Question: I got a Python 3.3.3 package as part of a binary package. (<http://siremol.org/Sire/Download.html>) I tried a lot of things but could not import or install additional libraries for this python package. I installed anaconda as well but nothing helped me. When I import libraries (e.g. numpy, matplotlib ..) from any other location on my system it gives me the following error: ImportError: cannot import name multiarray I don't know how I can do it. Please let me know if you need some details on the files present in my installed package. I have pip and easy_install files but they seem not to be working. Any possible guidelines to solve the issue? Thanks! Answer: You basically have to install libraries for version 3. For example, using pip3: pip3 install numpy #will install a package for Python3 Every install tool should have a switch for which version of Python it should install libraries for.
ImportError: No module named 'Tkinter' Question: For some reason I can't use the `Tkinter` module. I have no idea what could cause it, and it's so annoying. Is there anything wrong with this line? import Tkinter I also tried running it in the python terminal; it still doesn't work. Answer: I have been using Tkinter for a while now. Why don't you try this and let me know if it worked? try: # for Python2 from Tkinter import * ## notice capitalized T in Tkinter except ImportError: # for Python3 from tkinter import * ## notice lowercase 't' in tkinter here Here is the reference [link](http://stackoverflow.com/questions/7498658/importerror-when-importing-tkinter-in-python) and here is the [doc](https://docs.python.org/2/library/tkinter.html)
Marketo "Import Lead" fails with error 610 Requested resource not found Question: I'm trying to batch **update a bunch of existing records** through Marketo's REST API. According [to the documentation](http://developers.marketo.com/documentation/rest/import-lead/), the Import Lead function seems to be ideal for this. In short, I'm getting the error "610 Resource Not Found" upon using the curl sample from the documentation. Here are some steps I've taken. 1. Fetching the auth_token is not a problem: $ curl "https://<identity_path>/identity/oauth/token? grant_type=client_credentials&client_id=<my_client_id> &client_secret=<my_client_secret>" 2. Proving the token is valid, fetching a single lead isn't a problem as well: # Fetch the record - outputs just fine $ curl "https://<rest_path>/rest/v1/lead/1.json?access_token=<access_token>" # output: { "requestId": "ab9d#12345abc45", "result": [ { "id": 1, "updatedAt": "2014-09-18T13:00:00+0000", "lastName": "Potter", "email": "[email protected]", "createdAt": "2014-09-18T12:00:00+0000", "firstName": "Harry" } ], "success": true } 3. Now here's the pain, when I try to upload a CSV file using the [Import Lead function](http://developers.marketo.com/documentation/rest/import-lead/). Like so: # "Import Lead" function $ curl -i -F format=csv -F [email protected] -F access_token=<access_token> "https://<rest_path>/rest/bulk/v1/leads.json" # results in the following error { "requestId": "f2b6#14888a7385a", "success": false, "errors": [ { "code": "610", "message": "Requested resource not found" } ] } The [error codes documentation](http://developers.marketo.com/documentation/rest/error-codes/) only states _Requested resource not found_ , nothing else. So my question is: what is causing the 610 error code - and **how can I fix it**? Further steps I've tried, with no success: * Placing the access_token as url parameter (e.g. appending '?access_token=xxx' to the url), with no effect. * Stripping down the CSV (yes, it's comma seperated) to a bare minimum (e.g. only fields 'id' and 'lastName') * Looked at the question [Marketo API and Python, Post request failing](http://stackoverflow.com/questions/25411664/marketo-api-and-python-post-request-failing) * Verified that the CSV doesn't have some funky line endings * I have no idea if there are specific requirements for the CSV file, like column orders, though... Any tips or suggestions? Answer: Error code 610 can represent something akin to a '404' for urls under the REST endpoint, i.e. your `rest_path`. I'm guessing this is why you are getting that '404': Marketo's docs show REST paths as starting with '/rest', yet their rest endpoint ends with /rest, so if you follow their directions you get an url like, `xxxx.mktorest.com/rest/rest/v1/lead/...`, i.e. with '/rest' twice. This is not correct. Your url must have only one 'rest/'.
parallel generation of random forests using scikit-learn Question: Main question: How do I combine different randomForests in python and scikit- learn? I am currently using the randomForest package in R to generate randomforest objects using elastic map reduce. This is to address a classification problem. Since my input data is too large to fit in memory on one machine, I sample the data into smaller data sets and generate random forest object which contains a smaller set of trees. I then combine the different trees together using a modified combine function to create a new random forest object. This random forest object contains the feature importance and final set of trees. This does not include the oob errors or votes of the trees. While this works well in R, I want to do the same thing in Python using scikit-learn. I can create different random forest objects, but I don't have any way to combine them together to form a new object. Can anyone point me to a function that can combine the forests? Is this possible using scikit-learn? Here is the link to a question on how to this process in R:[Combining random forests built with different training sets in R](http://stackoverflow.com/questions/19170130/combining-random-forests-built- with-different-training-sets-in-r) . Edit: The resulting random forest object should contain the trees that can be used for prediction and also the feature importance. Any help would be appreciated. Answer: Sure, just aggregate all the trees, for instance have look at this snippet from [pyrallel](https://github.com/pydata/pyrallel/blob/master/pyrallel/ensemble.py#L27): def combine(all_ensembles): """Combine the sub-estimators of a group of ensembles >>> from sklearn.datasets import load_iris >>> from sklearn.ensemble import ExtraTreesClassifier >>> iris = load_iris() >>> X, y = iris.data, iris.target >>> all_ensembles = [ExtraTreesClassifier(n_estimators=4).fit(X, y) ... for i in range(3)] >>> big = combine(all_ensembles) >>> len(big.estimators_) 12 >>> big.n_estimators 12 >>> big.score(X, y) 1.0 """ final_ensemble = copy(all_ensembles[0]) final_ensemble.estimators_ = [] for ensemble in all_ensembles: final_ensemble.estimators_ += ensemble.estimators_ # Required in old versions of sklearn final_ensemble.n_estimators = len(final_ensemble.estimators_) return final_ensemble
Curve fitting in Python using data sets Question: I am really new to Python, hence I am asking a simple question: I have a set of data (x1, x2, x3, x4, x5) and corresponding (y1, y2, y3, y4, y5). Now, how can I use Python to find a y value for a given x value? (x lies in between x1 and x5) As an example: Let's say I want to find the value of Y for X = 0.9 for the following set of data. X Y 0.5 12 1.2 17 1.3 23 1.6 29 2.1 33 Thanks in advance!! Answer: You can use a `polyfit`. from numpy import polyfit print polyfit([0.5, 1.2, 1.3, 1.6, 2.1], [12, 17, 23, 29, 33], 1) # Replace this number with the degree of the polynomial Output degree 1 -> [ 14.02332362 4.00874636] Output degree 2 -> [ 1.17847672 10.98436544 5.64150351] What you obtain are the coefficients of the curves: * Degree 1: _y = 14.02332362x + 4.00874636_ * Degree 2: _y = 1.17847672x^2 + 10.98436544x + 5.64150351_
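To then get the y value the question actually asks for (at X = 0.9), evaluate the fitted polynomial with numpy's `polyval`, which accepts the coefficient array that `polyfit` returns:

    from numpy import polyfit, polyval

    coeffs = polyfit([0.5, 1.2, 1.3, 1.6, 2.1], [12, 17, 23, 29, 33], 1)
    print(polyval(coeffs, 0.9))  # estimated y at x = 0.9, about 16.63 for degree 1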
Seeing or filtering using indexed columns in pandas? Question: Using Python's pandas library, I imported a csv and set multiple columns as my index. Unexpectedly, the indexed columns are no longer present when I display the dataframe and I can't use the index columns as filter option. Google tells me that when I set my index, I should set 'drop' to False. This has me wondering if I am mistaken in thinking of pandas indexes as being similar to SQL indexes. Say my data looks like this (simplified dummy example for stock market prices): date, exchange, symbol, low, high, open, close, last `date`, `exchange`, and `symbol` are sufficient to uniquely identify a row. First, what is the point of creating an index? Does it speed up lookups or does it add some semantic information useful for things like stack/unstack/pivot/groupby? Does it reduce memory usage? Why are date, exchange, and symbol moved out of dataframe (and moved to index series?)? Doesn't it make sense to leave them in the dataframe where I can filter on them as df[df.symbol=='MSFT']? I guess 'drop' solves this, but the fact that this isn't default means I'm misunderstanding something here. Answer: Indices are used to select and align rows of a data frame. The way I think of them is as labels for the dataframe rows, with the rows containing the data values. They enable you to use a dataframe to store higher dimensional data in a convenient way. I found them a little tricky starting out, but when you are aligning data sets they really shine. You need the xs method to access the inner levels of a multiindex. To do what you want try the following. df.xs(('BATS', 'MSFT'), level=(1, 2)) Alternatively you can refrain from setting an index and use boolean indexing to select multiple criteria. bats_mask = df.exchange == 'BATS' msft_mask = df.symbol == 'MSFT' df[bats_mask & msft_mask]
ElementTree parallel node creation Question: I'm writing a graph object to an xml representation. My monolithic code works well, but it's too slow on my large graph. I'm trying to parallelize it, but I'm not getting the `SubElement` back from the pool. I'm sure that I'm missing something obvious, but I'm new to python. import networkx as nx import lxml.etree as et from multiprocessing import Pool G = nx.petersen_graph() # For any graph, make a node subelement with the id being the node label def getNodeAttributes(index): et.SubElement(nodes, "node", attrib={'id': str(G.nodes()[index])}) # Do it with one monolithic process network = et.Element("network", attrib={"name": "Petersen Graph"}) nodes = et.SubElement(network, "nodes") for i in range(len(G)): getNodeAttributes(i) et.dump(network) <network name="Petersen Graph"> <nodes> <node id="0"/> <node id="1"/> <node id="2"/> <node id="3"/> <node id="4"/> <node id="5"/> <node id="6"/> <node id="7"/> <node id="8"/> <node id="9"/> </nodes> </network> # Do it again, but with pool.map in parallel network = et.Element("network", attrib={"name": "Petersen Graph"}) nodes = et.SubElement(network, "nodes") pool = Pool(4) pool.map(getNodeAttributes, range(len(G))) pool.close() pool.join() et.dump(network) <network name="Petersen Graph"> <nodes/> </network> Answer: Use a queue (`multiprocessing.Queue`) to collect the results of your worker processes. See the answer to this question: [Sharing a queue among several processes](http://stackoverflow.com/q/9908781/699305). That said, I'm not sure it will help much in your case, since the XML file needs to be read and parsed sequentially, and the element tree is going to be quite large. But give it a try...
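Since `pool.map` already returns each worker's result to the parent process, the simplest variant of that idea is to have the worker return plain picklable data and build the `SubElement`s sequentially afterwards. A sketch, reusing the names from the question:

    import networkx as nx
    import lxml.etree as et
    from multiprocessing import Pool

    G = nx.petersen_graph()

    def get_node_id(index):
        # Return picklable data; mutations made in a child process
        # are invisible to the parent's element tree.
        return str(G.nodes()[index])

    if __name__ == '__main__':
        network = et.Element("network", attrib={"name": "Petersen Graph"})
        nodes = et.SubElement(network, "nodes")
        pool = Pool(4)
        ids = pool.map(get_node_id, range(len(G)))
        pool.close()
        pool.join()
        for node_id in ids:  # build the tree in the parent, where it persists
            et.SubElement(nodes, "node", attrib={'id': node_id})
        et.dump(network)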
The lxml 'None' type is not None Question: I want to compare a variable I set to `None`, which was a string element before, with `is` but it fails. When I compare this variable to `None` with `==`, it works. This is the variable I'm talking about: print type(xml.a) -> <type 'lxml.objectify.StringElement'> Because some libraries I use have `None` as a default argument (i.e., `def f(x=None)`), I converted my nullstrings earlier like this: if xml.a == '': xml.a = None Afterwards the type has changed to: print type(xml.a) -> <type 'lxml.objectify.NoneElement'> Which is not the same as: print type(None) -> <type 'NoneType'> When I compare this value as I described above I get the following result: if xml.a is None: print 'what I expect' else: print 'what I do NOT expect' # sadly this one is printed if xml.a == None: print 'what I do NOT expect' # this one is printed again... else: print 'what I expect' I already know that when comparing objects that are not the same instance, `is` returns `false`. But my understanding is that I had set `xml.a` earlier to the `None` instance. On the other hand they don't match in their types and `is` returns `false` so it can't be the same instance as `None`. * Why? * Do I have no other choice than to use `==`? For those who want to know more about the difference between `is` and `isinstance` there has been a discussion about it [here](http://stackoverflow.com/questions/1549801/differences-between- isinstance-and-type-in-python). Answer: You are using the [`lxml.objectify` API](http://lxml.de/objectify.html#the- lxml-objectify-api); this API uses a _special object_ to represent `None` values in the objectified XML tree. When you assigned `None` to `xml.a`, `lxml` stored that special object, so that it can transform that to a XML element in an XML document. What you have is **not** the `None` singleton. You have an instance of the [`lxml.objectify.NoneElement` class](http://lxml.de/api/lxml.objectify.NoneElement-class.html) instead. You can test for that element's type instead: from lxml.objectify import NoneElement if isinstance(xml.a, NoneElement):
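        value = None  # a guess at the surrounding intent: extract a plain Python value
    else:
        value = str(xml.a)

so that downstream code testing `value is None` behaves as expected. (Assigning `None` back into the tree would just recreate a `NoneElement`.)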
At what point is MySQL primary key error thrown? Question: If I have a batch insert statement like: INSERT INTO TABLE VALUES (x,y,z),(x2,y2,z2),(x3,y3,z3); And `x2` violates a primary key, is the error thrown before or after the processing of `x3`? Specifically, I have a bunch of batch inserts in a try-catch block using Python and PyMySQL, like: conn = myDB.cursor() try: conn.execute("INSERT INTO TABLE VALUES (x,y,z),(x2,y2,z2),(x3,y3,z3);") except pymysql.Error as msg: print("MYSQL ERROR!:{0}".format(msg)) #print error I want to make sure that if one of the tuples in the batch insert fails, thus printing the error, the rest of the tuples in that same batch were still processed. My motivation is that I am transferring LOTS of data across two servers. In server 1 the data is stored in log files, and it is being inserted into MySQL on server 2. Some of the data is already in MySQL on server 2, so there are many failures. However, if I do not use batch inserts, and I have a separate `INSERT INTO` for every of the (millions of) records things seem to run much more slowly. So I'm in trouble either way: with batch inserts, duplicate failures blow up the whole statement, and without batch inserts the process takes much much longer. Answer: The way that MySQL deals with multiple insert (or update) statements varies by table engine and server SQL mode. While only the table engine is really important to the key constraints you're asking about here, it's important to understand the bigger picture, so I'm going to take the time to add some extra details. If you're in a hurry, feel free to just read the first and last sections below. # Table Engines In the case of a nontransactional table engine like MyISAM, you can easily end up performing a partial update because each insert or update is performed sequentially and cannot be rolled back when the bad row is encountered and the statement is aborted. However, if you use a transactional table engine like InnoDB, any constraint violation during an insert or update statement will trigger a rollback of any changes made up to that point, in addition to aborting the statement. # SQL Modes The [server SQL mode](http://dev.mysql.com/doc/refman/5.0/en/sql-mode.html) becomes important when you're not violating a key constraint, but the data you're trying to insert or update doesn't fit the definition of the column you're putting it into. For example: * inserting a row without giving values for every `NOT NULL` column * inserting `'123'` into a column defined with a numeric type (rather than `123`) * updating a `CHAR(3)` column to hold the value `'four'` In these cases, MySQL will throw an error if strict mode is in effect. However, if strict mode is not in effect, it will often "fix" your mistake instead, which can cause all manner of potentially harmful behavior (see [MySQL 'Truncated incorrect INTEGER value'](http://stackoverflow.com/q/4946899/2359271) and [mysql string conversion return 0](http://stackoverflow.com/q/9948389/2359271) for just two examples). # Danger, Will Robinson! There are some potential "gotchas" with nontransactional tables and strict mode. You haven't told us which table engine you're working with, but [this answer](http://stackoverflow.com/a/25918982/2359271) as currently written is clearly using a nontransactional table and it's important to know how that affects the result. 
For example, consider the following set of statements: SET sql_mode = ''; # This will make sure strict mode is not in effect CREATE TABLE tbl ( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, val INT ) ENGINE=MyISAM; # A nontransactional table engine (this used to be the default) INSERT INTO tbl (val) VALUES (1), ('two'), (3); INSERT INTO tbl (val) VALUES ('four'), (5), (6); INSERT INTO tbl (val) VALUES ('7'), (8), (9); Since strict mode is not in effect, it shouldn't be surprising that all nine values are inserted and the invalid strings are coerced to integers. The server is clever enough to recognize `'7'` as a number but doesn't recognize `'two'` or `'four'`, so they get converted to the [default value for numeric types in MySQL](http://dev.mysql.com/doc/refman/5.0/en/data-type- defaults.html): mysql> SELECT val FROM tbl; +------+ | val | +------+ | 1 | | 0 | | 3 | | 0 | | 5 | | 6 | | 7 | | 8 | | 9 | +------+ 9 rows in set (0.00 sec) Now, try doing that again with `sql_mode = 'STRICT_ALL_TABLES'`. To make a long story short, the first `INSERT` statement will result in a partial insert, the second will fail entirely and the third will silently coerce `'7'` to `7` (which doesn't seem very "strict" if you ask me, but it's [documented behavior](http://dev.mysql.com/doc/refman/5.0/en/type-conversion.html) and not that unreasonable). But wait, there's more! Try it with `sql_mode = 'STRICT_TRANS_TABLES'`. Now you'll find that the first statement throws a warning instead of an error - but the second statement still fails! This can be particularly frustrating if you're using `LOAD DATA` with a bunch of files and some are failing while others aren't (see [this closed bug report](http://bugs.mysql.com/bug.php?id=68494)). # What to Do In the case of key violations specifically, what matters is only whether the table engine is transactional (example: InnoDB) or not (example: MyISAM). If you're working on a transactional table, the Python code in your question will cause the MySQL server to do things in this order: 1. Parse the `INSERT` statement and start a transaction.* 2. Insert the first tuple. 3. Insert the second tuple (key constraint is violated). 4. Rollback the transaction. 5. Send an error message to `pymysql`. *It would make sense for the statement to be parsed before starting a transaction, but I don't know the exact implementation so I'll put these together as one step. In this case, any changes prior to the bad tuple would already have been reversed by the time your script receives an error message from the server and enters the `except` block. If you're working on a nontransactional table, however, the server will skip step 4 (and the relevant part of step 1) because the table engine doesn't support [transaction statements](http://dev.mysql.com/doc/refman/5.0/en/commit.html). In this case, at the time your script enters the `except` block, the first tuple has been inserted, the second has blown up, and you may not be able to easily determine how many rows were successfully inserted because [the function that normally does that](http://dev.mysql.com/doc/refman/5.0/en/mysql-affected-rows.html) returns -1 if the last insert or update statement threw an error. Partial updates should be strictly avoided; they're much harder to fix than simply making sure your statement succeeds entirely or fails entirely. 
In this type of situation, [the documentation suggests](http://dev.mysql.com/doc/refman/5.0/en/sql-mode.html#sql-mode-strict): > To avoid [a partial update], use single-row statements, which can be aborted > without changing the table. And in my opinion, that's exactly what you should do. It's hardly difficult to write a loop in Python and you won't have to repeat code as long as you're [inserting values properly as parameters](http://www.mikusa.com/python-mysql-docs/query.html) rather than hard-coding them - which you're already doing, right? RIGHT??? >:( # Alternative Alternatives If you _expect_ to violate your constraint sometimes and you want to take some other action when the row you try to insert turns out already to exist, then you might be interested in [`INSERT ... ON DUPLICATE KEY UPDATE`](http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html). This lets you perform such amazing feats of computational gymnastics as _counting stuff_:

    mysql> create table counting_is_fun (
        -> stuff int primary key,
        -> ct int unsigned not null default 1
        -> );
    Query OK, 0 rows affected (0.12 sec)

    mysql> insert into counting_is_fun (stuff)
        -> values (1), (2), (5), (3), (3)
        -> on duplicate key update ct = ct + 1;
    Query OK, 6 rows affected (0.04 sec)
    Records: 5  Duplicates: 1  Warnings: 0

    mysql> select * from counting_is_fun;
    +-------+----+
    | stuff | ct |
    +-------+----+
    |     1 |  1 |
    |     2 |  1 |
    |     3 |  2 |
    |     5 |  1 |
    +-------+----+
    4 rows in set (0.00 sec)

(Note: Compare the number of tuples you inserted to the number of "rows affected" by the query and the number of rows in the table afterward. Isn't counting fun?) Or, if you think the data you're inserting right now is at least as good as the data currently in the table, you could look into [`REPLACE INTO`](http://dev.mysql.com/doc/refman/5.0/en/replace.html) - but this is a MySQL-specific extension to the SQL standard and as usual, [it has its quirks](http://code.openark.org/blog/mysql/replace-into-think-twice), particularly with respect to `AUTO_INCREMENT` fields and `ON DELETE` actions associated with foreign key references. One other approach people love to suggest is `INSERT IGNORE`. This ignores the error and just keeps on rolling. Great, right? Who needs errors, anyway? The reasons I don't like this as a solution are:

* `INSERT IGNORE` will cause _any_ error that occurs during the statement to be ignored, not just whatever error you _think_ you don't care about.
* The documentation states, ["Ignored errors may generate warnings instead, although duplicate-key errors do not."](http://dev.mysql.com/doc/refman/5.0/en/insert.html) So you don't even necessarily know which _warnings_ to expect when using this keyword!
* To me, using `INSERT IGNORE` says, "I don't know how to do this the right way, so I'm just going to do it the wrong way."

I do use `INSERT IGNORE` sometimes, but when the documentation flat-out tells you the "right way" to do something, don't outsmart yourself. Try it that way first; if you still have a really good reason to do it the wrong way and risk violating the integrity of your data and _ruining everything forever_, at least you've made an informed decision.
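To make the single-row recommendation concrete, here is a minimal sketch of the loop (table and column names are placeholders, not taken from the question):

    insert_sql = "INSERT INTO my_table (a, b, c) VALUES (%s, %s, %s)"
    for row in rows:  # rows: an iterable of (a, b, c) tuples from the log files
        try:
            conn.execute(insert_sql, row)
        except pymysql.Error as msg:
            print("MYSQL ERROR!:{0}".format(msg))  # only this row is lost; the loop continues

Each statement stands alone, so a duplicate key costs you exactly one row instead of the whole batch.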
Copy files to a different directory based on their extension using Python Question: I have a list named source where I store the file names. Now, based on the extension of each file, I want it to be copied to a different folder. I know it is a very simple task but somehow I am not getting it. Here is my code:

    import shutil, os

    source = ['test_sound.wav','ts.raw']
    src = '/home/GM/codec_implement/Audio_files'
    destination1 = '/home/GM/codec_implement/raw_files'
    destination2 = '/home/GM/codec_implement/raw_files/wave_files'

    for files in source:
    fileName, fileExtension = os.path.splitext(files)
        if (fileExtension=='.raw'):
            full_filename = os.path.join(source, files)
            shutil.copy(full_filename,destination1)
        elif (fileExtension=='.wav'):
            full_filename = os.path.join(source, files)
            shutil.copy(full_filename,destination2)
        else:
            print "This is not a valid file format "

Errors: I get errors like "unexpected indentation" all the time. I don't understand where I am making the mistake. I am using Python 2.7. Answer: You need to move the line `fileName, fileExtension = os.path.splitext(files)` four spaces to the right. Take a look at [pylint](http://www.pylint.org/). It will show you where the error is. For example, running `pylint` against your code:

    pylint -r n /tmp/foo
    ************* Module foo
    E: 13,0: expected an indented block
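Beyond the indentation, note that `os.path.join(source, files)` joins against the list `source`; the directory string `src` is presumably what was meant. A corrected sketch of the loop under that assumption:

    for files in source:
        fileName, fileExtension = os.path.splitext(files)
        full_filename = os.path.join(src, files)
        if fileExtension == '.raw':
            shutil.copy(full_filename, destination1)
        elif fileExtension == '.wav':
            shutil.copy(full_filename, destination2)
        else:
            print "This is not a valid file format "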
Python matplotlib not importing due to failure to import unichr Question: I'm running Ubuntu 13.10 with Python 2.7.5+, and whenever I try to import the `matplotlib` package, I get an error. Specifically, my test script has only the following code: #!/usr/bin/python import matplotlib.pyplot as plt When I run the test script, here is the error: Traceback (most recent call last): File "./test.py", line 2, in <module> import matplotlib.pyplot as plt File "/usr/local/lib/python2.7/dist-packages/matplotlib/pyplot.py", line 27, in <module> import matplotlib.colorbar File "/usr/local/lib/python2.7/dist-packages/matplotlib/colorbar.py", line 34, in <module> import matplotlib.collections as collections File "/usr/local/lib/python2.7/dist-packages/matplotlib/collections.py", line 27, in <module> import matplotlib.backend_bases as backend_bases File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 56, in <module> import matplotlib.textpath as textpath File "/usr/local/lib/python2.7/dist-packages/matplotlib/textpath.py", line 22, in <module> from matplotlib.mathtext import MathTextParser File "/usr/local/lib/python2.7/dist-packages/matplotlib/mathtext.py", line 26, in <module> from six import unichr ImportError: cannot import name unichr Any idea what I'm doing wrong? Answer: You apparently have an outdated version of `six`. The `unichr` wrapper was added in version 1.4.0, as seen in [`CHANGES`](https://bitbucket.org/gutworth/six/src/default/CHANGES?at=default#cl-135). I'm not sure exactly when 1.4.0 was released, but the fix for [issue #25](https://bitbucket.org/gutworth/six/issue/25/add-unichr) was committed on 2013-05-18, so… some time after that. So, if you `print(six.__version__)`, and it's anything less than '1.4.0', that's your problem. Depending on whether you're installing packages with `pip` or with your system's package manager, the solution is going to be something like one of these: $ pip install --upgrade six $ apt-get install six $ brew install --upgrade six … etc. * * * But meanwhile, `matplotlib` shouldn't be requiring 1.3 but using features only available in 1.4. Your GitHub issue [matplotlib#3538](https://github.com/matplotlib/matplotlib/issues/3538) should get this fixed.
Open right button menu then select one item from the menu by using Selenium+Python Question: I am trying to open the right-button (context) menu from a web page and then select one of the options from it (e.g. print...). I'm using selenium and Python. Does anyone know how to do this? Thanks a lot! Answer: I don't know if I understood you well. For example, try this:

    from selenium import webdriver
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Firefox()
    driver.get("https://www.google.pl")
    driver.find_element_by_xpath(".//*[@id='gbwa']/div[1]/a").click()
    driver.find_element_by_xpath(".//*[@id='gb36']/span[1]").click()
    element = driver.find_element_by_xpath("//body")
    element.send_keys(Keys.CONTROL, 'p')
    print driver.current_url
    driver.close()

[And read about xpath](http://www.w3schools.com/xpath/)
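If the goal is the browser's native right-click menu rather than a menu drawn by the page, selenium can open it with `ActionChains`, though note the native context menu belongs to the OS, so picking an entry from it usually has to be done with keyboard keys rather than clicks. A sketch:

    from selenium import webdriver
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Firefox()
    driver.get("https://www.google.pl")
    element = driver.find_element_by_tag_name("body")
    ActionChains(driver).context_click(element).perform()  # opens the right-click menu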
Improving performance of python multithreading for downloading webpages Question: I am trying to write python code that downloads web pages using separate threads. Here is an example of my code: import urllib2 from threading import Thread import time URLs = ['http://www.yahoo.com/', 'http://www.time.com/', 'http://www.cnn.com/', 'http://www.slashdot.org/' ] def thread_func(arg): t = time.time() page = urllib2.urlopen(arg) page = page.read() print time.time() - t for url in URLs: t = Thread(target = thread_func, args = (url, )) t.start() t.join() I run the code and the threads seem to execute serially, if I'm not mistaken, with the time of the download measured but each one is output to console after a certain amount of time. Am I coding this correctly? Answer: The call to `t.join()` blocks the current thread until the target thread ends. You're calling that right after you create the thread, so you don't have more than one downloader thread running at a time. Change your code to this: threads = [] for url in URLs: t = Thread(target = thread_func, args = (url, )) t.start() threads.append(t) # All threads started, now wait for them to finish for t in threads: t.join()
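An alternative that avoids managing the `Thread` objects by hand is the thread pool that ships inside `multiprocessing`. A sketch:

    from multiprocessing.pool import ThreadPool

    pool = ThreadPool(4)         # four downloads in flight at a time
    pool.map(thread_func, URLs)  # blocks until every URL is done
    pool.close()
    pool.join()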
FileNotFoundError: [Errno 2] Question: Synopsis: How do I read a file in Python? why must it be done this way? My problem is that I get the following error: Traceback (most recent call last): File "C:\Users\Terminal\Desktop\wkspc\filetesting.py", line 1, in <module> testFile=open("test.txt") FileNotFoundError: [Errno 2] No such file or directory: 'test.txt' Which originates from the following code: (that is the entire '.py' file) testFile=open("test.txt") print(testFile.read()) "test.txt" is in the same folder as my program. I'm new to Python and do not understand why I am getting file location errors. I'd like to know the fix and why the fix has to be done that way. I have tried using the absolute path to the file, "C:\Users\Terminal\Desktop\wkspc\test.txt" Other details: "Python 3.4.1 (v3.4.1:c0e311e010fc, May 18 2014, 10:38:22) [MSC v.1600 32 bit (Intel)] on win32" Windows 7, 32 Bit Answer: Since you are using IDLE(GUI) the script may not be launched from the directory where the script resides. I think the best alternative is to go with something like: import os.path scriptpath = os.path.dirname(__file__) filename = os.path.join(scriptpath, 'test.txt') testFile=open(filename) print(testFile.read()) `os.path.dirname(__file__)` will find the directory where the currently running script resides. It then uses `os.path.join` to prepend `test.txt` with that path. If this doesn't work then I can only guess that `test.txt` isn't actually in the same directory as the script you are running.
Character encoding in python to replace 'u2019' with ' Question: I have tried numerous ways to encode this to the end result `"BACK RUSHIN'"` with the most important character being the right apostrophe `'`. I would like a way of getting to this end result using some of the built in functions Python has where there is no discrimination between a normal string and a unicode string. This was the code I was using to retrieve the string: `str(unicode(etree.tostring(root.xpath('path')[0],method='text', encoding='utf-8'),errors='ignore')).strip()` With the result being: `'BACK RUSHIN'` the thing being the apostrophe `'` is missing. Another way was: `root.xpath('path/text()')` And that result was: `u'BACK RUSHIN\u2019'` in python. Lastly if I try: `u'BACK RUSHIN\u2019'.encode('ascii', 'replace')` The result is: `'BACK RUSHIN?'` Please no replace functions, I would like to make use of pythons codec libraries. Also no printing the string because it is being held in a variable. Thanks Answer: >>> import unidecode >>> unidecode.unidecode(u'BACK RUSHIN\u2019') "BACK RUSHIN'" [unidecode](https://pypi.python.org/pypi/Unidecode)
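Note that `unidecode` is a third-party package (`pip install Unidecode`). If you'd rather stay inside the standard library, and still avoid `replace`, `unicode.translate` with a code-point mapping handles this one character:

    >>> u'BACK RUSHIN\u2019'.translate({0x2019: u"'"})
    u"BACK RUSHIN'"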
Urllib Python is not providing with the html code I see with with inspect element Question: I'm trying to crawl the results in this link: url = "<http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia- retail-evolution-lab-aopen-shopping%2F>" When I inspect it with firebug I can see the html code and I know what I need to do to extract the tweets. The problem is when I get the response using urlopen, i don't get the same html code. I only get tags. What am I missing? Example code below: def get_tweets(section_url): html = urlopen(section_url).read() soup = BeautifulSoup(html, "lxml") tweets = soup.find("div", "results") category_links = [dd.a["href"] for tweet in tweets.findAll("div", "result-tweet")] return category_links url = "http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F" cat_links = get_tweets(url) Thanks, YB Answer: The problem is that the content of `results` div is filled up with extra HTTP call and javascript code being executed on the browser side. `urllib` only "sees" the initial HTML page that doesn't contain the data you need. One option would be to follow @Himal's suggestion and simulate the underlying request to `trackbacks.js` that is sent for the data with tweets. The result is in JSON format that you can [`load()`](https://docs.python.org/2/library/json.html#json.load) using [`json`](https://docs.python.org/2/library/json.html) module coming with standard library: import json import urllib2 url = 'http://otter.topsy.com/trackbacks.js?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F&infonly=0&call_timestamp=1411090809443&apikey=09C43A9B270A470B8EB8F2946A9369F3' data = json.load(urllib2.urlopen(url)) for tweet in data['response']['list']: print tweet['permalink_url'] Prints: http://twitter.com/Evonomie/status/512179917610835968 http://twitter.com/abs_office/status/512054653723619329 http://twitter.com/TKE_Global/status/511523709677756416 http://twitter.com/trevinocreativo/status/510216232122200064 http://twitter.com/TomCrouser/status/509730668814028800 http://twitter.com/Evonomie/status/509703168062922753 http://twitter.com/peterchaly/status/509592878491136000 http://twitter.com/chandagarwala/status/509540405411840000 http://twitter.com/Ayjay4650/status/509517948747526144 http://twitter.com/Marketingccc/status/509131671900536832 This was "going down to metal" option. * * * Otherwise, you can take a "high-level" approach and don't bother about what is there happening under-the-hood. 
Let the real browser load the page which you would interact with through [selenium WebDriver](http://selenium- python.readthedocs.org/): from selenium import webdriver driver = webdriver.Chrome() # can be Firefox(), PhantomJS() and more driver.get("http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F") for tweet in driver.find_elements_by_class_name('result-tweet'): print tweet.find_element_by_xpath('.//div[@class="media-body"]//ul[@class="inline"]/li//a').get_attribute('href') driver.close() Prints: http://twitter.com/Evonomie/status/512179917610835968 http://twitter.com/abs_office/status/512054653723619329 http://twitter.com/TKE_Global/status/511523709677756416 http://twitter.com/trevinocreativo/status/510216232122200064 http://twitter.com/TomCrouser/status/509730668814028800 http://twitter.com/Evonomie/status/509703168062922753 http://twitter.com/peterchaly/status/509592878491136000 http://twitter.com/chandagarwala/status/509540405411840000 http://twitter.com/Ayjay4650/status/509517948747526144 http://twitter.com/Marketingccc/status/509131671900536832 * * * This is how you can scale the second option to get all of tweets following pagination: from selenium import webdriver from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC BASE_URL = 'http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F&offset={offset}' driver = webdriver.Chrome() # get tweets count driver.get('http://topsy.com/trackback?url=http%3A%2F%2Fmashable.com%2F2014%2F08%2F27%2Faustralia-retail-evolution-lab-aopen-shopping%2F') tweets_count = int(driver.find_element_by_xpath('//li[@data-name="all"]/a/span').text) for x in xrange(0, tweets_count, 10): driver.get(BASE_URL.format(offset=x)) # page header appears in case no more tweets found try: driver.find_element_by_xpath('//div[@class="page-header"]/h3') except NoSuchElementException: pass else: break # wait for results WebDriverWait(driver, 5).until( EC.presence_of_element_located((By.ID, "results")) ) # get tweets for tweet in driver.find_elements_by_class_name('result-tweet'): print tweet.find_element_by_xpath('.//div[@class="media-body"]//ul[@class="inline"]/li//a').get_attribute('href') driver.close()
Popen and stdin to automate input to .exe Question: I have a simple windows executable that asks for user input and I'm trying to make a python script to automate the input. What I've done: from subprocess import Popen, PIPE p = Popen([r'myprgm.exe'], stdin=PIPE) p.communicate(input=bytes(input("Test Input"), 'UTF-8')) Now, normally when the program runs, it outputs to the command prompt "give me input:" then expects input from the user. If I use Popen with the `stdin=PIPE` option, nothing is written to the command prompt at all - it just sits there blank. Then, when I use communicate, "Test Input" ends up on the python console, and not to the process. Is it possible to automate something like this? Or am I missing something fundamental here.. Answer: You can give `input` to [`communicate`](https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate) function. like p1 = Popen(["mycmd"], stdin=PIPE, stdout=PIPE) output = p1.communicate(input="my input")[0]
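One caveat about the snippet above: the question is on Python 3 (`bytes(..., 'UTF-8')`), where `communicate()` expects bytes, and a program that prompts for a line generally needs a trailing newline before its read returns. A sketch along those lines (the program name is taken from the question; the newline is an assumption about how it reads input):

    from subprocess import Popen, PIPE

    p = Popen(['myprgm.exe'], stdin=PIPE, stdout=PIPE)
    out, _ = p.communicate(input=b'Test Input\n')  # bytes, newline ends the child's read
    print(out.decode(errors='replace'))  # the "give me input:" prompt is captured here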
Capistrano deployment error Can't activate jruby-openssl-0.9.5-java Question: I have developed a new Rails (4.1.4) app in JRuby (1.7.10) and I am trying to deploy it with Capistrano v3 on a remote vps. The error I am getting looks like: INFO[551a80fb] Running ~/.rvm/bin/rvm default do bundle install --binstubs /home/deployer/apps/APPNAME/shared/bin --path /home/deployer/apps/APPNAME/shared/bundle --without development test on example.net DEBUG[551a80fb] Command: cd /home/deployer/apps/APPNAME/releases/20140919052426 && ~/.rvm/bin/rvm default do bundle install --binstubs /home/deployer/apps/APPNAME/shared/bin --path /home/deployer/apps/APPNAME/shared/bundle --without development test DEBUG[551a80fb] Gem::LoadError: can't activate jruby-openssl-0.9.5-java, already activated jruby-openssl-0.9.3 DEBUG[551a80fb] DEBUG[551a80fb] raise_if_conflicts at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/specification.rb:1988 DEBUG[551a80fb] DEBUG[551a80fb] activate at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/specification.rb:1238 DEBUG[551a80fb] DEBUG[551a80fb] gem at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_gem.rb:48 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:46 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/security.rb:11 DEBUG[551a80fb] DEBUG[551a80fb] require at org/jruby/RubyKernel.java:1083 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/package.rb:1 DEBUG[551a80fb] DEBUG[551a80fb] require at org/jruby/RubyKernel.java:1083 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/package.rb:43 DEBUG[551a80fb] DEBUG[551a80fb] require at org/jruby/RubyKernel.java:1083 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55 DEBUG[551a80fb] DEBUG[551a80fb] require at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/dependency_installer.rb:1 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/rubies/jruby-1.7.10/lib/ruby/shared/rubygems/dependency_installer.rb:4 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundl DEBUG[551a80fb] er-1.7.3/lib/bundler/installer.rb:1 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/installer.rb:2 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/cli/install.rb:1 DEBUG[551a80fb] run at 
/home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/cli/install.rb:78 DEBUG[551a80fb] DEBUG[551a80fb] install at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/cli.rb:145 DEBUG[551a80fb] DEBUG[551a80fb] run at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/vendor/thor/command.rb:27 DEBUG[551a80fb] DEBUG[551a80fb] invoke_command at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/vendor/thor/invocation.rb:121 DEBUG[551a80fb] DEBUG[551a80fb] dispatch at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/vendor/thor.rb:363 DEBUG[551a80fb] DEBUG[551a80fb] start at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/vendor/thor/base.rb:440 DEBUG[551a80fb] DEBUG[551a80fb] load at org/jruby/RubyKernel.java:1099 DEBUG[551a80fb] DEBUG[551a80fb] start at /home/deployer/.rvm/gems/jruby-1.7.10@global/gems/bundler-1.7.3/lib/bundler/cli.rb:9 DEBUG[551a80fb] DEBUG[551a80fb] eval at org/jruby/RubyKernel.java:1119 DEBUG[551a80fb] DEBUG[551a80fb] (root) at /home/deployer/.rvm/gems/jruby-1.7.10@global/bin/jruby_executable_hooks:15 This is how the Gemfile looks like: source 'https://rubygems.org' ruby '1.9.3', :engine => 'jruby', :engine_version => '1.7.10' gem 'bouncy-castle-java', '<= 1.50' # my attempt to fix the version of jruby-openssl gem 'jruby-openssl', '0.9.5' # to 0.9.5. Tried with 0.9.3 but with no effect. gem 'rails', '4.1.4' gem 'sass-rails', '~> 4.0.3' gem 'uglifier', '>= 1.3.0' gem 'therubyrhino' gem 'jquery-rails' gem 'jbuilder', '~> 2.0' gem 'sdoc', '~> 0.4.0', group: :doc gem 'activerecord-jdbcmysql-adapter' gem 'devise' gem 'devise_invitable', :github => 'scambra/devise_invitable' gem "paperclip" gem 'acts_as_list' gem 'pry-rails', group: :development gem 'rubyzip' gem 'to_bool', '~> 1.0.1' gem "jquery-fileupload-rails" # Use Capistrano for deployment gem 'capistrano', group: :development gem 'capistrano-rvm', group: :development gem 'capistrano-bundler', group: :development gem 'capistrano-rails', group: :development gem 'trinidad', require: false gem 'trinidad_init_services', require: false gem 'rvm1-capistrano3', require: false Capfile: require 'capistrano/setup' require 'capistrano/deploy' require 'capistrano/rvm' require 'capistrano/bundler' require 'capistrano/rails' require 'capistrano/rails/assets' require 'capistrano/rails/migrations' require 'rvm1/capistrano3' Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r } deploy.rb: # config valid only for Capistrano 3.1 lock '3.2.1' set :bundle_flags, '--deployment' # tried removing switch deployment if installing as system gem helps set :deploy_user, "deployer" set :application, 'APPNAME' set :repo_url, '[email protected]:user/repo.git' server "example.net", user: 'deployer', roles: [:web, :app, :db] set :rvm_type, :user set :rvm1_ruby_version, 'jruby-1.7.10' set :scm, :git set :pty, true set :deploy_to, "/home/#{fetch(:deploy_user)}/apps/#{fetch(:application)}" set :linked_files, %w{config/database.yml} set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system} set :keep_releases, 5 after "deploy", "deploy:cleanup" namespace :deploy do desc 'Restart application' task :restart do on roles(:app), in: :sequence, wait: 5 do execute :touch, release_path.join('tmp/restart.txt') end end after :publishing, :restart after :restart, :clear_cache do on roles(:web), in: :groups, limit: 3, wait: 10 do # Here we can do anything such as: # within release_path 
do execute :rake, 'cache:clear' # end end end desc 'Delete shared bundle folder' task :remove_shared_bundle do on roles(:app), in: :sequence, wait: 5 do execute :rm, "-fr", "#{shared_path}/bundle" end end before :starting, :remove_shared_bundle end def template(from, to) erb = File.read(File.expand_path("../config/recipes/templates/#{from}", File.dirname(__FILE__))) # File.join(File.expand_path(File.dirname(__FILE__)), 'poi') # put ERB.new(erb).result(binding), to upload! StringIO.new(ERB.new(erb).result(binding)), to end namespace :deploy do desc "Install everything onto the server" task :install do on roles(:all), in: :sequence, wait: 1 do execute 'mkdir', '-p', fetch(:deploy_to) execute :sudo, 'apt-get', '-y', "update" execute :sudo, 'apt-get', '-y', "install", "build-essential zlib1g-dev libssl-dev libreadline-gplv2-dev python-software-properties curl git-core openjdk-7-jdk jsvc" end end end It seems the root of the error is Thor gem. If I remove bundle folder inside shared directory then it installs all the gems. But it fails next time onwards. With 'remove_shared_bundle' task I tried to delete the bundle folder before before each deploy which worked. But then it would do a fresh installation every time which is time taking. Is there a remedy for this issue? Answer: I think this issue might be related to JRuby's default gems being unchangeable in 1.7.10 ... it has been fixed since, so the immediate thing is to try JRuby 1.7.15 (or at least 1.7.13) I would than not declare `gem 'jruby-openssl', '0.9.5'` in the _Gemfile_ (remove it completely) let JRuby use whatever it has available - wonder if there are any gem pulling it in, that could also help (if there's no gem pulling it into the bundle as a dependency) resolve it under 1.7.10.
Streaming camera feed from mjpeg-streamer with Flask (Python) Question: I'm trying to stream a webcam feed from mjpeg-streamer through flask in a simple web app. My flask code has a simple piece of html, which otherwise works, which I insert the following line into: <img src=”/?action=stream”/> But the picture doesn't load. I've tried varying this with things like making the source have localhost:8080 (mjpeg-streamers port), and I've had no luck. I've seen that for images, they should be in your static folder, but that doesn't quite seem to apply here. What am I doing wrong? Answer: This is likely due to the [Same-Origin Policy](http://en.wikipedia.org/wiki/Same-origin_policy) in web browsers. I assume you're running your flask application on port 5000, and your mjpeg- streamer is running on port 8080. This fails because you're trying to import a resource from another port. If you setup apache or similar to front your application, you can proxy the mjpeg-streamer so that everything is served from the same port, and the browser will pickup the stream correctly.
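Two things worth checking in the tag itself before reaching for a proxy: as posted it uses curly quotes (`”`) rather than plain ASCII quotes, which the browser treats as part of the attribute value, and the `src` points at the Flask port instead of the streamer. Assuming mjpeg-streamer's HTTP plugin on its default port 8080, the tag would normally look like:

    <img src="http://localhost:8080/?action=stream" />

(with straight quotes).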
unable to use library built with gobject-introspection in python Question: It's my first question on stackoverflow. Before I start, I'm sorry for my poor English. I'm currently developing a library, "Pocketvox". Its aim is to provide an easy way to use pocketsphinx (voice recognition) to control a Linux system. The main project is stored here <https://github.com/benoitfragit/pocketVox> I've coded it in C and I want to give developers access to it from Python, so I've generated the .gir and .typelib files. The library name is Pocketvox and every file is present. I tried to load it, but it failed:

    from gi.repository import Pocketvox

I've set the GI_TYPELIB_PATH variable to point to the folder containing the typelib file, but nothing changed. If someone has knowledge of gobject-introspection in C/Python, I need some help to resolve this problem. Answer: I found the problem myself: LD_LIBRARY_PATH was not set correctly. GI_TYPELIB_PATH only tells the introspection machinery where the .typelib file lives; the dynamic linker still needs LD_LIBRARY_PATH (or an ldconfig entry) to locate the compiled shared library itself.
How Can I deploy OVA file on Vsphere Client with python Question: I want to automate deploying an OVA image on vSphere with Python. I looked at some packages, viz. PySphere and psphere, but didn't find a direct method to do so. Is there any library I'm missing, or is there any other way to deploy OVA/OVF files/templates on vSphere with Python? Pls help!!! Answer: As far as I know, there is no Python package with an appropriate API for deploying an OVF template. You can use ovftool instead: VMware OVF Tool is a command-line utility that allows you to import and export OVF packages to and from many VMware products. Download ovftool from the VMware site: [https://my.vmware.com/web/vmware/details?productId=352&downloadGroup=OVFTOOL350](https://my.vmware.com/web/vmware/details?productId=352&downloadGroup=OVFTOOL350) To install ovftool:

    sudo /bin/sh VMware-ovftool-3.5.0-1274719-lin.x86_64.bundle

To deploy an OVA image as a template, the syntax is:

    ovftool -dm=thick -ds=3par1 -n=abhi_vm /root/lab/extract/overcloud-esx-ovsvapp.ova vi://root:[email protected]**.**/datacenter/host/cluster

Use os.system(ovftool_command) to run it from your Python script.
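If you'd rather not shell out through `os.system`, `subprocess` gives you error checking for free. A sketch (the datastore, VM name, and paths below are placeholders, not values from the question):

    import subprocess

    cmd = [
        'ovftool',
        '-dm=thick',       # disk mode
        '-ds=datastore1',  # hypothetical datastore name
        '-n=my_vm',        # hypothetical VM name
        '/path/to/image.ova',
        'vi://user:password@vcenter/datacenter/host/cluster',
    ]
    subprocess.check_call(cmd)  # raises CalledProcessError if ovftool fails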
Filtering another filter object Question: I am trying to generate primes endlessly by filtering out composite numbers. Using a list to store and test all primes makes the whole thing slow, so I tried to use generators.

    from itertools import count

    def chk(it, num):
        for i in it:
            if i % num:
                yield i

    genStore = [count(2)]
    primeStore = []
    while 1:
        prime = next(genStore[-1])
        primeStore.append(prime)
        genStore.append(chk(genStore[-1], prime))

It works quite well, generating primes, until it hits the maximum recursion depth. So I found ifilter (or filter in python 3). From the [documentation of the python standard library](https://docs.python.org/2/library/itertools.html#itertools.ifilter): > Make an iterator that filters elements from iterable returning only those > for which the predicate is True. If predicate is None, return the items that > are true. Equivalent to:

    def ifilter(predicate, iterable):
        # ifilter(lambda x: x%2, range(10)) --> 1 3 5 7 9
        if predicate is None:
            predicate = bool
        for x in iterable:
            if predicate(x):
                yield x

So I get the following:

    from itertools import count

    genStore = [count(2)]
    primeStore = []
    while 1:
        prime = next(genStore[-1])
        primeStore.append(prime)
        genStore.append(filter(lambda x: x % prime, genStore[-1]))

I expected to get: 2 3 5 7 11 13 17 ... What I get is: 2 3 4 5 6 7 ... It seems `next()` only iterates through `count()`, not the filters. Objects in the list should point to the underlying objects, so I expected it to work like `filter(lambda x: x%n, (.... (filter(lambda x:x%3, filter(lambda x:x%2, count(2))))`. I did some experiments and noticed the following characteristics:

1. `filter(lambda x:x%2, filter(lambda x:x%3, count(0)))` works, filtering out all 2*n and 3*n
2. `genStore = [count(2)]; genStore.append(filter(lambda x:x%2, genStore[-1])); genStore.append(filter(lambda x:x%3, genStore[-1]))` also works, filtering out all 2*n and 3*n
3. `next(filter(lambda x:x%2, filter(lambda x:x%3, count(2))))` works, printing out 5

In contrast:

    from itertools import count

    genStore = [count(2)]
    primeStore = []
    while 1:
        prime = next(genStore[-1])
        print(prime)
        primeStore.append(prime)
        genStore.append(filter(lambda x: x % prime, genStore[-1]))
        if len(genStore) == 3:
            for i in genStore[-1]:
                print(i)

doesn't work: it only filters out 4*n. Questions:

* Why doesn't it work?
* Is it a feature of python, or did I make a mistake somewhere?
* Is there any way to fix it?

Answer: I think your problem stems from the fact that the lambdas are not evaluated until it's 'too late': by the time each one runs, `prime` is the same for all of them, because they all refer to the same variable. You can avoid this with a custom filter that takes the prime as an explicit argument, using a normal function instead of a lambda:

    def myfilt(f, i, p):
        for n in i:
            print("gen:", n, i)
            if f(n, p):
                yield n

    def byprime(x, p):
        if x % p:
            print("pri:", x, p)
            return True

    f = myfilt(byprime, genStore[-1], prime)

This way you avoid the problem of the lambdas all being the same.
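For completeness: a common one-line fix, if you want to keep the lambdas, is to bind the current prime at definition time through a default argument, since default values are evaluated when the lambda is created rather than when it is called:

    genStore.append(filter(lambda x, p=prime: x % p, genStore[-1]))

Each filter then keeps its own `p` instead of sharing the loop variable `prime`.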
Images not displaying html <img /> Question: I have been at this for hours trying to get these images to display. I didn't want to resort to posting a question on stackoverflow but it seems like the best option right now. I have read a several posts on stackoverflow and even a couple others on different sites. I have tried out everything suggested and I don't seem to be spotting any obvious errors with my eyes by looking at my code. I am making a personal website for myself and I am making image links to my blogger, twitter, linkedin and github profiles. I have tried getting it to work both locally and live on the internet. I am using Python and Google App Engine. I am trying to insert the images into my ABOUT page, so in my main.py file, it's the AboutHandler. I don't think there are any bugs in my main.py file though. Feel free to [see the problem in action](http://www.juliandavidfarley-2.appspot.com/about). I'm sure it's something that I'm doing wrong in my html file. Any help would be greatly appreciated :) Here is my main.py file... import os import webapp2 import jinja2 template_dir = os.path.join(os.path.dirname(__file__), 'templates') jinja_env = jinja2.Environment(loader = jinja2.FileSystemLoader(template_dir), autoescape = True) def render_str(template, **params): t = jinja_env.get_template(template) return t.render(params) class BaseHandler(webapp2.RequestHandler): def write(self, *a, **kw): self.response.write(*a, **kw) def render_str(self, template, **params): params['user'] = self.user return render_str(template, **params) def render(self, template, **kw): self.response.write(render_str(template, **kw)) class MainHandler(BaseHandler): def get(self): self.render('home_personal.html') class AboutHandler(BaseHandler): def get(self): self.render('about_personal.html') class PortfolioHandler(BaseHandler): def get(self): self.render('portfolio_personal.html') class ContactHandler(BaseHandler): def get(self): self.render('contact_personal.html') app = webapp2.WSGIApplication([ ('/', MainHandler), ('/about', AboutHandler), ('/portfolio', PortfolioHandler), ('/contact', ContactHandler) ], debug=True) Here is the relevant portion of my html file... <div id="logos-social"> <!-- image links for my social presence --> <!-- blogger logo link --> <div> <a href="http://www.juliandavidfarley.blogspot.com"> <img src="../static/images/blogger_logo_for_prsnl_website.png" alt="blogger_link" width="25" height="25"/> </a> </div> <!-- github logo link --> <div> <a href="https://github.com/jvojens2"> <img src="../static/images/github_logo_for_prsnl_website.png" alt="github_link" width="25" height="25"/> </a> </div> <!-- linkedin logo link --> <div> <a href="http://www.linkedin.com/in/juliandavidfarley"> <img src="../static/images/linkedin_logo_for_prsnl_website.png" alt="linkedin_link" width="25" height="25"/> </a> </div> <!-- twitter logo link --> <div> <a href="https://twitter.com/bugfarley"> <img src="../static/images/logo_twitter_for_prsnl_website.png" alt="twitter_link" width="25" height="25"/> </a> </div> </div> Here is my css file... 
body { position: relative; font-family: Helvetica, Arial, sans-serif; font-size: 14px; background-color: #29586F; /* tealish-blue */ margin: 0 auto; } #main-section-home { width: 100%; height: 600px; } #main-section-about { position: relative; width: 600px; margin-right: auto; margin-left: auto; color: #C4D0D5; font-family: Arial, "Helvetica Neue", Helvetica, sans-serif; font-size: 18px; top: 80px;; } #main-section-contact { position: relative; width: 600px; margin-right: auto; margin-left: auto; color: #C4D0D5; /* light gray */ font-family: Arial, "Helvetica Neue", Helvetica, sans-serif; font-size: 18px; top: 80px;; } .questions { font-size: 20px; color: white; } #container { height: 600px; } #nav { float:left; width:100%; overflow:hidden; position:relative; height: 40px; background-color: #29586F; /* tealish-blue */ } #nav ul { clear:left; float:left; list-style:none; margin:0; padding:0; position:relative; left:50%; text-align:center; line-height: 2.5em; } #nav ul li { display:block; float:left; list-style:none; margin:0; padding:0; position:relative; right:50%; text-transform: uppercase; width: 130px; font-size: 18px; #nav ul li a { display:block; margin:0 0 0 1px; padding:3px 10px; background:#29586F; /* tealish-blue */ color: white; text-decoration:none; height: 40px; } #nav ul li a:hover { color:#3BA6DA; /* new blue */ } #nav ul li a.active, #nav ul li a.active:hover { color:#29586F; /* tealish-blue */ font-weight:bold; } #my-name-div { text-transform: uppercase; font-size: 50px; font-family: "Century Gothic", CenturyGothic, AppleGothic, sans-serif; color: white; letter-spacing: 4px; white-space: pre; width: 300px; } #my-name-div-small { text-transform: uppercase; font-size: 30px; font-family: "Century Gothic", CenturyGothic, AppleGothic, sans-serif; color: white; letter-spacing: 4px; /*white-space: pre;*/ width: 300px; } #my-title-div { font-size: 18px; color: #3BA6DA; /* new blue */ letter-spacing: 2px; font-family: "Century Gothic", CenturyGothic, AppleGothic, sans-serif; margin-top: 15px; width: 300px; } #my-title-div-small { font-size: 18px; color: #3BA6DA; /* new blue */ letter-spacing: 2px; font-family: "Century Gothic", CenturyGothic, AppleGothic, sans-serif; margin-top: 15px; position: relative; left: 15px; width: 300px; } #name-and-title-wrapper { position: relative; left: 25%; top: 100px; width: 300px; } #name-and-title-wrapper-small { position: relative; left: 25%; top: 50px; width: 300px; .link a { color: #3BA6DA; /* new blue */ text-decoration: none; } .link a:hover { color: white; text-decoration: none; } #logos-social div { margin: 5px; float: left; position: relative; left: 65%; } ![directory - images](http://i.stack.imgur.com/uytIR.png) ![directory - templates](http://i.stack.imgur.com/IUfLi.png) Here is my app.yaml... application: juliandavidfarley-2 version: 1 runtime: python27 api_version: 1 threadsafe: yes handlers: - url: /stylesheets static_dir: static - url: /favicon\.ico static_files: favicon.ico upload: favicon\.ico - url: .* script: main.app libraries: - name: webapp2 version: "2.5.2" - name: jinja2 version: latest Answer: Don't use `../` relative links. Use links relative to the root. Instead of: <img src="../static/images/blogger_logo_for_prsnl_website.png" ...> use <img src="/static/images/blogger_logo_for_prsnl_website.png" ...> Then, you handle the routing in your app.yaml, as follows: - url: /static static_dir: static/
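One detail worth spelling out about the app.yaml fix: App Engine matches handlers in order and uses the first pattern that fits, so the static mapping must appear above the catch-all `.*` script handler or `main.app` will swallow the image requests. A sketch of the relevant section:

    handlers:
    - url: /static
      static_dir: static
    - url: .*
      script: main.app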
Python - Benchmarking Disk - Write exactly x bytes in a file Question: I am trying to benchmark my hard drive, that is to say, calculate its latency (ms) and throughput (MB/s). To do that, I want to measure the execution time of Python's f.write function. What I need is to write exactly x bytes to my files. I understand that I need to open my file using

    f = open(file_name, 'wb')

Then what I do is

    for i in range(blocksize):
        f.write(b'\xff')

However, the results I obtain for the throughput (MB/s) are way too low. The latency looks correct. So what I deduced is that when I do the previous lines, I am actually writing more than one byte to the file, I am writing a string containing one byte ... I know that objects don't really have a size in Python, but is there a way to fix this problem? **EDIT** OK, here is the new code; now the results are inexplicably too high! The write limit for my disk should be 100MB/s, but I get results ten times faster. What's wrong?

    import sys
    import time

    f = open("test.txt",'wb+')

    def file_write_seq_access(blocksize):
        chunk = b'\xff'*4000
        for i in range(blocksize//4000):
            f.write(chunk)

    if __name__ == '__main__':
        start_time = time.time()
        file_write_seq_access(int(sys.argv[1]))
        stop_time = time.time()
        diff = stop_time - start_time
        print diff, "s"
        print (int(sys.argv[1])/diff), "B/s"

Answer: Simply put, Python isn't fast enough for this kind of byte-by-byte writing, and the file buffering and similar adds too much overhead. What you should do is chunk the operation:

    import sys

    blocksize = int(sys.argv[1])
    chunk = b'\xff'*10000
    with open("file.file", "wb") as f:
        for _ in range(blocksize // 10000):
            f.write(chunk)

Possibly using `PyPy` should give a further (very small, possibly negative) speed-up. Note that the OS will interfere with timings here, so there's going to be _a lot_ of variance. Using C might end up even faster. * * * After doing some timings, this matches `dd` for speed, so you're not going to be getting any faster.
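A note on the inflated numbers in the edit: `write()` usually returns as soon as the data lands in the OS page cache, long before it reaches the disk, so short runs end up measuring memory speed. If the benchmark should include the physical write, flush and sync before stopping the clock. A sketch:

    import os

    f.flush()
    os.fsync(f.fileno())  # block until the OS has pushed the buffered data to the device
    stop_time = time.time()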
What is the fastest way to find the exact value of the factorial of a large number in python? Question: I'm aiming at particularly large values of the number whose factorial is to be found for example- 12345678!. Even `math.factorial(12345678)` in python takes a lot of time to compute the factorial of such a number. I tried Stirling's Appoximation to compute the same but it does not give the exact value. Is there any other method to compute the same? EDIT 1: This is the preview of the code I tried to compute the trailing zeros in factorial of the number import math def main(): total_cases = int(eval(raw_input())) for case in xrange(total_cases): number = int(eval(raw_input())) if number >= 1e9: break factorial_n = math.factorial(number) count = 0 for i in xrange(1, number): temp = 10**i if factorial_n % temp == 0 : count += 1 else: print count break main() EDIT 2: I just found that the bottleneck is the dividing step. Answer: `scipy` has a fast C implementation for both an approximation and exact values. scipy.misc.factorial(12345, exact=True) Tried this myself, takes under a second. But upon trying `math.factorial(12345)` it also takes under a second. Have you tried this yourself?
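If the factorial is only an intermediate step, as in the trailing-zeros code in the question, you can skip computing it entirely: the number of trailing zeros of n! equals the number of factors of 5 in it (there are always more factors of 2 than of 5). A sketch:

    def trailing_zeros(n):
        count = 0
        p = 5
        while p <= n:
            count += n // p  # multiples of 5, 25, 125, ... each add a factor
            p *= 5
        return count

    print(trailing_zeros(12345678))  # immediate, no factorial required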
Repeat same method on multiple objects in python Question: This is very basic but I am having trouble finding an answer through google. I have a loop that imports a number of text files into pandas dataframes. I have written the names to a list. onlyfiles = [ f for f in listdir(mypath) if isfile(join(mypath,f)) and join(mypath,f).endswith('.txt') ] dataframelist = [] for filenum in range(1,len(onlyfiles)): path = 'path/%s' % onlyfiles[filenum] print path name = onlyfiles[filenum][:-4] dsname = name print name name = pd.read_csv(path, sep = '\t') print '%s has been imported' % dsname dataframelist.append(dsname) I am now looking to run the to_sql() method on each object but cannot seem to find the right syntax. The objects are already instantiated, to my understanding but the interpreter thinks I am trying to manipulate string objects. I then take the list of pandas objects #if a dataframe exists and has a '-customer' at the end then import custlist = [] for item in list(dataframelist): if item.endswith('-customer'): custlist.append(item) and try and loop with the method for dsname in range(1,5): ds_to_sql = custlist[dsname] print ds_to_sql (ds_to_sql.to_sql('%s', engine)) % ds_to_sql I am sure this is quite basic and I appreciate the assistance. Answer: The problem seems to be here: name = onlyfiles[filenum][:-4] #name = 'example.txt' dsname = name #dsname = name = 'example.txt' print name name = pd.read_csv(path, sep = '\t') #name = DataFrame print '%s has been imported' % dsname dataframelist.append(dsname) #'example.txt' is appended I've added comments with what's happenning at each stage. While you are loading your dataframes - you never save them! You only keep their names in the list.
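A sketch of the corrected loop: keep the DataFrames themselves, paired with their names, instead of the name strings, then call `to_sql` on each. Note also that `range(1, len(onlyfiles))` skips the first file, so iterating over the list directly is safer:

    dataframelist = []
    for filename in onlyfiles:
        path = 'path/%s' % filename
        name = filename[:-4]
        df = pd.read_csv(path, sep='\t')
        dataframelist.append((name, df))

    for name, df in dataframelist:
        if name.endswith('-customer'):
            df.to_sql(name, engine)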
ImportError: No module named config on Travis-CI build Question: I'm having an import error on [Travis builds](https://travis- ci.org/dzlab/sentimentpy), the error is related to the configuration of flask: from flask import Flask app = Flask(__name__) app.config.from_object('config') On local machine, the flask app run correctly. But on travis here is the error trace $ nosetests --with-coverage --cover-package=core E......................... ====================================================================== ERROR: Failure: ImportStringError (import_string() failed for 'config'. Possible reasons are: - missing __init__.py in a package; - package or module path not included in sys.path; - duplicated package or module name taking precedence in sys.path; - missing module, class, function or variable; Debugged import: - 'config' not found. Original exception: ImportError: No module named config) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/nose/loader.py", line 414, in loadTestsFromName addr.filename, addr.module) File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath return self.importFromDir(dir_path, fqname) File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/home/travis/build/dzlab/sentimentpy/webapp/app/__init__.py", line 6, in <module> app.config.from_object('config') File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/flask/config.py", line 162, in from_object obj = import_string(obj) File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/werkzeug/utils.py", line 426, in import_string sys.exc_info()[2]) File "/home/travis/virtualenv/python2.7_with_system_site_packages/local/lib/python2.7/site-packages/werkzeug/utils.py", line 408, in import_string return __import__(import_name) ImportStringError: import_string() failed for 'config'. Possible reasons are: - missing __init__.py in a package; - package or module path not included in sys.path; - duplicated package or module name taking precedence in sys.path; - missing module, class, function or variable; What's wrong with my configuration? Answer: import_string only takes absolute module imports. Since `config` is not a top- level module, but part of `webapp`, you need to specify `webapp.config`. See <http://flask.pocoo.org/docs/0.10/config/#configuring-from-files>: app = Flask(__name__) app.config.from_object('yourapplication.default_settings') app.config.from_envvar('YOURAPPLICATION_SETTINGS')
Using Functions from Other .py Files Question: So I'm very new to Python and programming in general. I've run into an issue with my text game. I'm trying to create a function (pretty sure 'def (name) is a function, right?)in a .py called 'locationMenu' and use it in my main game file. Here is the start of my locationMenu.py (the rest is simply prints and if/elif/else statements. from main import Engine, Hallway, Canteen, Bedroom, Room class locationsMenu(): def locationMenu(self): and here is the place in my main game file where I am trying to use the 'locationMenu' function that I just created. locationMenu() That is just one line after I made a choice with input, etc. but now I would like to skip to the locationMenu that is in the other file. Any help much appreciated. Please don't use "big words" because as I said I'm still very new to programming in general. Answer: `locationMenu()` is an instance method of the `locationsMenu()` class, so you'd need to create an instance and call the method on it, something like: from locationMenu import locationsMenu my_menu = locationsMenu() # Create a new object my_menu.locationMenu() # Call its instance method If you were to just try `locationsMenu.locationMenu()` then you'd get something like this error: Traceback (most recent call last): File "./prog.py", line 3, in <module> locationsMenu.locationMenu() TypeError: locationMenu() missing 1 required positional argument: 'self' because `locationMenu()` is not a class method, and you're trying to call it in the absence of a `locationsMenu` object. If you don't want it inside a class at all, then make it a regular function, and do: `locationMenu.py`: from main import Engine, Hallway, Canteen, Bedroom, Room def locationMenu(): print("In function locationMenu()") `prog.py`: from locationMenu import locationMenu locationMenu()
Parse log text in Python using Regex Question: I've a format like this: att1="value 1" att2="value 2" att3="value 3" for example level="Information" clientAddr="127.0.0.1" action="GetByName" message="Action completed" url="/customers/foo" method="GET" Can I use regex to parse this? Inside the values I won't have any embedded quotes, but I will have spaces. Answer: Through the findall function, you can get the values inside double quotes. >>> import re >>> m = 'level="Information" clientAddr="127.0.0.1" action="GetByName" message="Action completed" url="/customers/foo" method="GET"' >>> s = re.findall(r'"([^"]*)"', m) >>> for i in s: ... print i ... Information 127.0.0.1 GetByName Action completed /customers/foo GET
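If the attribute names are needed as well, capturing both sides of each att="value" pair yields (key, value) tuples that feed straight into a dict - a quick sketch with the same sample string (key order in the printed dict may vary):

    >>> dict(re.findall(r'(\w+)="([^"]*)"', m))
    {'level': 'Information', 'clientAddr': '127.0.0.1', 'action': 'GetByName', 'message': 'Action completed', 'url': '/customers/foo', 'method': 'GET'}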
Python - Using Cookies Successfully Question: I am trying to use saved cookies to open a logged-in Facebook session... import urllib2, urllib, cookielib jar = cookielib.CookieJar() cookie = urllib2.HTTPCookieProcessor(jar) opener = urllib2.build_opener(cookie) data = urllib.urlencode({'email':'[email protected]','pass':'swagpassword','login':'Log+In'}) req = urllib2.Request('http://www.facebook.com/login.php') response = opener.open(req, data) cookie_header = response.headers.get("Set-Cookie") response = opener.open(req, data) #I open it twice on purpose if "Logout" in response.read(): print("Logged In") jar = cookielib.CookieJar() #new instance cookie = urllib2.HTTPCookieProcessor(jar) #new instance opener = urllib2.build_opener(cookie) #new instance cookie_request = urllib2.Request('http://www.facebook.com/login.php') cookie_request.add_header("cookie", cookie_header) cookie_POST = opener.open(cookie_request) cookie_POST = opener.open(cookie_request) if "Logout" in cookie_POST.read(): print("Logged In") It prints "Logged In" the first time successfully, but when I try using the saved cookie, I am not logged in. How can I fix this? (Without using other downloaded modules.) Answer: Just reuse the old instances. .... if "Logout" in response.read(): print("Logged In") #jar = cookielib.CookieJar() #new instance #cookie = urllib2.HTTPCookieProcessor(jar) #new instance #opener = urllib2.build_opener(cookie) #new instance cookie_request = urllib2.Request('http://www.facebook.com/login.php') #cookie_request.add_header("cookie", cookie_header) cookie_POST = opener.open(cookie_request) ...
interactive mode doesn't switch on from script (matplotlib 1.4.0 python 2.7.5 on mac osx 10.8.5) Question: Using python 2.6 on my mac the following works fine (i.e. a plot window opens): import matplotlib.pyplot as plt from numpy import linspace, sin, pi plt.ion() print "Is interactive:?", plt.isinteractive() x = linspace(-pi, pi, 1001) plt.plot(x, sin(x)) raw_input() #keep the window open It works when I run it in shell (i.e. $ python test.py) as well as when I run it in an interactive python terminal. I recently installed python 2.7, and with it nothing happens (more precisely, the plot window appears in the Dock, but doesn't open) when I run my script from shell. The value of plt.isinteractive() is false even after plt.ion(). When I run the same code in an interactive python terminal, everything is fine. The answer to [this question](http://stackoverflow.com/questions/19105388/python-2-7-mac-osx-interactive-plotting-with-matplotlib-not-working) makes the plot window appear, but I find it annoying that now I have to add plt.pause(0.1) to my script. Is there a way to get the earlier behaviour without modifying the code? The backend is macosx. Answer: It seems that this is a [bug](https://github.com/matplotlib/matplotlib/issues/3505) related to matplotlib 1.4. An ugly workaround is to include: import sys sys.ps1 = 'SOMETHING' before importing matplotlib. Alternatively, one can use ipython to run the script. For more details see here <https://github.com/matplotlib/matplotlib/issues/3505>
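Applied to the script above, the workaround would presumably look like this (the 'SOMETHING' value is arbitrary - setting sys.ps1 just makes matplotlib believe it is running in an interactive interpreter, and it must happen before matplotlib is imported):

    import sys
    sys.ps1 = 'SOMETHING'  # workaround for the matplotlib 1.4 bug; set before importing matplotlib

    import matplotlib.pyplot as plt
    from numpy import linspace, sin, pi

    plt.ion()
    x = linspace(-pi, pi, 1001)
    plt.plot(x, sin(x))
    raw_input()  # keep the window open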
GraphLab Create "ImportError: No module named graphlab" Question: I followed [these instructions](http://graphlab.com/products/create/quick-start-guide.html) to set up GraphLab on my Ubuntu machine. At the end, I opened Python 2.7.6 and ran the first of the test lines `import graphlab as gl`. This gave me Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named graphlab How can I begin to diagnose this? **Details:** I ran `python -V` from a terminal, and it returned `Python 2.7.6`. In `/usr/bin` I find the following `pyth*` entries ... I wonder if something somewhere pointed at the wrong version: python python2.7-config python3.4 python-config python2 python2-config python3.4m pythontex python2.7 python3 python3m pythontex3 Answer: Another option is to use `virtualenvwrapper` for the easy creation and application of virtual environments. For example, following [this documentation](http://virtualenvwrapper.readthedocs.org/en/latest/install.html), start with installation: sudo pip install virtualenvwrapper Open your `.bashrc` settings file, for example run `gedit .bashrc`, and append the following lines to the bottom of it: export WORKON_HOME=$HOME/.virtualenvs export PROJECT_HOME=$HOME/Devel source /usr/local/bin/virtualenvwrapper.sh Restart your terminal window, and then you can make your virtual environment, say call it "test": mkvirtualenv test Now test is a virtual environment, and you are in it (i.e., `test` is "activated" currently). To put GraphLab in `test`, pip install graphlab-create Similarly, you can install other python toolkits in `test` by using `pip`, and any python program you run from within `test` will be able to see only the python toolkits that are installed here.
Fastest way to extract tar files using Python Question: I have to extract hundreds of tar.bz files, each 5GB in size. So I tried the following code: import tarfile from multiprocessing import Pool files = glob.glob('D:\\*.tar.bz') ##All my files are in D for f in files: tar = tarfile.open (f, 'r:bz2') pool = Pool(processes=5) pool.map(tar.extractall('E:\\') ###I want to extract them in E tar.close() But the code has type error: TypeError: map() takes at least 3 arguments (2 given) How can I solve it? Any further ideas to accelerate extracting? Answer: Define a function that extracts a single tar file. Pass that function and a list of tar files to `multiprocessing.Pool.map`; `functools.partial` pins the destination argument so `map` only has to supply each path: from functools import partial import glob from multiprocessing import Pool import tarfile def extract(path, dest): with tarfile.open(path, 'r:bz2') as tar: tar.extractall(dest) if __name__ == '__main__': files = glob.glob('D:\\*.tar.bz') pool = Pool(processes=5) pool.map(partial(extract, dest='E:\\'), files)
Loading initial data with Django 1.7 and data migrations Question: I recently switched from Django 1.6 to 1.7, and I began using migrations (I never used South). Before 1.7, I used to load initial data with a `fixture/initial_data.json` file, which was loaded with the `python manage.py syncdb` command (when creating the database). Now, I started using migrations, and this behavior is deprecated: > If an application uses migrations, there is no automatic loading of > fixtures. Since migrations will be required for applications in Django 2.0, > this behavior is considered deprecated. If you want to load initial data for > an app, consider doing it in a data migration. > (<https://docs.djangoproject.com/en/1.7/howto/initial-data/#automatically-loading-initial-data-fixtures>) The [official documentation](https://docs.djangoproject.com/en/1.7/topics/migrations/#data-migrations) does not have a clear example on how to do it, so my question is: What is the best way to import such initial data using data migrations: 1. Write Python code with multiple calls to `mymodel.create(...)`, 2. Use or write a Django function ([like calling `loaddata`](http://stackoverflow.com/questions/887627/programmatically-using-djangos-loaddata)) to load data from a JSON fixture file. I prefer the second option. I don't want to use South, as Django seems to be able to do it natively now. Answer: Assuming you have a fixture file in `<yourapp>/fixtures/initial_data.json` 1. Create your empty migration: In Django 1.7: python manage.py makemigrations --empty <yourapp> In Django 1.8+, you can provide a name: python manage.py makemigrations --empty <yourapp> --name load_initial_data 2. Edit your migration file `<yourapp>/migrations/0002_auto_xxx.py` 2.1. Custom implementation, inspired by Django's `loaddata` (initial answer): import os from django.core import serializers from django.db import migrations # needed for the Migration class and RunPython below fixture_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../fixtures')) fixture_filename = 'initial_data.json' def load_fixture(apps, schema_editor): fixture_file = os.path.join(fixture_dir, fixture_filename) fixture = open(fixture_file, 'rb') objects = serializers.deserialize('json', fixture, ignorenonexistent=True) for obj in objects: obj.save() fixture.close() def unload_fixture(apps, schema_editor): "Brutally deleting all entries for this model..." MyModel = apps.get_model("yourapp", "ModelName") MyModel.objects.all().delete() class Migration(migrations.Migration): dependencies = [ ('yourapp', '0001_initial'), ] operations = [ migrations.RunPython(load_fixture, reverse_code=unload_fixture), ] 2.2. A simpler solution for `load_fixture` (per @juliocesar's suggestion): import os from django.core.management import call_command fixture_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../fixtures')) fixture_filename = 'initial_data.json' def load_fixture(apps, schema_editor): fixture_file = os.path.join(fixture_dir, fixture_filename) call_command('loaddata', fixture_file) _Useful if you want to use a custom directory._ 2.3. **Simplest:** calling `loaddata` with `app_label` will load fixtures from the `<yourapp>`'s `fixtures` dir automatically: from django.core.management import call_command fixture = 'initial_data' def load_fixture(apps, schema_editor): call_command('loaddata', fixture, app_label='yourapp') _If you don't specify `app_label`, loaddata will try to load `fixture` filename from **all** apps fixtures directories (which you probably don't want)._ 3. Run it python manage.py migrate <yourapp>
how to properly use try/except/with inside functions and main Question: I am a relative python newbie and I am getting confused with how to properly handle exceptions. Apologies for the dumb question. In my main() I iterate through a list of dates, and for each date I call a function which downloads a csv file from a public web server. I want to properly catch exceptions for obvious reasons, but especially because I do not know when the files of interest will be available for download. My program will execute as part of a cron job and will attempt to download these files every 3 hours if available. What I want is to download the first file in the list of dates, and if that results in a 404, the program shouldn't proceed to the next file, because the assumption is that if the oldest date in the list is not available then none of the others that come after it will be available either. I have the following python pseudo-code. I have try/except blocks inside the function that attempts to download the files, but if an exception occurs inside the function, how do I properly handle it in main() so I can decide whether to proceed to the next date or not? The reason why I created a function to perform the download is because I want to re-use that code later on in the same main() block for other file types. def main(): ... ... # datelist is a list of date objects for date in datelist: download_file(date) def download_file(date): date_string = str(date.year) + str(date.strftime('%m')) + str(date.strftime('%d')) request = HTTP_WEB_PREFIX+ date_string + FILE_SUFFIX try: response = urllib2.urlopen(request) except urllib2.HTTPError, e: print "HTTPError = " + str(e) except urllib2.URLError, e: print "URLError = " + str(e) except httplib.HTTPException, e: print "HTTPException = " + str(e) except IOError: print "IOError = " + str(e) except Exception: import traceback print "Generic exception: " + traceback.format_exc() else: print "No problem downloading %s - continue..." % (response) try: with open(TMP_DOWNLOAD_DIRECTORY + response, 'wb') as f: except IOError: print "IOError = " + str(e) else: f.write(response.read()) f.close() Answer: The key concept here is: if you can fix the problem, you should trap the exception; if you can't, it's the caller's problem to deal with. In this case, the downloader can't fix things if the file isn't there, so it should bubble up its exceptions to the caller; the caller should know to stop the loop if there's an exception. So let's move all the exception handling out of the function into the loop, and fix it so it craps out if there's a failure downloading the file, as the spec requires: import sys # needed for sys.exc_info() below for date in datelist: date_string = str(date.year) + str(date.strftime('%m')) + str(date.strftime('%d')) try: download_file(date_string) except: e = sys.exc_info()[0] print ( "Error downloading for date %s: %s" % (date_string, e) ) break `download_file` should now, unless you want to put in retries or something like that, simply not trap the exceptions at all. Since you've decoded the date as you like in the caller, that code can come out of `download_file` as well, giving the much simpler def download_file(date_string): request = HTTP_WEB_PREFIX + date_string + FILE_SUFFIX response = urllib2.urlopen(request) print "No problem downloading %s - continue..." % (response) with open(TMP_DOWNLOAD_DIRECTORY + response, 'wb') as f: f.write(response.read()) (the explicit f.close() is unnecessary: the with statement closes the file for you). I would suggest that the `print` statement is superfluous, but if you really want it, using `logger` is a more flexible way forward, as that will allow you to turn it on or off as you prefer later by changing a config file instead of the code.
Error while testing postgresql database with python Question: I wanted to get started with databases in python, and I chose postgresql. I already created several databases, but now I simply want to check whether a database exists with python. For this I already read this answer: [Checking if a postgresql table exists under python (and probably Psycopg2)](http://stackoverflow.com/questions/1874113/checking-if-a-postgresql-table-exists-under-python-and-probably-psycopg2) and tried to use their solution: import sys import psycopg2 con = None try: con = psycopg2.connect(database="testdb", user="test", password="abcd") cur = con.cursor() cur.execute("SELECT exists(SELECT * from information_schema.testdb)") ver = cur.fetchone()[0] print ver except psycopg2.DatabaseError, e: print "Error %s" %e sys.exit(1) finally: if con: con.close() But unfortunately, I only get the output Error relation "information_schema.testdb" does not exist LINE 1: SELECT exists(SELECT * from information_schema.testdb) Am I doing something wrong, or did I miss something? Answer: Your question confuses me a little, because you say you want to look to see if a database exists, but you look in the information_schema.tables view. That view would tell you if a table existed in the currently open database. If you want to check if a database exists, assuming you have access to the 'postgres' database, you could: import sys import psycopg2, psycopg2.extras dbname = 'db_to_check_for_existence' con = None try: con = psycopg2.connect(database="postgres", user="postgres") cur = con.cursor(cursor_factory=psycopg2.extras.DictCursor) cur.execute("select * from pg_database where datname = %(dname)s", {'dname': dbname }) answer = cur.fetchall() if len(answer) > 0: print "Database {} exists".format(dbname) else: print "Database {} does NOT exist".format(dbname) except Exception, e: print "Error %s" %e sys.exit(1) finally: if con: con.close() What is happening here is you are looking in the system catalog pg_database, whose 'datname' column contains each of the database names. Your code would supply db_to_check_for_existence as the name of the database you want to check for existence. For example, you could replace that value with 'postgres' and you would get the 'exists' answer; if you replace the value with aardvark you would probably get the does NOT exist report.
ValueError: Inconsistent Shapes Error in Scikit Learn Train_Test Split Question: I'm preparing to run some predictions on a csv document comparing job descriptions to salary outcomes. I've split the data set into training and test where features is what I'm working with and target is what I'm predicting. When I go to print and confirm that these records were separated properly I get the following error: ValueError: Inconsistent Shapes My code and the resulting error follow: import csv import numpy as np # create posting & label list postList = [] labelList = [] filename = '\Users\yantezia.patrick\Downloads\Postings.csv' csvFile = csv.reader(open(filename, 'r'), delimiter=",") for row in csvFile: postList.append(row[2]) labelList.append(row[10]) #appending specific columns to specific list #these willbe labels # remove first row postList = postList[1:] #clearing out the header rows labelList = labelList[1:] temp = np.array([float(i) for i in labelList]) med = np.median(temp) for i, val in enumerate(labelList): if float(val) >= med: labelList[i] = 1 else: labelList[i] = 0 # subset list postList = postList[:100] labelList = labelList[:100] print postList[:2] from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer import pandas as pd # create term matrix cv = CountVectorizer(lowercase=True, stop_words='english', ngram_range=(1,3), min_df=10) tfidf = TfidfVectorizer(lowercase=True, stop_words='english', ngram_range=(1,3), min_df=10) tf_dm = cv.fit_transform(postList) tfidf_dm = tfidf.fit_transform(postList) pd.DataFrame(tfidf_dm.toarray(),index=postList,columns=tfidf.get_feature_names()).head(10) tfidf.get_feature_names() tm = tm.toarray() print tf_dm tm = cv.fit(postList) print tm.vocabulary_ print tf_dm.shape print tfidf_dm.shape #add labels to word vector from sklearn.cross_validation import train_test_split features_train1 = train_test_split(tf_dm, labels, test_size=0.33, random_state=42) features_test1 = train_test_split(tf_dm, labels, test_size=0.33, random_state=42) target_train1 = train_test_split(tf_dm, labels, test_size=0.33, random_state=42) target_test1 = train_test_split(tf_dm, labels, test_size=0.33, random_state=42) features_train2 = train_test_split(tfidf_dm, labels, test_size=0.33, random_state=7) features_test2 = train_test_split(tfidf_dm, labels, test_size=0.33, random_state=7) target_train2 = train_test_split(tfidf_dm, labels, test_size=0.33, random_state=7) target_test2 = train_test_split(tfidf_dm, labels, test_size=0.33, random_state=7) print np.sum(target_train1) print np.sum(target_test1) print target_train1 print target_test1 --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-82-53ecd8559f48> in <module>() ----> 1 print np.sum(target_train1) 2 print np.sum(target_test1) 3 print target_train1 4 print target_test1 C:\Users\yantezia.patrick\AppData\Local\Continuum\Anaconda\lib\site-packages\numpy\core\fromnumeric.pyc in sum(a, axis, dtype, out, keepdims) 1707 except AttributeError: 1708 return _methods._sum(a, axis=axis, dtype=dtype, -> 1709 out=out, keepdims=keepdims) 1710 # NOTE: Dropping the keepdims parameters here... 
1711 return sum(axis=axis, dtype=dtype, out=out) C:\Users\yantezia.patrick\AppData\Local\Continuum\Anaconda\lib\site-packages\numpy\core\_methods.pyc in _sum(a, axis, dtype, out, keepdims) 23 def _sum(a, axis=None, dtype=None, out=None, keepdims=False): 24 return um.add.reduce(a, axis=axis, dtype=dtype, ---> 25 out=out, keepdims=keepdims) 26 27 def _prod(a, axis=None, dtype=None, out=None, keepdims=False): C:\Users\yantezia.patrick\AppData\Local\Continuum\Anaconda\lib\site-packages\scipy\sparse\compressed.pyc in __add__(self, other) 340 elif isspmatrix(other): 341 if (other.shape != self.shape): --> 342 raise ValueError("inconsistent shapes") 343 344 return self._binopt(other,'_plus_') ValueError: inconsistent shapes Answer: Can you please explain what you are trying to do? The definitions of features_train1, features_test1, target_train1 and target_test are the same. As you fixed the random state, they will all have the same content. Have you tried to look at what any of these are? Please look at [the documentation of train_test_split](http://scikit-learn.org/dev/modules/generated/sklearn.cross_validation.train_test_split.html#sklearn.cross_validation.train_test_split). `features_train1` is a tuple, where the first two elements are a split of `tf_dm` and the second two are a split of `labels`. What you most likely meant to do was features_train1, features_test1, labels_train1, labels_test1 = train_test_split(tf_dm, labels, test_size=0.33, random_state=42)
PyCharm show graphics Question: I am trying to have PyCharm show my graphics, but when I run this script nothing appears on the screen. from pylab import * def main(): ion() t = arange(0.0, 2.0, 0.01) s = sin(2*pi*t) plot(t, s) xlabel('time (s)') ylabel('voltage (mV)') title('About as simple as it gets, folks') grid(True) show() I am using Python 2.7.6, Pycharm 3.4 and Windows 7 x64. * * * D:\Python27\python.exe "D:/sletmig/python tests/pandas/chart.py" Process finished with exit code 0 * * * I am new to Python! Answer: The script only defines main() but never calls it, so nothing is drawn; add this to the end of the script: if __name__ == "__main__": main()
scrapy startproject tutorial: Error when I execute this command Question: C:\Windows\system32>scrapy startproject tutorial Traceback (most recent call last): File "C:\Python34\Scripts\scrapy-script.py", line 9, in <module> load_entry_point('Scrapy==0.24.4', 'console_scripts', 'scrapy')() File "C:\Python34\lib\site-packages\pkg_resources.py", line 353, in load_entry _point return get_distribution(dist).load_entry_point(group, name) File "C:\Python34\lib\site-packages\pkg_resources.py", line 2302, in load_entr y_point return ep.load() File "C:\Python34\lib\site-packages\pkg_resources.py", line 2029, in load entry = __import__(self.module_name, globals(),globals(), ['__name__']) File "C:\Python34\lib\site-packages\scrapy\__init__.py", line 28, in <module> import _monkeypatches ImportError: No module named '_monkeypatches' I'm a newbie in scrapy & python. I was trying to create a scrapy project but wasn't able to. I have installed pip, setuptools, lxml and scrapy. C:\Windows\system32>pip list cffi (0.8.6) cryptography (0.5.4) cssselect (0.9.1) lxml (3.4.0) pip (1.5.6) pycparser (2.10) pyOpenSSL (0.14) queuelib (1.2.2) Scrapy (0.24.4) setuptools (2.1) six (1.8.0) Twisted (14.0.2) w3lib (1.10.0) zope.interface (4.1.1) Please help! I've been searching around but still couldn't find a solution. Answer: Scrapy is supported under Python 2.7 only at the moment. You'll need to install 2.7 for this to work.
manage.py works but foreman start errors out Question: This question is Heroku and django specific. When I start my application using the command "python manage.py runserver", the webserver starts without error. I can then go and retrieve the homepage by visiting localhost:8000 in my browser. Great. When I start my application using the command "foreman start", the webserver also starts without error. It reads 00:44:19 web.1 | started with pid 9736 00:44:19 web.1 | 2014-09-22 00:44:19 [9736] [INFO] Starting gunicorn 19.0.0 00:44:19 web.1 | 2014-09-22 00:44:19 [9736] [INFO] Listening at: http://0.0.0.0:5000 (9736) 00:44:19 web.1 | 2014-09-22 00:44:19 [9736] [INFO] Using worker: sync 00:44:19 web.1 | 2014-09-22 00:44:19 [9739] [INFO] Booting worker with pid: 9739 Awesome. When I try to visit localhost:5000, something goes awry. The page reads "Internal server error." Huh. I look at the stacktrace that foreman produces, and here is what I see: 00:45:22 web.1 | respiter = self.wsgi(environ, resp.start_response) 00:45:22 web.1 | File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/handlers/wsgi.py", line 187, in __call__ 00:45:22 web.1 | self.load_middleware() 00:45:22 web.1 | File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/handlers/base.py", line 46, in load_middleware 00:45:22 web.1 | for middleware_path in settings.MIDDLEWARE_CLASSES: 00:45:22 web.1 | File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/conf/__init__.py", line 54, in __getattr__ 00:45:22 web.1 | self._setup(name) 00:45:22 web.1 | File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/conf/__init__.py", line 49, in _setup 00:45:22 web.1 | self._wrapped = Settings(settings_module) 00:45:22 web.1 | File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/conf/__init__.py", line 132, in __init__ 00:45:22 web.1 | % (self.SETTINGS_MODULE, e) 00:45:22 web.1 | ImportError: Could not import settings 'gettingstarted.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named 'dj_database_url' dj_database_url did not import correctly. Weird. "Import dj_database_url" appears at the very top of my settings.py file. If I activate my virtualenv and start python, I can run the command "import dj_database_url". Furthermore, when I start the server using manage.py, settings.py is opened, so the import must be working then as well. So why would using foreman break this import? Here is my wsgi.py: import os os.environ.setdefault("DJANGO_SETTINGS_MODULE", "gettingstarted.settings") from django.core.wsgi import get_wsgi_application application = get_wsgi_application() Thank you in advance for any help Answer: As `dj_database_url` is only installed in your virtual environment, make sure when you run `foreman` the environment is activated; otherwise you'll see the exception. If you were to deploy this on Heroku, you would not have this problem because by default Heroku will install from your `requirements.txt` file and thus your environment will have `dj_database_url` and everything will work as expected. I also see that you are using Python version 3.4 - unless you have a very specific need, try to use Python 2.7.x as some libraries are still being ported to Python 3. You may run into unexplained errors later that are due to version incompatibility.
SQLAlchemy proper Model definition and Selecting / Inserting Question: I'm having trouble with SQLAlchemy, both with setting up the models and with selecting/inserting. If I set up the model as follows and insert an item into a table, it works: #!/usr/bin/python import pymysql import sqlalchemy from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import * from sqlalchemy.orm import sessionmaker from sqlalchemy.sql import select engine = sqlalchemy.create_engine('mysql+pymysql://[email protected]/music?charset=utf8&use_unicode=0', pool_recycle=3600) connection = engine.connect() Base = declarative_base(bind=engine) Base metadata = MetaData() directory = Table('directory', metadata, Column('id', Integer, primary_key=True), Column('size', Integer), Column('name', String, unique=True), ) words = Table('words', metadata, Column('id', Integer, primary_key=True), Column('source', Integer, ForeignKey('directory.id')), Column('words', String), ) ratios = Table('ratios', metadata, Column('id', Integer, primary_key=True), Column('source', Integer, ForeignKey('directory.id')), Column('target', Integer, ForeignKey('directory.id')), Column('ratio', Integer), ) metadata.create_all(engine) # --------------------------------------------------------------------- # Execute Insert # --------------------------------------------------------------------- i = words.insert().values(words='jack', source=1) result = connection.execute(i) However, if I use the above model and try to select using the ID of a record (command below) I get no results: s = directory.select().where(id == 1) result = connection.execute(s) for r in result: print r The table does have a record with that ID, so it should return a result! If instead I set up the models / tables as follows and use the Select command (as below), this works: #!/usr/bin/python import pymysql import sqlalchemy from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import * from sqlalchemy import UniqueConstraint engine = sqlalchemy.create_engine('mysql+pymysql://[email protected]/music?charset=utf8&use_unicode=0', pool_recycle=3600) connection = engine.connect() Base = declarative_base(bind=engine) Base class Directory(Base): __tablename__ = "directory" id = Column(Integer, primary_key=True) name = Column(String(767), unique=True) size = Column(Integer) class Words(Base): __tablename__ = "words" id = Column(Integer, primary_key=True) source = Column(Integer, ForeignKey('directory.id')) words = Column(String(1500)) class Ratios(Base): __tablename__ = "ratios" id = Column(Integer, primary_key=True) source = Column(Integer, ForeignKey('directory.id')) target = Column(Integer, ForeignKey('directory.id')) ratio = Column(Integer) class Rename(Base): __tablename__ = "rename" id = Column(Integer, primary_key=True) source = Column(Integer, ForeignKey('directory.id')) name = Column(String(1500)) Base.metadata.create_all() # --------------------------------------------------------------------- # Execute Select # --------------------------------------------------------------------- s = select([Directory]).where(Directory.id == 1) result = connection.execute(s) for r in result: print r But then if I follow similar syntax to the Select for an Insert, this doesn't work: i = insert([Words]).values(source=1, words=out) connection.execute(i) Questions: Can someone recommend which style of creating the models is correct / best? Can someone clarify the syntax for insert, update, select for the recommended model? 
Is there a tutorial you can recommend that's concise and clear? I've been going through the SQLAlchemy documentation and it's really confusing -- I found examples to set up the models in both ways above, but haven't found any tutorial that's "end-to-end" in a straightforward and consistent way. Thank you. Answer: Are you intentionally trying not to use the ORM? If not, please check out the [comprehensive ORM tutorial](http://docs.sqlalchemy.org/en/rel_0_9/orm/tutorial.html). Here's a working example of your objects with select, insert, and update examples using the ORM: import sqlalchemy from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import * from sqlalchemy import orm engine = sqlalchemy.create_engine('sqlite://') connection = engine.connect() Base = declarative_base() class Directory(Base): __tablename__ = "directory" id = Column(Integer, primary_key=True) name = Column(String(767), unique=True) size = Column(Integer) def __repr__(self): return ( "Directory(id={self.id}, name={self.name}, size={self.size})" .format(self=self)) class Words(Base): __tablename__ = "words" id = Column(Integer, primary_key=True) source = Column(Integer, ForeignKey('directory.id')) directory = orm.relationship('Directory', backref='words') words = Column(String(1500)) def __repr__(self): return ( "Words(id={self.id}, source={self.source}, words={self.words})" .format(self=self)) class Ratios(Base): __tablename__ = "ratios" id = Column(Integer, primary_key=True) source = Column(Integer, ForeignKey('directory.id')) target = Column(Integer, ForeignKey('directory.id')) ratio = Column(Integer) class Rename(Base): __tablename__ = "rename" id = Column(Integer, primary_key=True) source = Column(Integer, ForeignKey('directory.id')) name = Column(String(1500)) def select_objs(session): # # Select examples # print 'Retrieving objects -----------------' word = session.query(Words).filter(Words.id == 1).first() print word print 'Access the directory via a relationship!' print word.directory print '---------------------------' if __name__ == '__main__': engine = create_engine('sqlite://', echo=True) Session = orm.sessionmaker() Session.configure(bind=engine) Base.metadata.bind = engine Base.metadata.create_all() session = Session() # # Insert some objects # dir1 = Directory(id=1, name='Some Dir', size=10) session.add(dir1) # Note because of the relationship configuration and the fact that dir1 # is already in the session, word doesn't have to be added # (no harm in doing so though) word = Words(id=1, directory=dir1, words='Some words!') session.flush() select_objs(session) # # Update them # # Update via ORM word.words = 'Some new words!' # update via query ( session.query(Directory) .filter(Directory.id == 1) .update({"name": "A new directory name!"})) # Prove they're updated: flush session, expunge existing objects session.flush() session.expunge(word) select_objs(session)
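If you do want to keep the Core (Table-based) style from your first snippet, note why the select returned nothing: in `directory.select().where(id == 1)`, `id` is Python's builtin `id` function, so `id == 1` is just a plain `False` - columns are referenced through the table's `.c` collection instead. A minimal sketch of select/insert/update in Core syntax, reusing the `directory` and `words` tables defined in the question (the 'jill' value is only an illustration):

    # Select: refer to columns via the table's .c collection
    s = select([directory]).where(directory.c.id == 1)
    for row in connection.execute(s):
        print row

    # Insert: the same pattern the question already used successfully
    i = words.insert().values(source=1, words='jack')
    connection.execute(i)

    # Update:
    u = words.update().where(words.c.id == 1).values(words='jill')
    connection.execute(u)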
How to print/display the output of a telnet session in terminal window (Python) Question: Can someone help me understand how print in the terminal window works? Here is a test script test_script.py import telnetlib HOST = "10.1.1.151" tn = telnetlib.Telnet(HOST) tn.open(HOST) test_var = ["test"] print test_var tn_read = tn.read_very_eager() print tn_read Output when script is run from terminal: $ python test_script.py ['test'] `tn_read` should be something like "User Name : " but it is not printed in the terminal window. If I run it from the interpreter I get what is expected: >>> tn.read_very_eager() '\n\rUser Name : ' Why, or what needs to be done, to have the following output when the script is called from the terminal? $ python test_script.py ['test'] User Name : Answer: The short answer is to use `read_until()` with a timeout, e.g.: timeout = 3 # seconds tn_read = tn.read_until('User Name :', timeout) print repr(tn_read) The long answer follows. `read_very_eager()` is non-blocking and it will return any cooked data already available, but nothing if there isn't any data. If you call a non-blocking "read" method too soon after establishing the connection, there may be no data to read and `read_very_eager()` will return an empty string - `''`. So your problem is probably timing related; the read is non-blocking and can not return data that has not yet been received, processed, and buffered. When you interact via the terminal, it takes time for you to type in the commands, so you don't notice the timing issue, but when run from a script the human delay is removed, and the timing issue becomes apparent. Try sleeping before calling the read method: import telnetlib import time HOST = "10.1.1.151" tn = telnetlib.Telnet(HOST) test_var = ["test"] print test_var time.sleep(5) tn_read = tn.read_very_eager() print repr(tn_read) Run the above as a script and you _might_ see your expected output... then again, you might not. Instead you might still see: ['test'] '' There may be other factors too, in particular the server may be responding with a Telnet IAC sequence. `telnetlib` will consume this sequence if it has arrived before you call `read_very_eager()` (any read function actually). In this case `read_very_eager()` also returns an empty string. If you want to see what is being exchanged over the connection, you can call `set_debuglevel(1)`: import telnetlib tn = telnetlib.Telnet('library.cedarville.edu') # any old server will do tn.set_debuglevel(1) tn_read = tn.read_all() print repr(tn_read) Typical output is: Telnet(library.cedarville.edu,23): recv '\xff\xfd\x03' Telnet(library.cedarville.edu,23): IAC DO 3 Telnet(library.cedarville.edu,23): recv '\xff\xfb\x03\xff\xfb\x01\xff\xfd\x18\xff\xfd#\xff\xfd$\xff\xfd\x1f' Telnet(library.cedarville.edu,23): IAC WILL 3 Telnet(library.cedarville.edu,23): IAC WILL 1 Telnet(library.cedarville.edu,23): IAC DO 24 Telnet(library.cedarville.edu,23): IAC DO 35 Telnet(library.cedarville.edu,23): IAC DO 36 Telnet(library.cedarville.edu,23): IAC DO 31 Telnet(library.cedarville.edu,23): recv '\r\n' Telnet(library.cedarville.edu,23): recv 'login: ' Of the other "read" functions, you should probably use `read_until()`. If you don't want to block indefinitely, pass a timeout to it as shown above.
printing json value to file Question: #!/usr/bin/python import os import json import urllib import urllib2 url = "https://www.example.com" parameters = {'resource': 'aaaa', 'apikey': '1111'} data = urllib.urlencode(parameters) req = urllib2.Request(url, data) response = urllib2.urlopen(req) json_data = response.read() with open("test.json") as json_file: json_file.write(json_data) print json_data I don't actually use the json module anywhere any more; it's left over from an earlier version of the script, before json_data was handled this way. Answer: As Martijn Pieters pointed out, the data is already encoded, so you shouldn't need the `json` module at all in this case. You can just write the output to a file (opened for writing): json_data = response.read() with open("test.json", "w") as json_file: json_file.write(json_data)
Performance of Java API versus Python with Cypher for Neo4J Question: I am working with an application that uses a Neo4J graph containing about 10 million nodes. One of the main tasks that I run daily is the batch import of new/updated nodes into the graph, on the order of about 1-2 million. After experimenting with Python scripts in combination with the Cypher query language, I decided to give the embedded graph with Java API a try in order to get better performance results. What I found is about a 5x improvement using the native Java API. I am using Neo4j 2.1.4, which I believe is the latest. I have read in other posts that the embedded graph is a bit faster, but that this should/could be changing in the near future. I would like to validate my findings with anyone who has observed similar results? I have included snippets below just to give a general sense of methods used - code has been greatly simplified. sample from cypher/python: cnode = self.graph_db.create(node(hash = obj.hash, name = obj.title, date_created = str(datetime.datetime.now()), date_updated = str(datetime.datetime.now()) )) sample from embedded graph using java: final Node n = Graph.graphDb.createNode(); for (final Label label : labels){ n.addLabel(label); } for (Map.Entry<String, Object> entry : properties.entrySet()) { n.setProperty(entry.getKey(), entry.getValue()); } Thank you for your insight! Answer: What you're actually doing here is comparing the speeds of two different APIs and merely using two different languages to do that. Therefore, you're not comparing like for like. The Java core API and the REST API used by Python (and other languages) have different idioms, such as explicit vs implicit transactions. Additionally, network latency associated with the REST API will make a great difference, especially if you are using one HTTP call per node created. So to get a more meaningful performance comparison, make sure you are comparing like for like: use Java via the REST API perhaps or use Cypher for both tests. Hint 1: you will get better performance in general over REST by batching up a number of requests into a single API call. Hint 2: the REST API will never be as fast as the core API as the latter is native and the former has many more layers to go through.
Timeout for each thread in ThreadPool in python Question: I am using Python 2.7. I am currently using ThreadPoolExecutor like this: params = [1,2,3,4,5,6,7,8,9,10] with concurrent.futures.ThreadPoolExecutor(5) as executor: result = list(executor.map(f, params)) The problem is that `f` sometimes runs for too long. Whenever I run `f`, I want to limit its run to 100 seconds, and then kill it. Eventually, for each element `x` in `params`, I would like to have an indication of whether or not `f` had to be killed, and in case it wasn't - what was the return value. Even if `f` times out for one parameter, I still want to run it with the next parameters. The `executor.map` method does have a `timeout` parameter, but it sets a timeout for the entire run, from the time of the call to `executor.map`, and not for each thread separately. What is the easiest way to get my desired behavior? Answer: _This answer is in terms of python's multiprocessing library, which is usually preferable to the threading library, unless your functions are just waiting on network calls. Note that the multiprocessing and threading libraries have the same interface._ Given your processes run for potentially 100 seconds each, the overhead of creating a process for each one is fairly small in comparison. You probably have to make your own processes to get the necessary control. One option is to wrap f in another function that will execute for at most 100 seconds: from multiprocessing import Pool def timeout_f(arg): pool = Pool(processes=1) return pool.apply_async(f, [arg]).get(timeout=100) Then your code changes to: result = list(executor.map(timeout_f, params)) * * * Alternatively, you could write your own thread/process control: from multiprocessing import Process from time import time def chunks(l, n): """ Yield successive n-sized chunks from l. """ for i in xrange(0, len(l), n): yield l[i:i+n] processes = [Process(target=f, args=(i,)) for i in params] exit_codes = [] for five_processes in chunks(processes, 5): # "in", not "=": iterate over the chunks for p in five_processes: p.start() time_waited = 0 start = time() for p in five_processes: if time_waited >= 100: p.join(0) p.terminate() else: p.join(100 - time_waited) p.terminate() time_waited = time() - start for p in five_processes: exit_codes.append(p.exitcode) # the Process attribute is exitcode, not exit_code You'd have to get the return values through something like [Can I get a return value from multiprocessing.Process?](http://stackoverflow.com/questions/8329974/can-i-get-a-return-value-from-multiprocessing-process) The exit codes of the processes are 0 if the processes completed and non-zero if they were terminated. 
Techniques from: [Join a group of python processes with a timeout](http://stackoverflow.com/questions/23686913/join-a-group-of-python-processes-with-a-timeout), [How do you split a list into evenly sized chunks in Python?](http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python) * * * As another option, you could just try to use apply_async on [multiprocessing.Pool](https://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers) from multiprocessing import Pool, TimeoutError if __name__ == "__main__": pool = Pool(processes=5) processes = [pool.apply_async(f, [i]) for i in params] results = [] for process in processes: try: results.append(process.get(timeout=100)) # append to results, not result except TimeoutError as e: results.append(e) Note that the above possibly waits more than 100 seconds for each process, as if the first one takes 50 seconds to complete, the second process will have had 50 extra seconds in its run time. More complicated logic (such as the previous example) is needed to enforce stricter timeouts.
googleappengine install pudb Question: I want to debug my python application on google app engine with pudb. I've installed buildout without using virtualenv and created a config file for it, buildout.cfg: [buildout] develop = . parts = python app pudb nosetests zipsymlink eggs = gaeapp unzip = true [python] recipe = zc.recipe.egg interpreter = python eggs = ${buildout:eggs} [app] recipe = rod.recipe.appengine url = https://storage.googleapis.com/appengine-sdks/featured/google_appengine_1.9.11.zip server-script = dev_appserver src = ${buildout:directory}/src/gaeapp exclude = tests zip-packages = True [pudb] recipe = zc.recipe.egg eggs = gaeapp pudb [nosetests] recipe = zc.recipe.egg eggs = NoseGAE WebTest gaeapp nose extra-paths = ${buildout:directory}/etc ${buildout:directory}/parts/google_appengine ${buildout:directory}/parts/google_appengine/lib/antlr3 ${buildout:directory}/parts/google_appengine/lib/fancy_urllib ${buildout:directory}/parts/google_appengine/lib/ipaddr ${buildout:directory}/parts/google_appengine/lib/webob_1_1_1 ${buildout:directory}/parts/google_appengine/lib/webapp2/ ${buildout:directory}/parts/google_appengine/lib/yaml/lib interpreter = python [zipsymlink] recipe = svetlyak40wt.recipe.symlinks path = ${app:src} files = ${app:app-directory}/packages.zip # Tools and dependencies svetlyak40wt.recipe.symlinks = 0.2.1 My app.yaml: application: gaeapp runtime: python27 threadsafe: true api_version: 1 handlers: - url: /_ah/spi/.* script: gae_api.APPLICATION libraries: - name: pycrypto version: latest - name: endpoints version: 1.0 - name: setuptools version: latest - name: webob version: latest - name: webapp2 version: latest builtins: - deferred: on My setup.py: from setuptools import setup, find_packages setup( name = "gaeapp", version = "1.0", url = 'http://github.com/blabla/gaeapp', license = 'BSD', description = "Just a test GAE app.", author = 'WOW', packages = find_packages('src'), package_dir = {'': 'src'}, install_requires = ['setuptools', 'pudb'] ) Everything installed fine; nosetests and devappserver work. Run server: bin/devappserver parts/app I'm trying to use pudb in code: import pudb; pudb.set_trace(); And I just see this error: ImportError: No module named pudb Are there any ways to use pudb with GAE apps? Answer: You need to tell `rod.recipe.appengine` what eggs to copy: packages = pudb urwid
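Slotted into the buildout above, that option would presumably go in the [app] part, since that is the part using rod.recipe.appengine:

    [app]
    recipe = rod.recipe.appengine
    ...
    packages =
        pudb
        urwid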
Append CSVs without column names Question: I have a list of CSVs, but they don't have column names. With my code in Python 2.7, the files append, but the first row in each CSV is recognized as the column names. How can I append the CSVs without losing those first rows? For example, two files: 1 2 3 a b 3 2 4 1 a e r Append: 1 2 3 a b 3 2 4 1 a e r The code is the following: import os import pandas as pd targetdir = r'E:/tals/ICF/Base Admision San Marcos 2015-1' filelist = os.listdir(targetdir) big_df=pd.DataFrame() for filename in filelist: big_df = big_df.append(pd.read_csv(os.path.join(targetdir, filename)),ignore_index=True) Answer: Set [`header=None`](http://pandas.pydata.org/pandas-docs/stable/io.html#io-read-csv-table) so read_csv stops treating the first row as a header (keep the assignment from your loop, since append returns a new frame): big_df = big_df.append(pd.read_csv(os.path.join(targetdir, filename), header=None), ignore_index=True)
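As an aside, repeatedly calling append copies the growing frame on every iteration; a sketch of the same job done in one pass with pd.concat (same targetdir and filelist as above):

    import os
    import pandas as pd

    frames = [pd.read_csv(os.path.join(targetdir, f), header=None) for f in filelist]
    big_df = pd.concat(frames, ignore_index=True)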
Python, numpy, einsum multiply a stack of matrices Question: For performance reasons, I'm curious if there is a **way to multiply a stack of stacks of matrices**. I have a 4-D array (500, 201, 2, 2). It's basically a 500-length stack of (201,2,2) matrices, where for each of the 500, I want to multiply the adjacent matrices using einsum and get another (201,2,2) matrix. I am only doing matrix multiplication on the [2x2] matrices at the end. Since my explanation is already heading off the rails, I'll just show what I'm doing now, and also the 'reduce' equivalent and why it's not helpful (because it's the same speed computationally). Preferably this would be a numpy one-liner, but I don't know what that is, or even if it's possible. # Code: Arr = rand(500,201,2,2) def loopMult(Arr): ArrMult = Arr[0] for i in range(1,len(Arr)): ArrMult = np.einsum('fij,fjk->fik', ArrMult, Arr[i]) return ArrMult def myeinsum(A1, A2): return np.einsum('fij,fjk->fik', A1, A2) A1 = loopMult(Arr) A2 = reduce(myeinsum, Arr) print np.all(A1 == A2) print shape(A1); print shape(A2) %timeit loopMult(Arr) %timeit reduce(myeinsum, Arr) ### Returns: True (201, 2, 2) (201, 2, 2) 10 loops, best of 3: 34.8 ms per loop 10 loops, best of 3: 35.2 ms per loop Any help would be appreciated. Things are functional, but when I have to iterate this over a large series of parameters, the code tends to take a long time, and **I'm wondering if there's a way to avoid the 500 iterations through a loop.** Answer: I don't think it's possible to do this efficiently using numpy (the `cumprod` solution was elegant, though). This is the sort of situation where I would use `f2py`. It's the simplest way of calling a faster language that I know of and only requires a single extra file. fortran.f90: subroutine multimul(a, b) implicit none real(8), intent(in) :: a(:,:,:,:) real(8), intent(out) :: b(size(a,1),size(a,2),size(a,3)) real(8) :: work(size(a,1),size(a,2)) integer i, j, k, l, m !$omp parallel do private(work,i,j) do i = 1, size(b,3) b(:,:,i) = a(:,:,i,size(a,4)) do j = size(a,4)-1, 1, -1 work = matmul(b(:,:,i),a(:,:,i,j)) b(:,:,i) = work end do end do end subroutine Compile with `f2py -c -m fmuls fortran.f90` (or `F90FLAGS="-fopenmp" f2py -c -m fmuls fortran.f90 -lgomp` to enable OpenMP acceleration); the module name passed to -m must match the `fmuls` import below. Then you would use it in your script as import numpy as np, fmuls Arr = np.random.standard_normal([500,201,2,2]) def loopMult(Arr): ArrMult = Arr[0] for i in range(1,len(Arr)): ArrMult = np.einsum('fij,fjk->fik', ArrMult, Arr[i]) return ArrMult def myeinsum(A1, A2): return np.einsum('fij,fjk->fik', A1, A2) A1 = loopMult(Arr) A2 = reduce(myeinsum, Arr) A3 = fmuls.multimul(Arr.T).T print np.allclose(A1,A2) print np.allclose(A1,A3) %timeit loopMult(Arr) %timeit reduce(myeinsum, Arr) %timeit fmuls.multimul(Arr.T).T Which outputs True True 10 loops, best of 3: 48.4 ms per loop 10 loops, best of 3: 48.8 ms per loop 100 loops, best of 3: 5.82 ms per loop So that's a factor 8 speedup. The reason for all the transposes is that `f2py` implicitly transposes all the arrays, and we need to transpose them manually to tell it that our fortran code expects things to be transposed. This avoids a copy operation. The cost is that each of our 2x2 matrices are transposed, so to avoid performing the wrong operation we have to loop in reverse. Greater speedups than 8 should be possible - I didn't spend any time trying to optimize this.
Reading files using raw_input or argv Question: I am [Learning Python the Hard Way](http://learnpythonthehardway.org/book/ex15.html). How could I edit this code to work with a file named `'bobsDetails.txt'`? from sys import argv script, filename = argv txt = open(filename) print "Here's your file %r:" % filename print txt.read() print "Type the filename again:" file_again = raw_input("> ") txt_again = open(file_again) print txt_again.read() Answer: You want to look at [fileinput](https://docs.python.org/2/library/fileinput.html) Suppose I have this file: $ cat bobsDetail.txt File 'Bob's Detail' I can write a simple loop that will either 1) process stdin or 2) open the file name and process the contents: #!/usr/bin/python import fileinput for line in fileinput.input(): print line Now make that executable (Unix): $ chmod +x fi.py Then you can process that any way it is presented: $ ./fi.py bobsDetail.txt File 'Bob's Detail' Or, $ cat bobsDetail.txt | ./fi.py File 'Bob's Detail' Then you can identify stdin vs file: for line in fileinput.input(): if fileinput.isfirstline(): if fileinput.isstdin(): print 'stdin' else: print fileinput.filename() print line From a file $ ./fi.py bobsDetail.txt bobsDetail.txt File 'Bob's Detail' From stdin: $ cat bobsDetail.txt | ./fi.py stdin File 'Bob's Detail'
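A side note on the script as posted: it already takes the filename from argv, so no code change is strictly required - it can presumably be run (and fed the same name at the prompt) like this:

    $ python test_script.py bobsDetails.txt
    Here's your file 'bobsDetails.txt':
    ...
    Type the filename again:
    > bobsDetails.txt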
Change Position MouseListener in Python 3.4.1 Question: Simple question. Can anyone translate this pseudocode into Python code? import tkinter object circle if circle positions contains event.x and event.y: change position Answer: You should read tutorials on the canvas widget if you have not already. I suspect event.x and event.y are screen coordinates. The canvas widget has canvasx and canvasy methods to convert screen coordinates to canvas coordinates. It also has find_closest to find the canvas item closest to a point.
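A minimal sketch of that pseudocode (the sizes, color, and the (20, 10) offset are arbitrary choices): binding the handler to the circle item with tag_bind means it only fires for clicks on the circle, which replaces the manual contains-check; find_closest and canvasx/canvasy remain useful if you bind on the whole canvas instead.

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=300)
    canvas.pack()

    # A "circle" is an oval item with a square bounding box
    circle = canvas.create_oval(50, 50, 100, 100, fill="red")

    def change_position(event):
        # Only called for clicks on the circle, thanks to tag_bind
        canvas.move(circle, 20, 10)  # shift the item by (dx, dy)

    canvas.tag_bind(circle, "<Button-1>", change_position)
    root.mainloop()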
Python openpyxl load_workbook Errors: TypeError (NoneType not Iterable) and ValueError (Max. Value is 180) Question: So I had Python 3.4.1 installed on a Windows XP laptop. I got the openpyxl package (don't remember now which version, I think 2.1) and I was chugging along fine, writing custom code to modify Excel documents as needed at my workplace. Then I formatted my laptop (for work-related reasons) and installed Windows 7. I got hold of Python 3.4.1 again. I pip installed openpyxl ("pip install openpyxl" in my command prompt) - definitely version 2.1 this time. I then tried opening some of my earlier workbooks. **This was in the IDLE GUI interface - not in a script or anything.** I just typed (after properly importing openpyxl->load_workbook): wb=load_workbook('Filename.xlsx') And now I get errors. This file was created by Excel 2007 (created in Windows XP, prior to my formatting my laptop and installing Windows 7), which I was previously able to open just fine with my openpyxl package on my earlier Windows XP configuration. I also tried reopening the Excel file(s) in MS Excel (Windows 7) and resaving, before trying to open with openpyxl. Both my previous Windows XP and my new Windows 7 are 32 bit (no 64 bit anywhere). The errors I get (depends on which file I try to open) are: ERROR FOR FILE No. 1: Traceback (most recent call last): File "<pyshell#5>", line 1, in <module> wb=load_workbook('Filename.xlsx') File "C:\Python34\lib\site-packages\openpyxl\reader\excel.py", line 151, in load_workbook _load_workbook(wb, archive, filename, read_only, keep_vba) File "C:\Python34\lib\site-packages\openpyxl\reader\excel.py", line 244, in _load_workbook wb._external_links = list(detect_external_links(rels, archive)) File "C:\Python34\lib\site-packages\openpyxl\workbook\names\external.py", line 100, in detect_external_links Book.links = list(parse_ranges(range_xml)) File "C:\Python34\lib\site-packages\openpyxl\workbook\names\external.py", line 85, in parse_ranges for n in safe_iterator(names, '{%s}definedName' % SHEET_MAIN_NS): TypeError: 'NoneType' object is not iterable ERROR FOR FILE No. 2: Traceback (most recent call last): File "<pyshell#7>", line 1, in <module> wb=load_workbook('Filename.xlsx') File "C:\Python34\lib\site-packages\openpyxl\reader\excel.py", line 151, in load_workbook _load_workbook(wb, archive, filename, read_only, keep_vba) File "C:\Python34\lib\site-packages\openpyxl\reader\excel.py", line 205, in _load_workbook style_table, color_index, cond_styles = read_style_table(archive.read(ARC_STYLE)) File "C:\Python34\lib\site-packages\openpyxl\reader\style.py", line 215, in read_style_table p.parse() File "C:\Python34\lib\site-packages\openpyxl\reader\style.py", line 44, in parse self.parse_cell_xfs() File "C:\Python34\lib\site-packages\openpyxl\reader\style.py", line 191, in parse_cell_xfs _style['alignment'] = Alignment(**alignment) File "C:\Python34\lib\site-packages\openpyxl\styles\alignment.py", line 54, in __init__ self.textRotation = textRotation File "C:\Python34\lib\site-packages\openpyxl\styles\hashable.py", line 54, in __setattr__ return object.__setattr__(self, *args, **kwargs) File "C:\Python34\lib\site-packages\openpyxl\descriptors\__init__.py", line 89, in __set__ super(Min, self).__set__(instance, value) File "C:\Python34\lib\site-packages\openpyxl\descriptors\__init__.py", line 68, in __set__ raise ValueError('Max value is {0}'.format(self.max)) ValueError: Max value is 180 **For the second case, I went to the __init__.py file and added a line to print the value generated. It turned out to be 255, which is > 180 (hence the error). I have no clue what this value represents - number of unique styles in the document or something?** Are there any dependencies for openpyxl? I have Excel properly installed (in Windows 7 now), with Service Pack 1. I have also tried uninstalling Python 3.4.1 and openpyxl and reinstalling, three or four times. What could be the problem here? Thanks in advance for any answers. Answer: For the second, you have self.textRotation set to 255. That's where the max = 180 (degrees) comes from. In alignment.py, the min/max for textRotation is 0-180. This restriction was added in 2.1. (I find this restriction dubious, as there are 360 degrees.) EDIT: In a more general sense, you're just trying to read an excel file. So, these are likely bugs.
Replace lines in a protobuf file using python Question: I have an existing proto file that I would like to change. What I would like to do is read the file in, change a value or two, and then write it back out to disk as a .proto file, to be read by another program. According to [this documentation](https://developers.google.com/protocol-buffers/docs/reference/python/index), I should be able to write something like this: from google.protobuf import message msg = message.Message() msg.ParseFromString(infile.read()) ... (make changes) outfile.write(msg.SerializeToString()) But it just gives me a `NotImplementedError`. Where is the library I need to interface with protobuf text files? [This](https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.text_format) seems to work for C++, why hasn't it been implemented for python? (No, the functions in google.protobuf.text_format aren't implemented either). Answer: You seem to be confusing several different concepts here. `message.Message` is an abstract base class. You need to construct an instance of the specific derived class you're interested in. `.proto` files contain type definitions, not message data. You must compile your `.proto` file using `protoc`, the Protocol Compiler. It will generate Python code which you can then import into your program in order to construct the specific type you want. Please see the [Protobuf tutorial](https://developers.google.com/protocol-buffers/docs/pythontutorial) for details. Use `ParseFromString()` to parse _binary_ protobuf data from a file, and use `SerializeToString()` to convert the message back into _binary_ data. You should almost always use binary format for protobuf data. The `TextFormat` class you reference implements an alternative encoding of protobufs that is human-readable. This format is _not_ recommended for most use cases. If you want a human-readable format, you may be better off with JSON, but it will always be much slower and less-compact than binary encodings. `TextFormat` is occasionally useful for debugging (printing out a message) and sometimes for writing message data as source code. `TextFormat` is available in the Python library as the [`text_format` module](https://developers.google.com/protocol-buffers/docs/reference/python/google.protobuf.text_format-module).
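A sketch of the round-trip described above, assuming a hypothetical person.proto that defines a Person message and has been compiled with protoc --python_out=. person.proto (the file and field names here are illustrative, not from the question):

    import person_pb2  # module generated by protoc from person.proto (hypothetical)

    msg = person_pb2.Person()
    with open('person.bin', 'rb') as f:
        msg.ParseFromString(f.read())   # parse binary protobuf data

    msg.name = 'New name'               # make changes

    with open('person.bin', 'wb') as f:
        f.write(msg.SerializeToString())  # write binary data back out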
A date doesn't match the format
Question: Why doesn't a date match the format?

    >>> import datetime
    >>> dt1 = "9/1/2014 0:00"
    >>> datetime.datetime.strptime(dt1, "%m/%d/%y %H:%M")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_strptime.py", line 325, in _strptime
        (data_string, format))
    ValueError: time data '9/1/2014 0:00' does not match format '%m/%d/%y %H:%M'

And this one either:

    >>> datetime.datetime.strptime(dt1, "%d/%m/%y %H:%M")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/_strptime.py", line 325, in _strptime
        (data_string, format))
    ValueError: time data '9/1/2014 0:00' does not match format '%d/%m/%y %H:%M'

Answer: Let's open <http://strftime.org/> and check:

    %m  Month as a zero-padded decimal number.
    %d  Day of the month as a zero-padded decimal number.
    %y  Year without century as a zero-padded decimal number.
    %H  Hour (24-hour clock) as a zero-padded decimal number.

The directive that actually makes the parse fail is `%y`: it matches a two-digit year such as `14`, while your string contains the four-digit `2014`, so you need `%Y` (year with century) instead. The missing zero-padding is not the problem - when parsing, `strptime` happily accepts unpadded values like `9`, `1` and `0` for `%m`, `%d` and `%H`; the padding only matters when formatting with `strftime`.
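Indeed, the only change needed is `%Y`, and the unpadded string then parses as-is:

    >>> import datetime
    >>> datetime.datetime.strptime("9/1/2014 0:00", "%m/%d/%Y %H:%M")
    datetime.datetime(2014, 9, 1, 0, 0)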
Changing a variable created outside a function within a function using Python
Question: I am trying to change the value of player_score, cpu_score, and ties which were created outside the function determineWinner but I get an error such as this: UnboundLocalError: local variable 'cpu_score' referenced before assignment. I have deleted the other assignments (player_score=player_score+1 and ties=ties+1) and added the print ("Player Score",player_score) to test the program. What I don't understand is why print ("Player Score",player_score) outputs the value assigned outside of the function but cpu_score=cpu_score+1 yields the "referenced before assignment" error. Also showRolls does what it is supposed to. It seems that the functions can read the values of the variables assigned outside of the functions but cannot change the values. Why is that?

    import random

    def showRolls():
        print("Player 1 roll:",player_roll)
        print("CPU roll:",cpu_roll)

    def determineWinner():
        if player_roll > cpu_roll:
            print("Player wins!")
            print ("Player Score",player_score)
        elif cpu_roll > player_roll:
            print("CPU wins!")
            cpu_score=cpu_score+1
        else:
            print("It's a tie.")

    #-----------------------MAIN PROGRAM------------------------------------
    player_score=0
    cpu_score=0
    ties=0

    player_roll = random.randint(1,6)
    cpu_roll = random.randint(1,6)

    showRolls()
    determineWinner()

Answer: Within your function, declare any variables that are external to your function, and that you want to _change_, as `global`. Reading a global variable does not require the declaration. There is some explanation [here](https://docs.python.org/2/faq/programming.html?highlight=global%20variable#what-are-the-rules-for-local-and-global-variables-in-python).

So, your function should be:

    def determineWinner():
        global cpu_score
        if player_roll > cpu_roll:
            print("Player wins!")
            print ("Player Score",player_score)
        elif cpu_roll > player_roll:
            print("CPU wins!")
            cpu_score=cpu_score+1
        else:
            print("It's a tie.")
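A pattern that avoids `global` altogether is to pass the scores in and return the updated values; a sketch of the same game logic:

    def determineWinner(player_roll, cpu_roll, player_score, cpu_score, ties):
        # Return updated scores instead of mutating globals.
        if player_roll > cpu_roll:
            print("Player wins!")
            player_score = player_score + 1
        elif cpu_roll > player_roll:
            print("CPU wins!")
            cpu_score = cpu_score + 1
        else:
            print("It's a tie.")
            ties = ties + 1
        return player_score, cpu_score, ties

    player_score, cpu_score, ties = determineWinner(
        player_roll, cpu_roll, player_score, cpu_score, ties)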
Perl library path/s
Question: I need to find the location of my Perl library/libraries. How can I do this? Something similar to what this gives you for python...

    python -c "import sys; print sys.path"

Thanks

Answer: The physical location of loaded modules are in the [`%INC`](http://perldoc.perl.org/perlvar.html#%25INC) hash:

> * `%INC`
>
> The hash `%INC` contains entries for each filename included via the
> [`do`](http://perldoc.perl.org/functions/do.html),
> [`require`](http://perldoc.perl.org/functions/require.html), or
> [`use`](http://perldoc.perl.org/functions/use.html) operators. The key is
> the filename you specified (with module names converted to pathnames), and
> the value is the location of the file found. The `require` operator uses
> this hash to determine whether a particular file has already been included.
>
> If the file was loaded via a hook (e.g. a subroutine reference, see
> [require](http://perldoc.perl.org/functions/require.html) for a description
> of these hooks), this hook is by default inserted into `%INC` in place of a
> filename. Note, however, that the hook may have set the `%INC` entry by
> itself to provide some more specific info.

Usage demonstrated for a random module on my system:

    $ perl -MFile::Slurp -e 'print $INC{"File/Slurp.pm"}'
    /Users/miller/perl5/perlbrew/perls/perl-5.20.0/lib/site_perl/5.20.0/File/Slurp.pm
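If what you are after is the search path itself - the closer analogue of Python's `sys.path` - that is the `@INC` array, which you can print the same way (it is also dumped at the end of `perl -V`):

    $ perl -e 'print "$_\n" for @INC'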
Unable to pass an lxml etree object to a separate process
Question: I'm working on a project to parse multiple xml files concurrently in python using lxml. When I initialize the process I want my main class to do some work on the XML before it passes the etree object to the process, but I am finding that when the etree object arrives in the new process the class survives but the XML is gone from within the object and getroot() returns None. I know that I can only pass picklable data using the queue, but is this also the case with what I pass to the process inside the 'args' field? Here's my code:

    import multiprocessing, multiprocessing.pool, time
    from lxml import etree

    def compute(tree):
        print("Start Process")
        print(type(tree)) # Returns <class 'lxml.etree._ElementTree'>
        print(id(tree)) # Returns new ID 44637320 as expected
        print(tree.getroot()) # Returns None

    def pool_init(queue):
        # see http://stackoverflow.com/a/3843313/852994
        compute.queue = queue

    class Main():
        def __init__(self):
            pass

        def main(self):
            tree = etree.parse('test.xml')
            print(id(tree)) # Returns object ID 43998536
            print(tree.getroot()) #Returns <Element SymCLI_ML at 0x29f5dc8>
            self.queue = multiprocessing.Queue()
            self.pool = multiprocessing.Pool(processes=1, initializer=pool_init, initargs=(self.queue,))
            self.pool.apply_async(func=compute, args=(tree,))
            time.sleep(10)

    if __name__ == '__main__':
        Main().main()

Any and all help much appreciated.

**UPDATE/ANSWER**

Based on the answer in the next post down I've modified it a bit and managed to get it working with a much lower memory footprint without using String IO. The etree.tostring method returns a byte array, which can be pickled, then to unpickle it the byte array can be parsed by etree.

    import multiprocessing, multiprocessing.pool, time, copyreg
    from io import BytesIO  # needed by the unpickler below
    from lxml import etree

    def compute(tree):
        print("Start Process")
        print(type(tree)) # Returns <class 'lxml.etree._ElementTree'>
        print(tree.getroot()) # Returns <Element SymCLI_ML at 0x29f5dc8>. Success!

    def pool_init(queue):
        # see http://stackoverflow.com/a/3843313/852994
        compute.queue = queue

    def elementtree_unpickler(data):
        return etree.parse(BytesIO(data))

    def elementtree_pickler(tree):
        return elementtree_unpickler, (etree.tostring(tree),)

    copyreg.pickle(etree._ElementTree, elementtree_pickler, elementtree_unpickler)

    class Main():
        def __init__(self):
            pass

        def main(self):
            tree = etree.parse('test.xml')
            print(tree.getroot()) #Returns <Element SymCLI_ML at 0x29f5dc8>
            self.queue = multiprocessing.Queue()
            self.pool = multiprocessing.Pool(processes=1, initializer=pool_init, initargs=(self.queue,))
            self.pool.apply_async(func=compute, args=(tree,))
            time.sleep(10)

    if __name__ == '__main__':
        Main().main()

**UPDATE 2**

After doing some bench-marking with memory I found that passing large objects causes the objects to not be able to be cleared up by garbage collection on the main process. This probably isn't an issue at small scale, but my etree objects were on the order of multiple hundreds of MB in memory. As soon as an async task has been called with an XML object in the statement, that object cannot be cleared from memory if it is deleted from the main process, even by manually invoking garbage collection. So as a result I've reverted to closing the XML in the main process and passing the file name to the sub-process.

Answer: Use the following code to register simple picklers/unpicklers for lxml Element/ElementTree objects. I used that in the past with lxml and zmq.
    import copy_reg

    try:
        from cStringIO import StringIO
    except ImportError:
        from StringIO import StringIO

    from lxml import etree

    def element_unpickler(data):
        return etree.fromstring(data)

    def element_pickler(element):
        data = etree.tostring(element)
        return element_unpickler, (data,)

    copy_reg.pickle(etree._Element, element_pickler, element_unpickler)

    def elementtree_unpickler(data):
        data = StringIO(data)
        return etree.parse(data)

    def elementtree_pickler(tree):
        data = StringIO()
        tree.write(data)
        return elementtree_unpickler, (data.getvalue(),)

    copy_reg.pickle(etree._ElementTree, elementtree_pickler, elementtree_unpickler)
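A quick round-trip check that the registration works (run in the same Python 2 process after the `copy_reg` calls above; `test.xml` is any well-formed XML file):

    import pickle
    from lxml import etree

    tree = etree.parse('test.xml')
    data = pickle.dumps(tree)    # goes through elementtree_pickler
    tree2 = pickle.loads(data)   # re-parsed by elementtree_unpickler
    print tree2.getroot().tag    # the tree survives the round trip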
Combining a parsed time with today's date in Python
Question: This should be easy, but as ever Python's wildly overcomplicated datetime mess is making simple things complicated...

So I've got a time string in HH:MM format (eg. '09:30'), which I'd like to turn into a datetime with today's date. Unfortunately the default date is Jan 1 1900:

    >>> datetime.datetime.strptime(time_str, "%H:%M")
    datetime.datetime(1900, 1, 1, 9, 50)

[datetime.combine](https://docs.python.org/2/library/datetime.html#datetime.datetime.combine) looks like it's meant exactly for this, but I'll be darned if I can figure out how to get the time parsed so it'll accept it:

    now = datetime.datetime.now()

    >>> datetime.datetime.combine(now, time.strptime('09:30', '%H:%M'))
    TypeError: combine() argument 2 must be datetime.time, not time.struct_time

    >>> datetime.datetime.combine(now, datetime.datetime.strptime('09:30', '%H:%M'))
    TypeError: combine() argument 2 must be datetime.time, not datetime.datetime

    >>> datetime.datetime.combine(now, datetime.time.strptime('09:30', '%H:%M'))
    AttributeError: type object 'datetime.time' has no attribute 'strptime'

This monstrosity works...

    >>> datetime.datetime.combine(now, datetime.time(*(time.strptime('09:30', '%H:%M')[3:6])))
    datetime.datetime(2014, 9, 23, 9, 30)

...but there **must** be a better way to do that...!?

Answer: The [function signature](https://docs.python.org/3/library/datetime.html#datetime.datetime.combine) says:

    datetime.combine(date, time)

so pass a `datetime.date` object as the first argument, and a `datetime.time` object as the second argument:

    >>> import datetime as dt
    >>> today = dt.date.today()
    >>> time = dt.datetime.strptime('09:30', '%H:%M').time()
    >>> dt.datetime.combine(today, time)
    datetime.datetime(2014, 9, 23, 9, 30)
Python execute remote application using server resource
Question: I am trying to run a matlab executable application from Python on a remote server. I used following code:

    os.system("\\Server-01\\D$\\matlab_t.exe 7.25 16")
    # 7.25 and 16 are input arguments of matlab_t.exe

The above code is running on my local machine. I noticed that it is using resources (CPU and memory) of my local machine, while I am trying to use resources on the remote server. May I know how I can execute it using the server's resources? Thanks.

Answer: That command will run on your computer. The path may point to a remote server, but no one has told the remote server that it should execute code - it only has to serve the `matlab_t.exe` file.

You have to use a mechanism to access the remote server. Normally ssh is used for this purpose, but the ssh daemon has to be running on the remote server and you also need to have access (ask your admin about that). Then you can use python like this:

    import paramiko
    ssh = paramiko.SSHClient()
    # Accept the server's host key automatically (or load known_hosts first
    # with ssh.load_system_host_keys()); otherwise connect() refuses
    # hosts it doesn't recognize.
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(server, username=username, password=password)
    ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(cmd_to_execute_on_remote_server)
Twisted Python, TLS and client/server certificate authentication error
Question: I've been learning Twisted as best I can, but together with limited TLS knowledge it's proving challenging. I'm trying to write (ultimately) an SMTP server that can send and receive messages both as plain text, or via TLS depending on requirements of a specific message to be sent / received.

My sample server code (thus far, just handling the TLS connection, no SMTP bits yet!) is borrowed from <http://twistedmatrix.com/documents/11.0.0/core/howto/ssl.html#auto5> and looks like:

    from OpenSSL import SSL
    from twisted.internet import reactor, ssl
    from twisted.internet.protocol import ServerFactory
    from twisted.protocols.basic import LineReceiver

    class TLSServer(LineReceiver):
        def lineReceived(self, line):
            print "received: " + line
            if line == "STARTTLS":
                print "-- Switching to TLS"
                self.sendLine('READY')
                ctx = ServerTLSContext(
                    privateKeyFileName='SSCerts/serverkey.pem',
                    certificateFileName='SSCerts/servercert.pem',
                )
                self.transport.startTLS(ctx, self.factory)

    class ServerTLSContext(ssl.DefaultOpenSSLContextFactory):
        def __init__(self, *args, **kw):
            kw['sslmethod'] = SSL.TLSv1_METHOD
            ssl.DefaultOpenSSLContextFactory.__init__(self, *args, **kw)

    if __name__ == '__main__':
        factory = ServerFactory()
        factory.protocol = TLSServer
        reactor.listenTCP(8000, factory)
        reactor.run()

while the client is borrowed from <http://twistedmatrix.com/documents/14.0.0/core/howto/ssl.html#starttls-client> and looks like:

    from twisted.internet import ssl, endpoints, task, protocol, defer
    from twisted.protocols.basic import LineReceiver
    from twisted.python.modules import getModule

    class StartTLSClient(LineReceiver):
        def connectionMade(self):
            self.sendLine("plain text")
            self.sendLine("STARTTLS")

        def lineReceived(self, line):
            print("received: " + line)
            if line == "READY":
                self.transport.startTLS(self.factory.options)
                self.sendLine("secure text")
                self.transport.loseConnection()

    @defer.inlineCallbacks
    def main(reactor):
        factory = protocol.Factory.forProtocol(StartTLSClient)
        certData = getModule(__name__).filePath.sibling('servercert.pem').getContent()
        factory.options = ssl.optionsForClientTLS(
            u"example.com", ssl.PrivateCertificate.loadPEM(certData)
        )
        endpoint = endpoints.HostnameEndpoint(reactor, 'localhost', 8000)
        startTLSClient = yield endpoint.connect(factory)

        done = defer.Deferred()
        startTLSClient.connectionLost = lambda reason: done.callback(None)
        yield done

    if __name__ == "__main__":
        import starttls_client
        task.react(starttls_client.main)

But when I have the server listening, and I run the client I get:

    /usr/lib64/python2.6/site-packages/twisted/internet/endpoints.py:30: DeprecationWarning: twisted.internet.interfaces.IStreamClientEndpointStringParser was deprecated in Twisted 14.0.0: This interface has been superseded by IStreamClientEndpointStringParserWithReactor.
      from twisted.internet.interfaces import (

    main function encountered error
    Traceback (most recent call last):
      File "starttls_client.py", line 33, in <module>
        task.react(starttls_client.main)
      File "/usr/lib64/python2.6/site-packages/twisted/internet/task.py", line 875, in react
        finished = main(_reactor, *argv)
      File "/usr/lib64/python2.6/site-packages/twisted/internet/defer.py", line 1237, in unwindGenerator
        return _inlineCallbacks(None, gen, Deferred())
    --- <exception caught here> ---
      File "/usr/lib64/python2.6/site-packages/twisted/internet/defer.py", line 1099, in _inlineCallbacks
        result = g.send(result)
      File "/root/Robot/Twisted/starttls_client.py", line 22, in main
        u"example.com", ssl.PrivateCertificate.loadPEM(certData)
      File "/usr/lib64/python2.6/site-packages/twisted/internet/_sslverify.py", line 619, in loadPEM
        return Class.load(data, KeyPair.load(data, crypto.FILETYPE_PEM),
      File "/usr/lib64/python2.6/site-packages/twisted/internet/_sslverify.py", line 725, in load
        return Class(crypto.load_privatekey(format, data))
      File "build/bdist.linux-x86_64/egg/OpenSSL/crypto.py", line 2010, in load_privatekey
      File "build/bdist.linux-x86_64/egg/OpenSSL/_util.py", line 22, in exception_from_error_queue
    OpenSSL.crypto.Error: []

The strange thing is - I know the certificate and key are fine - I have other "dummy" code (not pasted here, I figured this post is long enough!!) that uses them for validation just fine. Can anyone explain where the code above falls over? I'm at a loss... Thanks :)

Answer: So it looks like there is a bug in the sample code found at: <http://twistedmatrix.com/documents/14.0.0/core/howto/ssl.html>

Looking at the example "echoclient_ssl.py" there is the line:

    authority = ssl.Certificate.loadPEM(certData)

However, the equivalent bit of code in the "starttls_client.py" example code is:

    ssl.PrivateCertificate.loadPEM(certData)

PrivateCertificate on the client side? Even with my limited understanding of TLS, this seems wrong. Indeed, I modified my code to remove the "Private"... and the error above disappears!

As I say, my knowledge and understanding is growing here - but this certainly seems to be the issue / solution to my question!
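For clarity, the corrected client-side lines: the server certificate is loaded as a plain `Certificate`, since the client has (and needs) no private key for it:

    certData = getModule(__name__).filePath.sibling('servercert.pem').getContent()
    authority = ssl.Certificate.loadPEM(certData)  # not ssl.PrivateCertificate
    factory.options = ssl.optionsForClientTLS(u"example.com", authority)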
Calculating sales taxes using functions in python
Question: I can't get this program in Python 3.3.5 to run, and I'm not sure what I did wrong. The program is to ask the user to enter the total sales for the month, then calculate and display the following: the amount of county sales tax (the county sales tax rate is 2.5 percent), the amount of state sales tax (the state sales tax rate is .05), and the total sales tax (county plus state). I've copied and pasted what I've done so far.

    # Show individually the 5 purchases amounts, compute the state and county sales
    # tax on those combined purchases and calculate the total of taxes
    # and purchases

    county_sales_tax = .025
    state_sales_tax = .05

    # Enter the purchase price
    def main():
        purchase = float(input('Enter the price of the purchase: '))
        calculate_totals(purchase)

    # Calculate the purchase price with the county and state sales tax
    def calculate_totals(purchase):
        county_sales_tax = purchase * .025
        state_sales_tax = purchase * .05
        total_sales_tax = county_sales_tax + state_sales_tax
        total_price = purchase + state_sales_tax + county_sales_tax

    # display the amount of the purchases, county and state sales
    # taxes, combined amount of the both taxes and total sale with taxes
    def display_totals(purchase, state_sales_tax, county_sales_tax, total_taxes, total_price):
        print('The purchase price is $, ')
        print('The county_sales_tax is $', \
            format (county_sales_tax, '.2f'))
        print('The state_sales_tax is $', \
            format (state_sales_tax, '.2f'))
        print('The total_sales_tax is $:, ')
        print('The total_price is $:, ')

    # Call the main function
    main()

Answer: I fixed up a few things, I'll try to walk through where some of the mistakes were.

    def calculateStateTax(price):
        state_sales_tax = .05
        return price * state_sales_tax

    def calculateCountyTax(price):
        county_sales_tax = .025
        return price * county_sales_tax

    def displayTotals(price):
        print('Original price', price)
        state_tax = calculateStateTax(price)
        print('State tax', state_tax)
        county_tax = calculateCountyTax(price)
        print('County tax', county_tax)
        print('Total', price + state_tax + county_tax)

    def main():
        price = float(input('Enter the price of the purchase: '))
        displayTotals(price)

    main()  # don't forget to actually call main()

1. Indentation is super important in Python! The indentation is used to define the scope of things, note that all the code that is within a defined function is tabbed (or 4 spaces) over
2. In general, you can use functions' input parameters and return values to pass things around, instead of declaring global variables.
3. Note that the `displayTotals` function calls the other functions, and gets values out of them via `return`. This means `displayTotals` doesn't need all those parameters to be passed in.
4. It is generally better to only define constants within the scope that they need to exist (e.g. `state_sales_tax` doesn't need to be global).
python audio player filtered without jitter
Question: For a project I need to

* read WAV file
* filter
* play WAV file

The code below is "working" except that the sound crackles, and that is because I can't guarantee a continuous data stream to the audio output. Filtering takes some time and that's where the sound freezes for a very short time. In the future I'd like to do some extra calculations on the raw audio data. I'd like to know which method is the best to keep the sound smooth.

    import pyaudio
    import wave
    from scipy import signal
    from struct import *

    chunk = 1024

    f = wave.open("sample.wav","rb")
    p = pyaudio.PyAudio()
    stream = p.open(format = p.get_format_from_width(f.getsampwidth()),
                    channels = f.getnchannels(),
                    rate = f.getframerate(),
                    output = True)

    data = f.readframes(chunk)

    b, a = signal.butter(2, 0.01)

    tmp = len(data)/2
    s = '<'
    for i in range(0,tmp):
        s = s + 'h'

    while data != '':
        sig = signal.filtfilt(b, a, unpack(s,data))
        output_signal = pack(s,*sig)
        stream.write(output_signal)
        data = f.readframes(chunk)

    stream.stop_stream()
    stream.close()
    p.terminate()

Thanks in advance!

Answer: You can try splitting the producer of the data and the consumer (which is pyaudio) in two programs, with a regular Unix pipe connecting them. The Unix pipe plays the role of buffering. Example:

in producer.py:

    while data != '':
        sig = signal.filtfilt(b, a, unpack(s,data))
        output_signal = pack(s,*sig)
        sys.stdout.write(output_signal)   # <===
        data = f.readframes(chunk)

in consumer.py:

    while 1:
        data = sys.stdin.read(1024)
        if not data:
            break
        stream.write(data)

Command-line invocation:

    python producer.py | python consumer.py

If it works, then you can also run `python consumer.py` from inside the other program with `os.popen()`, for example.
How do I pull all image links from a Wikimedia page?
Question: I am trying to pull all links from a Wikimedia page for famous painters such as Caravaggio with the [Python Wikipedia module](https://pypi.python.org/pypi/wikipedia).

    import wikipedia
    page = wikipedia.page("caravaggio")
    links = page.links

However the `.links` method only returns titles of links, not the actual `href` or `src` that I can use to display the image on my page. Is it better to use `import beautifulsoup` for this?

Answer: Check this out:

    #!/usr/bin/python
    import wikipedia

    page = wikipedia.page("caravaggio")
    #links = page.links
    #for tuple_ in page:
    #    print tuple_
    #print dir(page)

    print page.content
    #print page.coordinates

    print 'page.html'
    print page.html
    print

    print 'page.images'
    print page.images
    print

    print 'page.links'
    print page.links
    print

    print 'page.original_title'
    print page.original_title
    print

    print 'page.pageid'
    print page.pageid
    print

    print 'page.parent_id'
    print page.parent_id
    print

    print 'page.references'
    print page.references
    print

    print 'page.revision_id'
    print page.revision_id
    print

    print 'page.section'
    print page.section
    print

    print 'page.sections'
    print page.sections
    print

    print 'page.summary'
    print page.summary
    print

    print 'page.title'
    print page.title
    print

    print 'page.url'
    print page.url
    print

    #print links
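Of the attributes printed above, `page.images` is the one that directly answers the question: it is a list of full image URLs, so there is no need for a BeautifulSoup pass.

    import wikipedia

    page = wikipedia.page("caravaggio")
    for url in page.images:  # full URLs, usable directly as an <img> src
        print url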
Python REST mysql db.commit() error
Question: I am having the same error in python no matter where I put the db.commit() statement. Here is my code:

    from bottle import route, run
    import json
    import collections
    import MySQLdb as db

    @route('/register/<COD>/<NOMBRE>/<APELLIDO>/<DIRECCION>/<TEL>/<COD_FAC>', method='PUT')
    def registrar(COD,NOMBRE,APELLIDO,DIRECCION,TEL,COD_FAC):
        c=db.connect('10.100.70.136','koala','toor','lab2',use_unicode=True)
        cur=c.cursor()
        sql1='SELECT * FROM alumnos WHERE codigo="'+COD+'";'
        cur.execute(sql1)
        alumnos=cur.fetchall();
        i=0
        for alumno in alumnos:
            i+=1
        print(i)
        if i==0:
            operationResult=1
            operationMessage=""
            cur2=c.cursor()
            sql2='INSERT INTO alumnos (codigo,nombre,apellido,direccion,telefono,codigoFacultad) VALUES ("'+COD+'","'+NOMBRE+'","'+APELLIDO+'","'+DIRECCION+'","'+TEL+'","'+COD_FAC+'");'
            cur2.execute(sql2)
        else:
            operationResult=2
            operationMessage="El alumno con codigo "+COD+" ya se encuentra registrado"
        db.commit()
        db.close()
        results = []
        d=collections.OrderedDict()
        d['operationResult'] = operationResult
        d['operationMessage'] = operationMessage
        results.append(d)
        j = json.dumps(results)
        return j

    run(host='localhost',port=8080,debug=True)

The error that I get is this:

    AttributeError("'module' object has no attribute 'commit'",)

And the description that I get is the following:

    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/dist-packages/bottle-0.12.7-py2.7.egg/bottle.py", line 862, in _handle
        return route.call(**args)
      File "/usr/local/lib/python2.7/dist-packages/bottle-0.12.7-py2.7.egg/bottle.py", line 1729, in wrapper
        rv = callback(*a, **ka)
      File "tarea.preg4.py", line 33, in registrar
        db.commit()
    AttributeError: 'module' object has no attribute 'commit'

Answer: You want to call commit on the connection object - which is `c` in your case. (so make it `c.commit()` instead of `db.commit()`)

You can find the python dbapi connection methods [here](http://legacy.python.org/dev/peps/pep-0249/#connection-methods).
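Putting it together - and, as a side benefit, letting the driver do the quoting via parameter placeholders instead of string concatenation - a sketch of the same handler logic (names as in the question):

    c = db.connect('10.100.70.136', 'koala', 'toor', 'lab2', use_unicode=True)
    cur = c.cursor()
    # %s placeholders let MySQLdb quote the values safely.
    cur.execute('SELECT * FROM alumnos WHERE codigo=%s', (COD,))
    alumnos = cur.fetchall()
    # ... decide operationResult / run the INSERT the same way ...
    c.commit()  # commit on the connection object
    c.close()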
Django admin model add_view: how to remove "save and add another" buttons?
Question: I was able to remove the "save and add another" and "save and continue editing" buttons by implementing the following code:

    # At the start of my admin.py file I have:
    from django.contrib.admin.templatetags.admin_modify import *
    from django.contrib.admin.templatetags.admin_modify import submit_row as original_submit_row

    @register.inclusion_tag('admin/submit_line.html', takes_context=True)
    def submit_row(context):
        ctx = original_submit_row(context)
        ctx.update({
            'show_save_and_add_another': context.get('show_save_and_add_another', ctx['show_save_and_add_another']),
            'show_save_and_continue': context.get('show_save_and_continue', ctx['show_save_and_continue'])
        })
        return ctx

    class MyModelAdmin(GuardedModelAdmin):
        # Then inside MyModelAdmin I have this:
        def change_view(self, request, object_id, form_url='', extra_context=None):
            extra_context = extra_context or {}
            extra_context['show_save_and_add_another'] = False
            extra_context['show_save_and_continue'] = False
            return super(MyModelAdmin, self).change_view(request, object_id, form_url, extra_context=extra_context)

This works great when I'm using my change_view, but when I'm adding a new instance of the model, the buttons reappear. I tried the following:

    def add_view(self, request, form_url='', extra_context=None):
        extra_context = extra_context or {}
        extra_context['show_save_and_add_another'] = False
        extra_context['show_save_and_continue'] = False
        return super(MyModelAdmin, self).add_view(self, request, form_url='', extra_context=extra_context)

But it gives me a bizarre `AttributeError` -- here's the traceback:

    Traceback:
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/core/handlers/base.py" in get_response
      114. response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/contrib/admin/options.py" in wrapper
      432. return self.admin_site.admin_view(view)(*args, **kwargs)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/utils/decorators.py" in _wrapped_view
      99. response = view_func(request, *args, **kwargs)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
      52. response = view_func(request, *args, **kwargs)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/contrib/admin/sites.py" in inner
      198. return view(request, *args, **kwargs)
    File "/home/username/Development/MyProject/webapp/MyModel/admin.py" in add_view
      153. return super(MyModelAdmin, self).add_view(self, request, form_url='', extra_context=extra_context)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/utils/decorators.py" in _wrapper
      29. return bound_func(*args, **kwargs)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/utils/decorators.py" in _wrapped_view
      95. result = middleware.process_view(request, view_func, args, kwargs)
    File "/home/username/.virtualenvs/MyProject/lib/python3.3/site-packages/django/middleware/csrf.py" in process_view
      111. request.COOKIES[settings.CSRF_COOKIE_NAME])

    Exception Type: AttributeError at /admin/MyModel/ModelInstance/add/
    Exception Value: 'MyModelAdmin' object has no attribute 'COOKIES'

I'm using django-guardian and wondering if this is somehow causing my problem?
Does anyone know how to get rid of these annoying buttons from the submit_line part of the template when adding a new model instance?

Answer: If you want to hide these buttons purely for cosmetic purposes you can also use CSS. It might not be the most robust approach, since anyone can re-enable the buttons by editing the CSS in the browser inspector, but it is certainly simple and still granular enough to hide them only on certain model admins.

admin.py:

    class MyModelAdmin(admin.ModelAdmin):
        ....
        class Media:
            #js = ('' ) # Can include js if needed
            css = {'all': ('my_admin/css/my_model.css', )}

my_model.css is located in the static files folder in the path above.

my_model.css:

    /* Optionally make the continue and save button look like primary */
    input[name="_continue"]{
        border: 2px solid #5b80b2;
        background: #7CA0C7;
        color: white;
    }

    /* Hide the "Delete", "Add Another" and "Save" buttons, customize this to what you need */
    .deletelink, input[name="_addanother"], input[name="_save"]{
        display: none;
    }

The classes and names may change between django versions for these buttons. I am using Django 1.6.6 now and I don't think they have changed recently.

If you want this to be effective on your entire admin site, you can copy the admin/base_site.html default template into your static dir and overwrite the 'extrahead' block to include this style. See [base_site.html](https://github.com/django/django/blob/master/django/contrib/admin/templates/admin/base_site.html).

Hopefully the CSS approach helps :) It certainly will not cause any errors for you.
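As an aside on the original traceback: the `'MyModelAdmin' object has no attribute 'COOKIES'` error is most likely caused by passing `self` twice in the `super()` call - `super(MyModelAdmin, self).add_view(self, request, ...)` hands the admin instance to the CSRF middleware where it expects the request. Dropping the extra `self` should let the `extra_context` approach from the question work on the add view as well:

    def add_view(self, request, form_url='', extra_context=None):
        extra_context = extra_context or {}
        extra_context['show_save_and_add_another'] = False
        extra_context['show_save_and_continue'] = False
        # No leading `self` argument here -- super() already binds it.
        return super(MyModelAdmin, self).add_view(request, form_url, extra_context=extra_context)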
API 'QString' has already been set to version 1 on Eclipse
Question: I'm getting the following error **when starting a debug session** in Eclipse for my code, which uses Enthought Mayavi and PyQt as well. Here is the error log in the console.

> pydev debugger: starting (pid: 2208)
> Traceback (most recent call last):
>   File "D:\eclipse\plugins\org.python.pydev_3.7.1.201409021729\pysrc\pydevd.py", line 2090, in <module>
>     debugger.run(setup['file'], None, None)
>   File "D:\eclipse\plugins\org.python.pydev_3.7.1.201409021729\pysrc\pydevd.py", line 1547, in run
>     pydev_imports.execfile(file, globals, locals)  # execute the script
>   File "D:\src\Candls_PyQt\src\application.py", line 10, in <module>
>     sip.setapi("QString",2)
> ValueError: API 'QString' has already been set to version 1

Here is my code snippet.

    from traits.etsconfig.api import ETSConfig
    ETSConfig.toolkit = 'qt4'

    import sip
    sip.setapi("QString",2)
    sip.setapi("QVariant",2)

    from PyQt4 import QtCore, QtGui, uic

Answer: This was an issue introduced in the latest version of the debugger. The bug in PyDev is: <https://sw-brainwy.rhcloud.com/tracker/PyDev/452> (it was fixed already but it's still not in a released version).

A workaround for now would be manually applying the fix: <https://github.com/fabioz/Pydev/commit/af39f23bc884e9514aaaeede7b6e77e22b6823f6> in your local version of pydev_monkey_qt.py (inside eclipse/plugins/org.python.pydev/pysrc)
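Until the fix lands in a released PyDev, a workaround on the application side is to tolerate the API already having been claimed by the debugger (assuming your code can cope with whichever API version is active while debugging):

    import sip
    for name in ("QString", "QVariant"):
        try:
            sip.setapi(name, 2)
        except ValueError:
            pass  # already set, e.g. by the PyDev debugger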
Do calculation for array of values - Python
Question: I am trying to calculate and plot the amplitude `amp` of the solution of a time dependent `t` differential equation of motion (see `rhs6`) as a function of `wd` for multiple values of the force coefficient `f` found in `f_array`. So far I have got the code working for plotting `amp` against `wd` for a single value of `f`. The result is a resonance peak:

![Resonance peak](http://i.stack.imgur.com/EkhUN.png)

The code for plotting `amp` against `wd` for one value of `f` (which produced the above image) is given below.

    from pylab import *
    from scipy.integrate import odeint
    from numpy import *
    import math

    #Parameters.
    k = 2.0
    m = 1.0
    w0 = (k/m)**(1/2)
    alpha = 0.2
    l = alpha/(2*m)
    f = 1.0
    wd = w0 + 0.025
    beta = 0.2
    t_fixed = 2.0

    #Arrays.
    t = linspace(0.0, 400.0, 400.0)
    wd_array = linspace(w0-1.0, w0+1, 400.0)
    f_array = linspace(10.0, 100.0, 3.0)

    #Time step.
    init_x = 0.0
    init_v = 0.0
    dx = 15.0
    dv = 0.0
    init_cond = [init_x,init_v]
    init_cond2 = [init_x + dx,init_v + dv]

    def rhs6(c,t,wd):
        c0dot = c[1]
        c1dot = -2*l*c[1] - w0*w0*c[0] + (f/m)*cos((wd)*t)
        return [c0dot, c1dot]

    amp_array=[]

    for wd in wd_array:
        res = odeint(rhs6, init_cond, t, args=(wd,))
        amp = max(res[:,0])
        amp_array.append(amp)

    plot(wd_array, amp_array)
    xlabel('Driving frequency, wd')
    ylabel('Amplitude, amp')
    show()

Now I wish to find `amp` against `wd` for multiple values of `f`. I have tried to do this by basically making a `for` loop statement over the `f_array`. However, my approach does not work and I get the error: `setting an element with a sequence`. As it's good to show an attempt, below is mine.

    from pylab import *
    from scipy.integrate import odeint
    from numpy import *
    import math

    #Parameters.
    k = 2.0
    m = 1.0
    w0 = (k/m)**(1/2)
    alpha = 0.2
    l = alpha/(2*m)
    f = 1.0
    wd = w0 + 0.025
    beta = 0.2
    t_fixed = 2.0

    #Arrays.
    t = linspace(0.0, 200.0, 200.0)
    wd_array = linspace(w0-1.0, w0+1, 200.0)
    f_array = linspace(10.0, 200.0, 3.0)

    #Time step.
    init_x = 0.0
    init_v = 0.0
    dx = 15.0
    dv = 0.0
    init_cond = [init_x,init_v]
    init_cond2 = [init_x + dx,init_v + dv]

    def rhs6(c,t,wd,f):
        c0dot = c[1]
        c1dot = -2*l*c[1] - w0*w0*c[0] + (f/m)*cos((wd)*t)
        return [c0dot, c1dot]

    full_array = zeros(len(f_array))

    for index,force in enumerate(f_array):
        amp_list = []
        for wd in wd_array:
            res = odeint(rhs6, init_cond, t, args=(wd,force))
            amp = max(res[:,0])
            amp_list.append(amp)
            print(res)
        amp_array = array(amp_list)
        full_array[index] = amp_array

    for f in full_array:
        plot(wd, amp)

    show()

Any ideas?

Answer: Your issue is that `full_array` is a numpy array and you're trying to set an element inside it as a list, hence the `setting an element with a sequence`. To solve this you can instead create a two-dimensional numpy array and then set each of the rows to be `amp_array` as below

    from pylab import *
    from scipy.integrate import odeint
    from numpy import *
    import math

    #Parameters.
    k = 2.0
    m = 1.0
    w0 = (k/m)**(1/2)
    alpha = 0.2
    l = alpha/(2*m)
    f = 1.0
    wd = w0 + 0.025
    beta = 0.2
    t_fixed = 2.0

    #Arrays.
    t = linspace(0.0, 200.0, 200.0)
    wd_array = linspace(w0-1.0, w0+1, 200.0)
    f_array = linspace(10.0, 200.0, 3.0)

    #Time step.
    init_x = 0.0
    init_v = 0.0
    dx = 15.0
    dv = 0.0
    init_cond = [init_x,init_v]
    init_cond2 = [init_x + dx,init_v + dv]

    def rhs6(c,t,wd,f):
        c0dot = c[1]
        c1dot = -2*l*c[1] - w0*w0*c[0] + (f/m)*cos((wd)*t)
        return [c0dot, c1dot]

    full_array = zeros((len(f_array),len(wd_array)))

    for index,force in enumerate(f_array):
        amp_list = []
        for wd in wd_array:
            res = odeint(rhs6, init_cond, t, args=(wd,force))
            amp = max(res[:,0])
            amp_list.append(amp)
    #        print(res)
        amp_array = array(amp_list)
        full_array[index,:] = amp_array

    for f in full_array:
        plot(wd_array, f)

    show()

Alternatively you could follow [gboffi's](http://stackoverflow.com/a/26013620/3005188) advice and use either [`np.vstack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html) or [`np.hstack`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.hstack.html) to create your `full_array`. Your `for` loop at the end to plot the arrays was also incorrect and so I've edited that.

The graph looks like

![Plot](http://i.stack.imgur.com/6tQdf.png)
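For reference, a minimal sketch of the `vstack` variant mentioned above - collect each row in a plain list and stack once at the end, so no pre-allocated array is needed:

    rows = []
    for force in f_array:
        amp_list = []
        for wd in wd_array:
            res = odeint(rhs6, init_cond, t, args=(wd, force))
            amp_list.append(max(res[:, 0]))
        rows.append(amp_list)
    full_array = vstack(rows)  # shape (len(f_array), len(wd_array))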
Python: How to get absolute path of file in imported module
Question: I have imported a module as:

    from source.x.ReviseOnOrder import reviseOnX, x

The method `reviseOnX` runs another python script, say `y.py` which is in the same location i.e `/source/x`. So, when executing `reviseOnX`, I'd like to know the full path so that I can pass the correct path to the subprocess that calls `y.py`. Based on other SO questions, I tried the following:

    print os.path(source.x.ReviseOnOrder.__file__)

But it gives the following error:

    NameError: global name 'source' is not defined

How can I find the correct path?

Answer: You only have references to objects imported from the module, you don't have the module object itself. Use the [`inspect.getmodule()` function](https://docs.python.org/2/library/inspect.html#inspect.getmodule) to get the module object again:

    import inspect
    mod = inspect.getmodule(reviseOnX)
    print os.path.abspath(mod.__file__)

Note that I am using `os.path.abspath()`, **not** `os.path()`. The latter would try to call the module.
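And since the goal is to launch a sibling script with subprocess, the directory is one more call away:

    import os
    import inspect

    mod = inspect.getmodule(reviseOnX)
    mod_dir = os.path.dirname(os.path.abspath(mod.__file__))
    y_script = os.path.join(mod_dir, 'y.py')  # full path to pass to subprocess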
Deploy Django on Amazon Elastic Beanstalk. LOGGING_INFO environment variable not set
Question: I am successfully deploying my Django (1.7) app on Elastic Beanstalk, but I get a 500 when I load it in the browser. In the logs I find this:

    [Wed Sep 24 12:56:11.434509 2014] [:error] [pid 27030] [remote 172.31.6.176:0] mod_wsgi (pid=27030): Target WSGI script '/opt/python/current/app/mysite/wsgi.py' cannot be loaded as Python module.
    [Wed Sep 24 12:56:11.434557 2014] [:error] [pid 27030] [remote 172.31.6.176:0] mod_wsgi (pid=27030): Exception occurred processing WSGI script '/opt/python/current/app/mysite/wsgi.py'.
    [Wed Sep 24 12:56:11.434596 2014] [:error] [pid 27030] [remote 172.31.6.176:0] Traceback (most recent call last):
    [Wed Sep 24 12:56:11.434636 2014] [:error] [pid 27030] [remote 172.31.6.176:0] File "/opt/python/current/app/mysite/wsgi.py", line 14, in <module>
    [Wed Sep 24 12:56:11.434698 2014] [:error] [pid 27030] [remote 172.31.6.176:0] application = get_wsgi_application()
    [Wed Sep 24 12:56:11.434721 2014] [:error] [pid 27030] [remote 172.31.6.176:0] File "/opt/python/run/venv/lib/python2.7/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
    [Wed Sep 24 12:56:11.434759 2014] [:error] [pid 27030] [remote 172.31.6.176:0] django.setup()
    [Wed Sep 24 12:56:11.434781 2014] [:error] [pid 27030] [remote 172.31.6.176:0] File "/opt/python/run/venv/lib/python2.7/site-packages/django/__init__.py", line 20, in setup
    [Wed Sep 24 12:56:11.434813 2014] [:error] [pid 27030] [remote 172.31.6.176:0] configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
    [Wed Sep 24 12:56:11.434835 2014] [:error] [pid 27030] [remote 172.31.6.176:0] File "/opt/python/run/venv/lib/python2.7/site-packages/django/conf/__init__.py", line 46, in __getattr__
    [Wed Sep 24 12:56:11.434867 2014] [:error] [pid 27030] [remote 172.31.6.176:0] self._setup(name)
    [Wed Sep 24 12:56:11.434886 2014] [:error] [pid 27030] [remote 172.31.6.176:0] File "/opt/python/run/venv/lib/python2.7/site-packages/django/conf/__init__.py", line 40, in _setup
    [Wed Sep 24 12:56:11.434916 2014] [:error] [pid 27030] [remote 172.31.6.176:0] % (desc, ENVIRONMENT_VARIABLE))
    [Wed Sep 24 12:56:11.435013 2014] [:error] [pid 27030] [remote 172.31.6.176:0] ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.

I am setting the settings file in my .ebextension mysite.config file like this:

    - option_name: DJANGO_SETTINGS_MODULE
      value: "mysite.settings"

I have also added this to most files:

    import os
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

I suspect the problem lies in how my settings file looks. I have googled this for a good few hours now, but I can't get it to work... I am not even sure what my settings file should look like. Right now it just contains this:

    LOGGING_CONFIG = 'django.utils.log.dictConfig'

I am pretty sure it is the settings file that is the problem. If anyone could point me in the right direction I would be so grateful.

Answer: > I have also added this to most files:
>
> `import os`
>
> `os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")`

That piece of code only needs to be placed in your .wsgi file; it has no use elsewhere.
Now, once you've done that, here's why the `setdefault` method isn't doing the same thing as the manual assignment:

> The key problem is what the setdefault() method does when setting the
> environment variable DJANGO_SETTINGS_MODULE compared to using assignment as
> previously. In the case of assignment the environment variable is always
> updated. For setdefault(), it is only updated if it is not already set.

via [this](http://blog.dscpl.com.au/2012/10/requests-running-in-wrong-django.html). Hope this makes it clear.
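So in mysite/wsgi.py, use plain assignment instead of `setdefault` (a sketch assuming the stock Django 1.7 wsgi.py layout):

    import os
    os.environ["DJANGO_SETTINGS_MODULE"] = "mysite.settings"  # assignment always wins

    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()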
Calculate 9:00 PM in between two dates in python
Question: How can I calculate how many times 9:00 PM can come in between two dates?

    dt_start_time = "2014-09-23 11:00:00"
    dt_end_time = "2014-09-24 12:00:00"

Expected Output is 1

    dt_start_time = "2014-09-24 09:00:00"
    dt_end_time = "2014-09-24 22:00:00"

Expected Output 1

    dt_start_time = "2014-09-24 09:00:00"
    dt_end_time = "2014-09-26 12:00:00"

Expected Output 2

Answer: You can do this simply by converting the time difference between the two dates to days, and adding an extra day if the date range misses an extra 9 PM because of the roundoff:

    dt_start_time = "2014-09-23 11:00:00"
    dt_end_time = "2014-09-24 23:00:00"
    format = "%Y-%m-%d %H:%M:%S"

    from datetime import datetime as dt
    st = dt.strptime(dt_start_time, format)
    end = dt.strptime(dt_end_time, format)

    count = (end - st).days
    if st.hour < 21 and end.hour >= 21:
        count += 1
    print count
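Note that the hour-based correction above ignores minutes (a start of 21:30 still has `st.hour == 21`), so edge cases right around 9 PM can be off by one. A sketch that counts the 21:00 instants directly avoids this (endpoints treated as inclusive):

    from datetime import datetime, timedelta

    def count_nine_pm(start, end):
        # First 21:00 at or after `start`.
        nine = start.replace(hour=21, minute=0, second=0, microsecond=0)
        if nine < start:
            nine += timedelta(days=1)
        count = 0
        while nine <= end:
            count += 1
            nine += timedelta(days=1)
        return count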
Unable to import undirected graph from EdgeList file in igraph-python
Question: I am using the igraph-python's `Graph.Read_Ncol` function. Below is my code for reading the data.

    def loadData(filename):
        data = None
        data = ig.Graph.Read_Ncol(filename, directed=False)
        return data

I am using this dataset from SNAP group: <https://snap.stanford.edu/data/ca-GrQc.html>

As mentioned there, the dataset has 14496 edges and 5242 nodes. However when I do `data.summary()` on my graph I get the following output.

    >>> data.summary()
    'IGRAPH UN-- 5242 28980 -- \n+ attr: name (v)'

Even when I am doing `data.to_undirected()` and trying `data.summary()` again I am getting the same result as above.

    >>> data.to_undirected()
    >>> data.summary()
    'IGRAPH UN-- 5242 28980 -- \n+ attr: name (v)'

When I am loading the graph using the SNAP library in an undirected fashion then I am getting the correct output.

    def loadData(filename):
        data = None
        data = snap.LoadEdgeList(snap.PUNGraph,filename,0,1)
        return data

What am I doing wrong? Or is there an issue with the igraph API?

Answer: Most of the edges appear twice in your network, and igraph adds them as multiple edges. Call `simplify()` on the graph to remove these multiple edges.

<http://igraph.org/python/doc/igraph.GraphBase-class.html#simplify>
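So the loader only needs one extra call; with the duplicates collapsed, `summary()` should report the expected 14496 edges:

    def loadData(filename):
        data = ig.Graph.Read_Ncol(filename, directed=False)
        data.simplify()  # drop multi-edges (and self-loops) in place
        return data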
What's a Pythonic way to make a non-blocking version of an object?
Question: I often use python objects with methods that block until finished, and want to convert these methods to non-blocking versions. I find myself executing the following pattern quite frequently:

1. Define the object
2. Define a function which creates an instance of the object, and parses commands to call methods of the object
3. Define a "parent" object which creates a subprocess running the function defined in step 2, and which duplicates the methods of the original object.

This gets the job done, but involves a lot of tedious repetition of code, and doesn't seem very Pythonic to me. **Is there a standard, better way to do this?**

A highly simplified example to illustrate the pattern I've been using:

    import ctypes
    import Queue
    import multiprocessing as mp

    class Hardware:
        def __init__(
            self,
            other_init_args):
            self.dll = ctypes.cdll.LoadLibrary('hardware.dll')
            self.dll.Initialize(other_init_args)

        def blocking_command(self, arg_1, arg_2, arg_3):
            """
            This command takes a long time to execute, and blocks while it
            executes. However, while it's executing, we have to coordinate other
            pieces of hardware too, so blocking is bad.
            """
            self.dll.Takes_A_Long_Time(arg_1, arg_2, arg_3)

        def change_settings(self, arg_1, arg_2):
            """
            Realistically, there's tons of other functions in the DLL we want to
            expose as methods. For this example, just one.
            """
            self.dll.Change_Settings(arg_1, arg_2)

        def close(self):
            self.dll.Quit()

    def hardware_child_process(
        commands,
        other_init_args):
        hw = Hardware(other_init_args)
        while True:
            cmd, args = commands.recv()
            if cmd == 'start':
                hw.blocking_command(**args)
            elif cmd == 'change_settings':
                hw.change_settings(**args)
            elif cmd == 'quit':
                break
        hw.close()

    class Nonblocking_Hardware:
        """
        This class (hopefully) duplicates the functionality of the
        Hardware class, except now Hardware.blocking_command() doesn't
        block other execution.
        """
        def __init__(
            self,
            other_init_args):
            self.commands, self.child_commands = mp.Pipe()
            self.child = mp.Process(
                target=hardware_child_process,
                args=(self.child_commands,
                      other_init_args))
            self.child.start()

        def blocking_command(self, arg_1, arg_2, arg_3):
            """
            Doesn't block any more!
            """
            self.commands.send(
                ('start',
                 {'arg_1': arg_1,
                  'arg_2': arg_2,
                  'arg_3': arg_3}))

        def change_settings(self, arg_1, arg_2):
            self.commands.send(
                ('change_settings',
                 {'arg_1': arg_1,
                  'arg_2': arg_2}))

        def close(self):
            self.commands.send(('quit', {}))
            self.child.join()
            return None

**Backstory:**

I use Python to control hardware, usually through closed-source DLLs that I call using ctypes. Frequently, I end up wanting to call functions from the DLL which block until execution finishes, but I don't want my control code to block. For example, I might be synchronizing a camera with illumination using an analog-out card. The camera DLL "snap" function must be called before the analog-out card can send a trigger pulse to the camera, but the "snap" command blocks, preventing me from activating the analog-out card.

Answer: I've done something similar by using a metaclass to create non-blocking versions of blocking functions on the object.
It allows you to create a non-blocking version of a class just by doing this:

    class NB_Hardware(object):
        __metaclass__ = NonBlockBuilder
        delegate = Hardware
        nb_funcs = ['blocking_command']

I've taken my original implementation, which targeted Python 3 and used a `concurrent.futures.ThreadPoolExecutor` (I was wrapping blocking I/O calls to make them non-blocking in an `asyncio` context*), and adapted it to use Python 2 and a `concurrent.futures.ProcessPoolExecutor`. Here's the implementation of the metaclass along with its helper classes:

    from multiprocessing import cpu_count
    from concurrent.futures import ProcessPoolExecutor

    def runner(self, cb, *args, **kwargs):
        return getattr(self, cb)(*args, **kwargs)

    class _ExecutorMixin():
        """ A Mixin that provides asynchronous functionality.

        This mixin provides methods that allow a class to run
        blocking methods in a ProcessPoolExecutor.
        It also provides methods that attempt to keep the object
        picklable despite having a non-picklable
        ProcessPoolExecutor as part of its state.

        """
        pool_workers = cpu_count()

        def run_in_executor(self, callback, *args, **kwargs):
            """ Runs a function in an Executor.

            Returns a concurrent.futures.Future

            """
            if not hasattr(self, '_executor'):
                self._executor = self._get_executor()

            return self._executor.submit(runner, self, callback, *args, **kwargs)

        def _get_executor(self):
            return ProcessPoolExecutor(max_workers=self.pool_workers)

        def __getattr__(self, attr):
            if (self._obj and hasattr(self._obj, attr) and
                    not attr.startswith("__")):
                return getattr(self._obj, attr)
            raise AttributeError(attr)

        def __getstate__(self):
            self_dict = self.__dict__
            self_dict['_executor'] = None
            return self_dict

        def __setstate__(self, state):
            self.__dict__.update(state)
            self._executor = self._get_executor()

    class NonBlockBuilder(type):
        """ Metaclass for adding non-blocking versions of methods to a class.

        Expects to find the following class attributes:
        nb_funcs - A list containing methods that need non-blocking wrappers
        delegate - The class to wrap (add non-blocking methods to)
        pool_workers - (optional) how many workers to put in the internal pool.

        The metaclass inserts a mixin (_ExecutorMixin) into the
        inheritance hierarchy of cls. This mixin provides methods
        that allow the non-blocking wrappers to do their work.

        """
        def __new__(cls, clsname, bases, dct, **kwargs):
            nbfunc_list = dct.get('nb_funcs', [])
            existing_nbfuncs = set()

            def find_existing_nbfuncs(d):
                for attr in d:
                    if attr.startswith("nb_"):
                        existing_nbfuncs.add(attr)

            # Determine if any bases include the nb_funcs attribute, or
            # if either this class or a base class provides an actual
            # implementation for a non-blocking method.
            find_existing_nbfuncs(dct)
            for b in bases:
                b_dct = b.__dict__
                nbfunc_list.extend(b_dct.get('nb_funcs', []))
                find_existing_nbfuncs(b_dct)

            # Add _ExecutorMixin to bases.
            if _ExecutorMixin not in bases:
                bases += (_ExecutorMixin,)

            # Add non-blocking funcs to dct, but only if a definition
            # is not already provided by dct or one of our bases.
            for func in nbfunc_list:
                nb_name = 'nb_{}'.format(func)
                if nb_name not in existing_nbfuncs:
                    dct[nb_name] = cls.nbfunc_maker(func)

            return super(NonBlockBuilder, cls).__new__(cls, clsname, bases, dct)

        def __init__(cls, name, bases, dct):
            """ Properly initialize a non-blocking wrapper.

            Sets pool_workers and delegate on the class, and also
            adds an __init__ method to it that instantiates the
            delegate with the proper context.
""" super(NonBlockBuilder, cls).__init__(name, bases, dct) pool_workers = dct.get('pool_workers') delegate = dct.get('delegate') old_init = dct.get('__init__') # Search bases for values we care about, if we didn't # find them on the child class. for b in bases: if b is object: # Skip object continue b_dct = b.__dict__ if not pool_workers: pool_workers = b_dct.get('pool_workers') if not delegate: delegate = b_dct.get('delegate') if not old_init: old_init = b_dct.get('__init__') cls.delegate = delegate # If we found a value for pool_workers, set it. If not, # ExecutorMixin sets a default that will be used. if pool_workers: cls.pool_workers = pool_workers # Here's the __init__ we want every wrapper class to use. # It just instantiates the delegate object. def init_func(self, *args, **kwargs): # Be sure to call the original __init__, if there # was one. if old_init: old_init(self, *args, **kwargs) if self.delegate: self._obj = self.delegate(*args, **kwargs) cls.__init__ = init_func @staticmethod def nbfunc_maker(func): def nb_func(self, *args, **kwargs): return self.run_in_executor(func, *args, **kwargs) return nb_func Usage: from nb_helper import NonBlockBuilder import time class Hardware: def __init__(self, other_init_args): self.other = other_init_args def blocking_command(self, arg_1, arg_2, arg_3): print("start blocking") time.sleep(5) return "blocking" def normal_command(self): return "normal" class NBHardware(object): __metaclass__ = NonBlockBuilder delegate = Hardware nb_funcs = ['blocking_command'] if __name__ == "__main__": h = NBHardware("abc") print "doing blocking call" print h.blocking_command(1,2,3) print "done" print "doing non-block call" x = h.nb_blocking_command(1,2,3) # This is non-blocking and returns concurrent.future.Future print h.normal_command() # You can still use the normal functions, too. print x.result() # Waits for the result from the Future Output: doing blocking call start blocking < 5 second delay > blocking done doing non-block call start blocking normal < 5 second delay > blocking The one tricky piece for you is making sure `Hardware` is picklable. You can probably do that by making `__getstate__` delete the `dll` object, and recreate it in `__setstate__`, similar to what `_ExecutorMixin` does. You'll also need the Python 2.x [backport of `concurrent.futures`](https://pypi.python.org/pypi/futures). Note that there's a bunch of complexity in the metaclass so that they'll work properly with inheritance, and support things like providing custom implementations of `__init__` and the `nb_*` methods. For example, something like this is supported: class AioBaseLock(object): __metaclass__ = NonBlockBuilder pool_workers = 1 coroutines = ['acquire', 'release'] def __init__(self, *args, **kwargs): self._threaded_acquire = False def _after_fork(obj): obj._threaded_acquire = False register_after_fork(self, _after_fork) def coro_acquire(self, *args, **kwargs): def lock_acquired(fut): if fut.result(): self._threaded_acquire = True out = self.run_in_executor(self._obj.acquire, *args, **kwargs) out.add_done_callback(lock_acquired) return out class AioLock(AioBaseLock): delegate = Lock class AioRLock(AioBaseLock): delegate = RLock If you don't need that kind of flexibility, you can simplify the implementation quite a bit: class NonBlockBuilder(type): """ Metaclass for adding non-blocking versions of methods to a class. 
        Expects to find the following class attributes:
        nb_funcs - A list containing methods that need non-blocking wrappers
        delegate - The class to wrap (add non-blocking methods to)
        pool_workers - (optional) how many workers to put in the internal pool.

        The metaclass inserts a mixin (_ExecutorMixin) into the
        inheritance hierarchy of cls. This mixin provides methods
        that allow the non-blocking wrappers to do their work.

        """
        def __new__(cls, clsname, bases, dct, **kwargs):
            nbfunc_list = dct.get('nb_funcs', [])

            # Add _ExecutorMixin to bases.
            if _ExecutorMixin not in bases:
                bases += (_ExecutorMixin,)

            # Add non-blocking funcs to dct, but only if a definition
            # is not already provided by dct or one of our bases.
            for func in nbfunc_list:
                nb_name = 'nb_{}'.format(func)
                dct[nb_name] = cls.nbfunc_maker(func)

            return super(NonBlockBuilder, cls).__new__(cls, clsname, bases, dct)

        def __init__(cls, name, bases, dct):
            """ Properly initialize a non-blocking wrapper.

            Sets pool_workers and delegate on the class, and also
            adds an __init__ method to it that instantiates the
            delegate with the proper context.

            """
            super(NonBlockBuilder, cls).__init__(name, bases, dct)
            pool_workers = dct.get('pool_workers')
            cls.delegate = dct['delegate']

            # If we found a value for pool_workers, set it. If not,
            # ExecutorMixin sets a default that will be used.
            if pool_workers:
                cls.pool_workers = pool_workers

            # Here's the __init__ we want every wrapper class to use.
            # It just instantiates the delegate object.
            def init_func(self, *args, **kwargs):
                self._obj = self.delegate(*args, **kwargs)
            cls.__init__ = init_func

        @staticmethod
        def nbfunc_maker(func):
            def nb_func(self, *args, **kwargs):
                return self.run_in_executor(func, *args, **kwargs)
            return nb_func

* The original code is [here](https://github.com/dano/aioprocessing/blob/master/aioprocessing/executor.py), for reference.
Genshi for loop not working...?
Question: I'm having trouble using a Genshi `py:for` attribute. What have I done wrong? Code written out below; to run, make a virtualenv with Python 2, do `pip install genshi flask`, copy the files as listed in an isolated directory, and run `python hello.py`

## The code

Contents of `hello.py`:

    import os.path
    import traceback

    import flask
    import genshi.template

    app = flask.Flask(__name__)
    template_dir = os.path.join(os.path.dirname(__file__), 'templates')
    loader = genshi.template.TemplateLoader(template_dir, auto_reload=True)

    MESSAGES = [
        "Hello",
        "World",
        "Sup?",
    ]

    @app.route("/", defaults={"name": ""})
    @app.route("/<path:name>")
    def show(name):
        template_name = name + ".html"
        try:
            template = loader.load(template_name)
            stream = template.generate(
                messages=MESSAGES,
            )
            rendered = stream.render('html', doctype='html')
        except Exception as e:
            tb = traceback.format_exc()
            return "Cannot load /{}: {} <pre>\n{}</pre>".format(name, e, tb)
        return rendered

    if __name__ == '__main__':
        app.run()

Contents of `templates/debug.html`:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:py="http://geshi.edgewall.org/"
    >
    <body>
        <p>Messages is a ${str(type(messages))} of length ${len(messages)}</p>
        <p>Messages:</p>
        <pre>
            ${'\n'.join(m + "!" for m in messages)}
        </pre>
    </body>
    </html>

Contents of `templates/hello.html`:

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:py="http://geshi.edgewall.org/"
    >
    <body>
        <h1>Messages</h1>
        <ul>
            <li py:for="msg in messages">
                $msg
            </li>
        </ul>
    </body>
    </html>

## The problem

When I visit `http://localhost:5000/debug` everything seems to work as expected, but when I run `http://localhost:5000/hello` I get "Cannot render /hello: 'msg' not defined"

Answer: You're missing an 'n' in your namespace definition. It currently reads 'xmlns:py="http://geshi.edgewall.org/"' but it should read 'xmlns:py="http://genshi.edgewall.org/"'. This causes Genshi to not recognize the 'py:for' attribute and then happily try to evaluate the '$msg' without any 'for msg in messages' to define the variable.
Python kivy link checkbox from application to .kv
Question: I am a new user to kivy. I need to link a checkbox from the .kv interface so that it has an impact on the underlying python application. I tried the following code for my application:

    from kivy.app import App
    from kivy.uix.floatlayout import FloatLayout
    from kivy.factory import Factory
    from kivy.properties import ObjectProperty
    from kivy.uix.popup import Popup
    from kivy.uix.checkbox import CheckBox

    import os

    class LoadDialog(FloatLayout):
        load = ObjectProperty(None)
        cancel = ObjectProperty(None)

    class Root(FloatLayout):
        loadfile = ObjectProperty(None)
        checkbox = CheckBox()

        def dismiss_popup(self):
            self._popup.dismiss()

        def show_load(self):
            content = LoadDialog(load=self.load, cancel=self.dismiss_popup)
            self._popup = Popup(title="Load file", content=content,
                                size_hint=(0.9, 0.9))
            self._popup.open()

        def load(self, path, filename):
            print 'user chose: %s' % os.path.join(path, filename[0])
            print self.checkbox.active
            self.dismiss_popup()

        def activate_checkbox(self):
            print 'activate checkbox'
            self.checkbox._toggle_active()

    class Chooser(App):
        pass

    Factory.register('Root', cls=Root)
    Factory.register('LoadDialog', cls=LoadDialog)

    if __name__ == '__main__':
        Chooser().run()

with the following `chooser.kv` file:

    #:kivy 1.1.0

    Root:
        BoxLayout:
            orientation: 'vertical'
            BoxLayout:
                size_hint_y: 90
                height: 30
                Button:
                    text: 'Load'
                    on_release: root.show_load()
            BoxLayout:
                size_hint_y: 90
                height: 30
                CheckBox:
                    center: self.parent.center
                    on_active: root.activate_checkbox
                Label:
                    font_size: 20
                    center_x: self.parent.width / 4
                    top: self.parent.top - 50
                    text: 'Faster'

    <LoadDialog>:
        BoxLayout:
            size: root.size
            pos: root.pos
            orientation: "vertical"
            FileChooserListView:
                id: filechooser

            BoxLayout:
                size_hint_y: None
                height: 30
                Button:
                    text: "Cancel"
                    on_release: root.cancel()
                Button:
                    text: "Load"
                    on_release: root.load(filechooser.path, filechooser.selection)

Unfortunately it was to no avail: it seems that the state of the CheckBox user interface element has no effect whatsoever on the state of the checkbox inside the Root class. Is there any simple way to link the two?

Answer:

    on_active: root.activate_checkbox

This doesn't do anything, you want `root.activate_checkbox()`.
Python script using Glade code produces glib error about old version of GTK Question: I am trying to create my first Glade GUI using Python as the back-end. I created the GUI in Glade and saved the file as a .glade. I then created my Python code and saved it in the same directory as the glade file. Upon running the Python file in the terminal, I receive the following message:

    Traceback (most recent call last):
      File "glade6.py", line 56, in <module>
        main = Buglump()
      File "glade6.py", line 20, in __init__
        self.builder.add_from_file("glade6.glade")
    glib.GError: glade6.glade: required gtk+ version 3.10, current version is 2.24

OS: Ubuntu 14.04.1 LTS 64-bit And the Python file that was run: Code acquired from '<http://gnipsel.com/glade/index.html>'

    #!/usr/bin/env python

    import sys
    try:
        import gtk
        import gtk.glade
    except:
        print('GTK not available')
        sys.exit(1)
    try:
        import pygtk
        pygtk.require('2.0')
    except:
        pass

    class Buglump:

        def __init__(self):
            self.builder = gtk.Builder()
            self.builder.add_from_file("glade6.glade")
            self.builder.connect_signals(self)

            # the liststore
            self.liststore = gtk.ListStore(int,str)
            self.liststore.append([0,"Select an Item:"])
            self.liststore.append([1,"Row 1"])
            self.liststore.append([2,"Row 2"])
            self.liststore.append([3,"Row 3"])
            self.liststore.append([4,"Row 4"])
            self.liststore.append([5,"Row 5"])

            # the combobox
            self.combobox = self.builder.get_object("combobox1")
            self.combobox.set_model(self.liststore)
            self.cell = gtk.CellRendererText()
            self.combobox.pack_start(self.cell, True)
            self.combobox.add_attribute(self.cell, 'text', 1)
            self.combobox.set_active(0)

            self.window = self.builder.get_object("window1")
            self.window1.show()

        def on_combobox1_changed(self, widget, data=None):
            self.index = widget.get_active()
            self.model = widget.get_model()
            self.item = self.model[self.index][1]
            print "ComboBox Active Text is", self.item
            print "ComboBox Active Index is", self.index
            self.builder.get_object("label1").set_text(self.item)

        def on_window1_destroy(self, object, data=None):
            print "quit with cancel"
            gtk.main_quit()

    if __name__ == "__main__":
        main = Buglump()
        gtk.main()

Answer: `import gtk` actually imports `gtk+2.x`. If you need to use `gtk+3`, assuming you already have it installed, you need to write:

    from gi.repository import Gtk

(note the capital letter G). Remove these lines:

    import gtk
    import gtk.glade

and remember to change all instances of `gtk` inside your code to `Gtk`. Example: change `gtk.main_quit()` to `Gtk.main_quit()`
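As a rough sketch, the start of the ported class might look like this (assuming PyGObject and the GTK 3 introspection data are installed; the remaining `gtk.*` calls get the same rename treatment):

    from gi.repository import Gtk

    class Buglump:
        def __init__(self):
            self.builder = Gtk.Builder()
            self.builder.add_from_file("glade6.glade")
            self.builder.connect_signals(self)
            # Gtk.ListStore, Gtk.CellRendererText, etc. replace the old gtk.* names
            self.liststore = Gtk.ListStore(int, str)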
Google Push-To-Deploy Pipelines - Unit tests fail with module import error Question: I get the following error when I try to execute a build on the Google provisioned Jenkins servers in Compute Engine.

    [deployment_5371449468518400_1411607125060] $ /bin/sh -xe /tmp/hudson807438832151987098.sh
    + nosetests --with-xunit --xunit-file=nosetests.xml
    E
    ======================================================================
    ERROR: Failure: ImportError (No module named google.appengine.ext)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/nose/loader.py", line 414, in loadTestsFromName
        addr.filename, addr.module)
      File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath
        return self.importFromDir(dir_path, fqname)
      File "/usr/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir
        mod = load_module(part_fqname, fh, filename, desc)
      File "/var/jenkins/workspace/deployment_5371449468518400_1411607125060/tests.py", line 9, in <module>
        from google.appengine.ext import ndb
    ImportError: No module named google.appengine.ext

    ----------------------------------------------------------------------
    Ran 1 test in 0.448s

I am confident that this is happening because of the following line in my tests.py

    from google.appengine.ext import ndb

Please help. I am including a [link](https://docs.google.com/document/d/1Z0kdZGA_QwaKMfECUR39vcWCl00QuXyIG3pj9Ypcp0k/edit) to a doc which has more details Answer: Joseph, as the path is not set correctly, please add this to the beginning of your tests.py file:

    import sys
    sys.path.append("/google-cloud-sdk/platform/google_appengine")

Don't forget to add this part before trying to import anything from that library, as the path wouldn't be configured yet otherwise. In other words:

    import sys
    sys.path.append("/google-cloud-sdk/platform/google_appengine")

    # ... other imports ...

    # import App Engine modules ONLY after the path has been updated
    # to point to the App Engine libraries
    from google.appengine.ext import ndb
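If the SDK can live in different places on different build machines, a slightly more defensive sketch (the `GAE_SDK_ROOT` variable name here is made up — export whatever your build configuration actually provides):

    import os
    import sys

    # Fall back to the path used above if the build doesn't export one
    sdk_root = os.environ.get('GAE_SDK_ROOT',
                              '/google-cloud-sdk/platform/google_appengine')
    sys.path.append(sdk_root)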
Changes made using pymysql don't "persist" to database Question: I am trying to Insert Into a mysql database using pymysql. After the Insert and calling commit, as a demo, I Select and print all rows. The new row is printed, but only the last added row is displayed, and the auto_incremented Id field increments, yet only one row is displayed regardless of how many times I run the code. No rows are visible in a mysql command line select or in the Sequel Pro "Contents" tab. I believe I have reviewed all the pertinent "Similar Questions". Thanks.

    import pymysql

    connCost = pymysql.connect(host='127.0.0.1', port=3306, user='*****', passwd='********', db='CU_COST')
    testCur = connCost.cursor()

    vF1 = 321
    vF2 = '/zero#'

    # Transaction implicitly started?
    query = "Insert Into testInsert (F1, F2) Values(" + str(vF1) + ", '" + vF2 +"')"
    testCur.execute(query)
    connCost.commit

    testCur.execute("Select * From testInsert")
    rows = testCur
    for row in rows:
        print row

    testCur.close
    connCost.close

Result of print (after running the code 9 times):

    2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] Python
    Type "help", "copyright", "credits" or "license" for more information.
    [evaluate testInsert2.py]
    (9, 321, '/zero#')

Answer: Commit is a function. Therefore you must call it as such:

    connCost.commit()

Without the parentheses, `connCost.commit` merely references the method and never runs it, so the transaction is rolled back when the connection closes — which is why the row shows up inside the same session but never from the mysql command line. That should fix it.
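For completeness, a minimal sketch of the fixed tail of the script — note that `close` has the same missing-parentheses problem, and that passing parameters to `execute` is safer than building the query by string concatenation:

    query = "Insert Into testInsert (F1, F2) Values (%s, %s)"
    testCur.execute(query, (vF1, vF2))  # pymysql quotes the values for you
    connCost.commit()                   # parentheses: actually call commit

    testCur.execute("Select * From testInsert")
    for row in testCur.fetchall():
        print row

    testCur.close()   # these were also bare references before
    connCost.close()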
Installing MySQLdb on Mac OS X 10.9.5 Question: From this [link](https://pypi.python.org/pypi/MySQL-python/1.2.4) I downloaded MySQLdb and ran the following two commands

    sudo python setup.py build
    sudo python setup.py install

I also defined the environment path in .bash_profile as follows

    export PATH="/Applications/XAMPP/xamppfiles/bin/:$PATH"

The **problem** is that after running the above commands I don't see any error, but when I try the following command in the python shell I see an error

    import MySQLdb

**Error Log:**

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "MySQLdb/__init__.py", line 19, in <module>
        import _mysql
    ImportError: dlopen(//anaconda/lib/python2.7/site-packages/MySQL_python-1.2.4-py2.7-macosx-10.5-x86_64.egg/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib
      Referenced from: //anaconda/lib/python2.7/site-packages/MySQL_python-1.2.4-py2.7-macosx-10.5-x86_64.egg/_mysql.so
      Reason: image not found

My operating system - Mac 10.9.5 and Python version - 2.7.5 :: Anaconda 1.6.1 (x86_64) **How can I install MySQLdb on Mac OS X 10.9.5, 64bit?** Thanks. Similar to [this question](http://stackoverflow.com/questions/5531958/installing-mysqldb-on-mac-os-x), but that one is for 32-bit Mac OS X Snow Leopard. Answer: Since MySQL officially supplies a driver for Python, called MySQL Connector/Python, and MySQLdb is no longer maintained, I suggest you install the official one instead — Django supports it: <http://dev.mysql.com/downloads/connector/python/>. Then in your settings.py, just change the engine name:

    DATABASES = {
        'default': {
            'NAME': 'user_data',
            'ENGINE': 'mysql.connector.django',
            'USER': 'mysql_user',
            'PASSWORD': 'priv4te',
            'OPTIONS': {
              'autocommit': True,
            },
        }
    }

And the developer guide is at <http://dev.mysql.com/doc/connector-python/en/index.html>. Have fun
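If you are not using Django, a minimal sketch of using the connector directly (the host, credentials and database name are placeholders — substitute your own):

    import mysql.connector

    conn = mysql.connector.connect(user='mysql_user', password='priv4te',
                                   host='127.0.0.1', database='user_data')
    cur = conn.cursor()
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
    conn.close()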
Lost on part with Python loops with a function to roll combined value of dice Question: I have to create a function that rolls two dice randomly and returns the combined value. Then I have to use a loop to roll it five times, giving me five different combined values. The point I keep getting stuck at is defining the loop in terms of the combined roll to get an outcome. I just keep getting the sequence 1 22 333 4444 55555. If you have any suggestions or help it's greatly appreciated. Thank you. Here's the code I've worked out so far

    import random

    def point():
        rollone = random.randint(1, 6)
        rolltwo = random.randint(1, 6)
        combined_value = dicesum(rollone, rolltwo)

    def dicesum(x, y):
        result = x + y
        return result

    for result in range(1,6):
        for dicerolls in range(result):
            print(result, end='')
        print()

    point()

Answer: Your `point` function doesn't return anything after it sums the results, and your loop never calls `point` at all — it just prints its own counters. See the code below.

    import random

    def point():
        rollone = random.randint(1, 6)  # randint is inclusive, so this gives 1 to 6
        rolltwo = random.randint(1, 6)
        combined_value = rollone + rolltwo  # don't need a helper just to add these
        return combined_value  # return the combined value

    def main():
        for roll in range(5):
            print("Roll", roll, "Sum", point())

Testing

    >>> main()
    Roll 0 Sum 6
    Roll 1 Sum 2
    Roll 2 Sum 5
    Roll 3 Sum 7
    Roll 4 Sum 12
unable to install PIL on ubuntu 13.04 Question: I am trying to install PIL (not Pillow) on ubuntu 13.04. I installed all the dependency libs for jpeg, tiff, png, and lcms support. I used the following command to install PIL:

    $ sudo pip install --allow-external PIL --allow-unverified PIL PIL

However, even after doing this, I still get the following:

    --------------------------------------------------------------------
    PIL 1.1.7 SETUP SUMMARY
    --------------------------------------------------------------------
    version       1.1.7
    platform      linux2 2.7.6 (default, Mar 22 2014, 22:59:56)
                  [GCC 4.8.2]
    --------------------------------------------------------------------
    *** TKINTER support not available
    *** JPEG support not available
    *** ZLIB (PNG/ZIP) support not available
    *** FREETYPE2 support not available
    *** LITTLECMS support not available
    --------------------------------------------------------------------
    To add a missing option, make sure you have the required
    library, and set the corresponding ROOT variable in the
    setup.py script.

    To check the build, run the selftest.py script.
    changing mode of build/scripts-2.7/pilfile.py from 644 to 755
    changing mode of build/scripts-2.7/pildriver.py from 644 to 755
    changing mode of build/scripts-2.7/pilfont.py from 644 to 755
    changing mode of build/scripts-2.7/pilprint.py from 644 to 755
    changing mode of build/scripts-2.7/pilconvert.py from 644 to 755
    changing mode of /usr/local/bin/pilfile.py to 755
    changing mode of /usr/local/bin/pildriver.py to 755
    changing mode of /usr/local/bin/pilfont.py to 755
    changing mode of /usr/local/bin/pilprint.py to 755
    changing mode of /usr/local/bin/pilconvert.py to 755
    Successfully installed PIL
    Cleaning up...

Any idea what I am doing wrong? EDIT: I did install the support libraries. They are getting installed into the /usr/lib/x86_64-linux-gnu folder. So I created symlinks as follows:

    2293  sudo ln -s /usr/lib/x86_64-linux-gnu/libfreetype.so /usr/lib
    2294  sudo ln -s /usr/lib/x86_64-linux-gnu/liblcms.so /usr/llib
    2295  sudo ln -s /usr/lib/x86_64-linux-gnu/libpng.so /usr/lib
    2296  sudo ln -s /usr/lib/x86_64-linux-gnu/libtiff.so /usr/lib

When I symlinked jpeg the first time, it detected it and compiled support for it. So I added the other symlinks as above. However it now fails with this error:

    building '_imagingft' extension
    x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/freetype2 -IlibImaging -I/usr/include -I/usr/local/include -I/usr/include/python2.7 -c _imagingft.c -o build/temp.linux-x86_64-2.7/_imagingft.o
    _imagingft.c:73:31: fatal error: freetype/fterrors.h: No such file or directory
     #include <freetype/fterrors.h>
                                   ^
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

    ----------------------------------------
    Cleaning up...
    Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/PIL/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-BQ7cvZ-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/PIL
    Traceback (most recent call last):
      File "/usr/bin/pip", line 9, in <module>
        load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
      File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 185, in main
        return command.main(cmd_args)
      File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 161, in main
        text = '\n'.join(complete_log)
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 57: ordinal not in range(128)

Note that when I installed libfreetype, it put the headers in the /usr/include/freetype2 folder, but PIL seems to be looking in a different place. Answer: I couldn't figure out a solution to this problem, so I gave up on enabling freetype support and just removed the libfreetype lib from my system. That, combined with the symlinks for the other libs, seemed to make it work.
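For readers hitting the same `freetype/fterrors.h` error: freetype 2.5 moved its headers out of the `freetype/` subdirectory, while PIL 1.1.7 still includes them via the old path. A commonly reported workaround — untested here, and it assumes the headers now sit directly under /usr/include/freetype2 — is a compatibility symlink so the old include path resolves again:

    $ sudo ln -s /usr/include/freetype2 /usr/include/freetype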
Error while importing paramiko in python script Question: I'm trying to build a python script which has the line "import paramiko" and I get this error:

    Traceback (most recent call last):
      File "script.py", line 3, in <module>
        import paramiko
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/paramiko-1.15.1-py3.4.egg/paramiko/__init__.py", line 30, in <module>
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/paramiko-1.15.1-py3.4.egg/paramiko/transport.py", line 49, in <module>
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/paramiko-1.15.1-py3.4.egg/paramiko/dsskey.py", line 26, in <module>
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/PublicKey/DSA.py", line 89, in <module>
        from Crypto import Random
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/Random/__init__.py", line 29, in <module>
        from Crypto.Random import _UserFriendlyRNG
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 38, in <module>
        from Crypto.Random.Fortuna import FortunaAccumulator
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/Random/Fortuna/FortunaAccumulator.py", line 39, in <module>
        from . import FortunaGenerator
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/Random/Fortuna/FortunaGenerator.py", line 36, in <module>
        from Crypto.Cipher import AES
      File "/home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/Cipher/AES.py", line 50, in <module>
        from Crypto.Cipher import _AES
    ImportError: /home/FBML7HR/.local/lib/python3.4/site-packages/Crypto/Cipher/_AES.cpython-34m.so: undefined symbol: rpl_malloc

I've installed the pycrypto and paramiko modules. Any idea what could be the problem here? Answer: Somebody had a similar issue with pycrypto and fixed it by setting an environment variable and doing a re-install. Check out <http://github.com/jtriley/StarCluster/issues/138>
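If it helps, the commonly reported fix for the `rpl_malloc` undefined symbol is pre-seeding autoconf's malloc checks before rebuilding pycrypto — a sketch, assuming a user-level install as in the traceback above:

    $ export ac_cv_func_malloc_0_nonnull=yes
    $ export ac_cv_func_realloc_0_nonnull=yes
    $ pip3 uninstall pycrypto
    $ pip3 install --user pycrypto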
Read from a .csv file containing exported pcap packet info using Tshark and count particular instances in Python Question: I want to read from a .csv file (created from a .pcap file) and count the number of instances of certain network packets under the Info column. I am using Python 2.7 on Windows 7. Is there a way I can do this? Thanks in advance Answer: Try the `csv` module — <https://docs.python.org/2/library/csv.html> — adjusting the delimiter and quote character to match your export:

    >>> import csv
    >>> with open('eggs.csv', 'rb') as csvfile:
    ...     spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
    ...     for row in spamreader:
    ...         print ', '.join(row)
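Building on that, a minimal sketch of counting matches in the Info column — it assumes the Tshark export is comma-separated with a header row and that the column is literally named "Info" (adjust both to match your export; 'ARP' is just a placeholder substring):

    import csv
    from collections import Counter

    counts = Counter()
    with open('capture.csv', 'rb') as f:      # 'rb' for the csv module on Python 2.7
        for row in csv.DictReader(f):         # the header row supplies the keys
            if 'ARP' in row['Info']:          # substring identifying the packets of interest
                counts['ARP'] += 1
    print(counts)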
get specific text such as "Something new" using BeautifulSoup Python Question: I am making a focused crawler and facing an issue while searching for a key phrase in the document. Supposing the key phrase I want to search for in the document is "Something new", using BeautifulSoup with python I do the following

    if soup.find_all(text = re.compile("Something new",re.IGNORECASE)):
        print True

I want it to print True only for the following cases

    "something new" --> true
    "$#something new,." --> true

and not for the following cases:

    "thisSomething news" --> false
    "Somethingnew" --> false

assuming special characters are allowed. Has anyone done something like this before? Thanks for the help. Answer: Then, search for `something new` and don't apply `re.IGNORECASE`:

    import re
    from bs4 import BeautifulSoup

    data = """
    <div>
        <span>something new</span>
        <span>$#something new,.</span>
        <span>thisSomething news</span>
        <span>Somethingnew</span>
    </div>
    """

    soup = BeautifulSoup(data)
    for item in soup.find_all(text=re.compile("something new")):
        print item

Prints:

    something new
    $#something new,.

* * * You can also take a non-regex approach and [pass a function](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-function) instead of a compiled regex pattern:

    for item in soup.find_all(text=lambda x: 'something new' in x):
        print item

For the example HTML used above, it also prints:

    something new
    $#something new,.