Extracting Tags from BeautifulSoup Question: I am trying to get the Body-Tag from <http://feeds.reuters.com/~r/reuters/technologyNews/~3/ZyAuZq5Cbz0/story01.htm> but BeautifulSoup doesn't find it. Is this because of invalid HTML? If so, how can I prevent this? I also tried to prefix HTML-Errors using PyTidyLib (<http://countergram.com/open-source/pytidylib/docs/index.html>) Here is some of the code: def getContent(url, parser="lxml"): request = urllib2.Request(url) try: response = opener.open(request).read() except: print 'EMPTY CONTENT',url return None doc, errors = tidy_document(response) return parse(url, doc) def parse(url, response, parser="lxml"): try: soup = bs(response,parser) except UnicodeDecodeError as e: if parser=="lxml": return parse(url, response, "html5lib") else: print e,url print 'EMPTY CONTENT',url return None body = soup.body ... When I print out Soup, I can see the opening and closing body-Tag, but after body = soup.body, I get None. I am using Python 2.7.3 and BeautifulSoup4 It seems to work with BeautifulSoup3, but I need to stick to BS4 due to performance issues. Answer: I finally got it running. Here is the code: import urllib2 from lxml import html url = "http://www.reuters.com/article/2013/04/17/us-usa-immigration-tech-idUSBRE93F1DL20130417?feedType=RSS&feedName=technologyNews" response = urllib2.urlopen(url).read().decode("utf-8") test = html.fromstring(response) for p in test.body.iter('p'): print p.text_content()
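An addendum worth hedging (not verified against that exact Reuters page): when lxml's parser silently drops a malformed body, asking BeautifulSoup 4 for the more forgiving html5lib tree builder often recovers it, which would let you stay on BS4. A minimal sketch (requires the html5lib package to be installed):

    import urllib2
    from bs4 import BeautifulSoup

    url = "http://www.reuters.com/article/2013/04/17/us-usa-immigration-tech-idUSBRE93F1DL20130417?feedType=RSS&feedName=technologyNews"
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html, "html5lib")   # html5lib repairs broken markup before building the tree
    body = soup.body                         # should no longer be None
    print body.get_text()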
ValueError with Python and Numpy Question: I'm trying to rebuild a song in python, but I cannot concatenate the notes of the same. I get this error: > **ValueError** : operands could not be broadcast together with shapes (0) > (1250) Here's my code: import numpy as np, matplotlib.pyplot as plt def nota(f,d): ts = 0.0002 t = np.arange(0, d, ts) X = 5500*np.cos(2*np.pi*f*t) return X # II.2.b) pausa = nota(0,0) La = nota(440,0.25) Mi = nota(659.26,0.25) Do = nota(253.25,0.25) Sol = nota(783.99,0.25) Si = nota(493.88,0.25) Solbemol = nota(830.61,0.25) def FurElise(): musica = np.array((pausa,pausa,La,Mi,La,pausa,pausa,Mi,Mi,Solbemol, \ pausa,pausa,La,Mi,La,pausa,pausa,pausa,La,Mi,La, \ pausa,pausa,Mi,Mi,Solbemol,pausa,pausa,La,Mi,La, \ pausa,Do,Sol,Do,pausa,pausa,Sol,Sol,Si,pausa,pausa, \ La,Mi,La,pausa,pausa,Mi,Mi,Mi,pausa)) y=0 for x in musica: z=np.hstack((x,y)) y = y+x z=np.hstack((x,y)) plt.plot(z) plt.show() FurElise() Answer: As @filmor notes, `x` and `y` are of different shapes, and the reason for that is your definition of `pausa = nota(0,0)`. By using a `d` value of `0`, the resulting array is of length `0` while all other arrays are of length `1250`, and `y = y+x` will eventually throw the error you're seeing (e.g. after 3 iterations, given your current definition of `musica`). Assuming you want the pause to be of the same length as all other notes, you can re-define `pausa` so as to get rid of the error: pausa = nota(0,0.25)
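If the goal is for the notes to play one after another rather than be summed sample-by-sample, a concatenation-based sketch may be closer to what FurElise() intends; this sequential reading is an assumption, since the question doesn't state it:

    import numpy as np

    def nota(f, d, ts=0.0002):
        t = np.arange(0, d, ts)
        return 5500 * np.cos(2 * np.pi * f * t)

    pausa = nota(0, 0.25)            # a rest with the same length as every other note
    La = nota(440, 0.25)
    Mi = nota(659.26, 0.25)

    musica = [pausa, La, Mi, La, pausa]
    signal = np.concatenate(musica)  # join the notes end-to-end instead of adding them
    print signal.shape               # (6250,) = 5 notes x 1250 samples each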
How to refresh a browser tab after time.sleep in python Question: So this is what I'm using to open the browser import webbrowser import time url = "http://google.com" time = 5 def TestBrowse(url,time): webbrowser.open(url) time.sleep(time) I want a function or method following time.sleep that will refresh the tab that the function opens. This is a module I'm just getting familiar with, so I don't even know whether there is a better module or solution for this (or if it is possible at all). In fact my main target was to be able to close the tab, but I've been reading that there is no way to do that. If this is false, I would also love to know how to do that. I've experimented with using os.system to kill the browser, but os.system never seems to work inside a function (and it doesn't seem like a good idea anyway). Answer: Maybe using [selenium](http://docs.seleniumhq.org/) would be a better option for browser programming. It accepts python scripts. Also, you could try creating a wrapper web page with an embedded script that does the refreshing and/or exiting, though the browser might treat it as cross-site scripting and limit the functionality of the URL you are trying to access. In any case, to use that approach you would need to program in javascript rather than python.
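A hedged sketch of the selenium route (assuming Firefox; the URL and the five-second wait mirror the question, and selenium itself has to be installed separately):

    import time
    from selenium import webdriver

    driver = webdriver.Firefox()     # a browser window that the script controls
    driver.get("http://google.com")
    time.sleep(5)
    driver.refresh()                 # reload the current tab
    driver.quit()                    # close the browser entirely

Since selenium owns the window it opened, driver.quit() also covers the "close the tab" part of the question.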
How to integrate SimpleGUI with Python 2.7 and 3.0 shell Question: I am learning Python from Coursera. In this course they use **SimpleGUI** module on [CodeSkulptor](http://www.codeskulptor.org/). Can anyone tell me how to integrate SimpleGUI with python 2.7 and 3.0 shell? Answer: You can just use SimpleGUITk (<http://pypi.python.org/pypi/SimpleGUITk>) which implements a Tk version of simplegui. To use your CodeSkulptor code in the desktop, you just need to replace import simplegui with import simpleguitk as simplegui and that's it, your program made for CodeSkulptor code should work on the desktop.
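For completeness, a minimal sketch of what a CodeSkulptor-style program looks like on the desktop once the import is swapped (the frame size, labels and handler name are arbitrary examples, not part of the course code):

    import simpleguitk as simplegui

    def button_handler():
        label.set_text("Button pressed")

    frame = simplegui.create_frame("Demo", 200, 200)   # title, canvas width, canvas height
    label = frame.add_label("Hello from simpleguitk")
    frame.add_button("Press me", button_handler)
    frame.start()                                      # enters the GUI event loop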
Double the length of a python numpy array with interpolated new values Question: I have an array of 5 numbers: A = [10, 20, 40, 80, 110] I need to create a new array with a 10nth length numbers. The extra numbers could be the average number between the two # of `A`. for example: EDIT B = [10 , 15 , 20 ,30, 40, 60, 80, 95, 110 ] Is it possible using a scipy or numpy function ? Answer: Using [this answer](http://stackoverflow.com/a/5347492/1258041): In [1]: import numpy as np In [2]: a = np.array([10, 20, 40, 80, 110]) In [3]: b = a[:-1] + np.diff(a)/2 In [4]: c = np.empty(2 * a.size -1) In [5]: c[::2] = a In [6]: c[1::2] = b In [7]: c Out[7]: array([ 10., 15., 20., 30., 40., 60., 80., 95., 110.])
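An alternative one-liner sketch using numpy's interp, which samples the piecewise-linear interpolant at twice the density and gives the same midpoints:

    import numpy as np

    a = np.array([10, 20, 40, 80, 110])
    # evaluate the linear interpolant at 0, 0.5, 1, 1.5, ... in index space
    c = np.interp(np.linspace(0, a.size - 1, 2 * a.size - 1), np.arange(a.size), a)
    print c   # [ 10.  15.  20.  30.  40.  60.  80.  95. 110.]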
Can python-igraph be used for min s-t cut? Question: Can Python-igraph be used to do min s-t cut? I want the minimum cost cut that cuts the designated source and sink nodes. Thanks! Answer: Yep. from igraph import Graph from random import randint g = Graph.GRG(100, 0.2) # generate a geometric random graph g.es["capacity"] = [randint(0, 1000) for i in xrange(g.ecount())] cut = g.maxflow(0, 99, "capacity") cut.membership then gives you the membership of each vertex (0-1 vector), cut[0] gives you the vertices on one side of the cut, cut[1] gives the other, cut.value gives the value of the cut. [all credit goes to @Tamás]
Python 3.3 + Matplotlib 1.2.0: pdf export generates "'str' does not support the buffer interface" error Question: I just started migrating from matlab/mathematica to python for technical computing. I have been learning how to use the matplotlib.pyplot package and was hoping someone could help me with fonts. I ultimately need to save graphical output as pdf or eps files that I can open in Adobe Illustrator. Initially, my pdf and eps output contained outlined fonts (rather than embedded fonts retaining text information). Following [this helpful advice](http://stackoverflow.com/questions/5956182/cannot-edit-text-in-chart- exported-by-matplotlib-and-opened-in-illustrator), I ended up with the following code: import matplotlib as mpl import matplotlib.pyplot as plt # if I omit the next line, the plot saves without error, but with outlined fonts mpl.rcParams['pdf.fonttype'] = 42 #set Truetype fonts for Adobe plt.plot(range(5),range(5),'r-') plt.ylabel('y') plt.xlabel('x') plt.title('title') plt.show() plt.savefig("myfig.pdf") However, when I set rcParams['pdf.fonttype']=42, the final line generates the error copied below. Can anyone point me in the right direction? I am running Python 3.3 and matplotlib 1.2.0, using the Pyzo distribution on Mac OS 10.6. Traceback (most recent call last): File "<tmp 1>", line 11, in <module> plt.savefig("myfig.pdf") File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/pyplot.py", line 472, in savefig return fig.savefig(*args, **kwargs) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/figure.py", line 1364, in savefig self.canvas.print_figure(*args, **kwargs) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_qt4agg.py", line 161, in print_figure FigureCanvasAgg.print_figure(self, *args, **kwargs) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backend_bases.py", line 2093, in print_figure **kwargs) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backend_bases.py", line 1845, in print_pdf return pdf.print_pdf(*args, **kwargs) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_pdf.py", line 2307, in print_pdf file.close() File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_pdf.py", line 507, in close self.writeFonts() File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_pdf.py", line 607, in writeFonts fonts[Fx] = self.embedTTF(realpath, chars[1]) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_pdf.py", line 1054, in embedTTF return embedTTFType42(font, characters, descriptor) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_pdf.py", line 987, in embedTTFType42 self.currentstream.write(unicode_cmap) File "/Applications/pyzo2013b/lib/python3.3/pyzo-packages/matplotlib/backends/backend_pdf.py", line 379, in write compressed = self.compressobj.compress(data) TypeError: 'str' does not support the buffer interface Answer: This is a bug in `matplotlib` and is fixed here: <https://github.com/matplotlib/matplotlib/pull/1978#issuecomment-17493157>
Retrieving a tag value from multiple XML files in a directory using python Question: I am currently learning python to automate a few things in my job. I need to retrieve a tag value from multiple xml files in a directory. The directory has many subfolders too. I tried the following code and understood what is missing, but I am not able to fix this. Here is my code: from xml.dom.minidom import parse, parseString import os def jarv(dir): for r,d,f in os.walk(dir): for files in f: if files.endswith(".xml"): print files dom=parse(files) name = dom.getElementsByTagName('rev') print name[0].firstChild.nodeValue jarv("/path") I understand that while executing the `dom=parse(files)` line, it only has the filename without the path, so it says no such file or directory. I don't know how to fix this. Answer: You have to use os.path.join() to build the correct path from the dirname and the filename: dom=parse(os.path.join(r, files)) should do it.
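Folding that fix back into the original function, a sketch (the 'rev' tag name and the "/path" argument are kept from the question):

    import os
    from xml.dom.minidom import parse

    def jarv(directory):
        for root, dirs, files in os.walk(directory):
            for name in files:
                if name.endswith(".xml"):
                    path = os.path.join(root, name)   # full path, not just the bare file name
                    dom = parse(path)
                    revs = dom.getElementsByTagName('rev')
                    if revs:
                        print path, revs[0].firstChild.nodeValue

    jarv("/path")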
How to use a C dll in python Question: I am trying to use a C++ dll in python. I am running python 2.7 on Windows server 2012; both are 64bit. To create a dll, I followed the directions on "Walkthrough: Creating and Using a Dynamic Link Library (C++)" on <http://msdn.microsoft.com/en- us/library/vstudio/ms235636.aspx>. I used their example code. I am new to dlls and python so I thought I would start with examples. My python code: from ctypes import * hw = CDLL("Y:\dll_check\MathFuncsDll.dll") print "HelloWorld" I get the following error: Y:\dll_check> python .\MathFuncsMain/py Traceback (most recent call last): File ".\MathFunsMain.py" libimgr = CDLL("Y:\dll_check\MathFuncsDll.dll") File "C:Python27\lib\ctypes\__init__.py", line 365, in __init__ self._handle = _dlopen(self._name, mode) WindowsError: [Error 193] %1 is not a valid Win32 application What is causing my error? Answer: use 32 bit python version and compile to resolve the issue. Reason(probably) : even though your machine is 64-bit; the dll you have created is a 32 bit one. bcz through visual studio you can select and create only "Win32 application".
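Before rebuilding anything, a quick way to confirm the bitness mismatch from the Python side (the DLL must match the interpreter's word size):

    import platform
    import struct

    print platform.architecture()[0]        # e.g. '64bit'
    print struct.calcsize("P") * 8, "bit"   # pointer size in bits: 32 or 64

The other direction also works: keep 64-bit Python and build the DLL with an x64 configuration in Visual Studio instead of the default Win32 one.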
PyInstaller created exc. does not run on virtualbox with ubuntu Question: I used PyInstaller to create an executable from a GUI script I wrote (using wx.python) using this command... python /home/torosean/pyinstaller/pyinstaller.py -F -w My_GUI_login_simplified.py I can run the executable on the host computer w/o any problems by cd-ing into the dist folder and running... ./My_GUI_login_simplified Now when I test the executable in ubuntu (using virtual box) I get the error shown below. I would like to test the executable on several os's before handing it to my colleagues preferably on ubuntu and later on mac in vb again. Anyways here is the error. Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/torosean/pyinstaller/PyInstaller/loader/pyi_importers.py", line 270, in load_module File "/home/torosean/Documents/python_funcs/uploader/build/My_GUI_login_simplified/out00-PYZ.pyz/wx", line 45, in <module> File "/home/torosean/pyinstaller/PyInstaller/loader/pyi_importers.py", line 270, in load_module File "/home/torosean/Documents/python_funcs/uploader/build/My_GUI_login_simplified/out00-PYZ.pyz/wx._core", line 4, in <module> File "/home/torosean/pyinstaller/PyInstaller/loader/pyi_importers.py", line 409, in load_module ImportError: /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1: undefined symbol: _glapi_tls_Dispatch Now my questions are? 1). Is this a problem having to do with how PyInstaller compiles the script into an executable. In other words from the error shown can one say what went wrong if anything with the PyInstaller making the exec. ( I personally don't "feel" like that is the case but I don't know for sure)? 2). Is this a virtual box/ubuntu problem? Any suggestions on how to fix it, so far i haven't found anything useful. 3). for people who create executables from python scripts, how do you go about testing the executable, do you use virtual machines or is there something better out there? Any suggestions would be most appreciated. SPECS: host os (Fedora 18 64 bit) guest os (ubuntu 12.04 LTS 64 bit) virtual machine: virtual box (4.2.12) python 2.7.3 Thank you all To the moderator: Sorry if this question doesn't belong here, I thought if there is something that PyInstaller does that causes this error, someone else might benefit knowing about it. EDIT: Same result with OpenSUSE on vb.... EDIT: Did three things and it worked... 1).Installed the virtualbox from oracles website. 2). Performed an update and things seemed to magically work! Answer: I can give you half of an answer... This happens when the application you're writing depends on libGL, but pyinstaller didn't include it when it packaged up the "binary". In the pyinstaller "spec" file, you need to define additional libraries & append them to the list of binaries that are returned by the Analysis step. In my application, I've done the following: additionalLibs = [] additionalLibs.append( ("libGL.so.1", "/usr/lib64/libGL.so.1", 'BINARY') ) # yada yada a = Analysis(['myApp.py'], pathex=['/path/to/myAppDir'], hiddenimports=[], hookspath=None) pyz = PYZ(a.pure) exe = EXE(pyz, a.scripts, a.binaries + additionalLibs, a.zipfiles, a.datas, name=os.path.join('dist', 'myApp'), icon="myApp.ico", debug=False, strip=None, upx=True, console=console ) This will then include the libGL in your packaged binary -- which works great on all systems that either have no libGL.so.1 or have a compatible libGL.so.1. 
On systems that have an incompatible libGL.so.1 (as can be the case if your system is up to date and the target system is not, or vice versa), you'll still get a similar error, hence "half an answer". I was trying to find an answer to the other half (making it always work) when I found your question.
Python - connection.close() - SyntaxError: invalid syntax Question: I am trying to create quote of the day server, and I am having a problem with sockets. My program code is below: #!/usr/bin/python from socket import * from ConfigParser import * import sys import argparse import random class serverConf: port = 17 host = "" quotefile = "" def initConfig(filename): config = ConfigParser() config.add_section('Server') config.set('Server', 'host', '') config.set('Server', 'port', '17') config.add_section('Quotes') config.set('Quotes', 'file', 'quotes.txt') with open(filename, 'w') as configfile: config.write(configfile) def parseConfig(filename): configOptions = serverConf() try: config = ConfigParser() config.read(filename) configOptions.port = config.getint('Server', 'port') configOptions.host = config.get('Server', 'host') configOptions.quoteFile = config.get('Quotes', 'file') except KeyboardInterrupt: raise except: print "[Info] Configuration file \'" + filename + "\' does not exist. Creating one with the default values" configOptions.port = 17 configOptions.host = "" configOptions.quoteFile = "quotes.txt" try: initConfig(filename) except KeyboardInterrupt: raise except: print "[Fatal] Unable to create configuration file \'" + filename + "\'. This program must exit" sys.exit() print "[Info] Read configuration options" return configOptions def doInitMessage(): print "Quote Of The Day Server" print "-----------------------" print "Version 1.0 By Ian Duncan" print "" def random_line(afile): line = next(afile) for num, aline in enumerate(afile): if random.randrange(num + 2): continue line = aline return line def doCheckQuotesFile(quotesFilename): try: with open(quotesFilename): pass print "[Info] Discovered file containing quotes \'" + quotesFilename + "\'" except IOError: print "[Fatal] Unable to read quotes file \'" + quotesFilename + "\'. This program must exit" sys.exit() def Start(args): filename = "qotdconf.ini" doInitMessage() configOptions = parseConfig(filename) if configOptions.host == '': print "[Info] Will start server at: " + "*" + ":" + str(configOptions.port) else: print "[Info] Will start server at: " + configOptions.host + ":" + str(configOptions.port) doCheckQuotesFile(configOptions.quoteFile) try: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind ( (configOptions.host, configOptions.port) ) s.listen (5) print "[Info] The server is now broadcasting" while 1: connection, address = s.accept() print("[Info] Accepted Connection from " + str(address) + "!") connection.send(random_line(configOptions.quoteFile + "\n") connection.close() except socket.error: print "[Fatal] There was a problem. 
This program must exit" sys.exit() parser = argparse.ArgumentParser(description='A quote of the day server.') parser.add_argument('--port', dest='accumulate', action='store_const', const=int, default=max, help='the port the server broadcasts on') parser.add_argument('--host', dest='accumulate', action='store_const', const=str, default=max, help='the ip address or domain name the server broadcasts on') parser.add_argument('--quotes', dest='accumulate', action='store_const', const=str, default=max, help='the file that stores the quotes') parser.add_argument('--config', dest='accumulate', action='store_const', const=str, default=max, help='specify a custom configuration file') args = parser.parse_args() Start(args) When I run this, I get: File "server.py", line 129 connection.close() ^ SyntaxError: invalid syntax In other instances where I had problems, Python pointed me to the problem immediately. This time, I have no idea what is going on. What is causing this error? Answer: Ah, it's the ol' missing parenthesis on the preceding line again... connection.send(random_line(configOptions.quoteFile + "\n") Because Python automatically continues statements across lines inside parentheses, it's seeing this statement and the next as one line: connection.send(random_line(configOptions.quoteFile + "\n") connection.close() And clearly that doesn't make any sense (it needs _something_ before `connection.close()`, and _also_ still needs a closing parenthesis). So you want: connection.send(random_line(configOptions.quoteFile + "\n"))
I want to run a python script periodically with a variable persisted each repetition. How can I do this? Question: I have a python script which moves a robot based on a bearing. The bearing calculated is relative to another robot. I want to calculate a corrective factor based on the magnetic bearing the robot is moving along + the change in distance from other points in the network. this corrective factor could then be applied to the bearing calculated relative to another bot to make the bearing closer to a true magnetic bearing ( I have worked out the maths behind this but don't think there is a need to go into the details here). the way my script runs is by calling other scripts and passing values to and reading them from them. A light piece of pseduo code looks like: find a bearing relative to another bot to the point to be reached move towards it along this bearing test accuracy of the bearing calculate a correction factor I then want to repeat the script and correct the bearing initially calculated with the correction factor (simple add or subtract x degrees) How can I persist the variable each time the script repeats so that the correction factor can be added or subtracted from the next time instead of having to be recalculated from scratch? Answer: Store it in a file like that: import json json.dump(data, open(filename, 'wb')) and red it next time with f = open(filename) data = json.load(f) f.close()
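A minimal sketch of wrapping that into a load-or-default pattern for the correction factor (the file name, key and the example update value are arbitrary choices):

    import json
    import os

    STATE_FILE = "correction.json"

    def load_correction():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)["correction"]
        return 0.0                                 # first run: no correction yet

    def save_correction(value):
        with open(STATE_FILE, "w") as f:
            json.dump({"correction": value}, f)

    correction = load_correction()
    # ... compute bearing, apply correction, move, measure the error ...
    save_correction(correction + 1.5)              # example update in degrees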
Get body text of an email using python imap and email package Question: I want to retrieve body (only text) of emails using python imap and email package. As per this [SO thread](http://stackoverflow.com/questions/3449220/how-do-i- recieve-a-html-email-as-a-regular-text?rq=1), I'm using the following code: mail = email.message_from_string(email_body) bodytext = mail.get_payload()[ 0 ].get_payload() Though it's working fine for some instances, but sometime I get similar to following response [<email.message.Message instance at 0x0206DCD8>, <email.message.Message instance at 0x0206D508>] Answer: You are assuming that messages have a uniform structure, with one well-defined "main part". That is not the case; there can be messages with a single part which is not a text part (just an "attachment" of a binary file, and nothing else) or it can be a multipart with multiple textual parts (or, again, none at all) and even if there is only one, it need not be the first part. Furthermore, there are nested multiparts (one or more parts is another MIME message, recursively). In so many words, you must inspect the MIME structure, then decide which part(s) are relevant for your application. If you only receive messages from a fairly static, small set of clients, you may be able to cut some corners (at least until the next upgrade of Microsoft Plague hits) but in general, there simply isn't a hierarchy of any kind, just a collection of (not necessarily always directly related) equally important parts.
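A sketch of that inspection with walk(), which flattens nested multiparts and lets you pick out just the text/plain parts (email_body is assumed to be the raw message fetched over IMAP, as in the question):

    import email

    mail = email.message_from_string(email_body)

    text_parts = []
    for part in mail.walk():                         # visits every part, however deeply nested
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True)  # undo base64 / quoted-printable
            charset = part.get_content_charset() or "utf-8"
            text_parts.append(payload.decode(charset, "replace"))

    bodytext = "\n".join(text_parts)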
executing current python file in vim creates new file if current time is parsed Question: I am executing python files from vim as described here: [How to execute file I'm editing in Vi(m)](http://stackoverflow.com/questions/953398/how-to- execute-file-im-editing-in-vim) I observe the same behavior on Windows & Linux. For testing, I moved my .vim so as to avoid other plugins interfering. Then I set: :set makeprg=python\ % Now when I run an example file like this (called mini.py) import datetime print "hello" def foo1(): print "foo" print "str: " + str(datetime.datetime.now()) print "str: " + str(datetime.datetime.now().date()) foo1() Now when I execute :make "mini.py" 10L, 173C written :!python mini.py 2>&1| tee /tmp/vew33jl/9 hello foo str: 2013-05-07 17:01:47.124149 str: 2013-05-07 "str: 2013-05-07 17" [New File] (3 of 4): 47.124149 vim kind of chokes on the datetime.now output and creates a new file with the current date and instantly displays it. Is this behavior to be expected? If I comment out the .now() line (now().date() is not a problem apparently), it works as expected and I more or less see the text output that I'd expect. Answer: When you use `'makeprg'`, Vim parses the output according to `'errorformat'` to retrieve error messages from the output. Your date output looks conspicuously like a typical error message, and by default, `:make` jumps to the first error location it encounters. What you can do: * Use `:make!` (with bang); that will avoid the jump to the first error. Or: * In addition to setting `'makeprg'`, also clear the `'errorformat'` to avoid that Vim parses the output; unless you're only ever edit Python files with Vim; you should use `:setlocal`, not the global `:set`, and put that into `~/.vim/after/ftplugin/python.vim`: :setlocal makeprg=python\ % :setlocal errorformat=
PyCharm How to import module from network share Question: I'm running PyCharm 2.7.2 on Windows7 with interpreter v2.7.4 I need to import a module that lives on a network share. I believe the PyCharm way of doing this is to add another 'Content Root'. However PyCharm only presents the C drive in the Add Content Root dialog. How can I import the module? (without moving it or messing with pythonpath at runtime) Answer: PyCharm doesn't support UNC paths, as a workaround you can map this share to a network drive letter and PyCharm will see it. Note that it may affect performance. If running as Administrator, Windows will not allow the application to access any network drives.
Django import error - bad argument to internal function Question: I'm making a basic time-card program. I'm getting an error while importing views from urls.py. The other files in the app (models, forms, etc) can be imported, but any call to import views returns an error I haven't seen before. Request Method: GET Request URL: http://127.0.0.1:8000/admin/ Django Version: 1.5 Python Version: 3.3.0 Installed Applications: ('django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.admin', 'django.contrib.admindocs', 'timesheets') Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware') Traceback: File "C:\Python33\lib\site-packages\django\core\handlers\base.py" in get_response 103. resolver_match = resolver.resolve(request.path_info) File "C:\Python33\lib\site-packages\django\core\urlresolvers.py" in resolve 319. for pattern in self.url_patterns: File "C:\Python33\lib\site-packages\django\core\urlresolvers.py" in url_patterns 347. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "C:\Python33\lib\site-packages\django\core\urlresolvers.py" in urlconf_module 342. self._urlconf_module = import_module(self.urlconf_name) File "C:\Python33\lib\site-packages\django\utils\importlib.py" in import_module 35. __import__(name) File "c:\timecards\timecards\urls.py" in <module> 10. url(r'^/', include('timesheets.urls')), File "C:\Python33\lib\site-packages\django\conf\urls\__init__.py" in include 25. urlconf_module = import_module(urlconf_module) File "C:\Python33\lib\site-packages\django\utils\importlib.py" in import_module 35. __import__(name) File "c:\timecards\timesheets\urls.py" in <module> 2. 
from timesheets import views Exception Type: SystemError at /admin/ Exception Value: ..\Objects\tupleobject.c:143: bad argument to internal function My timesheets.urls.py is: from django.conf.urls import patterns, include, url from timesheets import views # Uncomment the next two lines to enable the admin: from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', # Examples: # url(r'^$', 'timecards.views.home', name='home'), # url(r'^timecards/', include('timecards.foo.urls')), url(r'^$', views.login), url(r'^logout/$', views.logout), url(r'^login/$', views.login), url(r'^timecards/$', views.timecards), url(r'^timecards/add/$', views.addcard), # Uncomment the admin/doc line below to enable admin documentation: # url(r'^admin/doc/', include('django.contrib.admindocs.urls')), # Uncomment the next line to enable the admin: url(r'^admin/', include(admin.site.urls)), ) timesheets.views.py is: from django.shortcuts import render, redirect from django.contrib.auth import authenticate, login, logout from timesheets.forms import AppointmentForm from timesheets.models import Appointment def login(request): context={'next':'/timecards/', 'username':''} if request.method == 'POST': username = request.POST['username'] password = request.POST['password'] context['username'] = '' user = authenticate(username=username, password=password) if user is not None: if user.is_active: login(request, user) redirect('timecards') else: context['error'] ='Did not find match for username and password' render('login.html', context) def logout(request): logout(request) redirect('login') def timecards(request): if not request.user.is_authenticated(): redirect('login') employee = request.user timecards = Appointment.objects.get('employee_id'=employee.id) context = {'employee': employee, 'timecards':timecards} return render(request, 'timecards.html', context) def addcard(request): if not request.user.is_authenticated(): redirect('login') employee = request.user if request.method == 'POST': form = AppointmentForm(request.POST) if form.is_valid(): cd = form.cleaned_data appoint = form.save(commit=False) appoint.employee_id = employee.id appoint.save() return redirect('timecards') else: form = AppointmentForm() context = {'employee': employee, 'form':form} return render(request,'addcards.html', context) Answer: Had similar issue but problem with my app, and none of the above solution worked. I tried running my views.py in python console and found the error in it. after i changed it worked for me.
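One concrete thing that running views.py by hand should surface: the view functions login and logout shadow django.contrib.auth's login and logout, so login(request, user) inside the view calls the view itself. Aliasing the auth functions avoids that collision (this fixes the shadowing bug; whether it is the cause of the SystemError is not certain):

    from django.shortcuts import redirect
    from django.contrib.auth import authenticate, login as auth_login, logout as auth_logout

    def logout(request):
        auth_logout(request)        # calls Django's logout, not this view recursively
        return redirect('login')

    # and inside the login view, call auth_login(request, user) instead of login(request, user)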
PyMC regression of many regressions? Question: I haven't been using PyMC for long, but I was pleased at how quickly I was able to get a linear regression off the ground (this code _should_ run without modification in IPython): import pandas as pd from numpy import * import pymc data=pd.DataFrame(rand(40)) predictors=pd.DataFrame(rand(40,5)) sigma = pymc.Uniform('sigma', 0.0, 200.0, value=20) params= array([pymc.Normal('%s_coef' % (c), mu=0, tau=1e-3,value=0) for c in predictors.columns]) @pymc.deterministic(plot=False) def linear_regression_model(x=predictors,beta=params): return dot(x,beta) ynode = pymc.Normal('ynode', mu=linear_regression_model, tau=1.0/sigma**2,value=data,observed=True) m = pymc.Model(concatenate((params,array([sigma,ynode])))) %time pymc.MCMC(m).sample(10000, 5000, 5, progress_bar=True) In this model there are 40 subjects (observations) and 5 covariates for each subject. The model won't converge because of the random data, but it samples with no error (and my real data does converge to an accurate result). The model I am having a problem with is an extension of this. Each subject actually has 3 (or N) observations and so I need to fit a line to these observations and then use the intercept of the line as the "data" or input for the final regression node. I think this a classical hierarchical model, but correct me if I'm thinking about it in the wrong way. My solution was to set up a series of 40 linear regressions (one for each subject) and then use the vector of intercept parameters as the data for the final regression. #nodes for fitting 3 values for each of 40 subjects with a line #40 subjects, 3 data points each data=pd.DataFrame(rand(40,3)) datax=arange(3) """ to fit a line to each subject's data we need: (1) a slope and offset parameter (2) a stochastic node for the data (3) a sigma parameter for the stochastic node Initialize them all as object arrays """ sigmaArr=empty((len(data.index)),dtype=object) ynodeArr=empty((len(data.index)),dtype=object) slopeArr=empty((len(data.index)),dtype=object) offsetArr=empty((len(data.index)),dtype=object) #Fill in the empty arrays for i,ID in enumerate(data.index): sigmaArr[i]=pymc.Uniform('sigma_%s' % (ID) , 0.0, 200.0, value=20) slopeArr[i]=pymc.Normal('%s_slope' % (ID), mu=0, tau=1e-3,value=0) offsetArr[i]=pymc.Normal('%s_avg' % (ID), mu=0, tau=1e-3,value=data.ix[ID].mean()) #each model fits a line to the three data points @pymc.deterministic(name='time_model_%s' % ID,plot=False) def line_model(xx=datax,slope=slopeArr[i],avg=offsetArr[i]): return slope*xx + avg ynodeArr[i]=pymc.Normal('ynode_%s' % (ID), mu = line_model, tau = 1/sigmaArr[i]**2,value=data.ix[ID],observed=True) #nodes for final regression (there are 5 covariates in this regression) predictors=pd.DataFrame(rand(40,5)) sigma = pymc.Uniform('sigma', 0.0, 200.0, value=20) params= array([pymc.Normal('%s_coef' % (c), mu=0, tau=1e-3,value=0) for c in predictors.columns]) @pymc.deterministic(plot=False) def linear_regression_model(x=predictors,beta=params): return dot(x,beta) ynode = pymc.Normal('ynode', mu=linear_regression_model, tau=1.0/sigma**2,value=offsetArr) nodes=concatenate((sigmaArr,ynodeArr,slopeArr,offsetArr,params,array([sigma, ynode]))) m = pymc.Model(nodes) %time pymc.MCMC(m).sample(10000, 5000, 5, progress_bar=True) This model fails at the sampling step. The error appears to be in trying to cast offsetArr as dtype=float64 when instead its dtype=object. What is the correct way to do this? 
Do I need another deterministic node to help cast my offsetArr to float64? Do I need a special kind of pymc.Container? Thanks for your help! Answer: Have you tried using a simple list instead of numpy arrays to store the PyMC objects?
getting started with cython by using cdef Question: I dont understand how to setup and run code with cython. I added `cdef`, `double`, etc to pertinent pieces of my code. setup.py of course the name hello isn't being used. [cython doc](http://docs.cython.org/src/quickstart/build.html) from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext ext_modules = [Extension("hello", ["hello.pyx"])] setup( name = 'Hello world app', cmdclass = {'build_ext': build_ext}, ext_modules = ext_modules ) **Edit** Redefining the question to make it more of a concrete example. I want to run this system of ODEs through Cython. I am not sure if `double` is the best choice but the numbers are on the order of magnitudes of 10 to the 10 and are non-integer. So I want to call this in a bigger code where `mus` is a variable defined in the script calling the module. I have my `setup.py` file to compile the pyx file. I am not sure with what I need to do so I can call this ode now. Say I name the module 3bodyproblem. I would then call it in the scipt as `import 3bodyproblem` and then do `3bodyproblem.3bodyproblem(what would this input be)' I have be reading [intro to cython for odes](http://hplgit.github.io/teamods/cyode/cyode-sphinx/main_cyode.html) but I am not sure how to use their example with mine. Also, if it needs to be in rk format, see the code below the first code. **Code 1** cdef deriv(double u, dt): cdef double u[0], u[1], u[2] return [u[3], u[4], u[5], -mus * u[0] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[1] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[2] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5] dt = np.linspace(0.0, t12sec, t12sec) u = odeint(deriv, u0, dt, atol = 1e-13, rtol = 1e-13) x, y, z, vx, vy, vz = u.T **Code 2** cdef deriv(dt, double u): cdef double u[0], u[1], u[2], u[3], u[4], u[5] return [u[3], u[4], u[5], -mus * u[0] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[1] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[2] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5] solver = ode(deriv).set_integrator('dop853') solver.set_initial_value(u0) z = np.zeros(300000) for ii in range(300000): z[ii] = solver.integrate(dt[ii])[0] x, y, z, x2, y2, z2 = z.T If I just try to compile code 1, cython doesn't know about the variables defined the larger `.py` file. I want to avoid setting variables in the cython code so it can be used else where with out re compiling. Here are the errors: Error compiling Cython file: ------------------------------------------------------------ ... import numpy as np from scipy.integrate import odeint cdef deriv(double u, dt): cdef double u[0], u[1], u[2] ^ ------------------------------------------------------------ ODEcython.pyx:7:17: 'u' redeclared Error compiling Cython file: ------------------------------------------------------------ ... import numpy as np from scipy.integrate import odeint cdef deriv(double u, dt): cdef double u[0], u[1], u[2] ^ ------------------------------------------------------------ ODEcython.pyx:7:23: 'u' redeclared Error compiling Cython file: ------------------------------------------------------------ ... import numpy as np from scipy.integrate import odeint cdef deriv(double u, dt): cdef double u[0], u[1], u[2] ^ ------------------------------------------------------------ ODEcython.pyx:7:29: 'u' redeclared Error compiling Cython file: ------------------------------------------------------------ ... 
-mus * u[0] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[1] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[2] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5] dt = np.linspace(0.0, t12sec, t12sec) ^ ------------------------------------------------------------ ODEcython.pyx:16:28: undeclared name not builtin: t12sec Error compiling Cython file: ------------------------------------------------------------ ... -mus * u[1] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[2] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5] dt = np.linspace(0.0, t12sec, t12sec) u = odeint(deriv, u0, dt, atol = 1e-13, rtol = 1e-13) ^ ------------------------------------------------------------ ODEcython.pyx:17:16: Cannot convert 'object (double, object)' to Python object Error compiling Cython file: ------------------------------------------------------------ ... -mus * u[1] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[2] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5] dt = np.linspace(0.0, t12sec, t12sec) u = odeint(deriv, u0, dt, atol = 1e-13, rtol = 1e-13) ^ ------------------------------------------------------------ ODEcython.pyx:17:20: undeclared name not builtin: u0 Error compiling Cython file: ------------------------------------------------------------ ... cdef deriv(double u, dt): cdef double u[0], u[1], u[2] return [u[3], u[4], u[5], -mus * u[0] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, ^ ------------------------------------------------------------ ODEcython.pyx:11:17: undeclared name not builtin: mus building 'ODEcython' extension creating build creating build/temp.linux-x86_64-2.7 x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c ODEcython.c -o build/temp.linux-x86_64-2.7/ODEcython.o ODEcython.c:1:2: error: #error Do not use this file, it is the result of a failed Cython compilation. 
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 **Maybe this will help** So I want to use the module in a file like this: (different deriv function but just ignore that the idea is the same) import numpy as np from scipy.integrate import ode import pylab from mpl_toolkits.mplot3d import Axes3D from scipy.optimize import brentq from scipy.optimize import fsolve me = 5.974 * 10 ** 24 # mass of the earth mm = 7.348 * 10 ** 22 # mass of the moon G = 6.67259 * 10 ** -20 # gravitational parameter re = 6378.0 # radius of the earth in km rm = 1737.0 # radius of the moon in km r12 = 384400.0 # distance between the CoM of the earth and moon rs = 66100.0 # distance to the moon SOI Lambda = np.pi / 6 # angle at arrival to SOI M = me + mm d = 300 # distance the spacecraft is above the Earth pi1 = me / M pi2 = mm / M mue = 398600.0 # gravitational parameter of earth km^3/sec^2 mum = G * mm # grav param of the moon mu = mue + mum omega = np.sqrt(mu / r12 ** 3) # distance from the earth to Lambda on the SOI r1 = np.sqrt(r12 ** 2 + rs ** 2 - 2 * r12 * rs * np.cos(Lambda)) vbo = 10.85 # velocity at burnout h = (re + d) * vbo # angular momentum energy = vbo ** 2 / 2 - mue / (re + d) # energy v1 = np.sqrt(2.0 * (energy + mue / r1)) # refer to the close up of moon diagram # refer to diagram for angles theta1 = np.arccos(h / (r1 * v1)) phi1 = np.arcsin(rs * np.sin(Lambda) / r1) # p = h ** 2 / mue # semi-latus rectum a = -mue / (2 * energy) # semi-major axis eccen = np.sqrt(1 - p / a) # eccentricity nu0 = 0 nu1 = np.arccos((p - r1) / (eccen * r1)) # Solving for the eccentric anomaly def f(E0): return np.tan(E0 / 2) - np.sqrt((1 - eccen) / (1 + eccen)) * np.tan(0) E0 = brentq(f, 0, 5) def g(E1): return np.tan(E1 / 2) - np.sqrt((1 - eccen) / (1 + eccen)) * np.tan(nu1 / 2) E1 = fsolve(g, 0) # Time of flight from r0 to SOI deltat = (np.sqrt(a ** 3 / mue) * (E1 - eccen * np.sin(E1) - (E0 - eccen * np.sin(E0)))) # Solve for the initial phase angle def s(phi0): return phi0 + deltat * 2 * np.pi / (27.32 * 86400) + phi1 - nu1 phi0 = fsolve(s, 0) nu = -phi0 gamma = 0 * np.pi / 180 # angle in radians of the flight path vx = vbo * (np.sin(gamma) * np.cos(nu) - np.cos(gamma) * np.sin(nu)) # velocity of the bo in the x direction vy = vbo * (np.sin(gamma) * np.sin(nu) + np.cos(gamma) * np.cos(nu)) # velocity of the bo in the y direction xrel = (re + 300.0) * np.cos(nu) - pi2 * r12 yrel = (re + 300.0) * np.sin(nu) u0 = [xrel, yrel, 0, vx, vy, 0] def deriv(u, dt): return [u[3], # dotu[0] = u[3] u[4], # dotu[1] = u[4] u[5], # dotu[2] = u[5] (2 * omega * u[4] + omega ** 2 * u[0] - mue * (u[0] + pi2 * r12) / np.sqrt(((u[0] + pi2 * r12) ** 2 + u[1] ** 2) ** 3) - mum * (u[0] - pi1 * r12) / np.sqrt(((u[0] - pi1 * r12) ** 2 + u[1] ** 2) ** 3)), # dotu[3] = that (-2 * omega * u[3] + omega ** 2 * u[1] - mue * u[1] / np.sqrt(((u[0] + pi2 * r12) ** 2 + u[1] ** 2) ** 3) - mum * u[1] / np.sqrt(((u[0] - pi1 * r12) ** 2 + u[1] ** 2) ** 3)), # dotu[4] = that 0] # dotu[5] = 0 dt = np.linspace(0.0, 259200.0, 259200.0) # secs to run the simulation u = odeint(deriv, u0, dt) x, y, z, x2, y2, z2 = u.T fig = pylab.figure() ax = fig.add_subplot(111, projection='3d') ax.plot(x, y, z, color = 'r') # adding the moon phi = np.linspace(0, 2 * np.pi, 100) theta = np.linspace(0, np.pi, 100) xm = rm * np.outer(np.cos(phi), np.sin(theta)) + r12 - pi2 * r12 ym = rm * np.outer(np.sin(phi), np.sin(theta)) zm = rm * np.outer(np.ones(np.size(phi)), np.cos(theta)) ax.plot_surface(xm, ym, zm, color = '#696969', linewidth = 0) ax.auto_scale_xyz([-8000, 
385000], [-8000, 385000], [-8000, 385000]) # adding the earth xe = re * np.outer(np.cos(phi), np.sin(theta)) - pi2 * r12 ye = re * np.outer(np.sin(phi), np.sin(theta)) ze = re * np.outer(np.ones(np.size(phi)), np.cos(theta)) ax.plot_surface(xe, ye, ze, color = '#4169E1', linewidth = 0) ax.auto_scale_xyz([-8000, 385000], [-8000, 385000], [-8000, 385000]) pylab.show() Answer: try something like this: # hello.pyx cimport numpy as np import numpy as np def deriv(np.ndarray u, double mus): return [u[3], u[4], u[5], -mus * u[0] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[1] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5, -mus * u[2] / (u[0] ** 2 + u[1] ** 2 + u[2] ** 2) ** 1.5] build as before, than use it in python like this: import hello [...] u_new = hello.deriv(u, mus) # or put it into odeint `cdef` functions are only callable from the c-side, also you must put all parameters in the function definition.
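To round this out, a sketch of compiling and calling the module from the larger script (the file is assumed to be saved as hello.pyx next to the setup.py shown earlier; mus, u0 and dt here are placeholder values, not the ones from the real mission script):

    # build once in a shell:  python setup.py build_ext --inplace
    import numpy as np
    from scipy.integrate import odeint
    import hello                        # the compiled extension module

    mus = 398600.0                      # placeholder gravitational parameter
    u0 = np.array([7000.0, 0.0, 0.0, 0.0, 7.5, 0.0])   # placeholder state [x, y, z, vx, vy, vz]
    dt = np.linspace(0.0, 5000.0, 5000)

    # odeint passes (u, t); forward u plus the extra parameter to the Cython deriv
    u = odeint(lambda u, t: hello.deriv(u, mus), u0, dt)
    x, y, z, vx, vy, vz = u.T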
Scraperwiki character encoding anomaly Question: Here is a ScraperWiki scraper written in Python: import lxml.html import scraperwiki from unidecode import unidecode html = scraperwiki.scrape("http://www.timeshighereducation.co.uk/world-university-rankings/2012-13/world-ranking/range/001-200") root = lxml.html.fromstring(html) for tr in root.cssselect("table.ranking tr"): if len(tr.cssselect("td.rank")) > 0 and len(tr.cssselect("td.uni")) > 0: university = unidecode(tr.cssselect("td.uni")[0].text_content()).strip().title() if 'cole' in university: print university It produces the following output: Ecole Polytechnique Federale De Lausanne Ecole Normale Superieure Acole Polytechnique Ecole Normale Superieure De Lyon My question: what is causing the initial character on the third output line to be rendered as "A" rather than as "E", and how can I stop this from happening? Answer: Based on [soulseekah](http://stackoverflow.com/users/482864/soulseekah)'s helpful comment above, and on the _lxml_ docs [here](http://lxml.de/lxmlhtml.html) and [here](http://lxml.de/elementsoup.html#using-only-the-encoding-detection), the following solution works: import lxml.html import scraperwiki from unidecode import unidecode from BeautifulSoup import UnicodeDammit def decode_html(html_string): converted = UnicodeDammit(html_string, isHTML=True) if not converted.unicode: raise UnicodeDecodeError( "Failed to detect encoding, tried [%s]", ', '.join(converted.triedEncodings)) return converted.unicode html = scraperwiki.scrape("http://www.timeshighereducation.co.uk/world-university-rankings/2012-13/world-ranking/range/001-200") root = lxml.html.fromstring(decode_html(html)) for tr in root.cssselect("table.ranking tr"): if len(tr.cssselect("td.rank")) > 0 and len(tr.cssselect("td.uni")) > 0: university = unidecode(tr.cssselect("td.uni")[0].text_content()).strip().title() if 'cole' in university: print university
How do I zip a file in python Question: How do I zip a file in python and how do specify the compression level ? I have the below code so far and the error I get is: Zip("log4j.dl-service.log.2013-05-03-22",9) AttributeError: 'str' object has no attribute 'ZipFile' Code: import zipfile import fileinput def Zip(file,level): """ This function uses the zip library native to python to compress files with Usage: Zip(file,level) """ if file and level: try: zipfile = file+'.zip' output = zipfile.ZipFile(zipfile, 'wb') for line in fileinput.input(file): output.write(line) output.close() if os.path.exists(zipfile): return zipfile else: logMe('critical',"zip failed in Zip()") return False except Exception,e: logMe('critical',e) Answer: The problem here is this line: zipfile = file+'.zip' After that, `zipfile` no longer refers to the module, but to this string. So, when you do this: output = zipfile.ZipFile(zipfile, 'wb') You're asking a string, not a module, to call `ZipFile`. Hence this error: AttributeError: 'str' object has no attribute 'ZipFile' The fix is to not call your variable `zipfile`. As Jeff Langemeier says, the key is: > Don't use variable names that are the same as your imports that's just > asking for trouble. B: Don't use variable names that are the same as your > imports it makes it impossible to read. C: Don't use variable names that are > the same as your imports. More generally, don't use the same name for two different things. * * * For your second question: > how do specify the compression level You can't. This is intentional, for the same reason that the `zip` command- line tool no longer documents 10 levels of compression, just `-0` for store (no compression), and `-9` for "compress better". There's almost never any good use for anything but store or the default. `-9` often doesn't really compress better than the default—or it compresses some files better and others worse. If you need better compression, you need a better algorithm—e.g., use `.tar.bz2` instead of `.zip`, or use `p7zip` (which can create zip-compatible files) instead of plain `zip`. So, Python's library only gives you two choices: store or default. As [the docs](http://docs.python.org/2/library/zipfile.html#zipfile.ZipFile) show: > `class zipfile.ZipFile(file[, mode[, compression[, allowZip64]]])` > > … > > compression is the ZIP compression method to use when writing the archive, > and should be ZIP_STORED or ZIP_DEFLATED And likewise for the `write`/`writestr` methods. If you really want to do this, you can call `zlib.compress` directly, create a `ZipInfo` object directly, and add it to the archive yourself. If you look at [the source](http://hg.python.org/cpython/file/2.7/Lib/zipfile.py#l1196) (which is linked from the docs), you can see what `writestr` does—it's really not complicated once you strip out all the irrelevant conditions and type checks. But really, I don't think you want to do this.
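With the naming collision removed, a minimal working version of the helper might look like this (the compression level argument is dropped, per the above; the logging and exception handling from the question are left out for brevity):

    import os
    import zipfile

    def zip_file(path):
        """Compress a single file into path + '.zip' using deflate."""
        archive_path = path + '.zip'
        archive = zipfile.ZipFile(archive_path, 'w', zipfile.ZIP_DEFLATED)
        try:
            archive.write(path, os.path.basename(path))  # store under its basename, no directories
        finally:
            archive.close()
        return archive_path if os.path.exists(archive_path) else False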
A good way to make long strings wrap to newline in Python? (3.x) Question: In my project, I have a bunch of strings that are read in from a file. Most of them, when printed in the command console, exceed 80 characters in length and wrap around, looking ugly. I want to be able to have Python read the string, then test if it is over 75 characters in length. If it is, then split the string up into multiple strings, then print one after the other on a new line. I also want it to be smart, not cutting off full words. i.e. `"The quick brown <newline> fox..."` instead of `"the quick bro<newline>wn fox..."`. I've tried modifying similar code that truncates the string after a set length, but just trashes the string instead of putting it in a new line. What are some methods I could use to accomplish this? Answer: You could use `textwrap` module: >>> import textwrap >>> strs = "In my project, I have a bunch of strings that are read in from a file. Most of them, when printed in the command console, exceed 80 characters in length and wrap around, looking ugly." >>> print(textwrap.fill(strs, 20)) In my project, I have a bunch of strings that are read in from a file. Most of them, when printed in the command console, exceed 80 characters in length and wrap around, looking ugly. **help** on `textwrap.fill` >>> textwrap.fill? Definition: textwrap.fill(text, width=70, **kwargs) Docstring: Fill a single paragraph of text, returning a new string. Reformat the single paragraph in 'text' to fit in lines of no more than 'width' columns, and return a new string containing the entire wrapped paragraph. As with wrap(), tabs are expanded and other whitespace characters converted to space. See TextWrapper class for available keyword args to customize wrapping behaviour. Use `regex` if you don't want to merge a line into another line: import re strs = """In my project, I have a bunch of strings that are. Read in from a file. Most of them, when printed in the command console, exceed 80. Characters in length and wrap around, looking ugly.""" print('\n'.join(line.strip() for line in re.findall(r'.{1,40}(?:\s+|$)', strs))) """ Reading a single line at once: for x in strs.splitlines(): print '\n'.join(line.strip() for line in re.findall(r'.{1,40}(?:\s+|$)', x)) """ **output:** In my project, I have a bunch of strings that are. Read in from a file. Most of them, when printed in the command console, exceed 80. Characters in length and wrap around, looking ugly.
Django template: localize number returned from simple tag Question: I hope this question is not here, I have googled it for long time and not found anything. I have problem with formating numbers in templates, but somehow special. Maybe, this is whole problem how to call filters on values returned form simple tags. So, first, some facts: * using python 2.7 and Django 1.5 * In czech language, we are using comma "," as decimal separator and optionaly space " " as thousand separator (more often is to write numbers without thousand separator). * Normally, writing number by `{{ price }}` is printing e.g. **16,0** For some reason, I have to work with numbers in template: {% count_basic price.leafletPrice.price product.amount product.amount_unit product.category.basic_amount product.category.basic_amount_unit %} count basic is simple tag: from django import template register = template.Library() @register.simple_tag def count_basic(price, product_amount, product_unit, category_amount, category_unit): if product_unit == 'mililiter' or product_unit == 'gram': return (float(price) / product_amount) * 1000 else: return float(price) / product_amount Result is that is't printed as **16.0**. I have set l10n in `settings.py`to True: USE_I18N = True USE_L10N = True I have also found "humanize" as working but firstly, I don't know how to call simple tag/filter with value returned from another simple tag (maybe using `with`?), secondly, if it's working for normal numbers, why not for numbers returned from simple tag? Is the number badly returned? Should be converted? I have tried also return value as `Decimal()` and not working. Any easy working solution appreciated. Answer: OK, finally, I have found out how to do this. I have changed my simple tag, specially the return part(s) to: from django.template.defaultfilters import floatformat return floatformat(float(price) / product_amount, 2) `floatformat` made template to display number formated, with comma as decimal separator. Two decimal places (second parameter of floatformat) was requested.
Python Regex Mark-Up Question: Hi guys having trouble with a particular problem. I am using python's regex to alter the markup source to output html format. markup source: [ # sometextsometextsometextsometextsometextsometext. # # sometextsometextsometextsometextsometextsometextsometextsometext sometextsometextsometextsometextsometextsometext. # ] [ hello i am a normal paragraph. ] desired output: <ol> <li> sometextsometextsometextsometextsometextsometext. </li> <li> sometextsometextsometextsometextsometextsometextsometextsometext sometextsometextsometextsometextsometextsometext. </li> </ol> <p> hello i am a normal paragraph. </p> Answer: import re with open('mk.txt') as f: with open('newmk.txt','w+') as g: text = f.read() SquareGroups = re.findall(r'\[(?:.|\n)+?\]',text) for group in SquareGroups: if '#' in group: #must be ol group = group.replace('[','<ol>') group = group.replace(']','</ol>') group = re.sub('#(?= ?\w)','<li>',group) group = re.sub('(?<=[\w ])#','</li>',group) else: group = group.replace('[','<p>') group = group.replace(']','</p>') g.write(group) g.write('\n') #optional, just makes the output look 'nicer' Transforms your input in `mk.txt` into the following text in `newmk.txt`: <ol> <li> sometextsometextsometextsometextsometextsometext. </li> <li> sometextsometextsometextsometextsometextsometextsometextsometext sometextsometextsometextsometextsometextsometext. </li> </ol> <p> hello i am a normal paragraph. </p>
NYtimes API, python Question: Hi I want to get all the information after 'title' from NYtimes API, here is my code from urllib2 import urlopen from json import loads import codecs import time def call_the_articles(): url = "http://api.nytimes.com/svc/search/v1/article?query=US&facets=POLITICS&api-key=##" return loads(urlopen(url).read()) articles = call_the_articles() if __name__ == '__main__': for story in articles("results"): print story['title'].encode('ascii', 'replace') But when I run in terminal, the error coming out like: File "NYtimes.py", line 10, in <module> articles = call_the_articles() File "NYtimes.py", line 8, in call_the_articles return loads(urlopen(url).read()) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 406, in open response = meth(req, response) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 519, in http_response 'http', request, response, code, msg, hdrs) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 444, in error return self._call_chain(*args) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 378, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 527, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 400: Bad Request How to solve the problem? Answer: I suspect what you want is: url = "http://api.nytimes.com/svc/search/v1/article?format=json&query=US+des_facet%3A%5BPOLITICS+AND+GOVERNMENT%5D&api-key=## There are a couple of things going on to cause a bad request: 1.) You're using the `facets` keyword incorrectly. From the [Times API developer docs on facets](http://developer.nytimes.com/docs/read/article_search_api#h3-facets- working): > Facets can be thought of as search "perspectives." With facets, you can look > at search results from different perspectives, and you can approach your > search queries from different angles. Each facet can be seen as representing > a property or characteristic of Times article data. > > Facets can reveal points of commonality and distinction that are not > immediately apparent. For example, two articles with the word "bicycle" in > the title may have two very different nytd_section_facet (NYTimes.com > section) values: "Movies" and "Health." Similarly, two articles that discuss > seemingly disparate topics, such as cloud computing and auto shows, may > share a des_facet (descriptive subject term) value: "NEW MODELS, DESIGN AND > PRODUCTS." 2.)You need to URLEncode your query when you send it through urlopen(). Also, `articles` will be a dict, so you'll want to get the articles out using `[]`: for story in articles["results"]: If the query here is not exactly what you want, NYT has a tool that allows you to play with constructing your queries: [NYT API Request Tool](http://prototype.nytimes.com/gst/apitool/index.html?api_id=0&request_id=0&query=US%20des_facet%3a%5BPOLITICS%20AND%20GOVERNMENT%5D&facets=&begin_date=&fields=&offset=&rank=&resp_format=json&perform_request=Make%20Request&use_pp=on).
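A sketch of point 2, letting urllib do the escaping so the spaces, colon and brackets in the facet term are encoded correctly (the facet value follows the URL suggested above; the API key is a placeholder):

    import urllib
    import urllib2
    from json import loads

    params = urllib.urlencode({
        'query': 'US des_facet:[POLITICS AND GOVERNMENT]',
        'api-key': 'YOUR_KEY_HERE',
    })
    url = "http://api.nytimes.com/svc/search/v1/article?" + params
    articles = loads(urllib2.urlopen(url).read())
    for story in articles["results"]:
        print story['title'].encode('ascii', 'replace')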
Sending MMS via MonkeyRunner on Android throws Java exception Question: I need to send MMS by writing a MonkeyRunner script. My script is as below and it throws an exception. Can anyone please help? I am interested in writing scripts using intents and not using co-ordinates method: from com.android.monkeyrunner import MonkeyDevice, MonkeyRunner, MonkeyImage device= MonkeyDevice for i in range(5): device =MonkeyRunner.waitForConnection(8) if device != None: print "Device found..." break; Intent sendIntent = new Intent("android.intent.action.SEND_MSG"); sendIntent.putExtra("999999", toText); sendIntent.putExtra(Intent.EXTRA_SUBJECT, "MMS"); sendIntent.putExtra("sms_body", textMessage); sendIntent.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(new File("/sdcard/file.gif"))); sendIntent.setType("image/jpeg"); device.startActivity(sendIntent); > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > Script terminated due to an exception > 130508 12:37:35.663:S [main] > [com.android.monkeyrunner.MonkeyRunnerOptions]Synta xError: ("mismatched > input 'sendIntent' expecting NEWLINE", > ('C:\Users\halappa\Work\MMBU\EOS2\ES2\Samsung\adt-bundle- > windows-x86_64-20130219\adt-bundle > -windows-x86_64-20130219\sdk\tools\mms.py', 9, 7, 'Intent sendIntent = new > Intent("android.intent.action.SEND_MSG"); \n')) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at org.python.core.ParserFacade.fixParseError(ParserFacade.java:94) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at org.python.core.ParserFacade.parse(ParserFacade.java:143) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at org.python.core.Py.compile_flags(Py.java:1644) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at org.python.core.**builtin**.execfile_flags(**builtin**.java:530) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at org.python.util.PythonInterpreter.execfile(PythonInterpreter.java:156) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at com.android.monkeyrunner.ScriptRunner.run(ScriptRunner.java:116) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at > com.android.monkeyrunner.MonkeyRunnerStarter.run(MonkeyRunnerStarter.java:77) > 130508 12:37:35.663:S [main] [com.android.monkeyrunner.MonkeyRunnerOptions] > at > com.android.monkeyrunner.MonkeyRunnerStarter.main(MonkeyRunnerStarter.java:18 > 9) Answer: There is a way to send an SMS or an MMS programatically in a monkeyrunner script using a device shell command to use activity manager to start an activity with an intent built with the start parameters device.shell("am start -a android.intent.action.VIEW -t vnd.android-dir/mms-sms -e address \'number or adderess goes here\' -e subject \'subject goes here\' -e sms_body \'message body goes here\'") This will launch the default SMS or MMS application with the number/address, subject and message as supplied in the intent extras. If you don't have a default SMS/MMS messaging application and there are more than one on the device, you will need to choose one of the several in a popup dialog. Whether you send an SMS or MMS seems to depend on whether there is a subject extra included. On my devices, the message is an SMS if there is no subject extra and is an MMS if there is a subject extra. 
The above will only launch the SMS/MMS application with the supplied address and message and, if also supplied, the subject. However, it won't send the message. I've always had to add a touch to the send button with its x, y coordinates to get the message to send. device.touch(x coordinate here, y coordinate here, MonkeyDevice.DOWN_AND_UP) There are three extras for the intent: an address, a subject and a sms_body. I've tried using monkeyrunner with the direct intent creation and not the `am start` command from the device shell, but I've never successfully passed extras into the intent with that approach, whereas the code above works. The other direct intent creation works for me only when the intent doesn't need extras. If you need the x, y coordinates of the send button, you can get them for more recent devices by turning on "Pointer location" in the device's developer options. The device's screen will then have a very narrow translucent ribbon at the top of the screen with information about your screen touches, blue crosshairs marking a current touch location and a blue track/trail from any touch motion. The ribbon at the top indicates location, velocity of motion and other data. For touch coordinates, launch the application and touch the screen where the send button is to find its coordinates. Coordinates are ALWAYS relative to the uppermost left corner of the current device orientation. That means if you rotate the device from portrait to landscape, the coordinates will be relative to whatever corner becomes the uppermost left corner in the new orientation, even if the application doesn't rearrange itself for the new orientation. Also, in my scripts, I dismiss the soft keyboard with a programmatic BACK button just before the touch for the send button. device.press('KEYCODE_BACK', MonkeyDevice.DOWN_AND_UP) I've found that whether the soft keyboard is shown or not is not very predictable, so, before dismissing the soft keyboard, I include a test for whether it is showing or not using a method shown here in another stack overflow answer: [How to Determine Whether Soft Keyboard Is Shown on Screen](http://stackoverflow.com/questions/12903907/how-to-determine-whether-softkeyboard-is-shown-on-the-screen-while-using-monke/19686510#19686510) Enjoy!
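Putting the pieces above together, a minimal monkeyrunner sketch might look like this; the send-button coordinates, the address/subject/body values and the 3-second pause are placeholders you will have to adapt to your own device:

    from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

    device = MonkeyRunner.waitForConnection(8)
    # launch the messaging app with address, subject and sms_body extras
    device.shell("am start -a android.intent.action.VIEW -t vnd.android-dir/mms-sms "
                 "-e address '999999' -e subject 'MMS' -e sms_body 'hello there'")
    MonkeyRunner.sleep(3)                                    # give the app time to come up
    device.press('KEYCODE_BACK', MonkeyDevice.DOWN_AND_UP)   # dismiss the soft keyboard if it appeared
    device.touch(500, 800, MonkeyDevice.DOWN_AND_UP)         # tap the send button (placeholder coordinates)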
Python: Importing files based on which files the user wants Question: I have the following directory structure + code | --+ plugins | -- __init__.py -- test_plugin.py (has a class TestPlugin) -- another_test_plugin.py (has a class AnotherTestPlugin) --+ load.py --+ __init__.py In load.py, I want to be able to initialize only those classes that the user specifies. For example, let's say I do something like $ python load.py -c test_plugin # Should only import test_plugin.py and initialize an object of the TestPlugin class I am having trouble trying to use the "imp" module to do it. It keeps on saying "No such file or directory". My understanding is that it is somehow not understanding the path properly. Can someone help me out with this? Answer: OK, your problem is a path-related one. You are assuming the script is being run from the same directory as load.py, which is not the case. What you have to do is something like: import imp, os, plugins path = os.path.dirname(plugins.__file__) imp.load_source('TestPlugin', os.path.join(path, 'test_plugin.py')) where `plugins` is the package containing all your plugins (i.e. just the empty `__init__.py`); its `__file__` attribute gives you the full path to your plugin modules' files.
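Building on that, here is a rough sketch of how load.py could tie the -c option to the loader; the test_plugin -> TestPlugin naming convention is an assumption based on the file names in the question:

    import argparse
    import imp
    import os

    import plugins

    def load_plugin(name):
        path = os.path.dirname(plugins.__file__)
        module = imp.load_source(name, os.path.join(path, name + '.py'))
        # 'test_plugin' -> 'TestPlugin', 'another_test_plugin' -> 'AnotherTestPlugin'
        class_name = ''.join(part.capitalize() for part in name.split('_'))
        return getattr(module, class_name)()

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('-c', dest='plugin', required=True)
        args = parser.parse_args()
        plugin_instance = load_plugin(args.plugin)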
Python - Extract and reformat field names from rows of data Question: I have a flat text file (infile) that I would like to restructure. It has a few tab-delimited columns and looks something like this: Person1 HEIGHT=60;WEIGHT=100;AGE=22 Person2 HEIGHT=62;WEIGHT=101;AGE=25 Person3 HEIGHT=64;WEIGHT=110;AGE=29 and I want it to look like this: PERSON HEIGHT WEIGHT AGE 1 60 100 22 2 62 101 25 3 64 110 29 You can see that the second column actually contains several semicolon- delimited header/value fields, and I want to restructure them into typical column header rows. Right now I have: for line in infile: line = line.split("\t") line_meta = line[1].split(";") print line_meta I am thinking the best solution will to now loop over the line_meta variable, use use regular expressions to detect header names (detect strings which start with multiple capital letters and ends with "="_), add each header to a dictionary as a key, and then store the rest of the string as the value. Then, for the next row, if the same header is detected just append to the existing dictionary. Can anyone help with this code or provice feedback about how to proceed? Thank you **EDIT:** Thank you for your responses. I simplified my data for this example, but here is what one of the actual meta columns looks like (still ; delimited, but values types are mixed): P=0.9626;IPU=.$.+1T.+1T.+;IRF=ncRNA;IUC=UTR3;IGN=NCRNA00115;IGI=NCRNA00115,RP11-206L10.16-001;IET=0;IEO=0;IEN=.;IHT=0;IHVC=0;IHD=.;IHI=.;IHN=.;IDI=.;IDN=.;ITMAF=.;ITAMR=.;ITASN=.;ITAFR=.;ITEUR=.;ITNRB=+A;ISF=.;ISD=.;ISM=.;ISX=.; Answer: You could just use one regular expression to split out the key=value pairs: import re key_value = re.compile('(?P<key>[A-Z]+)=(?P<value>\[^\s=;]+)(?:(?=;)|$)') This expression uses named groups, but you could do without those if you find it easier to read: key_value = re.compile('([A-Z]+)=([^\s=;])(?:(?=;)|$)') The `(?:..)` group is a non-capturing group; it is only used here to demark what the `|` or symbol applies to. The pattern matches uppercase characters before the `=` symbol, and anything that is _not_ whitespace, a `=` or `;` character, provided that there is a `;` _or_ the end of the string right after the value. 
This splits out keys and values for each line: >>> key_value = re.compile('(?P<key>[A-Z]+)=(?P<value>[^\s=;]+)(?:(?=;)|$)') >>> key_value.findall('Person1\tHEIGHT=60;WEIGHT=100;AGE=22') [('HEIGHT', '60'), ('WEIGHT', '100'), ('AGE', '22')] This can easily then be turned into a dictionary: >>> dict(key_value.findall('Person1\tHEIGHT=60;WEIGHT=100;AGE=22')) {'AGE': '22', 'WEIGHT': '100', 'HEIGHT': '60'} You can then write these with, for example, using [`csv.DictWriter()`](http://docs.python.org/2/library/csv.html): import csv import re key_value = re.compile('(?P<key>[A-Z]+)=(?P<value>[^\s=;]+)(?:(?=;)|$)') with open(inputfilename) as infile, open(outputfilename, 'wb') as outfile: writer = csv.DictWriter(outfile, ('PERSON', 'HEIGHT', 'WEIGHT', 'AGE'), delimiter='\t') writer.writeheader() for line in infile: person = line.split('\t', 1)[0] row = dict(key_value.findall(line)) row['PERSON'] = person writer.writerow(row) Demo based on your real data sample: >>> dict(key_value.findall(' P=0.9626;IPU=.$.+1T.+1T.+;IRF=ncRNA;IUC=UTR3;IGN=NCRNA00115;IGI=NCRNA00115,RP11-206L10.16-001;IET=0;IEO=0;IEN=.;IHT=0;IHVC=0;IHD=.;IHI=.;IHN=.;IDI=.;IDN=.;ITMAF=.;ITAMR=.;ITASN=.;ITAFR=.;ITEUR=.;ITNRB=+A;ISF=.;ISD=.;ISM=.;ISX=.;\n')) {'ISX': '.', 'ITAMR': '.', 'IDN': '.', 'ISM': '.', 'IDI': '.', 'ISF': '.', 'ISD': '.', 'ITMAF': '.', 'IUC': 'UTR3', 'IGI': 'NCRNA00115,RP11-206L10.16-001', 'ITNRB': '+A', 'IHVC': '0', 'IET': '0', 'ITASN': '.', 'ITEUR': '.', 'ITAFR': '.', 'IEO': '0', 'IEN': '.', 'IGN': 'NCRNA00115', 'IRF': 'ncRNA', 'P': '0.9626', 'IHT': '0', 'IHI': '.', 'IHN': '.', 'IPU': '.$.+1T.+1T.+', 'IHD': '.'}
Why is mmap returning a size of zero? Question: I'm working on a beaglebone (running Angstrom Linux) and trying to use Python's `mmap` module to gain read and write access to the `/dev/mem` file. However, for some reason, the code below prints a value of zero. I'm fairly new to mmap and I'm not sure if there is something obvious I am missing. from mmap import mmap mmap_offset=0x44c00000 mmap_size=0x48ffffff-mmap_offset with open("/dev/mem", "r+b") as f: testMap=mmap(f.fileno(),mmap_size,offset=mmap_offset) print testMap.size() print testMap[0] returns: 0 8 Answer: Because device inodes like `/dev/mem` report an apparent size of 0 when queried by [`stat()`](http://linux.die.net/man/2/stat). This is simply how special device nodes are implemented, as it's not usually useful to ask a device node how large it is. (Consider `/dev/zero`, `/dev/kbd`, `/dev/urandom`, and any device node that is used for communication and not data storage, such as device nodes representing photo scanners or input devices.) You should still be able to retrieve data from the mmap handle.
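In other words, skip size() (it only mirrors the zero length that stat() reports for the device node) and index into the map directly. A sketch, reusing the offsets from the question and assuming you want the first register as a little-endian 32-bit word:

    from mmap import mmap
    import struct

    mmap_offset = 0x44c00000
    mmap_size = 0x48ffffff - mmap_offset

    with open("/dev/mem", "r+b") as f:
        testMap = mmap(f.fileno(), mmap_size, offset=mmap_offset)
        raw = testMap[0:4]                  # first four bytes of the mapped window
        value, = struct.unpack("<I", raw)   # interpret as a little-endian 32-bit word
        print hex(value)
        testMap.close()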
I want to save my parsed HTML file into TXT file Question: I've parsed a web page showing article. I want to save the parsed data into text file, but my python shell shows an error like this: UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 107: ordinal not in range(128) and here is a part of my code search_result = urllib.urlopen(url) f = search_result.read() #xml parsing parsedResult = xml.dom.minidom.parseString(f) linklist = parsedResult.getElementsByTagName('link') #extracting links extractedURL = linklist[3].firstChild.nodeValue #pick one link page = urllib.urlopen(extractedURL).read() #making html file g= open('yyyy.html', 'w') g.write(page) g.close() #reading html file and parsing html to get pure text of article g= open('yyyy.html', 'r') bs = BeautifulSoup(g,fromEncoding="utf-8") g.close() article = bs.find(id="articleBody") content = article.get_text() #save as a text file h= open('yyyy.txt', 'w') h.write(content) h.close() What should I add to make this work? Answer: Try with import codecs h = codecs.open('yyyy.txt', 'w', 'utf-8') or using Python 3.
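If you would rather keep the plain open() call, an alternative sketch is to encode the unicode text yourself before writing, replacing the last block of the script above:

    #save as a text file, encoding the unicode content explicitly
    h = open('yyyy.txt', 'w')
    h.write(content.encode('utf-8'))
    h.close()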
AttributeError: 'Settings' object has no attribute 'ADMIN_MEDIA_PREFIX' Question: i am trying to make a smart menu using ChainedForeignKey, but so far i am stuck with the import of `ChainedForeignKey`: from smart_selects.db_fields import ChainedForeignKey I am using pydev on Eclipse. I downloaded the smart select package here: <http://pydoc.net/Python/django-smart-selects/1.0.2/> and added it to my `PYTHONPATH` when i debug i get: Traceback (most recent call last): File "C:\Users\Mr.Machine\Desktop\eclipse\plugins\org.python.pydev_2.7.0.2013032300\pysrc\pydevd.py", line 1397, in <module> debugger.run(setup['file'], None, None) File "C:\Users\Mr.Machine\Desktop\eclipse\plugins\org.python.pydev_2.7.0.2013032300\pysrc\pydevd.py", line 1090, in run pydev_imports.execfile(file, globals, locals) #execute the script File "C:\Users\Mr.Machine\Desktop\Workspace\Medbook\testApp\forms.py", line 4, in <module> from smart_selects.db_fields import ChainedForeignKey File "C:\Users\Mr.Machine\Desktop\django-smart-selects-1.0.2\smart_selects\db_fields.py", line 2, in <module> import form_fields File "C:\Users\Mr.Machine\Desktop\django-smart-selects-1.0.2\smart_selects\form_fields.py", line 1, in <module> from smart_selects.widgets import ChainedSelect File "C:\Users\Mr.Machine\Desktop\django-smart-selects-1.0.2\smart_selects\widgets.py", line 20, in <module> class ChainedSelect(Select): File "C:\Users\Mr.Machine\Desktop\django-smart-selects-1.0.2\smart_selects\widgets.py", line 30, in ChainedSelect class Media: File "C:\Users\Mr.Machine\Desktop\django-smart-selects-1.0.2\smart_selects\widgets.py", line 33, in Media ('js/jquery.min.js', 'js/jquery.init.js')] File "C:\Python27\lib\site-packages\django\conf\__init__.py", line 54, in __getattr__ return getattr(self._wrapped, name) AttributeError: 'Settings' object has no attribute 'ADMIN_MEDIA_PREFIX' Any help appreciated Answer: Marking the comment as answer for future reference: Looks like `ADMIN_MEDIA_PREFIX` was not set. Setting `ADMIN_MEDIA_PREFIX='/static/'` or any other appropriate value for ADMIN media would fix the issue
Python: get int value from a char string Question: This is one of those silly questions and I don't really know how to formulate it, so I'll give an example. I got v = chr(0xae) + chr(0xae) where #AEAE is, in decimal, the value of 44718. My question is how I get the integer value of `v`? I know about `ord()` but I can use it only for a char, and not for a string. Thank you. Answer: I managed to do this using the `struct` module: import struct int_no = struct.unpack('>H', v)[0] print int_no which outputs the desired results: 44718
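For completeness, the same conversion can be done by hand, which makes explicit what the '>H' (big-endian unsigned short) format is doing:

    v = chr(0xae) + chr(0xae)
    int_no = (ord(v[0]) << 8) | ord(v[1])   # high byte shifted left by 8 bits, OR'ed with the low byte
    print int_no                            # 44718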
Mixin container takes 0 arguments but 2 were passed Question: I am trying to future-proof the css in a [zinnia](http://django-blog-zinnia.com/)-django blog using [compass](http://compass-style.org/), [susy](http://susy.oddbird.net/), and [sass](http://sass-lang.com/). I have successfully copied my zinnia templates into `/var/www/static/zinnia` by running `python manage.py collectstatic`. Then I edit a sass file, `cd` to `/var/www/static/zinnia/`, and run `compass compile`. This produces the following errors and causes my blog to no longer render with css! error sass/screen.scss (Line 27 of sass/partials/_layouts.scss: Mixin container takes 0 arguments but 2 were passed.) #from running compass compile in shell On the top of my blog webpage appears the following in firefox: Syntax error: Mixin container takes 0 arguments but 2 were passed. on line 27 of /var/www/static/zinnia/sass/partials/_layouts.scss, in `container' from line 27 of /var/www/static/zinnia/sass/partials/_layouts.scss from line 20 of /var/www/static/zinnia/sass/screen.scss [Relevant zinnia code is visible on github.](https://github.com/Fantomas42/django-blog-zinnia/tree/master/zinnia/static/zinnia) At present the code is a virtual mirror of the github repository. Here is [line 27](https://github.com/Fantomas42/django-blog-zinnia/blob/master/zinnia/static/zinnia/sass/partials/_layouts.scss#L27), where this error seems to be originating from. Line 27 of _layouts.scss: @include container($total-columns, $screen-layout); As a clue I was able to learn [this](https://github.com/Fantomas42/django-blog-zinnia/issues/241). However, I would prefer to use the latest gem versions available to me. I do not know if this compiles with older gems, but the answer to this piece of the puzzle is a bit tangential (though useful). Thus, I need an answer that will let me compile without errors. Thank you for your help. Answer: I ran into the same error, even with the order of importing compass before susy. What fixed it for me was uninstalling [this no-longer-used gem: compass-susy-plugin](https://github.com/ericam/compass-susy-plugin).
Using PyTables from a cython module Question: I am solving a set of Coupled ODEs and facing two problems: speed and memory storage. As such I use `cython_gsl` to create a module which solves my ODEs. Until now I had simply written the data to a `.txt` file but I think it will be more useful to use `PyTables`. As such I define in my `.pyx` file something like from cython_gsl cimport * from tables import * def main (parameters for run ): class vector(IsDescription): name= StringCol(16) # 16-character String i = Int32Col() # 32-bit integer j = Int32Col() # 32-bit integer k = Int32Col() # 32-bit integer h5file = tables.openFile("tutorial1.h5", mode = "r", title = "Test file") group = h5file.createGroup("/", 'spin_vectors',"Spin vectors of the crust and core") table = h5file.createTable(group, 'shellvector', vector, " ") ... Setup the ODEs ... while (t < t1): status = gsl_odeiv_evolve_apply (e, c, s, &sys, &t, t1, &h, y) if (status != GSL_SUCCESS): break #write_file.write("%.16e %.16e %.16e %.16e %.16e %.16e %.16e\n" %(t, y[0], y[1],y[2],y[3], y[4],y[5]) ) shell_table.row['i']=y[0] shell_table.row['j']=y[1] shell_table.row['k']=y[2] shell_table.row.append() shell_table.flush() I then compile it using a `setup.py` file which outputs (successfully) a `.so` file. Unfortunately upon importing this into Ipython I get the error NameError: Int32 Which I believe is a PyTables thing. So it seems it is not being imported correctly? While I think this a good way to do this if anyone has better suggestions on how to handle data from python/cython I would be very happy to hear..Google has almost nothing! Answer: unfortunately there is not enough information here to be able to answer your question. What you are trying to do **should** work. However, I don't think this error is coming from PyTables (which doesn't have any lone "Int32" classes, though it does have an "Int32Atom" and an "Int32Col"). I suspect instead that this is from CythonGSL. Is there any way that you could please post the full traceback -- rather than just the last error -- so that we can know for sure?
Replicating .accdb database tables with python pyodbc Question: Novice python user here. There are other ways to achieve this, but I believe this is my best option. I maintain an MS Access 2007 database (.accdb) that is used on offline TabletPCs in the field to collect data. When arriving back at the office, the user can re-connect to the server. I'm trying to utilize the pyodbc module to loop through the tables and rows and insert records from the offline field database into the server "master" database. The selection part of the script appears to be working to grab the records in the offline field database and put them into a dictionary for later use. Recommendations based on the original post now has the insert part looping through the dictionary and creating parameter-based sql for inserting the records. However, the code throws the following error after two records are inserted into the first table in the loop. The next sql string is for the next table - so all records are inserted into the first table correctly and the error occurs when moving to the next table. Error: ('HY010', '[HY010] [Microsoft][ODBC Driver Manager] Function sequence error (0) (SQLFetch)') I read error information here but don't know what to make of it: <http://msdn.microsoft.com/en- us/library/windows/desktop/ms712424(v=vs.85).aspx> import pyodbc otherDbaseDict = {} connOtherDbase = pyodbc.connect(("Driver={Microsoft Access Driver (*.mdb, *.accdb)};" "DBQ=C:\\other.accdb;")) otherDbaseTables = connOtherDbase.cursor().tables() counter = 0 for tblOther in otherDbaseTables: if tblOther.table_name.startswith("tbl"): #ignores MS sys tables nameOther = tblOther.table_name cursor = connOtherDbase.cursor() selectSQL = 'SELECT * FROM {}'.format(nameOther) #generate SQL select syntax cursor.execute(selectSQL) rows = cursor.fetchall() for row in rows: counter = counter + 1 #counter digit used to create unique key, since table names repeat otherDbaseDict.update({nameOther+str(counter):row}) connMainDbase = pyodbc.connect(("Driver={Microsoft Access Driver (*.mdb, *.accdb)};" "DBQ=C:\\main.accdb;")) mainDbaseTables = connMainDbase.cursor().tables() #beargle2 for tblMain in mainDbaseTables: if tblMain.table_name.startswith("tbl"): nameMain = tblMain.table_name # get all column names with list comprehension columns = [row.column_name for row in cursor.columns(table=nameMain)] for k, v in otherDbaseDict.iteritems(): if nameMain in k: # build dynamic sql sql = 'INSERT into {0}({1}) values ({2})' # add question mark placeholders, one for each column # value to insert sql = sql.format(nameMain, ','.join(columns), ','.join(len(columns) * '?')) #print sql cursor = connMainDbase.cursor() # execute parameterized insert cursor.execute(sql, v) connMainDbase.commit() Any pointers on ways to reconcile the error? Was just thinking, is this a cursor issue? Do I need to "reset/refresh" or whatever it may be called after each connMainDbase.commit() or before switching tables? Execute dies once it finishes with the first table. Looking into it but comments welcome... Answer: The ProgrammingError is occuring because you have both single and double quotes around the `insertSQL` string. The reason you have only one field is that you are still within the `for` loop block that is adding field names to the list. 
You need to unindent before constructing the `insertSQL` string, so your last few lines should be: for row in cursor.columns(table=nameMain): fieldName = str(row.column_name) fields.append(fieldName) insertSQL = 'INSERT into {0}({1}) values ({2})'.format(nameMain,fields,v) cursor.execute(insertSQL) connMainDbase.commit() However, that is still not going to give you valid SQL because you have your field names within single quotes and square brackets, and double parentheses around your values. I think the simplest way to fix this is to convert the list & tuple to strings first: insertSQL = 'INSERT into {0}({1}) values ({2})'.format(nameMain,', '.join(fields),', '.join(v)) Finally, it's best practice to pass your values to the `execute` method as parameters so that you use a parameterized query (it might not matter a great deal in your use case, but you might as well follow best practice anyway). To do this you need to have ? in place of each value that needs to be added to the query which you can do by replacing `' ,'.join(v)` with `' ,'.join('?'*len(v))` and passing the `v` tuple as the second argument to `execute`. You should end up with: for row in cursor.columns(table=nameMain): fieldName = str(row.column_name) fields.append(fieldName) insertSQL = 'INSERT into {0}({1}) values ({2})'.format(nameMain,', '.join(fields),', '.join('?'*len(v))) cursor.execute(insertSQL, v) connMainDbase.commit()
z3py on MacOSX: cannot get a model Question: I am seeing a strange problem with z3py on Mac, was wondering if anyone has seen this before: $ cat bug.py from z3 import * x = Int('x') s = Solver() s.add(x > 5) print(s.check()) print(s.model()) $ python bug.py sat [x = ] The value of x is missing from the model. I tried both master and unstable branches with the same result. However, z3 itself does give the correct model if run on a similar .smt2 file. My configuration is Mac OSX 10.6.8, Python 2.7.4. Answer: The problem is very specific for my setup, but maybe someone will run into it as well: the root cause is that a wrong version of libgomp was picked up by the dynamic loader -- i.e. the versions used to compile and to run do not match. Here is a more severe manifestation of this issue: $ python Python 2.7.4 (default, May 9 2013, 18:51:46) [GCC 4.2.1 (Apple Inc. build 5664)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from z3 import * >>> IntVal(1) >>> The numeric value is not printed, i.e. the correct output is >>> IntVal(1) 1 >>> Setting DYLD_LIBRARY_PATH to point to the correct version of the library fixes the issue.
How to center this Entry widget? Question: I am starting a project in Python: an email client. I know it may sound odd, but it's just for fun. I have an `Entry` widget and a `Canvas` widget. When I `pack` them both, It gives me the Canvas, and an Entry widget centred above the Canvas, with a border around the Canvas, therefore a border separating the Entry widget and the Canvas. What I want is the Entry widget centred in the Canvas, with no borders dividing the Entry box and the Canvas. When I set the master widget in the Entry widget to `canvas`, that canvas doesn't appear: only the Entry widget, with the window large enough only to contain the Entry widget. If the master widget is set to `frame`, I get what I described above, the canvas and Entry widget separate, but the size of the whole window is as specifies. This is my actual code: import tkinter frame = tkinter.Tk() frame.config(height=370, width=670, bg="#C2C2D6") entry = tkinter.Entry(frame, exportselection=0) canvas = tkinter.Canvas(frame, height=370, width=670, bg="#C2C2D6", bd=0, relief="sunken") entry.pack() canvas.pack() frame.mainloop() Answer: You do not need `Canvas`. You can set `frame` as the parent of `entry`: import tkinter frame = tkinter.Tk() frame.config(height=370, width=670, bg="#C2C2D6") entry = tkinter.Entry(frame, exportselection=0) entry.pack(padx=100, pady=100, expand=1, fill='x') frame.mainloop()
Cannot get mapnik working with python3, but it does with python2 Question: Here is a post about installing a module in python3. When I use brew install python, then it installs it for 2.7. When I use the method suggested by dan, which aimed to install it directly in python3 (who i really thank), but which didn't work : # Figure out the path to python3 PY3DIR=`dirname $(which python3)` # And /then/ install with brew. That will have it use python3 to get its path PATH=$PY3DIR:$PATH brew install mapnik The installation was successful **but** in python2. so I get: For non-homebrew Python, you need to amend your PYTHONPATH like so: export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH so i finally add the path **manually in python3** : import sys sys.path.append('/usr/local/lib/python2.7/site-packages') I get this error : Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/mapnik/__init__.py", line 69, in <module> from _mapnik import * ImportError: dlopen(./_mapnik.so, 2): Symbol not found: _PyClass_Type Referenced from: ./_mapnik.so Expected in: flat namespace in ./_mapnik.so Please help, I have spent so many hours on this ... Thanks!!! Answer: The Mapnik python bindings depend on boost_python. And both need to use the same python. The problem is likely that homebrew is providing a bottle of boost which includes boost python built against python 2.7 and not python 3.x.
How should I handle this HTTPS request in Python? Question: I am trying to use the [Strava API v3](http://strava.github.io/api/v3/) in Python, and I am afraid I am missing something. The docs say: > This base URL is used for all Strava API requests: <https://api.strava.com> > > `$ curl -i https://api.strava.com` > >> > `HTTP/1.1 200 OK Content-Type: application/json Status: 200 OK` `X-RateLimit-Limit: 5000 X-RateLimit-Remaining: 4999 Content-Length: 2` > > Responses are in JSON format and gzipped. I am currently doing this: import urllib print urllib.urlopen('https://api.strava.com').read() And gettin this: Traceback (most recent call last): File "StravaAPIv3.py", line 3, in <module> print urllib.urlopen('https://api.strava.com').read() File "C:\Python27\lib\urllib.py", line 86, in urlopen return opener.open(url) File "C:\Python27\lib\urllib.py", line 207, in open return getattr(self, name)(url) File "C:\Python27\lib\urllib.py", line 436, in open_https h.endheaders(data) File "C:\Python27\lib\httplib.py", line 954, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 814, in _send_output self.send(msg) File "C:\Python27\lib\httplib.py", line 776, in send self.connect() File "C:\Python27\lib\httplib.py", line 1157, in connect self.timeout, self.source_address) File "C:\Python27\lib\socket.py", line 553, in create_connection for res in getaddrinfo(host, port, 0, SOCK_STREAM): IOError: [Errno socket error] [Errno 11004] getaddrinfo failed I don't know where to start, since I don't know much about HTTP requests and HTTPS UPDATE: According to Merlin's suggestion to use `requests` module, I am doing this: import requests r = requests.get('https://api.strava.com/') print r.status_code print r.headers['content-type'] print r.encoding print r.text print r.json() but keep getting an error: `requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.strava.com', port=443): Max retries exceeded with url: / (Caused by <class 'so cket.gaierror'>: [Errno 11004] getaddrinfo failed)` Answer: Try using requests! It's safer. <http://docs.python-requests.org/en/latest/>
FuzzyWuzzy String Matching - Case Sensitivity Question: I'm using the [FuzzyWuzzy String Matching module from SeatGeek](http://seatgeek.com/blog/dev/fuzzywuzzy-fuzzy-string-matching-in-python). I find that when using the token_set_ratio search algorithm, small differences in case give wildly differing results. For example, if I am looking for the phrase "I am eating" in a file, I get a 100% match. But if the phrase is "i am eating", just the change in case of ONE letter, I get a 65% match. Is there any way to make the algorithm case insensitive? Answer: token_set_ratio() is case insensitive by default: it runs both strings through fuzzywuzzy's full_process() step, which lower-cases them before comparing, so a difference in case alone should not lower the score. from fuzzywuzzy import fuzz fuzz.token_set_ratio("I am eating", "i am eating") => 100
How to delete a dyanmic DNS record using Python or Bash? Question: The Following code in python updates import dns.query import dns.tsigkeyring import dns.update import sys keyring = dns.tsigkeyring.from_text({'host-example.' : 'XXXXXXXXXXXXXXXXXXXXXX=='}) update = dns.update.Update('dyn.test.example', keyring=keyring) update.replace('host', 300, 'a', sys.argv[1]) response = dns.query.tcp(update, '10.0.0.1') but I could not find out how to remove a dns entry. Answer: The [`delete()`](http://www.dnspython.org/docs/1.14.0/dns.update.Update- class.html#delete) method of `dns.update.Update` can be used to delete a record. import dns.query import dns.tsigkeyring import dns.update keyring = dns.tsigkeyring.from_text({'host-example.' : 'XXXXXXXXXXXXXXXXXXXXXX=='}) update = dns.update.Update('dyn.test.example', keyring=keyring) update.delete('host', 'A') response = dns.query.tcp(update, '10.0.0.1')
Python package: how to avoid redefining author version etc? Question: I would like to distribute a Python package (I would like to use setuptools and I already have a working setup.py file), and the related documentation (produced using Sphinx). I find myself a bit confused by the fact that I have to specify the authors names, maintainers, version, release, date, emails etc in different parts. I was wondering if there is some way to define this kind of common information only once for the package and then use it both in the setup.py script and in .rst files and so on. What are the possible approaches to this problem? Answer: If you are invoking sphinx using distutils, your case is covered. The answer is in the documentation in [sphinx/setup_command.py](https://bitbucket.org/birkenfeld/sphinx/src/72dceb35264e9429c8d9bb1a249a638ac21f0524/sphinx/setup_command.py?at=default#cl-35). From that example, your setup.py should have a part that looks somewhat like this: # this is only necessary when not using setuptools/distribute from sphinx.setup_command import BuildDoc cmdclass = {'build_sphinx': BuildDoc} name = 'My project' version = '1.2' release = '1.2.0' setup( name=name, author='Bernard Montgomery', version=release, cmdclass=cmdclass, # these are optional and override conf.py settings command_options={ 'build_sphinx': { 'project': ('setup.py', name), 'version': ('setup.py', version), 'release': ('setup.py', release)}}, ) After that, calling `python setup.py build_sphinx` will build the documentation, having a single point of truth for those shared values. Well done. Works for me. Hope it helps!
Python eTree Parser isn't appending an element Question: Look at my log and see how it says that the row I'm getting back from Postgres has been turned from a string into an element (and I print the string, print the element, print the isElement boolean!) and yet when I try to append it, the error is that it's not an element. Huff, puff. import sys from HTMLParser import HTMLParser from xml.etree import cElementTree as etree import xml.etree.ElementTree as ET from xml.etree.ElementTree import Element, SubElement, tostring import psycopg2 import psycopg2.extras def main(): # Connect to an existing database conn = psycopg2.connect(dbname="**", user="**", password="**", host="/tmp/", port="**") # Open a cursor to perform database operations cur = conn.cursor(cursor_factory = psycopg2.extras.RealDictCursor) cur.execute("SELECT * FROM landingpagedata;") rows = cur.fetchall() class LinksParser(HTMLParser): def __init__(self): HTMLParser.__init__(self) self.tb = etree.TreeBuilder() def handle_starttag(self, tag, attributes): self.tb.start(tag, dict(attributes)) def handle_endtag(self, tag): self.tb.end(tag) def handle_data(self, data): self.tb.data(data) def close(self): HTMLParser.close(self) return self.tb.close() template = 'template.html' # parser.feed(open('landingIndex.html').read()) #for testing # root = parser.close() for row in rows: parser = LinksParser() parser.feed(open(template).read()) root = parser.close() #title title = root.find(".//title") title.text = row['title'] #headline h1_id_headline = root.find(".//h1") h1_id_headline.text = row['h1_id_headline'] # print row['h1_id_headline'] #intro p_class_intro = root.find(".//p[@class='intro']") p_class_intro.text = row['p_class_intro'] # print row['p_class_intro'] Here is where the problems occur! 
#recommended p_class_recommendedbackground = root.find(".//div[@class='recommended_background_div']") print p_class_recommendedbackground p_class_recommendedbackground.clear() newElement = ET.fromstring(row['p_class_recommendedbackground']) print row['p_class_recommendedbackground'] print ET.iselement(newElement) p_class_recommendedbackground.append(newElement) html = tostring(root) f = open(row['page_name'], 'w').close() f = open(row['page_name'], 'w') f.write(html) f.close() # f = '' # html = '' parser.reset() root = '' # Close communication with the database cur.close() conn.close() if __name__ == "__main__": main() My log is this: {background: url(/images/courses/azRealEstate.png) center no-repeat;} <Element 'div' at 0x10a999720> <p class="recommended_background">Materials are are aimed to all aspiring real estate sales associates who wish to obtain the Arizona Real Estate Salesperson license, which is provided by the <a href="http://www.re.state.az.us/" style="text-decoration: underline;">Arizona Department of Real Estate</a>.</p> True Traceback (most recent call last): File "/Users/Morgan13/Programming/LandingPageBuilder/landingPages/landingBuilderTest.py", line 108, in <module> main() File "/Users/Morgan13/Programming/LandingPageBuilder/landingPages/landingBuilderTest.py", line 84, in main p_class_recommendedbackground.append(newElement) TypeError: must be Element, not Element [Finished in 0.1s with exit code 1] Answer: I can reproduce the error message this way: from xml.etree import cElementTree as etree import xml.etree.ElementTree as ET croot = etree.Element('root') child = ET.Element('child') croot.append(child) # TypeError: must be Element, not Element The root cause of the problem is that we are mixing the `cElementTree` implementation of `ElementTree` with the `xml.etree.ElementTree` implementation of `ElementTree`. Never the twain should meet. So the fix is simply to pick one, say `etree`, and replace all occurrences of the other (e.g. replace `ET` with `etree`).
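For example, the same reproduction goes through cleanly once both elements come from the same module; in the script above that means building newElement with etree.fromstring() instead of ET.fromstring() (and dropping the extra ElementTree import). A small self-contained sketch:

    from xml.etree import cElementTree as etree

    root = etree.Element('root')
    child = etree.fromstring('<p class="recommended_background">some text</p>')
    root.append(child)            # no TypeError: both elements come from cElementTree
    print etree.tostring(root)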
Trouble with printing out of Classes in Python Question: We are supposed to use the code below to print out the parameters listed in it, currently however we are unable to do so and are using a round about method. This is supposed to print out things instead of what we print out in the Game class in the playturn function def __str__(self): x = self.name + ":\t" x += "Card(s):" for y in range(len(self.hand)): x +=self.hand[y].face + self.hand[y].suit + " " if (self.name != "dealer"): x += "\t Money: $" + str(self.money) return(x) Here is our actual code, if you also see any other issues your input would be greatly appreciated from random import* #do we need to address anywhere that all face cards are worth 10? class Card(object): def __init__(self,suit,number): self.number=number self.suit=suit def __str__(self): return '%s'%(self.number) class DeckofCards(object): def __init__(self,deck): self.deck=deck self.shuffledeck=self.shuffle() def shuffle(self): b=[] count=0 while count<len(self.deck): a=randrange(0,len(self.deck)) if a not in b: b.append(self.deck[a]) count+=1 return(b) def deal(self): if len(self.shuffledeck)>0: return(self.shuffledeck.pop(0)) else: shuffle(self) return(self.shuffledeck.pop(0)) class Player(object): def __init__(self,name,hand,inout,money,score,bid): self.name=name self.hand=hand self.inout=inout self.money=money self.score=score self.bid=bid def __str__(self): x = self.name + ":\t" x += "Card(s):" for y in range(len(self.hand)): x +=self.hand[y].face + self.hand[y].suit + " " if (self.name != "dealer"): x += "\t Money: $" + str(self.money) return(x) class Game(object): def __init__(self,deck, player): self.player=Player(player,[],True,100,0,0) self.dealer=Player("Dealer",[],True,100,0,0) self.deck=DeckofCards(deck) self.blackjack= False def blackjacksearch(self): if Game.gettot(self.player.hand)==21:#changed return True else: return False def firstround(self): #self.player.inout=True#do we need this since this is above #self.player.hand=[]#do wee need this.... #self.dealer.hand=[]#do we need this .... self.player.hand.append(DeckofCards.deal(self.deck)) for card in self.player.hand: a=card print(self.player.name + ' ,you were dealt a '+str(a)) self.dealer.hand.append(DeckofCards.deal(self.deck)) for card in self.dealer.hand: a=card print('The Dealer has '+str(a)) playerbid=int(input(self.player.name + ' how much would you like to bet? ')) self.player.money-=playerbid self.player.bid=playerbid def playturn(self): #should this be changed to inout instead of hit.....we never use inout #for player in self.player: # a=player #print(str(a)) hit=input('Would you like to hit? ') #should input be in loop? while self.player.inout==True: #and self.blackjack!=True:#changed print(self.player.name + ' , your hand has:' + str(self.player.hand)) #do we want to make this gettot? so it prints out the players total instead of a list....if we want it in a list we should print it with out brakets self.player.hand.append(DeckofCards.deal(self.deck)) for card in self.player.hand: a=card print('The card that you just drew is: ' + str(a)) #print(Game.gettot(self.player.hand)) hit=input('Would you like to hit? 
') if hit=='yes': (self.player.hand.append(DeckofCards.deal(self.deck)))#changed self.player.inout==True# else: (self.player.hand) #changed self.player.inout==False #changed if self.player.blackjack==True: print(self.player.name + " has blackjack ") if hit=='no': print (self.player.hand.gettot()) def playdealer(self): while Game.gettot(self.dealer.hand)<17:#changed self.dealer.hand.append(DeckofCards.deal(self.deck)) dealerhand=Game.gettot(self.dealer.hand) #changed print(dealerhand) if Game.gettot(self.dealer.hand)==21:#changed self.dealer.blackhjack=True dealerhand1=Game.gettot(self.dealer.hand)#changed print(dealerhand1) def gettot(self,hand): total=0 for x in self.hand: if x==Card('H','A'): b=total+x if b>21: total+=1 else: total+=11 if x==Card('D','A'): b=total+x if b>21: total+=1 else: total+=11 if x==Card('S','A'): b=total+x if b>21: total+=1 else: total+=11 if x==Card('C','A'): b=total+x #changed if b>21: total+=1 else: total+=11 else: total+=x return(total) def playgame(self): play = "yes" while (play.lower() == "yes"): self.firstround() self.playturn() if self.player.blackjack == True: print(self.player.name + " got BLACKJACK! ") self.player.money += self.player.bid * 1.5 print (self.player.name + " now has " + str(self.player.money)) print("\n") self.player.inout = False if self.player.score > 21: print(self.player.name + " lost with a tot of " + str(self.player.score)) self.player.money -= self.player.bid print (self.player.name + " now has " + str(self.player.money)) print ("\n\n") self.player.inout = False self.playdealer() if self.dealer.blackjack == True: print("Dealer got blackjack, dealer wins\n") self.player.money -= self.player.bid print("Round\n") print("\t",self.dealer) print("\t",self.player) print("\t Dealer has " + str(self.dealer.score) + ", " + self.player.name + " has " + str(self.player.score)) elif self.player.inout == True: print("Round\n") print("\t",self.dealer) print("\t",self.player) print("\n\t Dealer has " + str(self.dealer.score) + ", " + self.player.name + " has " + str(self.player.score)) if self.dealer.score > 21: print("\t Dealer lost with a total of " + str(self.dealer.score)) self.player.money += self.player.bid print(self.player.name + " now has " + str(self.player.money)) elif self.player.score > self.dealer.score: print("\t" +self.player.name + " won with a total of " + str(self.player.score)) self.player.money += self.player.bid print("\t"+self.player.name + " now has " + str(self.player.money)) else: print("\t Dealer won with a total of " + str(self.dealer.score)) self.player.money -= self.player.bid print("\t"+self.player.name + " now has " + str(self.player.money)) else: print("Round") print("\t",self.dealer) print("\t",self.player) if self.player.blackjack == False: print("\t "+ self.player.name + " lost" ) else: print("\t "+self.player.name + " Won!") if self.player.money <= 0: print(self.player.name + " out of money - out of game ") play = "no" else: play = input("\nAnother round? ") print("\n\n") print("\nGame over. ") print(self.player.name + " ended with " + str(self.player.money) + " dollars.\n") print("Thanks for playing. 
Come back soon!") ls= [Card('H','A'),Card('H','2'),Card('H','3'),Card('H','4'),Card('H','5'),Card('H','6'),Card('H','7'),Card('H','8'),Card('H','9'),Card('H','10'), Card('H','J'),Card('H','Q'),Card('H','K'), Card('S','A'),Card('S','2'),Card('S','3'),Card('S','4'),Card('S','5'), Card('S','6'),Card('S','7'),Card('S','8'),Card('S','9'),Card('S','10'), Card('S','J'),Card('S','Q'),Card('S','K'), Card('C','A'),Card('C','2'),Card('C','3'),Card('C','4'),Card('C','5'), Card('C','6'),Card('C','7'),Card('C','8'),Card('C','9'),Card('C','10'), Card('C','J'),Card('C','Q'),Card('C','K'), Card('D','A'),Card('D','2'),Card('D','3'),Card('D','4'),Card('D','5'), Card('D','6'),Card('D','7'),Card('D','8'),Card('D','9'),Card('D','10'), Card('D','J'),Card('D','Q'),Card('D','K')] def main(): x = input("Player's name? ") blackjack = Game(ls,x) blackjack.playgame() main() Answer: The problem is that, in at least some places, you're trying to print a `list`. While printing anything, including a `list`, calls `str` on it, the `list.__str__` method calls `repr` on its elements. (If you don't know the difference between `str` and `rep`, see [Difference between `__str__` and `__repr__` in Python](http://stackoverflow.com/questions/1436703/difference- between-str-and-repr-in-python).) If you want to print the `str` of every element in a list, you have to do it explicitly, with a `map` or list comprehension. For example, instead of this: print(self.player.name + ' , your hand has:' + str(self.player.hand)) … do this: print(self.player.name + ' , your hand has:' + [str(card) for card in self.player.hand]) But this is still probably not what you want. You will get `['8', '9']` instead of `[<__main__.Card object at 0x1007aaad0>, <__main__.Card object at 0x1007aaaf0>]`, but you probably wanted something more like `8H 9C'. To do that, you'd want something like: print(self.player.name + ' , your hand has:' + ' '.join(str(card) for card in self.player.hand)) You already have similar (although more verbose) code inside `Player.__str__`: for y in range(len(self.hand)): x +=self.hand[y].face + self.hand[y].suit + " " This code could be improved in a few ways. First, it's going to raise an `AttributeError` because you're using `face` instead of `number`. But really, you shouldn't need to do this at all—the whole reason you created a `Card.__str__` method is so you can just use `str(Card)`, right? Second, you almost never want to loop over `range(len(foo))`, especially if you do `foo[y]` inside the loop. Just loop over `foo` directly. Putting that together: for card in self.hand: x += str(card) + " " At any rate, you need to do the same thing in both places. The version that uses the `join` method and a generator expression is a little simpler than the explicit loop, but does require a bit more Python knowledge to understand. Here's how you'd use it here: x += " ".join(str(card) for card in self.hand) + " " * * * Your next problem is that `Card.__str__` doesn't include the suit. So, instead of `8H 9C`, you're going to get `8 9`. That should be an easy fix to do on your own. * * * Meanwhile, if you find yourself writing the same code more than once, you probably want to abstract it out. You _could_ just write a function that takes a hand `list` and turns it into a string: def str_hand(hand): return " ".join(str(card) for card in self.hand) But it might be even better to create a `Hand` class that wraps up a list of cards, and pass that around, instead of using a `list` directly.
How to get the name of the top most (entry) script in python? Question: I have a utility module in Python that needs to know the name of the application that it is being used in. Effectively this means the name of the top-level python script that was invoked to start the application (i.e. the one where __name__ == "__main__" would be true). __name__ gives me the name of the current python file, but how do I get the name of the top-most one in the call chain? Answer: Having switched my Google query to "how to find the _process_ name from python" vs how to find the "top level script name", I found [this overly thorough treatment of the topic](http://doughellmann.com/2012/04/determining-the-name-of-a-process-from-python-2.html). The summary of which is the following: import __main__ import os appName = os.path.splitext(os.path.basename(__main__.__file__))[0] (Note: os.path.splitext is used here rather than .strip(".py"), because str.strip removes any combination of the characters ".", "p" and "y" from both ends of the string — "happy.py" would come back as "ha".)
nose runs test on function in setUp when no suite specified Question: I have a tests file in one of my Pyramid projects. It has _one_ suite with _six_ tests in it: ... from .scripts import populate_test_data class FunctionalTests(unittest.TestCase): def setUp(self): settings = appconfig('config:testing.ini', 'main', relative_to='../..') app = main({}, **settings) self.testapp = TestApp(app) self.config = testing.setUp() engine = engine_from_config(settings) DBSession.configure(bind=engine) populate_test_data(engine) def tearDown(self): DBSession.remove() tearDown() def test_index(self): ... def test_login_form(self): ... def test_read_recipe(self): ... def test_tag(self): ... def test_dish(self): ... def test_dashboard_forbidden(self): ... Now, when I run `nosetests templates.py` (where `templates.py` is the mentioned file) I get the following output: ......E ====================================================================== ERROR: templates.populate_test_data ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/yentsun/env/local/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/yentsun/env/local/lib/python2.7/site-packages/nose-1.1.2-py2.7.egg/nose/util.py", line 622, in newfunc return func(*arg, **kw) TypeError: populate_test_data() takes exactly 1 argument (0 given) ---------------------------------------------------------------------- Ran 7 tests in 1.985s FAILED (errors=1) When I run the tests with test suite specified `nosetests templates.py:FunctionalTests`, the output is, as expected, ok: ...... ---------------------------------------------------------------------- Ran 6 tests in 1.980s OK Why do I have different output and why is an extra (7th) test run? **UPDATE.** Its a bit frustrating, but I when removed the word _test_ from the name `populate_test_data` (it became `populate_dummy_data`), everything worked fine. The problem is solved for now, but maybe somebody knows what went wrong here - why a function from `setUp` had been tested? Answer: > **Finding and running tests** > > nose, by default, follows a few simple rules for test discovery. > > * If it looks like a test, it’s a test. Names of directories, modules, > classes and functions are compared against the **testMatch regular > expression** , and those that match are considered tests. Any class that is > a unittest.TestCase subclass is also collected, so long as it is inside of a > module that looks like a test. > (from [nose 1.3.0 documentation](https://nose.readthedocs.org/en/latest/finding_tests.html)) In the nose's code, the regexp is defined as `r'(?:^|[\b_\.%s-])[Tt]est' % os.sep`, and if you inpect `nose/selector.py`, method `Selector.matches(self, name)` you'll see that the code uses [re.search](http://docs.python.org/2/library/re.html#re.search), which looks for a match anywhere in the string, not only at the beginning, as `re.match` does. A small test: >>> import re >>> import os >>> testMatch = r'(?:^|[\b_\.%s-])[Tt]est' % os.sep >>> re.match(testMatch, 'populate_test_data') >>> re.search(testMatch, 'populate_test_data') <_sre.SRE_Match object at 0x7f3512569238> So `populate_test_data` indeed "looks like a test" by nose's standards.
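If you would rather keep the original name instead of renaming the helper, you can also mark it explicitly so nose skips it despite the matching name; a sketch of how the import in templates.py might look:

    from nose.tools import nottest
    from .scripts import populate_test_data

    # nottest() just sets populate_test_data.__test__ = False, which the
    # selector consults before treating a matching name as a test
    populate_test_data = nottest(populate_test_data)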
How does python import order affect names? Question: I'm doing a flask tutorial (<http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world>) and I came upon a behaviour that I couldn't explain. The main directory structure of the tutorial is: microblog | |---- app | |---- __init__.py | |---- views.py | |---- flask |---- run.py and the contents of the files are: ### microblog/run.py #!flask/bin/python from app import app app.run(debug=True) ### microblog/app/__init__.py from flask import Flask app = Flask(__name__) from app import views ### microblog/app/views.py from app import app @app.route("/") @app.route("/index") def index(): return "Hello World!" everything works, but if I transpose these two lines: app = Flask(__name__) from app import views in __init__.py and then I execute run.py I get: > ImportError: cannot import name app Why does that happen? Answer: Because views.py itself does `from app import app`, i.e. it pulls the `app` variable out of the package's `__init__.py`. With the original ordering, `app = Flask(__name__)` has already run by the time `from app import views` causes views.py to be imported, so the name it asks for exists. If you swap the two lines, views.py is imported first, it looks for `app.app`, which has not been created yet, and the circular import fails with `ImportError: cannot import name app`. That is exactly why the tutorial places `from app import views` at the bottom of `__init__.py`, after `app` has been defined.
How to use lzma2 in python code? Question: I know there is a module called pylzma. But it only support lzma, not lzma2. My current solution is using `subprocess.call()` to call 7z program. Is there a better way? Answer: You can use [backports.lzma](https://github.com/peterjc/backports.lzma), see for more info: [Python 2.7: Compressing data with the XZ format using the "lzma" module](http://stackoverflow.com/questions/22370068/python-2-7-compressing- data-with-the-xz-format-using-the-lzma-module) Then it's simply a matter of doing e.g.: from backports import lzma with open('hello.xz', 'wb') as f: f.write(lzma.compress(b'hello', format=lzma.FORMAT_XZ)) Or simpler (XZ format is default): with lzma.open('hello.xz', 'wb') as f: f.write(b'hello') See <http://docs.python.org/dev/library/lzma.html> for usage details.
Python: Various Lists To Cartesian Product, Variable Length Question: I want to create words or rather combinations from three different arrays. The output should be a Cartesian product with itself and the other array with variable length. Example: array1 = ["a", "b", "c"] array2 = ["x", "y", "z"] Cartesian product of both, length 3: aaa, aab, aac ... axb, axc, axx... zzx, zzy, zzz Could you point me in the right direction? Answer: There is the `itertools` module which has method [`product`](http://docs.python.org/2/library/itertools.html#itertools.product) which does what you need. >>> from itertools import product >>> [''.join(items) for items in product(array1 + array2, repeat=3)] ['aaa', 'aab', 'aac', 'aax', 'aay', 'aaz', 'aba', 'abb', 'abc', 'abx', 'aby', 'abz', 'aca', 'acb', 'acc', 'acx', 'acy', 'acz', 'axa', 'axb', 'axc', 'axx', 'axy', 'axz', 'aya', 'ayb', 'ayc', 'ayx', 'ayy', 'ayz', 'aza', 'azb', 'azc', 'azx', 'azy', 'azz', 'baa', 'bab', 'bac', 'bax', 'bay', 'baz', 'bba', 'bbb', 'bbc', 'bbx', 'bby', 'bbz', 'bca', 'bcb', 'bcc', 'bcx', 'bcy', 'bcz', 'bxa', 'bxb', 'bxc', 'bxx', 'bxy', 'bxz', 'bya', 'byb', 'byc', 'byx', 'byy', 'byz', 'bza', 'bzb', 'bzc', 'bzx', 'bzy', 'bzz', 'caa', 'cab', 'cac', 'cax', 'cay', 'caz', 'cba', 'cbb', 'cbc', 'cbx', 'cby', 'cbz', 'cca', 'ccb', 'ccc', 'ccx', 'ccy', 'ccz', 'cxa', 'cxb', 'cxc', 'cxx', 'cxy', 'cxz', 'cya', 'cyb', 'cyc', 'cyx', 'cyy', 'cyz', 'cza', 'czb', 'czc', 'czx', 'czy', 'czz', 'xaa', 'xab', 'xac', 'xax', 'xay', 'xaz', 'xba', 'xbb', 'xbc', 'xbx', 'xby', 'xbz', 'xca', 'xcb', 'xcc', 'xcx', 'xcy', 'xcz', 'xxa', 'xxb', 'xxc', 'xxx', 'xxy', 'xxz', 'xya', 'xyb', 'xyc', 'xyx', 'xyy', 'xyz', 'xza', 'xzb', 'xzc', 'xzx', 'xzy', 'xzz', 'yaa', 'yab', 'yac', 'yax', 'yay', 'yaz', 'yba', 'ybb', 'ybc', 'ybx', 'yby', 'ybz', 'yca', 'ycb', 'ycc', 'ycx', 'ycy', 'ycz', 'yxa', 'yxb', 'yxc', 'yxx', 'yxy', 'yxz', 'yya', 'yyb', 'yyc', 'yyx', 'yyy', 'yyz', 'yza', 'yzb', 'yzc', 'yzx', 'yzy', 'yzz', 'zaa', 'zab', 'zac', 'zax', 'zay', 'zaz', 'zba', 'zbb', 'zbc', 'zbx', 'zby', 'zbz', 'zca', 'zcb', 'zcc', 'zcx', 'zcy', 'zcz', 'zxa', 'zxb', 'zxc', 'zxx', 'zxy', 'zxz', 'zya', 'zyb', 'zyc', 'zyx', 'zyy', 'zyz', 'zza', 'zzb', 'zzc', 'zzx', 'zzy', 'zzz']
How to load in Python-RSA a public RSA key from a file generated with openssl? Question: I generated a private and a public key with the following commands: openssl genrsa -out private_key.pem 512 openssl rsa -in private_key.pem -pubout -out public_key.pem I then tried to load them with a python script using Python-RSA: import os import rsa with open('private_key.pem') as privatefile: keydata = privatefile.read() privkey = rsa.PrivateKey.load_pkcs1(keydata,'PEM') with open('public_key.pem') as publicfile: pkeydata = publicfile.read() pubkey = rsa.PublicKey.load_pkcs1(pkeydata) random_text = os.urandom(8) #Generate signature signature = rsa.sign(random_text, privkey, 'MD5') print signature #Verify token try: rsa.verify(random_text, signature, pubkey) except: print "Verification failed" My python script fails when it tries to load the public key: ValueError: No PEM start marker "-----BEGIN RSA PUBLIC KEY-----" found Answer: Python-RSA uses the PEM RSAPublicKey format and the PEM RSAPublicKey format uses the header and footer lines: [openssl NOTES](http://www.openssl.org/docs/apps/rsa.html#NOTES) \-----BEGIN RSA PUBLIC KEY----- \-----END RSA PUBLIC KEY----- Output the public part of a private key in RSAPublicKey format: openssl EXAMPLES openssl rsa -in key.pem -RSAPublicKey_out -out pubkey.pem
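Alternatively, if your copy of Python-RSA is recent enough, PublicKey also has a loader for the SubjectPublicKeyInfo PEM that `openssl rsa -pubout` writes (the '-----BEGIN PUBLIC KEY-----' format), so you can keep the key file as it is; a sketch:

    import rsa

    with open('public_key.pem') as publicfile:
        pkeydata = publicfile.read()

    # reads the '-----BEGIN PUBLIC KEY-----' format produced by `openssl rsa -pubout`
    pubkey = rsa.PublicKey.load_pkcs1_openssl_pem(pkeydata)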
Capture mongo shell output using subprocess.Popen Question: I am trying the following:

    #!/usr/bin/python
    import os, subprocess

    func = 'print("Hello World")'

    x = subprocess.Popen(['mongo', '--eval', func], stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, stdin=subprocess.PIPE)

    print x.stdout.read()
    print x.stderr.read()

But all I am getting is:

    MongoDB shell version: 2.2.3

followed by two new lines. How do I capture the output of function execution? Answer: Reading the pipes gets whatever is currently inside said pipe. Your mongo is waiting to connect to the localhost. Since it doesn't return quickly enough, your read command is not getting the results. This may be because you don't have mongo running locally, but you will run into this problem repeatedly if you don't wait for the subprocess to complete.
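The simplest fix is usually to let communicate() wait for the mongo shell to exit and hand you both streams in one go, for example:

    import subprocess

    func = 'print("Hello World")'
    p = subprocess.Popen(['mongo', '--eval', func],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()   # blocks until the mongo shell exits
    print out
    print err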
Python regex -- extraneous matchings Question: I want to split a string using `-`, `+=`, `==`, `=`, `+`, and white-space as delimiters. I want to keep the delimiter unless it is white-space. I've tried to achieve this with the following code: def tokenize(s): import re pattern = re.compile("(\-|\+\=|\=\=|\=|\+)|\s+") return pattern.split(s) print(tokenize("hello-+==== =+ there")) I expected the output to be ['hello', '-', '+=', '==', '=', '=', '+', 'there'] However I got ['hello', '-', '', '+=', '', '==', '', '=', '', None, '', '=', '', '+', '', None, 'there'] Which is almost what I wanted, except that there are quite a few extraneous `None`s and empty strings. Why is it behaving this way, and how might I change it to get what I want? Answer: [re.split](http://docs.python.org/2/library/re.html#re.split) by default returns an array of the bits of strings that are in between the matches: (As @Laurence Gonsalves notes, this is its main use.) ['hello', '', '', '', '', '', '', '', 'there'] Note the empty strings in between `-` and `+=`, `+=` and `==`, etc. As the docs explain, because you're using a capture group (i.e., because you're using `(\-|\+\=|\=\=|\=|\+)` instead of `(?:\-|\+\=|\=\=|\=|\+)`, the bits that the capture group matches are interspersed: ['hello', '-', '', '+=', '', '==', '', '=', '', None, '', '=', '', '+', '', None, 'there'] `None` corresponds to where the `\s+` half of your pattern was matched; in those cases, the capture group captured nothing. From looking at the docs for re.split, I don't see an easy way to have it discard empty strings in between matches, although a simple list comprehension (or [filter](http://docs.python.org/2/library/functions.html#filter), if you prefer) can easily discard `None`s and empty strings: def tokenize(s): import re pattern = re.compile("(\-|\+\=|\=\=|\=|\+)|\s+") return [ x for x in pattern.split(s) if x ] **One last note** : For what you've described so far, this will work fine, but depending on the direction your project goes, you may want to switch to a proper parsing library. [The Python wiki](http://wiki.python.org/moin/LanguageParsing) has a good overview of some of the options here.
Assign python Decimal objects to MySQL DECIMAL columns Question: I have a table "City" defined as: CREATE TABLE `City` ( `id` int(11) NOT NULL, `lat` decimal(9,6) default NULL, `long` decimal(9,6) default NULL, `lm_index` int(11) default NULL, `num_business` int(11) default NULL, `postal_code` varchar(16) default NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 I use python with SQLAlchemy to map this table: Base = declarative_base(engine) class City(Base): '''''' __tablename__ = 'City' __table_args__ = {'autoload':True} def __init__(self, id, lat, long, lm_index, numbiz, postal): self.id = id self.lat = lat self.long = long self.lm_index = lm_index self.num_business = numbiz self.postal_code = postal If I understand the python decimal package correctly, this should work: from decimal import * getcontext().prec = 6 lat = 12.11111111 long = 46.2222222 city = City(1, Decimal(str(lat)), Decimal(str(long)), 0, 0, 0) However, I still get the same errors as I do when I don't call Decimal(): Warning: Data truncated for column 'lat' at row 1 What am I doing wrong here? Answer: My guess is, because you're sending an instance of `Decimal`, the `lat` or `long` fields are being turned to something like a string representation of the instance. Again, I'm not sure. You can use `pdb` to debug that. Anyway, there are better ways to deal with `DECIMAL` fields in `MySQL` via SQLAlchemy. Instead of handling the decimal processing yourself, you could just let sqlalchemy take care of it - from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class City(Base): __tablename__ = 'City' __table_args__ = {'autoload':True} id = Column(Integer, primary_key=True) lat = Column(Numeric(precision=9, scale=6)) long = Column(Numeric(precision=9, scale=6)) ... `Numeric` field returns `decimal.Decimal` objects by default, applying conversion as needed. Then you can initiate it as - city = City(lat=12.11111111, long=46.2222222, ...) If you override `__init__()` then you can initiate it like `City(12.1111, 46.2222)`, but I prefer keeping the column name, that way you know at a glance which field is getting what, instead of counting the number of values each time. I haven't tested the code, but it should work. Try.
Refer to group inside group with Regex Question: I am trying to find a regex that groups a word that ends on two identical symbols followed by 'ter' and splits it on the two symbols. Example: The word 'Letter' should be grouped into 'Let' and 'ter'. I'm using python and this is what i've gotten so far: match = re.search(r'(\w*)((\w)\1(er$))', str) print match.group(1) #should print 'Let' print match.group(2) #should print 'ter' The problem is that the (\w)\1 doesn't refer to the right group, because it's a group inside a group. How is this solved? Thanks in advance. Answer: I'm using named groups as that makes referencing them easier: import re pattern = r""" \b(?P<first_part>\w*(?P<splitter>\w)) # matches starting at a word boundary (?P<last_part>(?P=splitter)er\b) # matches the last letter of the first group # plus 'er' if followed by a word boundary """ matcher = re.compile(pattern, re.X) print matcher.search('letter').groupdict() # out: {'first_part': 'let', 'last_part': 'ter', 'splitter': 't'}
changing plaintext tags into HTML tags to display in browser in python Question: ok so I'm writing a function in python which takes a text document which is tagged with tags like ===, ==, ---, #text# etc. etc. (alot like wikipedia). Now my program basically has to replace those with HTML tags such as &ndash, &mdash, <>text etc. so that they can be displayed properly in a browser. This is what i've got so far: def tag_change (): for () in range () sub('--', '&ndash;') sub('---', '&mdash;') sub('''*''', '<i>*</i>') sub("'''*'''", '<b>*</b>') sub("==*==", "<h1>*</h1>") sub("#*#", "<li>*</li>") Am I on the right track? Or is there something else I need to include? I'm fairly new to this Answer: Your best bet (if you want to write your own function and avoid using an existing tool) is to use regex, which is simple enough import re def subst(text): str = '#text#' capture = re.search('#(.+)#', str) return '<li>'+ capture.group(1)+ '</li>' I hope you get the idea you could also use patterns like '==(.+)==' and so forth to capture what you want. You can view this post to learn more about using re.search and re.match <http://stackoverflow.com/a/180993/2152321> You can also learn more about regex pattern construction here <http://www.tutorialspoint.com/python/python_reg_expressions.htm>
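If you want the match and the replacement done in one step, re.sub with a backreference keeps it to a line per tag; a small sketch for a few of your markers (do the --- substitution before -- so the longer marker wins):

    import re

    def to_html(text):
        text = re.sub(r'==(.+?)==', r'<h1>\1</h1>', text)   # ==heading==  ->  <h1>heading</h1>
        text = re.sub(r'#(.+?)#', r'<li>\1</li>', text)     # #item#       ->  <li>item</li>
        text = re.sub(r'---', '&mdash;', text)
        text = re.sub(r'--', '&ndash;', text)
        return text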
Circular import is only stopping Django command, not shell or web response Question: I have two classes which import each other: **profile/models.py** class Company(models.Model): name = ... class CompanyReview(models.Model): company = models.ForeignKey(Company) from action.models import CompanyAction action = models.ForeignKey(CompanyAction) **action/models.py** from profile.models import Company class CompanyAction(models.Model): company = models.ForeignKey(Company, null = True, blank = True) The circular import works when the Django app is executed on the server or when I call view functions in the shell. However, when I import one of the classes, Django command will fail with an error (see Traceback below). **Why is that the case and only causing a problem in the`command method`? How can I avoid the error? I have tried a lazy import of the `CompanyAction` class, but it led to the same error message.** **not working alternative:** class CompanyReview(models.Model): company = models.ForeignKey(Company) from django.db.models import get_model _model = get_model('action', 'CompanyAction') action = models.ForeignKey(_model) Interestingly, the variable `_model` is **empty** if I execute my command function and the classes are imported. When I load `./manage.py shell`, the variable contains the correct class name. Why is that the case? **Traceback** (virtual-env)PC:neurix$ python manage.py close_action Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/Users/Development/virtual-re/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line utility.execute() File "/Users/Development/virtual-re/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Users/Development/virtual-re/lib/python2.7/site-packages/django/core/management/__init__.py", line 272, in fetch_command klass = load_command_class(app_name, subcommand) File "/Users/Development/virtual-re/lib/python2.7/site-packages/django/core/management/__init__.py", line 77, in load_command_class module = import_module('%s.management.commands.%s' % (app_name, name)) File "/Users/Development/virtual-re/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module __import__(name) File "/Users/Development/project/apps/action/management/commands/close_action.py", line 2, in <module> from action.models import CompanyAction File "/Users/Development/project/apps/action/models.py", line 26, in <module> from profile.models import Company File "/Users/Development/apps/profile/models.py", line 436, in <module> class CompanyReview(models.Model): File "/Users/Development/project/apps/profile/models.py", line 446, in CompanyReview action = models.ForeignKey(_model) File "/Users/Development/virtual-re/lib/python2.7/site-packages/django/db/models/fields/related.py", line 993, in __init__ assert isinstance(to, six.string_types), "%s(%r) is invalid. First parameter to ForeignKey must be either a model, a model name, or the string %r" % (self.__class__.__name__, to, RECURSIVE_RELATIONSHIP_CONSTANT) AssertionError: ForeignKey(None) is invalid. 
First parameter to ForeignKey must be either a model, a model name, or the string 'self' Answer: Django has a system for stopping circular imports on foreign keys detailed here: <https://docs.djangoproject.com/en/dev/ref/models/fields/#foreignkey> You would want to do something like: class CompanyReview(models.Model): company = models.ForeignKey(Company) action = models.ForeignKey('action.CompanyAction') class CompanyAction(models.Model): company = models.ForeignKey('profile.Company', null = True, blank = True)
Generating (StartDateOfMonth, EndDateofMonth) for every month within date range Question: I want to generate a list of (StartDateOfMonth, EndDateOfMonth) values for a specified date range. e.x. time range: 2011-09-11, 2013-04-24, the list should be : [('2011-9-11', '2011-9-30'), ('2011-10-01', '2011-10-31'), ('2011-11-01', '2011-11-30'), ('2011-12-01', '2011-12-31'), ('2012-1-01', '2012-1-31'), ('2012-2-01', '2012-2-29'), ('2012-3-01', '2012-3-31'), ('2012-4-01', '2012-4-30'), ('2012-5-01', '2012-5-31'), ('2012-6-01', '2012-6-30'), ('2012-7-01', '2012-7-31'), ('2012-8-01', '2012-8-31'), ('2012-9-01', '2012-9-30'), ('2012-10-01', '2012-10-31'), ('2012-11-01', '2012-11-30'), ('2012-12-01', '2012-12-31'), ('2013-1-01', '2013-1-31'), ('2013-2-01', '2013-2-28'), ('2013-3-01', '2013-3-31'), ('2013-4-01', '2013-4-24')] I have come up with a somewhat ugly looking code. This is partly because of my lack of list compherension and other capibilities of Python. The code is: def getMonthRanges(startDate, endDate): dateRange = [] allYears= [eachYear for eachYear in range(startDate.year, endDate.year+1)] allMonths= [eachMonth for eachMonth in range(1, 13)] for eachYear in allYears: for eachMonth in allMonths: if eachYear == startDate.year: if eachMonth == startDate.month: startOfMonth = str(eachYear)+'-'+str(eachMonth) + '-'+str(startDate.day) endOfMonth = str(eachYear)+ '-'+str(eachMonth) + '-'+str(calendar.monthrange(eachYear, eachMonth)[1]) dateRange.append((startOfMonth, endOfMonth)) elif eachMonth > startDate.month: startOfMonth = str(eachYear)+ '-'+str(eachMonth) + '-01' endOfMonth = str(eachYear)+'-'+str(eachMonth)+ '-'+ str(calendar.monthrange(eachYear, eachMonth)[1]) dateRange.append((startOfMonth, endOfMonth)) else: continue if eachYear == endDate.year: if eachMonth == endDate.month: startOfMonth = str(eachYear)+'-'+str(eachMonth) + '-01' endOfMonth = str(eachYear)+ '-'+str(eachMonth) + '-'+str(endDate.day) dateRange.append((startOfMonth, endOfMonth)) break elif eachMonth < endDate.month: startOfMonth = str(eachYear)+ '-'+str(eachMonth) + '-01' endOfMonth = str(eachYear)+'-'+str(eachMonth)+ '-'+ str(calendar.monthrange(eachYear, eachMonth)[1]) dateRange.append((startOfMonth, endOfMonth)) elif eachYear > startDate.year and eachYear < endDate.year: startOfMonth = str(eachYear)+ '-'+str(eachMonth) + '-01' endOfMonth = str(eachYear)+'-'+str(eachMonth)+ '-'+ str(calendar.monthrange(eachYear, eachMonth)[1]) dateRange.append((startOfMonth, endOfMonth)) return dateRange Requesting feedback from other developers if this code can be condensed/improved? 
Answer: Here's a way to do it using only the [_datetime_ module](http://docs.python.org/2.7/library/datetime.html#module-datetime): >>> from datetime import date, timedelta >>> from pprint import pprint >>> def next_month(x): 'Advance the first of the month, wrapping the year if necessary' if x.month < 12: return x.replace(month=x.month+1, day=1) return x.replace(year=x.year+1, month=1) >>> def getMonthRanges(startDate, endDate): result = [] first = startDate while first < endDate: nm = next_month(first) last = min(endDate, nm - timedelta(days=1)) result.append([str(first), str(last)]) first = nm return result >>> pprint(getMonthRanges(date(2011, 9, 11), date(2013, 4, 24))) [['2011-09-11', '2011-09-30'], ['2011-10-01', '2011-10-31'], ['2011-11-01', '2011-11-30'], ['2011-12-01', '2011-12-31'], ['2012-01-01', '2012-01-31'], ['2012-02-01', '2012-02-29'], ['2012-03-01', '2012-03-31'], ['2012-04-01', '2012-04-30'], ['2012-05-01', '2012-05-31'], ['2012-06-01', '2012-06-30'], ['2012-07-01', '2012-07-31'], ['2012-08-01', '2012-08-31'], ['2012-09-01', '2012-09-30'], ['2012-10-01', '2012-10-31'], ['2012-11-01', '2012-11-30'], ['2012-12-01', '2012-12-31'], ['2013-01-01', '2013-01-31'], ['2013-02-01', '2013-02-28'], ['2013-03-01', '2013-03-31'], ['2013-04-01', '2013-04-24']]
Python: How to use %%% when parsing text Question: I'm trying to parse the text in the ebooks at gutenberg.org to extract info about the books, for example, the title. Every book on there has a line like this:

    *** START OF THIS PROJECT GUTENBERG EBOOK THE ADVENTURES OF SHERLOCK HOLMES ***

I'd like to use something like this:

    book_name=()
    index = 0
    for line in finalLines:
        index+=1
        if "*** START OF THIS PROJECT GUTENBERG EBOOK "%%%"***" in line:
            print(index, line)
            book_name=%%%

but I'm obviously not doing it right. Can someone show me how it's done?? Answer: Regex is the way to go:

    import re

    title_regex = re.compile(r'\*{3} START OF THIS PROJECT GUTENBERG EBOOK (.*?) \*{3}')

    for index, line in enumerate(finalLines):
        match = title_regex.match(line)
        if match:
            book_name = match.group(1)
            print(index, book_name)

You can also parse it line-by-line; note that `urlopen` returns bytes in Python 3, so decode before doing string comparisons:

    import urllib.request

    url = 'http://www.gutenberg.org/cache/epub/1342/pg1342.txt'
    book = urllib.request.urlopen(url)
    lines = book.read().decode('utf-8').splitlines()  # adjust the encoding if needed
    book.close()

    reached_start = False
    metadata = {}
    for index, line in enumerate(lines):
        if line.startswith('***'):
            if not reached_start:
                reached_start = True
            else:
                break
        if not reached_start and ':' in line:
            key, _, value = line.partition(':')
            metadata[key.lower()] = value.strip()
Read Specific Columns from csv file with Python csv Question: I'm trying to parse through a csv file and extract the data from only specific columns. Example csv: ID | Name | Address | City | State | Zip | Phone | OPEID | IPEDS | 10 | C... | 130 W.. | Mo.. | AL... | 3.. | 334.. | 01023 | 10063 | I'm trying to capture only specific columns, say `ID`, `Name`, `Zip` and `Phone`. Code I've looked at has led me to believe I can call the specific column by its corresponding number, so ie: `Name` would correspond to `2` and iterating through each row using `row[2]` would produce all the items in column 2. Only it doesn't. Here's what I've done so far: import sys, argparse, csv from settings import * # command arguments parser = argparse.ArgumentParser(description='csv to postgres',\ fromfile_prefix_chars="@" ) parser.add_argument('file', help='csv file to import', action='store') args = parser.parse_args() csv_file = args.file # open csv file with open(csv_file, 'rb') as csvfile: # get number of columns for line in csvfile.readlines(): array = line.split(',') first_item = array[0] num_columns = len(array) csvfile.seek(0) reader = csv.reader(csvfile, delimiter=' ') included_cols = [1, 2, 6, 7] for row in reader: content = list(row[i] for i in included_cols) print content and I'm expecting that this will print out only the specific columns I want for each row except it doesn't, I get the last column only. Answer: The only way you would be getting the last column from this code is if you don't include your print statement **in** your `for` loop. This is most likely the end of your code: for row in reader: content = list(row[i] for i in included_cols) print content You want it to be this: for row in reader: content = list(row[i] for i in included_cols) print content Now that we have covered your mistake, I would like to take this time to introduce you to the [pandas](http://pandas.pydata.org/) module. Pandas is spectacular for dealing with csv files, and the following code would be all you need to read a csv and save an entire column into a variable: import pandas as pd df = pd.read_csv(csv_file) saved_column = df.column_name #you can also use df['column_name'] so if you wanted to save all of the info in your column `Names` into a variable, this is all you need to do: names = df.Names It's a great module and I suggest you look into it. If for some reason your print statement was in `for` loop and it was still only printing out the last column, which shouldn't happen, but let me know if my assumption was wrong. Your posted code has a lot of indentation errors so it was hard to know what was supposed to be where. Hope this was helpful!
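And if those four columns are all you ever need from the file, pandas can be told to read only them, which skips the rest of each row entirely (this assumes a comma-separated file whose header row uses exactly these names, and a pandas version that supports usecols):

    import pandas as pd

    wanted = ['ID', 'Name', 'Zip', 'Phone']
    df = pd.read_csv(csv_file, usecols=wanted)
    print(df['Name'])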
Read python in processing Question: Does anyone know how to read a python file in processing without using processing.py or other third party library or platform? I have a python file that can generate a text and want my processing read it real time. But it seems like there is something wrong with the “loadStrings” stuff since my three-line text are not generated at the same time, my third line is always showed a little bit slower than my first two lines so the processing sketch messed it up at some point. How to deal with the problem? String[] lines; PFont font; void setup() { size(800, 600); font = createFont("Arial", 16); frameRate(2); //lines = loadStrings("output.txt"); } void draw() { background(255); textFont(font); fill(0); lines = loadStrings("output.txt"); for (int i = 0; i < 3; i++) { String word = lines[i]; text(word, random(width), random(height)); } // noLoop(); } My python sketch: class MarkovGenerator(object): def __init__(self, n, max): self.n = n # order (length) of ngrams self.max = max # maximum number of elements to generate self.ngrams = dict() # ngrams as keys; next elements as values beginning = tuple(["China", "is"]) # beginning ngram of every line beginning2 = tuple(["But", "it"]) self.beginnings = list() self.beginnings.append(beginning) self.beginnings.append(beginning2) def tokenize(self, text): return text.split(" ") def feed(self, text): tokens = self.tokenize(text) # discard this line if it's too short if len(tokens) < self.n: return # store the first ngram of this line #beginning = tuple(tokens[:self.n]) #self.beginnings.append(beginning) for i in range(len(tokens) - self.n): gram = tuple(tokens[i:i+self.n]) next = tokens[i+self.n] # get the element after the gram # if we've already seen this ngram, append; otherwise, set the # value for this key as a new list if gram in self.ngrams: self.ngrams[gram].append(next) else: self.ngrams[gram] = [next] # called from generate() to join together generated elements def concatenate(self, source): haha = list() kk = list() haha = " ".join(source) ouou = haha.split(".") kk = ouou[0] return kk # return " ".join(source) # generate a text from the information in self.ngrams def generate(self,i): from random import choice # get a random line beginning; convert to a list. #current = choice(self.beginnings) current = self.beginnings[i] output = list(current) for i in range(self.max): if current in self.ngrams: possible_next = self.ngrams[current] next = choice(possible_next) output.append(next) # get the last N entries of the output; we'll use this to look up # an ngram in the next iteration of the loop current = tuple(output[-self.n:]) else: break output_str = self.concatenate(output) return output_str def search_facebook_posts(self): import json import urllib import time FB = list() query = {'q': "feel", 'limit': 200} resp = urllib.urlopen('http://graph.facebook.com/search?' 
+ urllib.urlencode(query)) data = json.loads(resp.read()) posts = list() for item in data['data']: if 'message' in item: posts.append(item) for post in posts: FB.append(post['message'].encode('ascii', 'replace')) return FB def together(self): import re sentences = list() manysentences = list() togetherlist = self.search_facebook_posts() for line in togetherlist: line = line.replace(".", "\n") line = line.replace(",", "\n") line = line.replace("?", "\n") line = line.replace(";", "\n") line = line.replace("!", "\n") line = line.replace("...", "\n") line = line.replace(":", "\n") sentenca = line.split("\n") for i in range(len(sentenca)): sentences.append(sentenca[i]) for sentence in sentences: if "feel" in sentence: for matching in re.findall(r'\b[Ff]eel(.*)$',sentence): manysentences.append(matching) sentencesnew = random.choice(manysentences) haha = "I feel" + sentencesnew return haha def namelist(self): import random namelisty = list() for line in open("namelist"): namelisty.append(line+"said") thisname = random.choice(namelisty) return thisname if __name__ == '__main__': import sys import random import codecs generator = MarkovGenerator(n=2, max=16) for line in open("china"): line = line.strip() generator.feed(line) print generator.together()+"." Answer: You could use java's Runtime and [Process](http://docs.oracle.com/javase/6/docs/api/java/lang/Process.html) class: import java.io.*; void setup() { try { Process p = Runtime.getRuntime().exec("python /path/to/your/script.py arguments"); BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); String line=null; while((line=input.readLine()) != null) { System.out.println(line); } int exitWith = p.waitFor(); System.out.println("Exited with error code "+exitWith); } catch (Exception e) { e.printStackTrace(); } } so for example with a minimal ten.py: import sys if len(sys.argv) > 0: print sys.argv[1] * 10 I'd see a message printed 10 times: import java.io.*; void setup() { try { Process p = Runtime.getRuntime().exec("python /Users/hm/Documents/Processing/tests/CMD/ten.py hello"); BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); String line=null; while((line=input.readLine()) != null) { System.out.println(line); } int exitWith = p.waitFor(); System.out.println("Exited with error code "+exitWith); } catch (Exception e) { e.printStackTrace(); } }
SWIG: difference between %import and %include Question: The [SWIG docs](http://www.swig.org/Doc2.0/Preprocessor.html) explain these two directives as follows: * `%include`: "To include another file into a SWIG interface, use the `%include` directive ... Unlike, `#include`, `%include` includes each file once (and will not reload the file on subsequent `%include` declarations). Therefore, it is not necessary to use include-guards in SWIG interfaces." * `%import`: "SWIG provides another file inclusion directive with the `%import` directive ... The purpose of `%import` is to collect certain information from another SWIG interface file or a header file without actually generating any wrapper code. Such information generally includes type declarations (e.g., typedef) as well as C++ classes that might be used as base-classes for class declarations in the interface. " My question is what are the differences between these two directives and what are the pros/cons of using each? * * * P.S. Just for some background info. I have a simple C++ - python extension that builds and works when I use either of the above directives. One, however (`%import`) gives fewer warnings when I call `swig -c++ -python my_file.i`. Answer: The way SWIG works is that it assumes that any valid C++ declarations you provide are to be exposed to the target language. Therefore, any C++ code that SWIG is provided will be used to generate an interface. `%import` is an inclusion mechanism that _prevents_ the generation of the interface for the code it includes. That's the difference. So the question you ask when including a header is, "Do I want all of the stuff in this header to be exposed to the target language?" If the answer is "no", then you use `%import`.
Permission denied when using cat - even as root? Question: How is it possible I am getting a permission denied using the below? I am using python 2.7 and ubuntu 12.04 Below is my mapper.py file import sys import json for line in sys.stdin: line = json.loads(line) key = "%s:%s" % (line['user_key'],line['item_key']) value = 1 sys.stdout.write('%s\t%s\n' % (key,value)) Below is my data file {"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.568242, "id": "d518b7c5-a180-439b-8036-2bb40ca080cd", "item_key": "heythat:htitem:1000"} {"action": "click", "user_key": "heythat:htuser:32", "utc": 1368339334.573988, "id": "cc8c35ec-9e67-4ef8-a189-6116c7d0336a", "item_key": "heythat:htitem:1001"} {"action": "click", "user_key": "heythat:htuser:32", "utc": 1368339334.575226, "id": "6c457f9a-afc2-4b61-be2f-d4ea2863aa69", "item_key": "heythat:htitem:1002"} {"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.575315, "id": "e0b08c30-459b-4f77-b9a4-05939457ab99", "item_key": "heythat:htitem:1000"} {"action": "click", "user_key": "heythat:htuser:32", "utc": 1368339334.57538, "id": "90084ea2-75c6-4b8a-bc22-9d9f2da1c0de", "item_key": "heythat:htitem:1002"} {"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.57538, "id": "2f76a861-2b66-430a-b70d-2af6e1b9f365", "item_key": "heythat:htitem:1001"} {"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.57538, "id": "282eec8a-7f6d-4ad3-917a-aae049062d87", "item_key": "heythat:htitem:1002"} {"action": "show", "user_key": "heythat:htuser:32", "utc": 1368339334.575447, "id": "bc48a6bc-f8f8-420e-9b80-0bd0c2bbde0d", "item_key": "heythat:htitem:1000"} {"action": "show", "user_key": "that:htuser:32", "utc": 1368339334.575513, "id": "14b49763-e2fe-4beb-bff6-f4b34b3d2ef3", "item_key": "that:htitem:1001"} {"action": "show", "user_key": "that:htuser:32", "utc": 1368339334.575596, "id": "983cbcf3-4375-4b3b-86ed-a8fbc86ff4b3", "item_key": "that:htitem:1002"} Below is my error cat /home/ubuntu/workspace/logging/data.txt | /home/ubuntu/workspace/logging/mapper.py bash: /home/ubuntu/workspace/logging/mapper.py: Permission denied Answer: Your `mapper.py` file needs to be executable (on some executable partition) so `chmod a+x mapper.py` The underlying [execve(2)](http://man7.org/linux/man-pages/man2/execve.2.html) syscall is failing with EACCES Execute permission is denied for the file or a script or ELF interpreter. EACCES The file system is mounted noexec.
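Note also that the mapper.py shown has no interpreter line, so even once it is executable the system has no way to know it should be run with Python. Either add a shebang as the very first line of mapper.py:

    #!/usr/bin/python

and then set the execute bit:

    chmod a+x /home/ubuntu/workspace/logging/mapper.py
    cat /home/ubuntu/workspace/logging/data.txt | /home/ubuntu/workspace/logging/mapper.py

or skip the execute bit entirely and run it through the interpreter:

    cat /home/ubuntu/workspace/logging/data.txt | python /home/ubuntu/workspace/logging/mapper.py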
wx.Python: Passing control between multiple panels Question: I'm a newbie to wxPython, and have researched similar questions, but can't specifically find an answer to my question. I'm creating two panels with a splitter. Each panel has a number of widgets. I'd like to have a widget in one panel control some properties of the other and vice versa) In the example, I'm trying to change the background of `RightPanel` from a button in `LeftPanel`. I'm obviously doing something wrong as a I get an error: > TypeError: **init**() takes exactly 2 arguments (1 given) Code: import wx import wx.grid as gridlib import pyodbc class RightPanel(wx.Panel): """""" def __init__(self, parent): """Constructor""" wx.Panel.__init__(self, parent=parent) grid = gridlib.Grid(self) grid.CreateGrid(5,5) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(grid, 0, wx.EXPAND) self.SetSizer(sizer) class LeftPanel(wx.Panel): """""" def __init__(self, parent): """Constructor""" wx.Panel.__init__(self, parent=parent) self.create_controls() self.SetBackgroundColour("light green") def create_controls(self): self.h_sizer = wx.BoxSizer(wx.HORIZONTAL) self.v_sizer = wx.BoxSizer(wx.VERTICAL) self.button = wx.Button(self, label="Press me!") self.button.Bind(wx.EVT_BUTTON, self.on_button_pressed) self.v_sizer.Add(self.button, 0) self.v_sizer.Add(self.h_sizer, 0, wx.EXPAND) self.SetSizer(self.v_sizer) def on_button_pressed(Panel,event): RightPanel().SetBackgroundColour("light blue") class MyForm(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, wx.ID_ANY, "DB Viewer",size=(350, 250)) splitter = wx.SplitterWindow(self) leftP = LeftPanel(splitter) rightP = RightPanel(splitter) splitter.SplitVertically(leftP, rightP) splitter.SetMinimumPaneSize(20) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(splitter, 1, wx.EXPAND) self.SetSizer(sizer) if __name__ == "__main__": app = wx.App(False) frame = MyForm() frame.Show() app.MainLoop() Any help greatly appreciated. 
Regards Answer: A clean design can be achieve using pubsub: import wx import wx.grid as gridlib from wx.lib.pubsub import pub #import pyodbc class RightPanel(wx.Panel): """""" def __init__(self, parent): """Constructor""" wx.Panel.__init__(self, parent=parent) grid = gridlib.Grid(self) grid.CreateGrid(5,5) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(grid, 0, wx.EXPAND) self.SetSizer(sizer) pub.subscribe(self.changeColourEvent, "MOOD_CHANGE") def changeColourEvent(self, value): self.SetBackgroundColour(value) self.Refresh() class LeftPanel(wx.Panel): """""" def __init__(self, parent): """Constructor""" wx.Panel.__init__(self, parent=parent) self.create_controls() self.SetBackgroundColour("grey") def create_controls(self): self.h_sizer = wx.BoxSizer(wx.HORIZONTAL) self.v_sizer = wx.BoxSizer(wx.VERTICAL) self.bbutton = wx.Button(self, label="Got dem blues?!") self.bbutton.Bind(wx.EVT_BUTTON, self.blues_button_pressed) self.hbutton = wx.Button(self, label="Happy happy!") self.hbutton.Bind(wx.EVT_BUTTON, self.happy_button_pressed) self.v_sizer.Add(self.bbutton, 0) self.v_sizer.Add(self.hbutton, 0) self.v_sizer.Add(self.h_sizer, 0, wx.EXPAND) self.SetSizer(self.v_sizer) def blues_button_pressed(self,event): pub.sendMessage("MOOD_CHANGE", value = "blue") def happy_button_pressed(self,event): pub.sendMessage("MOOD_CHANGE", value = "yellow") class MyForm(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, wx.ID_ANY, "DB Viewer",size=(350, 250)) splitter = wx.SplitterWindow(self) leftP = LeftPanel(splitter) rightP = RightPanel(splitter) splitter.SplitVertically(leftP, rightP) splitter.SetMinimumPaneSize(20) sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(splitter, 1, wx.EXPAND) self.SetSizer(sizer) if __name__ == "__main__": app = wx.App(False) frame = MyForm() frame.Show() app.MainLoop() The advantage that this sort of approach brings is that it means that no pane is dependent on the design of any other pane. You can see that neither MyForm nor RightPanel needs to know whether LeftPanel is deciding that it's time to change colour based on a button or a checkbox or any other mechanism. In this code, MyForm cares only about instantiating two panes. It does not get tangled up in the logic of what goes between them. It's also readily extensible in the type of information that objects (in this case, panes) can pass to each other. It also allows for other elements to be added to the design that care about the same kinds of thing (in my example case, mood changes) without impacting the code of anything other than themself.
Generating a dense matrix from a sparse matrix in numpy python Question: I have a Sqlite database that contains the following type of schema:

    termcount(doc_num, term, count)

This table contains terms with their respective counts in the document, like

    (doc1, term1, 12)
    (doc1, term 22, 2)
    .
    .
    (docn, term1, 10)

This matrix can be considered a sparse matrix as each document contains very few terms that will have a non-zero value. How would I create a dense matrix from this sparse matrix using numpy, as I have to calculate the similarity among documents using cosine similarity? This dense matrix will look like a table that has docid as the first column, all the terms listed as the first row, and the remaining cells containing the counts. Answer:

    from scipy.sparse import csr_matrix
    A = csr_matrix([[1,0,2],[0,3,0]])
    >>>A
    <2x3 sparse matrix of type '<type 'numpy.int64'>'
        with 3 stored elements in Compressed Sparse Row format>
    >>> A.todense()
    matrix([[1, 0, 2],
            [0, 3, 0]])
    >>> A.toarray()
    array([[1, 0, 2],
           [0, 3, 0]])

this is an example of how to convert a sparse matrix to a dense matrix taken from [scipy](http://www.scipy.org/SciPyPackages/Sparse) This example is now listed in the stackoverflow docs on SciPy [here](http://stackoverflow.com/documentation/scipy/2128/introduction-to-scipy/9963/convert-a-sparse-matrix-to-a-dense-matrix-using-scipy#t=201607242031316475931)
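Since the counts come out of SQLite as (doc_num, term, count) rows, you can also build the sparse matrix straight from those triples and only densify at the very end; a rough sketch (the sample rows and variable names are just placeholders for your query results):

    import numpy as np
    from scipy.sparse import coo_matrix

    # e.g. cursor.execute("SELECT doc_num, term, count FROM termcount").fetchall()
    rows = [('doc1', 'term1', 12), ('doc1', 'term22', 2), ('docn', 'term1', 10)]

    doc_ids  = {d: i for i, d in enumerate(sorted({d for d, t, c in rows}))}
    term_ids = {t: j for j, t in enumerate(sorted({t for d, t, c in rows}))}

    i = [doc_ids[d]  for d, t, c in rows]
    j = [term_ids[t] for d, t, c in rows]
    data = [c for d, t, c in rows]

    M = coo_matrix((data, (i, j)), shape=(len(doc_ids), len(term_ids))).tocsr()
    dense = M.toarray()   # only densify if it really fits in memory

For the cosine-similarity step you can usually keep working with the sparse form directly instead of ever converting to dense.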
Python and zlib: Terribly slow decompressing concatenated streams Question: I've been supplied with a zipped file containing multiple individual streams of compressed XML. The compressed file is 833 mb. If I try to decompress it as a single object, I only get the first stream (about 19 kb). I've modified the following code supplied as a answer to an [older question](http://stackoverflow.com/questions/12147484/extract-zlib-compressed- data-from-binary-file-in-python) to decompress each stream and write it to a file: import zlib outfile = open('output.xml', 'w') def zipstreams(filename): """Return all zip streams and their positions in file.""" with open(filename, 'rb') as fh: data = fh.read() i = 0 print "got it" while i < len(data): try: zo = zlib.decompressobj() dat =zo.decompress(data[i:]) outfile.write(dat) zo.flush() i += len(data[i:]) - len(zo.unused_data) except zlib.error: i += 1 outfile.close() zipstreams('payload') infile.close() This code runs and produces the desired result (all the XML data decompressed to a single file). The problem is that it takes several days to work! Even though there are tens of thousands of streams in the compressed file, it still seems like this should be a much faster process. Roughly 8 days to decompress 833mb (estimated 3gb raw) suggests that I'm doing something very wrong. Is there another way to do this more efficiently, or is the slow speed the result of a read-decompress-write---repeat bottleneck that I'm stuck with? Thanks for any pointers or suggestions you have! Answer: It's hard to say very much without more specific knowledge of the file format you're actually dealing with, but it's clear that your algorithm's handling of substrings is quadratic-- not a good thing when you've got tens of thousands of them. So let's see what we know: You say that the vendor states that they are > using the standard zlib compression library.These are the same compression > routines on which the gzip utilities are built. From this we can conclude that the component streams are in **raw zlib format,** and are _not_ encapsulated in a gzip wrapper (or a PKZIP archive, or whatever). The authoritative documentation on the ZLIB format is here: <http://tools.ietf.org/html/rfc1950> So let's assume that your file is exactly as you describe: **A 32-byte header, followed by raw ZLIB streams concatenated together, without any other stuff in between.** (**Edit:** That's not the case, after all). Python's [zlib documentation](http://docs.python.org/2/library/zlib.html) provides a `Decompress` class that is actually pretty well suited to churning through your file. It includes an attribute `unused_data` whose [documentation](http://docs.python.org/2/library/zlib.html#zlib.Decompress.unused_data) states clearly that: > The only way to determine where a string of compressed data ends is by > actually decompressing it. This means that when compressed data is contained > part of a larger file, you can only find the end of it by reading data and > feeding it followed by some non-empty string into a decompression object’s > decompress() method until the unused_data attribute is no longer the empty > string. So, this is what you can do: Write a loop that reads through `data`, say, one block at a time (no need to even read the entire 800MB file into memory). Push each block to the `Decompress` object, and check the `unused_data` attribute. When it becomes non-empty, you've got a complete object. 
Write it to disk, create a new decompress object and initialize iw with the `unused_data` from the last one. This just might work (untested, so check for correctness). **Edit:** Since you do have other data in your data stream, I've added a routine that aligns to the next ZLIB start. You'll need to find and fill in the two-byte sequence that identifies a ZLIB stream in _your_ data. (Feel free to use your old code to discover it.) While there's no fixed ZLIB header in general, it should be the same for each stream since it consists of [protocol options and flags,](http://tools.ietf.org/html/rfc1950) which are presumably the same for the entire run. import zlib # FILL IN: ZHEAD is two bytes with the actual ZLIB settings in the input ZHEAD = CMF+FLG def findstart(header, buf, source): """Find `header` in str `buf`, reading more from `source` if necessary""" while buf.find(header) == -1: more = source.read(2**12) if len(more) == 0: # EOF without finding the header return '' buf += more offset = buf.find(header) return buf[offset:] You can then advance to the start of the next stream. I've added a `try`/`except` pair since the same byte sequence might occur outside a stream: source = open(datafile, 'rb') skip_ = source.read(32) # Skip non-zlib header buf = '' while True: decomp = zlib.decompressobj() # Find the start of the next stream buf = findstart(ZHEAD, buf, source) try: stream = decomp.decompress(buf) except zlib.error: print "Spurious match(?) at output offset %d." % outfile.tell(), print "Skipping 2 bytes" buf = buf[2:] continue # Read until zlib decides it's seen a complete file while decomp.unused_data == '': block = source.read(2**12) if len(block) > 0: stream += decomp.decompress(block) else: break # We've reached EOF outfile.write(stream) buf = decomp.unused_data # Save for the next stream if len(block) == 0: break # EOF outfile.close() PS 1. If I were you I'd write each XML stream into a separate file. PS 2. You can test whatever you do on the first MB of your file, till you get adequate performance.
ugettext and ugettext_lazy functions not recognized by makemessages in Python Django Question: I'm working with Django 1.5.1 and I'm experiencing some "strange behaviour" with translations. I'm using `ugettext` and `ugettext_lazy` in the same Python file. If I organize the imports as: from django.utils.translation import ugettext as trans from django.utils.translation import ugettext_lazy as _ or from django.utils.translation import ugettext as trans, ugettext_lazy as _ The strings marked as `trans("string")` are skipped when running `makemessages` command. However, if I don't rename the `ugettext` it works well with both versions: from django.utils.translation import ugettext from django.utils.translation import ugettext_lazy as _ or from django.utils.translation import ugettext, ugettext_lazy as _ Now `trans("string")` works well. **So, does anybody know why this import renaming is causing the renamed function not to be called?** Is this an actual Python "limitation" I didn't know when renaming more than one function inside the same module? * * * **UPDATE** After some testing, I've realized that even creating an empty python module inside an app with the following code it doesn't work: from django.utils.translation import ugettext_lazy as translate a = translate("string") However, if using `_` for the alias it works: from django.utils.translation import ugettext_lazy as _ a = _("string") My **conclusion** is: _You can only use the`_` alias for `ugettext` and `ugettext_lazy` (or any other related translation function) in Django or else it won't be recognized by `makemessages` command_. The technical explanation can be found in Robert Lujo's answer. Thanks! Answer: Django command utility makemessages internally calls [xgettext](https://www.gnu.org/savannah- checkouts/gnu/gettext/manual/html_node/xgettext-Invocation.html) program like this: cmd = ( 'xgettext -d %s -L Python %s %s --keyword=gettext_noop ' '--keyword=gettext_lazy --keyword=ngettext_lazy:1,2 ' '--keyword=ugettext_noop --keyword=ugettext_lazy ' '--keyword=ungettext_lazy:1,2 --keyword=pgettext:1c,2 ' '--keyword=npgettext:1c,2,3 --keyword=pgettext_lazy:1c,2 ' '--keyword=npgettext_lazy:1c,2,3 --from-code UTF-8 ' '--add-comments=Translators -o - "%s"' % (domain, wrap, location, work_file)) (source can be found [here](https://github.com/django/django/blob/master/django/core/management/commands/makemessages.py#L98)). So, some keywords are predefined by the [xgettext](https://www.gnu.org/savannah- checkouts/gnu/gettext/manual/html_node/xgettext-Invocation.html) utility (check reference for --keyword): * for python - gettext, ugettext, dgettext, ngettext, ungettext, dngettext, _ and some are added by django utility: * gettext_lazy , ngettext_lazy , ugettext_noop , ugettext_lazy , ungettext_lazy , pgettext , npgettext , pgettext_lazy , npgettext_lazy Keyword **trans** is not in any of these keyword sets, so you should not use it for marking texts for translations.
Proper way to extract JSON data from the web given an API Question: I have an URL in the form of

    http://site.com/source.json?s=

And I wish to use Python to create a class that will allow me to parse in my "s" query, send it to that site, and extract out the JSON results. I've tried importing json/setting up the class, but nothing ever really works and I'm trying to learn good practices at the same time. Can anyone help me out? Answer: Ideally, you should (especially when starting out), use the [requests](http://docs.python-requests.org/en/latest/) library. This would enable your code to be:

    import requests

    r = requests.get('http://site.com/source.json', params={'s': 'somevalue/or other here'})
    json_result = r.json()

This automatically escapes the parameters, and automatically converts your JSON result into a Python dict....
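Since you mentioned wanting a class, a minimal wrapper around that call could look like the following (the base URL and the "s" parameter come from your example; the class and method names are just placeholders):

    import requests

    class JsonApiClient(object):
        BASE_URL = 'http://site.com/source.json'

        def search(self, query):
            r = requests.get(self.BASE_URL, params={'s': query})
            r.raise_for_status()   # raise an exception on HTTP errors
            return r.json()

    client = JsonApiClient()
    results = client.search('some value')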
Odd exponent on plotting narrow range of data Question: I get a some very odd behaviour of matplotlib in an ipython notebook when trying to plot this data: [ -142.8916729, -142.89161936, -142.89161954, -142.89162066, -142.89162031, -142.89162033, -142.89162049, -142.89162055, -142.89162054, -142.89162084, -142.89162155, -142.89162204, -142.89162172, -142.89162157, -142.89162164, -142.89162162, -142.89162163, -142.89162165, -142.89162166, -142.89162166, -142.89162166, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162152 ] The result is this, notice the exponent: ![Graph with odd exponent](http://i.stack.imgur.com/kWwuN.png) How can I get rid of this? Answer: It matplotlib terms, it's called an offset, rather than an exponent. (For easier searching in the future.) By default, matplotlib assumes that you want to display the data ranges as compactly as possible, so it will include offsets and/or multipliers to make the display a bit more compactly (notice the `-` instead of an `x`.) If you don't want your plot to be displayed this way, the easiest way to remove it is to use `plt.ticklabel_format(useOffset=False)` (You'll have to overlook the camelCase.) As a quick example: import matplotlib.pyplot as plt import numpy as np x = np.array([-142.8916729, -142.89161936, -142.89161954, -142.89162066, -142.89162031, -142.89162033, -142.89162049, -142.89162055, -142.89162054, -142.89162084, -142.89162155, -142.89162204, -142.89162172, -142.89162157, -142.89162164, -142.89162162, -142.89162163, -142.89162165, -142.89162166, -142.89162166, -142.89162166, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162167, -142.89162152]) fig, ax = plt.subplots() ax.plot(x) ax.ticklabel_format(useOffset=False) plt.show() ![enter image description here](http://i.stack.imgur.com/ANiyg.png)
Bytes to String or String to Bytes? Question: I have read some of the examples here but I am such a novice I don't understand some of them and others don't seem to work (probably because I'm such a novice but...

    import urllib.request
    import re

    Symbols = ['aapl', 'spy' , 'goog' , 'nflx']
    i = 0

    while i < len(Symbols):
        Yahoo='http://finance.yahoo.com/q?s=' + Symbols[i]
        htmlfile = urllib.request.urlopen(Yahoo)
        htmltext = htmlfile.read()
        string = Symbols[i]
        symbol = string.encode('utf-8')
        pattern= re.compile(b'<span id="yfs_l84_'+ symbol +'">(.+?)</span>')
        price= re.findall(pattern, htmltext)
        print('The price of' + str(Symbols[i]) + ' is ' + str(price))
        i+=1

This doesn't work because in the `re.compile` statement I am trying to concatenate str and bytes. I need to convert the string to bytes so that I can later iterate over a list of symbols and scrape the latest stock price from yahoo finance. I have a feeling there is something wrong with my syntax there and the examples and python documentation has an argument for 'encoding' which I think is the 'utf-8' string but I don't really know. Can someone help me with this? Edit: I am using Bytes here because that is the only way it works, I get an error to change it to bytes if I don't (I am using 3.3). The error is this:

    Traceback (most recent call last):
      File "C:\Users\Deaven And Teigan\Documents\Python Projects\YahooFinance.py", line 14, in <module>
        pattern= re.compile(b'<span id="yfs_l84_'+ symbol +'">(.+?)</span>')
    TypeError: can't concat bytes to str

Answer: You should work with strings right up to the point where you actually want to make the network request rather than mix strings and bytes in your code. Generally speaking, strings are the abstract representation of characters, whereas bytes are a specific encoding (e.g. UTF-8) of a string into a sequence of bytes that can be sent over a network. In your case that means decoding the page you download (`htmltext = htmlfile.read().decode('utf-8')`) and building the pattern as an ordinary string, e.g.

    pattern = re.compile(r'<span id="yfs_l84_' + Symbols[i] + '">(.+?)</span>')

rather than mixing a bytes literal with str pieces.
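Putting it together, the loop could look like this (assuming the page is served as UTF-8; swap in another codec if it is not):

    import urllib.request
    import re

    symbols = ['aapl', 'spy', 'goog', 'nflx']

    for symbol in symbols:
        url = 'http://finance.yahoo.com/q?s=' + symbol
        htmltext = urllib.request.urlopen(url).read().decode('utf-8')
        pattern = re.compile(r'<span id="yfs_l84_' + symbol + '">(.+?)</span>')
        prices = re.findall(pattern, htmltext)
        print('The price of ' + symbol + ' is ' + str(prices))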
PythonXY, IPython Qt Console, matplotlib, draw something not in inline mode Question: I'm new to PythonXY and Matplotlib. I installed PythonXY (v2.7.3.1) in default full mode. ![PythonXY](http://i.stack.imgur.com/5FPEQ.png) I use the "IPython Qt Console" application. ![IPython Qt Console](http://i.stack.imgur.com/Mif78.png) I draw something using `matplotlib.pyplot` (imported as `plt`). For example. When `plt.plot([1,3,2,4])`, the figure display in the same IPython console immediately. ![plt.plot](http://i.stack.imgur.com/7xN9g.png) If this, I cannot add some other properties, `plt.title`, `plt.xlabel`, `plt.ylabel`, or more. Why? And how can I draw figures in another window, adding more properties, and making the figure not display until `plt.show()`? Answer: If you select from the Interactive Consoles dropdown on the Python(x,y) Home launcher, "IPython (Qt)" and then click either the Console 2 or cmd.exe button, it should then run IPython (Qt) with the Qt4Agg backend which will allow you to plot in a separate window and apply titles and so on. For more info see [What is a backend](http://matplotlib.org/faq/usage_faq.html#what-is- a-backend). What Python(x,y) in your example above is doing is launching IPython with the pylab inline backend which is different from the standard backends so your commands aren't having any affect, similar behaviour is noted in this [issue on github](https://github.com/ipython/ipython/issues/2851#issuecomment-14090091). It doesn't seem possible to change the backend once IPython has been launched with inline. I'm not sure where the Python(x,y) options are set or which script is called to launch the item in the Applications dropdown. Worth taking a look at [Anaconda](https://store.continuum.io/cshop/anaconda/) as a free Python distribution with a scientific focus and regular updates. Anaconda doesn't by default load pylab into IPython so you can choose the backend after launching IPython.
Embed .SVG files into PDF using reportlab Question: I have written a script in python that produces matplotlib graphs and puts them into a pdf report using `reportlab`. I am having difficulty embedding SVG image files into my PDF file. I've had no trouble using PNG images but I want to use SVG format as this produces better quality images in the PDF report. This is the error message I am getting:

    IOError: cannot identify image file

Does anyone have suggestions or have you overcome this issue before? Answer: ReportLab relies on PIL (the Python Imaging Library) for anything beyond a few basic bitmap formats, so make sure PIL is installed and importable. PIL, however, only understands raster formats — it cannot read SVG, which is why you end up with "cannot identify image file" (that message comes from PIL itself). Vector formats need a different route; I remember having the same sort of trouble with EPS.
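For SVG in particular, one commonly used route is the third-party svglib package, which turns the SVG into a ReportLab drawing that you can place on the canvas yourself; a sketch (file names and coordinates are placeholders, and treat this as untested):

    from svglib.svglib import svg2rlg
    from reportlab.graphics import renderPDF
    from reportlab.pdfgen import canvas

    c = canvas.Canvas('report.pdf')
    drawing = svg2rlg('figure.svg')        # returns a reportlab Drawing object
    renderPDF.draw(drawing, c, 50, 400)    # place it at x=50, y=400 on the page
    c.showPage()
    c.save()

Since your figures come from matplotlib anyway, another option is to save them as PDF or high-DPI PNG from matplotlib and embed those instead.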
Trouble getting matplotlib to produce plots Question: I can get matplotlib to work in pylab (ipython --pylab), but when I execute the same command in a python script a plot does not appear. My workspace focus changes from a fullscreened terminal to a Desktop when I run my script, which suggests that it is trying to plot something but failing. The following code works in `ipython --pylab` but not in my script.

    import matplotlib.pyplot as plt
    plt.plot(arange(10))

I am on Mac OS X Mountain Lion. **What is causing this to fail when I run a script but not in the interactive prompt?** Answer: I believe you need `plt.show()`.
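Note too that the script as posted calls arange without importing it — that name is only injected automatically in --pylab mode — so the standalone version needs both fixes:

    import numpy as np
    import matplotlib.pyplot as plt

    plt.plot(np.arange(10))
    plt.show()   # a plain script exits immediately unless you block on show()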
Python ImportError: No module named 'path' Question:

    ''' Data class'''
    import os.path
    from random import Random

    class TestData(object, Random):
        def FetchDataFromFile(self, filename):
            """ Open the file as read only """
            myfile = open(os.path.join(os.getcwd(),filename), 'r')
            """ read the information in the file """
            lines = myfile.read()
            ''' Remove the header as this will not be used '''
            header = lines[0]
            lines.remove(header)
            return lines

I am getting:

> ImportError: No module named path
>
> File "**pyclasspath**/Lib/Testdata.py", line 2, in

os.path is working in all other classes in my project. Can someone point out what mistake I am making? I moved this file from one directory to another. Apart from that, there is no difference between this class and other classes. Answer: `import os` should work fine instead:

    import os
    from random import Random

    class TestData(object, Random):
        def FetchDataFromFile(self, filename):
            """ Open the file as read only """
            myfile = open(os.path.join(os.getcwd(),filename), 'r')
            """ read the information in the file """
            lines = myfile.read()
            ''' Remove the header as this will not be used '''
            header = lines[0]
            lines.remove(header)
            return lines

As an aside, your method could be:

    def FetchDataFromFile(self, filename):
        """ Open the file as read only """
        return list(open(os.path.join(os.getcwd(),filename), 'r'))[1:]
Connecting to MS SQL Server with Windows Authentication using Python? Question: How do I connect MS SQL Server using Windows Authentication, with the pyodbc library? I can connect via MS Access and SQL Server Management Studio, but cannot get a working connection ODBC string for Python. Here's what I've tried (also without `'Trusted_Connection=yes'`): pyodbc.connect('Trusted_Connection=yes', driver='{SQL Server}', server='[system_name]', database='[databasename]') pyodbc.connect('Trusted_Connection=yes', uid='me', driver='{SQL Server}', server='localhost', database='[databasename]') pyodbc.connect('Trusted_Connection=yes', driver='{SQL Server}', server='localhost', uid='me', pwd='[windows_pass]', database='[database_name]') pyodbc.connect('Trusted_Connection=yes', driver='{SQL Server}', server='localhost', database='[server_name]\\[database_name]') pyodbc.connect('Trusted_Connection=yes', driver='{SQL Server}', server='localhost', database='[server_name]\[database_name]') pyodbc.connect('Trusted_Connection=yes', driver='{SQL Server}', database='[server_name]\[database_name]') Answer: You can specify the connection string as one long string that uses semi-colons (`;`) as the argument separator. Working example: import pyodbc cnxn = pyodbc.connect(r'Driver={SQL Server};Server=.\SQLEXPRESS;Database=myDB;Trusted_Connection=yes;') cursor = cnxn.cursor() cursor.execute("SELECT LastName FROM myContacts") while 1: row = cursor.fetchone() if not row: break print(row.LastName) cnxn.close() For connection strings with lots of parameters, the following will accomplish the same thing but in a somewhat more readable way: conn_str = ( r'Driver={SQL Server};' r'Server=.\SQLEXPRESS;' r'Database=myDB;' r'Trusted_Connection=yes;' ) cnxn = pyodbc.connect(conn_str) (Note that there are no commas between the individual string components.)
I keep getting "InterfaceError: Error binding parameter 0 - probably unsupported type." Question: I am trying to work my way through the official Django tutorial (<https://docs.djangoproject.com/en/1.5/intro/tutorial01/>) but I'm running into a problem when trying to use the shell. Specifically, when I try to run `python manage.py shell` I get the error "InterfaceError: Error binding parameter 0 - probably unsupported type." I don't know what this means, and the only code I've written is the example code given in the tutorial: from django.db import models class Poll(models.Model): question = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') def __unicode__self(): return self.question class Choice(models.Model): poll = models.ForeignKey(Poll) choice_text = models.CharField(max_length=200) votes = models.IntegerField(default=0) def __unicode__(self): return choice_text I also encountered the problem "SQLite received a naive datetime while time zone support is active." but I used an answer from another SO post to ignore that warning and I don't think that's what's causing this InterfaceError. I'm running Django 1.5 with Python 2.7 on Ubuntu 12.10 and using sqlite3. If anyone has any ideas as to what's going on I'd really appreciate the help. Answer: I had the same problem. Make sure you have a valid time zone in the mysite\settings.py file. I followed the link <http://www.postgresql.org/docs/8.1/static/datetime- keywords.html#DATETIME-TIMEZONE-SET-TABLE> given on the Django tutorial. However, those keywords don't work with sqlite. Look up the time zones on <http://en.wikipedia.org/wiki/List_of_tz_zones_by_name>. For me it's America/New_York.
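Concretely, the relevant lines in settings.py end up looking like this (zone name taken from the tz database list linked above):

    # mysite/settings.py
    TIME_ZONE = 'America/New_York'
    USE_TZ = True    # time-zone support stays on; TIME_ZONE just has to be a valid tz name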
Python save txt file in a different folder Question: the following script is running well and saving the txt output in the Desktop as I am running the script from Desktop. However, I want to save the txt files to my Documents in a new folder named ASCII. How can I give the command for doing that. The 8phases.txt has the following lines-

    -1 1 -1
    -1 1 1
    1 1 1
    1 -1 1
    -1 -1 -1
    1 1 -1
    1 -1 -1
    -1 -1 1

The script-

    import numpy as np
    import matplotlib.pyplot as plt

    D=12
    n=np.arange(1,4)
    x = np.linspace(-D/2,D/2, 3000)
    I = np.array([125,300,75])

    phase = np.genfromtxt('8phases.txt')
    I_phase = I*phase

    for count,i in enumerate(I_phase):
        F = sum(m*np.cos(2*np.pi*l*x/D) for m,l in zip(i,n))
        s = np.column_stack([x,F])
        np.savetxt((str(count)+'.txt'),s)

Any help please- Answer: You should probably provide the full path in the argument of the savetxt method, for example:

    np.savetxt(r"C:\ASCII\%s.txt" % count, s)
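If the ASCII folder is supposed to live under your own Documents directory (and may not exist yet), you can build the path portably and create it once before the loop; a sketch reusing the variables from your script:

    import os
    import numpy as np

    out_dir = os.path.join(os.path.expanduser('~'), 'Documents', 'ASCII')
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)                      # create Documents/ASCII if it is missing

    for count, i in enumerate(I_phase):
        F = sum(m*np.cos(2*np.pi*l*x/D) for m, l in zip(i, n))
        s = np.column_stack([x, F])
        np.savetxt(os.path.join(out_dir, '%d.txt' % count), s)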
How to change a list into an HTML table? (Python) Question: This is what I have done when there is no HTML involved:

    from collections import defaultdict
    hello = ["hello","hi","hello","hello"]
    def test(string):
        bye = defaultdict(int)
        for i in hello:
            bye[i]+=1
        return bye

Now I want to turn this into an HTML table. This is what I have tried so far, but it still doesn't work:

    def test2(string):
        bye= defaultdict(int)
        print"<table>"
        for i in hello:
            print "<tr>"
            print "<td>"+bye[i]= bye[i] +1+"</td>"
            print "</tr>"
        print"</table>"
        return bye

Answer:

    from collections import defaultdict

    hello = ["hello","hi","hello","hello"]

    def test2(strList):
        d = defaultdict(int)
        for k in strList:
            d[k] += 1
        print('<table>')
        for i in d.items():
            print('<tr><td>{0[0]}</td><td>{0[1]}</td></tr>'.format(i))
        print('</table>')

    test2(hello)

**Output**

    <table>
    <tr><td>hi</td><td>1</td></tr>
    <tr><td>hello</td><td>3</td></tr>
    </table>
Python pandas plot is a no-show Question: When I run this code import pandas as pd import numpy as np def add_prop(group): births = group.births.astype(float) group['prop'] = births/births.sum() return group pieces = [] columns = ['name', 'sex', 'births'] for year in range(1880, 2012): path = 'yob%d.txt' % year frame = pd.read_csv(path, names = columns) frame['year'] = year pieces.append(frame) names = pd.concat(pieces, ignore_index = True) total_births = names.pivot_table('births', rows = 'year', cols = 'sex', aggfunc = sum) total_births.plot(title = 'Total Births by sex and year') I get no plot. This is from Wes McKinney's book on using Python for data analysis. Can anyone point me in the right direction? Answer: Put import matplotlib.pyplot as plt at the top, and plt.show() at the end.
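In context, the end of the script then looks roughly like this (assuming a plain `python script.py` run rather than an interactive --pylab session, where the extra show() call is harmless anyway):

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt   # new import at the top of the file

    # ... existing code that builds total_births ...

    total_births.plot(title = 'Total Births by sex and year')
    plt.show()   # without this the figure is created but never displayed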
Python cross-module global variable Question: I'm new to Python and I was trying out nose as a unit test framework. I came across a behavior I didn't expect, but maybe this is normal, hence my question. I have two (very basic) files: __init__.py:

    #!/usr/bin/env python

    glob = 0

    def setup():
        global glob
        glob = 42
        print "Package setup"

test_mymod.py:

    #!/usr/bin/env python

    from unittest import TestCase
    from . import glob

    print "test_mymod.py"

    class testMyMod(TestCase):
        def setUp(self):
            print glob

        def test_random(self):
            pass

        def tearDown(self):
            pass

Running `nosetests -s` gives me the following output:

    test_mymod.py
    Package setup
    0

Since the setup() function of the package is invoked before the setUp() function of the test, I expected `print glob` to output `42`. Am I doing something wrong, or is there no way of doing what I want? It seems to me that importing a variable copies its value instead of referencing it, but maybe there is a way to do otherwise? Thank you Answer: When you do `from . import glob` at the top of your test file, you get a reference to the value of `glob` in your namespace. This happens before you call `setup()`. When you call `setup()` the value of `glob` is updated in the `__init__.py` namespace but not `test_mymod.py`. Instead of importing `glob` directly, reference it like `package.glob`. Alternatively, set `glob` to its correct value at package import time; having uninitialized globals that people can import is considered bad practice for exactly this reason.
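A sketch of the first suggestion, with the package assumed to be importable as `mypackage` (substitute your real package name):

    # test_mymod.py
    from unittest import TestCase
    import mypackage          # import the module object, not the name

    class testMyMod(TestCase):
        def setUp(self):
            # attribute lookup happens at call time, so this sees the 42
            # that setup() assigned in the package namespace
            print mypackage.glob

`from . import glob` copies the binding that exists at import time (0) into the test module; `mypackage.glob` goes back to the package namespace on every access, which is why it reflects the later assignment.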
Flask-SQLAlchemy InvalidRequestError: Object is already attached to session Question: I'm creating a forum project using Flask, and managing all the users, threads, posts, etc. using Flask-SQLAlchemy. However, I've found that when I attempt to do x (e.g. edit a post), I get an InvalidRequestError if I attempt to do anything else (e.g. delete the post). For editing a post, def post_edit(id, t_id, p_id): post = Post.query.filter_by(id=p_id).first() if post.author.username == g.user.username: form = PostForm(body=post.body) if form.validate_on_submit(): post.body = form.body.data db.session.commit() return redirect(url_for('thread', id=id, t_id=t_id)) return render_template('post_edit.html', form=form, title='Edit') else: flash('Access denied.') return redirect(url_for('thread', id=id, t_id=t_id)) and deleting a post, @app.route('/forum=<id>/thr=<t_id>/p=<p_id>/delete', methods=['GET','POST']) def post_delete(id, t_id, p_id): post = Post.query.filter_by(id=p_id).first() if post.author.username == g.user.username: db.session.delete(post) db.session.commit() return redirect(url_for('thread', id=id, t_id=t_id)) else: flash('Access denied.') return redirect(url_for('thread', id=id, t_id=t_id)) and posting a post @app.route('/forum/id=<id>/thr=<t_id>', methods=['GET','POST']) def thread(id, t_id): forum = Forum.query.filter_by(id=id).first() thread = Thread.query.filter_by(id=t_id).first() posts = Post.query.filter_by(thread=thread).all() form = PostForm() if form.validate_on_submit(): post = Post(body=form.body.data, timestamp=datetime.utcnow(), thread=thread, author=g.user) db.session.add(post) db.session.commit() return redirect(url_for('thread', id=id, t_id=t_id)) return render_template('thread.html', forum=forum, thread=thread, posts=posts, form=form, title=thread.title) Unfortunately, the only surefire way to make this problem resolve itself is to reset the script that actually runs the app, run.py #!bin/python from app import app app.run(debug=True,host='0.0.0.0') Answer: Are you using WooshAlchemy because it might be part of your problem. [Described here](http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial- part-xvi-debugging-testing-and-profiling) He describes the "fix" that requires modification of WooshAlchemy extension. Usually though it could mean you called a Post model object and then attached it using "session.add" and then tried to "session.delete" or did another "session.add" on the same object. Also your request routing is a bit strange for flask I've never seen the "thr=<t_id>" type of notation with Flask before. Has that been working well for you? <http://flask.pocoo.org/docs/quickstart/#variable-rules>
Python: Append a list to an existing list assigned to a key in a dictionary? Question: Lets say I have a dict with key= 'keys' >>> keys 'taste' After a few lines..output >>> {'taste': ('sweet', 'sour', 'juicy', 'melon-like')} This code snippet from collections import defaultdict agent=defaultdict(str) key_list=[] key_list=[(keys,tuple(key_list))] agent=dict(key_list) #agent[keys]+=key_list What I want to know is, is there a way to lets say I have `agent= {'taste': ('sweet', 'sour', 'juicy', 'melon-like')}` I want to add to a list key_list=['yuck!','tasty','smoothie'] and `agent.setdefault('taste',[]).append(key_list)` and have output like: {'taste': ('sweet', 'sour', 'juicy', 'melon-like','yuck!','tasty','smoothie')} instead of {'taste': ('sweet', 'sour', 'juicy', 'melon-like',['yuck!','tasty','smoothie'])} Is there a way to that? Inshort: 1. I want to add a list to an existing list,which is a value to a key in a dictionary (w/o iterations to find that particular key) 2. Check if the element being fed in as a list already contains that element in that list which is a value to a particular key, say 'taste' here (could be string, as here) Answer: Check this out: >>> tst = {'taste': ('sweet', 'sour', 'juicy', 'melon-like')} >>> tst.get('taste', ()) #default to () if does not exist. ('sweet', 'sour', 'juicy', 'melon-like') >>> key_list=['yuck!','tasty','smoothie'] >>> tst['taste'] = tst.get('taste') + tuple(key_list) >>> tst {'taste': ('sweet', 'sour', 'juicy', 'melon-like', 'yuck!', 'tasty', 'smoothie')} To retrieve, >>> tst = {'taste': ('sweet', 'sour', 'juicy', 'melon-like', 'yuck!', 'tasty', 'smoothie')} >>> taste = tst.get('taste') >>> taste ('sweet', 'sour', 'juicy', 'melon-like', 'yuck!', 'tasty', 'smoothie') >>> 'sour' in taste True >>> 'sour1' in taste False
Python Boto created cloud-init #include script not running on EC2 Question: I've run out of ideas, would appreciate some help. I'm starting and EC2 Ubuntu 12.04 instance and adding the following script to the user data: #!/usr/bin/env python import sys from boto.s3.connection import S3Connection AWS_BOOTSTRAP_BUCKET = 'myBucket' AWS_ACCESS_KEY_ID = 'MyAccessId' AWS_SECRET_ACCESS_KEY = 'MySecretKey' s3 = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) install = s3.generate_url(300, 'GET', bucket=AWS_BOOTSTRAP_BUCKET, key='bash1.txt', force_http=True) config = s3.generate_url(300, 'GET', bucket=AWS_BOOTSTRAP_BUCKET, key='cloud-config.txt', force_http=True) start = s3.generate_url(300, 'GET', bucket=AWS_BOOTSTRAP_BUCKET, key='bash2.txt', force_http=True) sys.stdout.write("#include\n") sys.stdout.write(install+"\n") sys.stdout.write(config+"\n") sys.stdout.write(start+"\n") After the instance has started, I can right click on the instance and View Sys Log. I can see the following near the bottom: Generating locales... en_US.UTF-8... done Generation complete. #include http://nerdcloudinit.s3.amazon... http://nerdcloudinit.s3.amazon... http://nerdcloudinit.s3.amazon... I can run wget from the instance on the provided url's and see the contents of the txt files. Why aren't the scripts added via #include working? Any help would be appreciated. Kind regards, C Answer: You've got the right idea, you're just one layer too deep :) With Ubuntu and cloud-init, your user data can be an executable script OR it can be a CloudConfig file. In your question, you are passing a python executable as your userdata. On the Ubuntu machine, it sees an executable program (because it starts with `#!`) and it runs it. And that's all it will do. It won't process the output of your program. What you want, is to run the python script on YOUR computer, and use the OUTPUT of the program as your userdata. So in your example, the userdata passed to Amazon should be: #include http://nerdcloudinit.s3.amazon... http://nerdcloudinit.s3.amazon... http://nerdcloudinit.s3.amazon... cloud-init on Ubuntu will see the first line `#include` and know that it needs to download the files. Then, it will look at each file it downloads to process each one. The mime-multipart is only useful if you want to lump multiple files together into one. For example instead of downloading three files from S3, you could put them all together into one file and `#include` a single URL. You don't need that here though. The link you included for cloud-init docs <http://help.ubuntu.com/community/CloudInit> has more examples of what your userdata can contain. Enjoy!
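If you want to keep everything in one place, here is a rough boto 2 sketch of that flow, run on your own machine; the region, AMI id and instance type are placeholders, not values from the question:

    import boto.ec2
    from boto.s3.connection import S3Connection

    s3 = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    keys = ['bash1.txt', 'cloud-config.txt', 'bash2.txt']
    urls = [s3.generate_url(300, 'GET', bucket=AWS_BOOTSTRAP_BUCKET, key=k,
                            force_http=True) for k in keys]

    # This string is the user data that cloud-init will actually see.
    userdata = "#include\n" + "\n".join(urls) + "\n"

    conn = boto.ec2.connect_to_region('us-east-1')     # placeholder region
    conn.run_instances('ami-xxxxxxxx',                 # placeholder AMI id
                       instance_type='t1.micro',
                       user_data=userdata)

The key point is that the Python runs locally and only its output, the `#include` block, is handed to EC2 as user data.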
Variable decimal place limit in function set by user? Question: I just started learning Python and am currently writing a script that converts Celsius to Fahrenheit and vice versa. I have the main part done but now I want to be able to let the user set the number of decimals displayed in the output... The first function contains my failed attempt and the second is set at 2 decimal places.

    def convert_f_to_c(t,xx):
        c = (t - 32) * (5.0 / 9)
        print "%.%f" % (c, xx)

    def convert_c_to_f(t):
        f = 1.8 * t + 32
        print "%.2f" % f

    print "Number of decimal places?"
    dec = raw_input(">")

    print "(c) Celsius >>> Ferenheit\n(f) Ferenheit >>> Celcius"
    option = raw_input(">")

    if option == 'c':
        cel = int(raw_input("Temperature in Celcius?"))
        convert_c_to_f(cel)
    else:
        fer = int(raw_input("Temperature in Ferenheit?"))
        convert_f_to_c(fer,dec)

Answer:

    num_dec = int(raw_input("Num Decimal Places?"))
    print "%0.*f"%(num_dec,3.145678923678)

In % format strings you can use a `*` for this feature; as far as I know there is no equivalent in the `'{}'.format` method.

    >>> import math
    >>> print "%0.*f"%(3,math.pi)
    3.142
    >>> print "%0.*f"%(13,math.pi)
    3.1415926535898
    >>>
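For the record, `{}`-style formatting can in fact do the same thing by nesting a replacement field for the precision; a quick check in a 2.7 interpreter:

    >>> import math
    >>> "{0:.{1}f}".format(math.pi, 3)
    '3.142'
    >>> "{0:.{1}f}".format(math.pi, 13)
    '3.1415926535898'

Either way, the number of places can come straight from the int(raw_input(...)) value.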
OverWrite an existing list in CSV files Python Question: I have a csv files with player attributes: ['Peter Regin', '2', 'DAN', 'N', '1987', '6', '6', '199', '74', '2', '608000', '', '77', '52', '74', '72', '58', '72', '71', '72', '70', '72', '74', '68', '74', '41', '40', '51'] ['Andrej Sekera', '8', 'SVK', 'N', '1987', '6', '6', '198', '72', '3', '1323000', '', '65', '39', '89', '78', '75', '70', '72', '56', '53', '56', '57', '72', '57', '59', '70', '51'] For example, I want to check if a player is a CENTER ('2' in position 1 in my list) and after I want to modify the 12 element (which is '77' for Peter Regin) How can I do that using the CSV module ? import csv class ManipulationFichier: def __init__(self, fichier): self.fichier = fichier def read(self): with open(self.fichier) as f: reader = csv.reader(f) for row in reader: print(row) def write(self): with open(self.fichier) as f: writer = csv.writer(f) for row in f: if row[1] == 2: writer.writerows(row[1] for row in f) Which do nothing important.. Thanks, Answer: In general, CSV files cannot be reliably modified in-place. Read the entire file into memory (usually a list of lists, as in your example), modify the data, then write the entire file back. Unless your file is really huge, and you do this really often, the performance hit will be negligible.
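A rough sketch of that read-modify-write pattern for the file in the question, where index 1 is the position code and index 12 is the attribute to change (adjust the indexes and value to taste):

    import csv

    def update_centers(fichier, new_value):
        # 1. read the whole file into memory
        with open(fichier, 'rb') as f:
            rows = list(csv.reader(f))

        # 2. modify the rows of interest
        for row in rows:
            if row[1] == '2':        # centers
                row[12] = new_value  # e.g. the '77' in Peter Regin's row

        # 3. write everything back out
        with open(fichier, 'wb') as f:
            csv.writer(f).writerows(rows)

Note the comparison against the string '2': the csv module hands every field back as a string, so a comparison against the integer 2 is never true.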
How to convert tuple in string to tuple object? Question: In Python 2.7, I have the following string: "((1, u'Central Plant 1', u'http://egauge.com/'), (2, u'Central Plant 2', u'http://egauge2.com/'))" **How can I convert this string back to tuples?** I've tried to use `split` a few times but it's very messy and makes a list instead. Desired output: ((1, 'Central Plant 1', 'http://egauge.com/'), (2, 'Central Plant 2', 'http://egauge2.com/')) Thanks for the help in advance! Answer: You should use the `literal_eval` method from the `ast` module which you can read more about [**here**](http://docs.python.org/2/library/ast.html#ast.literal_eval). >>> import ast >>> s = "((1, u'Central Plant 1', u'http://egauge.com/'),(2, u'Central Plant 2', u'http://egauge2.com/'))" >>> ast.literal_eval(s) ((1, u'Central Plant 1', u'http://egauge.com/'), (2, u'Central Plant 2', u'http://egauge2.com/'))
Raspberry Pi: how to send a serial command using Ethernet? Question: I have a Python script which starts a CherryPy webserver when run in the terminal. In the script, I use pyserial by importing `serial`, then I open the port /dev/ttyAMA0 and I can send any serial commands.

    @cherrypy.expose
    def login (self, **data):
        passcode = data.get("passcode", None)
        print "logging in using passcode %s"%passcode ,type(passcode)
        import serial
        import time
        #open connection
        serialport=serial.Serial ("/dev/ttyAMA0", 9600, timeout=0.5)
        #write in user sign in code
        serialport.write("\x03LI%s\x0D"%passcode)
        #get reply
        reply=serialport.readlines(1)
        print reply, type(reply)

However, since there is an Ethernet port and I can send the same serial command to a similar device using Netcat, how can I make this script send the commands through the Ethernet port instead of the serial port? What should I change? Sorry, but I'm really clueless about how to do this. I've searched Google and can't find an answer. :( Answer: What about using the [`socat`](http://www.dest-unreach.org/socat/doc/socat.html) command? Using the following command line you can make `/dev/ttyAMA0` a virtual serial port that forwards to `127.0.0.1:5555` over TCP:

    socat PTY,link=/dev/ttyAMA0 TCP:127.0.0.1:5555
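If you would rather drop pyserial entirely and talk to the device straight from Python, the serial calls map onto a plain TCP socket fairly directly. A rough sketch follows; the address and port are assumptions, so use whatever host and port you already point Netcat at:

    import socket

    DEVICE_ADDR = ('192.168.1.50', 5555)    # assumption: the IP/port you use with netcat

    def send_login(passcode):
        s = socket.create_connection(DEVICE_ADDR, timeout=0.5)
        try:
            s.sendall("\x03LI%s\x0D" % passcode)   # same bytes the serial version writes
            reply = s.recv(1024)
        finally:
            s.close()
        return reply

socket.create_connection, sendall and recv play the roles of serial.Serial, write and readlines in the original handler.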
Insert a .jpg in a canvas with tkinter and Python 3.2 Question: So I want to put a .jpg in a canvas, all i found on internet is to use PIL but i'm using Python 3.2 so PIL doesn't work. What can i do to insert a .jpg in a canvas with Python 3.2 ? Answer: Just to save anyone else viewing this now from hunting around for the bits and pieces (like I just did) As Martijn Pieters said use Pillow rather than PIL, but the code looks the same from tkinter import Tk, Canvas from PIL import ImageTk, Image root = Tk() #Create a canvas canvas = Canvas(root, width=400, height=300) canvas.pack() # Load the image file im = Image.open('test_image.jpg') # Put the image into a canvas compatible class, and stick in an # arbitrary variable to the garbage collector doesn't destroy it canvas.image = ImageTk.PhotoImage(im) # Add the image to the canvas, and set the anchor to the top left / north west corner canvas.create_image(0, 0, image=canvas.image, anchor='nw') root.mainloop()
Is there an alternative to pickle - save a dictionary (python) Question: I need to save a dictionary to a file, In the dictionary there are strings, integers, and dictionarys. I did it by my own and it's **not** pretty and nice to user. I know about pickle but as I know it is not safe to use it, because if someone _replace the file_ and I (or someone else) will run the file that uses the _replaced file_ , It will be running and might do some things. **it's just not safe.** Is there another function or imported thing that does it. Answer: Pickle is not safe when _transfered by a untrusted 3rd party_. Local files are just fine, and if something can replace files on your filesystem then you have a different problem. That said, if your dictionary contains nothing but string keys and the values are nothing but Python lists, numbers, strings or other dictionaries, then use JSON, via the [`json` module](http://docs.python.org/2/library/json.html).
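A minimal sketch of the JSON route (this works as long as the dictionary keys are strings and the values are strings, numbers, lists, or nested dicts of the same):

    import json

    data = {'name': 'example', 'count': 3, 'nested': {'a': 1, 'b': 2}}

    # save
    with open('data.json', 'w') as f:
        json.dump(data, f)

    # load
    with open('data.json') as f:
        restored = json.load(f)

Unlike unpickling, loading JSON can only ever produce plain data, so a swapped-out file cannot make your program execute anything.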
"Underlying C/C++ object has been deleted" Question: I'm having an exception running a simple application in python 2.7 with Qt. Code: # *-* coding: utf-8 *-* __author__ = 'luismasuelli' import sys from PyQt4 import QtGui class StreamWidget(QtGui.QWidget): def __init__(self): super(StreamWidget, self).__init__(self) self.initialize() def initialize(self): self.setWindowTitle("Stream capture test") self.resize(400, 300) self.center() self.show() def center(self): qr = self.frameGeometry() cp = QtGui.QDesktopWidget().availableGeometry().center() qr.moveCenter(cp) self.move(qr.topLeft()) def main(): app = QtGui.QApplication(sys.argv) window = StreamWidget() sys.exit(app.exec_()) main() Sh*t: RuntimeError: underlying C/C++ object has been deleted (at the super() call line) What could be the error and how can i solve it? Any help will be appreciated. Answer: Got the error! I passed a parameter (self) without noticing it. i'm a noob at that and seems that parameter is the parent widget. passing self is not only logically wrong but also an uninitialized qt object.
check if numpy array is subset of another array Question: Similar questions have already been asked on SO, but they have more specific constraints and their answers don't apply to my question. Generally speaking, what is the most pythonic way to determine if an arbitrary numpy array is a subset of another array? More specifically, I have a roughly 20000x3 array and I need to know the indices of the 1x3 elements that are entirely contained within a set. More generally, is there a more pythonic way of writing the following: master=[12,155,179,234,670,981,1054,1209,1526,1667,1853] #some indices of interest triangles=np.random.randint(2000,size=(20000,3)) #some data for i,x in enumerate(triangles): if x[0] in master and x[1] in master and x[2] in master: print i For my use case, I can safely assume that len(master) << 20000\. (Consequently, it is also safe to assume that master is sorted because this is cheap). Answer: You can do this easily via iterating over an array in list comprehension. A toy example is as follows: import numpy as np x = np.arange(30).reshape(10,3) searchKey = [4,5,8] x[[0,3,7],:] = searchKey x gives array([[ 4, 5, 8], [ 3, 4, 5], [ 6, 7, 8], [ 4, 5, 8], [12, 13, 14], [15, 16, 17], [18, 19, 20], [ 4, 5, 8], [24, 25, 26], [27, 28, 29]]) Now iterate over the elements: ismember = [row==searchKey for row in x.tolist()] The result is [True, False, False, True, False, False, False, True, False, False] You can modify it for being a subset as in your question: searchKey = [2,4,10,5,8,9] # Add more elements for testing setSearchKey = set(searchKey) ismember = [setSearchKey.issuperset(row) for row in x.tolist()] If you need the indices, then use np.where(ismember)[0] It gives array([0, 3, 7])
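Since `master` is small, a fully vectorised variant built on numpy's own membership test is also worth knowing (np.in1d flattens its first argument, hence the reshape back to the triangle layout):

    import numpy as np

    mask = np.in1d(triangles, master).reshape(triangles.shape).all(axis=1)
    indices = np.where(mask)[0]   # rows whose three entries are all in master

This removes the Python-level loop entirely, which is noticeable with 20000 rows.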
Infoblox WAPI: how to search for an IP Question: Our network team uses [InfoBlox](http://www.infoblox.com/products/ip-address- management) to store information about IP ranges (Location, Country, etc.) There is an API available but Infoblox's documentation and examples are not very practical. I would like to search via the API for details about an IP. To start with - I would be happy to get anything back from the server. I modified [the only example I found](http://www.infoblox.com/community/blog/using-python-nios- rest-api) import requests import json url = "https://10.6.75.98/wapi/v1.0/" object_type = "network" search_string = {'network':'10.233.84.0/22'} response = requests.get(url + object_type, verify=False, data=json.dumps(search_string), auth=('adminname', 'adminpass')) print "status code: ", response.status_code print response.text which returns an error 400 status code: 400 { "Error": "AdmConProtoError: Invalid input: '{\"network\": \"10.233.84.0/22\"}'", "code": "Client.Ibap.Proto", "text": "Invalid input: '{\"network\": \"10.233.84.0/22\"}'" } I would appreciate any pointers from someone who managed to get this API to work with Python. * * * **UPDATE** : Following up on the solution, below is a piece of code (it works but it is not nice, streamlined, does not perfectly checks for errors, etc.) if someone one day would have a need to do the same as I did. def ip2site(myip): # argument is an IP we want to know the localization of (in extensible_attributes) baseurl = "https://the_infoblox_address/wapi/v1.0/" # first we get the network this IP is in r = requests.get(baseurl+"ipv4address?ip_address="+myip, auth=('youruser', 'yourpassword'), verify=False) j = simplejson.loads(r.content) # if the IP is not in any network an error message is dumped, including among others a key 'code' if 'code' not in j: mynetwork = j[0]['network'] # now we get the extended atributes for that network r = requests.get(baseurl+"network?network="+mynetwork+"&_return_fields=extensible_attributes", auth=('youruser', 'youpassword'), verify=False) j = simplejson.loads(r.content) location = j[0]['extensible_attributes']['Location'] ipdict[myip] = location return location else: return "ERROR_IP_NOT_MAPPED_TO_SITE" Answer: By using requests.get and json.dumps, aren't you sending a GET request while adding JSON to the query string? Essentially, doing a GET https://10.6.75.98/wapi/v1.0/network?{\"network\": \"10.233.84.0/22\"} I've been using the WebAPI with Perl, not Python, but if that is the way your code is trying to do things, it will probably not work very well. To send JSON to the server, do a POST and add a '_method' argument with 'GET' as the value: POST https://10.6.75.98/wapi/v1.0/network Content: { "_method": "GET", "network": "10.233.84.0/22" } Content-Type: application/json * * * Or, don't send JSON to the server and send GET https://10.6.75.98/wapi/v1.0/network?network=10.233.84.0/22 which I am guessing you will achieve by dropping the json.dumps from your code and handing search_string to requests.get directly.
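In other words, something like this, still using the v1.0 URL and credentials from the question (requests does the query-string encoding for you):

    import requests

    r = requests.get(url + "network",
                     params={"network": "10.233.84.0/22"},
                     auth=('adminname', 'adminpass'),
                     verify=False)
    print r.status_code, r.text

If the appliance insists on the JSON body form instead, the same call becomes a POST with `"_method": "GET"` added to the payload, as described above.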
Use bat file in CGI Python on localhost Question: I'm quite stuck with with a complex mix of files and languages! The problem: My webform starts a python script, as a cgi script, on localhost(apache). In this python script I want to execute a batchfile. This batchfile executes several commands, which i tested thoroughly. If i execute the following python file in the python interpreter or in CMD it does execute the bat file. But when I 'start' the python script from the webform it says it did it, but there are no results, so i guess something is wrong with the cgi part of the problem?! The process is complicated, so if someone has a better way of doing this...pls reply;). I'm using windows so that makes things even more annoying sometimes. I think it's not the script, because I try `subprocess.call`, `os.startfile` and `os.system` already! It either does nothing or the webpage keeps loading(endless loop) Python script: import os from subprocess import Popen, PIPE import subprocess print "Content-type:text/html\r\n\r\n" p = subprocess.Popen(["test.bat"], stdout = subprocess.PIPE, stderr = subprocess.PIPE) out, error = p.communicate() print out print "DONE!" The bat file: @echo off ::Preprocess the datasets CMD /C java weka.filters.unsupervised.attribute.StringToWordVector -b -i data_new.arff -o data_new_std.arff -r tweetin.arff -s tweetin_std.arff :: Make predictions with incoming tweets CMD /C java weka.classifiers.functions.SMO -T tweetin_std.arff -t data_new_std.arff -p 2 -c first > result.txt Thanks for your reply!! Answer: A couple of things come to mind. You might want to try setting your Popen's shell=True. Sometimes I have noticed that's solved my problem. p = subprocess.Popen(["test.bat"], stdout = subprocess.PIPE, stderr = subprocess.PIPE, shell=True) You may also want to take a look at [Fabric](http://docs.fabfile.org/en/1.6/), which is perfect for this kind of automation.
Trying to send simple messages with zeromq in python between two hosts Question: Script running on machine 1 import zmq context = zmq.Context() socket = context.socket(zmq.SUB) socket.bind("tcp://127.0.0.1:5000") print "socket bound" while True: print "Waiting for message" message = socket.recv() print "message received: " + str(message) This script gets to the socket.recv() and then never returns from that call. The process that sends the data runs on machine2 import zmq context = zmq.Context() socket = context.socket(zmq.PUB) print "socket created" socket.connect("tcp://machine2:5000") print "socket connected" for i in range(1, 3): print "About to send " + str(i) socket.send("Hello " + str(i)) print "Sent " + str(i) print "About to close socket" socket.close() print "Socket closed" Executes to completion, but never finishes... $ python bar.py socket created socket connected About to send 1 Sent 1 About to send 2 Sent 2 About to close socket Socket closed I'm obviously doing it wrong, how do I create a 'queue' to receive multiple messages from publishes on remote hosts? Answer: Just need to bind the socket properly and set option using setsockopt as given below. It will be fine.. import zmq import socket context = zmq.Context() socket = context.socket(zmq.SUB) socket.setsockopt(zmq.SUBSCRIBE, "") socket.bind("tcp://*:5000") print "socket bound" while True: print "Waiting for message" message = socket.recv() print "message received: " + str(message)
Login to google using Python Question: I have been trying for some time now to login to my google account and download a file from it **without using any external libraries**. I am specifically talking about Google Finance (<https://www.google.com/finance>). All I want to do is login, and download my portfolio (after you sign in and go to the Portfolios tab, there is a link saying: Download to spreadsheet). But I can't get it to work. I have seen several posts here regarding similar problems but none of them worked for me. This is the code I have now: import urllib, urllib2, cookielib #Gets the current directory output_path = os.getcwd() def make_url(ticker_symbol): return base_url + ticker_symbol def make_filename(ticker_symbol): return output_path + "\\" + ticker_symbol + ".csv" # Login page for Google Finance login_url = "https://accounts.google.com/ServiceLogin?service=finance&passive=1209600&continue=https://www.google.com/finance&followup=https://www.google.com/finance" # Google Finance portfolio Download url (works after you signed in) download_url = "https://www.google.com/finance/portfolio?pid=1&output=csv&action=viewt&ei=ypZyUZi_EqGAwAP5Vg" username = 'my_username' password = 'my_password' cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) login_data = urllib.urlencode({'username' : username, 'j_password' : password}) opener.open(login_url, login_data) # Download the file try: urllib.urlretrieve(download_url, make_filename("My_portfolio")) except urllib.ContentTooShortError as e: outfile = open(make_filename("My_portfolio"), "w") outfile.write(e.content) outfile.close() What am I doing wrong? Thanks a lot!! Answer: The author of the [script](http://everydayscripting.blogspot.co.il/2009/10/python-fixes-to- google-login-script.html) you mention has since updated the said script(s). These scripts are designed specifically for gVoice, but can be used as an example for your needs with gFinance I'm sure. The new scripts are found at [GitHub](https://github.com/hillmanov/gvoice).
Python list reduction Question: I'm working on a continuously learning focused web crawler to find news articles related to specific crisis and tragedy events that happen around the world. I am currently working on making the data model as lean and efficient as possible considering its constant growth as the crawl continues. **I am storing the data model in a list** (to do TFIDF comparisons to the page being crawled) **and I want to reduce the size of the list but not lose the relative counts of each item in the list**. This is a sample model from 2 crawled webpages: [[u'remark', u'special', u'agent', u'richard', u'deslauri', u'press', u'investig', u'crime', u'terror', u'crime', u'inform', u'servic', u'inform', u'laboratori', u'servic', u'want', u'want', u'want', u'terror', u'crime', u'want', u'news', u'news', u'press', u'news', u'servic', u'crime', u'inform', u'servic', u'laboratori', u'servic', u'servic', u'crime', u'crime', u'crime', u'terror', u'boston', u'press', u'remark', u'special', u'agent', u'richard', u'deslauri', u'press', u'investig', u'remark', u'special', u'agent', u'richard', u'deslauri', u'press', u'investig', u'boston', u'special', u'agent', u'remark', u'richard', u'deslauri', u'boston', u'investig', u'time', u'time', u'investig', u'boston', u'terror', u'law', u'enforc', u'boston', u'polic', u'polic', u'alreadi', u'alreadi', u'law', u'enforc', u'around', u'evid', u'boston', u'polic', u'evid', u'laboratori', u'evid', u'laboratori', u'may', u'alreadi', u'laboratori', u'investig', u'boston', u'polic', u'law', u'enforc', u'investig', u'around', u'alreadi', u'around', u'investig', u'law', u'enforc', u'evid', u'may', u'time', u'may', u'may', u'investig', u'may', u'around', u'time', u'investig', u'investig', u'boston', u'boston', u'news', u'press', u'boston', u'want', u'boston', u'want', u'news', u'servic', u'inform'], [u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'crime', u'crime', u'inform', u'servic', u'inform', u'servic', u'nation', u'want', u'ten', u'want', u'want', u'crime', u'want', u'news', u'news', u'press', u'releas', u'news', u'stori', u'servic', u'crime', u'inform', u'servic', u'servic', u'servic', u'crime', u'crime', u'crime', u'news', u'press', u'press', u'releas', u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'2013', u'nation', u'press', u'tamerlan', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'tamerlan', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'tamerlan', u'tsarnaev', u'dzhokhar', u'tsarnaev', u'2011', u'foreign', u'govern', u'inform', u'tamerlan', u'tsarnaev', u'inform', u'2011', u'govern', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'foreign', u'govern', u'2011', u'inform', u'foreign', u'govern', u'nation', u'press', u'releas', u'crime', u'releas', u'ten', u'news', u'stori', u'2013', u'ten', u'news', u'stori', u'2013', u'ten', u'news', u'stori', u'2013', u'2011', u'request', u'inform', u'tamerlan', u'tsarnaev', u'foreign', u'govern', u'nation', u'press', u'releas', u'want', u'news', u'servic', u'inform', u'govern']] I want to maintain the list of words and not embed the count into the list itself. I would like the list to go from: [Boston, Boston,Boston,Bombings,Bombings,Tsarnaev,Tsarnaev,Time] to [Boston,Boston,Bombings,Tsarnaev] _Basically,_ if I had a list [a,a,a,b,b,c], I would want to reduce it to [a,a,b] **EDIT:** Sorry for not being clear, but I will try again. 
I do **not** want a set. The number of occurrences is very important because it is a weighted list so "Boston" should appear more times than "time" or another similar term. What I am trying to accomplish is to _minimize_ the data model while removing the insignificant terms from the model. So in the above example, I purposely left out C because it adds to much "fat" to the model. I want to keep the relativity in that A appeared 1 more time than B and 2 more times than C but since C only appeared once in the original model, it is being removed from the _lean_ model. Answer: from collections import defaultdict d = defaultdict(int) for w in words[0]: d[w] += 1 mmin = min(d[p] for p in d) then you can subtract this mmin from each word and create a new list. But perhaps the dict is compact enough. To preserve the order, you can use the info from the dict and devise some smart way to filter your initial word list. For example, for the word list `[a,a,a,b,b,c]`, the dictionary will contain `{a:3, b:2, c:1}` and the `mmin=1`. You can use this information to have a slimmer dictionary by subtracting 1 from all items to get `{a:2, b:1}` and since `c` is `0` it is removed. Complete code: from collections import defaultdict d = defaultdict(int) words = ['a','a','a','b','b','c'] for w in words: d[w] += 1 mmin = min(d[p] for p in d) slim=[] for w in words: if d[w] > mmin: slim.append(w) d[w] -= 1 print slim
Why does iterative elementwise array multiplication slow down in numpy? Question: The code below reproduces the problem I have encountered in the algorithm I'm currently implementing: import numpy.random as rand import time x = rand.normal(size=(300,50000)) y = rand.normal(size=(300,50000)) for i in range(1000): t0 = time.time() y *= x print "%.4f" % (time.time()-t0) y /= y.max() #to prevent overflows The problem is that after some number of iterations, things start to get gradually slower until one iteration takes multiple times more time than initially. A plot of the slowdown ![enter image description here](http://i.stack.imgur.com/S839J.png) CPU usage by the Python process is stable around 17-18% the whole time. I'm using: * Python 2.7.4 32-bit version; * Numpy 1.7.1 with MKL; * Windows 8. Answer: As @Alok pointed out, this seems to be caused by [denormal numbers](http://en.wikipedia.org/wiki/Denormal_number) affecting the performance. I ran it on my OSX system and confirmed the issue. I don't know of a way to flush denormals to zero in numpy. I would try to get around this issue in the algorithm by avoiding the very small numbers: do you really need to be dividing `y` until it gets down to `1.e-324` level? If you avoid the low numbers e.g. by adding the following line in your loop: y += 1e-100 then you'll have a constant time per iteration (albeit slower because of the extra operation). Another workaround is to use higher precision arithmetics, e.g. x = rand.normal(size=(300,50000)).astype('longdouble') y = rand.normal(size=(300,50000)).astype('longdouble') This will make each of your steps more expensive, but each step take roughly the same time. See the following comparison in my system: ![enter image description here](http://i.stack.imgur.com/ePUrI.png)
How can I slow down a loop in Python? Question: If I have a list l: l = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] Is there a way to control the following _for_ loop so that the next element in the list is only printed one second after the previous? for i in l: print i In other words, is there a way to elegantly slow down a loop in Python? Answer: You can use `time.sleep` import time for i in l: print i time.sleep(1)
Python script not work in crontab Question: Here is my script(randombg.py): #!/usr/bin/env python # -*- coding: utf8 -*- import random import subprocess import os BACKGROUND = '/home/david/wallpaper/dell2312' IGNORE_FILES = ['/home/david/wallpaper/dell2312/.directory'] def enumerate(): global BACKGROUND file_collections = [] for root, dirs, files in os.walk(BACKGROUND): for file in files: file_collections.append(os.path.join(root, file)) return file_collections def randombg(): select_files = list(set(enumerate())-set(IGNORE_FILES)) subprocess.call(['feh', '--bg-scale', random.choice(select_files)]) def main(): while 1: randombg() if __name__ == '__main__': main() I have run `chmod a+x randombg.py` and it worked with `python randombg.py` .Let's say its path is `/path/to/randombg.py.` Also, run `/path/to/randombg.py` worked. However, when I added it to crontab as below: 1 * * * * /path/to/randombg.py or 01 * * * * python /path/to/randombg.py or 01 * * * * /usr/bin/python /path/to/randombg.py All failed. I can't figure out. Could anyone explain? PS: ArchLinux * * * More infomation When I run `ps aux|grep python`, I can't find the `randombg.py` while sometimes it appears. * * * Addtional logs from crontab redirect stderr: import: unable to open X server `' @ error/import.c/ImportImageCommand/361. import: unable to open X server `' @ error/import.c/ImportImageCommand/361. import: unable to open X server `' @ error/import.c/ImportImageCommand/361. /home/david/dotfiles/randombg.py: line 9: BACKGROUND: command not found /home/david/dotfiles/randombg.py: line 10: IGNORE_FILES: command not found /home/david/dotfiles/randombg.py: line 13: syntax error near unexpected token `(' /home/david/dotfiles/randombg.py: line 13: ` def enumerate():' Answer: Try to change your `subprocess.call` to subprocess.call("export DISPLAY=:0; feh --bg-scale " + random.choice(select_files), shell=True) This should export the `DISPLAY` variable, as scripts run from crontab do not have access to environmental variables by default.
Python ndarray division Question: I have this piece of code

    from scipy import misc
    from numpy import fft

    orig = misc.imread('lena256.png')
    blur = misc.imread('lena256blur.png')
    orig_f = fft.rfft2(orig)
    blur_f = fft.rfft2(blur)

    kernel_f = blur_f / orig_f      # do the deconvolution

from another question here on Stack Overflow ([Link](http://stackoverflow.com/questions/16541709/inverse-convolution-of-image)), but I know nothing about Python. What is the line `kernel_f = blur_f / orig_f` supposed to do? Is it dividing element by element, or is it matrix division that could be "rewritten" using a matrix inverse? I tried googling this, but found nothing useful. Could someone post code in C that does the same? (I am using ALGLIB for the math, but there is no division of matrices, as far as I know.) Answer: This is elementwise division. See [NumPy for Matlab Users](http://www.scipy.org/NumPy_for_Matlab_Users) in the category of ndarray operators. ndarray: `All operations (*, /, +, ** etc.) are elementwise`
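A two-line check in the interpreter makes the semantics concrete:

    >>> import numpy as np
    >>> np.array([[2., 4.], [6., 8.]]) / np.array([[1., 2.], [3., 4.]])
    array([[ 2.,  2.],
           [ 2.,  2.]])

Each output element is simply blur_f[i, j] / orig_f[i, j]; no matrix inverse is involved. So the C equivalent is just a loop dividing the two arrays entry by entry, keeping in mind that rfft2 returns complex values, so it is a complex division rather than a real one.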
Django giving [Errno 13] Permission denied Question: I am developing an application with Python that parses a webpage and then downloads the images contained on the webpage. I am using WAMP for the webserver and DJango for the framework. The python script I have implemented runs as expected (downloads images to my local desktop properly) on my local computer, but when I try and run it using on the webserver with DJango and WAMP, I get the error [Errno 13] Permission denied:'C:\Users\user123\Desktop\images'. Below is my code, any ideas what's causing the error. from django.http import HttpResponse from bs4 import BeautifulSoup as bsoup import urlparse from urllib2 import urlopen from urllib import urlretrieve import os import sys import zipfile from django.core.servers.basehttp import FileWrapper def getdata(request): out = r'C:\Users\user123\Desktop\images' if request.GET.get('q'): #url = str(request.GET['q']) url = "http://google.com" soup = bsoup(urlopen(url)) parsedURL = list(urlparse.urlparse(url)) for image in soup.findAll("img"): print "Old Image Path: %(src)s" % image #Get file name filename = image["src"].split("/")[-1] #Get full path name if url has to be parsed parsedURL[2] = image["src"] image["src"] = '%s\%s' % (out,filename) print 'New Path: %s' % image["src"] # print image outpath = os.path.join(out, filename) # if image["src"].lower().startswith("http"): urlretrieve(image["src"], outpath) else: urlretrieve(urlparse.urlunparse(parsedURL), out) #Constructs URL from tuple (parsedURL) #Create HTML File and writes to it to check output (stored in same directory). html = soup.prettify("utf-8") with FileWrapper(open("output.html", "wb")) as file: file.write(html) #Create where zip file will be stored (same directory htmlparser file) zip = zipfile.ZipFile('C:\Users\user123\Desktop\Images.zip', 'w') #Path where file that will be zipped up is located path = 'images' #For each file, add it to the zip folder. for root, dirs, files in os.walk(path): for file in files: zip.write(os.path.join(root, file)) zip.close() else: url = 'You submitted nothing!' return HttpResponse(url) Answer: your user does not seem to have write permissions for the "images" directory. set the directory to "world writeable" and try again.
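On Windows/WAMP, "world writeable" in practice means giving the account the Apache service runs under modify rights on that folder. A quick, throwaway way to see which account that is and whether it can write there is a diagnostic view like this (the path is the one from the error):

    import os
    from django.http import HttpResponse

    def whoami(request):
        out = r'C:\Users\user123\Desktop\images'
        info = "apache user: %s\nwritable %s: %s" % (
            os.environ.get('USERNAME', 'unknown'), out, os.access(out, os.W_OK))
        return HttpResponse(info, content_type="text/plain")

If os.access comes back False, either grant that account write permission on the Desktop folder, or point `out` at a directory inside the project (created with os.makedirs), which usually sidesteps the ACL problem entirely.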
Pythonic comparison: datetime from a CSV file to datetime from a timestamp Question: In Python I import a CSV file with one datetime value in each row (2013-03-14 07:37:33) and I want to compare it with the datetime values I obtain from a timestamp. I assume that when I read the CSV the results are strings, but when I try to compare them in a loop against the datetimes built from the timestamp, the comparison never matches anything, and it doesn't raise an error either. Any suggestions?

    csv_in = open('FakeOBData.csv', 'rb')
    reader = csv.reader(csv_in)
    for row in reader:
        date = row
        OBD.append(date)
    .
    .
    .
    for x in OBD:
        print x
        sightings = db.edge.find ( { "tag" : int(participant_tag)},{"_id":0}).sort("time")
        for sighting in sightings:
            time2 = datetime.datetime.fromtimestamp(time)
            if x == time2:

Answer: Use [datetime.datetime.strptime](http://docs.python.org/2/library/datetime.html#datetime.datetime.strptime) to parse the strings into datetime objects. You may also have to work out what time zone the date strings in your CSV are from and adjust for that. `%Y-%m-%d %H:%M:%S` should work as your format string:

    x_datetime = datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S')
    if x_datetime == time2:

Or parse it when reading:

    for row in reader:
        date = datetime.datetime.strptime(row[0], '%Y-%m-%d %H:%M:%S')