What is the function in Python that sorts files by extension? Question: import os import string os.chdir('C:\Python27') x=os.listdir('C:\Python27') y=[f for f in os.listdir(dirname) if os.path.isfile(os.path.join(dirname, f))] for k in y: fileName, fileExtension = os.path.splitext(k) print fileName,fileExtension And now, I want to sort the files by extension. Answer: To sort by name, then by extension: y.sort(key=os.path.splitext) It produces the order: a.2 a.3 b.1 To sort only by extension: y.sort(key=lambda f: os.path.splitext(f)[1]) It produces the order: b.1 a.2 a.3
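If you want the extension to take priority but still keep the names ordered within each extension, a small variation (just a sketch building on the same `y` list) is to sort on a tuple:

    y.sort(key=lambda f: (os.path.splitext(f)[1], f))  # extension first, full name as tie-breaker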
Issues in error handling in python Question: I am trying to bypass this error: `ItemNotFoundError: insufficient items with name u'No_Thanks'` error by using try..except statement. However, I am getting another error saying: `NameError: name 'ItemNotFoundError' is not defined`. I am not sure why is this happening. Thanks. Here is the code that I am using br = mechanize.Browser() br.addheaders = [('User-agent', 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1;Trident/5.0)')] urls = "http://shop.o2.co.uk/mobile_phone/pay_monthly/init/Samsung/Galaxy_Ace_Purple" r = br.open(urls) page_child = br.response().read() soup_child = BeautifulSoup(page_child) contracts = [tag_c['value']for tag_c in soup_child.findAll('input', {"name": "tariff-duration"})] data_usage = [tag_c['value']for tag_c in soup_child.findAll('input', {"name": "allowance"})] for contract in contracts: if contract <>"Pay_and_Go": for data in data_usage: br.select_form('formDuration') br.form['tariff-duration']=[contract,] try: br.form['allowance']=[data,] except ItemNotFoundError: continue br.submit() page_child_child = br.response().read() soup_child_child = BeautifulSoup(page_child_child) items = soup_child_child.findAll('div', {"class": "n-pay-today"}) Answer: I'm guessing the exception is defined by `mechanize`. Try: `except mechanize.ItemNotFoundError` * * * It appears that this is correct after installing mechanize: >>> import mechanize >>> print mechanize.ItemNotFoundError <class 'mechanize._form.ItemNotFoundError'> >>> print mechanize.__version__ (0, 2, 5, None, None)
universal python library for internalization and translation Question: I need to internationalize and translate python application. I look forward for some dictionary collection resides in additional resource files that could be switched runtime and used smoothly inside python code. I've searched stackoverflow.com for similar tools but find only platform- specific libraries, e.g. for pylons, for django and so on. Is there any general ready for use library? Answer: Python's standard `gettext` module provides this. See the Python docs [here](http://docs.python.org/2/library/gettext.html). > The gettext module provides internationalization (I18N) and localization > (L10N) services for your Python modules and applications. It supports both > the GNU gettext message catalog API and a higher level, class-based API that > may be more appropriate for Python files. The interface described below > allows you to write your module and application messages in one natural > language, and provide a catalog of translated messages for running under > different natural languages. A simple example: import gettext gettext.bindtextdomain('myapplication', '/path/to/my/language/directory') gettext.textdomain('myapplication') _ = gettext.gettext # ... print _('This is a translatable string.')
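Since the question asks about switching languages at runtime, note that the same module also provides `gettext.translation()`, which loads a specific catalogue on demand. A minimal sketch (the domain name and locale directory here are placeholders for your own layout):

    import gettext

    # expects e.g. locale/de/LC_MESSAGES/myapplication.mo
    de = gettext.translation('myapplication', localedir='locale',
                             languages=['de'], fallback=True)
    _ = de.gettext        # or de.install() to make _ available everywhere

    print _('This is a translatable string.')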
Forward a port via UPnP in Python Question: I am making a Python application that requires the user to have a port forwarded to his computer in order to communicate with a server or another user. The current implementation works quite great, yet the only thing is that the person who's running the file must forward the port to the local IP manually. I want to automate this. He picks a port, script checks if it can be forwarded, then it forwards it. If it can't, it handles the error respectively. I've looked into some libraries that claim they can do this in pure Python (since I will need to compile to .exe's [...] after finishing) but didn't manage to find something useful. If you could provide me with a code sample on how to attempt to forward a port and handle success/fail respectively, that would be great. Thanks in advance for your time. P.S.:It's Python 2.7.X that I am targeting Answer: Looks like there are a few options: * Miniupnp has [python bindings](https://github.com/miniupnp/miniupnp/blob/master/miniupnpc/testupnpigd.py) * [Python bindings for GNUPnP](https://packages.debian.org/sid/python/python-gupnp-igd) * [miranda-upnp](https://code.google.com/p/miranda-upnp/) is pure python There is a nice example of the python bindings for GNUPnP being used to open ports on a router [here](http://www.gniibe.org/memo/system/dynamic- ip/upnp.html). In that example the lease time is set to 0, which is unlimited. See [here](https://developer.gnome.org/gupnp- igd/unstable/GUPnPSimpleIgd.html#gupnp-simple-igd-add-port) for the definition of add_port. A simple example might be: #! /usr/bin/python import gupnp.igd import glib from sys import stderr my_ip = YOUR_IP igd = gupnp.igd.Simple() igd.external_ip = None main = glib.MainLoop() def mep(igd, proto, eip, erip, port, localip, lport, msg): if port == 80: igd.external_ip = eip main.quit() def emp(igd, err, proto, ep, lip, lp, msg): print >> stderr, "ERR" print >> stderr, err, proto, ep, lip, lp, msg main.quit() igd.connect("mapped-external-port", mep) igd.connect("error-mapping-port", emp) #igd.add_port("PROTO", EXTERNAL_PORT, INTERNAL_IP, INTERNAL_PORT, LEASE_DURATION_IN_SECONDS, "NAME") igd.add_port("TCP", 80, my_ip, 8080, 86400, "web") main.run()
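For the pure-Python route, the miniupnpc bindings linked above expose roughly the following API; this is just a sketch (untested) of asking the gateway to forward a port and checking the result:

    import miniupnpc

    u = miniupnpc.UPnP()
    u.discoverdelay = 200          # ms to wait for IGD discovery replies
    u.discover()                   # returns the number of devices found
    u.selectigd()                  # pick the Internet Gateway Device

    external_port, internal_port = 15000, 15000
    ok = u.addportmapping(external_port, 'TCP', u.lanaddr, internal_port,
                          'my python app', '')
    if ok:
        print 'Mapped %s:%d -> %s:%d' % (u.externalipaddress(), external_port,
                                         u.lanaddr, internal_port)
    else:
        print 'Port mapping failed'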
Python 32 or 64 on 64 bit Windows 7? How will this effect installing easy_install? Question: I wrote some python code on my mac and how I have to transfer it over to a windows computer. This is frustrating beyond words. I installed Python 2.7 x32, then I uninstalled it, then I installed Python 2.7 x64. My python script depends on xlrd and xlwt, and some other downloaded modules. I would like to install those using easy_install or pip or any way that is easy for somebody who doesn't know too much about the really intricate workings of a computer. As of now, if I do this: C:\Windows\System32> python I get: 'python' is not recognized as an internal or external command operable program or batch file. ![This is my python folder](http://imgur.com/h8HsMHF) ![This is the Scripts folder](http://i.imgur.com/vgK3Lix.png) ![These are my environment variables](http://i.imgur.com/o4idDim.png) Would someone suggest a next move for me? I did a bunch of google-ing and stackoverflow-ing and seem to have landed [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/), I'm not sure how to download something from here. For example, how would I download Pip, and also, which one would I download? People say that a person should download and install [this](http://peak.telecommunity.com/dist/ez_setup.py), how would I do that? I'm familiar with terminal, what would I type into cmd? Answer: The reason that I things were not going well for me was because I was using a mixture of Python 2.7 (32-bit) and Python 2.7 (64-bit). After more researching I found that even though my Windows 7 was 64 bit, I didn't have to download the 64-bit python. In fact, the 32-bit python was more compatible with other programs. So I clicked around and uninstalled python and everything else that had the name 'Python' in it from my Add/Remove programs menu. This took about 10 minutes. Next, I downloaded the 32-bit Python 2.7. After this, all of the packages that I download, I made sure that they were 32-bit and not 64-bit. Also, very important, I went to my 'Environment Variables' (google how to get there) and added the directory of where Python27 was installed. To do this you can follow [this](http://pythoncentral.org/how-to-install-python-2-7-on- windows-7-python-is-not-recognized-as-an-internal-or-external-command/) guide. The last thing that I did was add ;.PY to the end of PATHEXT.
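A quick way to double-check which interpreter you actually ended up with (32- or 64-bit) is to ask Python itself, for example:

    import struct, platform
    print struct.calcsize('P') * 8   # 32 or 64: the pointer size of this interpreter
    print platform.architecture()    # e.g. ('32bit', 'WindowsPE')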
Raspberry pi 'Response IndentationError' Question: I have been looking at a tutorial on how to send sms texts through the rasp pi. Here is the code that I have and I'm not sure why I have an error. #!/usr/bin/python #----------------------------------- # Send SMS Text Message # # Author : Matt Hawkins # Site : http://www.raspberrypi-spy.co.uk/ # Date : 30/08/2012 # # Requires account with TxtLocal # http://www.txtlocal.co.uk/?tlrx=114032 # #----------------------------------- # Import required libraries import urllib # URL functions import urllib2 # URL functions # Define your message message = 'Test message sent from my Raspberry Pi' # Set your username and sender name. # Sender name must alphanumeric and # between 3 and 11 characters in length. username = '[email protected]' sender = 'Jonny.D' # Your unique hash is available from the docs page # https://control.txtlocal.co.uk/docs/ hash = '8fe5dae7bafdbbfb00c7aebcfb24e005b5cb7be8' # Set the phone number you wish to send # message to. # The first 2 digits are the country code. # 44 is the country code for the UK # Multiple numbers can be specified if required # e.g. numbers = ('447xxx123456','447xxx654321') numbers = ('447xxxxxx260') # Set flag to 1 to simulate sending # This saves your credits while you are # testing your code. # To send real message set this flag to 0 test_flag = 1 #----------------------------------- # No need to edit anything below this line #----------------------------------- values = {'test' : test_flag, 'uname' : username, 'hash' : hash, 'message' : message, 'from' : sender, 'selectednums' : numbers } url = 'http://www.txtlocal.com/sendsmspost.php' postdata = urllib.urlencode(values) req = urllib2.Request(url, postdata) print 'Attempt to send SMS ...' try: response = urllib2.urlopen(req) response_url = response.geturl() if response_url==url: print 'SMS sent!' except urllib2.URLError, e: print 'Send failed!' print e.reason And here is the error message I have popping up on the terminal File "send_sms.py", line 331 response = urllib2.urlopen(req) ^ IndentationError: expected an indented block Answer: Python requires proper indentation, like this: try: response = urllib2.urlopen(req) response_url = response.geturl() if response_url==url: print 'SMS sent!' except urllib2.URLError, e: print 'Send failed!' print e.reason Here's [a section on Python indentation from Dive Into Python 3](http://getpython3.com/diveintopython3/your-first-python- program.html#indentingcode).
Evaluation order in python list and tuple Question: Let' say we have codes like a = (fcn1(), fcn2()) b = [fcn1(), fcn2()] Does python interpreter evaluate fcn1() before fcn2()? Or they can have undefined order? Answer: They are evaluated from [left to right](http://docs.python.org/2/reference/expressions.html#evaluation-order). From the [docs](http://docs.python.org/2/reference/expressions.html#list- displays)(for lists): > When a comma-separated list of expressions is supplied, its elements are > evaluated from left to right and placed into the list object in that order. Small test using `dis.dis()`: In [208]: def f1():pass In [209]: def f2():pass In [210]: import dis In [212]: def func(): a = (f1(), f2()) b = [f1(), f2()] .....: In [213]: dis.dis(func) 2 0 LOAD_GLOBAL 0 (f1) 3 CALL_FUNCTION 0 6 LOAD_GLOBAL 1 (f2) 9 CALL_FUNCTION 0 12 BUILD_TUPLE 2 15 STORE_FAST 0 (a) 3 18 LOAD_GLOBAL 0 (f1) 21 CALL_FUNCTION 0 24 LOAD_GLOBAL 1 (f2) 27 CALL_FUNCTION 0 30 BUILD_LIST 2 33 STORE_FAST 1 (b) 36 LOAD_CONST 0 (None) 39 RETURN_VALUE Note: In case of assignment, the right hand side is evaluated first.
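If you don't want to read bytecode, a quicker way to observe the order is to give the functions a visible side effect, e.g.:

    calls = []

    def f1():
        calls.append('f1')
        return 1

    def f2():
        calls.append('f2')
        return 2

    a = (f1(), f2())
    b = [f1(), f2()]
    print calls   # ['f1', 'f2', 'f1', 'f2']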
Python Django Admin Clean() Method not overriding values Question: Maybe I am missing something here, but according to the django docs, I should be able to override values sent from an admin form from within the clean() method. From django docs def clean(self): from django.core.exceptions import ValidationError # Don't allow draft entries to have a pub_date. if self.status == 'draft' and self.pub_date is not None: raise ValidationError('Draft entries may not have a publication date.') # Set the pub_date for published items if it hasn't been set already. if self.status == 'published' and self.pub_date is None: self.pub_date = datetime.date.today() I have stripped down my code and am just trying a basic example here from within the admin Models.py class Test(models.Model): name = models.CharField(max_length=255,) def clean(self): self.name = 'Robin Hood' return self So when I try and add a new Test record, if I leave the name field empty, it should grab the value from the clean method and save. What happens though, is that the form doesn't validate, and the field remains empty. Am I missing something blatantly obvious here? Answer: You won't even get as far as running the model clean method. Django will run the form's validation code first, and because your field is not defined with `blank=True`, the form will enforce that constraint first. What you should do is to override the form, setting `required=False` on the `name` field, then writing a _form_ clean method which sets values in - and returns - `self.cleaned_data`: class TestForm(forms.ModelForm): name = forms.CharField(required=False) class Meta: model = Test def clean(self): self.cleaned_data['name'] = 'Robin Hood' return self.cleaned_data and reference that form in your admin class: class TestAdmin(admin.ModelAdmin): form = TestForm
parse commands with regex and python Question: I have a string like this: str = "something move 11 something move 12 something 13 copy 14 15" _where the "something" means some text, or no text at all._ and as a result I want to have a list like: [('move', 11, ''), ('move', 12, 13), ('copy', 14, 15)] I tried using this: re.findall('(move|copy).+?([0-9]+).+?([0-9]+)*', str) but it gives me the output: [('move', 11, ''), ('move', 12, ''), ('copy', 14, '')] I understand that this is because the last number is optional, but I just have no idea how I could get it working. How can I do this? Answer: You could use a [regular expression](http://docs.python.org/2/library/re.html) (with lookbehind and lookahead): In [1]: import re In [2]: tokens = "something move 11 something move 12 something 13 copy 14 15" In [3]: split_movements = re.split('(?<=\d)\s(?!\d+)', tokens) In [4]: split_movements Out[4]: ['something move 11', 'something move 12', 'something 13', 'copy 14 15'] In [5]: movements = [re.split('\s(?=\d+)', m) for m in split_movements] In [6]: movements Out[6]: [['something move', '11'], ['something move', '12'], ['something', '13'], ['copy', '14', '15']]
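If you also need the exact list of tuples from the question, one way (just a sketch; it assumes a chunk without `move`/`copy` supplies the second argument of the previous command, and it keeps the numbers as strings rather than ints) is to post-process those chunks:

    commands = []
    for chunk in movements:
        words, numbers = chunk[0].split(), chunk[1:]
        if words and words[-1] in ('move', 'copy'):
            args = numbers + [''] * (2 - len(numbers))   # pad missing second argument
            commands.append([words[-1]] + args)
        elif commands:
            commands[-1][2] = numbers[0]                 # number belongs to the previous command
    result = [tuple(c) for c in commands]
    # [('move', '11', ''), ('move', '12', '13'), ('copy', '14', '15')]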
South, how to migrate from CharField to ForeignKey? Question: The model: class ListContext(models.Model): content = models.CharField(max_length=200, blank=True) I use south to manage schema migrations. Now I change the previous model to this one: from django.contrib.contenttypes.models import ContentType class ListContext(models.Model): content = models.ForeignKey(ContentType, blank=True, null=True) For the migration: $ python manage.py schemamigration --auto page ~ Changed field content on page.ListContext + Added index for ['content'] on page.ListContext Created 0010_auto__chg_field_listcontext_content.py. You can now apply this migration with: ./manage.py migrate page Everything is fine until this point: $ python manage.py migrate page Running migrations for page: - Migrating forwards to 0010_auto__chg_field_listcontext_content. > page:0010_auto__chg_field_listcontext_content FATAL ERROR - The following SQL query failed: ALTER TABLE "page_page_listcontext" ALTER COLUMN "content_id" TYPE integer, ALTER COLUMN "content_id" DROP NOT NULL, ALTER COLUMN "content_id" DROP DEFAULT; The error was: column "content_id" cannot be cast to type integer Error in migration: page:0010_auto__chg_field_listcontext_content I can guess the error happens during the cast from string to int, but how can I avoid this and get the migration done? Could it make any difference, I don't care to preserve the data stored in the table. Answer: If you manually edit the forwards function: Rename the column: db.rename_column('sometable', 'content', 'content_old') Then add your column back: db.add_column('sometable', 'content', self.gf('django.db.models.fields.IntegerField')(default=0)) Then execute a query that updates the new field by looking up the id. db.execute(""" UPDATE sometable SET content = (SELECT FKTable.id FROM FKTable WHERE (FKTable.content = sometable.content_old AND sometable.content_old != '') OR (FKTable.content = 'none' AND sometable.content_old = '')) --Maybe cut the OR out """) You would then have to do some fancy stuff to make backwards work properly.
Using numpy.take for faster fancy indexing Question: **EDIT** I have kept the more complicated problem I am facing below, but my problems with `np.take` can be summarized better as follows. Say you have an array `img` of shape `(planes, rows)`, and another array `lut` of shape `(planes, 256)`, and you want to use them to create a new array `out` of shape `(planes, rows)`, where `out[p,j] = lut[p, img[p, j]]`. This can be achieved with fancy indexing as follows: In [4]: %timeit lut[np.arange(planes).reshape(-1, 1), img] 1000 loops, best of 3: 471 us per loop But if, instead of fancy indexing, you use take and a python loop over the `planes` things can be sped up tremendously: In [6]: %timeit for _ in (lut[j].take(img[j]) for j in xrange(planes)) : pass 10000 loops, best of 3: 59 us per loop Can `lut` and `img` be in someway rearranged, so as to have the whole operation happen without python loops, but using `numpy.take` (or an alternative method) instead of conventional fancy indexing to keep the speed advantage? * * * **ORIGINAL QUESTION** I have a set of look-up tables (LUTs) that I want to use on an image. The array holding the LUTs is of shape `(planes, 256, n)`, and the image has shape `(planes, rows, cols)`. Both are of `dtype = 'uint8'`, matching the `256` axis of the LUT. The idea is to run the `p`-th plane of the image through each of the `n` LUTs from the `p`-th plane of the LUT. If my `lut` and `img` are the following: planes, rows, cols, n = 3, 4000, 4000, 4 lut = np.random.randint(-2**31, 2**31 - 1, size=(planes * 256 * n // 4,)).view('uint8') lut = lut.reshape(planes, 256, n) img = np.random.randint(-2**31, 2**31 - 1, size=(planes * rows * cols // 4,)).view('uint8') img = img.reshape(planes, rows, cols) I can achieve what I am after using fancy indexing like this out = lut[np.arange(planes).reshape(-1, 1, 1), img] which gives me an array of shape `(planes, rows, cols, n)` , where `out[i, :, :, j]` holds the `i`-th plane of `img` run through the `j`-th LUT of the `i`-th plane of the LUT... All is good, except for this: In [2]: %timeit lut[np.arange(planes).reshape(-1, 1, 1), img] 1 loops, best of 3: 5.65 s per loop which is completely unacceptable, especially since I have all of the following not so nice looking alternatives using `np.take` than run much faster: 1. A single LUT on a single plane runs about x70 faster: In [2]: %timeit np.take(lut[0, :, 0], img[0]) 10 loops, best of 3: 78.5 ms per loop 2. A python loop running through all the desired combinations finishes almost x6 faster: In [2]: %timeit for _ in (np.take(lut[j, :, k], img[j]) for j in xrange(planes) for k in xrange(n)) : pass 1 loops, best of 3: 947 ms per loop 3. Even running all combinations of planes in the LUT and image and then discarding the `planes**2 - planes` unwanted ones is faster than fancy indexing: In [2]: %timeit np.take(lut, img, axis=1)[np.arange(planes), np.arange(planes)] 1 loops, best of 3: 3.79 s per loop 4. And the fastest combination I have been able to come up with has a python loop iterating over the planes and finishes x13 faster: In [2]: %timeit for _ in (np.take(lut[j], img[j], axis=0) for j in xrange(planes)) : pass 1 loops, best of 3: 434 ms per loop The question of course is if there is no way of doing this with `np.take` without any python loop? Ideally whatever reshaping or resizing is needed should happen on the LUT, not the image, but I am open to whatever you people can come up with... Answer: Fist of all I have to say I really liked your question. 
Without rearranging `LUT` or `IMG` the following solution worked: %timeit a=np.take(lut, img, axis=1) # 1 loops, best of 3: 1.93s per loop But from the result you have to query the diagonal: a[0,0], a[1,1], a[2,2]; to get what you want. I've tried to find a way to do this indexing only for the diagonal elements, but still did not manage. Here are some ways to rearrange your `LUT` and `IMG`: The following works if the indexes in `IMG` are from 0-255, for the 1st plane, 256-511 for the 2nd plane, and 512-767 for the 3rd plane, but that would prevent you from using `'uint8'`, which can be a big issue...: lut2 = lut.reshape(-1,4) %timeit np.take(lut2,img,axis=0) # 1 loops, best of 3: 716 ms per loop # or %timeit np.take(lut2, img.flatten(), axis=0).reshape(3,4000,4000,4) # 1 loops, best of 3: 709 ms per loop in my machine your solution is still the best option, and very adequate since you just need the diagonal evaluations, i.e. plane1-plane1, plane2-plane2 and plane3-plane3: %timeit for _ in (np.take(lut[j], img[j], axis=0) for j in xrange(planes)) : pass # 1 loops, best of 3: 677 ms per loop I hope this can give you some insight about a better solution. It would be nice to look for more options with `flatten()`, and similar methods as `np.apply_over_axes()` or `np.apply_along_axis()`, that seem to be promising. I used this code below to generate the data: import numpy as np num = 4000 planes, rows, cols, n = 3, num, num, 4 lut = np.random.randint(-2**31, 2**31-1,size=(planes*256*n//4,)).view('uint8') lut = lut.reshape(planes, 256, n) img = np.random.randint(-2**31, 2**31-1,size=(planes*rows*cols//4,)).view('uint8') img = img.reshape(planes, rows, cols)
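One more idea worth timing (a sketch for the simplified `(planes, rows)` / `(planes, 256)` case from the question's EDIT): build flat indices into `lut.ravel()` so a single `take` call replaces the Python loop over planes:

    offsets = np.arange(planes)[:, None] * 256        # shape (planes, 1)
    flat_idx = img.astype(np.intp) + offsets           # out[p, j] -> p*256 + img[p, j]
    out = lut.ravel().take(flat_idx)                    # same result as the fancy-indexing version

I have not benchmarked this against the per-plane loop, so treat it as something to try rather than a guaranteed win.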
File to dictionary Question: > **Possible Duplicate:** > [Python - file to > dictionary?](http://stackoverflow.com/questions/4803999/python-file-to- > dictionary) I've been looking on this website, and racking my brain, but I just can't find the answer. I have a file of words that are matched with numbers, delimited with '*'. Now I have to find a way to convert this file to a dictionary in python and look up words from another text and assign the values from the dictionary file. I know how to import a plain-text file, just not how I can make it "act like a dictionary" does anyone have any ideas? Thanks everyone Answer: Assuming your file is one per line, e.g. chicken*5 dog*3 ... You can do with open("path/to/file") as f: data = [line.split("*") for line in f] data = dict((word, int(cnt)) for (word, cnt) in data)
Run code from a Python module, modify module, then run again without exiting interpeter Question: I'd like to be able to open a Python shell, execute some code defined in a module, then modify the module, then re-run it in the same shell without closing/reopening. I've tried reimporting the functions/objects after modifying the script, and that doesn't work: Python 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from my_module import buggy_function, test_input >>> buggy_function(test_input) wrong_value # Returns incorrect result # Go to the editor, make some changes to the code and save them # Thought reimporting might get the new version >>> from my_module import buggy_function, test_input >>> buggy_function(test_input) wrong_value # Returns the same incorrect result Clearly reimporting did not get me the 'new version' of the function. In this case, it isn't that big a deal to close the interpreter and reopen it. But, if the code I'm testing is complicated enough, sometimes I have to do a fair amount of importing objects and defining dummy variables to make a context that can adequately test the code. It does get annoying to have to do this every time I make a change. Anyone know how to "refresh" the module code within the Python interpeter? Answer: use `imp.reload()`: In [1]: import imp In [2]: print imp.reload.__doc__ reload(module) -> module Reload the module. The module must have been successfully imported before.
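For the workflow in the question that looks roughly like this (a sketch; note that names bound with `from my_module import ...` keep pointing at the old objects, so import the module itself and re-run any from-imports after reloading):

    import my_module

    my_module.buggy_function(my_module.test_input)   # wrong_value

    # ... edit and save my_module.py in your editor ...

    reload(my_module)          # builtin in Python 2; imp.reload(my_module) is the portable spelling
    my_module.buggy_function(my_module.test_input)   # now runs the edited code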
How to set self.maxDiff in nose to get full diff output? Question: When using nose 1.2.1 with Python 3.3.0, I sometimes get an error message similar to the following one ====================================================================== FAIL: maxdiff2.test_equal ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python3.3/site-packages/nose/case.py", line 198, in runTest self.test(*self.arg) File "/Users/loic/cmrsj/Calculus_II/scrap/maxdiff2.py", line 32, in test_equal assert_equal(str1, str2) AssertionError: 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a diam lectus.\n [truncated]... != 'Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis neque.\nE [truncated]... Diff is 1780 characters long. Set self.maxDiff to None to see it. ---------------------------------------------------------------------- Ran 1 test in 0.064s FAILED (failures=1) In many situations, to figure out what the error really is, I need to see the full diff output. However, I have no idea of how to set that `self.maxDiff`. Googling for nose and maxDiff does not help. With the same version of nose on Python 2.7.1 the full diff is printed to screen. Here is a simple script that generates the error above when run with `nosetests-3.3`: from nose.tools import assert_equal def test_equal(): str1 = """\ Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a diam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam viverra nec consectetur ante hendrerit. Donec et mollis dolor. Praesent et diam eget libero egestas mattis sit amet vitae augue. Nam tincidunt congue enim, ut porta lorem lacinia consectetur. Donec ut libero sed arcu vehicula ultricies a non tortor. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed, adipiscing id dolor. Pellentesque auctor nisi id magna consequat sagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt feugiat nisl imperdiet. Ut convallis libero in urna ultrices accumsan. Donec sed odio eros. Donec viverra mi quis quam pulvinar at malesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. In rutrum accumsan ultricies. Mauris vitae nisi at sem facilisis semper ac in est.""" str2 = """\ Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis neque. Etiam luctus porttitor lorem, sed suscipit est rutrum non. Curabitur lobortis nisl a enim congue semper. Aenean commodo ultrices imperdiet. Vestibulum ut justo vel sapien venenatis tincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum vitae quis lectus. Aliquam ut massa in turpis dapibus convallis. Praesent elit lacus, vestibulum at malesuada et, ornare et est. Ut augue nunc, sodales ut euismod non, adipiscing vitae orci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec est eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin lectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem, vel elementum mi. Phasellus non ullamcorper urna.""" assert_equal(str1, str2) Answer: You set `maxDiff` to `None`. But you will have to actually use a [`unittest.TestCase`](http://docs.python.org/2/library/unittest.html#unittest.TestCase.maxDiff) for your tests for that to work.This shold work. class MyTest(unittest.TestCase): maxDiff = None def test_diff(self): <your test here>
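Rewritten as a test case, the example from the question might look like this (a sketch; the two long strings are elided here):

    import unittest

    class TestLongStrings(unittest.TestCase):
        maxDiff = None                 # never truncate assertEqual diffs

        def test_equal(self):
            str1 = "..."               # the long strings from the question
            str2 = "..."
            self.assertEqual(str1, str2)

nose still collects `unittest.TestCase` subclasses, so the test runs as before, but failures now print the full diff.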
python json encode - Missing ( and ' - urllib2.open() is ending in HTTP Error 400 Question: I am encoding a 40kb dictionary of dictionaries and lists into json and then pushing it over http to a nosql database. I've used both jsonpickle.encode and json.dumps modules to encode my dictionary's content, but both are leading to an http error. I tried manually CURLing the problematic section of code with a result of the error "-bash: syntax error near unexpected token `('" Here is some example code: import urllib2 , jsonpickle url = "http://amazonaws.com/server/%s/_create" % item data = jsonpickle.encode( some_dict_of_dicts_and_lists ) # also tried json.dumps here. try: req = urllib2.Request ( url , data , { 'Content-Type' : 'application/json' } ) f = urllib2.urlopen ( req ) except Exception as e: print "Error: %s" %e The above works for entering certain sections of my dictionary into my nosql db. However, this prints "Error: HTTP Error 400: Bad Request" when I send over other sections of my dictionary. This means to me that something is not getting encoded properly in the data variable/string. To get CURL's response on this problem, I tried CURLing the following code: item_id = item_dictionary [ 'id' ] data = jsonpickle.encode( some_dict_of_dicts_and_lists ) # also tried json.dumps here. command = 'curl -XPOST "http://amazonaws.com/server/%s/_create" -d '"%s"' % ( nsn_id , data ) os.system(command) This produces the error "sh: -c: line 0: syntax error near unexpected token `('" If I try to manually type the curl command into the command line, I get the following set of errors: curl: (6) Could not resolve host: material; nodename nor servname provided, or not known curl: (6) Could not resolve host: items; nodename nor servname provided, or not known curl: (6) Could not resolve host: of; nodename nor servname provided, or not known curl: (3) [globbing] unmatched close brace/bracket at pos 146 {"error":"MapperParsingException[Failed to parse]; nested: JsonParseException[Unexpected end-of-input in VALUE_STRING\n at [Source: [B@2ff246ab; line: 1, column: 6519]]; ","status":400} So: 1) Is there a better way to make sure the the json encode process captures makes all of the parentheses and single quotes are formatted with a '\' in front of them? Should I do a replace on all of them? 2) Is there a good way to figure out the cause of this error in a more detailed way using urllib2 or do I need to do CURL from the command line in order to get the bash type error? Answer: To answer both your questions: 1) `json.dumps` \- assuming that you pass it valid input - will always create a string representing a valid JSON object. If you pass invalid input, it will throw an exception. Python's `json` module is extremely widely used and if there were a bug that could cause it to spit out an invalidly formatted JSON dictionary then this would definitely be known (and fixed) by now. 2) You're getting the HTTPError because the web server is returning a page with a [HTTP Status Code](http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html) of 400 in response to your request. There was probably also some content in the page body, though, and `urllib2` hasn't thrown that away. You can call the `.read()` method of the HTTPError to get any content that was on the page, rather than just seeing the status code like you are at the moment (which is sometimes not enough to understand what you've done wrong). 
Most web APIs will return you some sort of useful explanation when spitting out a `400 - Bad Request` error, such as a message telling you what parameter you missed out in your request. Putting `print e.read()` in your exception block will let you see this, and will probably allow you to solve your current problem.
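Concretely, in the try/except from the question that could look something like:

    try:
        req = urllib2.Request(url, data, {'Content-Type': 'application/json'})
        f = urllib2.urlopen(req)
    except urllib2.HTTPError as e:     # subclass of URLError raised for 4xx/5xx responses
        print "Error: %s" % e
        print e.read()                 # body of the 400 response, which usually explains the problem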
Get list of all applications deployed on a weblogic server Question: Using the following code, I am able to connect to the weblogic server. Now I want to get a list of all the applications deployed on the server. listapplications() from the command prompt lists the applications, but I am not able to store the output into a variable when I execute interpreter.exec(listapplications()) because interpreter.exec returns a void. Any ideas on how can I store the applications list in a collection/array? Any other alternative or leads would also help. import org.python.util.InteractiveInterpreter; import weblogic.management.scripting.utils.WLSTInterpreter; public class SampleWLST { public static void main(String[] args) { SampleWLST wlstObject = new SampleWLST(); wlstObject.connect(); } public void connect() { InteractiveInterpreter interpreter = new WLSTInterpreter(); interpreter.exec("connect('username', 'password', 't3://localhost:8001')"); } } Answer: I solved it. I captured the output of the wlst by redirect to a stream using setOut method of InteractiveInterpreter and wrote a scanner to read the stream in Java. Hope this might help someone else. ArrayList<String> appList = new ArrayList<String>(); Writer out = new StringWriter(); interpreter.setOut(out); interpreter.exec("print listApplications()"); StringBuffer results = new StringBuffer(); results.append(out.toString()); Scanner scanner = new Scanner(results.toString()); while(scanner.hasNextLine()){ String line = scanner.nextLine(); line = line.trim(); if(line.equals("None")) continue; appList.add(line); }
Django and venv: settings.DATABASES is improperly configured. Please supply the ENGINE value Question: This happens when I run python manage.py syncdb. It also happens when I run python manage.py syncdb --mysite.settings. Not sure where to go from here: django isn't recognizing my settings file and I don't know why or how to rectify it. python ../manage.py syncdb Traceback (most recent call last): File "../manage.py", line 9, in <module> execute_from_command_line(sys.argv) File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 196, in run_from_argv self.execute(*args, **options.__dict__) File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 232, in execute output = self.handle(*args, **options) File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/core/management/base.py", line 371, in handle return self.handle_noargs(**options) File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/core/management/commands/syncdb.py", line 57, in handle_noargs cursor = connection.cursor() File "/home/ryan/Programming/OpenCV-2.4.2/msheroku/venv/local/lib/python2.7/site-packages/django/db/backends/dummy/base.py", line 15, in complain raise ImproperlyConfigured("settings.DATABASES is improperly configured. " django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the ENGINE value. Check settings documentation for more details. From settings.py DATABASES = { 'default': { 'ENGINE': 'postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'xxx', # Or path to database file if using sqlite3. 'USER': 'xxx', # Not used with sqlite3. 'PASSWORD': 'xxx', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } I'm pretty confident that it's not finding the right settings file to begin with. Answer: Hi I am assuming you are testing your app using heroku and your trying to connect to your local database ? if so be sure to remove this line of code : # Parse database configuration from $DATABASE_URL import dj_database_url DATABASES['default'] = dj_database_url.config() # Honor the 'X-Forwarded-Proto' header for request.is_secure() SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') Add it in when you are gonna connect to a database on a anther server.
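One more thing worth checking (a guess based on the settings shown, not something the traceback proves): recent Django versions expect the fully-qualified backend path in `ENGINE`, so for local development the block would look like:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',  # full path, not just 'postgresql_psycopg2'
            'NAME': 'xxx',
            'USER': 'xxx',
            'PASSWORD': 'xxx',
            'HOST': '',
            'PORT': '',
        }
    }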
QApplication instance causing python shell to be sluggish Question: My IPython shell becomes sluggish after I instantiate a QApplication object. For example, even from a fresh start, the following code will make my shell sluggish enough where I have to restart it. from PyQt4 import QtGui app = QtGui.QApplication([]) As soon as that is submitted, my typing becomes lagged by 2 or 3 seconds. My computer is not fantastic, but I still have plenty of available memory, and it's only the python shell that seems to be affected. I've tried both the default python interpreter and the ipython interpreter with the same results. Any suggestions? Update: I also tried running a standalone pyqt "Hello World" program in ipython using the `%run` magic command and when control was returned to ipython after I closed the resulting "Hello World" window, it had the same effect; the shell became sluggish and my typing starting lagging by 2-3 seconds. Answer: This may help: QtCore.pyqtRemoveInputHook() > When the QtCore module is imported for the first time it installs a Python > input hook (ie. it sets the value of Python's PyOS_InputHook variable). This > allows commands to be entered at the interpreter prompt while the > application is running. It is then possible to dynamically create new Qt > objects and call the methods of any existing Qt object. > > The input hook can cause problems for certain types of application, > particularly those that provide a similar facility through different means. > This function removes the input hook installed by PyQt. > > The input hook can be restored using the pyqtRestoreInputHook() function. <http://www.riverbankcomputing.com/static/Docs/PyQt4/html/qtcore.html#pyqtRemoveInputHook>
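In practice that means calling it before (or right after) creating the QApplication, and you can undo it later if you want the interactive behaviour back; a sketch:

    from PyQt4 import QtCore, QtGui

    QtCore.pyqtRemoveInputHook()       # stop PyQt's input hook from running while you type
    app = QtGui.QApplication([])

    # ... work with Qt objects from the shell ...

    QtCore.pyqtRestoreInputHook()      # optional: put the hook back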
MySQL for Python: Incorrect Integer value Question: I have the same question as asked here: [Default value for empty integer fields when importing CSV data in MySQL](http://stackoverflow.com/questions/5394228/default-value-for-empty- integer-fields-when-importing-csv-data-in-mysql) I keep getting the warning "Incorrect Integer value" when importing a csv file into Mysql. I've read all the relevant questions/answers here, and the link above is my direct question. But I'm trying to implement the answer given there in Python 2.7, and I can't seem to get it working. My code below is my attempt to implement the answer from the above post. I think the issue is the syntax for using the "Set" clause in my Load DATA Local Infile statement...I would really appreciate any help since MySQL automatically converts empty INT's to "0" and that would mess up my analysis since I want blanks to be null. My code is: import MySQLdb db = MySQLdb.connect(blah, blah, blah) cursor = db.cursor() sql = """CREATE TABLE PLAYER( PLAYER_ID INT, DATE DATETIME, BATTING_AVERAGE INT )""" cursor.execute(sql) statement = """LOAD DATA LOCAL INFILE 'path/file.csv' INTO TABLE PLAYER COLUMNS TERMINATED BY ',' SET BATTING_AVERAGE = IF(@BATTING_AVERAGE='',NULL,@BATTING_AVERAGE) IGNORE 1 LINES;""" The error that this code gives is: ProgramingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right synatx to use near 'IGNORE 1 Lines at line 5") Answer: The `SET` and `IGNORE` clauses are backwards. Try swapping them: statement = """LOAD DATA LOCAL INFILE 'path/file.csv' INTO TABLE PLAYER COLUMNS TERMINATED BY ',' IGNORE 1 LINES SET BATTING_AVERAGE = IF(@BATTING_AVERAGE='',NULL,@BATTING_AVERAGE);"""
Timing a Function in Python Question: I'm trying to time two different functions in python. The first: import cProfile def bin_search(A, first,last, target): #returns index of target in A, if present #returns -1 if target is not present in A if first > last: return -1 else: mid = (first+last)/2 if A[mid]==target: return mid elif A[mid]>target: return bin_search(A,first,mid-1,target) else: return bin_search(A,mid+1,last,target) the second def trin_search(A,first,last,target): #returns index of target in A, if present #returns -1 if target is not present in A if target> last or target<first: return -1 if first>last: return -1 else: one_third=first+(last-first)/3 two_thirds=first+2*(last-first)/3 if A[one_third]==target: return one_third elif A[one_third]>target: #search the left-hand third return trin_search(A,first, one_third,target) elif A[two_thirds]==target: return two_thirds elif A[two_thirds]>target: #search the middle third return trin_search(A,one_third+1,two_thirds-1,target) else: #search the right-hand third return trin_search(A,two_thirds+1,last,target) I'm trying to time them using the cprofile.run() method. I call: cprofile.run('trin_search(newlist, newlist[0], newlist[-1], 17)') and cprofile.run('bin_search(newlist, newlist[0], newlist[-1], 17)') with the results for the first: 6 function calls (4 primitive calls) in 0.000 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 :0(setprofile) 1 0.000 0.000 0.000 0.000 <string>:1(<module>) 3/1 0.000 0.000 0.000 0.000 Jan 18.py:16(trin_search) 0 0.000 0.000 profile:0(profiler) 1 0.000 0.000 0.000 0.000 profile:0(trin_search(newlist, newlist[0], newlist[-1], 17)) and the second 7 function calls (3 primitive calls) in 0.000 seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 <string>:1(<module>) 5/1 0.000 0.000 0.000 0.000 Jan 18.py:2(bin_search) 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} How is it possible that they take 0 time to operate? Cheers, Answer: As already pointed out by other use the `timeit` module, here's an example how to time a function with parameters: import timeit arg = 10 def foo(arg): return arg**arg t=timeit.Timer("foo(arg)","from __main__ import foo, arg") print t.timeit(5) Note that you have to import both the function and the variables that you are using in your function call. Also, I would suggest that you use [IPython](http://ipython.org/) where you have _"magic commands"_ so you can simply do `%timeit foo(arg)`. * * * For your example, this should work: t=timeit.Timer("bin_search(newlist, newlist[0], newlist[-1], 17)", "from __main__ import bin_search, newlist")
Parsing HTML with Python 2.7 Question: Evening folks (or morning depending on where you are :) ). I'm looking to parse a webpage which contains multiple segments similar to the below:- > <p><a name="Abercrombie"></a></p> <h3>Abercrombie Council</h3> <p>Mr > Billy Smith<br />The Managing Director<br />123 Jones Street, > London<br />T:02081234567<br /><a > href="mailto:[email protected]">Email</a></p> What I'm wishing to do is to capture the source code from the webpage and then parse through it extracting the unique info above and place this into rows in a tab delimited document with a new line at the end - splitting up the title, name of office, name of individual, job role, address, telephone number, email address. I've been looking at using BeautifulSoup but I'm just wondering if there are any other tools that are more suitable? Answer: I'd say BeautifulSoup would be your best and easiest option to parse pages or chunks of HTML. You can also try scrapy or even scraperwiki **Sample Usage for BS** import BeautifulSoup import urllib2 get = urllib2.urlopen('http://site.com').read() dom = BeautifulSoup.BeautifulSoup(get) data = dom.findAll('p', {'class' : 'address'}) # <p class='address'>....</p> for i in data: print i More examples: <http://www.crummy.com/software/BeautifulSoup/bs3/documentation.html>
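For the actual markup in the question (an `<h3>` per council followed by a `<p>` whose lines are separated by `<br />`), a rough sketch with the same BeautifulSoup 3 API would be:

    for h3 in dom.findAll('h3'):
        council = ''.join(h3.findAll(text=True)).strip()
        p = h3.findNextSibling('p')
        if p is None:
            continue
        fields = [s.strip() for s in p.findAll(text=True) if s.strip()]
        print '\t'.join([council] + fields)   # one tab-delimited row per council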
I would like to retrieve history words from an online dictionary webpage with Python 3.2 urllib Question: I am using [tureng](http://tureng.com) online dictionary for Turkish-English / English-Turkish translation and this webpage records word search history and I would like to retrieve these words with this code import urllib.request f = urllib.request.urlopen("http://tureng.com/history/details") text=f.read().decode('utf-8') #print(text) start = text.find("<span class=\"historyTerm\"><a href=\"/search/") I know how to parse the searched word afterwards however the problem is that while retrieving the page source with urllib, I can't see my searched words. It says "there are no history records for this day". So how can I solve this problem? If you view the page source after searching a few words, you can see [tureng history words](http://tureng.com/history/details) Here in my browser, I can see the searched words: ![turenginbrowser](http://i.stack.imgur.com/Dw5nb.png) Here in python: ![turenginpython](http://i.stack.imgur.com/1x2hm.png) Answer: When you're looking up words and translations, I suppose that you're using your web browser. The website you're using should store the lookup history inside a cookie or in the cache of your browser. When you're calling the URL using your Python script, it's the same as opening a new web browser with a clean session.
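If you want to reproduce the browser's behaviour in the script, one approach (a sketch for Python 3, untested; the search URL pattern is guessed from the `/search/` links in the page) is to keep a single cookie-aware opener, perform the searches through it, and then request the history page within the same session:

    import urllib.request, urllib.parse, http.cookiejar

    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

    # do some lookups first, so the session has a history
    for word in ('apple', 'book'):
        opener.open('http://tureng.com/search/' + urllib.parse.quote(word)).read()

    history = opener.open('http://tureng.com/history/details').read().decode('utf-8')
    print('historyTerm' in history)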
Python Opencv SolvePnP yields wrong translation vector Question: I am attempting to calibrate and find the location and rotation of a single virtual camera in Blender 3d using homography. I am using Blender so that I can double check my results before I move on to the real world where that is more difficult. I rendered ten pictures of a chess board in various locations and rotations in the view of my stationary camera. With opencv's python, I used cv2.calibrateCamera to find the intrinsic matrix from the detected corners of the chess board in the ten images and then used that in cv2.solvePnP to find the extrinsic parameters(translation and rotation). However, though the estimated parameters were close to the actual ones, there is something fishy going on. My initial estimation of the translation was (-0.11205481,-0.0490256,8.13892491). The actual location was (0,0,8.07105). Pretty close right? But when I moved and rotated the camera slightly and rerendered the images, the estimated translation became farther off. Estimated: (-0.15933154,0.13367286,9.34058867). Actual: (-1.7918,-1.51073,9.76597). The Z value is close, but the X and the Y are not. I am utterly confused. If anybody can help me sort through this, I would be highly grateful. Here is the code (it's based off of the python2 calibrate example supplied with opencv): #imports left out USAGE = ''' USAGE: calib.py [--save <filename>] [--debug <output path>] [--square_size] [<image mask>] ''' args, img_mask = getopt.getopt(sys.argv[1:], '', ['save=', 'debug=', 'square_size=']) args = dict(args) try: img_mask = img_mask[0] except: img_mask = '../cpp/0*.png' img_names = glob(img_mask) debug_dir = args.get('--debug') square_size = float(args.get('--square_size', 1.0)) pattern_size = (5, 8) pattern_points = np.zeros( (np.prod(pattern_size), 3), np.float32 ) pattern_points[:,:2] = np.indices(pattern_size).T.reshape(-1, 2) pattern_points *= square_size obj_points = [] img_points = [] h, w = 0, 0 count = 0 for fn in img_names: print 'processing %s...' % fn, img = cv2.imread(fn, 0) h, w = img.shape[:2] found, corners = cv2.findChessboardCorners(img, pattern_size) if found: if count == 0: #corners first is a list of the image points for just the first image. #This is the image I know the object points for and use in solvePnP corners_first = [] for val in corners: corners_first.append(val[0]) np_corners_first = np.asarray(corners_first,np.float64) count+=1 term = ( cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_COUNT, 30, 0.1 ) cv2.cornerSubPix(img, corners, (5, 5), (-1, -1), term) if debug_dir: vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) cv2.drawChessboardCorners(vis, pattern_size, corners, found) path, name, ext = splitfn(fn) cv2.imwrite('%s/%s_chess.bmp' % (debug_dir, name), vis) if not found: print 'chessboard not found' continue img_points.append(corners.reshape(-1, 2)) obj_points.append(pattern_points) print 'ok' rms, camera_matrix, dist_coefs, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, (w, h)) print "RMS:", rms print "camera matrix:\n", camera_matrix print "distortion coefficients: ", dist_coefs.ravel() cv2.destroyAllWindows() np_xyz = np.array(xyz,np.float64).T #xyz list is from file. 
Not shown here for brevity camera_matrix2 = np.asarray(camera_matrix,np.float64) np_dist_coefs = np.asarray(dist_coefs[:,:],np.float64) found,rvecs_new,tvecs_new = cv2.solvePnP(np_xyz, np_corners_first,camera_matrix2,np_dist_coefs) np_rodrigues = np.asarray(rvecs_new[:,:],np.float64) print np_rodrigues.shape rot_matrix = cv2.Rodrigues(np_rodrigues)[0] def rot_matrix_to_euler(R): y_rot = asin(R[2][0]) x_rot = acos(R[2][2]/cos(y_rot)) z_rot = acos(R[0][0]/cos(y_rot)) y_rot_angle = y_rot *(180/pi) x_rot_angle = x_rot *(180/pi) z_rot_angle = z_rot *(180/pi) return x_rot_angle,y_rot_angle,z_rot_angle print "Euler_rotation = ",rot_matrix_to_euler(rot_matrix) print "Translation_Matrix = ", tvecs_new Thank you so much Answer: I think you may be thinking of `tvecs_new` as the camera position. Slightly confusingly that is not the case! In fact its the position of the world origin in camera co-ords. To get the camera pose in the object/world co-ords, I believe you need to do: `-np.matrix(rotation_matrix).T * np.matrix(tvecs_new)` And you can get the Euler angles using `cv2.decomposeProjectionMatrix(P)[-1]` where `P` is the `[r|t]` 3 by 4 extrinsic matrix. I found [this](http://ksimek.github.com/2012/08/22/extrinsic/) to be a pretty good article about the intrinsics and extrinsics...
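Applied to the variables in the question, that would look roughly like (a sketch):

    R = np.matrix(rot_matrix)
    t = np.matrix(tvecs_new)
    camera_position = -R.T * t                        # camera centre in world/object coordinates
    print camera_position.ravel()

    P = np.hstack((rot_matrix, tvecs_new))            # 3x4 [R|t] extrinsic matrix
    euler_angles = cv2.decomposeProjectionMatrix(P)[-1]
    print euler_angles.ravel()                        # in degrees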
How to save "complete webpage" not just basic html using Python Question: I am using following code to save webpage using Python: import urllib import sys from bs4 import BeautifulSoup url = 'http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html' f = urllib.urlretrieve(url,'test.html') **Problem** : This code saves html as basic html without javascripts, images etc. I want to save webpage as complete (Like we have option in browser) **Update** : I am using following code now to save all the js/images/css files of webapge so that it can be saved as complete webpage but still my output html is getting saved like basic html: import pycurl import StringIO c = pycurl.Curl() c.setopt(pycurl.URL, "http://www.vodafone.de/privat/tarife/red-smartphone-tarife.html") b = StringIO.StringIO() c.setopt(pycurl.WRITEFUNCTION, b.write) c.setopt(pycurl.FOLLOWLOCATION, 1) c.setopt(pycurl.MAXREDIRS, 5) c.perform() html = b.getvalue() #print html fh = open("file.html", "w") fh.write(html) fh.close() Answer: Try emulating your browser with [selenium](http://pypi.python.org/pypi/selenium). This script will pop up the `save as` dialog for the webpage. You will still have to figure out how to emulate pressing enter for download to start as the file dialog is out of selenium's reach (how you do it is also OS dependent). from selenium import webdriver from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.keys import Keys br = webdriver.Firefox() br.get('http://www.google.com/') save_me = ActionChains(br).key_down(Keys.CONTROL)\ .key_down('s').key_up(Keys.CONTROL).key_up('s') save_me.perform() Also I think following _@Amber_ suggestion of grabbing the the linked resources may be a simpler, thus a better solution. Still, I think using selenium is a good starting point as `br.page_souce` will get you the entire dom along with the dynamic content generated by javascript.
clean urls issue using .htaccess in php laravel project Question: I am working on a php laravel project. I am currently facing issues with .htaccess file. I have following .htaccess <IfModule mod_rewrite.c> Options -MultiViews RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^ index.php [L] </IfModule> When I reload my page the it gave me following error 404 Not Found The requested URL /contacts was not found on this server. Then I opened `/etc/apache2/users/username.conf` file which had following line of code <Directory "/Users/username/Sites/"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> In above code I changed `AllowOverride None` to `AllowOverride All`. Then I reload page and got following error 403 Forbidden You don't have permission to access /contacts on this server. When I add `FollowSymLinks` to `.htaccess` file `Options` such as like this `Options -MultiViews FollowSymLinks`. Then sometimes I get this `500 Internal Server Error` error and sometime this `Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.`. Each time I reload my page one these errors with `FollowSymLinks` option. I also uncomment following lines in `/etc/apache2/httpd.conf` LoadModule rewrite_module libexec/apache2/mod_rewrite.so LoadModule php5_module libexec/apache2/libphp5.so and still I am getting same `permission denied error`. Please help me I am trying to solve this problem for past 3 days but it is till unresolved. **Update** Here is my entire `httpd.conf` file. # # Mac OS X / Mac OS X Server # The <IfDefine> blocks segregate server-specific directives # and also directives that only apply when Web Sharing or # server Web Service (as opposed to other services that need Apache) is on. # The launchd plist sets appropriate Define parameters. # Generally, desktop has no vhosts and server does; server has added modules, # custom virtual hosts are only activated when Web Service is on, and # default document root and personal web sites at ~username are only # activated when Web Sharing is on. # # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/2.2> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/2.2/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "log/foo_log" # with ServerRoot set to "/usr" will be interpreted by the # server as "/usr/log/foo_log". # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to point the LockFile directive # at a local disk. If you wish to share the same ServerRoot for multiple # httpd daemons, you will need to change at least LockFile and PidFile. 
# ServerRoot "/usr" # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 <IfDefine !MACOSXSERVER> Listen 80 </IfDefine> # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. # # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module libexec/apache2/mod_authn_file.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule cache_module libexec/apache2/mod_cache.so LoadModule disk_cache_module libexec/apache2/mod_disk_cache.so LoadModule dumpio_module libexec/apache2/mod_dumpio.so LoadModule reqtimeout_module libexec/apache2/mod_reqtimeout.so LoadModule ext_filter_module libexec/apache2/mod_ext_filter.so LoadModule include_module libexec/apache2/mod_include.so LoadModule filter_module libexec/apache2/mod_filter.so LoadModule substitute_module libexec/apache2/mod_substitute.so LoadModule deflate_module libexec/apache2/mod_deflate.so LoadModule log_config_module libexec/apache2/mod_log_config.so LoadModule log_forensic_module libexec/apache2/mod_log_forensic.so LoadModule logio_module libexec/apache2/mod_logio.so LoadModule env_module libexec/apache2/mod_env.so LoadModule mime_magic_module libexec/apache2/mod_mime_magic.so LoadModule cern_meta_module libexec/apache2/mod_cern_meta.so LoadModule expires_module libexec/apache2/mod_expires.so LoadModule headers_module libexec/apache2/mod_headers.so LoadModule ident_module libexec/apache2/mod_ident.so LoadModule usertrack_module libexec/apache2/mod_usertrack.so #LoadModule unique_id_module libexec/apache2/mod_unique_id.so LoadModule setenvif_module libexec/apache2/mod_setenvif.so LoadModule version_module libexec/apache2/mod_version.so LoadModule proxy_module libexec/apache2/mod_proxy.so LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so #LoadModule ssl_module libexec/apache2/mod_ssl.so LoadModule mime_module libexec/apache2/mod_mime.so LoadModule dav_module libexec/apache2/mod_dav.so LoadModule autoindex_module libexec/apache2/mod_autoindex.so LoadModule asis_module libexec/apache2/mod_asis.so LoadModule info_module libexec/apache2/mod_info.so LoadModule cgi_module libexec/apache2/mod_cgi.so LoadModule dav_fs_module libexec/apache2/mod_dav_fs.so LoadModule vhost_alias_module libexec/apache2/mod_vhost_alias.so LoadModule negotiation_module libexec/apache2/mod_negotiation.so LoadModule dir_module libexec/apache2/mod_dir.so LoadModule imagemap_module libexec/apache2/mod_imagemap.so LoadModule actions_module libexec/apache2/mod_actions.so LoadModule speling_module libexec/apache2/mod_speling.so LoadModule alias_module libexec/apache2/mod_alias.so LoadModule rewrite_module libexec/apache2/mod_rewrite.so LoadModule php5_module libexec/apache2/libphp5.so #Apple specific modules LoadModule apple_userdir_module libexec/apache2/mod_userdir_apple.so LoadModule bonjour_module libexec/apache2/mod_bonjour.so <IfDefine !MACOSXSERVER> 
LoadModule authn_dbm_module libexec/apache2/mod_authn_dbm.so LoadModule authn_anon_module libexec/apache2/mod_authn_anon.so LoadModule authn_dbd_module libexec/apache2/mod_authn_dbd.so LoadModule authn_default_module libexec/apache2/mod_authn_default.so LoadModule auth_basic_module libexec/apache2/mod_auth_basic.so LoadModule auth_digest_module libexec/apache2/mod_auth_digest.so LoadModule authz_groupfile_module libexec/apache2/mod_authz_groupfile.so LoadModule authz_user_module libexec/apache2/mod_authz_user.so LoadModule authz_dbm_module libexec/apache2/mod_authz_dbm.so LoadModule authz_owner_module libexec/apache2/mod_authz_owner.so LoadModule authz_default_module libexec/apache2/mod_authz_default.so LoadModule mem_cache_module libexec/apache2/mod_mem_cache.so LoadModule dbd_module libexec/apache2/mod_dbd.so LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so LoadModule status_module libexec/apache2/mod_status.so </IfDefine> <IfDefine MACOSXSERVER> LoadModule hfs_apple_module libexec/apache2/mod_hfs_apple.so #LoadModule auth_digest_apple_module libexec/apache2/mod_auth_digest_apple.so #LoadModule encoding_module libexec/apache2/mod_encoding.so #LoadModule jk_module libexec/apache2/mod_jk.so LoadModule apple_auth_module libexec/apache2/mod_auth_apple.so LoadModule spnego_auth_module libexec/apache2/mod_spnego_apple.so LoadModule apple_digest_module libexec/apache2/mod_digest_apple.so #LoadModule python_module libexec/apache2/mod_python.so #LoadModule xsendfile_module libexec/apache2/mod_xsendfile.so LoadModule apple_status_module libexec/apache2/mod_status_apple.so </IfDefine> # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User _www Group _www # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:80 <IfDefine MACOSXSERVER> DocumentRoot /var/empty <IfModule mod_auth_digest_apple.c> BrowserMatch "MSIE" AuthDigestEnableQueryStringHack=On </IfModule> <IfModule mod_headers.c> Header add MS-Author-Via "DAV" RequestHeader set X_FORWARDED_PROTO 'https' env=https RequestHeader set X_FORWARDED_PROTO 'http' env=!https </IfModule> <IfModule mod_encoding.c> EncodingEngine on NormalizeUsername on DefaultClientEncoding UTF-8 # Windows XP? 
AddClientEncoding "Microsoft-WebDAV-MiniRedir/" MSUTF-8 # Windows 2K SP2 with .NET AddClientEncoding "(Microsoft .* DAV\$)" MSUTF-8 # Windows 2K SP2/Windows XP AddClientEncoding "(Microsoft .* DAV 1.1)" CP932 # Windows XP? AddClientEncoding "Microsoft-WebDAV*" CP932 # RealPlayer AddClientEncoding "RMA/*" CP932 # MacOS X webdavfs AddClientEncoding "WebDAVFS" UTF-8 # cadaver AddClientEncoding "cadaver/" EUC-JP </IfModule> <Directory /usr/share/web> AllowOverride None Options MultiViews FollowSymlinks Order allow,deny Allow from all Header Set Cache-Control no-cache </Directory> Alias /webmail /usr/share/web/webmail.html Alias /changepassword /usr/share/web/changepassword.html Alias /profilemanager /usr/share/web/profilemanager.html Alias /webcal /usr/share/web/webcal.html </IfDefine> <IfDefine !MACOSXSERVER> <IfDefine WEBSHARING_ON> # # DocumentRoot: The directory out of which you will serve your # documents. By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/Library/WebServer/Documents" # # Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # This should be changed to whatever you set DocumentRoot to. # <Directory "/Library/WebServer/Documents"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/2.2/mod/core.html#options # for more information. # Options Indexes FollowSymLinks MultiViews # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # AllowOverride None # # Controls who can get stuff from this server. # Order allow,deny Allow from all </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> DirectoryIndex index.html </IfModule> </IfDefine> </IfDefine> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <FilesMatch "^\.([Hh][Tt]|[Dd][Ss]_[Ss])"> Order allow,deny Deny from all Satisfy All </FilesMatch> # # Apple specific filesystem protection. # <Files "rsrc"> Order allow,deny Deny from all Satisfy All </Files> <DirectoryMatch ".*\.\.namedfork"> Order allow,deny Deny from all Satisfy All </DirectoryMatch> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. 
# ErrorLog "/private/var/log/apache2/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combinedvhost LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%v %h %l %u %t \"%r\" %>s %b" commonvhost <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinediovhost </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "/private/var/log/apache2/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "/private/var/log/apache2/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAliasMatch ^/cgi-bin/((?!(?i:webobjects)).*$) "/Library/WebServer/CGI-Executables/$1" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock /private/var/run/cgisock </IfModule> # # "/Library/WebServer/CGI-Executables" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/Library/WebServer/CGI-Executables"> AllowOverride None Options None Order allow,deny Allow from all </Directory> # # DefaultType: the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. 
# DefaultType text/plain <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig /private/etc/apache2/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # #AddType text/html .shtml #AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile /private/etc/apache2/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall is used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. # #EnableMMAP off #EnableSendfile off TraceEnable off # Supplemental configuration # # The configuration files in the /private/etc/apache2/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM prefork specific) StartServers 1 MinSpareServers 1 MaxSpareServers 1 # ServerLimit and MaxClients support n% syntax which sets them to a # fraction of the current RLIMIT_NPROC limit. ServerLimit 50% MaxClients 50% ListenBackLog 512 MaxRequestsPerChild 100000 # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 15 # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. 
# MaxKeepAliveRequests 100 # UseCanonicalName: Determines how Apache constructs self-referencing # URLs and the SERVER_NAME and SERVER_PORT variables. # When set "Off", Apache will use the Hostname and Port supplied # by the client. When set "On", Apache will use the value of the # ServerName directive. # UseCanonicalName Off # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minor | Minimal | Major | Prod # where Full conveys the most information, and Prod the least. # ServerTokens Full # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # ServerSignature On # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # PidFile: The file in which the server should record its process # identification number when it starts. PidFile /var/run/httpd.pid # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
LockFile "/private/var/log/apache2/accept.lock" <IfModule mod_rewrite.c> RewriteLock /var/log/apache2/rewrite.lock </IfModule> # Language settings Include /private/etc/apache2/extra/httpd-languages.conf <IfDefine WEBSHARING_ON> # Multi -language error messages #Include /private/etc/apache2/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include /private/etc/apache2/extra/httpd-autoindex.conf # User home directories Include /private/etc/apache2/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include /private/etc/apache2/extra/httpd-info.conf # Virtual hosts Include /private/etc/apache2/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual Include /private/etc/apache2/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include /private/etc/apache2/extra/httpd-dav.conf </IfDefine> # Secure (SSL/TLS) connections <IfDefine !MACOSXSERVER> #Include /private/etc/apache2/extra/httpd-ssl.conf </IfDefine> <IfDefine MACOSXSERVER> <IfModule mod_ssl.c> SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown SSLPassPhraseDialog exec:/etc/apache2/getsslpassphrase SSLSessionCache shmcb:/var/run/ssl_scache(512000) SSLSessionCacheTimeout 300 SSLMutex file:/var/run/ssl_mutex SSLRandomSeed startup builtin SSLRandomSeed connect builtin AddType application/x-x509-ca-cert crt AddType application/x-pkcs7-crl crl </IfModule> </IfDefine> <IfModule mod_jk.c> JKWorkersFile /etc/apache2/workers.properties JKLogFile /var/log/apache2/mod_jk.log JkShmFile /var/log/apache2/jk-runtime-status </IfModule> <IfModule php5_module> AddType application/x-httpd-php .php AddType application/x-httpd-php-source .phps <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> </IfModule> <IfDefine !MACOSXSERVER> Include /etc/apache2/other/*.conf </IfDefine> <IfDefine MACOSXSERVER> <IfDefine WEBSERVICE_ON> Include /etc/apache2/sites/*.conf </IfDefine> <IfDefine !WEBSERVICE_ON> Include /etc/apache2/sites/virtual_host_global.conf Include /etc/apache2/sites/*_.conf Include /etc/apache2/sites/*__shadow.conf </IfDefine> </IfDefine> **Update 2** Error log is below [error log link](https://gist.github.com/4640328) Answer: Use the followings in your `/etc/apache2/users/username.conf` file: <Directory "C:/Developments/Projects/php/laravel/t-ravel/public"> Options Indexes FollowSymLinks Includes ExecCGI AllowOverride All Require all granted </Directory>
How to write the time (local system time) to a text file or log file using a Python script?
Question: I have to write a test case; for that I am using Sikuli, which runs a Python script. I am not able to write the local system time to a text file.

    import time
    localtime = time.localtime(time.time())
    inp = file("C:\\Users\\%path%\\Log.txt", 'w')
    inp.write('************** Full Process ****************\n')
    inp.write('Local current time :', localtime)  # incorrect

Here I am creating a .txt file, and I also have to write the time, but I don't know how to write that part of the code.
Answer: That's pretty much correct, but you have an error when writing the time:

    inp.write('Local current time :', localtime)

`write()` takes a single string, so if you want to format a string like that you need to use the `%` operator:

    inp.write('Local current time : %s' % localtime)

Furthermore, writing the raw time tuple produces a very unreadable string. You usually want the date in a more convenient form, for example YYYY/MM/DD - HH:MM:SS. You do that with `time.strftime()` (note that the struct returned by `time.localtime()` has no `strftime` method of its own; pass it as the second argument):

    time.strftime('%Y/%m/%d - %H:%M:%S', localtime)

So your piece of code becomes:

    import time

    localtime = time.localtime(time.time())
    timestring = time.strftime('%Y/%m/%d - %H:%M:%S', localtime)

    inp = open("Log.txt", 'w')
    inp.write('************** Full Process ****************\n')
    inp.write('Local current time : %s\n' % timestring)
    inp.close()
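If the test only needs a timestamped line, an equivalent sketch (my variant, not part of the original answer) uses the datetime module and a with block so the file is closed automatically:

    from datetime import datetime

    timestring = datetime.now().strftime('%Y/%m/%d - %H:%M:%S')
    with open('Log.txt', 'w') as log:
        log.write('************** Full Process ****************\n')
        log.write('Local current time : %s\n' % timestring)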
Python calling DLL calling Python, "WindowsError: exception: access violation reading 0x00000004" Question: I have an application written in Python. The application calls some functions in the dll (using ctypes) that calls some functions from the python C API to load and run some functions in a (different) python module. This causes `WindowsError: exception: access violation reading 0x00000004` Some cout debugging tells me that the access violation happens on a call to the Python C API. I know the DLL successfully loads the python module, running to completion when I test it as a stand alone .exe but when it is run from the python application it gives this access violation error. I first though that maybe I should use Py_NewInterpreter in the DLL to create a new separate space for these python module functions to run. This would seem convienent as there is no need for the application and the modules runs by the DLL to share any data. <http://docs.python.org/2/c-api/init.html#Py_NewInterpreter> However my initial tests with Py_NewInterpreter have proved unsuccessful simply making further errors and crashing. So my question is how should I implement this/what do I need to do make the DLL load of the python modules work? Answer: This looks like dereferencing a null pointer, i.e. a pointer to a structure/class is null and you are reading a subobject at offset 4. Sprinkle a few assert()s throughout your code to make sure you don't miss any null pointer. Note also that the Python C API regularly returns null in case of an exception. That said, you have a Python program importing a DLL via ctypes. This DLL in turn embeds a Python interpreter via Python's C API. Right? If I'm not mistaken, that effectively gives you two interpreter instances in one process. The problem there is that any global objects are shared between the two and that this might not be coordinated. What I would try is to write a proper Python (wrapper) module instead of using ctypes to import the DLL. However, there's another thing that I would do in parallel and that is to ask on the Python user mailinglist, whether this setup can work or not and why. There is also a news gateway in case you prefer that.
what's the PYTHONPATH when there is no PYTHONPATH? Question: I need to add a new directory location to my `PYTHONPATH`, but the problem is I'm on a clean, newly-installed system (Linux) where no `PYTHONPATH` has yet been defined. I've read about and used `PYTHONPATH` and I thought I understood it quite well, but I do not know what's happening when no `PYTHONPATH` yet exists. I can't append to something that doesn't exist, but I want all important libraries currently found to still work, so being cautious, from within Python I did `print str(sys.path)` to get all the standard values. Then I defined an `env`-variable for `PYTHONPATH` including all the nodes I had just found, plus my new directory. But wow did a lot of stuff stop working! Python is so messed up with the new `env`-variable that I had to remove it, at which point everything worked again. With the bad `PYTHONPATH` the system was so confused it couldn't even find an error message to display when an incorrect command was typed in at the prompt. My problem is not something simple like a missing colon, or using semi-colons when I should use colons; I checked. Also my new directory doesn't cause the problem because even without the new node the problems still occur. So can anyone explain why this approach doesn't work? %%%%%%%%%%%%%%%%%%%%%%%%%%%%% Below I provide extra details as requested, but one need not read any futher, I think the problem is fixed. The explanation that the nodes listed in PYTHONPATH do not override all "standard" nodes but rather become new, additional entries (prepended I believe, so one can control what comes first) was the key. Starting from scratch, defining no PYTHONHOME or PYTHONPATH, results in this from within Python: print ':'.join(sys.path) :/usr/lib/python2.7:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages/PIL:/usr/lib/python2.7/dist-packages/gst-0.10:/usr/lib/python2.7/dist-packages/gtk-2.0:/usr/lib/python2.7/dist-packages/ubuntu-sso-client Using this as a PYTHONPATH (i.e., defining an env-variable before invoking Python), results in a very poorly functioning command prompt, even without explicitly using Python. For example: $> export PYTHONPATH='/usr/lib/python2.7:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages/PIL:/usr/lib/python2.7/dist-packages/gst-0.10:/usr/lib/python2.7/dist-packages/gtk-2.0:/usr/lib/python2.7/dist-packages/ubuntu-sso-client' $> echo $PYTHONPATH /usr/lib/python2.7:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages/PIL:/usr/lib/python2.7/dist-packages/gst-0.10:/usr/lib/python2.7/dist-packages/gtk-2.0:/usr/lib/python2.7/dist-packages/ubuntu-sso-client $> IntentionalBadCommand Fatal Python error: Py_Initialize: Unable to get the locale encoding File "/usr/lib/python2.7/encodings/__init__.py", line 123 raise CodecRegistryError,\ ^ SyntaxError: invalid syntax Aborted The mistake was thinking that PYTHONPATH needs to contain the whole universe of everything needed. Yes, I did RTFM before posting, but I guess I missed the significance of the beginning word "Augment". 
So, taking the advice that not everything needs to be explicitly specified -- that one can just specify the extra additions one wants -- I tried:

    $> export PYTHONPATH=/usr/lib/python2.7/dist-packages/postgresql-pkg
    $> echo $PYTHONPATH
    /usr/lib/python2.7/dist-packages/postgresql-pkg
    $> IntentionalBadCommand
    IntentionalBadCommand: command not found

So it seems to be working, though I haven't yet tried to use the postgresql package mentioned above. Still, it's a little mysterious why prepending an abundance of unnecessary nodes to the PYTHONPATH would make things break as badly as it did, especially since I got the entries from what should be a reliable source: sys.path. But anyway, it's probably solved, so THANKS!
Answer: It's not clear what your problem could be, but note that you don't need to add the default value of sys.path to your PYTHONPATH variable. The directories you put in PYTHONPATH are additional directories to search; the system defaults are appended after your PYTHONPATH entries. In other words, roughly speaking:

    sys.path = os.environ['PYTHONPATH'].split(":") + sys.path

Showing the exact value of PYTHONPATH and the resultant errors would help us determine the problem.
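A quick way to see that augmentation in action (using a hypothetical extra directory /opt/mylibs) is to compare sys.path with and without the variable set; the standard-library directories stay present either way:

    import os, sys

    print os.environ.get('PYTHONPATH')              # e.g. '/opt/mylibs', or None when unset
    for p in sys.path[:4]:
        print p                                      # PYTHONPATH entries show up near the front
    print any('python2.7' in p for p in sys.path)    # the standard directories are still searched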
Why is Twisted Manhole's ConnectionDone an error?
Question: I'm using twisted manhole (https://github.com/HoverHell/pyaux/blob/master/pyaux/runlib.py#L126), and I also send errors caught by Twisted into Python logging (https://github.com/HoverHell/pyaux/blob/master/pyaux/twisted_aux.py#L9). However, as a result, the log gets ConnectionDone() errors, which isn't a very interesting thing to log as an error. What would be appropriate to change to avoid getting this (and, possibly, some other) not-exactly-errors? Filtering for twisted.python.failure.Failure cases, perhaps? And where is the ConnectionDone() even raised from, and why?
Answer: A ConnectionDone() instance is passed to the connectionLost() callback after the connection has been closed. You see it whenever the client side decides to close the connection. You definitely don't want to filter Failure objects out wholesale. You can think of a failure as the "asynchronous analogue" of an exception. The usual way to avoid seeing this particular kind of exception is something like:

    from twisted.internet import error
    ...

    def connectionLost(self, reason):
        if reason.check(error.ConnectionDone):
            # this is normal, ignore this
            pass
        else:
            # do whatever you have been doing for logging
Run multiple python functions using flask Question: **2nd UPDATE** Almost there!! But getting a "ValueError: Attempting to use a port that is not open" > > File "c:\Python27\lib\site-packages\flask\app.py", line 1701, in > __call__ > return self.wsgi_app(environ, start_response) > > File "c:\Python27\lib\site-packages\flask\app.py", line 1689, in > wsgi_app > response = self.make_response(self.handle_exception(e)) > > File "c:\Python27\lib\site-packages\flask\app.py", line 1687, in > wsgi_app > response = self.full_dispatch_request() > > File "c:\Python27\lib\site-packages\flask\app.py", line 1360, in > full_dispatch_request > rv = self.handle_user_exception(e) > > File "c:\Python27\lib\site-packages\flask\app.py", line 1358, in > full_dispatch_request > rv = self.dispatch_request() > > File "c:\Python27\lib\site-packages\flask\app.py", line 1344, in > dispatch_request > return self.view_functions[rule.endpoint](**req.view_args) > > File "G:\OverAir\arduino\server.py", line 19, in light_off > board.output([pin]) > > File "G:\OverAir\arduino\arduino.py", line 16, in output > self.__sendData(len(pinArray)) > > File "G:\OverAir\arduino\arduino.py", line 56, in __sendData > while(self.__getData()[0] != "w"): > > File "G:\OverAir\arduino\arduino.py", line 61, in __getData > return self.serial.readline().rstrip('\n') > > File "c:\Python27\lib\site-packages\serial\serialwin32.py", line 221, in > read > if not self.hComPort: raise portNotOpenError > > ValueError: Attempting to use a port that is not open > > > UPDATE > > The code from @Blender answer below is probably correct. The issue now I am > having is I'm getting **"SerialException: could not open port COM5: [Error > 5] Access is denied."** > > However, if I run my original, on.py or off.py script individually, it works > fine...I don't see how the new code is not working...unless does it try to > open COM5 twice?? > > END UPDATE I trying to run multiple python functions, or separate .py scripts, using Flask; in order to generate separate URLs. Just trying to turn the Arduino ON/OFF running it locally. For example, I have on.py and off.py and want to build sever.py. So, when I open 127.0.0.1:5000/on it runs on.py, similarly 127.0.0.1:5000/off runs off.py, After a few hours, I just decided to add both .py functions into the server.py, but of course it doesn't work...I think I just don't understand the parameters usage. Really thank you in advance! ## Open a serial connection with Arduino. from time import sleep from arduino import Arduino from flask import Flask b = Arduino("COM5",9600) pin = 13 num = 5 ON = Flask("Light_ON") OFF = Flask("Light_OFF") def LightON(): #declare output pins as a list/tuple b.output([pin]) b.setHigh(pin) b.close() return 'Light is ON' def LightOFF(): #declare output pins as a list/tuple b.output([pin]) b.setLow(pin) return 'Light is OFF' @ON.route('/ON') @OFF.route('/OFF') ON.run() OFF.run() //I know this doesn't run, just not sure how to include it. Answer: Something like this should work: from arduino import Arduino from flask import Flask app = Flask('light_control') board = Arduino('COM5', 9600) pin = 13 @app.route('/on') def light_on(): board.output([pin]) board.setHigh(pin) board.close() return 'Light is ON' @app.route('/off') def light_off(): board.output([pin]) board.setLow(pin) # board.close() ??? return 'Light is OFF' if __name__ == '__main__': app.debug = True app.run()
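One observation on the two updates (my reading, not part of the answer above): both errors point at the serial port rather than at Flask. "Access is denied" on COM5 typically means the port is already open in another process -- with debug=True the Werkzeug reloader starts a second process that imports the module and opens COM5 again -- and "Attempting to use a port that is not open" follows from board.close() being called inside light_on(), which leaves every later request talking to a closed port. A sketch with those two things changed, assuming the same Arduino wrapper API used in the question:

    from arduino import Arduino
    from flask import Flask

    app = Flask('light_control')
    board = Arduino('COM5', 9600)   # open once, keep open for the lifetime of the server
    pin = 13

    @app.route('/on')
    def light_on():
        board.output([pin])
        board.setHigh(pin)
        return 'Light is ON'

    @app.route('/off')
    def light_off():
        board.output([pin])
        board.setLow(pin)
        return 'Light is OFF'

    if __name__ == '__main__':
        # use_reloader=False avoids a second process grabbing COM5
        app.run(debug=True, use_reloader=False)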
Why would anyone check 'x in list'? Question: In Python one can very easily check if a value is contained in a container by using the `in`-operator. I was wondering why anyone would ever use the `in`-operator on a list, though, when it's much more efficient to first transform the list to a set as such: if x in [1,2,3]: as opposed to if x in set([1,2,3]): When looking at the [time complexity](http://wiki.python.org/moin/TimeComplexity), the first one has O(n) while the second one is superior at O(1). Is the only reason to use the first one the fact that it's more readable and shorter to write? Or is there a special case in which it's more practical to use? Why did the Python devs not implement the first one by first translating it to the second one? Would this not grand both of them the O(1) complexity? Answer: if x in set([1,2,3]): is _not_ faster than if x in [1,2,3]: Converting a list to a set requires iterating over the list, and is thus at least `O(n)` time.* In practice it takes a lot longer than searching for an item, since it involves hashing and then inserting every item. Using a set is efficient when the set is converted once and then checked _multiple times._ Indeed, trying this by searching for `500` in the list `range(1000)` indicates that the tradeoff occurs once you are checking at least 3 times: import timeit def time_list(x, lst, num): for n in xrange(num): x in lst def time_turn_set(x, lst, num): s = set(lst) for n in xrange(num): x in s for num in range(1, 10): size = 1000 setup_str = "lst = range(%d); from __main__ import %s" print num, print timeit.timeit("time_list(%d, lst, %d)" % (size / 2, num), setup=setup_str % (size, "time_list"), number=10000), print timeit.timeit("time_turn_set(%d, lst, %d)" % (size / 2, num), setup=setup_str % (size, "time_turn_set"), number=10000) gives me: 1 0.124024152756 0.334127902985 2 0.250166893005 0.343378067017 3 0.359009981155 0.356444835663 4 0.464100837708 0.38081407547 5 0.600295066833 0.34722495079 6 0.692923069 0.358560085297 7 0.787877082825 0.338326931 8 0.877299070358 0.344762086868 9 1.00078821182 0.339591026306 Tests with list sizes ranging from 500 to 50000 give roughly the same result. * Indeed, in the true asymptotic sense inserting into a hash table (and, for that matter, checking a value) is not `O(1)` time, but rather a constant speedup of linear `O(n)` time (since if the list gets too large collisions will build up). That would make the `set([1,2,3])` operation be in `O(n^2)` time rather than `O(n)`. However, in practice, with reasonable sized lists with a good implementation, you can basically always assume insertion and lookup of a hash table to be `O(1)` operations.
Could not find platform independent libraries <prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Question: I'm really new to Python and Django. What I'm trying to do is:

1. Install Python 2.7 on Mac OS 10.6.8
2. Install pip
3. Install Django
4. Install virtualenvwrapper
5. Create a virtual environment
6. Install Django-CMS

I think I'll be OK from the virtualenvwrapper installation through the Django-CMS installation because I have already done that before, but I ran into trouble with the first steps. I downloaded Python 2.7 from python.org -- the **Python 2.7.3 Mac OS X 64-bit/32-bit x86-64/i386 Installer** (for Mac OS X 10.6 and later) -- and installed it with the wizard. That created a directory `/System/Library/Frameworks/Python.framework/Versions` with my `2.7` folder inside. My directory `/System/Library/Python` is empty. I'm sure Python is installed because:

    python --version
    Python 2.7.3

but when I try `easy_install pip` it gives me:

    Could not find platform independent libraries <prefix>
    Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
    'import site' failed; use -v for traceback
    Traceback (most recent call last):
      File "/usr/bin/easy_install-2.6", line 7, in <module>
        from pkg_resources import load_entry_point
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.py", line 16, in <module>
        import sys, os, zipimport, time, re, imp, new
    ImportError: No module named os

I have no idea what this means, so if somebody could help me out and point me in the direction where I can install my virtualenvwrapper, I can take it from there. I'm a junior front-end developer who has never touched the back end, so please be specific and explain what I need to do as if you were speaking to a child.
Answer: You seem to have things turned around. Virtualenv creates a python environment that encapsulates a python install. So you want to do the following:

1. Install python
2. Create a virtualenv using _that_ version of python (e.g. virtualenv --python="path to python in 1" _virt_)
3. Switch to that virtualenv (workon _virt_)
4. Now install Django, etc. inside of the virtualenv _virt_

[Here is the recipe](http://www.tlswebsolutions.com/mac-os-x-lion-setting-up-django-pip-virtualenv-and-homebrew/) I used to get my environment set up and running. Are you using [homebrew](http://mxcl.github.com/homebrew/)? I've found that's the most reliable way to get stuff on the mac.
Reading CSV - Beginner
Question: I have been trying to read a csv file from my desktop and have not been successful. I checked my current working directory and it points to my desktop, so that doesn't seem to be the issue. Below is the code I used and the error output that I received. I am using Python 3.2.3.

    import csv
    reader = csv.reader(open(name.csv, mode = 'r'))
    for row in reader:
        print (row)

Here is my result:

> Traceback (most recent call last):
> File "C:/Users/User Name/Desktop/FileName.py", line 2, in
> reader = csv.reader(open(name.csv, mode = 'r'))
> NameError: name 'Beta' is not defined

Help? Thanks!
Answer: The NameError comes from the filename not being quoted: `open(name.csv, ...)` makes Python look up a variable with that name (in your actual file it was called Beta) instead of opening a file. Pass the filename as a string, and since you are on Python 3, use the `print()` function and open the file with `newline=''` as the csv docs recommend:

    import csv

    with open('name.csv', 'r', newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter=',')
        for row in reader:
            print(row)
python mechanize submitting form kicks me back to root Question: i have a mechanize python script written for submitting forms to inquire drug information. and when i run it, it gives me no error message, but when i look at the response, it's not what I see on my browser view-source page. i checked the urls after the submission: here's what I got: <http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm> here's addresses I'm supposed to get: <http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm?fuseaction=Search.DrugDetails> I see that the second url does not contain my query text, does that mean i need cookies? if so, how? this is my code snippet: br = mechanize.Browser() cj = cookielib.LWPCookieJar() br.set_cookiejar(cj) .... br.addheaders = [('User-agent', 'Mozilla/6.0 (X11; U; i686; en-US; rv:1.9.0.1) Gecko/2008071615 OS X 10.2 Firefox/3.0.1')] fda_url2 = 'http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm?fuseaction=Search.Addlsearch_drug_name' print br.open(fda_url2).geturl() for f in br.forms(): print 'this is a form' print f br.select_form('searchoptionB') br.form['ApplNo'] = '018780' html = br.submit(name = 'Search_Button') print html.geturl() the print form output was: <searchoptionB POST http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm application/x-www-form-urlencoded <HiddenControl(fuseaction=Search.SearchAction) (readonly)> <HiddenControl(SearchType=AddlSearch) (readonly)> <HiddenControl(SearchOption=B) (readonly)> <TextControl(ApplNo=)> <SubmitControl(Search_Button=Submit) (readonly)> <SubmitControl(clearcriteria=Clear) (readonly)>> sorry for the long post ;p Answer: **UPD** Regarding your comment. Here's my test file: #!/usr/bin/env python import mechanize import cookielib br = mechanize.Browser() cj = cookielib.LWPCookieJar() br.set_cookiejar(cj) br.addheaders = [('User-agent', 'Mozilla/6.0 (X11; U; i686; en-US; rv:1.9.0.1) Gecko/2008071615 OS X 10.2 Firefox/3.0.1')] fda_url2 = 'http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm?fuseaction=Search.Addlsearch_drug_name' print br.open(fda_url2).geturl() for f in br.forms(): print 'this is a form' print f br.select_form('searchoptionB') br.form['ApplNo'] = '018780' html = br.submit() print br.response().read() And this is what I get when running it: $ ./test.py | grep HUM <td valign="top" class="product_table" height="24"><a href="index.cfm?fuseaction=Search.Overview&DrugName=HUMULIN%20R"> HUMULIN R INSULIN RECOMBINANT HUMAN <td valign="top" class="product_table" height="24"><a href="index.cfm?fuseaction=Search.Overview&DrugName=HUMULIN%20R%20PEN"> HUMULIN R PEN INSULIN RECOMBINANT HUMAN Maybe you've somehow changed your script or query a different page? * * * Have you tried to submit this form in a browser? If I navigate to [this URL](http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm?fuseaction=Search.Addlsearch_drug_name), fill in "Option B" with "018780" and hit "Submit", the browser indeed redirects me to <http://www.accessdata.fda.gov/scripts/cder/drugsatfda/index.cfm>, which contains the search results. Try adding this to the end of your code snippet: print br.response().read() This will output the HTML of the page, and it does contain search results as expected. > I see that the second url does not contain my query text, does that mean i > need cookies? if so, how? This form is sent via POST, and in this case all parameters are embedded in request body (RFC 2616).
Jira-Python - jira.client import error
Question: I installed jira-python as described in the docs

    $ pip install jira-python

but after installation, when I try to run the example:

    from jira.client import JIRA

    options = {
        'server': 'https://jira.atlassian.com'
    }
    jira = JIRA(options)

    projects = jira.projects()
    keys = sorted([project.key for project in projects])[2:5]

    issue = jira.issue('JRA-1330')

    import re
    atl_comments = [comment for comment in issue.fields.comment.comments
                    if re.search(r'@atlassian.com$', comment.author.emailAddress)]

    jira.add_comment(issue, 'Comment text')
    issue.update(summary="I'm different!", description='Changed the summary to be different.')
    issue.delete()

I get the following error:

    Traceback (most recent call last):
      File "jira.py", line 4, in <module>
        from jira.client import JIRA
      File "/home/ubuntu/jira.py", line 4, in <module>
        from jira.client import JIRA
    ImportError: No module named client

Any idea about the problem here? I tried it on an Amazon instance as well, but got the same problem...
Answer: Seems like the reason was that my test file was named jira.py, so the import picked up my own script (shadowing the installed jira package) rather than jira.client -- both paths in the traceback point at jira.py. Renaming the test file fixed it. :) Thanks for your help, Inbar!
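A quick check for this kind of shadowing (a generic tip, not something from the original thread) is to ask Python where the module actually comes from:

    import jira
    print jira.__file__   # should point into site-packages, not at your own jira.py / jira.pyc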
How to make mean shift clustering work for more then five clusters? Question: I am having troubles with mean shift clustering . It works very fast and outputs correct results when clusters number is small (2, 3, 4) but when clusters number increases it fails. For example 3 clusters are detected fine: ![cluster success](http://img- fotki.yandex.ru/get/6436/39672792.e/0_7c349_e4746d7f_orig) But when number increases it fails: ![cluster centers fail](http://img- fotki.yandex.ru/get/6442/39672792.e/0_7c347_b5c0cb83_orig) ![clusters fail](http://img-fotki.yandex.ru/get/4137/39672792.e/0_7c348_93ec87d5_orig) Here is complete code listing: #!/usr/bin/env python import sys import logging import numpy as np import matplotlib matplotlib.use('Agg') import matplotlib.pyplot as plot from sklearn.cluster import estimate_bandwidth, MeanShift, get_bin_seeds from sklearn.datasets.samples_generator import make_blobs def test_mean_shift(): logging.debug('Generating mixture') count = 5000 blocks = 7 std_error = 0.5 mixture, clusters = make_blobs(n_samples=count, centers=blocks, cluster_std=std_error) logging.debug('Measuring bendwith') bandwidth = estimate_bandwidth(mixture) logging.debug('Bandwidth: %r' % bandwidth) mean_shift = MeanShift(bandwidth=bandwidth) logging.debug('Clustering') mean_shift.fit(mixture) shifted = mean_shift.cluster_centers_ guess = mean_shift.labels_ logging.debug('Centers: %r' % shifted) def draw_mixture(mixture, clusters, output='mixture.png'): plot.clf() plot.scatter(mixture[:, 0], mixture[:, 1], c=clusters, cmap=plot.cm.coolwarm) plot.savefig(output) def draw_mixture_shifted(mixture, shifted, output='mixture_shifted.png'): plot.clf() plot.scatter(mixture[:, 0], mixture[:, 1], c='r') plot.scatter(shifted[:, 0], shifted[:, 1], c='b') plot.savefig(output) logging.debug('Drawing') draw_mixture_shifted(mixture, shifted) draw_mixture(mixture, guess) if __name__ == '__main__': logging.basicConfig(level=logging.DEBUG) test_mean_shift() What am I doing wrong? Answer: You probably have to choose a smaller bandwidth. I am not very familiar with the way the bandwidth is chosen by the heuristic. So the "problem" here is with the heuristic, not the actual algorithm.
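One concrete knob to try (my suggestion, building on the answer above) is the quantile argument of estimate_bandwidth, which directly controls how small the estimated bandwidth is; a smaller quantile keeps many tight blobs from being merged. With mixture as generated in test_mean_shift():

    from sklearn.cluster import estimate_bandwidth, MeanShift

    # default is quantile=0.3; with many blobs a smaller value gives a smaller bandwidth
    bandwidth = estimate_bandwidth(mixture, quantile=0.1, n_samples=1000)
    mean_shift = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    mean_shift.fit(mixture)
    print len(mean_shift.cluster_centers_)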
Using requests module to export csv Question: I'm a beginner to Python and I have been trying to export my data to a csv file but I can't figure out how to get rid of all the brackets and my comma separated. Ideally I need two columns: one with all values for "count" and one with values for "month". Any advice appreciated. My code: from sunlight import capitolwords import requests import csv r = requests.get ('http://capitolwords.org/api/1/dates.json?phrase=guns&start_date=2011- 12-01&end_date=2013-01- 15&granularity=month&sort=count&apikey=ab02633fb17841d09f4c3660e0384ae5') data = r.text ifile = open('guns.csv', 'rb') reader = csv.reader(data.splitlines(), delimiter=',') for row in reader: print row Result: ['{'] [' "results": ['] [' {'] [' "count": 62.0', ''] [' "month": "201212"'] [' }', ''] [' {'] [' "count": 36.0', ''] [' "month": "201207"'] [' }', ''] [' {'] [' "count": 35.0', ''] [' "month": "201112"'] [' }', ''] [' {'] [' "count": 27.0', ''] [' "month": "201202"'] [' }', ''] [' {'] [' "count": 27.0', ''] Answer: Since the response is a Json, load the json data with: data = r.json()["results"] # read the json response and keep the results part and then write to the csv file: with open("guns.csv", "wb") as csvfile: f = csv.writer(csvfile) f.writerow(["Count", "Month"]) # write the headers if you like for elem in data: f.writerow([elem["count"], elem["month"]])
Error in collections.deque - Python
Question: I am trying to build a queue using deque in Python. The error I keep getting is an index out of range:

    perf_his[b][c] = 0
    IndexError: deque index out of range

Here is a small prototype of the code that I implemented:

    import collections

    apps = [1,2,3]
    num_thrs = len(apps)
    perf_his = []
    for a in range(num_thrs):
        perf_his += [collections.deque(maxlen=1)]

    for b in range(num_thrs):
        for c in range(0, 1):
            perf_his[b][c] = 0

In order to check whether I understood deque correctly, I also implemented this code:

    #!/usr/bin/env python
    from collections import deque

    something = ["foo","bar","baz"]
    output = []
    diff = 0
    d = deque()
    for i in something:
        d.append(i)
        print("-> %s" % i)

    for i in xrange(len(d)):
        print(d[i])
        output.append(d[i])

    for i in xrange(len(something)):
        if output[i] != something[i]:
            diff += 1

    print(something, output, diff)

I've been trying to fix the error for two days and I don't seem to understand the problem. Can someone please shed some light?
Answer: When maxlen is set, the deque is still size zero until elements are added. The effect of maxlen=5 is that after five appends, the next append will automatically pop the oldest element so that the size never gets bigger. In other words, maxlen is the maximum size, not the minimum. For your application, the deque needs to be prepopulated with initial values before you can make any index assignments:

    >>> d = deque([0] * 5, maxlen=5)
    >>> d[2] = 100
    >>> d
    deque([0, 0, 100, 0, 0], maxlen=5)
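Applied to the prototype above (where maxlen=1), a small sketch of the fix is to create each deque already filled -- or to rely on append(), which is usually what a bounded history buffer wants:

    import collections

    apps = [1, 2, 3]
    num_thrs = len(apps)

    # pre-filled deques: index assignment now works
    perf_his = [collections.deque([0], maxlen=1) for _ in range(num_thrs)]
    perf_his[0][0] = 42

    # or just append; once maxlen is reached the oldest entry is dropped automatically
    history = collections.deque(maxlen=1)
    history.append(0)
    history.append(1)      # the 0 falls out, history is deque([1], maxlen=1)
    print perf_his, history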
Concatanating array by While on Python Question: I had some question on python, I am trying to write something, where after each raw-input , I should input into program some data (with array form). Then this data (arrays 2 dimensioanl ) should be added to the other 2 D array Full of zeros. Then when I input the second data (it should be again changed to 2 D array) and must be added to array Which already contain the array with Zeros and array from input). And so , after each input , array should be added already created arrays. Here is the code. from numpy import * var=zeros(shape=(1,11)) #making 2D array with zeros while True: file=raw_input("write data file name or path") c=open(file, 'r') ArrayStr=loadtxt(c, dtype='S',) #making array var=vstack((ArrayStr, var)) # concatantaing array print var So finally I am getting array, which have only 2 arrays concatanatted (arrays with zeros and last inputted data array) Thanks for attention and help Answer: **Working code** _**in1.txt_** ATOM 1 C1 POS X 1 16.774 117.860 10.374 1.00 0.00 _**in2.txt_** ATOM 2 C2 POS X 2 18.774 17.860 30.374 2.00 0.00 **Code** import numpy zeros = numpy.zeros(shape=(1,11)) #making 2D array with zeros var = None for i in range(1, 3): i1 = open("in" + str(i) + ".txt") a = numpy.loadtxt(i1, dtype='S') if var is None: var = numpy.vstack((a, zeros)) else: var = numpy.vstack((var, a, zeros)) print var **Output** >>> [['ATOM' '1' 'C1' 'POS' 'X' '1' '16.774' '117.860' '10.374' '1.00' '0.00'] ['0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0'] ['ATOM' '2' 'C2' 'POS' 'X' '2' '18.774' '17.860' '30.374' '2.00' '0.00'] ['0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0' '0.0']] >>>
Random combination of letters?
Question: Using Python, how can I get a randomized combination of letters? I want to do something like this:

    def ranStr():
        # some code
        return random_string

So, if I call `ranStr()`, I want to get things like (a random combination of letters, of random length):

    'aAFGresdFTg'
    'EFRwdfWe'
    'rgAijD'

Answer: This is one way of doing it:

    import random

    def generate_key():
        STR_KEY_GEN = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
        return ''.join(random.choice(STR_KEY_GEN) for _ in xrange(70))

Or, based on the suggestions, use the alphabet the standard library already provides:

    import random, string

    def generate_key():
        return ''.join(random.choice(string.ascii_letters) for _ in xrange(70))
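The question also asks for a random length; a small variant (the length bounds here are my own choice) is:

    import random, string

    def ranStr(min_len=6, max_len=12):
        length = random.randint(min_len, max_len)
        return ''.join(random.choice(string.ascii_letters) for _ in xrange(length))

    print ranStr()   # e.g. 'rgAijD' -- different every call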
How to simulate a click event for a link using Qt with Python
Question: I want to screen-scrape a web site that has multiple pages. These pages are loaded dynamically without changing the URL. I don't want to use Selenium since it opens a browser window every time you need content. Does Qt work the same way? If not, how can I simulate a click event using Qt?
PS: Google provided only vague answers. Thanks in advance.
Answer: There's already a project that does just that: [ghost.py](http://jeanphix.me/Ghost.py/). It uses PyQt4's or PySide's QtWebKit browser internally. Here's their example code:

    from ghost import Ghost
    ghost = Ghost()
    page, extra_resources = ghost.open("http://jeanphi.fr")
    assert page.http_status == 200 and 'jeanphix' in ghost.content

You will most likely be using `ghost.click()` to interact with your webpage:

    ghost.click('#selector')
RuntimeError: maximum recursion depth exceeded with Python 3.2 pickle.dump Question: I'm getting the above error with the code below. The error occurs at the last line. Please excuse the subject matter, I'm just practicing my python skills. =) from urllib.request import urlopen from bs4 import BeautifulSoup from pprint import pprint from pickle import dump moves = dict() moves0 = set() url = 'http://www.marriland.com/pokedex/1-bulbasaur' print(url) # Open url with urlopen(url) as usock: # Get url data source data = usock.read().decode("latin-1") # Soupify soup = BeautifulSoup(data) # Find move tables for div_class1 in soup.find_all('div', {'class': 'listing-container listing-container-table'}): div_class2 = div_class1.find_all('div', {'class': 'listing-header'}) if len(div_class2) > 1: header = div_class2[0].find_all(text=True)[1] # Take only moves from Level Up, TM / HM, and Tutor if header in ['Level Up', 'TM / HM', 'Tutor']: # Get rows for row in div_class1.find_all('tbody')[0].find_all('tr'): # Get cells cells = row.find_all('td') # Get move name move = cells[1].find_all(text=True)[0] # If move is new if not move in moves: # Get type typ = cells[2].find_all(text=True)[0] # Get category cat = cells[3].find_all(text=True)[0] # Get power if not Status or Support power = '--' if cat != 'Status or Support': try: # not STAB power = int(cells[4].find_all(text=True)[1].strip(' \t\r\n')) except ValueError: try: # STAB power = int(cells[4].find_all(text=True)[-2]) except ValueError: # Moves like Return, Frustration, etc. power = cells[4].find_all(text=True)[-2] # Get accuracy acc = cells[5].find_all(text=True)[0] # Get pp pp = cells[6].find_all(text=True)[0] # Add move to dict moves[move] = {'type': typ, 'cat': cat, 'power': power, 'acc': acc, 'pp': pp} # Add move to pokemon's move set moves0.add(move) pprint(moves) dump(moves, open('pkmn_moves.dump', 'wb')) I have reduced the code as much as possible in order to produce the error. The fault may be simple, but I can't just find it. In the meantime, I made a workaround by setting the recursion limit to 10000. Answer: Just want to contribute an answer for anyone else who may have this issue. Specifically, I was having it with caching BeautifulSoup objects in a Django session from a remote API. The short answer is the pickling BeautifulSoup nodes is not supported. I instead opted to store the original string data in my object and have an accessor method that parsed it on the fly, so that only the original string data is pickled.
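For anyone hitting the same recursion error in the scraper above: the values stored in moves (the results of find_all(text=True)) are BeautifulSoup NavigableString objects, which keep references back into the parse tree, so pickling the dict effectively tries to pickle the entire soup. A sketch of the two usual workarounds -- converting to plain strings, or caching the raw markup and parsing lazily; the class and names here are mine, not from the answer:

    from bs4 import BeautifulSoup
    from pickle import dump

    # 1) convert soup strings to plain unicode before storing them, e.g.
    #    moves[unicode(move)] = {...}   inside the scraping loop above

    # 2) or cache the raw markup and parse it on demand, so only strings get pickled
    class CachedPage(object):
        def __init__(self, html):
            self.html = html            # plain string: safe to pickle
            self._soup = None

        def __getstate__(self):
            return {'html': self.html}  # never pickle the parsed tree

        def __setstate__(self, state):
            self.html = state['html']
            self._soup = None

        @property
        def soup(self):
            if self._soup is None:
                self._soup = BeautifulSoup(self.html)
            return self._soup

    page = CachedPage('<div class="listing-header">Level Up</div>')
    dump(page, open('pkmn_page.dump', 'wb'))
    print page.soup.find('div').get_text()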
plot trajectories on an map using basemap Question: import numpy as np data = np.loadtxt('path-tracks.csv',dtype=np.str,delimiter=',',skiprows=1) print data [['19.70' '-95.20' '2/5/04 6:45 AM' '1' '-38' 'CCM'] ['19.70' '-94.70' '2/5/04 7:45 AM' '1' '-48' 'CCM'] ['19.30' '-93.90' '2/5/04 8:45 AM' '1' '-60' 'CCM'] ['19.00' '-93.50' '2/5/04 9:45 AM' '1' '-58' 'CCM'] ['19.00' '-92.80' '2/5/04 10:45 AM' '1' '-50' 'CCM'] ['19.20' '-92.60' '2/5/04 11:45 AM' '1' '-40' 'CCM'] ['19.90' '-93.00' '2/5/04 12:45 PM' '1' '-43' 'CCM'] ['20.00' '-92.80' '2/5/04 1:15 PM' '1' '-32' 'CCM'] ['23.10' '-100.20' '30/5/04 4:45 AM' '2' '-45' 'SCME'] ['23.20' '-100.00' '30/5/04 5:45 AM' '2' '-56' 'SCME'] ['23.30' '-100.00' '30/5/04 6:45 AM' '2' '-48' 'SCME'] ['23.30' '-100.20' '30/5/04 7:45 AM' '2' '-32' 'SCME'] ['23.40' '-99.00' '31/5/04 3:15 AM' '3' '-36' 'SCM'] ['23.50' '-98.90' '31/5/04 4:15 AM' '3' '-46' 'SCM'] ['23.60' '-98.70' '31/5/04 5:15 AM' '3' '-68' 'SCM'] ['23.70' '-98.80' '31/5/04 6:15 AM' '3' '-30' 'SCM']] with the above code I get an array whose columns represent: [Lat, Lon, Date, Identifier, Temperatures, Category]. Now, I will put a code that allows me to plot the first and second column on the map of Mexico: #!/usr/bin/python #Project Storm: Plot trajectories of convective systems #import libraries import numpy as np from mpl_toolkits.basemap import Basemap import matplotlib.pyplot as pl # Plot a map for Mexico m = Basemap(projection='cyl', llcrnrlat=12, urcrnrlat=35,llcrnrlon=-120, urcrnrlon=-80, resolution='c', area_thresh=1000.) m.bluemarble() m.drawcoastlines(linewidth=0.5) m.drawcountries(linewidth=0.5) m.drawstates(linewidth=0.5) #Draw parallels and meridians m.drawparallels(np.arange(10.,35.,5.)) m.drawmeridians(np.arange(-120.,-80.,10.)) m.drawmapboundary(fill_color='aqua') #Open file whit numpy data = np.loadtxt('path-tracks.csv', dtype=np.str,delimiter=',', skiprows=1) latitude = data[:,0] longitude = data[:,1] #Convert latitude and longitude to coordinates X and Y x, y = m(longitude, latitude) #Plot the points on the map pl.plot(x,y,'ro-') pl.show() The points plotted on the map, corresponding to three different paths. Mi final idea is to draw a line connecting the points associated with each path, How I can do this? is posible draw an identifier or a mark for each path? how I can set the size of the figure so that it can distinguish the separation between the points? Answer: The size of the figure can be set by simply creating a `figure` before calling `Basemap`. I have used Pandas to read the CSV because it allows easy grouping (per path). If you dont want to use Pandas you probably can get the same result by iterating over `np.unique('cat')` or something. If you use the `datetime` as the index in pandas your points automatically get sorted by time in the case your CSV is unsorted. I'm not sure what you mean by drawing an identifier. The legend makes it possible to distinguish between paths but you could also plot the `'Cat'` on the map at the beginning or end of a line for example. Your map properties make it a bit 'zoomed out' for such small paths. Using `ax = pl.gca()` and `ax.set_xlim()` allows you to set a boundingbox in the mapcoordinates. Which you can derive from the max and min coordinates in your paths + some buffer. 
    import numpy as np
    from mpl_toolkits.basemap import Basemap
    import matplotlib.pyplot as pl
    import pandas as pd

    # Read the CSV with pandas (this line was missing above); the column names
    # are assumed to match the ones used below (Lat, Lon, Cat, ...)
    df = pd.read_csv('path-tracks.csv', skiprows=1,
                     names=['Lat', 'Lon', 'Date', 'Identifier', 'Temp', 'Cat'])

    fig = pl.figure(figsize=(12, 12))

    # Plot a map for Mexico
    m = Basemap(projection='cyl', llcrnrlat=12, urcrnrlat=35, llcrnrlon=-120,
                urcrnrlon=-80, resolution='c', area_thresh=1000.)

    m.bluemarble()
    m.drawcoastlines(linewidth=0.5)
    m.drawcountries(linewidth=0.5)
    m.drawstates(linewidth=0.5)

    # Draw parallels and meridians
    m.drawparallels(np.arange(10., 35., 5.))
    m.drawmeridians(np.arange(-120., -80., 10.))
    m.drawmapboundary(fill_color='aqua')

    colors = {'CCM': 'red', 'SCME': 'white', 'SCM': 'yellow'}

    for cat, track in df.groupby('Cat'):
        latitude = track.Lat.values
        longitude = track.Lon.values

        # Convert latitude and longitude to coordinates X and Y
        x, y = m(longitude, latitude)

        # Plot one line per path, coloured and labelled by category
        pl.plot(x, y, '-', label=cat, color=colors[cat])

    lg = pl.legend()
    lg.get_frame().set_facecolor('grey')

![enter image description here](http://i.stack.imgur.com/Z4X2U.png)
Execute different version of python in .vimrc file Question: I am trying to get powerline.vim to work and the problem is that on setup it is trying to run the wrong version of python. This line is causing the problem. python from powerline.ext.vim import source_plugin; source_plugin() How can I change this so that it executes `/usr/local/bin/python` instead? Thanks! Answer: You can't. Because you are **not** using _/anything_`/bin/python`. The vim binary is linked to the python interpreter library directly. The library is called `/usr/lib/libpythonX.Y.so.Z`. E.g. `/usr/lib/libpython2.7.so.1`. It's a shared library, so you could use newer version of it (e.g. changing library paths), but only one that has the same numbers (X, Y and Z). So you can use python 2.7.3 instead of python 2.7.0, but to use python 2.7.3 instead of python 2.6.2, you'll have to recompile vim. But I suspect you are just trying to do the things the wrong way around. You should simply try installing powerline into the python version vim uses.
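To check which interpreter your vim is actually linked against (and therefore which one powerline will run under), you can ask it directly from inside vim; this is just a diagnostic, not a way to switch versions:

    :python import sys; print sys.version
    :python import sys; print sys.prefix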
which one is better to test a valid url in python Question: I want to check whether a particular url exists or not. I came across two methods.

url = "http://www.google.com"

1. Using urllib2:

import urllib2
response = urllib2.urlopen(url)
response.code # check what is the response code

2. Using httplib:

import httplib
conn = httplib.HTTPConnection(url)
conn.request('HEAD', '/')
response = conn.getresponse()
if response.status == 200: # check the status code
    # do something

Though both will solve my purpose, which one is the better method to achieve it? Thanks in advance for your help. Answer: You can try something along those lines, the point being that you usually also want to handle the errors you get if there is a problem with fetching the url.

In [4]: import urllib2

In [5]: def test(url):
   ...:     try:
   ...:         response = urllib2.urlopen(url)
   ...:     except urllib2.HTTPError as e:
   ...:         return e.code,None
   ...:     return response.code,response

In [6]: test('http://www.google.com')
Out[6]: (200, <addinfourl at 154469068 whose fp = <socket._fileobject object at 0x92caa2c>>)

In [7]: test('http://www.google.com/foobar')
Out[7]: (404, None)

In reality you would also need to handle `urllib2.URLError`:

In [10]: def test(url):
   ...:     try:
   ...:         response = urllib2.urlopen(url)
   ...:     except urllib2.HTTPError as err:
   ...:         return err.code, None
   ...:     except urllib2.URLError as err:
   ...:         return err.reason, None
   ...:     return response.code,response

In [11]: test('http://www.google.foo')
Out[11]: (socket.gaierror(-2, 'Name or service not known'), None)
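For completeness, the `httplib` variant from the question can work too, but note that `HTTPConnection` expects a host name, not a full URL. A hedged sketch (the timeout value is arbitrary):

import httplib
import socket

def head_check(host, path='/'):
    conn = httplib.HTTPConnection(host, timeout=10)
    try:
        conn.request('HEAD', path)
        return conn.getresponse().status
    except (httplib.HTTPException, socket.error):
        return None
    finally:
        conn.close()

print head_check('www.google.com')   # e.g. 200, or a redirect code like 301/302

A HEAD request avoids downloading the body, which is handy if you only care whether the URL exists.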
why this python program does not work? Question: My python program gives no error but it does not do what it supposed to do either. What could be possibly wrong? Could it be that it does not have access to the imported packages? What should I do? It is supposed to go to Yahoo! search website and query the search engine and brings back the results. **Code** from [here](http://rosettacode.org/wiki/Yahoo!_search_interface#Python) import urllib import re def fix(x): p = re.compile(r'<[^<]*?>') return p.sub('', x).replace('&amp;', '&') class YahooSearch: def __init__(self, query, page=1): self.query = query self.page = page self.url = "http://search.yahoo.com/search?p=%s&b=%s" %(self.query, ((self.page - 1) * 10 + 1)) self.content = urllib.urlopen(self.url).read() def getresults(self): self.results = [] for i in re.findall('<a class="yschttl spt" href=".+?">(.+?)</a></h3></div>(.+?)</div>.*?<span class=url>(.+?)</span>', self.content): title = fix(i[0]) content = fix(i[1]) url = fix(i[2]) self.results.append(YahooResult(title, content, url)) return self.results def getnextpage(self): return YahooSearch(self.query, self.page+1) search_results = property(fget=getresults) nextpage = property(fget=getnextpage) class YahooResult: def __init__(self,title,content,url): self.title = title self.content = content self.url = url # Usage: x = YahooSearch("test") for result in x.search_results: print result.title Answer: Your regex looks for `<a class="yschttl spt" etc. etc.`, but the html you are trying to parse actually contains elements that look like `<a id="link-5" class="yschttl spt" etc. etc.`. Beside the fact that you should use a HTML-parser (like [BeatifulSoup](http://pypi.python.org/pypi/BeautifulSoup)) instead of regular expressions, you can fix your regex by using somthing like `'<a id="link-\d+" class="yschttl spt" etc. etc.`
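As a sketch of the parser-based approach (hedged — the `'yschttl spt'` class is taken from your own regex, and Yahoo's markup changes over time, so treat the selector as an assumption):

from BeautifulSoup import BeautifulSoup

def parse_results(html):
    soup = BeautifulSoup(html)
    results = []
    for a in soup.findAll('a', {'class': 'yschttl spt'}):
        title = ''.join(a.findAll(text=True))   # text content of the link
        url = a['href']
        results.append((title, url))
    return results

This sidesteps the brittleness of matching attribute order and extra attributes (like the `id="link-5"`) with a regular expression.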
NGINX, uWSGI and Flask not running subprocess Question: My NGINX configuration: server { server_name 127.0.0.1; listen 4450; location ~* ^/.*$ { include uwsgi_params; uwsgi_pass unix:/tmp/esrvadmin.sock; } } uWSGI start up: uwsgi --uid root -s /tmp/esrvadmin.sock --chown-socket nobody:root \ --file /var/www/sitios/manten/srv.py \ --processes 2 --callable app --pidfile /var/run/edesarrollos/esrvadmin.pid Python with flask code: import os, subprocess from flask import Flask, abort, request app = Flask(__name__) DETO_DIR = '/var/www/sitios/manten/detos' @app.route('/detonate') def index(): #return str(subprocess.check_output(['ls','-l'])) token = request.args.get('token','') if token != '': # Si no existe el directorio de detonatores, se crea if not os.path.exists(DETO_DIR): os.mkdir(DETO_DIR) if os.path.isfile(DETO_DIR+"/"+token): try: os.system(open(DETO_DIR+"/"+token).read()) except Exception as ex: return str(ex) return "Reiniciado Correctamente" else: abort(404) else: abort(404) if __name__ == "__main__": app.run(debug=True,host='0.0.0.0') In **main** mode, i can run anything inside os.system("something args") and receive its HTTP response, even with "service something start" and it finish right. Also i tried with subprocess functions, no luck. But when i run from NGINX-uWSGI, i get a waiting forever http request, of course never response but it does the job (starting a service). The best error description in log i found: 2013/01/28 03:38:24 [error] 3978#0: *3 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET /detonate?token=start- a81260812b643d8672ccf9570033109f200595779e0a352e630a75760328e2d375025ef349e 3d599f368092abb63511f-23611e81194b04d5d0a6d0f02baf7fb9 HTTP/1.1", upstream: "uwsgi://unix:/tmp/esrvadmin.sock:", host: "127.0.0.1:4450" Any idea of what could be happening ? Answer: read here: <http://uwsgi-docs.readthedocs.org/en/latest/ThingsToKnow.html> (the --close-on-exec part) and please,please, please, DO NOT RUN uWSGI as root !
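If you prefer to handle it on the Python side instead of (or in addition to) the uWSGI option, spawning the service with `subprocess` and `close_fds=True` keeps the child from inheriting the uWSGI/nginx socket. A sketch, not a drop-in replacement for your `os.system` call:

import shlex
import subprocess

def run_detonator(command_line):
    # close_fds=True: the spawned service does not inherit the uWSGI socket,
    # so a long-lived daemon started here cannot keep the HTTP connection
    # open after the view returns
    args = shlex.split(command_line)
    return subprocess.call(args, close_fds=True)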
MySQL and SQLAlchemy integer division in WHERE clause Question: I have a table that contains a column called `region_code` with `Integer` as its datatype. In my Python program I have this line:

region = session.query(Region).filter(Region.region_code/100 == region_prefix).one()

The important part here is the chosen **filter** expression

Region.region_code/100 == region_prefix

The `region_code` is the integer column in the table, containing values like `1199, 1200, 1300, 1499`, while the region_prefix variable (from the program) holds integer values like `11, 12, 13, 14`.

What I now want to achieve with this filter option is to perform an integer division on `region_code` (hence cutting everything behind the decimal point) to be able to compare it with `region_prefix`.

**Example:** region_prefix = 11. Assume that in the Region table there exists an entry with region_code = 1199. The calculation within the filter should look like this: 1199 / 100 = 11. With this result I am able to filter for `region_prefix (11=11)`.

**This works just fine with MSSQL (Microsoft).** Unfortunately (in my case) the **MySQL** division operator `/` does not do integer division. So the result of `1199/100` will be `11.99` and my query will not find a fitting entry in the DB (because 11.99 != 11). Nevertheless MySQL actually has integer division implemented: the **DIV** operator. So the following SQL query works just fine for me:

SELECT * FROM Region where region_code DIV 100 = 11;

**So my question is:** _How do I get the SQLAlchemy `filter` method to use `DIV` instead of the `/` operator? Is this possible? Or is the only workaround to use native queries?_ Answer: Luckily SQLAlchemy allows for custom operators. There are simple ways of doing this (simply specifying it in the query) or more complex versions (fully defining the new operator and making it multi-database aware). So it depends on what you're looking for.

<http://docs.sqlalchemy.org/en/rel_0_8/core/expression_api.html#sqlalchemy.sql.expression.BinaryExpression.op>

region = (session
    .query(Region)
    .filter(Region.region_code.op('div')(100) == region_prefix)
    .one()
    )

And the fancy way:

<http://docs.sqlalchemy.org/en/rel_0_8/core/types.html#types-operators>

from sqlalchemy import Integer

class DivisibleInteger(Integer):
    class comparator_factory(Integer.Comparator):
        def div(self, other):
            return self.op('div')(other)
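Usage of the custom type would look roughly like this — a sketch, assuming SQLAlchemy 0.8+ (where type-level comparators are exposed on mapped attributes); the table definition is invented for illustration, only `region_code` comes from the question:

from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Region(Base):
    __tablename__ = 'region'
    id = Column(Integer, primary_key=True)
    region_code = Column(DivisibleInteger)   # the type defined above

region = (session
    .query(Region)
    .filter(Region.region_code.div(100) == region_prefix)
    .one()
    )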
Calculating eigenvector centrality using NetworkX Question: I'm using the NetworkX library to work with some small- to medium-sized unweighted, unsigned, directed graphs representing usage of a Web 2.0 site (smallest graph: less than two dozen nodes, largest: a few thousand). One of the things I want to calculate is eigenvector centrality, as follows: >>> eig = networkx.eigenvector_centrality(my_graph) >>> eigs = [(v,k) for k,v in eig.iteritems()] >>> eigs.sort() >>> eigs.reverse() However, this gives unexpected results: nodes with 0 outdegree but receiving inward arcs from very central nodes appear at the very back of the list with 0.0 eigenvector centrality (not being a mathematician I may have got this confused, but I don't think that outward arcs should make any difference to a node's centrality to a directed graph). In the course of investigating these results, I noticed from the documentation that NetworkX calculates 'right' eigenvector centrality by default; out of curiosity, I decided to calculate 'left' eigenvector centrality by the recommended method, i.e. reversing the graph before calculating eigenvector centrality ([see Networkx documentation](http://networkx.lanl.gov/reference/generated/networkx.algorithms.centrality.eigenvector_centrality.html#networkx.algorithms.centrality.eigenvector_centrality)). To my surprise, I got exactly the same result: every node was calculated to have exactly the same eigenvector centrality as before. I think this should be a very unlikely outcome ([see Wikipedia article](http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#Left_and_right_eigenvectors)), but I have since replicated it with all the graphs I'm working with. Can anyone explain to me what I'm doing wrong? N.B. Using the NetworkX implementation of the PageRank algorithm provides the results I was expecting, i.e. nodes receiving inward arcs from very central nodes have high centrality even if their outdegree is 0. PageRank is usually considered to be a variant of eigenvector centrality ([see Wikipedia article](http://en.wikipedia.org/wiki/PageRank#Damping_factor)). Edit: following a request from Aric, I have included some data. This is an anonymised version of my smallest graph. (I couldn't post toy data in case the problem is specific to the structure of my graphs.) Running the code below on my machine (with Python 2.7) appears to reveal (a) that each node's right and left eigenvector centrality are the same, and (b) that nodes with outdegree 0 _invariably_ also have eigenvector centrality 0, even if they are quite central to the graph as a whole (e.g. node 61). 
import networkx anon_e_list = [(10, 59), (10, 15), (10, 61), (15, 32), (16, 31), (16, 0), (16, 37), (16, 54), (16, 45), (16, 56), (16, 10), (16, 8), (16, 36), (16, 24), (16, 30), (18, 34), (18, 36), (18, 30), (19, 1), (19, 3), (19, 51), (19, 21), (19, 40), (19, 41), (19, 30), (19, 14), (19, 61), (21, 64), (26, 1), (31, 1), (31, 3), (31, 51), (31, 62), (31, 33), (31, 40), (31, 23), (31, 30), (31, 18), (31, 13), (31, 46), (31, 61), (32, 3), (32, 2), (32, 33), (32, 6), (32, 7), (32, 9), (32, 15), (32, 17), (32, 18), (32, 23), (32, 30), (32, 5), (32, 27), (32, 34), (32, 35), (32, 38), (32, 40), (32, 42), (32, 43), (32, 46), (32, 47), (32, 62), (32, 56), (32, 57), (32, 59), (32, 64), (32, 61), (33, 0), (33, 31), (33, 2), (33, 7), (33, 9), (33, 10), (33, 12), (33, 64), (33, 14), (33, 46), (33, 16), (33, 17), (33, 18), (33, 19), (33, 20), (33, 21), (33, 22), (33, 23), (33, 30), (33, 26), (33, 28), (33, 11), (33, 34), (33, 32), (33, 35), (33, 37), (33, 38), (33, 39), (33, 41), (33, 43), (33, 45), (33, 24), (33, 47), (33, 48), (33, 49), (33, 58), (33, 62), (33, 53), (33, 54), (33, 55), (33, 60), (33, 57), (33, 59), (33, 5), (33, 52), (33, 63), (33, 61), (34, 58), (34, 4), (34, 33), (34, 20), (34, 55), (34, 28), (34, 11), (34, 64), (35, 18), (35, 60), (35, 61), (37, 34), (37, 48), (37, 49), (37, 18), (37, 33), (37, 39), (37, 21), (37, 42), (37, 26), (37, 59), (37, 44), (37, 12), (37, 11), (37, 61), (41, 3), (41, 50), (41, 18), (41, 52), (41, 33), (41, 54), (41, 19), (41, 22), (41, 5), (41, 46), (41, 25), (41, 44), (41, 13), (41, 62), (41, 29), (44, 32), (44, 3), (44, 18), (44, 33), (44, 40), (44, 41), (44, 30), (44, 23), (44, 61), (50, 17), (50, 37), (50, 62), (50, 41), (50, 25), (50, 43), (50, 27), (50, 28), (50, 29), (54, 33), (54, 41), (54, 10), (54, 59), (54, 63), (54, 61), (58, 62), (58, 46), (59, 31), (59, 34), (59, 30), (59, 49), (59, 18), (59, 33), (59, 9), (59, 10), (59, 8), (59, 13), (59, 24), (59, 61), (60, 34), (60, 16), (60, 35), (60, 50), (60, 4), (60, 6), (60, 59), (60, 24), (63, 40), (63, 33), (63, 30), (63, 61), (63, 53)] my_graph = networkx.DiGraph() my_graph.add_edges_from(anon_e_list) r_eig = networkx.eigenvector_centrality(my_graph) my_graph2 = my_graph.reverse() l_eig = networkx.eigenvector_centrality(my_graph2) for nd in my_graph.nodes(): print 'node: {} indegree: {} outdegree: {} right eig: {} left eig: {}'.format(nd,my_graph.in_degree(nd),my_graph.out_degree(nd),r_eig[nd],l_eig[nd]) Answer: These two lines my_graph2 = my_graph.copy() my_graph2.reverse() should be replaced with my_graph2 = my_graph.reverse() since the reverse() method by default returns a copy of the graph.
Inputing floats, integers, or equations in raw_input to define a variable Question: I've written this program so solve two equations based on values defined by the user. The constants kx and ky, I've defined as floats. For the range - variables start and end - I would like the user to either enter a number, or something like 6 * np.pi (6Pi). As it is now, I get the following error. How can I define this variable to allow users to enter multiple types of inputs? Thanks! Traceback (most recent call last): File "lab1_2.py", line 11, in <module> x = np.linspace(start, end, 256, endpoint=True) File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/site- packages/numpy/core/function_base.py", line 80, in linspace step = (stop-start)/float((num-1)) TypeError: unsupported operand type(s) for -: 'str' and 'float' Here's the code: from pylab import * import numpy as np kx = float(raw_input("kx: ")) ky = float(raw_input("ky: ")) print "Define the range of the output:" start = float(raw_input("From: ")) end = float((raw_input("To: ")) x = np.linspace(start, end, 256, endpoint=True) y = np.linspace(start, end, 256, endpoint=True) dz_dx = ((1 / 2.0) * kx * np.exp((-kx * x) / 2)) * ((2 * np.cos(kx *x)) - (np.sin(kx * x))) dz_dy = ((1 / 2.0) * ky * np.exp((-ky * y) / 2)) * ((2 * np.cos(ky *y)) - (np.sin(ky * y))) plot(x, dz_dx, linewidth = 1.0) plot(y, dz_dy, linewidth = 1.0) grid(True) show() Answer: You'll need to either parse the string yourself (the `ast` module would probably be useful), or use `eval`: >>> s = '6*pi' >>> eval(s,{'__builtins__': None, 'pi': np.pi}) 18.84955592153876 Note that there are some [nasty things](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html) that users can do with `eval`. My solution protects you from _most_ of them, but not all -- pre-checking the string to make sure that there aren't any `__` will make it even safer (that eliminates all of the vulnerabilities that I know of, but there _could_ be others)
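A sketch of the pre-check plus a small whitelist (hedged — this still calls eval, so treat it as a convenience for trusted users rather than a security boundary):

import numpy as np

ALLOWED_NAMES = {'pi': np.pi, 'e': np.e}

def parse_number(text):
    if '__' in text:
        raise ValueError("'__' is not allowed in the input")
    return float(eval(text, {'__builtins__': None}, ALLOWED_NAMES))

start = parse_number(raw_input("From: "))   # accepts '0', '6*pi', ...
end = parse_number(raw_input("To: "))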
Pandas installation on Mac OS X: ImportError (cannot import name hashtable) Question: I would like to build pandas from source rather than use a package manager because I am interested in contributing. **The first time** I tried to build pandas, these were the steps I took: 1) created the virtualenv `mkvirtualenv --no-site-packages pandas` 2) activated the virtualenv 3) installed Anaconda CE. However, this was installed in ~/anaconda. 4) cloned pandas 5) built C extensions in place `(pandas)ems ~/.virtualenvs/pandas/localrepo/pandas> ~/anaconda/bin/python setup.py build_ext --inplace` 6) built pandas `(pandas)ems ~/.virtualenvs/pandas/localrepo/pandas> ~/anaconda/bin/python setup.py build` 7) ran nosetests on master branch Tests failed: (pandas)ems ~/.virtualenvs/pandas/localrepo/pandas> nosetests pandas E ====================================================================== ERROR: Failure: ValueError (numpy.dtype has the wrong size, try recompiling) \---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/EmilyChen/.virtualenvs/pandas/lib/python2.7/site- packages/nose/loader.py", line 390, in loadTestsFromName addr.filename, addr.module) File "/Users/EmilyChen/.virtualenvs/pandas/lib/python2.7/site- packages/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/Users/EmilyChen/.virtualenvs/pandas/lib/python2.7/site- packages/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/Users/EmilyChen/.virtualenvs/pandas/localrepo/pandas/pandas/**init**.py", line 6, in from . import hashtable, tslib, lib File "numpy.pxd", line 156, in init pandas.hashtable (pandas/hashtable.c:20354) ValueError: numpy.dtype has the wrong size, try recompiling * * * Ran 1 test in 0.001s FAILED (errors=1) Someone on the PyData mailing list said: > It looks like you have NumPy installed someplace else on your machine and > AnacondaCE is not playing nicely in the virtualenv. The error you are > getting is a Cython error message which occurs when the NumPy version it > built against doesn't match the installed version on your system-- I had > thought that 1.7.x was supposed to be ABI compatible with 1.6.x (so this > would not happen) but I guess not. Sigh The numpy version in Anaconda CE library is 1.7.0b2 and my system numpy installation is version 1.5.1. Setup.py linked to the numpy in the Anaconda distribution's libraries when it built pandas but my guess is it's linking to my system version when nosetests runs /pandas/**init**.py **Next** , I repeated the steps outside a virtualenv, but got the same error. **Finally** , I decided to install all the dependencies in a new virtualenv instead of using the Anaconda distribution to build pandas. This way, I can see that dependencies like numpy reside in the lib directory of the virtualenv python installation, which takes precedent when pandas.**init** runs import statements. This is what I did: 1) installed numpy, dateutil, pytz, cython, scipy, matplotlib and openpyxl using pip 2) built c extensions in place 3) pandas install output here: <http://pastebin.com/3CKf1f9i> 4) pandas did not install correctly (pandas)ems ~/.virtualenvs/pandas/localrepo/pandas> python Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53) [GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import pandas
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pandas/__init__.py", line 6, in <module>
    from . import hashtable, tslib, lib
ImportError: cannot import name hashtable

I took a look at [this question](http://stackoverflow.com/questions/14422976/python3-3-importerror-cannot-import-name-hashtable) but cython is installed in my case, and I am trying to build successfully from source rather than using pip like the answer recommended.

(pandas)ems ~/.virtualenvs/pandas/localrepo/pandas> which cython
/Users/EmilyChen/.virtualenvs/pandas/bin/cython

Answer: I've received the same error (`ImportError: cannot import name hashtable`) when trying to import pandas from the source code directory. Try starting the python interpreter from a different directory and import pandas again.
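Once you have moved out of the source directory, a quick sanity check shows which pandas actually gets imported (the in-place build in the repo shadows any installed copy when Python is started from that directory):

import pandas
print pandas.__file__      # should point at site-packages, not the local repo
print pandas.__version__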
Twisted deferreds firing in undesired way Question: I have the following code # logging from twisted.python import log import sys # MIME Multipart handling import email import email.mime.application import uuid # IMAP Connection from twisted.mail import imap4 from twisted.internet import protocol #SMTP Sending import os.path from OpenSSL.SSL import SSLv3_METHOD from twisted.internet import ssl from twisted.mail.smtp import ESMTPSenderFactory from twisted.internet.ssl import ClientContextFactory from twisted.internet.defer import Deferred from twisted.internet import reactor #class AccountsManager(object): def connectToIMAPServer(imap_server, username, password): factory = IMAP4ClientFactory(username, password, login_insecure = True) host, port = imap_server.split(":") # connect to reactor if port == '993': reactor.connectSSL(host, int(port), factory, ssl.ClientContextFactory()) else: if not port: port = 143 reactor.connectTCP(host, int(port), factory) d = factory.deferred d.addCallback(lambda r: factory.proto) return d class IMAP4Client(imap4.IMAP4Client): """ A client with callbacks for greeting messages from an IMAP server. """ greetDeferred = None def serverGreeting(self, caps): self.serverCapabilities = caps if self.greetDeferred is not None: d, self.greetDeferred = self.greetDeferred, None d.callback(self) class IMAP4ClientFactory(protocol.ClientFactory): usedUp = False protocol = IMAP4Client def __init__(self, username, password, mailbox = "INBOX", login_insecure = False): self.ctx = ssl.ClientContextFactory() self.username = username self.password = password self.mailbox = mailbox self.login_insecure = login_insecure self.deferred = Deferred() def buildProtocol(self, addr): """ Initiate the protocol instance. Since we are building a simple IMAP client, we don't bother checking what capabilities the server has. We just add all the authenticators twisted.mail has. Note: Gmail no longer uses any of the methods below, it's been using XOAUTH since 2010. 
""" assert not self.usedUp self.usedUp = True p = self.protocol(self.ctx) p.factory = self p.greetDeferred = self.deferred p.registerAuthenticator(imap4.PLAINAuthenticator(self.username)) p.registerAuthenticator(imap4.LOGINAuthenticator(self.username)) p.registerAuthenticator(imap4.CramMD5ClientAuthenticator(self.username)) self.deferred.addCallback(self.GreetingCallback) self.deferred.addErrback(self.GreetingErrback) self.proto = p return p def GreetingCallback(self, result): print "Secure Login" auth_d = self.proto.authenticate(self.password) auth_d.addCallback(self.AuthenticationCallback) auth_d.addErrback(self.AuthenticationErrback) return auth_d # attach it to the main deferred def GreetingErrback(self, error): log.err(error) self.CloseConnection() return error def AuthenticationCallback(self, result): print "Selecting Mailbox" d = self.proto.examine(self.mailbox) return d def AuthenticationErrback(self, failure): if self.login_insecure: failure.trap(imap4.NoSupportedAuthentication) return self.InsecureLogin() else: return error def InsecureLogin(self): print "Insecure Login" d = self.proto.login(self.username, self.password) d.addCallback(self.AuthenticationCallback) return d def CloseConnection(self): self.proto.transport.loseConnection() def clientConnectionFailed(self, connector, reason): d, self.deferred = self.deferred, None d.errback(reason) class MailServer(object): "Manages a server" size = 0 used_space = 0 def __init__(self, smtp_server, imap_server, username, password): self.smtp_server, self.smtp_port = smtp_server.split(":") self.imap_server, self.imap_port = imap_server.split(":") self.username = username self.password = password self.imap_connection = IMAP4ClientFactory(username, password) def upload_data(self, data): """ Uploads data to email server returns deferred that will return with the imap uid """ # Create a text/plain message id = str(uuid.uuid4()).upper() msg = email.mime.Multipart.MIMEMultipart() msg['Subject'] = 'GMA ID: %s' % id msg['From'] = self.email_address msg['To'] = self.email_address # The main body is just another attachment body = email.mime.Text.MIMEText("GMA ID: %s" % (self.uuid_id)) msg.attach(body) att = email.mime.application.MIMEApplication(data,_subtype="raw") att.add_header('Content-Disposition','attachment',filename = os.path.basename(self.filename)) msg.attach(att) # Create a context factory which only allows SSLv3 and does not verify # the peer's certificate. contextFactory = ClientContextFactory() contextFactory.method = SSLv3_METHOD d = Deferred() mime_obj = StringIO(str(msg)) senderFactory = ESMTPSenderFactory( self.username, self.password, self.email_address, self.email_address, mime_obj, d, contextFactory=contextFactory) d.addCallback(lambda r: self.email_sent(id, int(self.parts)) ) d.addErrback(self.email_error) reactor.connectTCP(self.smtp_server, self.smtp_port, senderFactory) d.addCallback(self.upload_success, *args, **kw) d.addErrback(self.upload_error, 1) return d def upload_success(self, result): print "upload was succesful!" 
def upload_error(self, result): print "upload error" def download_data(self, uid): """ Downloads data from the email server returns a deferred that will return with the data """ print "uid" if __name__ == "__main__": log.startLogging(sys.stdout) d = connectToIMAPServer("imap.gmail.com:993", "username", "password") def f(s): print s d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?")) d.addCallback(lambda r: f("These are fired before the auth and examine callbacks, why?")) reactor.run() The class is suppose to handle logging in and selecting a mailbox and nicely return a IMAP proto ready to use however the two callbacks at the bottom are fired before the other ones, I get why, the callbacks are added before the other ones because buildProtocol hasn't been called yet so what is the best way to handle this, just have a dummy callback added in **init** that "holds" the first spot? Answer: from twisted.internet.endpoints import TCP4ClientEndpoint d = TCP4ClientEndpoint(reactor, host, int(port)).connect(factory) and d.addCallback(lambda r: factory.deferred) instead of d = factory.deferred in `connectToIMAPServer` should do it - your `factory.deferred` will be returned only after protocol is ready. ([Twisted Documentation on writing clients](http://twistedmatrix.com/documents/11.1.0/core/howto/clients.html))
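Putting that back into `connectToIMAPServer` could look roughly like this — a sketch that reuses your `IMAP4ClientFactory`; the SSL branch with `SSL4ClientEndpoint` is my assumption, since the answer above only shows the plain TCP case:

from twisted.internet import reactor, ssl
from twisted.internet.endpoints import TCP4ClientEndpoint, SSL4ClientEndpoint

def connectToIMAPServer(imap_server, username, password):
    factory = IMAP4ClientFactory(username, password, login_insecure=True)
    host, port = imap_server.split(":")
    port = int(port) if port else 143
    if port == 993:
        endpoint = SSL4ClientEndpoint(reactor, host, port, ssl.ClientContextFactory())
    else:
        endpoint = TCP4ClientEndpoint(reactor, host, port)
    d = endpoint.connect(factory)
    # wait for the greeting/login chain before handing the protocol to the caller
    d.addCallback(lambda proto: factory.deferred)
    d.addCallback(lambda r: factory.proto)
    return d

Callbacks added by the caller now only fire after authentication and mailbox selection have finished.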
k-means in python: Determine which data are associated with each centroid Question: I've been using `scipy.cluster.vq.kmeans` for doing some k-means clustering, but was wondering if there's a way to determine which centroid each of your data points is (putativly) associated with. Clearly you could do this manually, but as far as I can tell the kmeans function doesn't return this? Answer: There is a function `kmeans2` in `scipy.cluster.vq` that returns the labels, too. In [8]: X = scipy.randn(100, 2) In [9]: centroids, labels = kmeans2(X, 3) In [10]: labels Out[10]: array([2, 1, 2, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 2, 2, 1, 2, 1, 2, 1, 2, 0, 1, 0, 2, 0, 1, 2, 0, 1, 0, 1, 1, 2, 2, 2, 2, 1, 2, 1, 1, 1, 2, 0, 0, 2, 2, 0, 1, 0, 0, 0, 2, 2, 2, 0, 0, 1, 2, 1, 0, 0, 0, 2, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 2, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 2, 0, 2, 2, 0, 1, 1, 0, 1, 0, 0, 0, 2]) Otherwise, if you must use `kmeans`, you can also use `vq` to get labels: In [17]: from scipy.cluster.vq import kmeans, vq In [18]: codebook, distortion = kmeans(X, 3) In [21]: code, dist = vq(X, codebook) In [22]: code Out[22]: array([1, 0, 1, 0, 2, 2, 2, 0, 1, 1, 0, 2, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 2, 2, 1, 2, 0, 1, 1, 0, 2, 2, 0, 1, 0, 1, 0, 2, 1, 2, 0, 2, 1, 1, 1, 0, 1, 2, 0, 1, 2, 2, 1, 1, 1, 2, 2, 0, 0, 2, 2, 2, 2, 1, 0, 2, 2, 2, 0, 1, 1, 2, 1, 0, 0, 0, 0, 1, 2, 1, 2, 0, 2, 0, 2, 2, 1, 1, 1, 1, 1, 2, 0, 2, 0, 2, 1, 1, 1]) [Documentation: scipy.cluster.vq](http://docs.scipy.org/doc/scipy/reference/cluster.vq.html#module- scipy.cluster.vq)
IPython: redirecting output of a Python script to a file (like bash >) Question: I have a Python script that I want to run in IPython. I want to redirect (write) the output to a file, similar to:

python my_script.py > my_output.txt

How do I do this when I run the script in IPython, i.e. like `execfile('my_script.py')`?

There is an [older page](http://rcjp.wordpress.com/2009/03/24/redirecting-stdout-within-ipython/) describing a function that could be written to do this, but I believe that there is now a built-in way to do this that I just can't find. Answer: IPython has its own context manager for [capturing stdout/err](https://github.com/ipython/ipython/blob/rel-0.13.1/IPython/utils/io.py#L359), but it doesn't redirect to files, it redirects to an object:

from IPython.utils import io

with io.capture_output() as captured:
    %run my_script.py

print captured.stdout # prints stdout from your script

And this functionality is exposed in a `%%capture` cell-magic, as illustrated in the [Cell Magics example notebook](http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/master/examples/notebooks/Cell%20Magics.ipynb).

It's a simple context manager, so you can write your own version that would redirect to files:

import sys

class redirect_output(object):
    """context manager for redirecting stdout/err to files"""
    def __init__(self, stdout='', stderr=''):
        self.stdout = stdout
        self.stderr = stderr

    def __enter__(self):
        self.sys_stdout = sys.stdout
        self.sys_stderr = sys.stderr

        if self.stdout:
            sys.stdout = open(self.stdout, 'w')
        if self.stderr:
            if self.stderr == self.stdout:
                sys.stderr = sys.stdout
            else:
                sys.stderr = open(self.stderr, 'w')

    def __exit__(self, exc_type, exc_value, traceback):
        sys.stdout = self.sys_stdout
        sys.stderr = self.sys_stderr

which you would invoke with:

with redirect_output("my_output.txt"):
    %run my_script.py
How to delete a subwindow in the python curses module Question: I've got a curses application that uses subwindows, but I can't seem to be able to delete them. For example, this code doesn't work: import curses def fill(window, ch): y, x = window.getmaxyx() s = ch * (x - 1) for line in range(y): window.addstr(line, 0, s) def main(stdscr): fill(stdscr, 'M') stdscr.refresh() stdscr.getch() subwin = stdscr.subwin(1, 28, 20, 13) fill(subwin, 'J') subwin.refresh() subwin.getch() del subwin stdscr.touchwin() stdscr.refresh() stdscr.getch() curses.wrapper(main) When you run this code, the screen fills with 'M', then when you hit a key, a subwindow is created and filled with 'J'. Finally, when you press a key again, the code deletes the subwindow and completely redraws the screen. However, those Js are still there. After some experimentation, I've found that calling the clear() method of stdscr will make the subwindow go, but I would like to restore the background as it was, without blanking it and rewriting. Does anyone know a way in which this could be done? Answer: Is there a good reason why you're using a subwindow? If you create a new top- level window then the code works correctly - simply change `stdscr.subwin` to `curses.newwin` and it works as you'd expect. I'm not a curses expert, but I believe a subwindow shares the character buffer with its parent such that changes to either one will also affect the other. So, if you're looking to sub-divide a window into logical areas (perhaps a menu bar, main area and status bar) then subwindows are useful. If, however, you're looking for something more like a dialog box or pop-up menu then a whole new window (with its own separate buffer) is what you're after. I can't find any definitive reference for ncurses which agrees or disagrees with me, but [man page for AIX](https://www.google.co.uk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CDIQFjAA&url=http%3A%2F%2Fpublib.boulder.ibm.com%2Finfocenter%2Fpseries%2Fv5r3%2Ftopic%2Fcom.ibm.aix.basetechref%2Fdoc%2Fbasetrf2%2Fsubwin.htm&ei=gXoSUbadNoWW0QWsx4DABg&usg=AFQjCNHYrPajT- JtFaedSmpAv_OTuGhgLw&bvm=bv.41934586,d.d2k) seems to corroborate it: > Recall that the subwindow shares its parent's window buffer. Changes made to > the shared window buffer in the area covered by a subwindow, through either > the parent window or any of its subwindows, affects all windows sharing the > window buffer. Of course, this isn't definitive for ncurses, but I can't find anything to the contrary and it certainly seems to explain the behaviour observed. I also did a crude experiment where, immediately after the `subwin.getch()` line in your example, I added this line: raise Exception(stdscr.instr(20, 15, 3)) In your example, I get `JJJ` as the content of the actual main window. If I change to use `curses.newwin()` to create the window instead of `stdscr.subwin()` I get the expected `MMM`. I don't know how many specific Python curses resources there are, but most of the standard tutorials and documents about ncurses are quite useful for this sort of level. Back when I had to do some work in it, [this document](http://www.linuxfocus.org/English/March2002/article233.shtml) was quite useful. If you scroll down to the "An Example" section, you'll see that the menu pop-ups are _not_ subwindows - he alludes to this with the following slightly vague explanation: > We don't want this new window to overwrite previously written characters on > the background. They should stay there after the menu closes. 
> This is why the menu window can't be created as a subwindow of stdscr.

Also, I remember that using both `stdscr` and your own windows can cause issues - [the "official" ncurses introduction](http://invisible-island.net/ncurses/ncurses-intro.html) has [some warnings](http://invisible-island.net/ncurses/ncurses-intro.html#caution) about this sort of thing. It also suggests avoiding overlapping windows entirely, as they're apparently error-prone, but I don't recall having any issues with them for short-term transient modal dialogs (which is the only use to which I put them). Of course, just because my simple use-case didn't expose any issues doesn't mean there aren't any. In something as complicated as ncurses, however, I can see the wisdom in keeping things as simple as you can.

I hope that's some help. As I said, I'm by no means a curses expert, but hopefully this gets you a few steps further along.
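For reference, the only change needed in your example is the window creation line — a sketch following the suggestion above (it assumes your existing `fill` helper):

import curses

def main(stdscr):
    fill(stdscr, 'M')
    stdscr.refresh()
    stdscr.getch()

    # an independent window with its own buffer, instead of a subwindow
    # that shares stdscr's buffer
    win = curses.newwin(1, 28, 20, 13)
    fill(win, 'J')
    win.refresh()
    win.getch()

    del win
    stdscr.touchwin()
    stdscr.refresh()
    stdscr.getch()

curses.wrapper(main)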
Handling long running tasks in pika / RabbitMQ Question: We're trying to set up a basic directed queue system where a producer will generate several tasks and one or more consumers will grab a task at a time, process it, and acknowledge the message. The problem is, the processing can take 10-20 minutes, and we're not responding to messages at that time, causing the server to disconnect us. Here's some pseudo code for our consumer: #!/usr/bin/env python import pika import time connection = pika.BlockingConnection(pika.ConnectionParameters( host='localhost')) channel = connection.channel() channel.queue_declare(queue='task_queue', durable=True) print ' [*] Waiting for messages. To exit press CTRL+C' def callback(ch, method, properties, body): long_running_task(connection) ch.basic_ack(delivery_tag = method.delivery_tag) channel.basic_qos(prefetch_count=1) channel.basic_consume(callback, queue='task_queue') channel.start_consuming() After the first task completes, an exception is thrown somewhere deep inside of BlockingConnection, complaining that the socket was reset. In addition, the RabbitMQ logs show that the consumer was disconnected for not responding in time (why it resets the connection rather than sending a FIN is strange, but we won't worry about that). We searched around a lot because we believed this was the normal use case for RabbitMQ (having a lot of long running tasks that should be split up among many consumers), but it seems like nobody else really had this issue. Finally we stumbled upon a thread where it was recommended to use heartbeats and to spawn the `long_running_task()` in a separate thread. So the code has become: #!/usr/bin/env python import pika import time import threading connection = pika.BlockingConnection(pika.ConnectionParameters( host='localhost', heartbeat_interval=20)) channel = connection.channel() channel.queue_declare(queue='task_queue', durable=True) print ' [*] Waiting for messages. To exit press CTRL+C' def thread_func(ch, method, body): long_running_task(connection) ch.basic_ack(delivery_tag = method.delivery_tag) def callback(ch, method, properties, body): threading.Thread(target=thread_func, args=(ch, method, body)).start() channel.basic_qos(prefetch_count=1) channel.basic_consume(callback, queue='task_queue') channel.start_consuming() And this seems to work, but it's very messy. Are we sure that the `ch` object is thread safe? In addition, imagine that `long_running_task()` is using that connection parameter to add a task to a new queue (i.e. the first part of this long process is done, let's send the task on to the second part). So, the thread is using the `connection` object. Is that thread safe? More to the point, what's the preferred way of doing this? I feel like this is very messy and possibly not thread safe, so maybe we're not doing it right. Thanks! Answer: For now, your best bet is to turn off heartbeats, this will keep RabbitMQ from closing the connection if you're blocking for too long. I am experimenting with pika's core connection management and IO loop running in a background thread but it's not stable enough to release.
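Disabling heartbeats is just a connection parameter. Hedged, since pika's keyword has changed names across versions — in the version your snippet uses it is `heartbeat_interval`, and `0` is commonly used to request that no heartbeats be negotiated (check your pika version's docs):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost', heartbeat_interval=0))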
Converting values in a list to their hashed forms (Python) Question: I currently have the following code: from itertools import permutations import hashlib def hash_f(x): h = hashlib.md5(x) return int(h.hexdigest(),base=16) value = raw_input("Enter a value: ") possibleValues = 'a' for p in permutations(possibleValues): if hash_f(value) == hash_f(possibleValues): print "MATCH" (The import and use of permutations is a placeholder for now, it'll be used more once this problem is solved) What I would like to know is whether or not it's possible to iterate through a list and replace its value with the hashed form of that value. Using my current `hash_f(x)` function doesn't work with lists, which is the problem here. Thanks for advance for any help, and let me know if you need more information! Answer: I'm not understanding what your snippet is supposed to do, but your question seems like it could be answered with a list comprehension. from hashlib import md5 input_list = ['a','b','c','d','e'] hashed_list = [int(md5(x).hexdigest(), base=16) for x in input_list] # Do whatever you wanted to do with the list of hashes....
Python's string template changes brackets when variable is unset Question: I have Python code which attempts to replace variables using the special syntax $[VARIABLE] (note square brackets) and string.template.safe_substitute(). This is working fine, with the one exception that when an undefined variable is referenced, instead of leaving the reference alone as safe_substitute() is documented to do it replaces the square brackets with curly ones. The advanced use of RE's in templates is not documented in detail (<http://docs.python.org/2/library/string.html#template- strings>) so probably I'm just using them wrong. Ideas? Here's an example run of the test case; note that everything works fine when the var is defined: % python tmpl.py ===$[UNDEFINED]=== ===${UNDEFINED}=== % UNDEFINED=Hello python tmpl.py ===$[UNDEFINED]=== ===Hello=== And here's the test case itself: import os from string import Template # The strategy here is to replace $[FOO] but leave traditional # shell-type expansions $FOO and ${FOO} alone. # The '(?!)' below is a guaranteed-not-to-match RE. class _Replace(Template): pattern = r""" \$(?: (?P<escaped>(?!)) | # no escape char (?P<named>(?!)) | # do not match $FOO, ${FOO} \[(?P<braced>[A-Z_][A-Z_\d]+)\] | # do match $[FOO] (?P<invalid>) ) """ if '__main__' == __name__: text = '===$[UNDEFINED]===' tmpl = _Replace(text) result = tmpl.safe_substitute(os.environ) print text print result Answer: `Template` assumes the brace character is actually a brace: `string.py:194`: if braced is not None: try: return '%s' % (mapping[braced],) except KeyError: return self.delimiter + '{' + braced + '}' If you think it's a bug, post it up at <http://bugs.python.org>. Otherwise, I'd recommend just using `{}` as your delimiting character if possible.
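If all you need is "expand `$[FOO]` and leave unknown references exactly as written", a plain `re.sub` sidesteps `Template`'s hard-coded brace behaviour entirely — a sketch that does not use `string.Template` at all:

import os
import re

_VAR = re.compile(r'\$\[([A-Z_][A-Z_\d]+)\]')

def expand(text, mapping=os.environ):
    def repl(match):
        name = match.group(1)
        return str(mapping.get(name, match.group(0)))   # leave unknown refs untouched
    return _VAR.sub(repl, text)

print expand('===$[UNDEFINED]===')   # ===$[UNDEFINED]===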
TypeErrorException was unhandled IronPython Question: I have a script written in python, that I am invoking using IronPython from C#. But I run into an exception as soon as I call a method, and it throws as exception. This is the script that I am invoking: import sys import ctypes class EAHashingAlgorithm(): def __init__(self): # shift amounts in each round self.r1Shifts = [ 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22, 7, 12, 17, 22 ] self.r2Shifts = [ 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20, 5, 9, 14, 20 ] self.r3Shifts = [ 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23, 4, 11, 16, 23 ] self.r4Shifts = [ 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21, 6, 10, 15, 21 ] self.hexChars = "0123456789abcdef" def zero_fill_right_shit(self, data, bits): return (data & 0xffffffff) >> bits def num2hex(self, num): ''' Convert a decimal number to hexadecimal ''' temp = '' for i in range(0, 4): x = self.hexChars[ ( num >> (i * 8 + 4) ) & 0x0F ] y = self.hexChars[ ( num >> (i * 8) ) & 0x0F ] temp += (x + y) return temp def chunkMessage(self, string): # TODO: ctypes.c_int32() in this method nblk = (( len(string) + 8) >> 6) + 1 blks = [0] * (nblk * 16) for i in range(0, len(string)): blks[i >> 2] |= ord(string[i]) << ((i % 4) * 8) i = i + 1 blks[i >> 2] |= 0x80 << ((i % 4) * 8) blks[nblk * 16 - 2] = len(string) * 8 return blks def add(self, x, y): lsw = (x & 0xFFFF) + (y & 0xFFFF) msw = (ctypes.c_int32(x >> 16).value) + (ctypes.c_int32(y >> 16).value) + (ctypes.c_int32(lsw >> 16).value) return ctypes.c_int32( (ctypes.c_int32(msw << 16).value) | (lsw & 0xFFFF) ).value # Bitwise rotate 32bit num to left def bitwiseRotate(self, x, c): return ctypes.c_int32( ctypes.c_int32(x << c).value | self.zero_fill_right_shit(x, 32 - c) ).value # Basic MD5 operations def cmn(self, q, a, b, x, s, t): z1 = self.add(a, q) z2 = self.add(x, t) a1 = self.add(z1, z2) x1 = self.bitwiseRotate(a1, s) return self.add(x1, b) def md5_f(self, a, b, c, d, x, s, t): return self.cmn( ctypes.c_int32((b & c) | ((~b) & d)).value, a, b, x, s, t ) def md5_g(self, a, b, c, d, x, s, t): return self.cmn( ctypes.c_int32((b & d) | (c & (~d))).value, a, b, x, s, t ) def md5_h(self, a, b, c, d, x, s, t): return self.cmn( ctypes.c_int32(b ^ c ^ d).value, a, b, x, s, t ) def md5_i(self, a, b, c, d, x, s, t): return self.cmn( ctypes.c_int32(c ^ (b | (~d))).value, a, b, x, s, t ) def EAHash(self, string): x = self.chunkMessage(string) a = 1732584193 b = -271733879 c = -1732584194 d = 271733878 for i in range(0, 16, 16): tempA = a tempB = b tempC = c tempD = d # F a = self.md5_f(a, b, c, d, x[i+0], self.r1Shifts[0], -680876936) d = self.md5_f(d, a, b, c, x[i+1], self.r1Shifts[1], -389564586) c = self.md5_f(c, d, a, b, x[i+2], self.r1Shifts[2], 606105819) b = self.md5_f(b, c, d, a, x[i+3], self.r1Shifts[3], -1044525330) a = self.md5_f(a, b, c, d, x[i+4], self.r1Shifts[4], -176418897) d = self.md5_f(d, a, b, c, x[i+5], self.r1Shifts[5], 1200080426) c = self.md5_f(c, d, a, b, x[i+6], self.r1Shifts[6], -1473231341) b = self.md5_f(b, c, d, a, x[i+7], self.r1Shifts[7], -45705983) a = self.md5_f(a, b, c, d, x[i+8], self.r1Shifts[8], 1770035416) d = self.md5_f(d, a, b, c, x[i+9], self.r1Shifts[9], -1958414417) c = self.md5_f(c, d, a, b, x[i+10], self.r1Shifts[10], -42063) b = self.md5_f(b, c, d, a, x[i+11], self.r1Shifts[11], -1990404162) a = self.md5_f(a, b, c, d, x[i+12], self.r1Shifts[12], 1804603682) d = self.md5_f(d, a, b, c, x[i+13], self.r1Shifts[13], -40341101) c = self.md5_f(c, d, a, b, x[i+14], self.r1Shifts[14], -1502002290) b = self.md5_f(b, c, d, a, 
x[i+15], self.r1Shifts[15], 1236535329) # G a = self.md5_g(a, b, c, d, x[i+1], self.r2Shifts[0], -165796510) d = self.md5_g(d, a, b, c, x[i+6], self.r2Shifts[1], -1069501632) c = self.md5_g(c, d, a, b, x[i+11], self.r2Shifts[2], 643717713) b = self.md5_g(b, c, d, a, x[i+0], self.r2Shifts[3], -373897302) a = self.md5_g(a, b, c, d, x[i+5], self.r2Shifts[4], -701558691) d = self.md5_g(d, a, b, c, x[i+10], self.r2Shifts[5], 38016083) c = self.md5_g(c, d, a, b, x[i+15], self.r2Shifts[6], -660478335) b = self.md5_g(b, c, d, a, x[i+4], self.r2Shifts[7], -405537848) a = self.md5_g(a, b, c, d, x[i+9], self.r2Shifts[8], 568446438) d = self.md5_g(d, a, b, c, x[i+14], self.r2Shifts[9], -1019803690) c = self.md5_g(c, d, a, b, x[i+3], self.r2Shifts[10], -187363961) b = self.md5_g(b, c, d, a, x[i+8], self.r2Shifts[11], 1163531501) a = self.md5_g(a, b, c, d, x[i+13], self.r2Shifts[12], -1444681467) d = self.md5_g(d, a, b, c, x[i+2], self.r2Shifts[13], -51403784) c = self.md5_g(c, d, a, b, x[i+7], self.r2Shifts[14], 1735328473) b = self.md5_g(b, c, d, a, x[i+12], self.r2Shifts[15], -1926607734) # H a = self.md5_h(a, b, c, d, x[i+5], self.r3Shifts[0], -378558) d = self.md5_h(d, a, b, c, x[i+8], self.r3Shifts[1], -2022574463) # line below uses self.r2Shifts[2] where as MD5 would use self.r3Shifts[2] c = self.md5_h(c, d, a, b, x[i+11], self.r2Shifts[2], 1839030562) b = self.md5_h(b, c, d, a, x[i+14], self.r3Shifts[3], -35309556) a = self.md5_h(a, b, c, d, x[i+1], self.r3Shifts[4], -1530992060) d = self.md5_h(d, a, b, c, x[i+4], self.r3Shifts[5], 1272893353) c = self.md5_h(c, d, a, b, x[i+7], self.r3Shifts[6], -155497632) b = self.md5_h(b, c, d, a, x[i+10], self.r3Shifts[7], -1094730640) a = self.md5_h(a, b, c, d, x[i+13], self.r3Shifts[8], 681279174) d = self.md5_h(d, a, b, c, x[i+0], self.r3Shifts[9], -358537222) c = self.md5_h(c, d, a, b, x[i+3], self.r3Shifts[10], -722521979) b = self.md5_h(b, c, d, a, x[i+6], self.r3Shifts[11], 76029189) a = self.md5_h(a, b, c, d, x[i+9], self.r3Shifts[12], -640364487) d = self.md5_h(d, a, b, c, x[i+12], self.r3Shifts[13], -421815835) c = self.md5_h(c, d, a, b, x[i+15], self.r3Shifts[14], 530742520) b = self.md5_h(b, c, d, a, x[i+2], self.r3Shifts[15], -995338651) # I a = self.md5_i(a, b, c, d, x[i+0], self.r4Shifts[0], -198630844) d = self.md5_i(d, a, b, c, x[i+7], self.r4Shifts[1], 1126891415) c = self.md5_i(c, d, a, b, x[i+14], self.r4Shifts[2], -1416354905) b = self.md5_i(b, c, d, a, x[i+5], self.r4Shifts[3], -57434055) a = self.md5_i(a, b, c, d, x[i+12], self.r4Shifts[4], 1700485571) d = self.md5_i(d, a, b, c, x[i+3], self.r4Shifts[5], -1894986606) c = self.md5_i(c, d, a, b, x[i+10], self.r4Shifts[6], -1051523) b = self.md5_i(b, c, d, a, x[i+1], self.r4Shifts[7], -2054922799) a = self.md5_i(a, b, c, d, x[i+8], self.r4Shifts[8], 1873313359) d = self.md5_i(d, a, b, c, x[i+15], self.r4Shifts[9], -30611744) c = self.md5_i(c, d, a, b, x[i+6], self.r4Shifts[10], -1560198380) b = self.md5_i(b, c, d, a, x[i+13], self.r4Shifts[11], 1309151649) a = self.md5_i(a, b, c, d, x[i+4], self.r4Shifts[12], -145523070) d = self.md5_i(d, a, b, c, x[i+11], self.r4Shifts[13], -1120210379) c = self.md5_i(c, d, a, b, x[i+2], self.r4Shifts[14], 718787259) b = self.md5_i(b, c, d, a, x[i+9], self.r4Shifts[15], -343485551) # This line is doubled for some reason, line below is not in the MD5 version b = self.md5_i(b, c, d, a, x[i+ 9], self.r4Shifts[15], -343485551) a = self.add(a, tempA) b = self.add(b, tempB) c = self.add(c, tempC) d = self.add(d, tempD) return self.num2hex(a) + self.num2hex(b) 
+ self.num2hex(c) + self.num2hex(d) This is how I invoke the method in the class: ScriptSource source = engine.CreateScriptSourceFromFile("EAHashingAlgorithm.py"); ScriptScope scope = engine.CreateScope(); engine.SetSearchPaths(new[] { "C:/Program Files (x86)/IronPython 2.7/Lib" }); source.Execute(scope); dynamic eaHash = scope.GetVariable("EAHashingAlgorithm"); dynamic hash = eaHash(); this.answer = hash.EAHash(_answer); //lets suppose the answer is "123456" It throws the following exception: expected signed long, got long This is the stacktrace: at IronPython.Modules.ModuleOps.GetSignedLong(Object value, Object type) at IronPython.Modules.CTypes.SimpleType.IronPython.Modules.CTypes.INativeType.SetValue(MemoryHolder owner, Int32 offset, Object value) at IronPython.Modules.CTypes.SimpleCData.__init__(Object value) at CallSite.Target(Closure , CallSite , CodeContext , Object , Object ) at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2) at Microsoft.Scripting.Interpreter.DynamicInstruction`4.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.LightLambda.Run4[T0,T1,T2,T3,TRet](T0 arg0, T1 arg1, T2 arg2, T3 arg3) at System.Dynamic.UpdateDelegates.UpdateAndExecute3[T0,T1,T2,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2) at Microsoft.Scripting.Interpreter.DynamicInstruction`4.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.LightLambda.Run4[T0,T1,T2,T3,TRet](T0 arg0, T1 arg1, T2 arg2, T3 arg3) at IronPython.Compiler.PythonCallTargets.OriginalCallTarget3(PythonFunction function, Object arg0, Object arg1, Object arg2) at IronPython.Runtime.FunctionCaller`3.Call3(CallSite site, CodeContext context, Object func, T0 arg0, T1 arg1, T2 arg2) at IronPython.Runtime.Method.MethodBinding`2.SelfTarget(CallSite site, CodeContext context, Object target, T0 arg0, T1 arg1) at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3) at Microsoft.Scripting.Interpreter.DynamicInstruction`5.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.LightLambda.Run8[T0,T1,T2,T3,T4,T5,T6,T7,TRet](T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7) at IronPython.Compiler.PythonCallTargets.OriginalCallTarget7(PythonFunction function, Object arg0, Object arg1, Object arg2, Object arg3, Object arg4, Object arg5, Object arg6) at IronPython.Runtime.FunctionCaller`7.Call7(CallSite site, CodeContext context, Object func, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6) at System.Dynamic.UpdateDelegates.UpdateAndExecute9[T0,T1,T2,T3,T4,T5,T6,T7,T8,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8) at IronPython.Runtime.Method.MethodBinding`6.SelfTarget(CallSite site, CodeContext context, Object target, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5) at System.Dynamic.UpdateDelegates.UpdateAndExecute8[T0,T1,T2,T3,T4,T5,T6,T7,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7) at Microsoft.Scripting.Interpreter.DynamicInstruction`9.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.LightLambda.Run9[T0,T1,T2,T3,T4,T5,T6,T7,T8,TRet](T0 arg0, T1 
arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8) at IronPython.Compiler.PythonCallTargets.OriginalCallTarget8(PythonFunction function, Object arg0, Object arg1, Object arg2, Object arg3, Object arg4, Object arg5, Object arg6, Object arg7) at IronPython.Runtime.FunctionCaller`8.Call8(CallSite site, CodeContext context, Object func, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7) at System.Dynamic.UpdateDelegates.UpdateAndExecute10[T0,T1,T2,T3,T4,T5,T6,T7,T8,T9,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8, T9 arg9) at IronPython.Runtime.Method.MethodBinding`7.SelfTarget(CallSite site, CodeContext context, Object target, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6) at System.Dynamic.UpdateDelegates.UpdateAndExecute9[T0,T1,T2,T3,T4,T5,T6,T7,T8,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8) at Microsoft.Scripting.Interpreter.DynamicInstruction`10.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame) at Microsoft.Scripting.Interpreter.LightLambda.Run3[T0,T1,T2,TRet](T0 arg0, T1 arg1, T2 arg2) at IronPython.Compiler.PythonCallTargets.OriginalCallTarget2(PythonFunction function, Object arg0, Object arg1) at IronPython.Runtime.FunctionCaller`2.Call2(CallSite site, CodeContext context, Object func, T0 arg0, T1 arg1) at System.Dynamic.UpdateDelegates.UpdateAndExecute4[T0,T1,T2,T3,TRet](CallSite site, T0 arg0, T1 arg1, T2 arg2, T3 arg3) at CallSite.Target(Closure , CallSite , Object , String ) at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1) at CallSite.Target(Closure , CallSite , Object , String ) at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1) at FUTBuyer.FUT.ConnectFUT.work(String _uname, String _password, String _answer, String _platform) in E:\Projects\FUTBuyer\FUTBuyer\FUT\ConnectFUT.cs:line 214 at FUTBuyer.FUT.ConnectFUT..ctor(String userName, String password, String secretAnswer, String platform) in E:\Projects\FUTBuyer\FUTBuyer\FUT\ConnectFUT.cs:line 58 at FUTBuyer.Form1.button1_Click(Object sender, EventArgs e) in E:\Projects\FUTBuyer\FUTBuyer\Form1.cs:line 21 at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at FUTBuyer.Program.Main() in 
E:\Projects\FUTBuyer\FUTBuyer\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart()

The code runs fine when run using python alone. I am using IronPython, version 2.7.3. Help, anyone? Answer: The problem seems to emerge from how IronPython's `ctypes.c_int32` handles long values greater than the maximum integer value. This could either be a general difference in the run-time or in the ctypes implementation.

In order to achieve the same behavior as in CPython (as a workaround), you would have to ensure that no argument larger than the max integer size is passed into c_int32. But looking at your code this is probably not what you want, because you seem to call c_int32 for the very reason of truncating shift results that flow into the long range.

Have you considered masking instead of c_int32? This might have performance implications, but that depends on your use-case/algorithm. Instead of ctypes.c_int32(x).value you could use x & 0xffffffff, and your results should remain the same.

To be sure, you could [file a bug](http://ironpython.codeplex.com/workitem/list/basic) with IronPython, which is probably your best chance of finding out why IronPython behaves differently from CPython.
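If you want to keep exactly the signed semantics of `ctypes.c_int32` while avoiding ctypes (and with it the IronPython difference), a small pure-Python helper does the same truncation — a sketch:

def to_int32(x):
    # emulate ctypes.c_int32(x).value without ctypes
    x &= 0xffffffff
    if x >= 0x80000000:
        x -= 0x100000000
    return x

You could then replace every `ctypes.c_int32(...).value` call in the hashing code with `to_int32(...)`.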
Python math module logarithm functions Question: > **Possible Duplicate:** > [Inaccurate Logarithm in > Python](http://stackoverflow.com/questions/931995/inaccurate-logarithm-in- > python) Why are the `math.log10(x)` and `math.log(x,10)` results different? In [1]: from math import * In [2]: log10(1000) Out[2]: 3.0 In [3]: log(1000,10) Out[3]: 2.9999999999999996 Answer: It's a known bug : <http://bugs.python.org/issue3724> Seems logX(y) is always more precise than the equivalent log(Y, X).
Python: how to print a variable inside a defined function separately? Question: I want to know how to print a variable inside a function, with python, for example: import math def number(): print number_enters = input("Please enter the number: ") square_roots = math.sqrt(number_enters) print square_roots But how can I print the square_roots separately, like print it at the end of the program(after other functions) for example: import math def number(): print number_enters = input("Please enter the number: ") square_roots = math.sqrt(number_enters) print square_roots def bla_bla(): ..... ..... ....... ....... print square_roots I hope that make sense, please help me out, how can I print square_roots after all of that? Answer: Replace `print square_roots` with `return square_roots` in the function. This will allow you to use the variable later. def number(): print number_enters = input("Please enter the number: ") square_roots = math.sqrt(number_enters) return square_roots Then you can store the result in a variable and then print it. square_root = number() ..... ..... print square_root
python integer division error - modulo by zero - BUT divisor != 0 Question: I am new to doing simple math using python, so sorry if this is a silly question. I have 8 variables that are all set to integers and these integers are used when performing a simple calculation. a = 0 b = 17 c = 152 d = 1 e = 133 f = 19 g = 20 h = 0 answer = ( ( ( a / f ) + b + c ) - ( g + ( h / f ) ) ) / ( d / f ) print answer When I run this code, I get the error, ZeroDivisionError: integer division or modulo by zero. I have read about this error and all documentation points towards my divisor being zero, but if I print this with the numbers as strings in place of the variables, I get: ( ( ( 0 / 19 ) + 17 + 152 ) - ( 20 + ( 0 / 19 ) ) ) / ( 1 / 19 ) Nowhere in this statement is the divisor zero. Please let me know how I need to change my expression in order to get the answer 2831. Note that I can change the type of the variables to a float or other. Thank you for your help! Answer: Probably you are using `Python 2.x`, where `x / y` is an `integer division`. So, in the below code: - ( 20 + ( 0 / 19 ) ) ) / ( 1 / 19 ) `1 / 19` is an `integer division`, which results in `0`. So the expression is essentially same as: - ( 20 + ( 0 / 19 ) ) ) / 0 Now you see where the error comes from. * * * You can add following import in you `python` code, to enforce floating-point division: - from __future__ import division Or you could cast one of the integers to a float, using either `float()` or just by adding `.0` to the initial value.
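For completeness, a quick sketch of the original numbers with the future-import fix applied (the float() variant behaves the same way):

    from __future__ import division  # make / behave as true division, as in Python 3

    a, b, c, d, e, f, g, h = 0, 17, 152, 1, 133, 19, 20, 0

    answer = (((a / f) + b + c) - (g + (h / f))) / (d / f)
    print answer   # 2831.0, up to a tiny floating-point rounding error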
Advice on robot control - image processing design architecture Question: This is more a software engineering question than a Python question, but since I am using Python to implement everything, any Python solution is very welcome. I have two classes. One class controls a robot and sends to it commands to perform. Like moving to some position or do some action. The communication between the robot and the computer is USB serial. The robot has a camera. This camera is connected to the computer using a USB cable. The other class I have receives frames from this camera, and analyzes them. So I have two classes. One that controls the robot, and one that performs all the image processing. My initial idea was to have a file "main.py", which creates an object for each class, and handles it. I think this solution is too simple, ugly and hard to escale. At the same time, the image processing part requires a continuous loop where the frames are grabbed. This means that I will have to insert my robot code inside the loop. I would this not to happen, and have the classes running on different processes. I am open to any suggestion, like using sockets or anything. I am just starting to implement it so I am very open. Answer: Okay, this example should give you an idea on how you could split the vision and motor control tasks. Run it to see how the tasks run side by side. Can't help with the specifics of _receiving_ the image without more details though. Whether you put your classes in separate files or not is entirely up to you, it doesn't really matter. I tend to put several classes together in a file for smaller projects, but spread them out more in larger ones. import threading import time import random import sys class Robot: def __init__(self): self.worldKnowledge = {"currentFrame": 0, "objects":[]} self.visionThread = threading.Thread(target=self.vision) self.visionThread.daemon = True self.motorThread = threading.Thread(target=self.motorControl) self.motorThread.daemon = True self.visionThread.start() self.motorThread.start() while True: # This is to keep the main thread alive and responsive while the # vision and motor threads are running try: time.sleep(1) except KeyboardInterrupt: sys.exit() def getFrame(self): # simulate retrieval of latest camera frame # return value indicates whether there is a new frame or not if random.random() > 0.7: self.worldKnowledge["currentFrame"] += 1 return True return False def vision(self): while True: if self.getFrame(): print "Processing incoming frame!" # do object detection here = detected objects can go into self.worldKnowledge, # for example: # self.worldKnowledge["objects"] += [(objectID, objectLocation, objectSize, isMoving, currentFrame )] time.sleep(1) def motorControl(self): # move the robot based on the most recent world knowledge while True: print("Acting on latest information (from frame %i)" % self.worldKnowledge["currentFrame"]) time.sleep(1) if __name__ == "__main__": Robot()
Get location coordinates using bing or google API in python Question: Here is my problem. I have a sample text file where I store the text data by crawling various html pages. This text contains information about various events and its time and location. I want to fetch the coordinates of these locations. I have no idea on how I can do that in python. I am using nltk to recognize named entities in this sample text. Here is the code: import nltk with open('sample.txt', 'r') as f: sample = f.read() sentences = nltk.sent_tokenize(sample) tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences] tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences] chunked_sentences = nltk.batch_ne_chunk(tagged_sentences, binary=True) #print chunked_sentences #print tokenized_sentences #print tagged_sentences def extract_entity_names(t): entity_names = [] if hasattr(t, 'node') and t.node: if t.node == 'NE': entity_names.append(' '.join([child[0] for child in t])) else: for child in t: entity_names.extend(extract_entity_names(child)) return entity_names entity_names = [] for tree in chunked_sentences: # Print results per sentence # print extract_entity_names(tree) entity_names.extend(extract_entity_names(tree)) # Print all entity names #print entity_names # Print unique entity names print set(entity_names) Sample file is something like this: > La bohΓ¨me at Covent Garden > > When: 18 Jan 2013 (various dates) , 7.30pm Where: Covent Garden, London, > John Copley's perennially popular Royal Opera production of Puccini's La > bohΓ¨me is revived for the first of two times this season, aptly over the > Christmas period. Sir Mark Elder conducts Rolando VillazΓ³n as Rodolfo and > Maija Kovalevska as MimΓ¬. MimΓ¬ meets poet Rodolfo (Dmytro Popov sings the > role on 5 and 18 January) one cold Christmas Eve in Paris' Latin Quarter. > Fumbling around in the dark after her candle has gone out, they fall in > love. Rodolfo lives with three other lads: philosopher Colline (Nahuel di > Pierro/Jihoon Kim on 18 January), musician Schaunard (David Bizic) and > painter Marcello (Audun Iversen), who loves Musetta (Stefania Dovhan). Both > couples break up and the opera ends in tragedy as Rodolfo finds MimΓ¬ dying > of consumption in a freezing garret. I want to fetch coordinates for Covent Garden,London from this text. How can I do it ? Answer: You really have two questions: 1. How to extract location text (or potential location text). 2. How to get location (latitude, longitude) by calling a Geocoding service with location text. I can help with the second question. (But see edit below for some help with your first question.) With the old Google Maps API (which is still working), you could get the geocoding down to one line (one ugly line): def geocode(address): return tuple([float(s) for s in list(urllib.urlopen('http://maps.google.com/maps/geo?' + urllib.urlencode({'output': 'csv','q': address})))[0].split(',')[2:]]) Check out the [Google Maps API Geocoding Documentation](https://developers.google.com/maps/documentation/geocoding/): Here’s the readable 7 line version plus some wrapper code (when calling from the command line remember to enclose address in quotes): import sys import urllib googleGeocodeUrl = 'http://maps.google.com/maps/geo?' 
def geocode(address): parms = { 'output': 'csv', 'q': address} url = googleGeocodeUrl + urllib.urlencode(parms) resp = urllib.urlopen(url) resplist = list(resp) line = resplist[0] status, accuracy, latitude, longitude = line.split(',') return latitude, longitude def main(): if 1 < len(sys.argv): address = sys.argv[1] else: address = '1600 Amphitheatre Parkway, Mountain View, CA 94043, USA' coordinates = geocode(address) print coordinates if __name__ == '__main__': main() It's simple to parse the CSV format, but the XML format has better error reporting. ## Edit - Help with your first question I looked in to `nltk`. It's not trivial, but I can recommend [Natural Language Toolkit Documentation, CH 7 - Extracting Information from Text](http://nltk.org/book/ch07.html), specifically, `7.5 Named Entity Recognition`. At the end of the section, they point out: > NLTK provides a classifier that has already been trained to recognize named > entities, accessed with the function nltk.ne_chunk(). If we set the > parameter binary=True , then named entities are just tagged as NE; > otherwise, the classifier adds category labels such as PERSON, ORGANIZATION, > and GPE. You're specifying `True`, but you probably want the category labels, so: chunked_sentences = nltk.batch_ne_chunk(tagged_sentences) This provides category labels (named entity type), which seemed promising. But after trying this on your text and a few simple phrases with location, it's clear more rules are needed. Read the documentation for more info.
Display waiting time in text box using WXPython Question: I want to make a application in which there are two buttons(say click & ok) and a text box.I want to generate waiting time between clicking on two buttons and display in text box. **For Example:-** If I first click on CLICK Button and wait for few seconds/minutes then click on OK Button. Then this waiting time should be displayed on text box. Please help me. Thanks in advance Answer: This is very simple to do with [Glade Interface Designer](http://glade.gnome.org/). `import time` then, create a variable `self.timer = None`. Then when Click's `event` is raised simply state something like: `self.timer = time.time()`. When OK's `event` is raised, write to the `TextBox`: `str((time.time()-self.timer))`. This worked for me. If you aren't using glade, you can still use the same concept after you create your GUI. * * * See Also - [Python - time.clock() vs. time.time() - accuracy?](http://stackoverflow.com/questions/85451/python-time-clock-vs-time- time-accuracy) * * * If you are asking how to code everything (including the GUI) in wxPython, I would suggest looking at some [tutorials](http://wiki.wxpython.org/Getting%20Started#A_First_Application%3a_.22Hello.2C_World.22) first.
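Since the question asks about wxPython itself, here is a minimal self-contained sketch of that idea done entirely in wxPython (the widget names and layout are only illustrative):

    import time
    import wx

    class TimerFrame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, title="Waiting time")
            panel = wx.Panel(self)
            self.start = None                      # set when CLICK is pressed

            self.click_btn = wx.Button(panel, label="Click")
            self.ok_btn = wx.Button(panel, label="OK")
            self.output = wx.TextCtrl(panel)

            self.click_btn.Bind(wx.EVT_BUTTON, self.on_click)
            self.ok_btn.Bind(wx.EVT_BUTTON, self.on_ok)

            sizer = wx.BoxSizer(wx.VERTICAL)
            for widget in (self.click_btn, self.ok_btn, self.output):
                sizer.Add(widget, 0, wx.ALL | wx.EXPAND, 5)
            panel.SetSizer(sizer)

        def on_click(self, event):
            self.start = time.time()               # remember when CLICK happened

        def on_ok(self, event):
            if self.start is not None:
                waited = time.time() - self.start
                self.output.SetValue("%.1f seconds" % waited)

    if __name__ == "__main__":
        app = wx.App(False)
        TimerFrame().Show()
        app.MainLoop()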
create a python shell environment launcher to use pyQgis Question: I am trying to modify the shell launcher found at "<http://inasafe.linfiniti.com/html/id/developer-docs/platform_windows.html>" so that I can use it to directly launch any shell I'd like (in my case, I wanna use the default IDLE gui in Python 27 library folder for windows). My changes didn't bring me to an acceptable result so far. Here is my version of the launcher, where I should change it? @echo off SET PyShell=C:\Programmi\Quantum GIS Lisboa call "%PyShell%"\apps\Python27\Lib\idlelib\PyShell.pyc @echo off SET GDAL_DRIVER_PATH=%PyShell%\bin\gdalplugins\1.9 path %PATH%;%PyShell%\apps\qgis\bin path %PATH%;%PyShell%\apps\grass\grass-6.4.2\lib path %PATH%;"%PyShell%\apps\Python27\Scripts\" set PYTHONPATH=%PYTHONPATH%;%PyShell%\apps\qgis\python; set PYTHONPATH=%PYTHONPATH%;%PyShell%\apps\Python27\Lib\site-packages set QGIS_PREFIX_PATH=%PyShell%\apps\qgis start "Quantum GIS Shell" /B "cmd.exe" %* My OS is Windows XP, the version of Python is 2.7.3, while Qgis is 1.8 (Lisboa). I am reeeeally new to Python and stuff, so please forgive my big mistakes if there are some (but I'm pretty sure there are). Answer: Finally the solution was int the line where I specify which program to use as shell. In my case, I found bot "Pyshell.pyc" and "idle.pyw". The second seems to be the right to poit at, as everything works fine now. So, my personal shell launcher looks like this: @echo off SET IDLE=C:\PROGRA~1\QUANTU~1 call "%IDLE%"\apps\Python27\Lib\idlelib\idle.pyw @echo off SET GDAL_DRIVER_PATH=%IDLE%\bin\gdalplugins\1.9 path %PATH%;%IDLE%\apps\qgis\bin path %PATH%;%IDLE%\apps\grass\grass-6.4.2\lib path %PATH%;"%IDLE%\apps\Python27\Scripts\" set PYTHONPATH=%PYTHONPATH%;%IDLE%\apps\qgis\python; set PYTHONPATH=%PYTHONPATH%;%IDLE%\apps\Python27\Lib\site-packages set QGIS_PREFIX_PATH=%IDLE%\apps\qgis start "Quantum GIS Shell" /B "cmd.exe" %* Anyway, this was a workaround to set the environment variables to use PyQGIS, as I had difficulties to set them in the "normal" way. Unfortuntely, I didn't fix my problem, as I'm still getting errors while importing the "qgis.core" module, while using the original shell launcher through cmd.exe it works, but it's not a good IDE though...
"ImportError: No module named tkinter" when using Pmw Question: Here's my problem: I'm running the code in [this](http://code.activestate.com/recipes/271249-how-to-create-linked- optionmenus-or-other-lists-in/) example. I have Python 2.7 and 3 installed on my RaspberryPi but I have checked and double-checked, and I am running the code in 2.7. I've installed Pmw 2.0.0 under 2.7, not 3, but when I try to run, I get the "ImportError: No module named tkinter" error. I use Tkinter all the time, so it usually works fine, and I've done a search to verify that I'm definitely calling "Tkinter", not "tkinter", so I think it has to be a problem with Pmw, which also seems to be indicated by the traceback (posted in full at the bottom of my question). I can't for the life of me find a specific place where Pmw is looking for lower-case "tkinter", and I'm at a loss for how to work around this. I'm not eager to switch platforms--this is for work, so unless this is unfixable, I need to stick with Tkinter. Oh, and I am pretty new to Python, so I would love to find out that it's a simple problem that someone on here can spot easily. import sys; print sys.path gives me: ['/home/pi/Desktop', '/home/pi', '/usr/bin', '/usr/local/lib/python2.7/dist-packages/distribute-0.6.28-py2.7.egg', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/pymodules/python2.7'] The full traceback is: Traceback (most recent call last): File "/home/pi/Desktop/LinkedMenusSample.py", line 151, in <module> Pmw.initialise(root) File "/usr/local/lib/python2.7/dist-packages/Pmw/Pmw_2_0_0/lib/PmwLoader.py", line 131, in __getattr__ self._initialise() File "/usr/local/lib/python2.7/dist-packages/Pmw/Pmw_2_0_0/lib/PmwLoader.py", line 89, in _initialise raise ImportError(msg) ImportError: No module named tkinter Answer: Maybe I can help you on how to remove the error. here are two thoughts: 1) you use python 2.xx and have installed the python 3 pwm module (Tkinter was renamed to tkinter from Python 2 to 3) 2) you do the following before the import and hope it helps: #import tkinter #Traceback (most recent call last): # File "<pyshell#11>", line 1, in <module> # import tkinter #ImportError: No module named tkinter import sys, Tkinter sys.modules['tkinter'] = Tkinter # put the module where python looks first for modules #import tkinter # now works!
How can Linux program, e.g. bash or python script, know how it was started: from command line or interactive GUI? Question: I want to do the following: If the bash/python script is launched from a terminal, it shall do something such as printing an error message text. If the script is launched from GUI session like double-clicking from a file browser, it shall do something else, e.g. display a GUI message box. Answer: You can check to see whether `stdin` and `stdout` are connected to a terminal or not. When run from a GUI, generally `stdin` is not connected at all, and `stdout` is connected to a log file. When run from a terminal, both `stdin` and `stdout` will be connected to a terminal. In Python: import os import sys if os.isatty(sys.stdout.fileno()): # print error message text else: # display GUI message You should check that this will work for you, though, since it doesn't do precisely what you asked for. But it's the best thing that I can think of that doesn't depend on too much magic. You should check that the `DISPLAY` environment variable is set before going with GUI code too, since it won't work without that. Note that terminal users can still redirect `stdin` or `stdout` to `/dev/null` (for example) and this might cause your program to go with the GUI behaviour. So it's far from perfect. Finally, even though I've given you an answer, please don't do this! It is confusing to users for a program's behaviour to change depending on how it was called.
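Combining the isatty check and the DISPLAY check from the answer into one sketch (show_gui_message is a placeholder for whatever GUI-toolkit call you would actually make):

    import os
    import sys

    if os.isatty(sys.stdout.fileno()):
        sys.stderr.write("error message text\n")     # launched from a terminal
    elif os.environ.get('DISPLAY'):
        show_gui_message("error message text")       # placeholder for a real GUI dialog
    else:
        pass                                         # no terminal and no display; log somewhere instead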
Segmentation fault at the end of python program in Ubuntu Question: I have a python script that seems to work in an Eclipse runtime configuration. When I run it in at the Ubuntu command-line, I get a segmentation fault after the main program ends. Why is it happening and how can I solve it or even debug it? $:~/ober/code/impute/impute/batch-beagle$ python ~/ober/code/impute/bin/ibd_segments.py -v 1 ~/ober/data/hutt/chr22/hutt.stage5.npz ~/ober/data/hutt/hutt.kinship /home/oren/ober/code/impute/impute/batch-beagle/out/node-0/node-0/ibd-segment-0.in [[0 0]] Pair 1/4: (0,0) (0,0) 0 3218 16484792 51156934 0 0 Pair 2/4: (0,0) (0,1) Pair 3/4: (0,1) (0,0) Pair 4/4: (0,1) (0,1) 0 3218 16484792 51156934 0 1 Done Segmentation fault (core dumped) Script: import os, sys, impute as im, itertools, csv, optparse, traceback, util, numpy as np #################################################################################### if __name__ == '__main__': ''' -------------------------------------------------- Main program -------------------------------------------------- ''' # Parse and validate command-line arguments PROGRAM = os.path.basename(sys.argv[0]) usage = 'Usage: %s [flags] <phased-data-file> <kinship-file> <input-file>\n\n' \ 'Locate IBD segments among a subset of sample in an NPZ phased data set.\n' \ 'Sample pairs are read from standard input. Segments are written to standard output.\n' \ '\tphased-data-file - NPZ file containing phasing results\n' \ '\tkinship-file - Sorted identity coefficient file\n' \ '\tpair-list-file - Sorted identity coefficient file\n' \ '\tout-file - File to output segments to\n' \ '\n' \ 'Example:\n' \ 'phased-data-file = /home/oren/ober/data/hutt/chr22/hutt.stage5.npz\n' \ 'kinship-file = /home/oren/ober/data/hutt/hutt.kinship\n' \ 'pair-list-file contains the lines\n' \ '0 1\n' \ '...\n' \ '0 100\n' \ '\n' \ 'Type ''%s -h'' to display full help.' % (PROGRAM, PROGRAM) parser = optparse.OptionParser(usage=usage) parser.add_option('-v', '--debug', type='int', dest='debug', default=0, help='Debug Level (0=quiet; 1=summary; 2=full debug)') (options, args) = parser.parse_args(sys.argv[1:]) if len(args) != 3: print usage sys.exit(1) phased_data_file, kinship_file, input_file = args try: # Load data problem = im.io.read_npz(phased_data_file) params = im.PhaseParam(kinship_file=kinship_file, debug=(options.debug >= 2)) # Read all pairs from stdin first # pairs = [(int(line[0]), int(line[1])) for line in csv.reader(sys.stdin, delimiter=' ', skipinitialspace=True) if line] pairs = np.loadtxt(input_file, dtype=np.uint) if len(pairs.shape) < 2: pairs = pairs[np.newaxis] print pairs # Loop over pairs and output segments to output file num_pairs = 4 * len(pairs) for k, ((i, j), (a, b)) in enumerate(itertools.product(pairs, itertools.product(im.constants.ALLELES, im.constants.ALLELES))): if options.debug >= 1: print 'Pair %d/%d: (%d,%d) (%d,%d)' % (k + 1, num_pairs, i, a, j, b) segments = im.ih.hap_segments(problem, i, a, j, b, params) segments.save(sys.stdout) print 'Done' except: traceback.print_exc(file=sys.stdout) sys.exit(util.EXIT_FAILURE) Answer: It turns out I was loading a numpy npz file (with `numpy.load()`) from a corrupt file that I transferred via rsync from my home computer to this one. After I regenerated the NPZ file on this machine, everything worked. Thanks for your feedback.
Is there a python module to solve/integrate a system of stochastic differential equations? Question: I have a system of stochastic differential equations that I would like to solve. I was hoping that this issue was already address. I am a bit concerned about constructing my own solver because I fear my solver would be too slow, and there could be the issues with numerical stability. Is there a python module for such problems? If not, is there a standard approach for solving such systems. Answer: There is one: <http://diffusion.cgu.edu.tw/ftp/sde/> Example from the site: """ add required Python packages """ from pysde import * from sympy import * """ Variables acclaimed """ x,dx=symbols('x dx') r,G,e,d=symbols('r G epsilon delta') """ Solve Kolmogorov Forward Equation """ l=sde.KolmogorovFE_Spdf(r*(G-x),e*x*(1-x),0,1) sol=l.subs({e:r*d}) pprint(sol)
How to execute python script file (filename.py) using sikuli Question: I have Python code (name.py) written in a separate file and now I want to execute that code using Sikuli. I have tried openApp but it is not working; possibly I made a mistake, but I am still looking for working logic. Answer: openApp is for running an application, for example: openApp("calc.exe"). Please read about importing [python modules](http://docs.python.org/2/tutorial/modules.html). Technically this has nothing to do with Sikuli.
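For illustration, two common ways to run your own name.py from a Sikuli (Jython) script; the folder path and the main() function are assumptions for the example:

    import sys
    sys.path.append(r"C:\scripts")      # assumed folder containing name.py

    import name                         # importing runs the module's top-level code once
    name.main()                         # only if name.py actually defines a main() function

    # or execute the file in place, every time this line runs:
    execfile(r"C:\scripts\name.py")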
Can't import * from sqlalchemy.ext.declarative Question: If I try to execute `from sqlalchemy.ext.declarative import *` it fails. I've tried do uninstall package with `pip uninstall sqlalchemy` and reinstall it again. I've tried removed the version from the Ubuntu repositary (the `python- sqlalchemy`-package) but it wasn't installed. I've got to a clean directory and tried the same command. Still don't work. This is both with sqlalchemy 0.7.8 and 0.7.9. This is the traceback: In [1]: from sqlalchemy.ext.declarative import * --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /home/nine/slask/<ipython-input-1-7f210e4ec48b> in <module>() ----> 1 from sqlalchemy.ext.declarative import * /usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative/__init__.py in <module>() 1252 """ 1253 -> 1254 from .api import declarative_base, synonym_for, comparable_using, \ 1255 instrument_declarative, ConcreteBase, AbstractConcreteBase, \ 1256 DeclarativeMeta, DeferredReflection, has_inherited_table,\ /usr/local/lib/python2.7/dist-packages/sqlalchemy/ext/declarative/api.py in <module>() 101 102 --> 103 class declared_attr(interfaces._MappedAttribute, property): 104 """Mark a class-level method as representing the definition of 105 a mapped property or special declarative member name. AttributeError: 'module' object has no attribute '_MappedAttribute' For the record, I'm running Ubuntu 12.04. Answer: Now it works. This is what i did: * I uninstalled sqla with pip * Deleted the remaining files manually (`rm -fr /usr/local/lib/python2.7/dist-packages/sqlalchemy`) * Reinstalled sqla with pip and now it works.
Iron Python and PyMongo Error Question: I'm getting this error when I try to connect to mongodb (using pymongo) with Iron Python... Traceback (most recent call last): File "test.py", line 3, in <module> File "c:\Program Files (x86)\IronPython 2.7\lib\site-packages\pymongo\connecti on.py", line 179, in __init__ File "c:\Program Files (x86)\IronPython 2.7\lib\site-packages\pymongo\mongo_cl ient.py", line 269, in __init__ pymongo.errors.ConnectionFailure: Specified cast is not valid. Code is pretty simple, I have replaced the db name. import pymongo c = pymongo.Connection('mongodb://testuser:[email protected]:10021/sometestdb') It works fine with regular python. Any ideas? Answer: Ironpython isn't supported by pymongo - so I wouldn't advise trying to use it. You can see on the pypi page a list of supported implementations: <http://pypi.python.org/pypi/pymongo>
Python first and last element from array Question: I am trying to dynamically get the first and last element from an array. So, let us suppose the array has 6 elements.

    test = [1,23,4,6,7,8]

If I am trying to get the `first and last = 1,8`, `23,7` and `4,6`. Is there a way to get elements in this order? I looked at a couple of questions [Link](http://stackoverflow.com/questions/12218796/python-slice-first-and-last-element-in-list) [Link2](http://stackoverflow.com/questions/930397/how-to-get-the-last-element-of-a-list). I took help of these links and I came up with this prototype..

    #!/usr/bin/env python

    import numpy

    test = [1,23,4,6,7,8]
    test1 = numpy.array([1,23,4,6,7,8])
    len_test = len(test)
    first_list = [0,1,2]
    len_first = len(first_list)
    second_list = [-1,-2,-3]
    len_second = len(second_list)
    for a in range(len_first):
        print numpy.array(test)[[first_list[a] , second_list[a]]]
        print test1[[first_list[a], second_list[a]]]

But this prototype won't scale for if you have more than 6 elements. So, I was wondering if there is way to dynamically get the pair of elements. Thanks! Answer: I ended here, because I googled for "python first and last element of array", and found everything else but this. So here's the answer to the title question:

    a = [1,2,3]
    a[0] # first element (returns 1)
    a[-1] # last element (returns 3)
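For the pairing described in the question body (first with last, second with second-to-last, and so on), one short sketch that scales with the list length and needs no numpy:

    test = [1, 23, 4, 6, 7, 8]
    pairs = zip(test, reversed(test))[:len(test) // 2]
    print pairs   # [(1, 8), (23, 7), (4, 6)]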
How to connect to Facebook Graph API from Python using Requests if I do not need user access token? Question: I am trying to find the easiest way how to use Facebook Graph API using my favorite [Requests](http://docs.python-requests.org/) library. The problem is, all examples I found are about getting **user access token** , about redirects and user interaction. All I need is only **application access token**. I do not handle any non- public data, so I need no user interaction and as my final app is supposed to be command-line script, no redirects are desired. I found something similar [here](http://stackoverflow.com/questions/3058723/programmatically-getting-an- access-token-for-using-the-facebook-graph-api), but it seems to be everything but elegant. Moreover, I would prefer something using [Requests](http://docs.python-requests.org/) or [Requests- OAuth2](https://github.com/maraujop/requests-oauth2). Or maybe there is library for that? I found [Requests- Facebook](https://github.com/maraujop/requests-oauth2) and [Facepy](https://github.com/jgorset/facepy) (both Requests based), but again, all examples are with redirection, etc. Facepy does not handle authorization at all, it just accepts your token and it is up to you to get it somehow. Could someone, please, provide a short, sane, working example how to get just the application access token? Answer: Following <https://developers.facebook.com/docs/technical- guides/opengraph/publishing-with-app-token/>: import requests r = requests.get('https://graph.facebook.com/oauth/access_token?grant_type=client_credentials&client_id=123&client_secret=XXX') access_token = r.text.split('=')[1] print access_token (using the correct values for `client_id` and `client_secret`) gives me something that looks like an access token.
ImportError while rendering: No module named app Question: I'm running a twitter application on [pythonanywhere](https://www.pythonanywhere.com), the app works perfectly on localhost development server, but when I run it on [pythonanywhere](https://www.pythonanywhere.com) I get this error: 2013-01-30 20:04:12,843 :Traceback (most recent call last): 2013-01-30 20:04:12,843 : File "/usr/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 272, in __call__ 2013-01-30 20:04:12,843 : response = self.get_response(request) 2013-01-30 20:04:12,844 : File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 169, in get_response 2013-01-30 20:04:12,844 : response = self.handle_uncaught_exception(request, resolver, sys.exc_info()) 2013-01-30 20:04:12,844 : File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 203, in handle_uncaught_exception 2013-01-30 20:04:12,844 : return debug.technical_500_response(request, *exc_info) 2013-01-30 20:04:12,844 : File "/usr/local/lib/python2.7/site-packages/django/views/debug.py", line 59, in technical_500_response 2013-01-30 20:04:12,845 : html = reporter.get_traceback_html() 2013-01-30 20:04:12,845 : File "/usr/local/lib/python2.7/site-packages/django/views/debug.py", line 151, in get_traceback_html 2013-01-30 20:04:12,845 : return t.render(c) 2013-01-30 20:04:12,845 : File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 123, in render 2013-01-30 20:04:12,845 : return self._render(context) 2013-01-30 20:04:12,845 : File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 117, in _render 2013-01-30 20:04:12,845 : return self.nodelist.render(context) 2013-01-30 20:04:12,845 : File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 744, in render 2013-01-30 20:04:12,846 : bits.append(self.render_node(node, context)) 2013-01-30 20:04:12,846 : File "/usr/local/lib/python2.7/site-packages/django/template/debug.py", line 73, in render_node 2013-01-30 20:04:12,846 : result = node.render(context) 2013-01-30 20:04:12,846 : File "/usr/local/lib/python2.7/site-packages/django/template/debug.py", line 90, in render 2013-01-30 20:04:12,846 : output = self.filter_expression.resolve(context) 2013-01-30 20:04:12,846 : File "/usr/local/lib/python2.7/site-packages/django/template/base.py", line 536, in resolve 2013-01-30 20:04:12,847 : new_obj = func(obj, *arg_vals) 2013-01-30 20:04:12,847 : File "/usr/local/lib/python2.7/site-packages/django/template/defaultfilters.py", line 695, in date 2013-01-30 20:04:12,847 : return format(value, arg) 2013-01-30 20:04:12,847 : File "/usr/local/lib/python2.7/site-packages/django/utils/dateformat.py", line 285, in format 2013-01-30 20:04:12,847 : return df.format(format_string) 2013-01-30 20:04:12,847 : File "/usr/local/lib/python2.7/site-packages/django/utils/dateformat.py", line 30, in format 2013-01-30 20:04:12,847 : pieces.append(force_unicode(getattr(self, piece)())) 2013-01-30 20:04:12,848 : File "/usr/local/lib/python2.7/site-packages/django/utils/dateformat.py", line 191, in r 2013-01-30 20:04:12,848 : return self.format('D, j M Y H:i:s O') 2013-01-30 20:04:12,848 : File "/usr/local/lib/python2.7/site-packages/django/utils/dateformat.py", line 30, in format 2013-01-30 20:04:12,848 : pieces.append(force_unicode(getattr(self, piece)())) 2013-01-30 20:04:12,848 : File "/usr/local/lib/python2.7/site-packages/django/utils/encoding.py", line 71, in force_unicode 2013-01-30 20:04:12,848 : s = unicode(s) 2013-01-30 
20:04:12,848 : File "/usr/local/lib/python2.7/site-packages/django/utils/functional.py", line 206, in __unicode_cast 2013-01-30 20:04:12,848 : return self.__func(*self.__args, **self.__kw) 2013-01-30 20:04:12,849 : File "/usr/local/lib/python2.7/site-packages/django/utils/translation/__init__.py", line 81, in ugettext 2013-01-30 20:04:12,849 : return _trans.ugettext(message) 2013-01-30 20:04:12,849 : File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 286, in ugettext 2013-01-30 20:04:12,849 : return do_translate(message, 'ugettext') 2013-01-30 20:04:12,849 : File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 276, in do_translate 2013-01-30 20:04:12,849 : _default = translation(settings.LANGUAGE_CODE) 2013-01-30 20:04:12,849 : File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 185, in translation 2013-01-30 20:04:12,850 : default_translation = _fetch(settings.LANGUAGE_CODE) 2013-01-30 20:04:12,850 : File "/usr/local/lib/python2.7/site-packages/django/utils/translation/trans_real.py", line 162, in _fetch 2013-01-30 20:04:12,850 : app = import_module(appname) 2013-01-30 20:04:12,850 : File "/usr/local/lib/python2.7/site-packages/django/utils/importlib.py", line 35, in import_module 2013-01-30 20:04:12,850 : __import__(name) 2013-01-30 20:04:12,850 :django.template.base.TemplateSyntaxError: Caught ImportError while rendering: No module named twitterApp What could be causing this error? where should I look? Answer: if may caused by wsgi.py configuration issues in pythonanywhere,in local env,django handle python path and static files well. According to this [part](http://tutorial.pythonanywhere.com/django#existing-apps-manual-config) in pythonanywhere deploy doc: 1. adding the right path to sys.path in wsgi.py assuming your Django settings file is at /home/my_username/projects/my_project/settings.py' path = '/home/my_username/projects' if path not in sys.path: sys.path.append(path) os.environ['DJANGO_SETTINGS_MODULE'] = 'my_project.settings' from my_project.myapp.models import Kitchen, Sink **please note:my_project prefix is needed.** 2.static files should handle well from django.contrib.staticfiles.urls import staticfiles_urlpatterns urlpatterns += staticfiles_urlpatterns() check this two points according to document should fix that
ImportError: Django Cities Question: I'm wondering why I am getting an `ImportError` when using [django- cities](https://github.com/coderholic/django-cities). from cities.models import PostalCode I have already synced db and `cities` tables exist in the database. The traceback: Traceback (most recent call last): File "manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/lib/python2.7/django/core/management/__init__.py", line 443, in execute_from_command_line utility.execute() File "/lib/python2.7/django/core/management/__init__.py", line 382, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/lib/python2.7/django/core/management/__init__.py", line 261, in fetch_command klass = load_command_class(app_name, subcommand) File "/lib/python2.7/django/core/management/__init__.py", line 69, in load_command_class module = import_module('%s.management.commands.%s' % (app_name, name)) File "/lib/python2.7/django/utils/importlib.py", line 35, in import_module __import__(name) File "/lib/python2.7/South-0.7.5-py2.7.egg/south/management/commands/__init__.py", line 10, in <module> import django.template.loaders.app_directories File "/lib/python2.7/django/template/loaders/app_directories.py", line 23, in <module> raise ImproperlyConfigured('ImportError %s: %s' % (app, e.args[0])) django.core.exceptions.ImproperlyConfigured: ImportError haystack: cannot import name PostalCode The ImportError suggests that python is looking for `PostalCode` in `haystack`, but I am asking it to import from `cities`.. Why does this happen? Thanks for your ideas! Answer: I figured out the problem. Although both the github and pypi versions claim to be 0.2, it turns out that they are different. I installed via pip, and the pypi version does not have a `PostalCode` model class, which was the source of my `ImportError`. Thanks!
How to use ddbmock with dynamodb-mapper? Question: Can someone please explain how to set up [dynamodb_mapper](http://pypi.python.org/pypi/dynamodb-mapper) (together with [boto](https://github.com/boto/boto)?) to use [ddbmock](http://pypi.python.org/pypi/ddbmock/1.0.1) with sqlite backend as [Amazon DynamoDB](http://aws.amazon.com/dynamodb/)-replacement for functional testing purposes? Right now, I have tried out "plain" boto and managed to get it working with ddbmock (with sqlite) by starting the ddbmock server locally and connect using boto like this: db = connect_boto_network(host='127.0.0.1', port=6543) ..and then I use the db object for all operations against the database. However, dynamodb_mapper uses this way to get a db connection: conn = ConnectionBorg() As I understand, it uses boto's default way to connect with (the real) DynamoDB. So basically I'm wondering if there is a (preferred?) way to get ConnectionBorg() to connect with my local ddbmock server, as I've done with boto above? Thanks for any suggestions. Answer: ## Library Mode In library mode rather than server mode: import boto from ddbmock import config from ddbmock import connect_boto_patch # switch to sqlite backend config.STORAGE_ENGINE_NAME = 'sqlite' # define the database path. defaults to 'dynamo.db' config.STORAGE_SQLITE_FILE = '/tmp/my_database.sqlite' # Wire-up boto and ddbmock together db = connect_boto_patch() Any access to dynamodb service via boto will use ddbmock under the hood. ## Server Mode If you still want to us ddbmock in server mode, I would try to change `ConnectionBorg._shared_state['_region']` in the really beginning of test setup code: ConnectionBorg._shared_state['_region'] = RegionInfo(name='ddbmock', endpoint="localhost:6543") As far as I understand, any access to dynamodb via any `ConnectionBorg` instance after those lines will use ddbmock entry point. This said, I've never tested it. I'll make sure authors of ddbmock gives an update on this.
how to get python to return a list? Question: So I'm writing a python script that will clean file names of useless and unwanted characters, but I'm running into a problem, I can't seem to figure out how to return a list or dictionary with all of the items in it that I iterated over. it only returns the first item I iterated over. this is my first time writing in python. any help would be greatly appreciated. I'm mostly writing this to learn. the clean_title() method is the one I'm trying to return from. and I call it at the bottom. import os import re # here is how all my video files will look after this # show name Season 1 Episode 1 filename = os.listdir("E:/Videos/TV/tv-show") def clean_title(filename): name = {} for title in filename: n_title = title.split('.') index = [i for i, item in enumerate(n_title) if re.search('\w\d{2}\w\d{2}', item)] if len(index) > 0: name = {'title':n_title[0:index[0]], 'ep_info':n_title[index[0]]} return name def get_show_name(filename): pass def update_title(): #show_title = get_show_name + ' ' + get_episode_info #print show_title if __name__=="__main__": test = clean_title(filename) print test Answer: You have two distinct problems. The first is that you're returning from inside your loop, so you will only have processed a single iteration of the loop when you hit the `return` statement. That's why it looks like you're getting the first iterated value when, in fact, you're never reaching the other iterations. You can fix this by out-denting the `return` statement to the correct level. The second problem is in the way you're accruing your results to return from `clean_title()`. You are only ever storing a single cleaned title in the `name` variable. Each time through the loop you're overwriting the previously calculated title with the new one from that iteration. Even if you fix the `return` issue, the current version would then return the _last_ title you computed. You can accrue your results in either a list or a dictionary. If a list, you initialize with `name = []` and add new titles with `name.append(title_goes_here)`. If you want to accrue your results in a dictionary, you initialize with `name = {}` and add new titles with `name[index_goes_here] = title_goes_here`. Note that in the case of a dictionary you need to have some logical key value (usually an integer or string) that you will use to recover the value later on. Finally, I have to add that I find your use of singular case (`filename`, `title`, `clean_title`) for plural objects and actions to be confusing. I'd call a list of file names `filenames`, and so on.
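Putting both fixes together (results accrued in a list, return moved outside the loop), a sketch of the corrected function based on the code in the question:

    import re

    def clean_title(filename):
        names = []                                   # accrue one result per file
        for title in filename:
            n_title = title.split('.')
            index = [i for i, item in enumerate(n_title)
                     if re.search(r'\w\d{2}\w\d{2}', item)]
            if index:
                names.append({'title': n_title[0:index[0]],
                              'ep_info': n_title[index[0]]})
        return names                                 # runs only after the loop finishes

Calling it then yields a list of dictionaries, one per matching file; the dictionary-keyed variant works the same way, assigning names[some_key] = ... instead of appending.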
How to replace some string with capturing group in Python 3? Question: I'm a Python beginner. I'd like to find <(.+?)> from a string, and replace it with [\1]. For example, string_input = '<age>' string_output = '[age]' I tried, import re string = '<age>' re.sub('<.+?>, '[' + \1 + ']', string) But it failed. Answer: >>> re.sub('<(.+)>', '[\\1]', '<age>') '[age]' Double `\\` is used to escaping the `\`, or else `\1` will be recognised as `\x01`. Brackets `()` are used as a capturing group. You can use multiple capturing groups like this: >>> re.sub('<(.+)=+(.*)>', '[\\1: \\2]', '<age=5>') '[age: 5]'
Card Matching Game on Python Question: I am current building a simple card matching game in python, with a 5x4 (row*column) grid, in which two players try to match a deck of twenty cards (2,10 of only suit Hearts) * 2. The problem I am running into is in iterating through the deck, printing the cards out in a grid fashion so it would look like this: ----- ----- ----- ----- - - - - - - - - 4-H 6-H 7-H 8-H - - - - - - - - ----- ----- ----- ----- The code I currently have is below: #needed import for shuffle function from random import shuffle #class for my deck class Deck: #constructor starts off with no cards def __init__( self ): self._deck = [] #populate the deck with every combination of suits and values def Populate( self ): #Heart, Diamond, Spades, Clubs for suit in 'HDSC': #Jack = 11, Queen = 12, King = 13, Ace = 14 for value in range(2, 15): if value == 11: value = 'J' elif value == 12: value = 'Q' elif value == 13: value = 'K' elif value == 14: value = 'A' #add to deck list self._deck.append(str(value) + '-' + suit) #populate the deck with only hears hearts and all cards except face cards and aces (2, 3, 4, 5, 6, 7, 8, 9, 10) twice def gamePop( self ): suit = 'H' for x in range(2): for value in range(2, 11): self._deck.append(str(value) + '-' + suit) #shuffle the deck with the random import def Shuffle( self ): shuffle( self._deck ) #length of the deck def len( self ): return len( self._deck ) def stringIt( self ): #Returns the string representation of a deck result = '' for c in self._deck: result = result + str(c) + '\n' return result #class for a single card class Card: #constructor for what type of card it is def __init__( self, value, suit ): self._value = value self._suit = suit self._card = self._value + self._suit #print the type of card def Description( self ): return ( self._card ) #overloaded == def __eq__( self, another ): if ( self._card == another.Description() ): return True else: return False #main function which plays the game def main(): #sets player counters to zero, pOneCount = 0 pTwoCount = 0 #creates the deck to be put on the board gameDeck = Deck() gameDeck.gamePop() gameDeck.Shuffle() print(gameDeck._deck) currentCard = 0 for row in range(5): for card in range(0,4+i): mystring = print ('------- ' * 4) print ('| | ' * 4) for x in range(4): print ('| ' +gameDeck._deck[currentCard]+'|'), currentCard += 1 print ('| | ' * 4) print ('------- ' * 4) Edit: I cleared up the code which I've tried. The current output is this: ------- ------- ------- ------- | | | | | | | | | 7-H| | 5-H| | 7-H| | 9-H| | | | | | | | | ------- ------- ------- ------- Answer: the problem is in the def main(): def main(): print ('------- ' * 4) print ('| | ' * 4) for x in range(4): print ('| ' +gameDeck._deck[currentCard]+'|'), currentCard += 1 print ('| | ' * 4) print ('------- ' * 4) the * 4 just mean that this: print ('------- ' * 4) will become this: print ('------- ' + '------- ' + '------- ' + '------- ' ) it can also be type as: print ('------- ------- ------- ------- ' ) so. your problem is here: for x in range(4): print ('| ' +gameDeck._deck[currentCard]+'|'), currentCard += 1 this would print as: | 7-H| | 5-H| | 7-H| | 9-H| you need to put it as something like this: print ('| ' +gameDeck._deck[currentCard]+'|'+'| ' +gameDeck._deck[currentCard+1]+'|'+'| ' +gameDeck._deck[currentCard+2]+'|'+'| ' +gameDeck._deck[currentCard+3]+'|') so it would print in one line like how you want it: | 7-H| | 5-H| | 7-H| | 9-H| here is the code that i clean up a little. 
if it work like it should, it should work: def main(): #sets player counters to zero, pOneCount = 0 pTwoCount = 0 #creates the deck to be put on the board gameDeck = Deck() gameDeck.gamePop() gameDeck.Shuffle() print(gameDeck._deck) currentCard = 0 for row in range(5): for card in range(0,4+i): print (' ------- ' * 4) print (' | | ' * 4) print (' | ' +gameDeck._deck[currentCard]+' | '+' | ' +gameDeck._deck[currentCard+1]+' | '+' | ' +gameDeck._deck[currentCard+2]+' | '+' | ' +gameDeck._deck[currentCard+3]+' | ') print (' | | ' * 4) print (' ------- ' * 4) oh, and like John Y say (copy and paste): The main function has a dangling mystring =, which is a blatant syntax error here what i use to test, because the whole code don't work for me, i just tested the print part: print (' ------- ' * 4) print (' | | ' * 4) print (' | ' +"1-H"+' | '+' | ' +"2-H"+' | '+' | ' +"3-H"+' | '+' | ' +"4-H"+' | ') print (' | | ' * 4) print (' ------- ' * 4) that got me: ------- ------- ------- ------- | | | | | | | | | 1-H | | 2-H | | 3-H | | 4-H | | | | | | | | | ------- ------- ------- ------- >>>
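As a side note, a more compact way to print one row of four cards, using a join instead of the long concatenation (a sketch that reuses the gameDeck._deck and currentCard names from the answer; column widths assume three-character card names like '7-H'):

    row = gameDeck._deck[currentCard:currentCard + 4]   # next four card strings
    print ' ------- ' * 4
    print ' |     | ' * 4
    print ''.join(' | %3s | ' % card for card in row)
    print ' |     | ' * 4
    print ' ------- ' * 4
    currentCard += 4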
Writing append only gzipped log files in Python Question: I am building a service where I log plain text format logs from several sources (one file per source). I do not intend to rotate these logs as they must be around forever. To make these forever around files smaller I hope I could gzip them in fly. As they are log data, the files compress very well. What is a good approach in Python to write append-only gzipped text files, so that the writing can be later resumed when service goes on and off? I am not that worried about losing few lines, but if gzip container itself breaks down and the file becomes unreadable that's no no. Also, if it's no go, I can simply write them in as plain text without gzipping if it's not worth of the hassle. Answer: Note: On unix systems you should seriously consider using an external program, written for this exact task: * [`logrotate`](http://linuxcommand.org/man_pages/logrotate8.html) (rotates, compresses, and mails system logs) You can set the number of _rotations_ so high, that the first file would be deleted [in 100 years](http://serverfault.com/a/50185/4346) or so. * * * In Python 2, `logging.FileHandler` takes an keyword argument `encoding` that can be set to `bz2` or `zlib`. This is because `logging` [uses](http://hg.python.org/cpython/file/8ee6d96a1019/Lib/logging/__init__.py#l919) the `codecs` module, which in turn treats `bz2` (or `zlib`) as _encoding_ : >>> import codecs >>> with codecs.open("on-the-fly-compressed.txt.bz2", "w", "bz2") as fh: ... fh.write("Hello World\n") $ bzcat on-the-fly-compressed.txt.bz2 Hello World * * * Python 3 version (although the docs [mention](http://docs.python.org/3.3/library/codecs.html#standard-encodings) `bz2` as alias, you'll actually have to use `bz2_codec` \- at least w/ 3.2.3): >>> import codecs >>> with codecs.open("on-the-fly-compressed.txt.bz2", "w", "bz2_codec") as fh: ... fh.write(b"Hello World\n") $ bzcat on-the-fly-compressed.txt.bz2 Hello World
python syntax error output Question: Please could you help me by fixing the error with my code. When I print last line I get a syntax error message: import math m_ = 900 # identifier for normal distribution mean [mm] s_d = 1 # identifier for normal distribution standard deviation [mm] print "DISTRIBUTION OF A PLATE WIDTH:" " MEAN", "=",m_,"," "STD DEV", "=", s_d print "" print "Using functions from the math library ..." # The probability density function(pdf) for a normal distribution with mean m_ and standard deviation s_d ftotal = 0 term = 0.0 count = 0 while abs(term) > 911: ftotal += term count += 1 term = term * xx / float(count) print "x" " " " f(x)" " " " F(x)" print "890" "" (1 / ((s_d * (2 * math.pi) ** -0.5)) * math.exp((- (x - m_) ** 2) / (2 * (s_d) ** 2), 0.5 * (1 + math.erf((x - m_) / s_d * m.sqrt(2)) Answer: Define x before the while loop. From the last two lines where you have denoted x by 890 so I am guessing `x = 890`. x = 890 #your while loop goes here print "x" " " " f(x)" " " " F(x)" print "890" , (1 / ((s_d * (2 * math.pi) ** -0.5))) * math.exp((- (x - m_) ** 2) / (2 * (s_d) ** 2)) , 0.5 * (1 + math.erf((x - m_) / s_d * math.sqrt(2))) I can't remember the exact formula but if the above expressions are correctly put, you won't get a syntax error.
I need clarification on how apache works with modwsgi and pyramid Question: We have a server that is configured to run a `pyramid+sqlalchemy` app with `modwsgi+apache2` We have a few things in the `__init__.py` of the pyramid app to create database and prepopulate some test users and accounts. It is similar to the initialize_sql function in [pyramid cookbook example here](http://docs.pylonsproject.org/projects/pyramid_cookbook/en/latest/database/sqlalchemy.html#importing- all-sqlalchemy-models) Out apache config looks something like this (Copied from somewhere and hacked): WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On WSGIDaemonProcess pyramid user=ubuntu group=staff processes=1 \ threads=4 \ python-path= VIRTUAL_ENV_SITE_PACKAGES WSGIScriptAlias / WSGI_SCRIPT_PATH <Directory /Users/chrism/modwsgi/env> WSGIProcessGroup pyramid Order allow,deny Allow from all </Directory> Whats been bothering us is that the `initialize_function` gets called on many requests instead of just being called once. We do not understand how apache works or what we have done in the config. We want to be able to call the functions in `__init__.py` once and thats it. Can someone explain how apache actually works and why pyramid `__init__.py` is being reloaded all the time. Also explain how we can make sure that the `__init__.py` is not rerun every request Consider sharing simple to understand links regarding the same stuff :) Answer: Things like creating a database and prepopulating it shouldn't go into the `__init__.py` file of a WSGI application because, as you've noticed, this file gets run whenever you start your server (and at certain other points as well). What you need is a script which can be called from the command line whenever you need to. Every framework provides their own way to build these. In Pyramid they are called `console scripts`. [Here](http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/commandline.html#making- your-script-into-a-console-script) you can find the documentation of how to build one yourself. However, I recommend you use the scaffold (project template) from [this tutorial](http://docs.pylonsproject.org/projects/pyramid/en/1.4-branch/tutorials/wiki2/installation.html#making- a-project) and have a look at the console script called `initialize_tutorial_db`. It does exactly what you want and allows you to create / initialize your database from the command line whenever you need to. * * * Regarding the apache configuration, the line which is relevant to your question is the following: WSGIDaemonProcess pyramid user=ubuntu group=staff processes=1 \ threads=4 \ python-path= VIRTUAL_ENV_SITE_PACKAGES It means you will start **1** process with **4** threads. This shouldn't mean that the code in your `__init__.py` file is run multiple times. That is more likely to be caused from you restarting the server manually or through code changes.
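For illustration, a rough sketch of what such a console script usually looks like with the SQLAlchemy scaffold (the myproject package, the User model and the DBSession/Base names are placeholders for your own code):

    # myproject/scripts/initializedb.py
    import sys
    import transaction
    from sqlalchemy import engine_from_config
    from pyramid.paster import get_appsettings, setup_logging
    from myproject.models import DBSession, Base, User   # placeholders for your models

    def main(argv=sys.argv):
        config_uri = argv[1]                          # e.g. production.ini
        setup_logging(config_uri)
        settings = get_appsettings(config_uri)
        engine = engine_from_config(settings, 'sqlalchemy.')
        DBSession.configure(bind=engine)
        Base.metadata.create_all(engine)              # create the tables once
        with transaction.manager:                     # prepopulate test users/accounts
            DBSession.add(User(name='test user'))

Registered under [console_scripts] in setup.py (for example initialize_myproject_db = myproject.scripts.initializedb:main), it can be run once against your .ini file after deployment, so the WSGI workers never have to create or seed the database themselves.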
Receive attachment with urllib - Python Question: I am testing my webpage software by sending requests from python to it. I am able to send requests, receive responses and parse the json. However, one option on the webpage is to download files. I send the download request and can confirm that the response headers contain what I expect (application/octet-stream and the appropriate filename) but the Content-Length is 0. If the length is 0, I assume the file was not actually sent. I am able to download files from other means so I know my software works but I am having trouble with getting it to work with python. I build up the request then do:

    f = urllib.request.urlopen(request)
    f.body = f.read()

I expect data to be in f.body but it is empty (I see "b''")

Is there a different way to access the file contents from an attachment in python? Answer:

> Is there a different way to access the file contents from an attachment in python?

This is in [python-requests](http://docs.python-requests.org/en/latest/) instead `urllib`, since I'm more familiar with that.

    import requests

    url = "http://www.mysite.com/totallynotavirus.jpg.exe"

    #make request
    r = requests.get(url)
    attachment_data = r.content

    #save to file
    with open(r"C:/system/win32/funnypictures", 'wb') as f:
        f.write(attachment_data)
Unescaping URL Parameter w/ Web.py Question: Reading an url as follows: example.com/product/xy&z urls = ('/product/(.*)', product) In the product class on GET I am reading the that product ID pulled (xy&z) from the URL to create a DB query. Some of those IDs have an '&' in them, when I receive that value in python it has been escaped. I have had zero luck unescaping it back so I can run the query correctly. The following do not work: product = product.replace("&amp;", "&") product = HTMLParser.unescape.__func__(HTMLParser,product) Whats the best practice in Web.py or even Python? Whats the best way to even diagnose this on my own? Thanks! Answer: I am getting these product ids out of box, see the following code. import web urls = ( "/product/(.*)", "Product" ) class Product: def GET(self, product_id): return product_id app = web.application(urls, globals()) if __name__ == "__main__": #app.run() assert web.__version__ == "0.37" b = app.browser() b.open('/product/xy&amp;z') assert b.status == 200 assert "xy&amp;z" in b.data assert "xy&z" not in b.data b.open('/product/xy&z') assert "xy&z" in b.data Please provide failing url that doesn't return desired product id.
Python Generator Cutoff Question: I have a generator that will keep giving numbers that follow a specific formula. For sake of argument let's say this is the function: # this is not the actual generator, just an example def Generate(): i = 0 while 1: yield i i+=1 I then want to get a list of numbers from that generator that are below a certain threshold. I'm trying to figure out a pythonic way of doing this. I don't want to edit the function definition. I realize you could just use a while loop with your cutoff as the condition, but I'm wondering if there is a better way. I gave this a try, but soon realized why it wouldn't work. l = [x for x in Generate() x<10000] # will go on infinitely So is there a correct way of doing this. Thanks Answer: An [`itertools`](http://docs.python.org/2/library/itertools.html#itertools.takewhile) solution to create another iterator: from itertools import takewhile l = takewhile(lambda x: x < 10000, generate()) Wrap it in `list()` if you are sure you want a list: l = list(takewhile(lambda x: x < 10000, generate())) Or if you want a list and like inventing wheels: l = [] for x in generate(): if x < 10000: l.append(x) else: break
How do you reload and autoreload using IPython? Question: I just installed IPython 0.13.1 and am having two problems. I have a small 'demo' project that contains an application called 'app': . β”œβ”€β”€ app β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”œβ”€β”€ __init__.pyc β”‚Β Β  β”œβ”€β”€ models.py β”‚Β Β  β”œβ”€β”€ models.pyc β”‚Β Β  β”œβ”€β”€ tests.py β”‚Β Β  └── views.py β”œβ”€β”€ demo β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”œβ”€β”€ __init__.pyc β”‚Β Β  β”œβ”€β”€ settings.py β”‚Β Β  β”œβ”€β”€ settings.pyc β”‚Β Β  β”œβ”€β”€ urls.py β”‚Β Β  └── wsgi.py └── manage.py models.py contains: from django.db import models class Customer(models.Model): fname = models.CharField(max_length=25) My first problem is reloading the models.py file after I make a change. If I open IPython, import my Customer class, and try to reload the models module, I get this error: In [1]: from app.models import Customer In [2]: reload(app.models) -------------------------------------------------- NameError Traceback(most recent call last) /Users/me/dotfiles/.virtualenvs/demo/lib/.../shell.pyc in <module>() ----> 1 reload(app.models) NameError: name 'app' is not defined While I'm new to Python, I do believe that 'models' is the module, and I it is in my INSTALLED_APPS settings. I also tried reload(app) and reload(models) with no success. What am I doing wrong? My second problem is in using autoreload. After reading the doc page, I enabled it like this: In [1]: %load_ext autoreload In [2]: %autoreload 2 Now if I create a test file 'foo.py' with some function in it as the documentation illustrates, any changes I make to that function are reflected in IPython. But if I import my Customer class (as shown above) and add a second field "lname" to it and save the file, that change isn't reflected in IPython. If I run the command, "Customer??", the change doesn't show up. Also, if I run the aimport command, I see this: In [5]: %load_ext autoreload In [6]: %autoreload 2 In [7]: %aimport Modules to reload: all-except-skipped Modules to skip: What am I doing wrong? Is this the AppCache issue discussed [here](http://stackoverflow.com/questions/890924/how-do-you-reload-a-django- model-module-using-the-interactive-interpreter-via-m/903943#903943)? I tried implementing the script shown but it throws errors when I run it. Thanks. Answer: For your first issue, you can't `reload(app.models)` because you didn't add `app.models` to your namespace. You added only `Customer`. You can add `import app.models` to solve that. I think your second problem is related. Since you imported the actual class into your namespace, reloading doesn't help. Presumably reloading just replaces the module in your namesspace, so if you refer to `models.Customer` rather than directly using `Customer` you should be in business.
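Putting both points together, a short illustrative session:

    In [1]: import app.models

    In [2]: app.models.Customer          # refer to the class through the module

    In [3]: reload(app.models)           # works, because the name app.models is now bound

    In [4]: app.models.Customer          # picks up the freshly reloaded class definition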
write() not called when subclassing code.InteractiveInterpreter Question: When subclassing code.InteractiveInterpreter I can't seem to get the write() method to run as I would expect per the [documentation](http://docs.python.org/2/library/code.html).

    import code

    class PythonInterpreter(code.InteractiveInterpreter):
        def __init__(self, localVars):
            self.runResult = ''
            print 'init called'
            code.InteractiveInterpreter.__init__(self, localVars)

        def write(self, data):
            print 'write called'
            self.runResult = data

    test = 'Hello'
    interpreter = PythonInterpreter({'test':test})
    interpreter.runcode('print test')
    print 'Result:' + interpreter.runResult

Expected output:

    init called
    write called
    Result: Hello

Actual output:

    init called
    Hello <- shouldn't print
    Result:

Any thoughts?

Answer: The write method is not used by the code passed to runcode at all. You would have to redirect stdout for this to work, e.g. something like:

    import code
    import sys

    class PythonInterpreter(code.InteractiveInterpreter):
        def __init__(self, localVars):
            self.runResult = ''
            print 'init called'
            code.InteractiveInterpreter.__init__(self, localVars)

        def write(self, data):
            # since sys.stdout is probably redirected,
            # we can't use print
            sys.__stdout__.write('write called\n')
            self.runResult = data

        def runcode(self, cd):
            # redirect stdout to this instance (and thus its write method)
            # before running the code object cd
            sys.stdout = self
            code.InteractiveInterpreter.runcode(self, cd)
            # redirect back to the normal stdout
            sys.stdout = sys.__stdout__
Python subprocess.call not working as expected Question: I cannot get subprocess.call() to work properly:

    >>> from subprocess import call
    >>> call(['adduser', '--home=/var/www/myusername/', '--gecos', 'GECOS', '--disabled-login', 'myusername'], shell=True)
    adduser: Only one or two names allowed.
    1

But without shell=True:

    >>> call(['adduser', '--home=/var/www/myusername/', '--gecos', 'GECOS', '--disabled-login', 'myusername'])
    Adding user `myusername' ...
    Adding new group `myusername' (1001) ...
    Adding new user `myusername' (1001) with group `myusername' ...
    Creating home directory `/var/www/myusername/' ...
    Copying files from `/etc/skel' ...
    0

Or the same directly in shell:

    root@www1:~# adduser --home=/var/www/myusername/ --gecos GECOS --disabled-login myusername
    Adding user `myusername' ...
    Adding new group `myusername' (1001) ...
    Adding new user `myusername' (1001) with group `myusername' ...
    Creating home directory `/var/www/myusername/' ...
    Copying files from `/etc/skel' ...

I'm missing some logic in the shell=True behavior. Can somebody explain why? What is wrong with the first example? From the adduser error message it seems that the arguments are somehow mangled. Thanks!

Answer: When you specify shell=True you switch to quite different behaviour. From the docs:

> On Unix with shell=True, the shell defaults to /bin/sh. If args is a string, the string specifies the command to execute through the shell. This means that the string must be formatted exactly as it would be when typed at the shell prompt. This includes, for example, quoting or backslash escaping filenames with spaces in them. If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself. That is to say, Popen does the equivalent of:
>
> Popen(['/bin/sh', '-c', args[0], args[1], ...])

So you are running the equivalent of

    /bin/sh -c "adduser" --home=/var/www/myusername/ --gecos GECOS --disabled-login myusername

The error message you are getting is what happens when you try and run `adduser` without any arguments, since all the extra arguments are being passed to `sh`.

If you want to set shell=True then you would need to call it like this:

    call('adduser --home=/var/www/myusername/ --gecos GECOS --disabled-login myusername', shell=True)

OR like this:

    call(['adduser --home=/var/www/myusername/ --gecos GECOS --disabled-login myusername'], shell=True)

But mostly you just want to use `call` without the `shell=True` and use a list of arguments, as per your second, working, example.
Determining if a given Python module is part of the standard library Question: How can I determine whether a Python module is part of the standard library? In other words: is there a Python equivalent of perl's corelist utility? I would use this to set my expectations on portability during development. In case it's implementation dependent, I'm interested in CPython. The best answer I found so far is this: [Which parts of the python standard library are guaranteed to be available?](http://stackoverflow.com/questions/13334536/which-parts-of-the- python-standard-library-are-guaranteed-to-be-available) That is to search for the module name on the index page of the Python Standard Library Documentation: <http://docs.python.org/2/library/>. However, this is less convenient than having a utility and also does not tell me anything about minimally required versions. Answer: When using a setuptools install script (`setup.py`), you _test_ for the required module, and update the installation dependencies list to add backports if needed. For example, say you need the `collections.OrderedDict` class. The documentation states it was added in Python 2.7, but a [backport is available](http://pypi.python.org/pypi/ordereddict) that works on Python 2.4 and up. In `setup.py` you test for the presence of the class in `collections`. If the import fails, add the backport to your requirements list: from setuptools import setup install_requires = [] try: from collections import OrderedDict except ImportError: install_requires.append('ordereddict') setup( # ... install_requires=install_requires ) then in your code where you need `OrderedDict` use the same test: try: from collections import OrderedDict except ImportError: # use backported version from ordereddict import OrderedDict and rely on `pip` or `easy_install` or `zc.buildout` or other installation tools to fetch the extra library for you. Many recent core library additions have backports available, including `json` (called `simplejson`), `argparse`, `sqlite3` (the `pysqlite` package, use `from pysqlite2 import dbapi as sqlite3` as a fallback). You'll _still_ have to read the documentation; the Python documentation is _excellent_ , and for new modules, classes, methods, functions or arguments the documentation mentions explicitly in what Python version they were added.
Python-based Socket program accept by libPcap(C-based) Question: Dear all: I use **python-based socket client** to send string data (i.e log data). On the other hand, I use **libpcap to sniff string data on the server side**. # But I got an error on the client side when I send string data to the server side at the second time. The error like below: Traceback (most recent call last): File "./udp_client_not_sendback.py", line 21, in <module> s.sendall(data) #Send UDP data File "/usr/lib/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args) socket.error: [Errno 111] Connection refused And below are my codes on the client and server side: _**Client side(Python)_** import socket, sys host = sys.argv[1] #Server IP Address textport = sys.argv[2] #Server Binding Port s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) #socket try: port = int(textport) except ValueError: port = socket.getservbyname(textport, 'udp') s.connect((host, port)) #connect while(1): print "Enter data to transmit:" data = sys.stdin.readline().strip() #UDP data s.sendall(data) #Send UDP data _**Server side(C libpcap)_** pcap_handler_func(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes) { char timebuf[64]; char addrstr[64]; struct ether_header *ethhdr = (struct ether_header *)bytes; struct iphdr *ipv4h; struct ip6_hdr *ipv6h; memset(timebuf, 0, sizeof(timebuf)); if (ctime_r(&h->ts.tv_sec, timebuf) == NULL) { return; } timebuf[strlen(timebuf) - 1] = '\0'; printf("%s, caplen:%d, len:%d, ", timebuf, h->caplen, h->len); ipv4h = (struct iphdr *)(bytes + sizeof(struct ether_header)); inet_ntop(AF_INET, &ipv4h->saddr, addrstr, sizeof(addrstr)); printf("src[%s]\n", addrstr); return; } int main() { pcap_t *p; char errbuf[PCAP_ERRBUF_SIZE]; char cmdstr[] = "udp"; struct bpf_program bpfprog; p = pcap_open_live("eth1", 65536, 1, 10, errbuf); //Filter if (pcap_setfilter(p, &bpfprog) < 0) { fprintf(stderr, "%s\n", pcap_geterr(p)); return 1; } //Packet action if (pcap_loop(p, -1, pcap_handler_func, NULL) < 0) { fprintf(stderr, "%s\n", pcap_geterr(p)); pcap_close(p); return 1; } pcap_close(p); return 0; } I think the problem is I don't bind the socket on the server side and I just use pcap to capture the string data. So at the second time it happened the socket error on the client side. Anyone can give me some suggestion to overcome this problem? Thanks a lot for your helping. Answer: libpcap, and the capture mechanisms it uses, are **_NOT_** intended for use when writing TCP/UDP/IP servers! They're intended to 1) allow passive capture of packets and (in some cases and with newer versions of libpcap) injection of packets and 2) user-mode implementation of protocols that run atop the link layer and that don't have an implementation in the OS kernel. If you have a process running on some machine, listening for packets using libpcap, that will **_NOT_** create a socket on that machine to receive packets sent to some particular TCP or UDP port. The first time your program tries to send a packet to the UDP port in question, the machine to which you're sending it probably sees that it's being sent to a UDP port on which no socket is listening, and sends back an ICMP Port Unreachable message. The send call has already been done; UDP is a connectionless protocol with no reliable delivery guarantees, so a UDP send does _NOT_ wait for a reply of any sort - if the packet made it out onto the wire, that's considered "success", as no other checks for successful delivery can be made at the UDP level. 
However, as you connected the socket in the client program, the client machine's ICMP implementation, if it sees the ICMP Port Unreachable message, will set an indicator on the socket to tell the program that the _previous_ UDP packet couldn't be delivered (as there was nothing to which to deliver it). A receive attempt on that socket - or, it appears, a subsequent attempt to send a packet on that socket - will return the "connection refused" error as an indication that the _previously-sent_ packet couldn't be delivered. It will also mean that the second attempt to send the packet won't actually send the packet; you'd have to make _another_ attempt, which will, I think, succeed in sending the packet, although it will again get an ICMP Port Unreachable if there's no program listening for that UDP port on the server. So _that's_ why you saw the first packet, but not the second, in your sniffer (**_NOT_** server!) application using libpcap. So millimoose is 100% correct when he/she says "there has to be an actual server running on the server side". Write one - using regular sockets, not libpcap and whatever mechanism it might use (AF_PACKET sockets on Linux, BPF devices on *BSD and OS X, etc.), and run it on the server machine. You can _also_ run your libpcap-based _sniffer_ program, or some other sniffer program such as tcpdump or {Wireshark, TShark}, at the same time to see what's going on the wire.
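For reference, a minimal "actual server" for the Python client above could look like this sketch (a plain UDP socket, no libpcap involved; the port number and buffer size are arbitrary choices for illustration):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', 5005))                 # the UDP port the client sends to
    while True:
        data, addr = sock.recvfrom(65535)
        print "received %r from %s" % (data, addr)

With a socket actually bound to that port, no ICMP Port Unreachable comes back, so the client's repeated sendall() calls keep working - and the libpcap-based sniffer can still run alongside it to watch the traffic on the wire.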
Zeroth-order Bessel function Python Question: Apologies for the simplicity of this question. I would like to implement an equation in Python. In this equation, K_0 is the zeroth-order modifed Bessel function. What is the best way of implementing K_0 in Python? Answer: No need to implement it; it's included. See the docs for the [scipy.special](http://docs.scipy.org/doc/scipy/reference/special.html) module, in particular the optimized common ones [here](http://docs.scipy.org/doc/scipy/reference/special.html#faster-versions- of-common-bessel-functions): >>> import scipy.special >>> print scipy.special.k0.__doc__ k0(x[, out]) y=k0(x) returns the modified Bessel function of the second kind (sometimes called the third kind) of order 0 at x. >>> scipy.special.k0(1) 0.42102443824070823 or more generally: >>> print scipy.special.kn.__doc__ kn(x1, x2[, out]) y=kn(n,x) returns the modified Bessel function of the second kind (sometimes called the third kind) for integer order n at x. >>> scipy.special.kn(0, 1) 0.42102443824070834
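`scipy.special.k0` is a NumPy ufunc, so it can be applied directly to arrays, which is usually what you need when the equation has to be evaluated over a range of points (a small sketch):

    import numpy as np
    from scipy.special import k0

    x = np.array([0.5, 1.0, 2.0, 5.0])
    print k0(x)   # K_0 evaluated element-wise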
Calculating strings as values Question: Is it possible in Python to calculate a term in a string? For example: string_a = "4 ** (3 - 2)" unknown_function(string_a) = 4 Is this possible? Is there a function that mimics "unknown_function" in my example? Thanks! Answer: Just like `sympy` was a useful module for your [last question](http://stackoverflow.com/questions/14668495/mathematical-function- unassigned-variables), it can apply here: >>> import sympy >>> sympy.sympify("4**(3-2)") 4 and even >>> sympy.sympify("2*x+y") 2*x + y >>> sympy.sympify("2*x+y").subs(dict(x=2, y=3)) 7 Note though that this will return `sympy` objects, and if you want to get an integer or a float out of it you should do the conversion explicitly: >>> type(sympy.sympify("4**(3-2)")) <class 'sympy.core.numbers.Integer'> >>> int(sympy.sympify("4**(3-2)")) 4 I hacked together a recipe to turn string expressions into functions [here](http://stackoverflow.com/a/13734176/487339) which is kind of cute.
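If the expression needs to be evaluated many times, it can also be turned into an ordinary Python function with `sympy.lambdify` (a small sketch in the same spirit as the recipe linked above):

    >>> import sympy
    >>> x, y = sympy.symbols("x y")
    >>> f = sympy.lambdify((x, y), sympy.sympify("2*x + y"))
    >>> f(2, 3)
    7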
Python - include a sum in an IF condition Question: It's probably a dumb question, but I can't manage to put a sum expression in an if condition. I work on a CSV file composed of 3 columns A, B and C. Here is my code:

    #import and export files
    test = "/home/julien/excel/test.csv"
    file1 = open (test, 'rb')

    export_test = "/home/julien/excel/export_test.csv"
    file2 = open (export_test, 'wb')

    reader1 = csv.reader (file1, delimiter = ';', quotechar=' ')
    writer1 = csv.writer (file2, delimiter= ';', quotechar=' ')

    #conditions on test.csv
    for row1 in reader1:
        A = row1[0]
        B = row1[1]
        C = row1[2]
        if (A + B) == C:
            writer1.writerow (["calcul ok"])
        else:
            writer1.writerow (["fail"])

The problem is that I only get "fail" results when I shouldn't. Could you help me? Thanks!

Answer: Convert the strings into numbers:

    A = float(row1[0])
    B = float(row1[1])
    C = float(row1[2])

Or more concisely:

    A, B, C = map(float, row1[:3])

If your numbers are integers, replace `float` with `int`.
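Putting that into the loop from the question, a sketch (it assumes the same file layout and that every row has three numeric fields; note that if the values are not whole numbers, an exact `==` on floats can still fail because of rounding):

    import csv

    with open("/home/julien/excel/test.csv", "rb") as file1, \
         open("/home/julien/excel/export_test.csv", "wb") as file2:
        reader1 = csv.reader(file1, delimiter=';', quotechar=' ')
        writer1 = csv.writer(file2, delimiter=';', quotechar=' ')
        for row1 in reader1:
            # the csv module hands back strings, so convert before comparing
            A, B, C = map(float, row1[:3])
            if A + B == C:
                writer1.writerow(["calcul ok"])
            else:
                writer1.writerow(["fail"])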
Automatically refresh label python Question: # -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*- ### BEGIN LICENSE # Copyright (C) 2012 Marios Papachristou [email protected] # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License version 3, as published # by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranties of # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR # PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see <http://www.gnu.org/licenses/>. ### END LICENSE import gettext from gettext import gettext as _ gettext.textdomain('quickbrowser') from gi.repository import Gtk, WebKit # pylint: disable=E0611 import logging logger = logging.getLogger('quickbrowser') from quickbrowser_lib import Window from quickbrowser.AboutQuickbrowserDialog import AboutQuickbrowserDialog from quickbrowser.PreferencesQuickbrowserDialog import PreferencesQuickbrowserDialog # See quickbrowser_lib.Window.py for more details about how this class works class QuickbrowserWindow(Window): __gtype_name__ = "QuickbrowserWindow" def finish_initializing(self, builder): # pylint: disable=E1002 """Set up the main window""" super(QuickbrowserWindow, self).finish_initializing(builder) self.AboutDialog = AboutQuickbrowserDialog self.PreferencesDialog = PreferencesQuickbrowserDialog self.goBack = self.builder.get_object('goBack') self.homeButton = self.builder.get_object('homeButton') self.refreshButton = self.builder.get_object('refreshButton') self.goButton = self.builder.get_object('goButton') self.currentaddresslabel = self.builder.get_object('currentaddresslabel') self.addressbar = self.builder.get_object('addressbar') self.viewwindow = self.builder.get_object('viewwindow') self.goForward = self.builder.get_object('goForward') self.zoomIn = self.builder.get_object('zoomIn') self.zoomOut = self.builder.get_object('zoomOut') self.webview = WebKit.WebView() self.viewwindow.add(self.webview) self.webview.show() def on_addressbar_activate(self, widget): address = widget.get_text() self.webview.open(address) def on_refreshButton_clicked(self, widget): self.webview.reload() def on_goBack_clicked(self,widget): self.webview.go_back(); def on_goForward_clicked(self,widget): self.webview.go_forward(); def on_zoomIn_activate(self,widget): self.webview.zoom_in(); def on_zoomOut_activate(self,widget): self.webview.zoom_out(); def on_goButton_clicked(self,widget): self.webview.open(self.addressbar.get_text()) I am currently developing a web browser in Python using python-webkit. The source code above is written in order to manage the main window. How can I constantly display current URL using webview.get_uri() method return value inside a label? Thanks in advance Answer: while True: updateLocationBar() # ;) Surely the location bar already provides this? You could call your label modifying function wherever `self.webview.open()` is called by writing a wrapper like def self.webview.mysuperawesomeopen(uri): updateLocationBar() self.webview.open(uri) and then modifying the calls appropriately. 
Or, if you don't want to change the calls, wrap the existing `.open()` method and re-attach it under the same name:

    self.webview.openit = self.webview.open

    def open_with_label(uri):
        updateLocationBar(uri)
        self.webview.openit(uri)

    self.webview.open = open_with_label

**Edit:** You could also use a function decorator on the existing `open()` like

    def labelled(f):
        def labelgo(*args, **kwargs):
            updateLocationBar(*args, **kwargs)
            return f(*args, **kwargs)
        return labelgo

    # wrap the existing method and leave the rest of the code alone
    self.webview.open = labelled(self.webview.open)

This would be a better solution if you want to generalise (and you would make the labelled decorator more general - I left this simple as an example).
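A minimal sketch of how this could plug into the window class from the question (it assumes `currentaddresslabel` is a Gtk.Label and that updating it on every `open()` call is enough; redirects followed by WebKit itself would need a signal handler, or a refresh from `self.webview.get_uri()` in the navigation handlers):

    def finish_initializing(self, builder):
        # ... existing setup from the question ...
        self.webview = WebKit.WebView()
        self.viewwindow.add(self.webview)
        self.webview.show()

        # wrap open() so the label always shows the last requested URL
        original_open = self.webview.open
        def open_and_update(uri):
            self.currentaddresslabel.set_text(uri)
            return original_open(uri)
        self.webview.open = open_and_update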
How to insert value to specific cell in array in Python? Question: I need to get 10 numbers from the user, and then calculate the number of times each digit appears in all the numbers. I wrote the following code:

    # Reset variable
    aUserNum=[]
    aDigits=[]

    # Ask the user for 10 numbers
    for i in range(0,2,1):
        iNum = int(input("Please enter your number: "))
        aUserNum.append(iNum)

    # Reset aDigits array
    for i in range(0,10,1):
        aDigits.append(0)

    # Calc the count of each digit
    for i in range(0,2,1):
        iNum=aUserNum[i]
        print("a[i] ",aUserNum[i])
        while (iNum!=0):
            iLastNum=iNum%10
            temp=aDigits[iLastNum]+1
            aDigits.insert(iLastNum,temp)
            iNum=iNum//10
    print(aDigits)

From the result, I can see that temp is not working. When I write temp=aDigits[iLastNum]+1, shouldn't it mean that the cell at index iLastNum will get its current value plus 1? thanks, Yaniv

Answer: You can concatenate all the inputs to get a single string and use this with `collections.Counter()`:

    import collections

    ct = collections.Counter("1234567890123475431234")

    ct['3'] == 4

    ct.most_common() # gives a list of tuples, ordered by times of occurrence
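A side note on the posted code: `aDigits.insert(iLastNum, temp)` inserts an extra element and shifts the rest of the list along, it does not replace the existing value; plain index assignment, e.g. `aDigits[iLastNum] = aDigits[iLastNum] + 1` (or `aDigits[iLastNum] += 1`), does what the title asks for. A sketch of the whole flow with `Counter`, assuming Python 3 style `input()` as in the question:

    import collections

    # read the ten numbers as strings so the digits are easy to count
    numbers = [input("Please enter your number: ").strip() for _ in range(10)]

    digit_counts = collections.Counter("".join(numbers))
    for digit in "0123456789":
        print(digit, "appears", digit_counts[digit], "times")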
Unloading an Unused Module in Python (A Very Specific Case) Question: First of all, I must tell you that I have already looked at this [bug](http://bugs.python.org/issue9072) and I understand that the feature is (in general) not going to be possible for a long time. However, I have a use case which is very specific. Hence, I will try to describe my use case and ask for suggestions.

I am writing an interactive python application which is run from the interpreter. So, the user might make a mistake or two when importing modules, and I would like to provide a method for the user to delete the module (as soon as he has imported it). So, one of the problems with the references to the module being already incorporated into other objects is gone.

Once I am sure that the module has not been used at all, what can I do to remove it? Is it still technically possible? I was thinking that if I could create a function which manually deletes every object/function created by the module when imported, then what I want might be accomplished. Is this true? IPython does a [similar operation](http://ipython.org/ipython-doc/dev/api/generated/IPython.core.extensions.html#module-IPython.core.extensions) with its extensions. Is that the correct way to go?

Answer: Modules are just a namespace, an object, stored in the `sys.modules` mapping. If there are no other references to anything belonging in that module, you can remove it from the `sys.modules` mapping to delete it again:

    >>> import timeit
    >>> import sys
    >>> del timeit # remove local reference to the timeit module
    >>> del sys.modules['timeit'] # remove module
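A small helper along those lines (a sketch - it only forgets the `sys.modules` entry; any names the user bound with `from module import name` still live in their own namespace and have to be deleted separately):

    import sys

    def unload(module_name):
        """Forget a module so that the next import executes it afresh."""
        if module_name in sys.modules:
            del sys.modules[module_name]
            return True
        return False

    # typical interactive use:
    # import mymodule      # oops, wrong module
    # unload('mymodule')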
python-twitter in google app engine Question: I am trying to use python-twitter api in GAE. I need to import Oauth2 and httplib2. Here is how I did For OAuth2, I downloaded github.com/simplegeo/python- oauth2/tree/master/oauth2. For HTTPLib2, I dowloaded code.google.com/p/httplib2/wiki/Install and extracted folder python2/httplib2 to project root folder. my views.py import twitter def index(request): api = twitter.Api(consumer_key='XNAUYmsmono4gs3LP4T6Pw',consumer_secret='xxxxx',access_token_key='xxxxx',access_token_secret='iHzMkC6RRDipon1kYQtE5QOAYa1bVfYMhH7GFmMFjg',cache=None) return render_to_response('fbtwitter/index.html') I got the error [paste.shehas.net/show/jbXyx2MSJrpjt7LR2Ksc](http://paste.shehas.net/show/jbXyx2MSJrpjt7LR2Ksc) AttributeError AttributeError: 'module' object has no attribute 'SignatureMethod_PLAINTEXT' Traceback (most recent call last) File "D:\PythonProj\fbtwitter\kay\lib\werkzeug\wsgi.py", line 471, in __call__ return app(environ, start_response) File "D:\PythonProj\fbtwitter\kay\app.py", line 478, in __call__ response = self.get_response(request) File "D:\PythonProj\fbtwitter\kay\app.py", line 405, in get_response return self.handle_uncaught_exception(request, exc_info) File "D:\PythonProj\fbtwitter\kay\app.py", line 371, in get_response response = view_func(request, **values) File "D:\PythonProj\fbtwitter\fbtwitter\views.py", line 39, in index access_token_secret='iHzMkC6RRDipon1kYQtE5QOAYa1bVfYMhH7GFmMFjg',cache=None) File "D:\PythonProj\fbtwitter\fbtwitter\twitter.py", line 2235, in __init__ self.SetCredentials(consumer_key, consumer_secret, access_token_key, access_token_secret) File "D:\PythonProj\fbtwitter\fbtwitter\twitter.py", line 2264, in SetCredentials self._signature_method_plaintext = oauth.SignatureMethod_PLAINTEXT() AttributeError: 'module' object has no attribute 'SignatureMethod_PLAINTEXT' It seems I did not import Oauth2 correctly when I tracked the error in twitter.py self._signature_method_plaintext = oauth.SignatureMethod_PLAINTEXT() I even go to twitter.py and add `import oauth2 as oauth` but it couldnt solve the problem Can anybody help? Answer: I fixed it. In twitter.py, try: from hashlib import md5 except ImportError: from md5 import md5 import oauth CHARACTER_LIMIT = 140 # A singleton representing a lazily instantiated FileCache. DEFAULT_CACHE = object() REQUEST_TOKEN_URL = 'https://api.twitter.com/oauth/request_token' ACCESS_TOKEN_URL = 'https://api.twitter.com/oauth/access_token' AUTHORIZATION_URL = 'https://api.twitter.com/oauth/authorize' SIGNIN_URL = 'https://api.twitter.com/oauth/authenticate' Need to change `import oauth` to `import oauth2 as oauth`
AttributeError: 'module' object has no attribute 'TreeTagger' Question: I am trying to use the **Python** wrapper for `TreeTagger`, a Part-of-Speech Tagger. The code I use for importing and invoking the wrapper is:

    import TreeTaggerWrapper

    tagger = TreeTaggerWrapper.TreeTagger(TAGLANG='en',TAGDIR='D:/Programme/TreeTagger')
    tags = tagger.TagText("This is a very short text to tag.")
    print tags

the error is: `AttributeError: 'module' object has no attribute 'TreeTagger'`

The `__init__.py` exists in the TreeTaggerWrapper directory and is empty. How would I go about systematically resolving the issue?

Answer: Try this wrapper: <http://cental.fltr.ucl.ac.be/team/~panchenko/def/treetaggerwrapper.py>

There is documentation inside the file.

**Update**

Copy the file `treetaggerwrapper.py` to `python/Lib`. Try this:

    import treetaggerwrapper

    tagger = treetaggerwrapper.TreeTagger(TAGLANG='en',TAGDIR='D:/Programme/TreeTagger')
    tags = tagger.TagText("This is a very short text to tag.")
    print tags

**Update 2**

If you have `Lib/site-packages/TreeTaggerWrapper/treetaggerwrapper.py`, then you should do this:

    from TreeTaggerWrapper import treetaggerwrapper

    tagger = treetaggerwrapper.TreeTagger(TAGLANG='en',TAGDIR='D:/Programme/TreeTagger')
    tags = tagger.TagText("This is a very short text to tag.")
    print tags