Debugging modifications of sys.path
Question: Some library seems to modify my `sys.path`, although I don't want it to be
changed.
How can I find the Python code line which alters `sys.path`?
**Related**
* [Where does sys.path get replaced?](http://stackoverflow.com/questions/33212466/where-does-sys-path-get-replaced/33212467)
Answer: Among the first things imported are the `sitecustomize` and `usercustomize`
modules; you could replace `sys.path` with a custom list implementation that
records all changes being made.
First, find where to place a `usercustomize` or `sitecustomize` module; the
[`site` module](https://docs.python.org/2/library/site.html) can tell you
where to place the former:

    python -m site --user-site

If that directory doesn't exist yet, create it, and in it put a
`usercustomize.py` with:
    import sys

    class VerboseSysPath(list):
        def croak(self, action, args):
            # report which file and line triggered the mutation
            frame = sys._getframe(2)
            print('sys.path.{}{} from {}:{}'.format(
                action, args, frame.f_code.co_filename, frame.f_lineno))

        def insert(self, *args):
            self.croak('insert', args)
            return super(VerboseSysPath, self).insert(*args)

        def append(self, *args):
            self.croak('append', args)
            return super(VerboseSysPath, self).append(*args)

        def extend(self, *args):
            self.croak('extend', args)
            return super(VerboseSysPath, self).extend(*args)

        def pop(self, *args):
            self.croak('pop', args)
            return super(VerboseSysPath, self).pop(*args)

        def remove(self, *args):
            self.croak('remove', args)
            return super(VerboseSysPath, self).remove(*args)

        def __delitem__(self, *args):
            self.croak('__delitem__', args)
            return super(VerboseSysPath, self).__delitem__(*args)

        def __setitem__(self, *args):
            self.croak('__setitem__', args)
            return super(VerboseSysPath, self).__setitem__(*args)

        def __setslice__(self, *args):
            self.croak('__setslice__', args)
            return super(VerboseSysPath, self).__setslice__(*args)

    sys.path = VerboseSysPath(sys.path)
This now will complain about all attempts at altering the `sys.path` list.
Demo, with the above placed in either `site-packages/sitecustomize.py` or a
`usercustomize.py` in the `python -m site --user-site` directory:
    $ cat test.py
    import sys

    sys.path.append('')
    $ bin/python test.py
    sys.path.append('',) from test.py:3
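The same idea can be exercised on a plain list, without touching `sys.path` at all. This is a sketch (the `VerboseList` name and `log` attribute are made up here for illustration) showing how `sys._getframe` identifies the mutating call site:

```python
import sys

class VerboseList(list):
    """Sketch: a list that records which file/line mutates it."""
    def __init__(self, *args):
        super(VerboseList, self).__init__(*args)
        self.log = []

    def append(self, item):
        # _getframe(1) is the frame of whoever called append()
        frame = sys._getframe(1)
        self.log.append(('append', frame.f_code.co_filename, frame.f_lineno))
        return super(VerboseList, self).append(item)

paths = VerboseList(['/usr/lib/python'])
paths.append('/tmp/injected')
```

Replacing the real `sys.path` works the same way; the `_getframe(2)` in the answer's version accounts for the extra `croak` call in between.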
|
Why do widgets declared later appear first?
Question: I am using Python's `tkinter` library to create a small GUI. The code of the
app is as follows :
    import tkinter as tk
    from tkinter import ttk

    APP_TITLE = "HiDE"

    class Application(tk.Frame):
        """This class establishes an entity for the application"""

        #The constructor of this class
        def __init__(self, master=None):
            tk.Frame.__init__(self, master)
            self.grid()

        def setupWidgets(self):
            self.message = tk.Label(self, text='Login')
            self.quitButton = tk.Button(self, text='Quit', command=self.quit)
            self.logButton = tk.Button(self, text='Login', command=self.quit)
            self.master.title(APP_TITLE)
            self.master.minsize("300", "300")
            self.master.maxsize("300", "300")
            self.message.grid()
            self.logButton.grid()
            self.quitButton.grid()

    #Setting up the application
    app = Application()
    img = tk.PhotoImage(file='icon.png')

    #getting screen parameters
    w = h = 300 #max size of the window
    ws = app.master.winfo_screenwidth() #This value is the width of the screen
    hs = app.master.winfo_screenheight() #This is the height of the screen

    # calculate position x, y
    x = (ws/2) - (w/2)
    y = (hs/2) - (h/2)

    #This is responsible for setting the dimensions of the screen and where it is placed
    app.master.geometry('%dx%d+%d+%d' % (w, h, x, y))
    app.master.tk.call('wm', 'iconphoto', app.master._w, img)
    app.master.i_con = tk.Label(app.master, image=img)
    app.master.i_con.grid()

    app.setupWidgets() #<-- This is the function call
    app.mainloop()
The `setupWidgets` function is called after the image is set up, but the
output is:
(screenshot omitted: the widgets created in `setupWidgets` appear above the image label, even though the image was gridded first)
Answer: By the time you grid the Label with the image, you have already called grid on
the Frame in your class. The other widgets are placed inside this Frame, so
the Frame sits above the Label with the image.
Instead of

    app.master.i_con = tk.Label(app.master, image=img)

try

    app.master.i_con = tk.Label(app, image=img)

to put the Label with the image in the Frame with the other widgets.
* * *
On a side note, calling grid without specifying a row and column doesn't
really make sense.
|
Python: Finding text, transforming it, replacing it
Question: Can't seem to articulate this problem well enough to look this up. I'm looking
to use regular expressions to identify certain strings of text, transform
them, and then replace that string with the transformed text.
For instance: "Some random stock symbol: MSFT AAPL INTC"
If I wanted to replace the stock symbol with say a stock symbol that is linked
with HTML, what's the best way to go about doing this?
I can write a regular expression to find the stock symbol and do a simple
substitution of a fixed string, but I don't know how to tell Python to "find
the string in the text, transform it, and replace the string with the
transformed text".
In this particular example, I could split the string on a space and iterate
through the list, but I'd like something that just finds and replaces the text
as Python finds it as there might be cases where a consistent delimiter might
not be available.
Answer:
    import re

    string = 'Some random stock symbol: MSFT AAPL INTC'
    tickers = re.findall('[A-Z]{4}', string)
    for x in tickers:
        string = re.sub(x, 'example.com/' + x, string)
This returns 'Some random stock symbol: example.com/MSFT example.com/AAPL
example.com/INTC'
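As an alternative sketch, `re.sub` also accepts a function as the replacement argument, which does the find-transform-replace in a single pass and avoids the `findall` loop entirely (the ticker pattern and `example.com` URL here are illustrative assumptions):

```python
import re

s = 'Some random stock symbol: MSFT AAPL INTC'

def linkify(match):
    # transform the matched ticker into an HTML link
    ticker = match.group(0)
    return '<a href="https://example.com/{0}">{0}</a>'.format(ticker)

# the replacement function is called once per match
linked = re.sub(r'\b[A-Z]{2,5}\b', linkify, s)
```

This also sidesteps the delimiter problem mentioned in the question, since the regex engine finds the matches wherever they occur.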
|
Python - My class raise ValueError, but it is not handled by except
Question: My Sample Code:
    from tkinter import *

    class first:
        def __init__(self):
            self.top = Tk()
            ...

        def test(self):
            try:
                self.value = self.dict[key]
            except KeyError:
                try:
                    second()
                except ValueError:
                    print('Finally')

    class second:
        def __init__(self):
            self.frame = Toplevel()
            ...
            self.button = ttk.Button(parent=self.frame, text='GO', command=self.go_click)
            ...

        def go_click(self):
            raise ValueError('Not Valid')
That's just an example! The problem is that the ValueError is raised by the
second class, but it is not handled by the except clause of the first class.
Below is the traceback:
    Exception in Tkinter callback
    Traceback (most recent call last):
      File "C:\Python34\lib\tkinter\__init__.py", line 1487, in __call__
        return self.func(*args)
      File "........", line xxx, in goclick
        raise ValueError('Not Valid')
    ValueError: Not Valid
How can I properly handle it?
Thanks,
Answer: Try this:

    from tkinter import *

    class first:
        def __init__(self):
            self.top = Tk()
            ...

        def test(self):
            try:
                self.value = self.dict[key]
            except KeyError:
                try:
                    second()
                except ValueError:
                    print('Finally')
                print('OK CALLED SECOND()!!!!')  # this print means you're already done here

    class second:
        def __init__(self):
            self.frame = Toplevel()
            ...
            self.button = ttk.Button(parent=self.frame, text='GO', command=self.go_click)
            ...

        def go_click(self):
            raise ValueError('Not Valid')
In order to actually handle that error you would need to override tkinter's
event loop... not very easy (or good practice in general).

A better way would be to handle the error in the `go_click` function itself,
something like:

    def go_click(self):
        try:
            self.submit_form()
        except ValueError:
            print('VALIDATION ERROR!!')
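To make that per-callback pattern reusable, one option is a small decorator that routes exceptions from a callback to a handler instead of letting them escape into Tk's event loop. This is a sketch; the names `catch_callback_errors` and `handler` are invented for illustration, and it needs no GUI to demonstrate:

```python
import functools

def catch_callback_errors(handler, exceptions=(ValueError,)):
    """Wrap a Tk callback so the listed exceptions go to `handler`
    instead of propagating into the Tk event loop."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions as exc:
                handler(exc)
        return wrapper
    return decorate

# illustrative usage without a GUI:
errors = []

@catch_callback_errors(errors.append)
def go_click():
    raise ValueError('Not Valid')

go_click()  # the error is handled, not raised
```

In the question's code the decorated method would be passed as `command=` to the Button, so the exception is caught at the only point where it can be: inside the callback itself.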
|
Python function call in JavaScript autocomplete
Question: This is my views.py:

    from flask import Flask
    app = Flask(__name__)

    @app.route('/autocomplete')
    def get_contacts():
        cat_list = []
        cat_list = contact.objects.all()
        return dumps(cat_list)
and this is my JS:

    function showDialog() {
        $("#dialog").dialog({
            width: 600,
            height: 400,
        });
        $.ajax({
            url: '{{url_for("autocomplete")}}'
        }).done(function(data) {
            $("#search").autocomplete({
                source: data
            });
        });
    }
When I try to run this, I get the following error:

    GET http://127.0.0.1:8000/share_win/%7B%7Burl_for(%22autocomplete%22)%7D%7D 404 (NOT FOUND)

Any idea?
Answer: Your issue is coming from here:

    $.ajax({
        url: '{{url_for("autocomplete")}}'
    })

The `url_for` method requires the name of the function for the desired route
rather than the physical route. To fix this, you can either do:

    $.ajax({
        url: '{{url_for("get_contacts")}}'
    })

or, if your Javascript is in a separate JS file (where Jinja templates are not
rendered), use the route directly:

    $.ajax({
        url: '/autocomplete'
    })
|
What is the best way to unconvert sanitized data?
Question: I have a very large set of data (one of Stack Overflow's data dumps) which is
completely in raw, sanitized form.
For example, the posts contain escaped HTML entities such as `&lt;/p&gt;`.
Is there an already established way to convert the above and similar back to
their original form for readability and usability? A Python script or function
call, by chance?
Answer: Here is a solution I had to use to get everything working correctly; note
that the HTML parser didn't do everything I wanted with my data set:

    #!/usr/bin/python3

    import html.parser
    import string
    import sys

    # Amount of lines to put into a buffer before writing
    BUFFER_SIZE_LINES = 1024

    html_parser = html.parser.HTMLParser()

    # A few HTML reserved entities that are not being cleaned up by HTMLParser
    entities = {}
    entities['&quot;'] = '"'
    entities['&#39;'] = "'"
    entities['&amp;'] = '&'
    entities['&lt;'] = '<'
    entities['&gt;'] = '>'

    # Process the file
    def ProcessLargeTextFile(fileIn, fileOut):
        r = open(fileIn, "r")
        w = open(fileOut, "w")
        buff = ""
        buffLines = 0
        for lineIn in r:
            lineOut = html_parser.unescape(lineIn)
            for key, value in entities.items():
                lineOut = lineOut.replace(key, value)
            buffLines += 1
            if buffLines >= BUFFER_SIZE_LINES:
                w.write(buff)
                buffLines = 1
                buff = ""
            buff += lineOut  # lineIn already ends with a newline
        w.write(buff)
        r.close()
        w.close()

    # Now run
    ProcessLargeTextFile(sys.argv[1], sys.argv[2])
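As a side note, on Python 3.4+ the module-level `html.unescape()` function handles the full set of named and numeric character references, which may remove the need for the manual replacement dict above (the sample string here is an invented example):

```python
import html

# escaped text as it might appear in a sanitized dump
sanitized = '&lt;p&gt;Ben &amp; Jerry&#39;s &quot;finest&quot;&lt;/p&gt;'

# one call reverses the sanitization
restored = html.unescape(sanitized)
```

`HTMLParser.unescape` was deprecated in later Python 3 releases in favor of this function.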
|
eBay LMS uploadFile - Cannot specify file with valid format
Question: I am attempting to use the eBay Large Merchant Services API to upload calls in
bulk. This API and its documentation are notoriously bad (I found a blog post that
explains it and rails against it [here](http://medium.com/on-coding/dont-write-your-api-this-way-a1b745078b94)).
The closest I can find to a working Python solution is based on
[this](http://stackoverflow.com/questions/8482135/http-post-request-and-headers-with-mime-attachments-multipart-related-and-xop)
Question/Answer and the related [Github](https://github.com/wrhansen/eBay-LMS-API) version, which
is no longer maintained.
I have followed these examples very closely, and as best I can tell they never
fully worked in the first place. I emailed the original author and he says it
was never put in production. However, it should be very close to working.
All of my attempts result in an **error 11 Please specify a File with Valid
Format** message.
I have tried various ways to construct this call: the Github
method, a method via the email.mime package, and via both the Requests library and
the http.client library. The output below is what I believe is the closest I
have to what it should be. The attached file is the Github repo's
[sample](https://github.com/wrhansen/eBay-LMS-API/blob/master/examples/AddItem.xml)
XML file, and it is read and gzipped
using the [same methods](https://github.com/wrhansen/eBay-LMS-API/blob/master/lmslib.py#L326-L346)
as in the Github repo.
Any help on this would be appreciated. I feel like I have exhausted my
possibilities.
POST https://storage.sandbox.ebay.com/FileTransferService
X-EBAY-SOA-SERVICE-VERSION: 1.1.0
Content-Type: multipart/related; boundary=MIME_boundary; type="application/xop+xml"; start="<0.urn:uuid:86f4bbc4-cfde-4bf5-a884-cbdf3c230bf2>"; start-info="text/xml"
User-Agent: python-requests/2.5.1 CPython/3.3.4 Darwin/14.1.0
Accept: */*
Accept-Encoding: gzip, deflate
X-EBAY-SOA-SECURITY-TOKEN: **MYSANDBOXTOKEN**
X-EBAY-SOA-SERVICE-NAME: FileTransferService
Connection: keep-alive
X-EBAY-SOA-OPERATION-NAME: uploadFile
Content-Length: 4863
--MIME_boundary
Content-Type: application/xop+xml; charset=UTF-8; type="text/xml; charset=UTF-8"
Content-Transfer-Encoding: binary
Content-ID: <0.urn:uuid:86f4bbc4-cfde-4bf5-a884-cbdf3c230bf2>
<uploadFileRequest xmlns:sct="http://www.ebay.com/soaframework/common/types" xmlns="http://www.ebay.com/marketplace/services">
<taskReferenceId>50009042491</taskReferenceId>
<fileReferenceId>50009194541</fileReferenceId>
<fileFormat>gzip</fileFormat>
<fileAttachment>
<Size>1399</Size>
<Data><xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include" href="cid:urn:uuid:1b449099-a434-4466-b8f6-3cc2d59797da"/></Data>
</fileAttachment>
</uploadFileRequest>
--MIME_boundary
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
Content-ID: <urn:uuid:1b449099-a434-4466-b8f6-3cc2d59797da>
b'\x1f\x8b\x08\x08\x1a\x98\xdaT\x02\xffuploadcompression.xml\x00\xcdW[W\xdb8\x10~\x0e\xbfB\x9b}\x858\xd0\xb4\x14\x8e\xebnn\x94l\xe3\x96M\xccv\xfb\xd4#\xec!\xd1V\x96\\I\x06\xfc\xefw$\xdbI\x9cK\x0f\xec\xbe,\xe5P{f\xbe\xb9i\xe6\xb3\xed\xbf\x7fJ9y\x00\xa5\x99\x14\xef\xda\xa7\x9dn\x9b\x80\x88e\xc2\xc4\xe2]\xfb6\xba:y\xdb~\x1f\x1c\xf9\x83\x9c\x7f\x1fQC\xc7O\xf1\x92\x8a\x05\xcc\xe0G\x0e\xdah\x82p\xa1\xdf\xb5s%.\xe1\x8e\x16\x974c\xfa\x12\x06\xd3\x01\xd50\x94i&\x05\x08\xa3\xdb\xc1Q\xcb\xbf\x06\x9a\x80\xc2\xab\x96\xffg\x1908\xef\x9d\xfb^}c\x15sf`2\nN\xbb]\xdf\xab\xae\x11\xe9\xad\xa1-\xbf\x9f$\x13\x03i\x95\xc1\x0b\x12 \x04\xd1c\xa5\xa4\x9ab\t9]@\x00\xe2\xdb\xed\xdc\xf7\x9a\xc2\xca\xf2\x0bU\x02\xbb0\x85\x07\xe0\xc15[,}\xaf!\xaa\xcc\x0e\x95\xd2\xf2C\xd0\x1a\xfda\t\x11&\x8a8\xf2\t\x1e\xc9\xdc({y%UJ\x8d\xef\xad\x8d\x10\x83\x0e}[\x9b\xc3\xb7\xfc\x88\x19\x0e\xc1\xf5\xefC2\xffz\x12\xf6\xff\xc2\xff\xec\xdf\xc9\x84\x9c\x91\xeb\xf14\x1cG\xa4\xff)\xba\x9e\xf5\x87\x93hLB/\x1c\x8f&\xb7\xa1\xef\x95\xb8\xd2\xc7\x08t\xacXflV\xd1\x92i\x82\xbf\x94\xc4Rr\xb2\x04\x9e\x829&\xccX\xe1\x9d\xa2"!\x023\xbc+\x88Y\x02y\xa4\xc5/\xbe\xb7\x89/=\xde(\x96RU\x0c\xa9\x81\x85TE)m\xf9\xf5=V\xf2\xe6\xbcw\xe1{\x1b\x82\x12\xe8\xedE\xfasC\x95AU\x0c\xc1\xc5E\xe7\x02\x91\x1b\x92\xd2d(E\xc2l\n\xe5h\xe0llJ\x8e\x1a\xf1C\x9ae\xd8\xe0>\xe7\xf2\x11\x92 
R9\xacs\xd9R\xd6\xdesa0\x1d;\t\xf5u\xa5\xc9\x95\xc2m\xb0\xaa\x11\xea\xea\xbb\xaa\xb3Lg\xd4\xc4\xcb\x88\xa5\x10\xd2\xa7\xe0\x156kKT\x1aN\x99;\xfdQ\xae\xa8k\xe3\x88\x16\xfa\xdb)\x16\xb1\xadh\x98GE\x06\xc1\x15{\x82\xc4u\xc2\x8e\xc5\n\xe1t\xd5i\xd0"\xc5\x01\x0f\xc1,e\xa2\x03\xbc\xbd\xa1\x1c[\xdd\x14\xaflQ9N)\xe3\xb8D\n\'/\xb0+\xf3\x9bb\xb8\\:a:\xb6\xd5wb\x99:\x07\xdb\xb6\x95\x13\x16\x9b\\\xc1\x08\x0c\xaat}\xfa\x95\xf4v6\r\x96\xc6d\x97\x9e\x07\x9d]\xb7\x1e\x9e\xff\x02\xb4\x87#\xedu\xcf\xce\xce\xbbo\xbd\xd7\xe7oN^\xbf9\xfd\xa6\xd3\xce\xdf\xd9\x02\xe3\xae\x1d\xd5S\xb3\'\xa0\x7f#\xb5\xa1|(\x13\x08z\x17\xbd\xb3\x1e\x9a\xad%\xa5\xc9\x1f9\x15\x86\x99"\xb0\xad^\xdd\x94\xba\x19\xa0Kq#9\x8bW\x03<\x83\xfb\\$\x9f\xcbQ\xafy\xce\xf7\x1a\xe2\x86\xe9\x8e\xd1Zm\xbd\xeb/\xcc,\x99\xa8\x90\xe5\xa1\xf7\xac\xe9\xaer\x1f.8\xed\x11\x0b\xdaBl\xd9\xf6\xe3\x182\x03u~[\xd2\x15v\xcbl\xbf\x8f\x1aM\x0e\xc2k\xe0\x0e)\xb4\xeaV \xb9( \x19\xa8\x94\x19\x04\x1c\x13x\xb2P"\x05\x01\x0e\xb1QR\xb0\x18\x19\x07R}L,\xe1\xa49r\xf8\x1d\x10&p\x9dqK\x13\xf2\xe8\xea$X~\x82\xe5\x13yO\x14\xc4\x80\xd1:$\x04e\xc3\xe0H\xc1\x06\xd0\x91V\\\x13\x82\xc3\x13\xca9\xc9h\xfc\x9d.p]\x8eIJENy\x152\xa3\x98\xdf\xa3T\xdf\x11khl\x9c0\x17\x94\x1bP\x90tH$m\xd6\xae\x9cc\x92q\xc0\x07\x89u\xefLc\x8c*SPD\x83z\xc0\xb5$F\x96\xe9=\x00\xd2\xea,\xec\xff\xea\xbc\xc9\\\xad|\x90{\xa4\xfa\x0e\x19O\xc7\xc3h\xf6\xf9\xd3dH\x90\xad\xc3\xf91Ir\x07G\xaee\xe8/\x83\x98QN\x04\xb5\xc3Nb*\x84t\xf5\xd5n\x12\xeb\x07\x9d\x17\x18\x8fj\xac\xd3\xc6\xb1\xcd\xd6\x92\x03/0\xc3\x07\x9b>I\x18\xe6c\xb8\xe5p%\xf3\xc5\xb2\xf2\x8f\x1b\x8c\x11\x8c\xcd\xd36\xe3\x9e\xba\xa5R\xbaS\x1d\xe9.\xd1#3/\x99\xa3\xcb!\xae\xd6\re\xc9\xa0\xa8\xe6g\x90\x17\xa0\x90\xa7\x0f\xe9\x0f\xe2\x0f#\xebm\xdf\xdd\xcc\x95\x9b\r\x06X\xc9\xe6\xe51\x94qKr\xd8\xd6\x05\xf5}I\x86\xf8p\x11\tU\xc9:\x89\xda\xae\x19\xad\x92\xda\x0c\x83n\xa7\xbbc\xee\x14{!\xc8\x97n\x12-\x1b\x1d\x00o\x99\xecu\x83\xb4/\x95\xe3\xaf\x1d\xf8JU\x02\xc7O\x19\xa0?H\xeaJ\xeeq\xd6\x91\x95v\xe4\xae=W\n\xa0\xf6\x17\x18\xf7xl\x88l{\xbd\xc3\xfd\x
f5\'\x02\xf7D\xd02\xfd\xbdv\x07\x8e\xa1j|\x03\xbf\xff\x14\xf6\x1d\x02\xae^\xf9\xf8\x9d\x8c\xf0\xc5t>j\x07g\xf8\xb6\xf0\xf6\xf0\xb9\x1c\xec\xe7\xd9\xcf\xfb\xe9p\x91\x9c\xca\xb8|*\x0f\xfb\xa5\xfd\x86\xc8\xb5\xe8y}\xf8\xff\xb4\xeb\xd5\xbfl\xd7\xab\x97\xb5\xab\x8f\xec\xc8b\xaa\xf75m\xc7x\x9c+\x99\xc1\xb3L\xfb\x9a\xd1\xe7\x19\xde\xfe\xa7\xf3\xaa5\xe5\xfb\x17\xb7\xef\xe8\r\xd1\x91[\xb8\x98\xe7\tlE\x99~\xb4+\xb7Os\x183\x19\xbd\x1c\x13~}9\xe6\xc3\xe0\xe5\x98\xf9\x87\x9fa\xe6\xc09\xa8\xbdz}\xa3\xe0\x1e\xec\xf4AE0\xcf4>\xda`\x9e\xe6\xfb\x9e\xfd\x16\x0c`@\x8bP\x1a\xa9t\xf9qX\xeb>\xde\xba\xaf\x02\xfc9\xb1\xbb\x8d\xb7n.\xbc\xfaS\xca\xf7\x9a\xdf\x8cVv\xe4{\x87\xbeiQ]\xff\xfb\x07\xe0\xf2E>\x1f\x0f\x00\x00'
--MIME_boundary--
**EDIT:** Showing the methods I have tried to actually generate the binary
data:

    with open('testpayload.xml', mode='rt') as myfile:
        full_xml = myfile.read()  # replacing my full_xml with testpayload.xml, from the Github repo

    # METHOD 1: Simple gzip.compress()
    payload = gzip.compress(bytes(full_xml, 'UTF-8'))

    # METHOD 2: Implementing http://stackoverflow.com/a/8507012/1281743 which should produce a gzip-compatible string with header
    import io
    out = io.BytesIO()
    with gzip.GzipFile(fileobj=out, mode="w") as f:
        f.write(bytes(full_xml, 'UTF-8'))
    payload = out.getvalue()

    # METHOD 3: Simply reading in a pre-compressed version of the same testpayload.xml file, compressed on the command line with 'gzip testpayload.xml'
    with open('testpayload.xml.gz', mode='rb') as myfile:
        payload = myfile.read()

    # METHOD 4: Adopting the _generate_date() function from the github eBay-LMS-API repo http://goo.gl/YgFyBi
    mybuffer = io.BytesIO()
    fp = open('testpayload.xml', 'rb')  # This file is from https://github.com/wrhansen/eBay-LMS-API/
    # Create a gzip object that writes the compressed data to the BytesIO buffer
    gzipbuffer = gzip.GzipFile('uploadcompression.xml.gz', 'wb', 9, mybuffer)
    gzipbuffer.writelines(fp)
    gzipbuffer.close()
    fp.close()
    mybuffer.seek(0)
    payload = mybuffer.read()
    mybuffer.close()

    # DONE: send payload to API call
    r = ebay.fileTransfer_upload_file(file_id='50009194541', task_id='50009042491', gzip_data=payload)
**EDIT2, THE QUASI-SOLUTION:** Python was not actually sending binary data,
but rather a string representation of binary, which did not work. I ended up
having to base64-encode the file first. I had tried that before, but had been
waved off of it by a suggestion on the eBay forums. I just changed one of
the last headers to `Content-Transfer-Encoding: base64` and I got a success
response.
**EDIT3, THE REAL SOLUTION:** Base 64 encoding only solved my problems with
the example XML file. I found that my own XML payload was NOT sending
successfully (same error as before), but that the script on Github could send
that same file just fine. The difference was that the Github script is Python2
and able to mix string and binary data without differentiation. To send binary
data in Python3 I just had to make a few simple changes to change all the
strings in the request_part and binary_part into bytes, so that the bytes from
the gzip payload would concatenate with it. Looks like this:
    binary_part = b'\r\n'
    binary_part += b'--MIME_boundary\r\n'
    binary_part += b'Content-Type: application/octet-stream\r\n'
    binary_part += b'Content-Transfer-Encoding: binary\r\n'
    binary_part += b'Content-ID: <' + URN_UUID_ATTACHMENT.encode('utf-8') + b'>\r\n\r\n'
    binary_part += gzip_data + b'\r\n'
    binary_part += b'--MIME_boundary--'
So, no base64 encoding, just needed to figure out how to make Python send real
binary data. And the original github repo **does** work as advertised, in
python2.
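A quick round-trip check (a sketch, assuming Python 3) can confirm that the payload variable really holds raw gzip bytes, not a string rendering of them, before it is spliced into the MIME body:

```python
import gzip
import io

full_xml = '<AddItemRequest>example</AddItemRequest>'  # stand-in for the real file

# compress into an in-memory buffer, as in METHOD 2 above
out = io.BytesIO()
with gzip.GzipFile(fileobj=out, mode='wb') as f:
    f.write(full_xml.encode('utf-8'))
payload = out.getvalue()

# payload must be bytes, start with the gzip magic number,
# and decompress back to the original XML
assert isinstance(payload, bytes)
assert payload[:2] == b'\x1f\x8b'
assert gzip.decompress(payload).decode('utf-8') == full_xml
```

If `isinstance(payload, bytes)` fails, the string/bytes mixing described above is the likely culprit.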
Answer: So after scanning all my code about this, it looks like it's set up right. So
if your error is a good error, then it means your binary data is wrong. Can you
show how you're gzipping and then reading that data back in? Here's what I'm
doing:

    $dir = '/.../ebayUpload.gz';
    if(is_file($dir)) unlink($dir);
    $gz = gzopen($dir, 'w9');
    gzwrite($gz, $xmlFile);
    chmod($dir, 0777);
    gzclose($gz);

    // open that file as data;
    $handle = fopen($dir, 'r');
    $fileData = fread($handle, filesize($dir));
    fclose($handle);

And in your case the $xmlFile should be the string found in
<https://github.com/wrhansen/eBay-LMS-API/blob/master/examples/AddItem.xml>
I should add that this is how I'm using $fileData:

    $binaryPart = '';
    $binaryPart .= "--" . $boundry . $CRLF;
    $binaryPart .= 'Content-Type: application/octet-stream' . $CRLF;
    $binaryPart .= 'Content-Transfer-Encoding: binary' . $CRLF;
    $binaryPart .= 'Content-ID: <urn:uuid:'.$uuid_attachment.'>' . $CRLF . $CRLF;
    $binaryPart .= $fileData . $CRLF;
    $binaryPart .= "--" . $boundry . "--";
|
Python plot legend is not draggable, gives an error
Question: I have data and I am trying to produce a plot. It does produce the plot, but
the legend is not draggable. It gives me the error mentioned below.
    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib
    import ast

    ...

    fig, ax = plt.subplots()
    for f in datafile:
        plt.plot(f[0], f[1])
    plt.tick_params(axis='both', which='major', labelsize=15)
    plt.legend(datanames)
    plt.legend(loc='best', numpoints=1)
    plt.legend().draggable()
    plt.grid()
    plt.show()
Running this gives me an error:

    AttributeError: 'NoneType' object has no attribute 'draggable'

How can it be fixed? Thanks.
Answer: Every time you call `plt.legend` it creates a _new_ legend (discarding the
old). In the final call you pass no labels and none of your artists have a
label, so no legend gets created and `plt.legend` returns `None`. Just do it
like so:

    legend = plt.legend(datanames, loc='best', numpoints=1)
    legend.draggable()
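A minimal runnable sketch of the fix (using the non-interactive Agg backend so no display is needed; the series data and label are invented, and newer matplotlib versions spell the call `set_draggable`):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; no window required
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [1, 0], label='series A')

# create the legend once and keep the reference
legend = ax.legend(loc='best', numpoints=1)

# older matplotlib: legend.draggable(); newer: legend.set_draggable(True)
if hasattr(legend, 'set_draggable'):
    legend.set_draggable(True)
else:
    legend.draggable()
```

The key point is that the legend is created exactly once, so the returned object is never `None`.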
|
Learning GUI with Python (PyQt4): showMessage on statusbar
Question: I am learning Python; I am new to it.
<http://zetcode.com/gui/pyqt4/firstprograms/>
Right now, I am trying to make a calculator for exercise. I changed a bit of the
sample program, but I got an error:
    Traceback (most recent call last):
      File "customCalculatorGUI.py", line 50, in buttonClicked
        self.showMessage(sender.text() + ' was pressed')
    AttributeError: 'Example' object has no attribute 'showMessage'
    import sys
    from PyQt4 import QtGui, QtCore

    """
    ZetCode PyQt4 tutorial
    In this example, we create a skeleton
    of a calculator using a QtGui.QGridLayout.
    author: Jan Bodnar
    website: zetcode.com
    last edited: July 2014
    """

    class Example(QtGui.QWidget):
        def __init__(self):
            super(Example, self).__init__()
            self.initUI()

        def initUI(self):
            grid = QtGui.QGridLayout()
            self.setLayout(grid)
            names = ['Cls', 'Bck', '', 'Close',
                     '7', '8', '9', '/',
                     '4', '5', '6', '*',
                     '1', '2', '3', '-',
                     '0', '.', '=', '+']
            positions = [(i, j) for i in range(5) for j in range(4)]
            for position, name in zip(positions, names):
                if name == '':
                    continue
                button = QtGui.QPushButton(name)
                button.clicked.connect(self.buttonClicked)
                grid.addWidget(button, *position)
            self.move(300, 150)
            self.setWindowTitle('Calculator')
            self.show()

        def buttonClicked(self):
            sender = self.sender()
            self.showMessage(sender.text() + ' was pressed')

    def main():
        app = QtGui.QApplication(sys.argv)
        ex = Example()
        sys.exit(app.exec_())

    if __name__ == '__main__':
        main()
I checked <http://qt-project.org/doc/qt-4.8/qwidget.html> and there is no
`showMessage` method there, but I do not know how else to show a message.
If I want to show a message on the statusbar when a button is clicked, how can
I do it?
Answer: Check
[this](http://pyqt.sourceforge.net/Docs/PyQt4/qstatusbar.html#showMessage).
You will need a `QStatusBar` object and to call `showMessage` on that object.
Note that a plain `QWidget` has no status bar of its own; `QMainWindow`
provides one via its `statusBar()` method.
I did this long back, so this is not complete coding help, but it should point
you in the right direction. Check if this helps?
|
Capturing raw packets into an output file through python script
Question: The script that I created receives raw TCP packets and captures them in a
file. But the problem is it is not capturing into a file and throws me an
error message: `'str' object has no attribute 'write'`. Can someone tell me what
I am doing wrong in the file-handling part of the script?
    import struct
    import socket
    import os
    import sys
    from struct import *

    #create INET Streaming socket
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    except socket.error , msg:
        print 'Socket could not be created. Error Code : ' + str(msg[0]) + ' Message ' + msg[1]
        sys.exit()

    #Open a file
    try:
        file = raw_input("[*] Please provide a name for capture file: \n")
        FileOpen = open(file, "a")
        print "\n[*] Capture file %s will be written to %s. " % (file, os.getcwd())
    except:
        print "\n[*] ERROR! There was an issue opening your file"

    # receive a packet
    while True:
        packet = s.recvfrom(65565)
        #packet string from tuple
        packet = packet[0]
        #take first 20 characters for the ip header
        ip_header = packet[0:20]
        #now unpack them :)
        iph = unpack('!BBHHHBBH4s4s', ip_header)
        version_ihl = iph[0]
        version = version_ihl >> 4
        ihl = version_ihl & 0xF
        iph_length = ihl * 4
        ttl = iph[5]
        protocol = iph[6]
        s_addr = '111.111.111.111'
        d_addr = '127.0.0.1'
        print 'Version : ' + str(version) + ' IP Header Length : ' + str(ihl) + ' TTL : ' + str(ttl) + ' Protocol : ' + str(protocol) + ' Source Address : ' + str(s_addr) + ' Destination Address : ' + str(d_addr)
        file.write("\n\t[-] Layer 3[-]\n\n[*] Source IP: %s\n[*] Destination IP: %s\n" % (s_addr, d_addr))
        tcp_header = packet[iph_length:iph_length+20]
        #now unpack them :)
        tcph = unpack('!HHLLBBHHH', tcp_header)
        source_port = 1234
        dest_port = 80
        sequence = tcph[2]
        acknowledgement = tcph[3]
        doff_reserved = tcph[4]
        tcph_length = doff_reserved >> 4
        print 'Source Port : ' + str(source_port) + ' Dest Port : ' + str(dest_port) + ' Sequence Number : ' + str(sequence) + ' Acknowledgement : ' + str(acknowledgement) + ' TCP header length : ' + str(tcph_length)
        file.write("\n\t[-]Layer 4[-]\n\n[*]Source Port: %s\n[*]Destination Port: %s\n" % (source_port, dest_port))

    file.close()
Answer: You are writing to the wrong object. `FileOpen = open(file, "a")` should
probably be `file = open(file, "a")`. – zakjan
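The header-unpacking logic itself can be tested without root privileges or raw sockets by packing a synthetic IPv4 header first. This is a sketch with arbitrary field values, using the same format string as the script:

```python
from struct import pack, unpack

# build a fake 20-byte IPv4 header: version 4, IHL 5, TTL 64, protocol 6 (TCP)
ip_header = pack('!BBHHHBBH4s4s',
                 (4 << 4) | 5,          # version/IHL byte
                 0, 40, 1, 0,           # TOS, total length, id, flags/fragment
                 64, 6, 0,              # TTL, protocol, checksum
                 b'\x0a\x00\x00\x01',   # source 10.0.0.1
                 b'\x0a\x00\x00\x02')   # destination 10.0.0.2

# the same unpacking the script performs on a received packet
iph = unpack('!BBHHHBBH4s4s', ip_header)
version = iph[0] >> 4
ihl = iph[0] & 0xF
iph_length = ihl * 4
```

A check like this makes it easy to verify the bit-shifting before debugging the socket and file-handling code.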
|
Python Requests - getting list of usable methods/URIs?
Question: I need to check how to make an integration with one system, but there is no
documentation. The only thing I got is a URL, a user/password, and the fact that
it uses REST.
Is there a way to simply get a list of methods/URIs with the needed parameters
that can be used in that system?
I mean, does python-requests have some method that would search for such a thing?
For example I have this:

    import requests
    s = requests.Session()
    s.auth = ('user', 'pass')
    r = s.get('https://url.com/', verify=True)

I tried to use `r.__dict__`, but didn't find much with that.
Answer: You need a web spider. Try [Scrapy](http://scrapy.org/).
|
Python: How can I load a created class from a function?
Question: I'm quite new at this language and I've been facing the following trouble:
I have defined a class and I want to create multiple objects of that class
using a function, but this doesn't seem to work:
    class age:
        def __init__(self):
            self.age = value

    def function(name):
        global value
        value = 27
        name = age()

    function("anna")
    print "Age:", anna.age
Output:

    Traceback (most recent call last):
      File "test.py", line 15, in <module>
        print "Age:", anna.age
    NameError: name 'anna' is not defined
Trying different things, I found that the following works. I don't understand the
difference between them... The problem is that I had to leave the variable
_value_ in the environment, and I can't do that since my actual program uses
too many variables that might overlap with it, I mean, that have the same name.
    class age:
        def __init__(self):
            self.age = value

    def function():
        global value
        value = 27

    function()
    anna = age()
    print "Age:", anna.age
Output:

    Age: 27
I would like to make the first code to work... Any advice or ideas? Thank you!
Answer: There are several mistakes here. I should start by saying you need to brush up
on your knowledge (read the docs, look at more tutorials, etc). So first, you
do need to return the object you've just created. So:
    def function(name):
        global value
        value = 27
        name = age()  # This is weird. You've passed in `name`, only to re-assign it.
        return name   # Very important

    anna = function("anna")
    print "Age:", anna.age
Secondly, the class doesn't make much sense... It shouldn't be called `age`,
only to have an `age` attribute. I recommend the following adjustments (in my
humble opinion):
    class Person:
        def __init__(self, age, name):
            self.age = age
            self.name = name

    def createPerson(age, name):
        return Person(age, name)

    anna = createPerson(27, "anna")
    print "Age:", anna.age
But with such a simple function, however, you're almost just better off
declaring the instance in your global scope:
    anna = Person(27, "anna")
    print "Age:", anna.age
### Note:
Your second version works because of scope. This is a _fundamental_ concept in
programming languages. When you define a variable/name-identifier, your
ability to reference it again will depend on the scope it was defined in.
Functions have their own scope, and trying to reference a name created inside
of it from the outside doesn't work.
As I've shown you, another fundamental part of programming is the ability to
use **a return statement** that allows you to catch the result and use it
outside of where it was created originally (the `function` in this case).
### One Last thing
I noticed you meant to use the `global` keyword to declare `value` and use
_that_ to define your `self.age` inside of your `age` class. Never do that;
that's what the constructor is for (`__init__`). As I've shown you in the
above examples, you can _easily pass parameters_ to it (`def __init__(self,
age, name)`).
**_Finally_**, I'll say this: the `global` keyword exists for a
reason, as there _may be_ situations where it's warranted. I personally don't
use it and don't see it used very often at all. There's a good reason for
that, which you may come to understand later and which is really outside the
scope of this question. Just bear in mind that it's _usually_ unnecessary to
declare variables `global` inside a function and that it _usually_ results in
bad code in the long run.
You'll be much better off in the beginning (and in the future, too) if you get
used to passing parameters around.
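If the original goal was really to create one object per name ("anna", and so on) at runtime, the usual idiom is a dict keyed by name rather than dynamically created global variables. A sketch (the `people` registry and `register` helper are invented names):

```python
class Person:
    def __init__(self, age, name):
        self.age = age
        self.name = name

# registry of instances, keyed by name
people = {}

def register(name, age):
    # store each instance under its name instead of inventing globals
    people[name] = Person(age, name)

register('anna', 27)
register('bob', 30)
```

Looking up `people['anna'].age` then works for any name chosen at runtime, which a bare variable assignment inside a function cannot do.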
|
Sparql query JSON error from BNCF endpoint
Question: I'm trying to retrieve results from the BNCF at [this
endpoint](http://digitale.bncf.firenze.sbn.it/openrdf-
workbench/repositories/NS_03_2014/query).
My query (with "ab" as an example) is:

    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT DISTINCT ?source ?label ?content
    WHERE {
        ?source a skos:Concept ;
            skos:prefLabel ?label ;
            skos:scopeNote ?content .
        FILTER regex(str(?label), "ab", "i")
    }
The query is correct; in fact, if you run it there, it works. But when I try to
get the results from my Python, this is the error:

    SyntaxError: JSON Parse error: Unexpected EOF
This is my Python code:

    __3store = "http://digitale.bncf.firenze.sbn.it/openrdf-workbench/repositories/NS_03_2014/query"
    sparql = SPARQLUpdateStore(queryEndpoint=__3store)
    sparql.setReturnFormat(JSON)
    results = sparql.query(query_rdf).convert()
    print json.dumps(results, separators=(',',':'))
I tried the code above according to [this
answer](http://stackoverflow.com/questions/26885347/results-resultsresultsbindings-flask-error);
before, my code was like this:

    __3store = "http://digitale.bncf.firenze.sbn.it/openrdf-workbench/repositories/NS_03_2014/query"
    sparql = SPARQLWrapper(__3store, returnFormat="json")
    sparql.setQuery(query_rdf)
    result = sparql.query().convert()
    print json.dumps(result, separators=(',',':'))

but both throw the same error.
Does anyone know how to fix it? Thanks
**EDIT:**
This is python code, hope it is enough to understand
    import sys
    sys.path.append('cgi/lib')

    import rdflib
    from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore, SPARQLStore
    import json
    from SPARQLWrapper import SPARQLWrapper, JSON

    #MAIN
    print "Content-type: application/json"
    print

    prefix_SKOS = "prefix skos: <http://www.w3.org/2004/02/skos/core#>"
    crlf = "\n"
    query_rdf = ""
    query_rdf += prefix_SKOS + crlf
    query_rdf += '''
    SELECT DISTINCT ?source ?title ?content
    WHERE {
        ?source a skos:Concept;
            skos:prefLabel ?title;
            skos:scopeNote ?content.
        FILTER regex(str(?title), "ab", "i")
    }
    '''

    __3store = "http://digitale.bncf.firenze.sbn.it/openrdf-workbench/repositories/NS_03_2014/query"
    sparql = SPARQLWrapper(__3store, returnFormat="json")
    sparql.setQuery(query_rdf)
    result = sparql.query().convert()
    print result
Running this in Python shell returns:
Content-type: application/json
Warning (from warnings module):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/SPARQLWrapper-1.6.4-py2.7.egg/SPARQLWrapper/Wrapper.py", line 689
RuntimeWarning: Format requested was JSON, but XML (application/sparql-results+xml;charset=UTF-8) has been returned by the endpoint
<xml.dom.minidom.Document instance at 0x105add710>
So I think the result is always XML, even though I specified JSON as the
return format.
Answer: There are a couple of problems playing together here:
First, you should only use `SPARQLUpdateStore` from rdflib if you want to
access a SPARQL store via rdflib's Graph interface (e.g., you can add triples,
you can iterate over them, etc.). If you want to write a SPARQL query yourself
you should use `SPARQLWrapper`.
Second, if you ask SPARQLWrapper to return JSON, what it does is actually ask
the server for a couple of mime types that are most common and standardized
for what we just call "json", as shown
[here](https://github.com/RDFLib/sparqlwrapper/blob/master/SPARQLWrapper/Wrapper.py#L92)
and
[here](https://github.com/RDFLib/sparqlwrapper/blob/master/SPARQLWrapper/Wrapper.py#L451):
_SPARQL_JSON = ["application/sparql-results+json", "text/javascript", "application/json"]
It seems as if your server does understand `application/sparql-results+json`,
but not a combined "give me any of these mime-types" header as rdflib compiles
it for maximum interoperability (so your server essentially doesn't fully
support [HTTP Accept
Headers](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.1)):
curl -i -G -H 'Accept: application/sparql-results+json' --data-urlencode 'query=PREFIX skos:
<http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?source ?label ?content
WHERE {
?source a skos:Concept;
skos:prefLabel ?label;
skos:scopeNote ?content.
FILTER regex(str(?label), "ab", "i")
}' http://digitale.bncf.firenze.sbn.it/openrdf-workbench/repositories/NS_03_2014/query
will return:
HTTP/1.1 200 OK
Date: Mon, 18 May 2015 13:13:45 GMT
Server: Apache/2.2.17 (Unix) PHP/5.3.6 mod_jk/1.2.31
...
Content-Type: application/sparql-results+json;charset=UTF-8
{
"head" : {
"vars" : [ ],
"vars" : [ "source", "label", "content" ],
"link" : [ "info" ]
},
"results" : {
"bindings" : [ {
"content" : {
"type" : "literal",
"value" : "Il lasciare ingiustificatamente qualcuno o qualcosa di cui si è responsabili"
},
"source" : {
"type" : "uri",
"value" : "http://purl.org/bncf/tid/12445"
},
"label" : {
"xml:lang" : "it",
"type" : "literal",
"value" : "Abbandono"
}
},
...
so everything is ok, but if we ask for the combined, more interoperable mime
types:
curl -i -G -H 'Accept: application/sparql-results+json,text/javascript,application/json' --data-urlencode 'query=PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?source ?label ?content
WHERE {
?source a skos:Concept;
skos:prefLabel ?label;
skos:scopeNote ?content.
FILTER regex(str(?label), "ab", "i")
}' http://digitale.bncf.firenze.sbn.it/openrdf-workbench/repositories/NS_03_2014/query
we get an xml result:
HTTP/1.1 200 OK
Server: Apache/2.2.17 (Unix) PHP/5.3.6 mod_jk/1.2.31
...
Content-Type: application/sparql-results+xml;charset=UTF-8
<?xml version='1.0' encoding='UTF-8'?>
...
So long story short: it's a bug in the server you're using. The following is a
nasty workaround (it seems SPARQLWrapper doesn't let us set the headers
manually, but unconditionally overrides them in `_createRequest`), but it
works:
In [1]: import SPARQLWrapper as sw
In [2]: sparql = sw.SPARQLWrapper("http://digitale.bncf.firenze.sbn.it/openrdf-workbench/repositories/NS_03_2014/query")
In [3]: sparql.setReturnFormat(sw.JSON)
In [4]: sparql.setQuery(''' PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?source ?label ?content
WHERE {
?source a skos:Concept;
skos:prefLabel ?label;
skos:scopeNote ?content.
FILTER regex(str(?label), "ab", "i")
}
''')
In [5]: request = sparql._createRequest()
In [6]: request.add_header('Accept', 'application/sparql-results+json')
In [7]: from urllib2 import urlopen
In [8]: response = urlopen(request)
In [9]: res = sw.Wrapper.QueryResult((response, sparql.returnFormat))
In [10]: result = res.convert()
In [11]: result
Out[11]:
{u'head': {u'link': [u'info'], u'vars': [u'source', u'label', u'content']},
u'results': {u'bindings': [{u'content': {u'type': u'literal',
u'value': u'Il lasciare ingiustificatamente qualcuno o qualcosa di cui si \xe8 responsabili'},
u'label': {u'type': u'literal',
u'value': u'Abbandono',
u'xml:lang': u'it'},
u'source': {u'type': u'uri', u'value': u'http://purl.org/bncf/tid/12445'}},
...
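The same idea can be sketched without reaching into SPARQLWrapper internals: build the request yourself and pin the single mime type the server understands. This is a minimal sketch (the query string here is a placeholder); it only constructs the request and does not send it:

```python
try:  # Python 3
    from urllib.request import Request
    from urllib.parse import urlencode
except ImportError:  # Python 2
    from urllib2 import Request
    from urllib import urlencode

endpoint = ("http://digitale.bncf.firenze.sbn.it/openrdf-workbench"
            "/repositories/NS_03_2014/query")
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 1"  # placeholder query

# Pin a single Accept mime type instead of the combined list
# SPARQLWrapper sends, which this server mishandles.
request = Request(endpoint + "?" + urlencode({"query": query}),
                  headers={"Accept": "application/sparql-results+json"})

print(request.get_header("Accept"))
```

Opening this request with `urlopen` should then get the JSON response, just as in the workaround above.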
|
Celery: how to separate different environments with different workers?
Question: I need to route all tasks of a certain django site instance to a certain
queue. My setup is as following:
* several webservers running a Django project (1.7)
* one server running celery workers (3.1.7)
* Three environments: production, staging, development. Each environment runs with a different `DJANGO_SETTINGS_MODULE`, with a different `CELERY_DEFAULT_QUEUE` setting.
* One redis instance as broker (everything in the same database)
On the "celery server", I run multiple worker instances through supervisor
(simplified conf):
[program:production_queue]
environment=PYTHONPATH=/pth/to/src/:/pth/to/site-packages/,DJANGO_SETTINGS_MODULE=website.settings.production
command=/pth/to/python celery -A website.celery worker --events --queues myserver --loglevel WARNING --concurrency 4 -n [email protected]
[program:staging_queue]
environment=PYTHONPATH=/pth/to/src/:/pth/to/site-packages/,DJANGO_SETTINGS_MODULE=website.settings.staging
command=/pth/to/python celery -A website.celery worker --events --queues myserver_staging --loglevel WARNING --concurrency 1 -n [email protected]
[program:development_queue]
environment=PYTHONPATH=/pth/to/src/:/pth/to/site-packages/,DJANGO_SETTINGS_MODULE=website.settings.development
command=/pth/to/python celery -A website.celery worker --events --queues myserver_development --loglevel INFO --concurrency 1 -n [email protected]
This works, with inspection:
$ celery -A website.celery inspect active_queues
-> [email protected]: OK
* {u'exclusive': False, u'name': u'myserver', u'exchange': {u'name': u'celery', u'durable': True, u'delivery_mode': 2, u'passive': False, u'arguments': None, u'type': u'direct', u'auto_delete': False}, u'durable': True, u'routing_key': u'celery', u'no_ack': False, u'alias': None, u'queue_arguments': None, u'binding_arguments': None, u'bindings': [], u'auto_delete': False}
-> [email protected]: OK
* {u'exclusive': False, u'name': u'myserver_staging', u'exchange': {u'name': u'celery', u'durable': True, u'delivery_mode': 2, u'passive': False, u'arguments': None, u'type': u'direct', u'auto_delete': False}, u'durable': True, u'routing_key': u'celery', u'no_ack': False, u'alias': None, u'queue_arguments': None, u'binding_arguments': None, u'bindings': [], u'auto_delete': False}
-> [email protected]: OK
* {u'exclusive': False, u'name': u'myserver_development', u'exchange': {u'name': u'celery', u'durable': True, u'delivery_mode': 2, u'passive': False, u'arguments': None, u'type': u'direct', u'auto_delete': False}, u'durable': True, u'routing_key': u'celery', u'no_ack': False, u'alias': None, u'queue_arguments': None, u'binding_arguments': None, u'bindings': [], u'auto_delete': False}
(Names accord with the CELERY_DEFAULT_QUEUE settings)
`website/celery.py` contains the basics (imports skipped):
app = Celery('proj')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
I would therefore expect tasks generated by a webserver running with the
development settings, to end up only in the `development_queue`, and so on.
However, I see tasks being processed by a different queue, or by all three,
which is problematic.
**Are my expectations wrong in that this would be a good way to separate these
tasks?** All [documentation on
routing](http://docs.celeryproject.org/en/latest/userguide/routing.html#manual-
routing) is about routing different tasks to different queues, which I don't
need. I need to route all tasks of a certain site (environment) to a certain
queue. **What can I do to separate these environments?**
Answer: I got an answer from the developer of Celery here:
<https://github.com/celery/celery/issues/2508>, which is:
> You have to set all of `CELERY_DEFAULT_QUEUE`, `CELERY_DEFAULT_EXCHANGE` and
> `CELERY_DEFAULT_ROUTING_KEY`. Otherwise you will end up with three queues
> all bound to the same exchange and routing key.
>
> Or use the method here which will set it up explicitly:
> <http://docs.celeryproject.org/en/latest/userguide/routing.html#changing-
> the-name-of-the-default-queue>
That works!
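Concretely, each environment's settings module pins all three defaults to the same environment-specific name. A sketch of one environment's settings (the name is illustrative; the real project presumably derives it per environment):

```python
# Sketch of one environment's Django settings module
# (e.g. website/settings/development.py). The queue name is illustrative.
ENV_QUEUE = 'myserver_development'

CELERY_DEFAULT_QUEUE = ENV_QUEUE
CELERY_DEFAULT_EXCHANGE = ENV_QUEUE
CELERY_DEFAULT_ROUTING_KEY = ENV_QUEUE
```

With all three set per environment, each queue gets its own exchange and routing key, so tasks published by the development site can only land on the development queue.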
|
blinker does not work in Python 2.6
Question:
import blinker
from blinker import signal
class Ticket():
    @classmethod
    def update(cls):
        pass

ticket_created = signal('CREATED')
ticket_created.connect(Ticket.update)
This snippet of code works well on Python 2.7. But does not work on Python
2.6. I get this trace:
Traceback (most recent call last):
File "btest.py", line 10, in <module>
ticket_created.connect(Ticket.update)
File "/home/gavika/env2/lib/python2.6/site-packages/blinker/base.py", line 113, in connect
receiver_ref = reference(receiver, self._cleanup_receiver)
File "/home/gavika/env2/lib/python2.6/site-packages/blinker/_utilities.py", line 134, in reference
weak = callable_reference(object, callback)
File "/home/gavika/env2/lib/python2.6/site-packages/blinker/_utilities.py", line 145, in callable_reference
return BoundMethodWeakref(target=object, on_delete=callback)
File "/home/gavika/env2/lib/python2.6/site-packages/blinker/_saferef.py", line 143, in __new__
base.__init__(target, on_delete, *arguments, **named)
File "/home/gavika/env2/lib/python2.6/site-packages/blinker/_saferef.py", line 185, in __init__
self.weak_self = weakref.ref(im_self, remove)
TypeError: cannot create weak reference to 'classobj' object
Is there a way I could get this to work in 2.6 environment?
Answer: Yes, using a new-style class is the answer.
class Ticket(object):
    ...
Solves the issue.
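The traceback shows why: blinker keeps a weak reference to the bound method's `im_self` — here, the class itself — and Python 2 old-style classes (`classobj`) cannot be weak-referenced. A small sketch of the operation that starts working once the class derives from `object`:

```python
import weakref

class Ticket(object):  # the object base makes this a new-style class
    @classmethod
    def update(cls):
        pass

# blinker's BoundMethodWeakref does essentially this with the receiver's
# im_self; on Python 2 it raises TypeError for old-style classes:
ref = weakref.ref(Ticket)
print(ref() is Ticket)  # -> True
```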
|
Reading txt files in python
Question: I'm currently following "Learn Python the Hard Way". However, when I use the
.read() command on my .txt files it outputs the text in a very weird way, with
extra spaces, and a square at the start:

The console is Windows Powershell.
My code looks like this:
from sys import argv #imports argv from sys
script, filename = argv #unpacks script and filename from argv
txt = open(filename) #declares the variable txt as the text in filename
print "Here's your file %r" % filename #prints the string and the filename
print txt.read() #prints a reading of txt
txt.close()
print "Type the filename again:" #prints the string
file_again = raw_input("> ") #declares the variable file_again as the raw input
txt_again = open(file_again) #declares the variable txt_again as the text in file_again
print txt_again.read() #prints a reading of txt_again
txt.close()
And the files looks like this:
This is stuff I typed into a file.
It is really cool stuff.
Lots and lots of fun to have in here.
Please help!
Answer: Your file seems to be encoded with a 2-byte encoding, presumably UTF-16. Since
Python can't guess that, it just outputs the bytes as it gets them; for ASCII-only
text this means every other byte is a NUL (which shows up as the extra spaces), and
the square at the start is the byte-order mark.
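If the file really is UTF-16, decoding it explicitly fixes the display. A hedged sketch using `io.open`, which behaves the same on Python 2 and 3 (the filename is just for the demo; the first block only exists to make the example self-contained):

```python
import io

# Write a UTF-16 demo file so the example is self-contained.
with io.open('demo.txt', 'w', encoding='utf-16') as f:
    f.write(u'This is stuff I typed into a file.\n')

# A plain open()/read() would show the BOM (the square) and the NUL
# bytes (the extra spaces); decoding as UTF-16 returns clean text.
with io.open('demo.txt', encoding='utf-16') as f:
    text = f.read()

print(text)
```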
|
How do I import a local module in a Mako template?
Question: Suppose we have a local python module in the same directory as a mako
template: `./my_template.mako ./my_module/module.py`
How do I import that module into a Mako template that is supposed to be
rendered from the command line using mako-render? The following does not work:
<%! import my_module.module %>
It seems that the local path is not part of the search path. However, putting
each custom module into the global path is not an option.
Answer: My guess is that you are missing the `__init__.py` in your ./my_module folder
that is required for my_module to be a package. This file can be left empty,
it just needs to be present to create the package.
Here is a working example doing what you describe.
**Directory Layout**
.
├── example.py
├── my_module
│ ├── __init__.py <----- You are probably missing this.
│ └── module.py
└── my_template.mako
**Files**
example.py
from mako.template import Template
print Template(filename='my_template.mako').render()
my_template.mako
<%! import my_module.module %>
${my_module.module.foo()}
module.py
def foo():
return '42'
__init__.py
# Empty
**Running the Example**
>> python example.py
42
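To see that the marker file really is the missing piece, the layout can be rebuilt programmatically — a sketch that creates the package in a temp directory and imports it the same way the template's `<%! import %>` block does:

```python
import os
import sys
import tempfile

# Recreate the layout from the answer, including the empty __init__.py marker.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'my_module')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()  # makes it a package
with open(os.path.join(pkg, 'module.py'), 'w') as f:
    f.write('def foo():\n    return "42"\n')

# This mirrors what <%! import my_module.module %> needs: the directory
# containing the package must be on sys.path.
sys.path.insert(0, root)
import my_module.module

print(my_module.module.foo())
```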
|
Python: parse xml file built up with dicts
Question: [Python 3.4][Windows 7]
If there is any easy way to get a whole .xml file like a .txt as one string,
that would be enough, but to describe the problem precisely:
This is the first time I have dealt with a .xml file. I have a .xml file
containing mainly dictionaries (of further dictionaries). Now I want to get
certain keys and values out of those dictionaries and write them to a .txt
file, so a dict (or something else) in Python would be enough.
To make an example:
This is the xml file (library.xml):
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Version<\key><integer>1</integer>
<key>Tracks</key>
<dict>
<key>0001</key>
<dict>
<key>Name</key><string>spam</string>
<key>Detail</key><string>spam spam</string>
</dict>
<key>0002</key>
<dict>
<key>Name</key><string>ham</string>
<key>Detail</key><string>ham ham</string>
</dict>
</dict>
</dict>
</plist>
I researched and thought I could do it with the xml.etree.ElementTree module.
So if I try this:
tree = ET.parse('library.xml')
root = tree.getroot()
I only get this message:
(Unicode Error) 'unicodeescape' codec can't decode bytes…
What I want is obviously some kind of this (or as a dict, it doesn't matter):
[['Name: spam', 'Detail: spam spam'], ['Name: ham', 'Detail: ham ham']]
EDIT: xml code was incorrect, sorry. EDIT: Added the last paragraph.
Answer: Updated the input content from `<\key>` to `</key>`, and removed the `dict`
tag for which no key is defined.
1. Parse XML data by `lxml.html` module.
2. Get target main `dict` tag by `xpath()` method.
3. Call `XMLtoDict()` function.
4. Iterate on children of input tag by `getchildren()` method and `for` loop.
5. Check tag name is key or not by `if` loop.
6. If yes then get next tag of current tag by `getnext()` method.
7. If next tag is `integer` tag then get value type `int`.
8. If next tag is `string` tag then value type is `string`.
9. If next tag is `dict` tag then value type is `dict` and call function again i.e. recursive call.
10. Add key and value into result dictionary.
11. return result dictionary.
12. print result dictionary.
code:
data = """<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Version</key>
<integer>1</integer>
<key>Tracks</key>
<dict>
<key>0001</key>
<dict>
<key>Name</key><string>spam</string>
<key>Detail</key><string>spam spam</string>
</dict>
<key>0002</key>
<dict>
<key>Name</key><string>ham</string>
<key>Detail</key><string>ham ham</string>
</dict>
</dict>
</dict>
</plist>
"""
def XMLtoDict(root):
    result = {}
    for i in root.getchildren():
        if i.tag == "key":
            key = i.text
            next_tag = i.getnext()
            next_tag_name = next_tag.tag
            if next_tag_name == "integer":
                value = int(next_tag.text)
            elif next_tag_name == 'string':
                value = next_tag.text
            elif next_tag_name == 'dict':
                value = XMLtoDict(next_tag)
            else:
                value = None
            result[key] = value
    return dict(result)
import lxml.html as ET
import pprint
root = ET.fromstring(data)
result = XMLtoDict(root.xpath("//plist/dict")[0])
pprint.pprint(result)
Output:
vivek@vivek:~/Desktop/stackoverflow$ python 12.py
{'Tracks': {'0001': {'Detail': 'spam spam', 'Name': 'spam'},
'0002': {'Detail': 'ham ham', 'Name': 'ham'}},
'Version': 1}
* * *
1. I am not getting such exception.
(Unicode Error) 'unicodeescape' codec can't decode bytes…
2. Tagging is not correct in library.xml:
import xml.etree.ElementTree as ET
tree = ET.parse('library.xml')
Get the following exception for that input:
vivek@vivek:~/Desktop/stackoverflow$ python 12.py
Traceback (most recent call last):
File "12.py", line 46, in <module>
tree = ET.parse('library.xml')
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1183, in parse
tree.parse(source, parser)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 656, in parse
parser.feed(data)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1643, in feed
self._raiseerror(v)
File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1507, in _raiseerror
raise err
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 4, column 15
This exception due to invalid tagging. To fix this exception, do following:
Change from `<key>Version<\key>` to `<key>Version</key>`
3. By `xml.etree.ElementTree` module:
code:
def XMLtoDict(root):
    result = {}
    chidren_tags = root.getchildren()
    for j, i in enumerate(chidren_tags):
        if i.tag == "key":
            key = i.text
            next_tag = chidren_tags[j+1]
            next_tag_name = next_tag.tag
            if next_tag_name == "integer":
                value = int(next_tag.text)
            elif next_tag_name == 'string':
                value = next_tag.text
            elif next_tag_name == 'dict':
                value = XMLtoDict(next_tag)
            else:
                value = None
            result[key] = value
    return dict(result)
def XMLtoList(root):
    result = []
    chidren_tags = root.getchildren()
    for j, i in enumerate(chidren_tags):
        if i.tag == "key":
            key = i.text
            next_tag = chidren_tags[j+1]
            next_tag_name = next_tag.tag
            if next_tag_name == "integer":
                value = int(next_tag.text)
            elif next_tag_name == 'string':
                value = next_tag.text
            elif next_tag_name == 'dict':
                value = XMLtoList(next_tag)
            else:
                value = None
            result.append([key, value])
    return list(result)
import xml.etree.ElementTree as ET
import pprint
tree = ET.parse('library.xml')
root = tree.getroot()
dict_tag = root.find("dict")
if dict_tag is not None:
    result = XMLtoDict(dict_tag)
    print "Result in Dictionary:-"
    pprint.pprint(result)
    result = XMLtoList(dict_tag)
    print "\nResult in List:-"
    pprint.pprint(result)
output:
vivek@vivek:~/Desktop/stackoverflow$ python 12.py
Result in Dictionary:-
{'Tracks': {'0001': {'Detail': 'spam spam', 'Name': 'spam'},
'0002': {'Detail': 'ham ham', 'Name': 'ham'}},
'Version': 1}
Result in List:-
[['Version', 1],
['Tracks',
[['0001', [['Name', 'spam'], ['Detail', 'spam spam']]],
['0002', [['Name', 'ham'], ['Detail', 'ham ham']]]]]]
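Since the question runs Python 3.4 and the file is an Apple-style property list, the standard library's `plistlib` can also parse the corrected XML directly into nested dicts, with no manual key/value walking — a sketch on a shortened version of the data:

```python
import plistlib

# Shortened, corrected version of the library.xml content.
data = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Version</key><integer>1</integer>
    <key>Tracks</key>
    <dict>
        <key>0001</key>
        <dict>
            <key>Name</key><string>spam</string>
            <key>Detail</key><string>spam spam</string>
        </dict>
    </dict>
</dict>
</plist>
"""

# plistlib.loads (Python 3.4+) returns plain nested dictionaries.
result = plistlib.loads(data)
print(result['Tracks']['0001'])
```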
|
Relative imports and package structure in eclipse?
Question: I'm having trouble with relative imports, but I think it's because I'm not
understanding package structure completely.
For example, here is my package structure.
neo_autorig/ Source folder, Top level
__init__.py
basic/ Subpackage for basic utiltites for the script
__init__.py
name.py
name_test.py
module_locator.py
Theres more than this, but this is basically what i'm using for imports
In name.py i'm importing module locator using
from .. import module_locator
But it says
# Error: line 1: Attempted relative import beyond toplevel package
Are top-level scripts (like my main script/UI used to execute everything)
supposed to go in the top source folder in the Eclipse project? Or am I
setting this up wrong? There are also other subpackages in the source folder,
each with scripts in them.
Edit: If I put another package in a subpackage, I can relative-import; it's
only the case where I can't relative-import from a subpackage to the top-level
package, and the script's source is in my Python path.
Answer: The python import mechanism works with the `__name__` of the file. Executing a
file directly gives the file the name `"__main__"` instead of its usual name.
The common answer to questions like this would be to run the program with the
-m option. I recommend reading [pep
366](https://www.python.org/dev/peps/pep-0366/) and maybe
[this](http://stackoverflow.com/questions/11536764/attempted-relative-import-
in-non-package-even-with-init-py?rq=1) or
[this](http://stackoverflow.com/questions/1918539/can-anyone-explain-pythons-
relative-imports) question as well.
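The difference is easy to reproduce. A sketch (with hypothetical package names) that builds a tiny package on disk and runs the same file both ways: direct execution fails, while `-m` preserves the package context so the relative import resolves:

```python
import os
import subprocess
import sys
import tempfile

# Build a miniature version of the layout from the question.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'neo', 'basic'))
open(os.path.join(root, 'neo', '__init__.py'), 'w').close()
open(os.path.join(root, 'neo', 'basic', '__init__.py'), 'w').close()
with open(os.path.join(root, 'neo', 'module_locator.py'), 'w') as f:
    f.write('VALUE = 42\n')
with open(os.path.join(root, 'neo', 'basic', 'name.py'), 'w') as f:
    f.write('from .. import module_locator\nprint(module_locator.VALUE)\n')

# Direct execution: the file's __name__ is "__main__", no parent package,
# so the relative import raises an error and the process exits nonzero.
direct = subprocess.run(
    [sys.executable, os.path.join(root, 'neo', 'basic', 'name.py')],
    capture_output=True, text=True)

# Module execution: -m keeps the package context, so ".." resolves.
as_module = subprocess.run(
    [sys.executable, '-m', 'neo.basic.name'],
    capture_output=True, text=True, cwd=root)

print(direct.returncode, as_module.stdout.strip())
```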
|
Beautiful Soup 4 confusion
Question: I'm working on a little python program to get into web scraping:
from bs4 import BeautifulSoup
import requests
hero = raw_input("Enter Hero Name: ")
url = "http://www.dotahut.com/heroes/" + hero.lower().replace(' ', '_')
#to double check url in case of error
print (url)
r = requests.get(url)
soup = BeautifulSoup(r.text)
# find 'weak against' heroes
print('\nWeak Against:\n')
for champ in soup.find(class_='weak-block').find_all(class_='champ-block'):
    print(champ.find(class_='name').get_text())
# find 'strong against' heroes
print('\nStrong Against:\n')
for champ in soup.find(class_='strong').find_all(class_='champ-block'):
    print(champ.find(class_='name').get_text())
When I run this program I get the following error:
Traceback (most recent call last):
File "C:\Users\LewisJames\Google Drive\Personal Files\Programming\Python\Dota 2 Counter.py", line 20, in <module>
for champ in soup.find(class_='weak-block').find_all(class_='champ-block'):
AttributeError: 'NoneType' object has no attribute 'find_all'
After reading the BS4 documentation and inspecting elements of the website, I'm
very confused as to why this isn't working.
I am a beginner with BS4 and Python, so please bear with me. It would be great
if you could help me!
Answer: The problem is with the line
for champ in soup.find(class_='weak-block').find_all(class_='champ-block'):
Specifically,
soup.find(class_='weak-block')
returns `None` (no element with that class was found on the page), so the next step,
.find_all(class_='champ-block')
is called on `None`, which raises `AttributeError: 'NoneType' object has no
attribute 'find_all'`.
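A defensive version of the loop checks for `None` before chaining. Sketched here with a stand-in `find` (a plain dict lookup) so it runs without the live page — the real fix is the same `if block is None` guard around `soup.find(...)`:

```python
def find(page, cls):
    # stand-in for soup.find(class_=cls): returns None when nothing matches
    return page.get(cls)

# Hypothetical parsed page: it has a 'strong' block but no 'weak-block'.
page = {'strong': ['Axe', 'Lina']}

block = find(page, 'weak-block')
if block is None:
    names = []  # nothing to iterate; check the URL or the page markup
else:
    names = list(block)

print(names)
```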
|
Rolling window polynomial fit in pandas
Question: I am trying to compute coefficients from a _n_ -degree polynomial applied to a
_t_ -day window of a time series. However, I receive an exception `TypeError:
only length-1 arrays can be converted to Python scalars`.
My versions are:
* Python 2.7.8
* pandas version 0.14.1
* numpy version 1.9.0
The code:
import pandas as pd
import numpy as np
my_ts = pd.Series(data = np.random.normal(size = 365 * 2), index = pd.date_range(start = '2013-01-01', periods = 365 * 2))
coefs = pd.rolling_apply(my_ts, 21, lambda x: np.polyfit(range(len(x)), x, 3))
Yet, when I wrap `np.polyfit` so that it returns only one coefficient,
`rolling_apply` has no issue.
def pf_wrapper(x):
    coef_lst = np.polyfit(range(len(x)), x, 3)
    return coef_lst[0]
coefs = pd.rolling_apply(my_ts, 21, pf_wrapper)
UPDATE:
Since `pd.rolling_apply()` is unable to return a non-scalar, my current
solution is the following:
def get_beta(ts, deg):
    coefs = polyfit(range(len(ts)), ts, deg = 3)[::-1]
    return coefs[deg]
b0 = pd.rolling_apply(my_ts, 21, lambda x: get_beta(x, 0))
...
b3 = pd.rolling_apply(my_ts, 21, lambda x: get_beta(x, 3))
Answer: I don't think it is possible with `rolling_apply`. The
[documentation](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.stats.moments.rolling_apply.html) says that the
applied function "must produce a single value from an ndarray input". What it
seems to actually mean is "must produce a value _that is or can be converted
into a single float_ ". If you follow up on the full exception traceback it
leads you to this code in `algos.pyx`:
output = np.empty(n, dtype=float)
counts = roll_sum(np.isfinite(input).astype(float), win, minp)
bufarr = np.empty(win, dtype=float)
oldbuf = <float64_t*> bufarr.data
n = len(input)
for i from 0 <= i < int_min(win, n):
    if counts[i] >= minp:
        output[i] = func(input[int_max(i - win + 1, 0) : i + 1], *args,
                         **kwargs)
    else:
        output[i] = NaN
The error is raised on the line with `output[i] = func(...)`. You can see that
the output array is hardcoded to have dtype float. The error you receive is
the same as what you get if you attempt to convert a numpy array (of length
more than 1) to a float:
>>> float(np.array([1, 2, 3]))
Traceback (most recent call last):
File "<pyshell#14>", line 1, in <module>
float(np.array([1, 2, 3]))
TypeError: only length-1 arrays can be converted to Python scalars
So what is happening is that it tries to assign the output of `polyfit` to a
single element of the float ndarray, and fails because the output of polyfit
is an array that can't be converted to a float.
This could be "fixed" by making `output` have dtype object, but this would
slow things way down.
I think you have to consider `rolling_apply` as usable only for functions that
return a single float. To support non-scalar outputs, you would have to roll
(har har) your own version of `rolling_apply`.
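Given that constraint, one workable sketch is a plain window loop that keeps every coefficient in a DataFrame (column order follows `np.polyfit`: highest degree first). This assumes only pandas and numpy, avoiding `rolling_apply` entirely:

```python
import numpy as np
import pandas as pd

def rolling_polyfit(ts, window, deg):
    # One row of deg+1 coefficients per window; leading rows stay NaN.
    out = np.full((len(ts), deg + 1), np.nan)
    x = np.arange(window)
    for i in range(window - 1, len(ts)):
        out[i] = np.polyfit(x, ts.iloc[i - window + 1:i + 1], deg)
    return pd.DataFrame(out, index=ts.index)

my_ts = pd.Series(np.random.normal(size=100),
                  index=pd.date_range('2013-01-01', periods=100))
coefs = rolling_polyfit(my_ts, 21, 3)
print(coefs.shape)
```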
|
Accessing netflix api from python's urllib2 results in 500 error
Question: I'm currently trying to fix a Kodi plugin called NetfliXBMC.
It uses this url to get information on specific movies:
http://www.netflix.com/JSON/BOB?movieid=<SOMEID>
While trying to build a minimal case to ask this question I discovered that
it's not even necessary to be logged in to access the information, which
simplifies my question a lot.
Querying information about a movie works from wget, from curl, from incognito
chrome etc. It just never works from urllib2:
# wget works just fine
$: wget -q -O- http://www.netflix.com/JSON/BOB?movieid=80021955
{"contextData":"{\"cookieDisclosure\":{\"data\":{\"showCookieBanner\":false}}}","result":"success","actionErrors":null,"fieldErrors":null,"actionMessages":null,"data":[output omitted for brevity]}
# so does curl
$: curl http://www.netflix.com/JSON/BOB?movieid=80021955
{"contextData":"{\"cookieDisclosure\":{\"data\":{\"showCookieBanner\":false}}}","result":"success","actionErrors":null,"fieldErrors":null,"actionMessages":null,"data":[output omitted for brevity}
# but python's urllib always gets a 500
$: python -c "import urllib2; urllib2.urlopen('http://www.netflix.com/JSON/BOB?movieid=80021955').read()"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 500: Internal Server Error
$: python --version
Python 2.7.6
What I've tried so far: several different user-agent strings, initializing a
urlopener with a cookie jar, plain old urllib (doesn't raise an exception but
receives the same error page).
I'm really curious as to why this might be. Thanks in advance!
Answer: It turned out to be a bug on netflix' side when no `Accept` header is sent.
This doesn't work:
opener = urllib2.build_opener()
opener.open("http://www.netflix.com/JSON/BOB?movieid=80021955")
Adding a proper accept header makes it work:
opener = urllib2.build_opener()
mimeAccept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
opener.addheaders = [('Accept', mimeAccept)]
opener.open("http://www.netflix.com/JSON/BOB?movieid=80021955")
[...]
Of course, there is another bug there: it returns a `500 internal server
error` instead of a `400 bad request` when the problem was clearly on the
request.
|
get hex from color name matplotlib
Question: I know this list of colors that matplotlib support:
<https://pythonhosted.org/ete2/reference/reference_svgcolors.html>
Is there a programmatic way to convert these names to hex?
I would like a function that receives a color name and returns the hex value
of that color.
Thanks.
Answer: `matplotlib.colors.cnames` is a dictionary of all of matplotlib's colors and
their hex values:
import matplotlib
print matplotlib.colors.cnames["blue"]
--> u'#0000FF'
|
Python BeautifulSoup City Scraping Complications
Question: I am attempting to scrape the countries and cities off Craigslist, and I am so
close.
The problem I am having is that the cities skip and go to the next box.
The output i am trying to achieve is:
COUNTRY | STATE | CITY
US: ALABAMA: AUBURN
US: ALABAMA: BIRMINGHAM
US: ALABAMA: DOTHAN
But instead i get:
COUNTRY | STATE | CITY
US: ALABAMA: AUBURN
US: ALABAMA: ANCHORAGE / MAT-SU
US: ALABAMA: FLAGSTAFF / SEDONA
Then when I reach the end of the column, the STATE changes to the next STATE.
This is my current code:
from BeautifulSoup import BeautifulSoup
import urllib2
soup = BeautifulSoup(urllib2.urlopen("http://www.craigslist.org/about/sites").read())
soup.prettify()
for h1 in soup.findAll('h1'):
    colmask_div = h1.findNextSibling('div')
    for box_div in colmask_div.findAll('div'):
        h4 = box_div.find('h4')
        for ul in box_div.findAll('ul'):
            print'{} : {} : {}'.format(h1.text, h4.text, ul.li.a.text)
    raw_input()
It's skipping boxes somewhere but I can't find where! Thanks. This is not
homework, just a personal project to learn BeautifulSoup :)
Answer: The problem with your code is that you are:
1. Getting the first `h4` element (state name) in the column
2. Getting all the `ul` elements (lists of towns) in the whole column
3. For each list of towns, outputting the first `li` element (town name) only
4. Moving on to the next list of towns, without moving on to the next state
I'd go for something more like this:
for country in soup.findAll('h1'):
    country_data = country.findNextSibling('div')
    for state, towns in zip(country_data.findAll('h4'), country_data.findAll('ul')):
        for town in towns.findAll('li'):
            print '{} : {} : {}'.format(country.text, state.text, town.text)
    raw_input()
You don't need to process each column in turn. Here I am getting BS to do the
work of fetching all the nested `h4` and `ul` elements in the top level `div`
for a country as two lists.
I then use the python
[`zip()`](https://docs.python.org/2/library/functions.html#zip) function to
iterate over those two lists in step.
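The pairing behaviour of `zip()` can be seen on plain lists, with hypothetical data shaped like what BS returns:

```python
# Each state pairs with its own list of towns, in step.
states = ['Alabama', 'Alaska']
towns = [['auburn', 'birmingham'], ['anchorage / mat-su']]

rows = []
for state, town_list in zip(states, towns):
    for town in town_list:
        rows.append('US : {} : {}'.format(state, town))

print(rows)
```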
* * *
### Output
US : Alabama : auburn
US : Alabama : birmingham
US : Alabama : dothan
...
US : Alabama : tuscaloosa
US : Alaska : anchorage / mat-su
...
US : Territories : U.S. virgin islands
Canada : Alberta : calgary
...
* * *
### Performance
In Python 2, you could improve the performance of this code by using
[`itertools.izip()`](https://docs.python.org/2/library/itertools.html#itertools.izip).
This doesn't create the whole list of pairs of elements in memory from the two
inputs, but uses a generator instead.
In Python 3, the regular
[`zip()`](https://docs.python.org/3.4/library/functions.html#zip) function
does this by default.
|
Extracting from HTML with beautifulsoup
Question: I am trying to extract the lotto numbers from
<https://www.lotto.de/de/ergebnisse/lotto-6aus49/archiv.html> (I know there is
an easier way, but this is for learning).

Tried with Python, beautifulsoup the following:
from BeautifulSoup import BeautifulSoup
import urllib2
url="https://www.lotto.de/de/ergebnisse/lotto-6aus49/archiv.html"
page=urllib2.urlopen(url)
soup = BeautifulSoup(page.read())
numbers = soup.findAll('li', {'class': 'winning_numbers.boxRow.clearfix'})
for number in numbers:
    print number['li'] + "," + number.string
Returns nothing, which I actually expected. I read the tutorial, but still
didn't understand the parsing totally. Could someone give me a hint?
Thank you!
Answer: As the data content is dynamically generated, one of the easier solutions
is to use **[Selenium](https://selenium-python.readthedocs.org)** or similar
to simulate the actions of a browser (I use
**[PhantomJS](http://phantomjs.org)** as the webdriver), like so:
from selenium import webdriver
url="https://www.lotto.de/de/ergebnisse/lotto-6aus49/archiv.html"
# I'm using PhantomJS, you may use your own...
driver = webdriver.PhantomJS(executable_path='/usr/local/bin/phantomjs')
driver.get(url)
soup = BeautifulSoup(driver.page_source)
# I just simply go through the div class and grab all number texts
# without special number, like in the Sample
for ul in soup.findAll('div', {'class': 'winning_numbers'}):
    n = ','.join(li for li in ul.text.split() if li.isdigit())
    if n:
        print 'number: {}'.format(n)
number: 6,25,26,27,28,47
To also grab the special number:
for ul in soup.findAll('div', {'class': 'winning_numbers'}):
# grab only numeric chars, you may apply your own logic here
n = ','.join(''.join(_ for _ in li if _.isdigit()) for li in ul.text.split())
if n:
print 'number: {}'.format(n)
number: 6,25,26,27,28,47,5 # with special number
Hope this helps.
|
importing and renaming multiple objects in maya with python
Question: I am trying to import multiple object files in Maya using Python and use the
file name as the object name in Maya. I have managed to import the objects, but I
have only been able to use the file name as a namespace and not as an object
name.
import maya.cmds as cmds
def import_multiple_object_files():
files_to_import = cmds.fileDialog2(fileFilter = '*.obj', dialogStyle = 2, caption = 'import multiple object files', fileMode = 4)
for file_to_import in files_to_import:
names_list = file_to_import.split('/')
object_name = names_list[-1].replace('.obj', '')
cmds.file('%s' % file_to_import, i = True, type = "OBJ", namespace = object_name, mergeNamespacesOnClash = False, ignoreVersion = True, options = "mo=0", loadReferenceDepth = "all" )

Answer: Ok, so this script assumes that there is **only one mesh** in your .obj file.
In fact, only the first mesh returned from your import is renamed.
To retrieve the returned nodes I used the [returnNewNodes
flag](http://download.autodesk.com/us/maya/2011help/CommandsPython/file.html#flagreturnNewNodes)
from file command.
Then I used the [rename
command](http://download.autodesk.com/us/maya/2011help/CommandsPython/rename.html)
to rename the imported node to your file name.
I also deleted the namespace and mergeNamespacesOnClash flags.
**Note:** I'm a bit lazy today and I haven't any .obj files here, so I did not
test this code. If you want to load an .obj file with multiple meshes, leave me
a comment and I'll edit my answer.
import maya.cmds as cmds
def import_multiple_object_files():
files_to_import = cmds.fileDialog2(fileFilter = '*.obj', dialogStyle = 2, caption = 'import multiple object files', fileMode = 4)
for file_to_import in files_to_import:
names_list = file_to_import.split('/')
object_name = names_list[-1].replace('.obj', '')
returnedNodes = cmds.file('%s' % file_to_import, i = True, type = "OBJ", rnn=True, ignoreVersion = True, options = "mo=0", loadReferenceDepth = "all" )
cmds.rename( returnedNodes[0], object_name)
|
How to make modules from exec("import xxx") in a Python function available?
Question: I have written myself a simple function that sieves through my Python folder
and looks for where a possible module is. What I want to do is simple: I pass a
string of a module import, and the function will find the folder of the module,
cd there, and import it into whichever environment I am working in, e.g.:
anyimport('from fun_abc import *')
Originally I tried:
class anyimport(object):
def __init__(self, importmodule, pythonpath='/home/user/Python', finddir=finddir):
##################################################################
### A BUNCH OF CODES SCANNING THE DIRECTORY AND LOCATE THE ONE ###
##################################################################
### "pointdir" is where the directory of the file is ###
### "evalstr" is a string that looks like this : ---
### 'from yourmodule import *'
os.chdir(pointdir)
exec evalstr
Because I coded the whole thing in an IPython Notebook, it appeared to work, so
the problem slipped by me. Then I found it does not work properly, because the
modules the function imports stay in the function's local variable space.
Then I found this Stack Overflow discussion ["In Python, why doesn't an import
in an exec in a function
work?"](http://stackoverflow.com/questions/12505047/in-python-why-doesnt-an-
import-in-an-exec-in-a-function-work). Thus I changed the code to the
following:
class anyimport(object):
def __init__(self, importmodule, pythonpath='/home/user/Python', finddir=finddir):
##################################################################
### A BUNCH OF CODES SCANNING THE DIRECTORY AND LOCATE THE ONE ###
##################################################################
### "pointdir" is where the directory of the file is ###
### "evalstr" is a string that looks like this : ---
### 'from yourmodule import *'
sys.path.append(os.path.join(os.path.dirname(__file__), pointdir))
exec (evalstr, globals())
It still does not work. The function runs without error but the modules are
not available to me, say if I run `script.py` in which I do `anyimport('from
fun_abc import *')` but nothing from `fun_abc` is there. Python will tell me
"NameError: name 'fun_you_want' is not defined".
Would anyone be kind enough to point me in the right direction as to how to
solve this problem?
Thank you for your attention and I really appreciate your help!
## Attention:
In addition to @Noya's spot-on answer that one has to pass the scope to make
the `exec` work, to avoid "ImportError", you need to add this line before
running `exec`:
sys.path.append(os.path.join(os.path.dirname(__file__), pointdir))
exec (evalstr, scope)
This is due to the reason that our modification of `sys.path` assumes the
current working directory is always in `main/`. We need to add the parent
directory to `sys.path`. See this Stack Overflow discussion ["ImportError: No
module named -
Python"](http://stackoverflow.com/questions/7587457/importerror-no-module-
named-python) for more information on resolving this issue.
Answer: You might want to try something like this:
_globals = {}
code = """import math;"""
code += """import numpy;"""
code = compile(code, '<string>', 'exec')
exec code in _globals
It's safer than just doing an `exec`, and it should import correctly inside a
function's local scope.
You can then update `globals()` with whatever modules (or functions) you
import.
When using `exec` for functions, you can get a handle to `globals` with `g =
globals()`, then do an update on `g`. For modules, you should do another
step... you will want to also update the modules in `sys.modules` as well.
UPDATE: to be explicit:
>>> def foo(import_string):
... _globals = {}
... code = compile(import_string, '<string>', 'exec')
... exec code in _globals
... import sys
... g = globals()
... g.update(_globals)
... sys.modules.update(_globals)
...
>>> foo('import numpy as np')
>>> np.linspace
<function linspace at 0x1009fc848>
|
OSError: cannot open shared object file: No such file or directory even though file is in the folder
Question: I've been fighting with this for quite some time now. I'm trying to install
Yaafe for audio feature extraction. I follow instructions here:
<https://github.com/Yaafe/Yaafe>
Everything installs up nicely, but when I try to run the test file "frames.py"
I get following error:
File "frames.py", line 6, in <module>
from yaafelib import FeaturePlan, Engine, AudioFileProcessor
File "/usr/local/lib/python2.7/dist-packages/yaafelib/__init__.py", line 36, in <module>
from yaafelib.core import (loadComponentLibrary,
File "/usr/local/lib/python2.7/dist-packages/yaafelib/core.py", line 35, in <module>
yaafecore = cdll.LoadLibrary('libyaafe-python.so')
File "/usr/lib/python2.7/ctypes/__init__.py", line 443, in LoadLibrary
return self._dlltype(name)
File "/usr/lib/python2.7/ctypes/__init__.py", line 365, in __init__
self._handle = _dlopen(self._name, mode)
OSError: libyaafe-python.so: cannot open shared object file: No such file or directory
I have included the lib directory to LD_LIBRARY_PATH with following command:
export LD_LIBRARY_PATH=/usr/local/lib
And indeed when I echo `LD_LIBRARY_PATH` it is there. Also, when I check
/usr/local/lib it has the following contents:
libyaafe-components.so libyaafe-io.so python2.7
libyaafe-components.so.0 libyaafe-io.so.0 python3.4
libyaafe-components.so.0.70.0 libyaafe-io.so.0.70.0 site_ruby
libyaafe-core.so libyaafe-python.so yaafe
libyaafe-core.so.0 libyaafe-python.so.0
libyaafe-core.so.0.70.0 libyaafe-python.so.0.70.0
So shouldn't everything be okay? I don't understand what is the problem. I've
followed instructions to the point.
Answer: Change your code so that you print `os.environ` right before the exception
occurs. That way you will see whether the Python process has the correct
environment set or not. The other obvious thing to check is whether your
Python process has sufficient permission to open and read `libyaafe-
python.so`. Note that `sudo` by default limits the environment of the invoked
command, for security reasons (see
[here](http://unix.stackexchange.com/questions/16084/what-environment-do-i-
get-with-sudo), for instance).
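A minimal sketch of that check (the library path below is the one from the
question, not something this snippet can verify for you):

```python
import os

# Print what this Python process actually sees; if LD_LIBRARY_PATH is None
# here, the export never reached the process (e.g. it was stripped by sudo).
print(os.environ.get('LD_LIBRARY_PATH'))

# Also check whether the file is readable by the current user.
print(os.access('/usr/local/lib/libyaafe-python.so', os.R_OK))
```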
|
Creating a Treeview with Checkboxes
Question: I am trying to build an application based on the following usage example ([How
to create a tree view with checkboxes in
Python](http://stackoverflow.com/questions/5104330/how-to-create-a-tree-view-
with-checkboxes-in-python)).
This example builds a `Treeview` with checkboxes using the `Tix` library.
However, when I run this example, whenever a `Checkbox` is checked, the text
label of that box disappears.
Could someone help me to avoid the behaviour mentioned just above?
import Tix
class View(object):
def __init__(self, root):
self.root = root
self.makeCheckList()
def makeCheckList(self):
self.cl = Tix.CheckList(self.root, browsecmd=self.selectItem)
self.cl.pack()
self.cl.hlist.add("CL1", text="checklist1")
self.cl.hlist.add("CL1.Item1", text="subitem1")
self.cl.hlist.add("CL2", text="checklist2")
self.cl.hlist.add("CL2.Item1", text="subitem1")
self.cl.setstatus("CL2", "on")
self.cl.setstatus("CL2.Item1", "on")
self.cl.setstatus("CL1", "off")
self.cl.setstatus("CL1.Item1", "off")
self.cl.autosetmode()
def selectItem(self, item):
print item, self.cl.getstatus(item)
def main():
root = Tix.Tk()
view = View(root)
root.update()
root.mainloop()
if __name__ == '__main__':
main()
Answer: The problem is that the default foreground color on selection is the same as
the background color. The problem is resolved by adding the following line of
code after the initialization of self.cl:
self.cl.hlist.config(selectforeground="black")
|
Using PySide custom widgets in Qt Designer
Question: PyQt has a plugin system for adding python widgets to Qt Designer so that they
can be drag'n'dropped into the `.ui` files for other widgets.
<http://pyqt.sourceforge.net/Docs/PyQt4/designer.html#writing-qt-designer-
plugins>
How could this be done for PySide widgets?
Answer: To answer the specific question: "How could this be done for PySide widgets?"
There was once a QtDesigner plugin for PySide that allowed custom widget
plugins to be written in Python. However, development on it stopped before it
became fully viable. To quote from the pyside mailing list:
> On Thursday 24 March 2011 19:18:02 Hugo Parente Lima wrote:
>
>> On Wednesday 23 March 2011 22:46:32 Gerald Storer wrote:
>>
>> I vote for removing the QtDesigner plugin example from pyside-examples, it
doesn't work and the support for QtDesign plugins isn't on our roadmap yet,
besides IMO it's not a very important feature to have and we have more urgent
bugs and features to do at the moment.
>>
>> Any objections?
>
> Timeout! Plugin examples removed from pyside-examples repository.
So all you have to do is find that old QtDesigner source code, get it working,
and then submit the necessary patches :-)
|
How to convert an urlopen into a string in python
Question: Most certainly my question is not asked properly. Anyway, I am looking for a
way to convert some data I've extracted from the web to a list or a string
(a string would be better, though).
At the moment, this is what I wrote :
import urllib as web
message = "http://hopey.netfonds.no/posdump.php?date=20150209&paper=AAPL.O&csv_format=txt"
data = web.urlopen(message)
data has a very weird type I've never seen before (yes, yes, still new to this
Python thing). So any help would be very helpful.
Thanks in advance.
Answer: You can use the method `.read()`:
data = web.urlopen(message)
str_data = data.read()
This will return the html on the page. You can use `dir(web.urlopen(message))`
to see all the methods available for that object. You can use `dir()` to see
the methods available for anything in python.
To sum up the answer: on the object you created you can call `.read()`
(like `data.read()`) to read the whole response, or `.readline()` (like
`data.readline()`) to read just one line from the file at a time, when you
need it. When you read from that object you will get a string back.
If you do `data.info()` you will get something like this :
<httplib.HTTPMessage instance at 0x7fceeb2ca638>
You can read more about this
[here](https://docs.python.org/2/library/httplib.html) .
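For reference, in Python 3 the same function lives in `urllib.request`, and
`.read()` there returns bytes that usually need decoding. A `data:` URL stands
in for the real page URL below so the sketch runs without network access:

```python
from urllib.request import urlopen

# urlopen() returns a file-like response object; in Python 3 its .read()
# yields bytes, so decode them using the charset the server declares
# (falling back to UTF-8 if none is given).
with urlopen("data:text/plain;charset=utf-8,hello") as resp:
    text = resp.read().decode(resp.headers.get_content_charset() or "utf-8")
print(text)
```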
|
Is a list a variable?
Question: I am new to Python and I was just reading up about lists. I have been trying
to find out if a list is a variable,
e.g. Hello = []
This is because I read that you assign a variable by using the '=' sign. Or
am I just assigning the empty list a name in the example above?
Answer: In Python, the concept of an object is quite important (as other users might
have pointed out already, I am being slow!).
You can think of `list` as a list (or actually, an object) of elements. As a
matter of fact, a list is a variable-sized object that represents a
collection of items. Python lists are a bit special because you can have mixed
types of elements in a `list` (e.g. strings with ints). But at the same time,
you can also argue, "What about set, map, tuple, etc.?". As an example,
>>> p = [1,2,3,'four']
>>> p
[1, 2, 3, 'four']
>>> isinstance(p[1], int)
True
>>> isinstance(p[3], str)
True
>>>
In a `set`, you can vary the size of the set - yes. In that respect, a set is a
variable that contains unique items - if that satisfies you.
In the same way, a `map` is also a variable-sized collection of key-value
pairs where every unique key has a value mapped to it. The same holds true for
a `dictionary`.
If you are curious because of the `=` sign - you have already used the keyword
in your question: "assignment". In all high-level languages (well, most of
them anyway), `=` is the assignment operator: the variable name on the
left-hand side is bound to the value on the right-hand side.
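A short sketch of that distinction: `=` binds a name to an object, and the
same name can later be rebound to something else entirely:

```python
hello = []           # the name hello is bound to a new, empty list object
hello.append(1)      # mutates the list object the name refers to
print(hello)         # [1]

hello = "something"  # rebinds the same name to a completely different object
print(type(hello))   # <class 'str'>
```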
|
How do I load a specific column of information from a table into an array in Python?
Question: So, I'm supposed to load information from the SORCE database using a URL with
variables for the begin and end dates of information, and then create an array
for the wavelength.
I'm saving the data using:
url = "http://lasp.colorado.edu/lisird/tss/sorce_ssi.csv?&time>=%(YYYY)04d-%(MM)02d-%(DD)02d&time<%(yyyy)04d-%(mm)02d-%(dd)02d" %{"YYYY":YYYY, "MM":MM, "DD":DD, "yyyy":yyyy, "mm":mm, "dd":dd}
urlptr = urllib2.urlopen(url)
data = ascii.read(urlptr)
this gets an output like:
time (days since 2003-01-24) wavelength (nm) ... instrument (id) version
---------------------------- --------------- ... --------------- -------
2534.5 0.5 ... 57.0 10.0
2534.5 1.5 ... 57.0 10.0
2534.5 2.5 ... 57.0 10.0
2534.5 3.5 ... 57.0 10.0
2534.5 4.5 ... 57.0 10.0
2534.5 5.5 ... 57.0 10.0
2534.5 6.5 ... 57.0 10.0
2534.5 7.5 ... 57.0 10.0
2534.5 8.5 ... 57.0 10.0
2534.5 9.5 ... 57.0 10.0
2534.5 10.5 ... 57.0 10.0
... ... ... ... ...
2898.5 2300.43 ... nan nan
2898.5 2311.89 ... nan nan
2898.5 2323.28 ... nan nan
2898.5 2334.63 ... nan nan
2898.5 2345.9 ... nan nan
2898.5 2357.11 ... nan nan
2898.5 2368.28 ... nan nan
2898.5 2379.37 ... nan nan
2898.5 2390.42 ... nan nan
2898.5 2401.4 ... nan nan
2898.5 2412.34 ... nan nan
and my first thought to create the wavelength array was to write something
like:
wlength = loadtxt(data, usecols=(1))
However, when I run this I get a TypeError saying that the 'int' object isn't
iterable.
I know that ints aren't iterable, but how do I make the information I'm
looking for into something that is iterable?
Answer: Here's an example using the csv module:
import csv
import urllib2
wavelengths = []
# small time range for example
url = "http://lasp.colorado.edu/lisird/tss/sorce_ssi.csv?&time%3E=2014-01-01&time%3C2014-01-10"
urlptr = urllib2.urlopen(url)
csv_reader = csv.reader(urlptr)
first_line = True
for row in csv_reader:
if first_line:
first_line = False
continue
wavelengths.append(row[1])
print "There's", len(wavelengths), "wavelengths"
print "First 10", wavelengths[:10]
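As an aside on the original `TypeError`: `usecols=(1)` is just the integer `1`
(parentheses alone don't make a tuple), and `loadtxt` tries to iterate over
it. A trailing comma fixes that - a minimal sketch with made-up inline data:

```python
import io

import numpy as np

# Two whitespace-separated columns, mimicking the time/wavelength layout.
data = io.StringIO("2534.5 0.5\n2534.5 1.5\n")

# usecols=(1) would be the int 1; usecols=(1,) is a one-element tuple,
# which loadtxt can iterate over to pick out the second column.
wlength = np.loadtxt(data, usecols=(1,))
print(wlength)  # [0.5 1.5]
```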
|
Django Lint throwing error
Question: I installed Python Lint for static analysis of Python code
pylint --version:
No config file found, using default configuration
pylint 1.4.1,
astroid 1.3.4, common 0.63.2
Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2]
I am currently working on a django project, for which I installed python-
django-lint package. But when I invoke the django-lint it throws an error:
Traceback (most recent call last):
File "/usr/bin/django-lint", line 25, in <module>
sys.exit(script.main())
File "/usr/lib/pymodules/python2.7/DjangoLint/script.py", line 119, in main
AstCheckers.register(linter)
File "/usr/lib/pymodules/python2.7/DjangoLint/AstCheckers/__init__.py", line 22, in register
from size import SizeChecker
File "/usr/lib/pymodules/python2.7/DjangoLint/AstCheckers/size.py", line 19, in <module>
from pylint.interfaces import IASTNGChecker
ImportError: cannot import name IASTNGChecker
I am using Python 3.4 and Django 1.6.
Answer: Install the `python3-pip` package, remove the `python-django-lint` package and
call then `sudo pip3 install pylint-django`.
To invoke it, call `pylint --load-plugins pylint_django...`
|
Not able to import views in djano, but it works in django shell
Question: I have the following simple urls.py:
from django.conf.urls import patterns, include, url
from base import views
urlpatterns = patterns('',
url(r'^test$', 'views.test', name='test'),
)
And the following basic view: from django.shortcuts import render
def test(request):
return render(request, "base/test.html")
I have a project named blog and an app named base.
I know I can fix this by using "base.views.test" in the urls.py, but in theory
this should work, I think, because there is an `__init__.py` file in the base
directory.
When using views.test I'm getting "no module named views" as an error in Django,
but when I do this in the django shell it works:
from base import views
views.test
function test
I'm wondering why this works in the django shell and not in Django itself. No
matter what I do in Django, unless I use the full absolute path to
the view, I get an error saying it couldn't find a module named
views. So basically I'm looking for an explanation of why it's not working, not
a solution, since I know I could use the full path to the view and make it
work.
I've seen other threads on Stack Overflow where a user has the same problem, but
only a solution is provided, no explanation; they only tell people to use the
full path to the view, but I don't really understand why it wouldn't work the
other way. I know I also could do it this way:
from django.conf.urls import patterns, include, url
urlpatterns = patterns('base',
url(r'^test$', 'views.test', name='test'),
)
but I'm trying to understand why it isn't working when using "from base import
views", since I thought this would work, given that it works with other
regular Python modules.
File structure:
.
├── base
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── models.py
│ ├── static
│ ├── templates
│ │ └── base
│ │ └── test.html
│ ├── tests.py
│ ├── views.py
│ └── views.pyc
├── blog
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
└── manage.py
New traceback after changing to views.test without quotes (I removed the old
traceback to keep the post from getting too long and unreadable):
Environment:
Request Method: GET
Request URL: http://localhost:8000/test
Django Version: 1.5
Python Version: 2.7.7
Installed Applications:
('django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'blog')
Installed Middleware:
('django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware')
Traceback:
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
92. response = middleware_method(request)
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/middleware/common.py" in process_request
69. if (not urlresolvers.is_valid_path(request.path_info, urlconf) and
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/core/urlresolvers.py" in is_valid_path
551. resolve(path, urlconf)
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve
440. return get_resolver(urlconf).resolve(path)
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/core/urlresolvers.py" in resolve
319. for pattern in self.url_patterns:
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/core/urlresolvers.py" in url_patterns
347. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/core/urlresolvers.py" in urlconf_module
342. self._urlconf_module = import_module(self.urlconf_name)
File "/Users/exceed/code/python/lib/python27/lib/python2.7/site-packages/django/utils/importlib.py" in import_module
35. __import__(name)
File "/Users/exceed/code/python/django/blog/blog/urls.py" in <module>
8. url(r'^test$', views.test, name='test'),
Exception Type: NameError at /test
Exception Value: name 'views' is not defined
Answer: Just pass your view function instead of a string with a dotted path to it:
from django.conf.urls import patterns, include, url
from base import views
urlpatterns = patterns('',
url(r'^test$', views.test, name='test'),
)
Quick explanation: you can pass as the 2nd parameter of `url` either a function
or a string with a dotted path to it. When it's a function, Django will simply
call it when the URL is opened; when it's a dotted path, Django will search for
that function on your `PYTHONPATH`. It doesn't matter if you import your
module inside your `urls.py`; Django will always search the `PYTHONPATH`. You
can additionally pass a prefix as the first argument of `patterns`, and all
functions given by a dotted-path string will be searched for relative to that
prefix.
So if you don't want to pass a prefix into `patterns` (because, for example, you
are using multiple apps and only one `patterns`), the only solution here is to
pass functions instead of dotted paths. You can additionally use multiple
`patterns` (and specify different prefixes in each of them) and then include
them into the main `patterns` (without a prefix passed into it).
|
Regex for matching this string
Question: With Python (the regex module), I am trying to substitute 'x' for each letter
'c' in those strings occurring in a text and:
1. delimited by 'a', at the left, and 'b' at the right, and
2. with no more 'a's and 'b's in them.
Example:
`cuacducucibcl` -> `cuaxduxuxibcl`
How can I do this?
Thank you.
Answer: With the standard `re` module in Python, you can use `a[^ab]+b` to match the
string which starts and end with `a` and `b` and doesn't have any occurence of
`a` or `b` in between, **then** supply a replacement function to take care of
the replacement of `c`:
>>> import re
>>> re.sub('a[^ab]+b', lambda m: m.group(0).replace('c', 'x'), 'cuacducucibcl')
'cuaxduxuxibcl'
Document of [`re.sub`](https://docs.python.org/2/library/re.html#re.sub) for
reference.
|
How to update my same csv file and use it for matching in python?
Question: There are 2 csv files. My first task is to match the contents of both files
against a specified keyword and create a new file, named keyword.csv, with the
matching contents. That part is working fine.
Problem part: I want to reuse remaining.csv again and again to remove the
values that have been matched by the keywords. The program is not throwing any
errors, but it is not writing all the content to the csv properly.
import csv
handle2 =''
each=''
row=''
cat_var=''
handle1=''
with open('ss.csv') as handle2:
for each in handle2:
strng_conv = ''.join(map(str,each.rstrip('\n')))
print strng_conv
file_name = strng_conv+'.csv'
print file_name
cat_file = open(file_name,'wb')
cat_var = csv.writer(cat_file,delimiter=' ')
with open('repaired.csv', 'r') as csvfile:
handle1 = csv.reader(csvfile, delimiter=' ')
for row in handle1:
if each.rstrip() in row:
cat_var.writerow(row)
csvfile.close()
else:
with open('repaired.csv','w') as csvfile:
handle1 = csv.writer(csvfile,delimiter=' ')
handle1.writerow(row)
csvfile.close()
cat_file.close()
csvfile.close()
handle2.close()
**ss.csv contains items like**
yepme
reebok
adidas
baggit
**repaired.csv contains**
Waybill Contents Amount COD Amount Type
928049 123456 yepme bag 1109 0 Pre-paid
904096 15116916453 yepme polish 518 0 Pre-paid
922222 52255 puma abc 522 cod
Suppose it matches the yepme contents; then I want repaired.csv to contain only
the row having puma, meaning all the leftovers after a match.
**sample output files to be created like this-These file contains all the
matched data**
yepme.csv
reebok.csv
adidas.csv
**Repaired.csv at the end should contain those rows which are not matched by
any of the keywords.** For example, I haven't included puma as a keyword, so
the only entry left in repaired.csv should be the puma one.
Answer: The problem you have is that you don't fully understand how the IO is working
here.
When you do `with X...`, what actually happens under the hood is that Python
calls the `__enter__` method of the object, and then, when the scope is left
(i.e. the indent level of the `with` statement), it calls the `__exit__`
method of the same object.
The following two snippets are equivalent*:
with open("file1", "r") as a:
...
and
tmp = open("file1", "r")
a = tmp.__enter__()
...
a.__exit__()
(* That is, if you ignore exceptions being chucked out and stuff )
For `File` objects, the `__exit__()` will clean up by closing the file and
whatnot - i.e. you shouldn't be closing the file after a `with open(...) as
X:`.
And the second thing you're doing, which is perhaps iffy and might not get you
the results you desire:
You're looping over the contents of a file, while editing the same file (or
maybe at least you think you do). And you're reading the files a couple of
different ways.
When you do `for line in file_object`, it iterates through the file, moving an
internal pointer.
When you open the file and then read it in using some special reader, you're
creating a new object - which will be independent from the actual file. Mixing
all of these different methods is likely going to give you results which are
inconsistent and confusing, and a headache to debug when you make assumptions
about how they work.
It will probably be much easier and clearer to just read in the two original
files, and then parse them as python objects, rather than opening and closing
all of these files. I'll update this if you alter your question so it's a
little clearer what you want to do.
try something like this:
keywords = []
with open("ss.csv", "r") as keywords_file:
for line in keywords_file:
keywords.append(line.strip())
outputs = {}
left_overs= []
with open("remaining.csv", "r") as remaining_file:
for line in remaining_file:
for keyword in keywords:
if keyword in line:
if keyword in outputs:
outputs[keyword].append(line)
else:
outputs[keyword] = [line]
break
else:
left_overs.append(line)
for keyword, output in outputs.items():
with open(keyword + ".csv", "w") as output_file:
for line in output:
output_file.write(line)
with open("remaining.csv", "w") as remaining_file:
for line in left_overs:
remaining_file.write(line)
Does that do what you're aiming for?
|
Matplotlib Import Error : _backend_gdk.so:undefined symbol: PyArray_SetBaseObject
Question: I was trying to run the following code in OpenCV python.
#!/usr/bin/env python
import numpy as np
import time
import matplotlib
matplotlib.use('GTKAgg')
from matplotlib import pyplot as plt
def randomwalk(dims=(256, 256), n=20, sigma=5, alpha=0.95, seed=1):
""" A simple random walk with memory """
r, c = dims
gen = np.random.RandomState(seed)
pos = gen.rand(2, n) * ((r,), (c,))
old_delta = gen.randn(2, n) * sigma
while True:
delta = (1. - alpha) * gen.randn(2, n) * sigma + alpha * old_delta
pos += delta
for ii in xrange(n):
if not (0. <= pos[0, ii] < r):
pos[0, ii] = abs(pos[0, ii] % r)
if not (0. <= pos[1, ii] < c):
pos[1, ii] = abs(pos[1, ii] % c)
old_delta = delta
yield pos
def run(niter=1000, doblit=True):
"""
Display the simulation using matplotlib, optionally using blit for speed
"""
fig, ax = plt.subplots(1, 1)
ax.set_aspect('equal')
ax.set_xlim(0, 255)
ax.set_ylim(0, 255)
ax.hold(True)
rw = randomwalk()
x, y = rw.next()
plt.show(False)
plt.draw()
if doblit:
# cache the background
background = fig.canvas.copy_from_bbox(ax.bbox)
points = ax.plot(x, y, 'o')[0]
tic = time.time()
for ii in xrange(niter):
# update the xy data
x, y = rw.next()
points.set_data(x, y)
if doblit:
# restore background
fig.canvas.restore_region(background)
# redraw just the points
ax.draw_artist(points)
# fill in the axes rectangle
fig.canvas.blit(ax.bbox)
else:
# redraw everything
fig.canvas.draw()
plt.close(fig)
print "Blit = %s, average FPS: %.2f" % (
str(doblit), niter / (time.time() - tic))
if __name__ == '__main__':
run(doblit=False)
run(doblit=True)
At first I got an error that said Import Error : No module named _backend_gdk.
I searched a lot and tried various methods. Now, I get a different error.
Traceback (most recent call last):
File "testblit.py", line 7, in <module>
from matplotlib import pyplot as plt
File "/usr/local/lib/python2.7/dist-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/pyplot.py", line 108, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/usr/local/lib/python2.7/dist-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/backends/__init__.py", line 32, in pylab_setup
globals(),locals(),[backend_name],0)
File "/usr/local/lib/python2.7/dist-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/backends/backend_gtkagg.py", line 14, in <module>
from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
File "/usr/local/lib/python2.7/dist-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/backends/backend_gtk.py", line 36, in <module>
from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK
File "/usr/local/lib/python2.7/dist-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/backends/backend_gdk.py", line 33, in <module>
from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array
ImportError: /usr/local/lib/python2.7/dist-packages/matplotlib-1.5.dev1-py2.7-linux-x86_64.egg/matplotlib/backends/_backend_gdk.so: undefined symbol: PyArray_SetBaseObject
Any help to solve this problem would be appreciated. The code above is for
real time plotting using blit that I found on stackoverflow.
Answer: Matplotlib should compile with numpy 1.7 or newer, or with 1.6 after applying
the fix at <https://github.com/matplotlib/matplotlib/pull/5080>.
|
Python: random function from locals() with defined prefix
Question: I'm working on a text-based adventure and now want to run a random function.
All adventure functions are named "adv" followed by a 3-digit number.
If I run go() I get back:
IndexError: Cannot choose from an empty sequence
This is because allAdv is still empty. If I run go() line by line in the shell
it works, but not in the function. What did I miss?
import fight
import char
import zoo
import random
#runs a random adventure
def go():
allAdv=[]
for e in list(locals().keys()):
if e[:3]=="adv":
allAdv.append(e)
print(allAdv)
locals()[random.choice(allAdv)]()
#rat attacks out of the sewer
def adv001():
print("All of a sudden an angry rat jumps out of the sewer right beneath your feet. The small, stinky animal aggressivly flashes his teeth.")
fight.report(zoo.rat)
Answer: It's mainly a scoping issue: when you invoke `locals()` inside `go()`, it
only contains the local variables defined in that function, such as `allAdv`:
    locals().keys() # ['allAdv']
But if you type the following line by line in the shell, `locals()` does include
`adv001`, since they are at the same level in this case.
def adv001():
print("All of a sudden an angry rat jumps out of the sewer right beneath your feet. The small, stinky animal aggressivly flashes his teeth.")
fight.report(zoo.rat)
allAdv=[]
print locals().keys() # ['adv001', '__builtins__', 'random', '__package__', '__name__', '__doc__']
for e in list(locals().keys()):
if e[:3]=="adv":
allAdv.append(e)
print(allAdv)
locals()[random.choice(allAdv)]()
If you really want to get those function names in `go()`, you might
consider changing `locals().keys()` to `globals().keys()`.
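For illustration, here is a minimal sketch of the `globals()`-based version (the two `adv` functions are hypothetical stand-ins for the real adventures):

```python
import random

def adv001():
    return "rat adventure"

def adv002():
    return "sewer adventure"

def go():
    # collect every module-level name starting with "adv"
    all_adv = [name for name in globals() if name.startswith("adv")]
    # look the chosen name up in globals() and call the function
    return globals()[random.choice(all_adv)]()

print(go())
```

Note this only finds functions defined in the same module; adventures living in `fight`, `char` or `zoo` would need `vars(that_module)` instead.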
|
Python Pandas, is there a way to split a date_range into equally sized intervals?
Question: I'm currently working on a project in Python (Pandas module). What I want to do
is split a date_range into equally sized intervals.
import pandas as pd
startdate='2014-08-08'
enddate='2014-08-11'
n=3
pd.date_range(start=startdate,end=enddate)
What I would like is some way for it to return the intermediate dates as
string, for example:
startdate='2014-08-08'
intermediate_1='2014-08-09'
intermediate_2='2014-08-10'
enddate='2014-08-11'
This is an example with days, but I would like to be able to do the same for
hours or minutes. Is there a way to do this in the current Pandas module? Any
help would be greatly appreciated.
Regards, Alex
Answer: You can use `np.split` to split your array. This returns an array of
DatetimeIndex objects, so we access the first element using `[0]`, then call its
`date()` method and cast to `str`:
In [238]:
startdate='2014-08-08'
enddate='2014-08-11'
n=3
d = pd.date_range(start=startdate,end=enddate)
d
Out[238]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-08-08, ..., 2014-08-11]
Length: 4, Freq: D, Timezone: None
In [249]:
# split to produce n-1 blocks
splits = np.split(d, n-1)
# show the contents
for t in splits:
print(t)
# assign each element, then access the value using [0] and access the date attribute and cast to str
intermediate_1 = str(splits[0][0].date())
intermediate_2 = str(splits[-1][0].date())
# show intermediate_1 which shows it is a str representation of the date
intermediate_1
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-08-08, 2014-08-09]
Length: 2, Freq: D, Timezone: None
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-08-10, 2014-08-11]
Length: 2, Freq: D, Timezone: None
Out[249]:
'2014-08-08'
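If you want the same thing without pandas, or at hour/minute resolution, plain `datetime` arithmetic works too; `split_range` here is my own helper name, not a pandas API:

```python
from datetime import datetime

def split_range(start, end, n):
    """Return n+1 equally spaced datetimes from start to end inclusive."""
    step = (end - start) / n  # timedelta division needs Python 3
    return [start + i * step for i in range(n + 1)]

edges = split_range(datetime(2014, 8, 8), datetime(2014, 8, 11), 3)
print([e.strftime('%Y-%m-%d') for e in edges])
```

The same call with `datetime(2014, 8, 8, 0)` and `datetime(2014, 8, 8, 6)` would split six hours into equal pieces.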
|
quick sort list of tuple with python
Question: I am trying to use the quicksort algorithm to sort the elements of a list
of tuples. For example, if I have a list like [(0,1), (1,1), (2,1), (3,3), (4,2),
(5,1), (6,4)], I want to sort it by the second element of each tuple and obtain
[(6,4), (3,3), (4,2), (0,1), (1,1), (2,1), (5,1)]. I have tried the following
algorithm:
def partition(array, begin, end, cmp):
pivot=array[end][1]
ii=begin
for jj in xrange(begin, end):
if cmp(array[jj][1], pivot):
array[ii], array[jj] = array[jj], array[ii]
ii+=1
array[ii], array[end] = pivot, array[ii]
return ii
    def sort(array, cmp=lambda x, y: x > y, begin=0, end=None):
if end is None: end = len(array)
if begin < end:
i = partition(array, begin, end-1, cmp)
sort(array, cmp, i+1, end)
sort(array, cmp, begin, i)
The problem is that the result is this: [4, (3, 3), (4, 2), 1, 1, 1, (5, 1)].
What do I have to change to get the correct result?
Answer: Complex sorting patterns in Python are painless. Python's [sorting
algorithm](http://en.wikipedia.org/wiki/Timsort) is state of the art, one of
the fastest available in real-world cases. No algorithm design needed.
>>> from operator import itemgetter
>>> l = [(0,1), (1,1), (2,1), (3,3), (4,2), (5,1), (6,4 )]
>>> l.sort(key=itemgetter(1), reverse=True)
>>> l
[(6, 4), (3, 3), (4, 2), (0, 1), (1, 1), (2, 1), (5, 1)]
Above,
[`itemgetter`](https://docs.python.org/3/library/operator.html#operator.itemgetter)
returns a function that returns the second element of its argument. Thus the
`key` argument to `sort` is a function that returns the item on which to sort
the list.
Python's sort is stable, so the ordering of elements with equal keys (in this
case, the second item of each tuple) is determined by the original order.
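That said, if you do want to repair the hand-rolled version: the immediate bug is in `partition`, which writes the bare `pivot` value (a plain number) back into the list instead of swapping the whole tuple. A corrected sketch:

```python
def partition(array, begin, end, cmp):
    pivot = array[end][1]
    ii = begin
    for jj in range(begin, end):
        if cmp(array[jj][1], pivot):
            array[ii], array[jj] = array[jj], array[ii]
            ii += 1
    # swap the full tuples, not the bare pivot value
    array[ii], array[end] = array[end], array[ii]
    return ii

def sort(array, cmp=lambda x, y: x > y, begin=0, end=None):
    if end is None:
        end = len(array)
    if begin < end:
        i = partition(array, begin, end - 1, cmp)
        sort(array, cmp, i + 1, end)
        sort(array, cmp, begin, i)

l = [(0, 1), (1, 1), (2, 1), (3, 3), (4, 2), (5, 1), (6, 4)]
sort(l)
print(l)  # [(6, 4), (3, 3), (4, 2), (0, 1), (1, 1), (2, 1), (5, 1)]
```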
|
How to set the default encoding in a buildout script, or during virtualenv creation?
Question: I have a Plone project which is created by a `buildout` script and needs a
default encoding of `utf-8`. This is usually done in the `sitecustomize.py`
file of the Python installation. Since there is a `virtualenv`, I'd like to
generate this file automatically, to contain something like:
import sys
sys.setdefaultencoding('utf-8')
After generation I have two empty `sitecustomize.py` files - one in
`parts/instance/`, and one in `parts/buildout`; but none of these seems to be
used (I couldn't find them in `sys.path`).
I tried `zopepy`:
>>> from os.path import join, isfile
>>> from pprint import pprint
>>> import sys
>>> pprint([p for p in sys.path
... if isfile(join(p, 'sitecustomize.py'))
... ])
and found another one in my local `lib/python2.7/site-packages/` directory
which looks good; but it doesn't work:
>>> import sys
>>> sys.getdefaultencoding()
'ascii'
This directory sits near the end of the `sys.path`, because I needed to add it
by an `extra-paths` entry (to get another historical package).
Any pointers? Thank you!
**System information** : CentOS 7, Python 2.7.5
**Edit** : I deleted those two empty `sitecustomize.py` files; now I have a
default encoding of `utf-8` in the `zopepy` session but still `ascii` in
Plone; this surprises me, because I have in my buildout script:
[zopepy]
recipe=zc.recipe.egg
eggs = ${instance:eggs}
extra-paths = ${instance:extra-paths}
interpreter = zopepy
scripts = zopepy
To debug this, I created a little function which I added to my code, and which
displays a little information about relevant modules in the `sys.path`:
import sys
from os.path import join, isdir, isfile
def sitecustomize_info():
plen = len(sys.path)
print '-' * 79
print 'sys.path has %(plen)d entries' % locals()
for tup in zip(range(1, plen+1), sys.path):
nr, dname = tup
if isdir(dname):
for fname in ('site.py', 'sitecustomize.py'):
if isfile(join(dname, fname)):
print '%(nr)4d. %(dname)s/%(fname)s' % locals()
spname = join(dname, 'site-packages', 'sitecustomize.py')
if isfile(spname):
print '%(nr)4d. %(spname)s' % locals()
else:
print '? %(dname)s is not a directory' % locals()
print '-' * 79
Output:
sys.path has 303 entries
8. /usr/lib64/python2.7/site-packages/sitecustomize.py
295. /opt/zope/instances/wnzkb/lib/python2.7/site-packages/sitecustomize.py
? /usr/lib64/python27.zip is not a directory
298. /usr/lib64/python2.7/site.py
? /usr/lib64/python2.7/lib-tk is not a directory
? /usr/lib64/python2.7/lib-old is not a directory
303. /usr/lib/python2.7/site-packages/sitecustomize.py
All `sitecustomize.py` files look the same (switching to `utf-8`), and I
didn't tweak `site.py` (for now; if everything else fails, I might need to.)
Answer: If you **really** want/need to use the `sitecustomize.py` trick, you could
include this part in your buildout:
[fixencode]
recipe = plone.recipe.command
stop-on-error = yes
update-command = ${fixencode:command}
command =
SITE_PACKAGES=$(${buildout:executable} -c \
'from distutils.sysconfig import get_python_lib;print(get_python_lib())')
cat > $SITE_PACKAGES/../sitecustomize.py << EOF
#!${buildout:executable} -S
import sys
sys.setdefaultencoding('utf-8')
EOF
It will be added into the site-packages folder from your virtualenv.
|
Handling specific Python error within Bash call?
Question: I am using the `line_profiler`, which allows you to drop `@profile` decorators
anywhere in a python codebase and returns line output.
However, if you try to execute python code that contains one such `@profile`
decorator without loading this `line_profiler` module, the code will fail with
a NameError, for such a decorator is defined and injected by this external
library.
I'd like a bash command that attempts to run my python script with vanilla
python. Then, if and only if the error consists of `NameError`, I want to give
it a second try. This is what I have got so far:
    python -u $file || python -m kernprof -l -v --outfile=/dev/null $file
The problem is of course that if my python code has ANY errors at all, be it
`ValueError` or `IndentationError` or anything, it tries the profiler. I want
to ONLY run the profiler if the error contains a string `NameError: name
'profile' is not defined` is found within `stderr`.
Answer: Wouldn't it be better to monkey-patch `profile` when `line_profiler` is not
present? Something like:
    try:
        import line_profiler  # noqa: only used to detect availability
    except ImportError:
        import warnings
        warnings.warn("Profile disabled")
        def profile(fn):
            def wrapper(*args, **kw):
                return fn(*args, **kw)
            return wrapper
This way your code runs in either case without complicating matters.
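A self-contained variant of the same idea pokes the no-op directly into the builtins namespace, which is where `kernprof` appears to inject `profile` (`builtins` is the Python 3 name; on Python 2 it is `__builtin__`):

```python
import builtins

if not hasattr(builtins, 'profile'):
    # running under plain "python": make @profile a pass-through
    def profile(func):
        return func
    builtins.profile = profile

@profile
def work(x):
    return x * 2

print(work(21))
```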
|
Running TextBlob in Python3
Question: I installed textblob using pip as given
[here](http://textblob.readthedocs.org/en/dev/install.html).
Now, when I try to import this in python3.4 in terminal then it says
ImportError: No module named 'textblob'
Whereas, in python2.7 it imports happily. I have tried reinstalling it. I have
even reinstalled pip. What is the problem here?
Answer: _Elementary OS_ being an Ubuntu derivative,
$ sudo apt-get install python3-pip
...
$ pip3 install textblob
should do the trick. It is possible that you will have to apply some minor
variation to the above due to possible differences between Elementary and
Ubuntu.
Remember that Python 2.7 and Python 3.x are two independent systems, what you
install in 2.7 is definitely NOT automatically available for 3.x and the
converse is equally true.
|
matplotlib quiver 3d error
Question: I am running the quiver3d example given in the following tutorial
<http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html>
I am running python 'Python 2.7.6' on mac Yosemite with matplotlib 1.4.2
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib
>>> matplotlib.__version__
'1.4.2'
I am running the example given in matplotlib tutorial
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
ax = fig.gca(projection='3d')
x, y, z = np.meshgrid(np.arange(-0.8, 1, 0.2),
np.arange(-0.8, 1, 0.2),
np.arange(-0.8, 1, 0.8))
u = np.sin(np.pi * x) * np.cos(np.pi * y) * np.cos(np.pi * z)
v = -np.cos(np.pi * x) * np.sin(np.pi * y) * np.cos(np.pi * z)
w = (np.sqrt(2.0 / 3.0) * np.cos(np.pi * x) * np.cos(np.pi * y) * np.sin(np.pi * z))
ax.quiver(x, y, z, u, v, w, length=0.1)
plt.show()
but getting the following error
Traceback (most recent call last):
File "try.py", line 16, in <module>
ax.quiver(x, y, z, u, v, w, length=0.1)
File "/Library/Python/2.7/site-packages/matplotlib/axes/_axes.py", line 4179, in quiver
q = mquiver.Quiver(self, *args, **kw)
File "/Library/Python/2.7/site-packages/matplotlib/quiver.py", line 417, in __init__
X, Y, U, V, C = _parse_args(*args)
File "/Library/Python/2.7/site-packages/matplotlib/quiver.py", line 379, in _parse_args
nr, nc = U.shape
ValueError: too many values to unpack
BTW, ax is an Axes3D; I verified it using `isinstance(ax, axes3d.Axes3D)` and it
returned True. I also checked whether the Axes3D definition contains a quiver
method; apparently it doesn't, so the Axes quiver method is used instead. Now I
don't know whether it should have its own quiver definition or not.
Answer: `ax` needs to be an `axes3d` object - at the moment it's just a regular `axes`
object, and so the call to `ax.quiver` is calling the 2D routine, which has a
different function signature. If you look at those lines in the source you'll
see that line 417 of `/Library/Python/2.7/site-packages/matplotlib/quiver.py`
is trying to unpack all of your arguments, `x,y,z,u,v,w` \- 6 args, into just
5 variables.
I'm not sure why, and can't check at the moment, but for some reason
`fig.gca(projection="3d")` is _not_ giving you an `axes3d` object, and so the
2D version of quiver is being called instead of the 3D one.
Doing a little more digging, `fig.gca(...)` just calls
`fig.add_subplot(1,1,1,**kwargs)` if there's no axes object on the stack. Is
this a complete program? or are you doing this after doing some other stuff in
the same program?
You could try replacing the `gca` call with
`add_subplot(1,1,1,projection='3d')` \- that does not query the stack, so it
won't matter if you have other stuff there already.
|
Tallying the outcome of a coin flip
Question: I have written a little piece of code for modelling the outcome of a coin
flip, and would like to find a better way of presenting the results than a
list of consecutive coin flips. I'm one month into learning Python as part of
my physics degree, if that helps provide some context.
Here's the code;
from pylab import *
x=0
while x<=100:
num = randint(0,2)
if num == 0:
print 'Heads'
else:
print 'Tails'
x=x+1
print 'Done'
What options do I have to present this data in an easier to interpret manner?
Answer: Instead of using a `while` loop and printing results to the screen, Python can
do the counting and store the results very neatly using
[`Counter`](https://docs.python.org/3/library/collections.html#collections.Counter),
a subclass of the built in dictionary container.
For example:
from collections import Counter
import random
Counter(random.choice(['H', 'T']) for _ in range(100))
When I ran the code, it produced the following tally:
Counter({'H': 52, 'T': 48})
We can see that heads was flipped 52 times and tails 48 times.
This is already much easier to interpret, but now that you have the data in a
data structure you can also plot a simple bar chart.
Following the suggestions in a Stack Overflow answer
[here](http://stackoverflow.com/questions/16010869/python-plot-a-bar-using-
matplotlib-using-a-dictionary), you could write:
import matplotlib.pyplot as plt
# tally = Counter({'H': 52, 'T': 48})
plt.bar(range(len(tally)), tally.values(), width=0.5, align='center')
plt.xticks(range(len(tally)), ['H', 'T'])
plt.show()
This produces a bar chart which looks like this:

|
ImportError: No module named fixedpickle when running PyDsTool on Windows 7
Question: I have downloaded PyDSTool for Windows, and I believe I have pointed
Python to the correct location, but I get the following error:
Traceback (most recent call last):
File "<ipython-input-1-7b811358a37e>", line 1, in <module>
from PyDSTool import *
File "C:\Anaconda\lib\site-packages\pydstool-0.88.140328-py2.7.egg\PyDSTool\__init__.py", line 85, in <module>
from .Events import *
File "C:\Anaconda\lib\site-packages\pydstool-0.88.140328-py2.7.egg\PyDSTool\Events.py", line 13, in <module>
from .Variable import *
File "C:\Anaconda\lib\site-packages\pydstool-0.88.140328-py2.7.egg\PyDSTool\Variable.py", line 10, in <module>
from .utils import *
File "C:\Anaconda\lib\site-packages\pydstool-0.88.140328-py2.7.egg\PyDSTool\utils.py", line 8, in <module>
from .common import *
File "C:\Anaconda\lib\site-packages\pydstool-0.88.140328-py2.7.egg\PyDSTool\common.py", line 53, in <module>
import fixedpickle as pickle
ImportError: No module named fixedpickle
Answer: I solved this problem by uninstalling and reinstalling Anaconda. Otherwise
the installation of PyDSTool is very simple: it is not really an installation at
all, just unpack the zip folder and make sure it is on your Python path (also
add its parent directories to the Python path). This may be done through the
Spyder IDE: click Tools > PYTHONPATH.
I received some assistance from the Sourceforge support thread for the
PyDSTool project
[here](https://sourceforge.net/p/pydstool/discussion/472291/thread/b4d36830/)
|
Why does the AWS CLI pip package installation install a six package it can't use?
Question: Whenever I update my AWS CLI with
pip install -U awscli
it downgrades several packages (`colorama`, `dill`, `rsa`, and `websocket-
client`) and upgrades `six` to a version (1.9.0) that it can't use. After
updating, if I try to use the AWS CLI (e.g. `eb status`) I get
Traceback (most recent call last):
File "/usr/local/bin/eb", line 5, in <module>
from pkg_resources import load_entry_point
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 3018, in <module>
working_set = WorkingSet._build_master()
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 614, in _build_master
return cls._build_from_requirements(__requires__)
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 627, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/Library/Python/2.7/site-packages/pkg_resources/__init__.py", line 805, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: six==1.8.0
In order to get the AWS CLIs working again I have to downgrade `six` to 1.8.0
(the other packages can be upgraded to the current versions again without ill
effect). But the next update of `awscli` (over the past half dozen or so
updates) always brings the current, but unusable by the AWS CLIs, version of
`six` back again.
What's going on here? Is there a bug in the `awscli` package? Have others
experienced this?
Answer: That was actually a bug in the `awsebcli` package's setup script that has [now
been
fixed](https://github.com/gxela/awsebcli/pull/1/files#diff-2eeaed663bd0d25b7e608891384b7298).
|
Python classes with __hash__ support (depending on the instance)
Question: I have a mutable class in Python which I would like to be able to "freeze",
at which point it becomes immutable and can therefore have a `__hash__` function.
My concern is whether having the `__hash__` function present will make Python
behave strangely, because it may check for the existence of a hash function.
I realize I could use a subclass that has a hash function, copy the class to a
subtype. But I'm interested to know if having an _optional_ hash function is
supported by Python.
In the example below it works in basic cases (but may fail in others).
Note: _This assumes you don't touch`_var` or `_is_frozen` directly and only
use access methods._
Note: it's probably more Pythonic not to use this method and instead have a
`FrozenMyVar` class, but I'm curious whether this can be considered
_supported_ in Python or not.
class MyVar:
__slots__ = ("_var", "_is_frozen")
def __init__(self, var):
self._var = var
self._is_frozen = False
def freeze(self):
self._is_frozen = True
def __hash__(self):
if not self._is_frozen:
raise TypeError("%r not hashable (freeze first)" % type(self))
return hash(self._var)
        def __eq__(self, other):
            try:
                return self.var == other.var
            except AttributeError:
                return NotImplemented
@property
def var(self):
return self._var
@var.setter
def var(self, value):
if self._is_frozen:
raise AttributeError("%r is frozen" % type(self))
self._var = value
# ------------
# Verify Usage
v = MyVar(10)
v.var = 9
try:
hash(v)
except:
print("Hash fails on un-frozen instance")
v.freeze()
try:
v.var = 11
except:
print("Assignment fails on frozen instance")
print("Hash is", hash(v))
* * *
Adding a note on the real-world use-case: we have a linear math module with
Vector/Matrix/Quaternion/Euler classes. In some cases we want to have, e.g., a
"set of matrices" or a "dict with vector keys". It's always possible to expand
them into tuples, but they take up more memory and lose their ability to behave
as our own math types - so the ability to _freeze_ them is attractive.
Answer: The original example didn't quite work "sensibly", because the class had
`__hash__` but not `__eq__`, and as
<https://docs.python.org/3/reference/datamodel.html#object.__hash__>
says, "If a class does not define an `__eq__()` method it should not define a
`__hash__()` operation either". But the OP's edit fixed that side issue.
This done, if the class and its instances are indeed used with the discipline
outlined, behavior should comply with the specs: instances are "born
unhashable" but "become hashable" -- "irreversibly" given said discipline, and
only, of course, if their `self._var` is in turn hashable -- once their
`freeze` method is called.
Of course `collections.Hashable` will "mis-classify" unfrozen instances (as it
only checks for the presence of `__hash__`, not its actual working), but that
is hardly unique behavior:
>>> import collections
>>> isinstance((1, [2,3], 4), collections.Hashable)
True
>>> hash((1, [2,3], 4))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
That `tuple` does appear "hashable", like all tuples (since its type does
define `__hash__`) -- but if you in fact try `hash`ing it, you nevertheless
get a `TypeError`, as one of the items is a `list` (making the whole not
actually hashable!-). Not-yet-frozen instances of the OP's class would behave
similarly to such a tuple.
An alternative which does avoid this little glitch (yet doesn't require
potentially onerous copies of data) is to model the "freezing" as the instance
"changing type in-place", e.g...:
class MyVar(object):
_is_frozen = False
def __init__(self, var):
self._var = var
def freeze(self):
self.__class__ = FrozenMyVar
        def __eq__(self, other):
            try:
                return self.var == other.var
            except AttributeError:
                return NotImplemented
__hash__ = None
@property
def var(self):
return self._var
@var.setter
def var(self, value):
if self._is_frozen:
raise AttributeError("%r is frozen" % type(self))
self._var = value
class FrozenMyVar(MyVar):
_is_frozen = True
def __hash__(self):
return hash(self._var)
This behaves essentially like the original example (I've removed the "slots"
to avoid issues with `object layout differs` errors on `__class__` assignment)
but may be considered an improved object model since "changing type in-place"
models well such irreversible changes in behavior (and as a small side effect
`collections.Hashable` now behaves impeccably:-).
The concept of an object "changing type in-place" freaks some out because few
languages indeed would even tolerate it, and even in Python of course it's a
rare thing to have a practical use case for such an obscure feature of the
language. However, use cases do exist -- which is why `__class__` assignment
is indeed supported!-)
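To make the in-place type switch concrete, here is a stripped-down, hypothetical pair of classes (omitting `__eq__` and the property machinery for brevity) showing just the hashing behavior before and after the switch:

```python
class Box(object):
    __hash__ = None  # explicitly unhashable while mutable

    def __init__(self, val):
        self.val = val

    def freeze(self):
        self.__class__ = FrozenBox  # irreversible in-place switch

class FrozenBox(Box):
    def __hash__(self):
        return hash(self.val)

b = Box(10)
try:
    hash(b)
except TypeError:
    print("unhashable before freeze")
b.freeze()
print(hash(b) == hash(10))  # True
```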
|
Remove duplicate lines from a string in python
Question: I have a string in Python and would like to remove duplicate lines (i.e.
when the text between \n is the same, remove the second (third, fourth...)
occurrence) while preserving the order of the string. For example
    line1 \n line2 \n line3 \n line2 \n line2 \n line 4
would return:
    line1 \n line2 \n line3 \n line 4
Other examples I have seen on Stack Overflow handle this at the stage of reading
the text file into Python (e.g. using readline(), checking whether each line is
already in a set of read-in lines, and adding it to the string only if it is
unique). In my case this doesn't work, as the string I have has already been
heavily manipulated since loading into Python, and it seems very botched to
e.g. write the whole string to a txt file and then read it back line by line
looking for duplicated lines.
Answer: For Python 2.7+, this can be done with a one-liner:
from collections import OrderedDict
test_string = "line1 \n line2 \n line3 \n line2 \n line2 \n line 4"
"\n".join(list(OrderedDict.fromkeys(test_string.split("\n"))))
This gives me: `'line1 \n line2 \n line3 \n line 4'`
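On Python 3.7+ the built-in `dict` preserves insertion order as well, so the same trick works without the import; wrapped in a small helper:

```python
def dedupe_lines(text, sep="\n"):
    # dict.fromkeys keeps the first occurrence of each line, in order
    return sep.join(dict.fromkeys(text.split(sep)))

s = "line1 \n line2 \n line3 \n line2 \n line2 \n line 4"
result = dedupe_lines(s)
print(repr(result))  # 'line1 \n line2 \n line3 \n line 4'
```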
|
Function attributes in Python
Question: I asked this question as a continuation of [Limit function execution in
Python](http://stackoverflow.com/questions/28507359/limit-function-execution-
in-python)
I found a way to do it without threads etc. Just simple checking from time to
time.
Here is my decorator:
def time_limit(seconds):
def decorator(func):
func.info = threading.local()
def check_timeout():
if hasattr(func.info, 'end_time'):
if time.time() > func.info.end_time:
raise TimeoutException
func.check_timeout = check_timeout
@functools.wraps(func)
def wrapper(*args, **kwargs):
if not hasattr(func.info, 'end_time'):
func.info.end_time = time.time() + seconds
return func(*args, **kwargs)
return wrapper
return decorator
And usage:
@time_limit(60)
def algo():
do_something()
algo.check_timeout()
do_something_else()
It works fine on localhost, but it fails on a server running Apache with
mod_wsgi and Django.
1. First problem. Notice `hasattr`? I had to add it, because from time to time I got the error `'_thread.local' object has no attribute 'end_time'`.
2. Why do I need threading.local? As @Graham Dumpleton pointed out, we can't have a single global end time as a subsequent request will come in and overwrite it. So if the first request hadn't finished, its `end_time` would get reset to whatever was set for the later request. The problem is that this approach doesn't help. Suppose I have following session of runs.
First run - before timeout occurs - runs perfectly
Second run - before timeout occurs - runs perfectly
Third run - timeout occurs - raises `TimeoutException`
All subsequent calls raise `TimeoutException`, whether or not the timeout
actually occurred.
It seems like all subsequent calls look at end_time copy of third run, and
since there is Timeout, they also raise `Timeout`.
How can I localize end_time for each function call? Thank you.
EDIT: Thanks to @miki725 and @Antti Haapala I simplified my function and made
it a simple class:
class TimeLimit(object):
def __init__(self, timeout=60):
self.timeout = timeout
self.end = None
def check_timeout(self):
if self.end and time.time() > self.end:
raise TimeoutException
else:
self.start()
def start(self):
if not self.end:
self.end = time.time() + self.timeout
However, it is very inconvenient for me to pass the timer into the function,
because `algo` is actually a very complex recursive function. So, I did the
following:
timer = TimeLimit() # fails. I think because it is global
    def algo():
do_stuff()
timer.check_timeout()
do_another_stuff()
sub_algo() # check inside it to
algo()
...
Is there any way to make `timer` thread-safe? Is a pseudo-private `_timer` of
any help?
Answer: The problem is that you are adding `end_time` on the function object itself.
Since each thread will import all of the Python modules, effectively you will
only set `end_time` as many times as the number of threads you are running
(which in your case seems to be 2).
To solve this you can either always set `end_time` in each thread, however
that does not seem elegant to me since you are making a couple of assumptions
about what will be executed.
Another solution is to use classes. That will allow you to keep state in a class
instance, and hence this issue will not occur.
class ExecuteWithTimeout(object):
def __init__(self, to_execute, timeout):
self.to_execute = to_execute
self.timeout = timeout
self.end = None
def check_timeout(self):
if time.time() > self.end:
raise TimeoutException
def __call__(self, *args, **kwargs):
self.end = time.time() + self.timeout
result = self.to_execute(*args, **kwargs)
self.check_timeout()
return result
def usage():
stuff = ExecuteWithTimeout(do_something, 10)()
do_something_else(stuff)
Another approach is to use a context manager:
    from contextlib import contextmanager

    @contextmanager
    def timeout_limit(timeout):
        end = time.time() + timeout
        yield
        if time.time() > end:
            raise TimeoutException
def usage():
with timeout_limit(10):
do_stuff()
more_things()
or better yet you can combine the two!
class TimeLimit(object):
def __init__(self, timeout=60):
self.timeout = timeout
self.end = None
def __enter__(self):
self.end = time.time() + self.timeout
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.check_timeout()
def check_timeout(self):
if self.end and time.time() > self.end:
raise TimeoutException
def algo():
with TimeLimit(2) as timer:
time.sleep(1)
timer.check_timeout()
time.sleep(1)
timer.check_timeout()
* * *
update for your update:
timer = TimeLimit() # fails. I think because it is global
def algo():
...
Using the class as above does not help you since then the class will be a
thread-level instance which puts you back to the initial problem. The problem
is keeping thread-level state and so it does not matter whether you store it
in a class or as a function object attribute. Your functions should explicitly
pass state around to the inner functions if those functions need it. You
should not rely on using global state to do so:
def algo():
with TimeLimit(2) as timer:
do_stuff(timer)
timer.check_timeout()
do_more_stuff(timer)
timer.check_timeout()
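Putting the combined class to work, here is a minimal runnable sketch that passes the timer explicitly into a recursive helper (the depth and the 0.05-second limit are arbitrary demo values):

```python
import time

class TimeoutException(Exception):
    pass

class TimeLimit(object):
    def __init__(self, timeout=60):
        self.timeout = timeout
        self.end = None

    def __enter__(self):
        self.end = time.time() + self.timeout
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        return False  # let any TimeoutException propagate

    def check_timeout(self):
        if self.end and time.time() > self.end:
            raise TimeoutException

def algo(depth, timer):
    timer.check_timeout()  # cooperative check at every recursion level
    if depth:
        algo(depth - 1, timer)

try:
    with TimeLimit(0.05) as timer:
        while True:  # keep working until the limit trips
            algo(3, timer)
except TimeoutException:
    print("timed out as expected")
```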
|
Handle large output - Python
Question: I am using the Python `boto` library to access files in an `S3` bucket. I
have all the output sorted out and it works fine. However, I don't want to
display all the files on the terminal at once. Say a person has 800 files and
300 folders in their bucket: displaying all of them at once will be a mess, as
it won't be feasible to scroll through all of it. What would be the best way to
display such large output? I was thinking about dividing the listing into pages
but am a little stuck on the thought process. Any help / ideas will be greatly
appreciated.
How I iterate over the list
    for each in file_list:
        print("{0}, {1}, {2}".format(each.name, each.size, each.version))
EDIT:
I append the files into a list and print them out using a `for` loop to
iterate over them and print them using `.format`. A sample looks like this:
Files
file1
file2
file3
file4
file5
file6
file7
file8
file9
file10
file11
file12
file13
file14
file15
file16
Folders:
folder1
folder2
folder3
folder4
folder5
folder6
folder7
folder8
Answer: You can pipe the output to `less` (from inside Python using `subprocess`)
to get the `less` paging effect on your output.
Sample code:
import subprocess
long_array = []
for i in xrange(1000):
line = 'Line text number {0}'.format(i)
long_array.append(line)
output_string = '\n'.join(long_array) # can be anything you want as long as it is a string
proc = subprocess.Popen('less', stdin=subprocess.PIPE)
proc.communicate(output_string)
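Alternatively, the standard library ships a pager helper that picks a terminal pager such as `less`/`more` when one is available and falls back to plain printing otherwise, so you don't have to manage the subprocess yourself:

```python
import pydoc

lines = ['Line text number {0}'.format(i) for i in range(1000)]
# pages through a terminal pager when available, otherwise just prints
pydoc.pager('\n'.join(lines))
```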
|
how to set font for ipython matplotlib/pylab?
Question: I set my font for matplotlib to be a ttf through `~/.matplotlib/matplotlibrc`.
When I run:
`$ python myplot.py`
it uses the correct font. but if I do:
    $ ipython --pylab
    Python 2.7.8 |Anaconda 2.1.0 (x86_64)| (default, Aug 21 2014, 15:21:46)
    Type "copyright", "credits" or "license" for more information.
IPython 2.2.0 -- An enhanced Interactive Python.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
In [1]: %run myplot
it does not use the font. I'm using Anaconda Python on Mac OS X. My `rcParams`
are not read. Does ipython (with pylab option) use a different configuration?
How can I set my matplotlib default font?
to add to the confusion, if I use `plt.show()` in ipython, it by default uses
one font, but if I `plt.savefig()` from ipython, it uses another.
Answer: As far as I know, setting rcParams in the code itself before plotting is
the best approach. IPython does a lot of magic with its inlining, which tends to
be very weird. Chances are that IPython tramples your rcParams with its own
default values; if you set them in your script, the interpreter will read them
in again.
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Bitstream Vera Sans']
rcParams['font.serif'] = ['Bitstream Vera Sans']
rcParams["font.size"] = "40"
Try this at the top of the script and report back. Additionally, you can try to
run your script as `ipython script.py --pylab`. The issue could also be that
you're updating your script outside IPython, which could still be running byte-
compiled code from a previous version (although that's not very likely).
|
where Py_FileSystemDefaultEncoding is set in python source code
Question: I am curious about how the Python source code sets the value of
Py_FileSystemDefaultEncoding, and I have noticed something strange.
Since python
[doc](https://docs.python.org/2/library/sys.html#sys.getfilesystemencoding)
about sys.getfilesystemencoding() said that:
> On Unix, the encoding is the user’s preference according to the result of
> nl_langinfo(CODESET), or None if the nl_langinfo(CODESET) failed.
I use Python 2.7.6:
```
>>>import sys
>>>sys.getfilesystemencoding()
>>>'UTF-8'
>>>import locale
>>>locale.nl_langinfo(locale.CODESET)
>>>'ANSI_X3.4-1968'
```
Here is the question: why is the value of getfilesystemencoding() different
from the value of locale.nl_langinfo(), since the doc says that
getfilesystemencoding() is derived from locale.nl_langinfo()?
Here is the locale command output in my terminal:
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=zh_CN.UTF-8
LC_TIME=zh_CN.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=zh_CN.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=zh_CN.UTF-8
LC_NAME=zh_CN.UTF-8
LC_ADDRESS=zh_CN.UTF-8
LC_TELEPHONE=zh_CN.UTF-8
LC_MEASUREMENT=zh_CN.UTF-8
LC_IDENTIFICATION=zh_CN.UTF-8
LC_ALL=
Answer: Summary: `sys.getfilesystemencoding()` behaves as documented. The confusion is
due to the difference between `setlocale(LC_CTYPE, "")` (user's preference)
and the default C locale.
* * *
The script always starts with the default C locale:
>>> import locale
>>> locale.nl_langinfo(locale.CODESET)
'ANSI_X3.4-1968'
But `getfilesystemencoding()` uses user's locale:
>>> import sys
>>> sys.getfilesystemencoding()
'UTF-8'
>>> locale.setlocale(locale.LC_CTYPE, '')
'en_US.UTF-8'
>>> locale.nl_langinfo(locale.CODESET)
'UTF-8'
An empty string as a locale name [selects a locale based on the user's choice of
the appropriate environment
variables](http://www.gnu.org/software/libc/manual/html_node/Setting-the-
Locale.html).
$ LC_CTYPE=C python -c 'import sys; print(sys.getfilesystemencoding())'
ANSI_X3.4-1968
$ LC_CTYPE=C.UTF-8 python -c 'import sys; print(sys.getfilesystemencoding())'
UTF-8
* * *
> Where can I find the source code that sets Py_FileSystemDefaultEncoding?
There are two places in the source code for Python 2.7:
* [`bltinmodule.c`](https://hg.python.org/cpython/file/2.7/Python/bltinmodule.c#l17) specifies `Py_FileSystemDefaultEncoding` on Windows and OS X
* [`Py_InitializeEx()` sets it on other Unix systems](https://hg.python.org/cpython/file/2.7/Python/pythonrun.c#l283) -- notice that `setlocale(LC_CTYPE, "")` is called before `nl_langinfo(CODESET)`, and the locale is restored afterwards with `setlocale(LC_CTYPE, saved_locale)`.
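That same dance can be reproduced from Python itself; a small sketch (the `user_codeset` helper name is mine; Unix-only, since `nl_langinfo` is unavailable on Windows):

```python
import locale

def user_codeset():
    """Mimic Py_InitializeEx(): read nl_langinfo(CODESET) under the user's
    locale, then restore whatever LC_CTYPE was set to before."""
    saved = locale.setlocale(locale.LC_CTYPE)      # current locale, e.g. 'C'
    try:
        locale.setlocale(locale.LC_CTYPE, '')      # '' selects the user's preference
        return locale.nl_langinfo(locale.CODESET)
    finally:
        locale.setlocale(locale.LC_CTYPE, saved)   # restore, as CPython does

print(user_codeset())
```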
* * *
> Can you give me some advice on how to search for keywords in the Python
> source code?
To find these places:
* [clone Python 2.7 source code](https://docs.python.org/devguide/):
$ hg clone https://hg.python.org/cpython && cd cpython
$ hg update 2.7
* search for `Py_FileSystemDefaultEncoding *=` regex in your editor e.g.:
$ make TAGS # to create tags table
in Emacs: `M-x tags-search RET Py_FileSystemDefaultEncoding *= RET` and `M-,`
to continue the search.
|
Change color of figures with PyOpenGL
Question: I have to write a basic program in Python using the OpenGL library: when
somebody presses the 'r' key, the figure should change to red; when somebody
presses 'g' it should change to green; and when somebody presses 'b' it should
change to blue. I don't know why the color doesn't change, but I do know the
program detects when a key is pressed. This is my code...
import sys
from OpenGL.GL import *
from OpenGL.GLUT import *
from math import pi
from math import sin
from math import cos
def initGL(width, height):
glClearColor(0.529, 0.529, 0.529, 0.0)
glMatrixMode(GL_PROJECTION)
def dibujarCirculo():
glClear(GL_COLOR_BUFFER_BIT)
glColor3f(0.0, 0.0, 0.0)
glBegin(GL_POLYGON)
for i in range(400):
x = 0.25*sin(i) #Cordenadas polares x = r*sin(t) donde r = radio/2 (Circunferencia centrada en el origen)
y = 0.25*cos(i) #Cordenadas polares y = r*cos(t)
glVertex2f(x, y)
glEnd()
glFlush()
def keyPressed(*args):
key = args[0]
if key == "r":
glColor3f(1.0, 0.0, 0.0)
print "Presionaste",key
elif key == "g":
glColor3f(0.0, 1.0, 0.0)
print "Presionaste g"
elif key == "b":
glColor3f(0.0, 0.0, 1.0)
print "Presionaste b"
def main():
global window
glutInit(sys.argv)
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB)
glutInitWindowSize(500,500)
glutInitWindowPosition(200,200)
#creando la ventana
window = glutCreateWindow("Taller uno")
glutDisplayFunc(dibujarCirculo)
glutIdleFunc(dibujarCirculo)
glutKeyboardFunc(keyPressed)
initGL(500,500)
glutMainLoop()
if __name__ == "__main__":
main()
Answer: I suspect that because the 2nd line in `dibujarCirculo` resets glColor3f to
(0,0,0), you keep losing the change you made in `keyPressed`. Have you tried
initializing glColor3f somewhere other than `dibujarCirculo` ?
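A minimal sketch of one way to do that (names like `KEY_COLORS` are mine): keep the chosen color in shared state, update it in `keyPressed`, and apply it in the draw function instead of the hard-coded black. The GL calls are shown as comments so the sketch stays runnable without a GL context:

```python
current_color = [0.0, 0.0, 0.0]  # mutable module state; black until a key is pressed

KEY_COLORS = {
    "r": (1.0, 0.0, 0.0),
    "g": (0.0, 1.0, 0.0),
    "b": (0.0, 0.0, 1.0),
}

def keyPressed(*args):
    key = args[0]
    if key in KEY_COLORS:
        current_color[:] = KEY_COLORS[key]  # mutate in place, no global needed

def dibujarCirculo():
    # glClear(GL_COLOR_BUFFER_BIT)
    # glColor3f(*current_color)   # use the stored color, not glColor3f(0, 0, 0)
    # ... draw the circle vertices as before ...
    pass
```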
|
OpenShift repo not included in path
Question: I started a Django 1.7 OpenShift instance. When I have python print all of the
paths from `sys.path` I do not see `OPENSHIFT_REPO_DIR`
(`/var/lib/openshift/xxxxx/app-root/runtime/repo`).
When I use `https://github.com/jfmatth/openshift-django17` to create a project
I do see `OPENSHIFT_REPO_DIR` in the path.
Looking through the example app above I don't see anywhere that this is
specifically added to the path. What am I missing?
To clarify: I have to add the following to my wsgi.py:
import os
import sys
ON_PASS = 'OPENSHIFT_REPO_DIR' in os.environ
if ON_PASS:
x = os.path.abspath(os.path.join(os.environ['OPENSHIFT_REPO_DIR'], 'mysite'))
sys.path.insert(1, x)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
OPENSHIFT_REPO_DIR is not in my path as I would expect. When I used the
example git above, I did not have to add anything to the path.
Answer: A little while back I had issues with some of the pre-configured OpenShift
environment variables not appearing until I restarted my application.
For what it's worth, I started up a brand new Django gear, printed the
environment variables to the application log, and verified that I do see
OPENSHIFT_REPO_DIR (and all the other env vars) properly.
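To check this on a gear yourself, one sketch (the `openshift_env` helper name is mine) is to dump the `OPENSHIFT_*` variables from wsgi.py or a management command:

```python
import os

def openshift_env(environ=None):
    """Return only the OPENSHIFT_* variables visible to this process."""
    environ = os.environ if environ is None else environ
    return {k: v for k, v in environ.items() if k.startswith('OPENSHIFT_')}

for key, value in sorted(openshift_env().items()):
    print(key, '=', value)   # empty outside an OpenShift gear
```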
|
How do I import pcap or pyshark on python
Question: I want to capture Ethernet packets in Python. I googled
and found that I should use the pcap library or PyShark, but when I try to import
pcap it says it cannot find a module named pcap, so I tried using PyShark
instead, and it shows this in the Python shell:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
import pyshark
File "D:\Python27\lib\site-packages\pyshark-0.3.3-py2.7.egg\pyshark\__init__.py", line 1, in <module>
from pyshark.capture.live_capture import LiveCapture
File "D:\Python27\lib\site-packages\pyshark-0.3.3-py2.7.egg\pyshark\capture\live_capture.py", line 1, in <module>
from pyshark.capture.capture import Capture
File "D:\Python27\lib\site-packages\pyshark-0.3.3-py2.7.egg\pyshark\capture\capture.py", line 7, in <module>
import trollius as asyncio
ImportError: No module named trollius
What should I do about this problem? How can I import the library into Python?
My OS is Windows 8.1 and my Python version is 2.7.9.
Answer: The `pyshark` project requires that
[`trollius`](https://pypi.python.org/pypi/trollius) is installed for Python
versions before Python 3.4. You'll need to install that separately.
It _should_ have been installed when you installed the `pyshark` package
however. Make sure to always use a tool like `pip` to install your packages
and dependencies like these are taken care of automatically; the `pyshark`
project [declares the dependencies
correctly](https://github.com/KimiNewt/pyshark/blob/master/src/setup.py#L12):
install_requires=['lxml', 'py', 'trollius', 'logbook'],
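To see which link in that dependency chain is missing before the cryptic traceback appears, a quick hedged check (the `missing_deps` helper is mine, not part of pyshark):

```python
import importlib

def missing_deps(modules=('lxml', 'py', 'trollius', 'logbook')):
    """Return the subset of pyshark's declared dependencies that fail to import."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

print(missing_deps())  # e.g. ['trollius'] would reproduce the error above
```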
|
Pygame text not going away
Question: So I have been working on a game, and for some odd reason I thought
that down in the level1 function, if I had a while loop with leveltext = False,
I could write the text onto the screen, then wait 2 seconds, then set
leveltext = True and the text would go away. I guess it didn't... Code:
#!/usr/bin/python
import pygame
import time
pygame.init()
blue = (25,25,112)
black = (0,0,0)
red = (200,0,0)
bright_red = (255,0,0)
white = (255,255,255)
groundcolor = (139,69,19)
green = (80,80,80)
other_green = (110,110,110)
lives = 0
gameDisplay = pygame.display.set_mode((1336,768))
pygame.display.set_caption("TheAviGame")
direction = 'none'
clock = pygame.time.Clock()
img = pygame.image.load('guyy.bmp')
famousdude = pygame.image.load('kitten1.bmp')
kitten = pygame.image.load('macharacterbrah.bmp')
dorritosman = pygame.image.load('Doritos.bmp')
backround = pygame.image.load('backroundd.bmp')
playernormal = pygame.image.load('normalguy.bmp')
playerocket = pygame.image.load('rocketguyy.bmp')
rocketcat = pygame.image.load('jetpackcat.bmp')
mlglogo = pygame.image.load('mlg.bmp')
level1bg = pygame.image.load('level1background.bmp')
def mts(text, textcolor, x, y, fs):
font = pygame.font.Font(None,fs)
text = font.render(text, True, textcolor)
gameDisplay.blit(text, [x,y])
def button(x,y,w,h,ic,ac):
mouse = pygame.mouse.get_pos()
click = pygame.mouse.get_pressed()
if x+w > mouse[0] > x and y+h > mouse[1] > y:
pygame.draw.rect(gameDisplay, ac,(x,y,w,h))
if click[0] == 1:
gameloop()
else:
pygame.draw.rect(gameDisplay, ic,(x,y,w,h))
def game_intro():
intro = True
while intro:
for event in pygame.event.get():
#print(event)
if event.type == pygame.QUIT:
pygame.quit()
quit()
gameDisplay.fill(black)
button(450,450,250,70,green,other_green)
mts("Play", white, 540, 470, 50)
mts("THE AVI GAME", red, 380, 100, 100)
gameDisplay.blit(famousdude, (100,100))
gameDisplay.blit(kitten, (700,300))
gameDisplay.blit(dorritosman, (1000,400))
pygame.display.update()
clock.tick(15)
def gameloop():
imgx = 1000
imgy = 100
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_RIGHT:
imgx += 10
gameDisplay.blit(backround, (30,30))
gameDisplay.blit(rocketcat, (imgx,imgy))
pygame.display.update()
clock.tick(15)
def level1():
imgx = 500
imgy = 500
leveltext = False
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
while not leveltext:
gameDisplay.blit(level1bg, (0,0))
mts("LEVEL 1:", red, 450, 220, 60)
mts('Controls: W or Up to move up.', white, 450, 300, 40)
mts('Controls: A or Left to move left.', white, 450, 340, 40)
mts('Controls: S or Down to move down.', white, 450, 390, 40)
mts('Controls: D or Right to move Right.', white, 450, 430, 40)
time.sleep(2)
leveltext = True
pygame.display.update()
level1()
Answer: From my understanding, when you blit text onto a surface it permanently
becomes part of that surface. If you want the font to disappear, you could try
blitting it onto a copy of that surface and then blit the altered surface to
the display.
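A sketch of that idea applied to `level1` (pygame assumed available; the `should_show_text` helper is mine): instead of `time.sleep(2)`, redraw the background every frame and blit the text only while its two-second window is open, so the next frame's background blit erases it:

```python
def should_show_text(now_ms, start_ms, duration_ms=2000):
    """True while the level text should still be on screen."""
    return now_ms - start_ms < duration_ms

# Inside level1(), the loop would look roughly like:
#
# start = pygame.time.get_ticks()
# while True:
#     for event in pygame.event.get():
#         ...                                      # handle QUIT etc.
#     gameDisplay.blit(level1bg, (0, 0))           # fresh background each frame
#     if should_show_text(pygame.time.get_ticks(), start):
#         mts("LEVEL 1:", red, 450, 220, 60)
#         ...                                      # the control hints
#     pygame.display.update()
#     clock.tick(15)
```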
|
axes.fmt_xdata in matplotlib not being called
Question: I'm trying to format my X axis dates in a Django application where I'm
returning the graph in memory in a response object. I followed the same
example that I already use in an ipython notebook, and do this:
def pretty_date(date):
log.info("HELLO!")
return date.strftime("%c")
def image_calls(request):
log.info("in image_loadavg")
datetimes = []
calls = []
for m in TugMetrics.objects.all():
datetimes.append(m.stamp)
calls.append(m.active_calls)
plt.plot(datetimes, calls, 'b-o')
plt.grid(True)
plt.title("Active calls")
plt.ylabel("Calls")
plt.xlabel("Time")
fig = plt.gcf()
fig.set_size_inches(8, 6)
fig.autofmt_xdate()
axes = plt.gca()
#axes.fmt_xdata = mdates.DateFormatter("%w %H:%M:%S")
axes.fmt_xdata = pretty_date
buf = io.BytesIO()
fig.savefig(buf, format='png', dpi=100)
buf.seek(0)
return HttpResponse(buf, content_type='image/png')
The graph is returned but I seem to have no control over how the X axis looks,
and my HELLO! log is never called. Note that m.stamp is a datetime object.
This works fine in an ipython notebook; both are running matplotlib 1.4.2.
Help appreciated.
Answer: `axes.fmt_xdata` controls the coordinates that are interactively displayed in
the lower right-hand corner of the toolbar when you mouse over the plot. It's
never being called because you're not making an interactive plot using a gui
backend.
What you want is `ax.xaxis.set_major_formatter(formatter)`. Also, if you'd
just like a default date formatter, you can use `ax.xaxis_date()`.
As a quick example based on your code (with random data):
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
time = mdates.drange(dt.datetime(2014, 12, 20), dt.datetime(2015, 1, 2),
dt.timedelta(hours=2))
y = np.random.normal(0, 1, time.size).cumsum()
y -= y.min()
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(time, y, 'bo-')
ax.set(title='Active Calls', ylabel='Calls', xlabel='Time')
ax.grid()
ax.xaxis.set_major_formatter(mdates.DateFormatter("%w %H:%M:%S"))
fig.autofmt_xdate() # In this case, it just rotates the tick labels
plt.show()

And if you'd prefer the default date formatter:
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
time = mdates.drange(dt.datetime(2014, 12, 20), dt.datetime(2015, 1, 2),
dt.timedelta(hours=2))
y = np.random.normal(0, 1, time.size).cumsum()
y -= y.min()
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(time, y, 'bo-')
ax.set(title='Active Calls', ylabel='Calls', xlabel='Time')
ax.grid()
ax.xaxis_date() # Default date formatter
fig.autofmt_xdate()
plt.show()

|
lxml.etree.XML ValueError for Unicode string
Question: I'm transforming [an
xml](https://gist.github.com/guinslym/5ce47460a31fe4c4046b#file-original_xml-
xml) document with
[xslt](https://gist.github.com/guinslym/5ce47460a31fe4c4046b#file-test_xslt-
xslt). While doing it with Python 3 I get the following error, but I don't
get any errors with Python 2:
-> % python3 cstm/artefact.py
Traceback (most recent call last):
File "cstm/artefact.py", line 98, in <module>
simplify_this_dataset('fisheries-service-des-peches.xml')
File "cstm/artefact.py", line 85, in simplify_this_dataset
xslt_root = etree.XML(xslt_content)
File "lxml.etree.pyx", line 3012, in lxml.etree.XML (src/lxml/lxml.etree.c:67861)
File "parser.pxi", line 1780, in lxml.etree._parseMemoryDocument (src/lxml/lxml.etree.c:102420)
ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
#!/usr/bin/env python3
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
# -*- coding: utf-8 -*-
import os
from lxml import etree
def simplify_this_dataset(dataset):
"""Create A simplify version of an xml file
it will remove all the attributes and assign them as Elements instead
"""
module_path = os.path.dirname(os.path.abspath(__file__))
data = open(module_path+'/data/ex-fire.xslt')
xslt_content = data.read()
xslt_root = etree.XML(xslt_content)
dom = etree.parse(module_path+'/../CanSTM_dataset/'+dataset)
transform = etree.XSLT(xslt_root)
result = transform(dom)
f = open(module_path+ '/../CanSTM_dataset/otra.xml', 'w')
f.write(str(result))
f.close()
Answer: You can also encode the string to bytes before passing it to etree.XML;
with bytes input, the parser honors the encoding declaration itself:
xslt_content = data.read()
xslt_root = etree.XML(xslt_content.encode('utf-8'))
Note that in Python 3 `data.read()` returns `str`, which has no `.decode()`
method, so a `decode('utf-8').encode('ascii')` round trip fails there; encoding
to UTF-8 bytes, or opening the stylesheet in binary mode with `open(path, 'rb')`,
is enough.
|
How to save a changed item to an external file? (Python 3)
Question: I'm fairly new to python, but I'm making a script and I want one of the
functions to update a variable from another file. It works, but when I exit
the script and reload it, the changes aren't there anymore. For example (this
isn't my script):
#File: changeFile.txt
number = 0
#File: changerFile.py
def changeNumber():
number += 1
If I retrieve _number_ during that session, it will return 1, but if I exit
out and go back in again and retrieve _number_ without calling changeNumber,
it returns 0.
How can I get the script to actually save the number edited in changeNumber to
changeFile.txt? As I said, I'm fairly new to python, but I've looked just
about everywhere on the Internet and couldn't really find an answer that
worked.
EDIT: Sorry, I forgot to include that in the actual script, there are other
values.
So I want to change number and have it save without deleting the other 10
values stored in that file.
Answer: Assuming, as you show, that `changeFile.txt` has no other content whatever,
then just change the function to:
def changeNumber():
global number # will not possibly work w/o this, the way you posted!
number += 1
with open('changeFile.txt', 'w') as f:
f.write('number = {}\n'.format(number))
ADDED: the OP edited the Q to mention (originally omitted!-) the crucial fact
that `changefile.txt` has other lines that need to be preserved as well as the
one that needs to be changed.
That, of course, changes everything -- but, Python can cope!-)
Just add `import fileinput` at the start of this module, and change the last
two lines of the above snippet (starting with `with`) to:
for line in fileinput.input(['changefile.txt'], inplace=True):
    if line.startswith('number '):
        line = 'number = {}\n'.format(number)
    print line,
This is the Python 2 solution (the OP didn't bother to tell us if using Py2 or
Py3, a crucial bit of info -- hey, who cares about making it **easy** rather
than **very hard** for willing volunteers to _help_ you, right?!-). If Python
3, change the last statement from `print line,` to
print(line, end='')
to get exactly the same desired effect.
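Putting both halves together, a self-contained Python 3 sketch (assuming, as in the edited question, that the file holds one `name = value` assignment per line and only the `number` line should change):

```python
def change_number(path='changeFile.txt'):
    """Increment the 'number' line in place, preserving every other line."""
    with open(path) as f:
        lines = f.readlines()
    with open(path, 'w') as f:
        for line in lines:
            if line.startswith('number'):
                current = int(line.split('=')[1])
                line = 'number = {}\n'.format(current + 1)
            f.write(line)
```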
|
Fail Load Data Fixtures Django - integer out of range
Question: I'm developing an app using Django and PostgreSQL.
Basically, I created a new model in my app.
from django.db import models
from manager.models.state import State
from django.contrib.auth.models import User
class Address (models.Model):
address_id = models.AutoField(primary_key=True)
address_name = models.CharField(max_length=500)
address_city = models.CharField(max_length=250)
address_phone = models.IntegerField()
address_postcode = models.CharField(max_length=10, default = 0)
state = models.ForeignKey(State, null=True, blank=True, default = None)
user = models.ForeignKey(User, null=True, blank=True, default = None)
def __unicode__(self):
return self.address_id
@classmethod
def get_list(cls):
return list(cls.objects.values())
After that I ran the makemigrations and migrate commands, and all of the models
were created in the database normally.
I successfully loaded data from other JSON fixture files, but when I tried to
run loaddata address.json for the Address table, I got this error.
The Error Message:
django.db.utils.DataError: Problem installing fixture '/Users/eeldwin/Documents/Django/fbt/manager/fixtures/address.json': Could not load manager.Address(pk=1): integer out of range
The Traceback:
Traceback (most recent call last):
File "manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django /core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/management/commands/loaddata.py", line 61, in handle
self.loaddata(fixture_labels)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/management/commands/loaddata.py", line 91, in loaddata
self.load_label(fixture_label)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/management/commands/loaddata.py", line 148, in load_label
obj.save(using=self.using)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/core/serializers/base.py", line 173, in save
models.Model.save_base(self.object, using=using, raw=True)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/models/base.py", line 617, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/models/base.py", line 679, in _save_table
forced_update)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/models/base.py", line 723, in _do_update
return filtered._update(values) > 0
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/models/query.py", line 600, in _update
return query.get_compiler(self.db).execute_sql(CURSOR)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 1004, in execute_sql
cursor = super(SQLUpdateCompiler, self).execute_sql(result_type)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
cursor.execute(sql, params)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/backends/utils.py", line 81, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Users/eeldwin/.virtualenvs/fbt/lib/python2.7/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
django.db.utils.DataError: Problem installing fixture '/Users/eeldwin/Documents/Django/fbt/manager/fixtures/address.json': Could not load manager.Address(pk=1): integer out of range
Here's my fixtures
[{
"model": "manager.address",
"pk": 1,
"fields": {
"address_id": "1",
"address_name": "Jalan Muara Mas Timur 242",
"address_city": "Semarang",
"address_phone": "087832270893",
"address_postcode": "3122",
"state": "1"
}}]
However, when I insert data into the database manually, it works perfectly. Any
ideas what's happening here? Thanks.
Answer: You store `address_phone` as an `IntegerField`. Its max value is 2147483647
(<https://docs.djangoproject.com/en/1.7/ref/models/fields/#integerfield>).
But in your fixtures you try to load `087832270893` as `address_phone`, which is
why you receive the `integer out of range` error.
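A quick check of the arithmetic (the helper is illustrative only; the usual fix is to declare `address_phone` as a `models.CharField`, since phone numbers are labels, and an integer column would drop the leading zero of `087832270893` anyway):

```python
# Django's IntegerField maps to a 32-bit integer column in PostgreSQL.
MAX_INTEGER_FIELD = 2 ** 31 - 1   # 2147483647

def fits_integer_field(value):
    return -2 ** 31 <= value <= MAX_INTEGER_FIELD

print(fits_integer_field(87832270893))   # the fixture's phone number -> False
print(fits_integer_field(2147483647))    # -> True
```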
|
Making my own web crawler in python which shows main idea of the page rank
Question: I'm trying to make a web crawler which shows the basic idea of PageRank.
The code seems fine to me, but it gives me back errors, e.g.:
`Traceback (most recent call last):
File "C:/Users/Janis/Desktop/WebCrawler/Web_crawler.py", line 89, in <module>
webpages()
File "C:/Users/Janis/Desktop/WebCrawler/Web_crawler.py", line 17, in webpages
get_single_item_data(href)
File "C:/Users/Janis/Desktop/WebCrawler/Web_crawler.py", line 23, in get_single_item_data
source_code = requests.get(item_url)
File "C:\Python34\lib\site-packages\requests\api.py", line 65, in get
return request('get', url, **kwargs)
File "C:\Python34\lib\site-packages\requests\api.py", line 49, in request
response = session.request(method=method, url=url, **kwargs)
File "C:\Python34\lib\site-packages\requests\sessions.py", line 447, in request
prep = self.prepare_request(req)
File "C:\Python34\lib\site-packages\requests\sessions.py", line 378, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "C:\Python34\lib\site-packages\requests\models.py", line 303, in prepare
self.prepare_url(url, params)
File "C:\Python34\lib\site-packages\requests\models.py", line 360, in prepare_url
"Perhaps you meant http://{0}?".format(url))
requests.exceptions.MissingSchema: Invalid URL '//www.hm.com/lv/logout': No schema supplied. Perhaps you meant http:////www.hm.com/lv/logout?`
and the last line of output which Python gives me back after I run it is:
//www.hm.com/lv/logout
Maybe the problem is with the two `//`, but I'm not sure; anyway, when I try to
crawl other web pages, e.g. _http://en.wikipedia.org/wiki/Wiki_, it gives me
back `None` and the same errors.
import requests
from bs4 import BeautifulSoup
from collections import defaultdict
from operator import itemgetter
all_links = defaultdict(int)
def webpages():
url = 'http://www.hm.com/lv/'
source_code = requests.get(url)
text = source_code.text
soup = BeautifulSoup(text)
for link in soup.findAll ('a'):
href = link.get('href')
print(href)
get_single_item_data(href)
return all_links
def get_single_item_data(item_url):
#if not item_url.startswith('http'):
#item_url = 'http' + item_url
source_code = requests.get(item_url)
text = source_code.text
soup = BeautifulSoup(text)
for link in soup.findAll('a'):
href = link.get('href')
if href and href.startswith('http://www.'):
if href:
all_links[href] += 1
print(href)
def sort_algorithm(list):
for index in range(1,len(list)):
value= list[index]
i = index - 1
while i>=0:
if value < list[i]:
list[i+1] = list[i]
list[i] = value
i=i -1
else:
break
vieni = ["", "viens", "divi", "tris", "cetri", "pieci",
"sesi", "septini", "astoni", "devini"]
padsmiti = ["", "vienpadsmit", "divpadsmit", "trispadsmit", "cetrpadsmit",
"piecpadsmit", 'sespadsmit', "septinpadsmit", "astonpadsmit", "devinpadsmit"]
desmiti = ["", "desmit", "divdesmit", "trisdesmit", "cetrdesmit",
"piecdesmit", "sesdesmit", "septindesmit", "astondesmit", "devindesmit"]
def num_to_words(n):
words = []
if n == 0:
words.append("zero")
else:
num_str = "{}".format(n)
groups = (len(num_str) + 2) // 3
num_str = num_str.zfill(groups * 3)
for i in range(0, groups * 3, 3):
h = int(num_str[i])
t = int(num_str[i + 1])
u = int(num_str[i + 2])
print()
print(vieni[i])
g = groups - (i // 3 + 1)
if h >= 1:
words.append(vieni[h])
words.append("hundred")
if int(num_str) % 100:
words.append("and")
if t > 1:
words.append(desmiti[t])
if u >= 1:
words.append(vieni[u])
elif t == 1:
if u >= 1:
words.append(padsmiti[u])
else:
words.append(desmiti[t])
else:
if u >= 1:
words.append(vieni[u])
return " ".join(words)
webpages()
for k, v in sorted(webpages().items(),key=itemgetter(1),reverse=True):
print(k, num_to_words(v))
Answer: The links gathered by the loop in the `webpages` function may start with
two slashes. Such a scheme-relative link means "use the current page's scheme".
For example, on <https://en.wikipedia.org/wiki/Wiki> the link
"//en.wikipedia.org/login" resolves to "<https://en.wikipedia.org/login>",
while on <http://en.wikipedia.org/wiki/Wiki> it resolves to
<http://en.wikipedia.org/login>.
A better way to open a url from an HTML "a" tag is the urlparse.urljoin
function. It joins the target and the current url, regardless of whether the
href is an absolute or relative path.
Hope this can help you.
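A short sketch of that fix for the asker's Python 3.4 setup (where the function lives in `urllib.parse` rather than Python 2's `urlparse`):

```python
from urllib.parse import urljoin

base = 'http://www.hm.com/lv/'
# Scheme-relative, site-relative and absolute hrefs all resolve sensibly:
print(urljoin(base, '//www.hm.com/lv/logout'))   # http://www.hm.com/lv/logout
print(urljoin(base, '/lv/shop'))                 # http://www.hm.com/lv/shop
print(urljoin(base, 'http://example.com/x'))     # http://example.com/x
```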
|
Problems with Python Wand and paths to JXR imagery when attempting to convert JXR imagery to JPG format?
Question: I need to be able to convert JPEG-XR images to JPG format, and have gotten
this working through ImageMagick itself. However, I need to be able to do this
from a python application, and have been looking at using Wand. Wand does not
seem to properly use paths to JXR imagery.
with open(os.path.join(args.save_location, img_name[0], result[0]+".jxr"), "wb") as output_file:
output_file.write(result[1])
with Image(filename=os.path.join(args.save_location, img_name[0], result[0]+".jxr")) as original:
with original.convert('jpeg') as converted:
print(converted.format)
pass
The first part of this - creating output_file and writing result[1] (blob of
JXR imagery from a SQLite database) - works fine. However, when I attempt to
then open that newly-saved file as an image using Python and Wand, I get an
error that ultimately suggests Wand is not looking in the correct location for
the image:
Extracting panorama 00000
FAILED: -102=pWS->Read(pWS, szSig, sizeof(szSig))
JXRGlueJxr.c:1806
FAILED: -102=ReadContainer(pID)
JXRGlueJxr.c:1846
FAILED: -102=pDecoder->Initialize(pDecoder, pStream)
JXRGlue.c:426
FAILED: -102=pCodecFactory->CreateDecoderFromFile(args.szInputFile, &pDecoder)
e:\coding\python\sqlite panoramic image extraction tool\jxrlib\jxrencoderdecoder\jxrdecapp.c:477
JPEG XR Decoder Utility
Copyright 2013 Microsoft Corporation - All Rights Reserved
... [it outputs its help page in case of errors; snipped]
The system cannot find the file specified.
Traceback (most recent call last):
File "E:\Coding\Python\SQLite Panoramic Image Extraction Tool\SQLitePanoramicImageExtractor\trunk\PanoramicImageExtractor.py", line 88, in <module>
with Image(filename=os.path.join(args.save_location, img_name[0], result[0]+".jxr")) as original:
File "C:\Python34\lib\site-packages\wand\image.py", line 1991, in __init__
self.read(filename=filename, resolution=resolution)
File "C:\Python34\lib\site-packages\wand\image.py", line 2048, in read
self.raise_exception()
File "C:\Python34\lib\site-packages\wand\resource.py", line 222, in raise_exception
raise e
wand.exceptions.BlobError: unable to open image `C:/Users/RPALIW~1/AppData/Local/Temp/magick-14988CnJoJDwMRL4t': No such file or directory @ error/blob.c/OpenBlob/2674
As you can see at the very end, it seems to have attempted to run off to open
a temporary file
'C:/Users/RPALIW~1/AppData/Local/Temp/magick-14988CnJoJDwMRL4'. The filename
used at this point should be exactly the same as the filename used to save the
imagery as a file just a few lines above, but Wand has substituted something
else? This looks similar to the last issue I had with this in ImageMagick,
which was fixed over the weekend (detailed here:
[http://www.imagemagick.org/discourse-
server/viewtopic.php?f=1&t=27027&p=119702#p119702](http://www.imagemagick.org/discourse-
server/viewtopic.php?f=1&t=27027&p=119702#p119702)).
Has anyone successfully gotten Wand to open JXR imagery as an Image in Python,
and convert to another format? Am I doing something wrong here, or is the
fault with ImageMagick or Wand?
Answer: Something very similar is happening to me. I'm getting an error:
wand.exceptions.BlobError: unable to open image `/var/tmp/magick-454874W--g1RQEK3H.ppm': No such file or directory @ error/blob.c/OpenBlob/2701
The path given is not the file path of the image I am trying to open.
From the docs:
A binary large object could not be allocated, read, or written.
And I am trying to open a large file (an 18 MB .cr file). Could the file size
be the problem?
For me:
from wand.image import Image as WImage
with open(file_name, 'r+') as f:
with WImage(file = f) as img:
print 'Opened large image'
Or:
with open(file_name, 'r+') as f:
image_binary = f.read()
with WImage(blob = image_binary) as img:
print 'Opened Large Image'
Did the trick
~Victor
|
Determine ROT encoding
Question: I want to determine which type of ROT encoding is used and, based on
that, do the correct decode.
Also, I have found the following code which will indeed decode rot13 "sbbone"
to "foobar" correctly:
import codecs
codecs.decode('sbbone', 'rot_13')
The thing is I'd like to run this python file against an existing file which
has rot13 encoding. (for example rot13.py encoded.txt).
Thank you!
Answer: To answer the second part of your first question, decoding something in
`ROT-x`, you can use the following code:
def encode(s, ROT_number=13):
"""Encodes a string (s) using ROT (ROT_number) encoding."""
ROT_number %= 26 # To avoid IndexErrors
alpha = "abcdefghijklmnopqrstuvwxyz" * 2
alpha += alpha.upper()
def get_i():
for i in range(26):
yield i # indexes of the lowercase letters
for i in range(52, 78):
yield i # indexes of the uppercase letters
ROT = {alpha[i]: alpha[i + ROT_number] for i in get_i()}
return "".join(ROT.get(i, i) for i in s)
def decode(s, ROT_number=13):
"""Decodes a string (s) using ROT (ROT_number) encoding."""
return encrypt(s, abs(ROT_number % 26 - 26))
To answer the first part of your first question, finding the ROT encoding of an
arbitrarily encoded string, you probably want to brute-force: apply all ROT
encodings and check which one makes the most sense. A quick(-ish) way to do
this is to get a newline-delimited file (e.g.
`cat\ndog\nmouse\nsheep\nsay\nsaid\nquick\n...` where `\n` is a newline)
containing the most common words in the English language, and then check which
encoding produces the most words.
with open("words.txt") as f:
    words = frozenset(f.read().lower().split("\n"))
    # frozenset for speed

def get_most_likely_encoding(s, delimiter=" "):
    alpha = "abcdefghijklmnopqrstuvwxyz" + delimiter
    for punctuation in "\n\t,:; .()":
        s = s.replace(punctuation, delimiter)  # str.replace returns a new string
    s = "".join(c for c in s if c.lower() in alpha)
    word_count = [sum(w.lower() in words for w in encode(s, enc).split(delimiter))
                  for enc in range(26)]
    return word_count.index(max(word_count))
A file on Unix machines that you could use is
[`/usr/dict/words`](http://en.wikipedia.org/wiki/Words_\(Unix\)), which can
also be found [here](https://raw.githubusercontent.com/eneko/data-
repository/master/data/words.txt)
|
Attempting to read data from multiple files to multiple arrays
Question: I would like to be able to read data from multiple files in one folder to
multiple arrays and then perform analysis on these arrays such as plot graphs
etc. I am currently having trouble reading the data from these files into
multiple arrays.
My solution process so far is as follows;
import numpy as np
import os
#Create an empty list to read filenames to
filenames = []
for file in os.listdir('C\\folderwherefileslive'):
filenames.append(file)
This works so far, what I'd like to do next is to iterate over the filenames
in the list using numpy.genfromtxt.
I'm trying to use os.path.join to put the individual list entry at the end of
the path specified in listdir earlier. This is some example code:
for i in filenames:
file_name = os.path.join('C:\\entryfromabove','i')
'data_'+[i] = np.genfromtxt('file_name',skiprows=2,delimiter=',')
This piece of code returns "Invalid syntax".
To sum up the solution process I'm trying to use so far:
1. Use os.listdir to get all the filenames in the folder I'm looking at.
2. Use os.path.join to direct np.genfromtxt to open and read data from each file to a numpy array named after that file.
I'm not experienced with python by any means - any tips or questions on what
I'm trying to achieve are welcome.
Answer: For this kind of task you'd want to use a dictionary.
data = {}
for file in os.listdir('C:\\folderwherefileslive'):
    path = os.path.join('C:\\folderwherefileslive', file)
    data[file] = np.genfromtxt(path, skiprows=2, delimiter=',')

# now you could for example access
data['foo.txt']
* * *
Notice, that everything you put within single or double quotes ends up being a
character string, so `'file_name'` will just be some characters, whereas using
`file_name` would use the value stored in variable by that name.
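For example:

```python
file_name = "data.csv"
print('file_name')  # prints the literal characters: file_name
print(file_name)    # prints the value stored in the variable: data.csv
```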
|
Python: Printing letters one by one in Command Prompt
Question: Recently I've picked up Python 3.4 programming and I read this other question
regarding how to [print slowly in python, (Simulate
typing)](http://stackoverflow.com/questions/4099422/print-slowly-in-python-
simulate-typing), and used a similar def in my code.
import time
def type(str):
    for letter in str:
        print(letter, end='')
        time.sleep(0.02)
    print("\n")
type("This sentence is typed.")
It all worked fine in IDLE, but once I tried running it using Windows Command
prompt, CMD waits the time it took IDLE to type it (half a second in this
case) and then spits it out as if it had been printed.
I figured the time.sleep statement was broken one way or another, although
print("One")
time.sleep(2)
print("Two")
worked just fine.
Is there any way of printing one letter at a time with a short interval
between the letters, or is it simply not possible?
Thanks in advance!
Answer: Try forcing a flush on `stdout` after each character. The problem is that
`stdout` is normally buffered until either a newline, EOF or a certain number
of bytes have been output.
import time
import sys
def type(str):
    for letter in str:
        print(letter, end='')
        sys.stdout.flush()
        time.sleep(0.02)
    print("\n")
type("This sentence is typed.")
Alternatively, in Python 3, as @PeterWood mentioned, you can change your print so
that it auto-flushes:
print(letter, end='', flush=True)
|
Python and RabbitMQ - Best way to listen to consume events from multiple channels?
Question: I have two, separate RabbitMQ instances. I'm trying to find the best way to
listen to events from both.
For example, I can consume events on one with the following:
credentials = pika.PlainCredentials(user, pass)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="host1", credentials=credentials))
channel = connection.channel()
result = channel.queue_declare(Exclusive=True)
self.channel.queue_bind(exchange="my-exchange", result.method.queue, routing_key='*.*.*.*.*')
channel.basic_consume(callback_func, result.method.queue, no_ack=True)
self.channel.start_consuming()
I have a second host, "host2", that I'd like to listen to as well. I thought
about creating two separate threads to do this, but from what I've read, pika
isn't thread safe. Is there a better way? Or would creating two separate
threads, each listening to a different Rabbit instance (host1, and host2) be
sufficient?
Answer: The answer to "what is the best way" depends heavily on your usage pattern of
queues and what you mean by "best". Since I can't comment on questions yet,
I'll just try to suggest some possible solutions.
In each example I'm going to assume exchange is already declared.
# Threads
You can consume messages from two queues on separate hosts in single process
using [`pika`](http://pika.readthedocs.org/).
You are right - as [its own FAQ
states](http://pika.readthedocs.org/en/latest/faq.html), `pika` is not thread
safe, but it can be used in a multi-threaded manner by creating a connection to
a RabbitMQ host per thread. Making this example run in threads using
[`threading`](https://docs.python.org/2/library/threading.html) module looks
as follows:
import pika
import threading
class ConsumerThread(threading.Thread):
    def __init__(self, host, *args, **kwargs):
        super(ConsumerThread, self).__init__(*args, **kwargs)
        self._host = host

    # Not necessarily a method.
    def callback_func(self, channel, method, properties, body):
        print("{} received '{}'".format(self.name, body))

    def run(self):
        credentials = pika.PlainCredentials("guest", "guest")
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host=self._host,
                                      credentials=credentials))
        channel = connection.channel()
        result = channel.queue_declare(exclusive=True)
        channel.queue_bind(result.method.queue,
                           exchange="my-exchange",
                           routing_key="*.*.*.*.*")
        channel.basic_consume(self.callback_func,
                              result.method.queue,
                              no_ack=True)
        channel.start_consuming()

if __name__ == "__main__":
    threads = [ConsumerThread("host1"), ConsumerThread("host2")]
    for thread in threads:
        thread.start()
I've declared `callback_func` as a method purely to use `ConsumerThread.name`
while printing message body. It might as well be a function outside the
`ConsumerThread` class.
# Processes
Alternatively, you can always just run one process with consumer code per
queue you want to consume events from.
import pika
import sys
def callback_func(channel, method, properties, body):
    print(body)

if __name__ == "__main__":
    credentials = pika.PlainCredentials("guest", "guest")
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host=sys.argv[1],
                                  credentials=credentials))
    channel = connection.channel()
    result = channel.queue_declare(exclusive=True)
    channel.queue_bind(result.method.queue,
                       exchange="my-exchange",
                       routing_key="*.*.*.*.*")
    channel.basic_consume(callback_func, result.method.queue, no_ack=True)
    channel.start_consuming()
and then run by:
$ python single_consume.py host1
$ python single_consume.py host2 # e.g. on another console
If the work you're doing on messages from queues is [CPU-
heavy](http://en.wikipedia.org/wiki/CPU-bound) and as long as number of cores
in your CPU >= number of consumers, it is generally better to use this
approach - unless your queues are empty most of the time and consumers won't
utilize this CPU time*.
# Async
Another alternative is to involve an asynchronous framework (for example
[`Twisted`](https://twistedmatrix.com/)) and run the whole thing in a single
thread.
You can no longer use `BlockingConnection` in asynchronous code; fortunately,
`pika` has adapter for `Twisted`:
from pika.adapters.twisted_connection import TwistedProtocolConnection
from pika.connection import ConnectionParameters
from twisted.internet import protocol, reactor, task
from twisted.python import log
class Consumer(object):
    def on_connected(self, connection):
        d = connection.channel()
        d.addCallback(self.got_channel)
        d.addCallback(self.queue_declared)
        d.addCallback(self.queue_bound)
        d.addCallback(self.handle_deliveries)
        d.addErrback(log.err)

    def got_channel(self, channel):
        self.channel = channel
        return self.channel.queue_declare(exclusive=True)

    def queue_declared(self, queue):
        self._queue_name = queue.method.queue
        self.channel.queue_bind(queue=self._queue_name,
                                exchange="my-exchange",
                                routing_key="*.*.*.*.*")

    def queue_bound(self, ignored):
        return self.channel.basic_consume(queue=self._queue_name)

    def handle_deliveries(self, queue_and_consumer_tag):
        queue, consumer_tag = queue_and_consumer_tag
        self.looping_call = task.LoopingCall(self.consume_from_queue, queue)
        return self.looping_call.start(0)

    def consume_from_queue(self, queue):
        d = queue.get()
        return d.addCallback(lambda result: self.handle_payload(*result))

    def handle_payload(self, channel, method, properties, body):
        print(body)

if __name__ == "__main__":
    consumer1 = Consumer()
    consumer2 = Consumer()

    parameters = ConnectionParameters()
    cc = protocol.ClientCreator(reactor,
                                TwistedProtocolConnection,
                                parameters)
    d1 = cc.connectTCP("host1", 5672)
    d1.addCallback(lambda protocol: protocol.ready)
    d1.addCallback(consumer1.on_connected)
    d1.addErrback(log.err)

    d2 = cc.connectTCP("host2", 5672)
    d2.addCallback(lambda protocol: protocol.ready)
    d2.addCallback(consumer2.on_connected)
    d2.addErrback(log.err)

    reactor.run()
This approach would be even better the more queues you consume from and
the less CPU-bound the work performed by the consumers is*.
# Python 3
Since you've mentioned `pika`, I've restricted myself to Python 2.x-based
solutions, because `pika` is not yet ported to Python 3.
But in case you would want to move to >=3.3, one possible option is to use
[`asyncio`](https://docs.python.org/3/library/asyncio.html) with one of the AMQP
libraries (AMQP being the protocol you use to speak with RabbitMQ), e.g.
[`asynqp`](https://github.com/benjamin-hodgson/asynqp) or
[`aioamqp`](https://github.com/polyconseil/aioamqp).
* - please note that these are very shallow tips - in most cases choice is not that obvious; what will be the best for you depends on queues "saturation" (messages/time), what work do you do upon receiving these messages, what environment you run your consumers in etc.; there's no way to be sure other than to benchmark all implementations
|
How to kill subprocess python in windows
Question: How would I go about killing a process on Windows?
I am starting the process with
self.p = Process(target=self.GameInitialize, args=(testProcess,))
self.p.start()
I have tried
self.p.kill()
self.p.terminate()
os.kill(self.p.pid, -1)
os.killpg(self.p.pid, signal.SIGTERM) # Send the signal to all the process groups
Errors
Process Object has no Attribute kill
Process Object has no Attribute terminate
Access Denied
I cannot use `.join.`
Answer: On Windows, `os.killpg` will not work, because it sends a termination signal
to a process group; that is not how you kill a process on Windows. Instead, you
have to use the Win32 API's `TerminateProcess` to kill a process.
So, you can kill a process by the following on windows:
import os
import signal

os.kill(self.p.pid, signal.CTRL_C_EVENT)
If the above does not work, then try `signal.CTRL_BREAK_EVENT` instead.
|
How to compare cell_values with xlrd and calculate the difference between them in Python 3.4
Question: It's kind of a long story, sorry. As I've been teaching myself Python, I've
been wondering how I could apply what I'm learning to my everyday life or work,
so I'm not just doing random exercises and tutorials. I figured I could write a
script that could automate a portion of what I do at work.
I'm a Food and Beverage manager for a hotel, so one thing I do is keep an
inventory of all the liquor, beer, and wine we have. I figured I could come up
with something that could:
* Check my current inventory from an excel spread sheet
* Compare it to what my par levels should be (quantities of liquor, beer, and wine that I should always have available at the hotel)
* If my current inventory is less than what my par should be, the script will calculate the difference as well as the name of the liquor and export it to an email that would get sent out automatically.
* It would do this twice a week Monday and Thursday at 9 am.
I'm not looking for solutions or an answer to this; this is something I want
to figure out on my own, but I've always had trouble imagining how to begin to
structure a script like this. I am not a beginner, but I'm nowhere near
intermediate either. Should I use a bunch of functions, or maybe classes with
some methods? What would the "skeleton", I guess I could call it, of the
script be composed of?
Also, if someone could point me in the right direction to read up and learn
how to accomplish some of the things I want to do, like:
* exporting data to an email and sending it out.
* how to read individual cells in Excel, and just working with Python and Excel in general.
* how to basically set the script up on a timer.
My computer is always on at work, so I have no issue with this script
constantly running in the background.
I just started working on this today; I haven't had much time because of work.
I'm a bit stuck on pulling info from my cell_value. Here is the code:
import xlrd
import xlwt
import datetime
file_location = "Your file path here"
def par_num():
    par_workbook = xlrd.open_workbook(file_location)
    par_worksheet = par_workbook.sheet_by_name('Par')
    # Total number of rows with content in cells.
    num_rows = par_worksheet.nrows - 1
    num_cells = par_worksheet.ncols - 2
    # Current row for when iterating over spread sheet.
    cur_row = 2
    print(num_cells)
    # Iterates over work sheet
    while cur_row < num_rows:
        cur_row += 1
        cur_cell = -1
        print('--------------------')
        while cur_cell < num_cells:
            cur_cell += 1
            # Cell Types: 0 = Empty, 1 = Text, 2 = Number, 3 = Date, 4 = Boolean, 5 = Error, 6 = Blank
            cell_type = par_worksheet.cell_type(cur_row, cur_cell)
            # (Liq Name, Quantity in house)
            cell_value = par_worksheet.cell_value(cur_row, cur_cell)
            print(' ', cell_type, ' : ', cell_value)

def inventory_num():
    inv_workbook = xlrd.open_workbook(file_location)
    inv_worksheet = inv_workbook.sheet_by_name('Sheet1')
    # Total number of rows with content in cells.
    num_rows = inv_worksheet.nrows - 1
    num_cells = inv_worksheet.ncols - 2
    # Current row for when iterating over spread sheet.
    cur_row = 2
    # Iterates over work sheet
    while cur_row < num_rows:
        cur_row += 1
        row = inv_worksheet.row(cur_row)
        cur_cell = -1
        print('--------------------')
        while cur_cell < num_cells:
            cur_cell += 1
            # Cell Types: 0 = Empty, 1 = Text, 2 = Number, 3 = Date, 4 = Boolean, 5 = Error, 6 = Blank
            cell_type = inv_worksheet.cell_type(cur_row, cur_cell)
            # (Liq Name, Quantity in house)
            cell_value = inv_worksheet.cell_value(cur_row, cur_cell)
            print(' ', cell_type, ' : ', cell_value)
OK, so I want to compare my cell_value, which comes up as (Ketel One, 5.6) for
example, against my par levels, which would be like Ketel One: 8. But I only
want to compare the 5.6 to the 8, so I was guessing I would be able to put a
"for/if" statement within the while statement that would look something
like:
for i in cell_value:
    if cell_value[1] < (float representing my par):
        print((float representing my par) - i)
But I get back: "TypeError: unorderable types: str() < float()"
And when I try:
print(' ', cell_type, ' : ', cell_value[1])
Just to see if I can pull up the quantity, it returns the following error:
print(' ', cell_type, ' : ', cell_value[1])
TypeError: 'float' object is not subscriptable
How are the contents of "cell_value" actually stored? Is it a tuple (x, y)? Or
is it a list? Why can't I just call the value with cell_value[1]?
I'm kind of lost as to how I can actually compare the values on my spreadsheet
against another value to calculate that difference in my "while" loop.
Also, please forgive me if my formatting is wonky, and I would appreciate any
and all criticism of my code layout or anything else.
Here is the link to my inventory sheet :
<https://docs.google.com/spreadsheets/d/1yXVFKiJMDJGPS6a6gaGn9qpIPNcgGetrLgfHGfdIm_U/edit?usp=sharing>
Answer: 1. The "`TypeError: unorderable types: str() < float()`"
This happens because you are evidently looking at a cell which has text
content, rather than a number content. Notice that in Excel (and hence xlrd) a
cell containing the text "123" is not the same thing than a cell containing
the number 123. So you have to ensure
* that numbers in your inventory workbook are really recorded as numbers
* that the script is not looking at, e.g., a header row while thinking it contains a number
(for the record: Python 2 would not have raised this error... but things
wouldn't have been better, because the result of a comparison between a
float and a str would have been of no use to you)
In any case, to prevent the error you can just check beforehand that
`cell_type == 2`.
2. The "`TypeError: 'float' object is not subscriptable`"
Notice this error should come out for a different cell than the previous - a
cell which does contain a number.
It simply depends from the fact that you are trying to access the value of the
cell as if it was a list (`cell_value[1]`), while it is not. The consequences
of this will depend on the content of such cell. If you try to access the item
in position 1 of the string "123", as in the previous example, you do get "2"
- and since this is not a number, but a string, you then get the error as in
the previous example. If instead you try to access the item in position 1 of
the float 123, you get an error even before, because you cannot "access an
item of a float".
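Putting both points together, a small sketch of the guard (the par level of 8.0 for "Ketel One" is made up for illustration; the type codes are the ones from your own comment, 1 = text, 2 = number):

```python
def cell_as_number(cell_type, cell_value):
    """Return the cell's numeric value, or None if it isn't usable as one."""
    if cell_type == 2:        # a real Excel number
        return float(cell_value)
    if cell_type == 1:        # text that might still hold a number, e.g. "5.6"
        try:
            return float(cell_value)
        except ValueError:
            return None
    return None               # empty, date, boolean, error, blank

# Hypothetical par level, for illustration only:
par = {'Ketel One': 8.0}
qty = cell_as_number(2, 5.6)  # what cell_type/cell_value would give you
if qty is not None and qty < par['Ketel One']:
    print('Need', round(par['Ketel One'] - qty, 2), 'more')
```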
|
List all installed packages in Ubuntu using python
Question: I am using Python 2.7. How do I get a list of all installed packages
on Ubuntu using Python?
Answer: You need to import `subprocess` module.
>>> import subprocess
>>> subprocess.call('dpkg -l', shell=True)
|
file accessing in python
Question: I searched for the answer but couldn't find it.
I have the following code:
filePointer = open(r'c:\temp\logFile.txt', 'w')
filePointer.write(str( datetime.datetime.now()) + 'entered into table \n' )
but before I enter the above line into the file, I want to check whether it is
already present. I can read the file and do a string comparison, but I'm
looking for a better way; help me out if there is one.
Answer: Shrinidhi,
I believe this is what you're looking for. First, you should not use `'w'` with
`open`, since that truncates the file, replacing its current contents with
whatever you write. Instead you should use `'a'` or `'a+'` to append to the
file. See the code below:
from datetime import datetime

line_to_add = 'def'
with open('file.txt', 'a+') as openfile:
    openfile.seek(0)  # 'a+' starts positioned at the end; rewind to read
    if not any(line_to_add in line for line in openfile):
        openfile.write(str(datetime.now()) + " " + line_to_add + ' \n')
This will check whether the line is already in the file. If it's not there, it
will add it with a timestamp.
You'll still have to do a string comparison, but at least it's done lazily with
a generator.
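An equivalent sketch that reads the whole file first and only then appends (the file name and line are placeholders, not from the question):

```python
from datetime import datetime
import os

path = 'logFile.txt'
line_to_add = 'entered into table'

# Read the existing contents, if the file exists yet.
existing = ''
if os.path.exists(path):
    with open(path) as f:
        existing = f.read()

# Append only when the line is not already present.
if line_to_add not in existing:
    with open(path, 'a') as f:
        f.write(str(datetime.now()) + ' ' + line_to_add + '\n')
```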
|
I can't connect to FDB database from Python
Question: I'm trying to connect to a Firebird DB from Python. I'm using the FDB module.
import fdb
con = fdb.connect(host='10.7.0.115',database=r'C:\ProgramData\Entensys\UserGate6\USERGATE.FDB', user='SYSDBA', password='masterkey',charset='UTF8' )
cur = con.cursor()
cur.execute("select * from baz")
for c in cur.fetchall():
    print(c)
conn.close()
But I get an error:
Traceback (most recent call last):
File "C:\Python34\fdb_test.py", line 4, in <module>
con = fdb.connect(host='10.7.0.115',database=r'C:\ProgramData\Entensys\UserGate6\USERGATE.FDB', user='SYSDBA', password='masterkey',charset='UTF8' )
File "C:\Python34\lib\site-packages\fdb\fbcore.py", line 653, in connect
load_api()
File "C:\Python34\lib\site-packages\fdb\fbcore.py", line 183, in load_api
setattr(sys.modules[__name__],'api',fbclient_API(fb_library_name))
File "C:\Python34\lib\site-packages\fdb\ibase.py", line 1173, in __init__
key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, baseKey)
FileNotFoundError: [WinError 2] Can not find the specified file
This file exists, I know.
Answer: You need to install fbclient.dll on the system; there is no need for the full
server (you can uncheck it at install time):
<http://www.firebirdsql.org/en/firebird-2-5/>
|
python and error "IndentationError"
Question: I have a sequence as follows:
>gnl|GNOMON|230560472.m Model predicted by Gnomon on Homo sapiens unplaced genomic scaffold, alternate assembly HuRef DEGEN_1103279082069, whole genome shotgun sequence (NW_001841731.1)
GCCGGCGTTTGACCGCGCTTGGGTGGCCTGGGACCCTGTGGGAGGCTTCCCCGGCGCCGAGAGCCCTGGC
TGACGGCTGATGGGGAGGAGCCGGCGGGCGGAGAAGGCCACGGGCTCCCCAGTACCCTCACCTGCGCGGG
ATCGCTGCGGGAAACCAGGGGGAGCTTCGGCAGGGCCTGCAGAGAGGACAAGCGAAGTTAAGAGCCTAGT
GTACTTGCCGCTGGGAGCTGGGCTAGGCCCCCAACCTTTGCCCTGAAGATGCTGGCAGAGCAGGATGTTG
TAACGGGAAATGTCAGAAATACTGCAAGCAAACTGAAAACAACCCATCCATGTAGGAAAGAATAACACGG
ACTACACACTATGAGGAAACCACAGGGGAGTTTCAGGCCAGTCAGCTTTTGATCTTCAACTTTATAACTT
TCACCTTAGGATATGACGAGCCCACCGGAGTTTCAAAAATGGTATCATTTTGTATCAGGCTTGTTTTTTA
CACTCTTGGTTTCTCACAGAGATAGGTGGTTTCTCCTTAAAATCGAACATTTATATGATGCATTTTACTG
TAGTTACTATCAGAAAAGTTAGTTTTCCCAAATTTAAGTTCACTCTGGGGTACTATAGCGTGAATGTAGT
TCATTCTGTTGAGCTAGTTGTTCATGTTAGTGTAGTTCACATATTTATCTGGAACTCAAAAATGAGGGGT
TGAGAGGGGAAGCTAAAATTCAAAACATGTCCAAATATATAATTTTAATATTTTACTTTATATTTAAAAT
AGAAAAGCAATTGATTCTAGAATTAGACTAATTGCTAGCATTGCTAGGATATATAAAATGAAGCTGAATG
TTTTAACTCTGGAATTTTTCTGAATAGTCTAAGAAATAAGGCTGAAGTGTATCACTTGCCTTAAGTTTAC
TTTTGCGTGTGTGTTTTAATTTTGTTCAGTGGGGCTTTCACTTAAAAAAAAAACCATAATATTATTACCT
GGATAAAAAATACAGCTGAAAGTAGATCACTTTATCTTTAAGCAGAAGGATGGAAATAGAAGAATTTTAA
GAATGTATTGGTTGAAAAACATCTATATTATTTTATTTTTATTTCTCTTCTTGTGGGAGTAAAATAATTT
CCAACCAAATCAGTCCACCTAGATTATACACTGTTCAGTTTGTTTTCTGCCCTGCAGCACAAGCAATAAC
CAGCAGAGACTGGAACCACAGCTGAGGCTCTGTAAATGAGTTGACTGCTAAGGACTTCATGGGGATATTA
ACCTGGGGCATTAAGAGAATCAACATGCTAAAGTACTTGGAGACAGCTCTGTAATGTTTTATGAGGTTTT
TTGTTTTTTTTTTTTGAGACAGAGTCTTGCACTGTCGCCCAGGCTGG
I try to translate it to protein. I used other posts to do so, but I get
several errors when I run it.
The code is as follows:
import re
from itertools import takewhile
from collections import Counter
# prints how many start and stop codons are in the sequence
pat = re.compile(r"(TAA|TGA|TAG|ATG)") #additional space required!
c = re.findall(pat,sequence) #additional space required!
print(Counter(c)) #additional space required!
3)]
print(len(codons))
print(trimmed_sequence)
print(codons)
# Take all codons until first stop codon
coding_sequence = takewhile(lambda x: x not in stop_codons and len(x) == 3 , codons)
return "{0}_".format(protein_sequence)
I first cd to the Desktop in the terminal (Mac), then I run `python <name of
the code>.py` or `python -t <name of the code>`.
In both situations I get errors, for example:
**File "translate_dna2.py", line 34 start = sequence.find('ATG') ^
IndentationError: unexpected indent**
the same for stop_codones etc
Answer: On the line with 'codontable' (line 6) you have missing indentation. It should be:
def (...) :
----codontable
The proper indent for Python is one tab level = 4 spaces. Also here:
----pat = re.compile(r"(TAA|TGA|TAG|ATG)")
----c = re.findall(pat,sequence)
----print(Counter(c))
Every dash means one space.
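For reference, here is the codon-counting fragment from the question with consistent spacing, using a short stand-in sequence (the full sequence from the question would work the same way):

```python
import re
from collections import Counter

# Short stand-in for the full sequence in the question.
sequence = "ATGGCCTAATGAATGTAG"

# Prints how many start (ATG) and stop (TAA/TGA/TAG) codons are in the sequence.
pat = re.compile(r"(TAA|TGA|TAG|ATG)")
c = re.findall(pat, sequence)
print(Counter(c))
```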
|
Check if file line sttime is older than 2 days - Python
Question: I have a series of entries in a text file like this:
20150217_00:47:32 - AAAAAA
20150217_00:47:32 - BBBBBB
20150217_00:47:32 - CCCCCC
I want to make a function that will periodically read each line in the file
and do something depending on whether the entry is less than 2 days old, more
than 2 days old, or more than 7 days old.
I'm not sure how to get the code to understand the sttime timestamp in the
file though. My code (so far) is as follows:
with open('entries.txt', 'r+') as entries:
    for line in entries:
        lineitem = line.strip()
        print lineitem[:17]
That retrieves the timestamps ok, but how to read them and interpret them as
times I have no idea :/
(This will end up in an if loop eventually that does the function described
above, but first I just need to know how to read those times...)
Answer: This will give you a
[datetime](https://docs.python.org/2/library/datetime.html) object which you
can [compare](https://docs.python.org/2/library/datetime.html#timedelta-
objects) against other datetime objects:
from datetime import datetime, timedelta
...
~~linets = datetime.strptime('%Y%m%d_%H:%M:%s', line.strip()[:17])~~
Oops! Mark got those arguments the wrong way around. It should be
linets = datetime.strptime(line.strip()[:17], '%Y%m%d_%H:%M:%S')
**test**
>>> print datetime.strptime('20150217_00:47:32', '%Y%m%d_%H:%M:%S')
2015-02-17 00:47:32
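Once parsed, the two-day and seven-day checks from the question are plain `timedelta` comparisons. A sketch (the bucket strings are just placeholders for whatever each branch should actually do):

```python
from datetime import datetime, timedelta

def age_bucket(line, now=None):
    # Parse the leading "YYYYMMDD_HH:MM:SS" stamp and classify its age.
    linets = datetime.strptime(line.strip()[:17], '%Y%m%d_%H:%M:%S')
    age = (now or datetime.now()) - linets
    if age > timedelta(days=7):
        return 'more than 7 days old'
    if age > timedelta(days=2):
        return 'more than 2 days old'
    return 'less than 2 days old'

print(age_bucket('20150217_00:47:32 - AAAAAA',
                 now=datetime(2015, 2, 20)))  # more than 2 days old
```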
|
index.wsgi not finding virtualenv in root
Question: I am trying to install a django site on an Apache VPS, following this
[tutorial](http://thecodeship.com/deployment/deploy-django-apache-virtualenv-
and-mod_wsgi/)
my index.wsgi should activate a virtualenv in the root, it looks like this:
import os
import sys
import site
# Add the site-packages of the chosen virtualenv to work with
site.addsitedir('~/.virtualenvs/DBENV/local/lib/python2.7/site-packages')
# Add the app's directory to the PYTHONPATH
sys.path.append('/home/DB2015/')
sys.path.append('/home/DB2015/davidcms/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'davidcms.settings'
# Activate your virtual env
activate_env=os.path.expanduser("~/.virtualenvs/DBENV/bin/activate_this.py")
execfile(activate_env, dict(__file__=activate_env))
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
and I get this error
[Tue Feb 17 07:13:30.701511 2015] [:error] [pid 16103:tid 140396130674432] [client 217.44.75.146:58169] execfile(activate_env, dict(__file__=activate_env))
[Tue Feb 17 07:13:30.701653 2015] [:error] [pid 16103:tid 140396130674432] [client 217.44.75.146:58169] IOError: [Errno 2] No such file or directory: '/var/www/.virtualenvs/DBENV/bin/activate_this.py'
So it's looking in `/var/www/` instead of the root. When I try to change
`os.path.expanduser` to just the path to .virtualenvs in the root, as such:
# Activate your virtual env
activate_env= "~/.virtualenvs/DBENV/bin/activate_this.py"
execfile(activate_env, dict(__file__=activate_env))
it still says
[Tue Feb 17 07:17:12.019641 2015] [:error] [pid 16104:tid 140396206208768] [client 217.44.75.146:58200] execfile(activate_env, dict(__file__=activate_env))
[Tue Feb 17 07:17:12.019852 2015] [:error] [pid 16104:tid 140396206208768] [client 217.44.75.146:58200] IOError: [Errno 2] No such file or directory: '~/.virtualenvs/DBENV/bin/activate_this.py'
What am I missing?!
Answer: Presumably Apache is not running as you, but as its own user, so it won't use
your home directory.
You shouldn't really have your code in a home directory anyway. Pick a
standard place - eg a new top-level directory like `/srv`, or `/var/sites`, or
something.
|
How to run processes in the background and override standard streams
Question: I want to write a Python script which should create a process **in the
background** and redirect its **stdin**, **stdout** and **stderr** for
communication with that process (this executable is my program) to
**separate** virtual streams.
I need to run several instances of my program at once from the Python script,
and I need a proper solution for receiving/sending messages via their
overridden streams. I have no experience with subprocesses/streams in Python;
I am looking for a solid code sample, thank you.
Answer: What you're after is
[subprocess.Popen](https://docs.python.org/2/library/subprocess.html#subprocess.Popen):
import subprocess
p = subprocess.Popen(["mycmd", "--somearg"], stdout=subprocess.PIPE)
out, err = p.communicate()
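Since the question mentions several instances: each `Popen` gets its own set of pipes, so you can start them all in the background and then talk to each one separately. A small sketch, using `sys.executable -c` as a stand-in for your program:

```python
import subprocess
import sys

# A trivial stand-in child that reads stdin and echoes it uppercased.
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"]

# Start several instances, each with its own stdin/stdout/stderr pipes.
procs = [subprocess.Popen(child,
                          stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE,
                          universal_newlines=True)
         for _ in range(3)]

# communicate() sends input, closes stdin and collects output per process.
outputs = [p.communicate("instance %d" % i)[0] for i, p in enumerate(procs)]
print(outputs)  # ['INSTANCE 0', 'INSTANCE 1', 'INSTANCE 2']
```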
|
Python: Remove the certain string from list if string includes certain keyword
Question: I am trying to exclude certain strings in the list of strings if the string
includes certain words.
For example, if there is a word, "cinnamon" or "fruit" or "eat", in the
string, I hope to exclude it from the list of strings.
['RT @haussera: Access to Apple Pay customer data, no, but another way? everybody wins - MarketWatch http://t.co/Fm3LE2iTkY', "Landed in the US, tired w horrible migrane. The only thing helping- Connie's new song on repeat. #SoGood #Nashville https://t.co/AscR4VUkMP", 'I wish jacob would be my cinnamon apple', "I've collected 9,112 gold coins! http://t.co/T62o8NoP09 #iphone, #iphonegames, #gameinsight", 'HAHAHA THEY USED THE SAME ARTICLE AS INDEPENDENT http://t.co/mC7nfnhqSw', '@hot1079atl Let me know what you think of the new single "Mirage "\nhttps://t.co/k8DJ7oxkyg', 'RT @SWNProductions: Hey All so we have a new iTunes listing due to our old one getting messed up please resubscribe via the following https…', 'Shawty go them apple bottoms jeans and the boots with the furrrr with furrrr the whole club is looking at her', 'I highly recommend you use MyMedia - a powerfull download manager for the iPhone/iPad. http://t.co/TWmYhgKwBH', 'Alusckが失われた時間の異常を解消しました http://t.co/peYgajYvQY http://t.co/sN3jAJnd1I', 'Театр радует туземцев! Теперь мой остров стал еще круче! http://t.co/EApBrIGghO #iphone, #iphonegames, #gameinsight', 'RT @AppIeOfficiel: Our iPhone 7 http://t.co/d2vCOCOTqt', 'Я выполнил задание "Подключаем резервы"! Заходите ко мне в гости! http://t.co/ZReExwwbxh #iphone #iphonegames #gameinsight', "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", "I've collected 4,100 gold coins! http://t.co/JZLQJdRtLG #iphone, #iphonegames, #gameinsight", "I've collected 28,800 gold coins! http://t.co/r3qXNHwUdp #iphone, #iphonegames, #gameinsight", 'RT @AppIeOfficiel: Our iPhone 7 http://t.co/d2vCOCOTqt']
keywordFilter=['eat','cinnamon','fruit']
for sent in list:
    for word in keywordFilter:
        if word in sent:
            list.remove(sent)
But it does not filter out the keywords as I hoped; it returns the original
list. Does anyone have an idea why?
**1st Edit:**
import json
from json import *

tweets=[]
for line in open('apple.json'):
    try:
        tweets.append(json.loads(line))
    except:
        pass

keywordFilter=set(['pie','juice','cinnamon'])

for tweet in tweets:
    for key, value in tweet.items():
        if key=='text':
            tweetsF.append(value)
print(type(tweetsF))
print(len(tweetsF))

tweetsFBK=[sent for sent in tweetsF if not any(word in sent for word in keywordFilter)]
print(type(tweetsFBK))
print(len(tweetsFBK))
Above is the code I have so far. Up to **tweetsF**, the strings are stored
correctly, and I have tried to exclude the words by using keywordFilter.
However, **tweetsFBK** comes back with length 0 (nothing). Does anyone have any idea why?
Answer: One solution is the following:
list = [sent for sent in list
        if not any(word in sent for word in keywordFilter)]
It will remove all strings that contain one of the words in the list
`keywordFilter` as a substring. For instance, it will remove the second
string, since it contains the word `repeat` (and `eat` is a substring of
`repeat`).
If you want to avoid this, you can do the following:
list = [sent for sent in list
        if not any(word in sent.split(' ') for word in keywordFilter)]
It will remove only strings containing one of the words in the list
`keywordFilter` as a subword (i.e. delimited by spaces in the sentence).
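Putting the pieces together, here is a runnable sketch with a few sample strings standing in for the question's tweet list. Note it also avoids the bug in the question's original loop, which removes items from the very list it is iterating over (causing elements to be skipped):

```python
# Sample stand-ins for the question's tweet data.
tweets = [
    "I wish jacob would be my cinnamon apple",  # "cinnamon" is a word -> removed
    "Connie's new song on repeat",              # "eat" only inside "repeat" -> kept
    "Shawty got them apple bottom jeans",       # no filtered word -> kept
]
keywordFilter = ['eat', 'cinnamon', 'fruit']

# Build a new list instead of calling remove() on the list being iterated;
# split() compares whole words, so "repeat" does not match "eat".
filtered = [sent for sent in tweets
            if not any(word in sent.split() for word in keywordFilter)]
print(filtered)
```

For case-insensitive matching, compare against `sent.lower().split()` instead.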
|
Python stopwatch example - starting all class instances at the same time?
Question: In this example how can I start all 4 stop watches at the same time?
This example code is over 12 years old but it is the best stopwatch example I
have been able to find via Google. You can see that I have 4 instances of the
class in use. I need to be able to start all the instances at the exact same
time. A Tkinter button doesn't allow calling multiple functions. Even if it
did, one function would run before the next, so technically they wouldn't all
start at the exact same time.
I will need to stop each stopwatch at different times but that is easy by just
calling each Stop function in the class. But I can't figure out how to start
them all at the same time.
from Tkinter import *
import time
class StopWatch(Frame):
""" Implements a stop watch frame widget. """
def __init__(self, parent=None, **kw):
Frame.__init__(self, parent, kw)
self._start = 0.0
self._elapsedtime = 0.0
self._running = 0
self.timestr = StringVar()
self.makeWidgets()
def makeWidgets(self):
""" Make the time labels. """
l = Label(self, textvariable=self.timestr)
l.pack(fill=X, expand=NO, pady=2, padx=2)
self._setTime(self._elapsedtime)
def _update(self):
""" Update the label with elapsed time. """
self._elapsedtime = time.time() - self._start
self._setTime(self._elapsedtime)
self._timer = self.after(50, self._update)
def _setTime(self, elap):
""" Set the time string to Minutes:Seconds:Hundreths """
minutes = int(elap/60)
seconds = int(elap - minutes*60.0)
hseconds = int((elap - minutes*60.0 - seconds)*100)
self.timestr.set('%02d:%02d:%02d' % (minutes, seconds, hseconds))
def Start(self):
global sw2
""" Start the stopwatch, ignore if running. """
if not self._running:
self._start = time.time() - self._elapsedtime
self._update()
self._running = 1
def Stop(self):
""" Stop the stopwatch, ignore if stopped. """
if self._running:
self.after_cancel(self._timer)
self._elapsedtime = time.time() - self._start
self._setTime(self._elapsedtime)
self._running = 0
def Reset(self):
""" Reset the stopwatch. """
self._start = time.time()
self._elapsedtime = 0.0
self._setTime(self._elapsedtime)
def main():
root = Tk()
sw1 = StopWatch(root)
sw1.pack(side=TOP)
sw2 = StopWatch(root)
sw2.pack(side=TOP)
sw3 = StopWatch(root)
sw3.pack(side=TOP)
sw4 = StopWatch(root)
sw4.pack(side=TOP)
Button(root, text='Start', command=sw1.Start).pack(side=LEFT)
Button(root, text='Stop', command=sw1.Stop).pack(side=LEFT)
Button(root, text='Reset', command=sw1.Reset).pack(side=LEFT)
Button(root, text='Quit', command=root.quit).pack(side=LEFT)
root.mainloop()
if __name__ == '__main__':
main()
Answer: The following program may be close to what you want. Please note that since it
takes time to start and stop the stopwatches, you may find small discrepancies
among the times they are showing.
#! /usr/bin/env python3
import tkinter
import time
class StopWatch(tkinter.Frame):
@classmethod
def main(cls):
tkinter.NoDefaultRoot()
root = tkinter.Tk()
root.title('Stop Watch')
root.resizable(True, False)
root.grid_columnconfigure(0, weight=1)
padding = dict(padx=5, pady=5)
widget = StopWatch(root, **padding)
widget.grid(sticky=tkinter.NSEW, **padding)
root.mainloop()
def __init__(self, master=None, cnf={}, **kw):
padding = dict(padx=kw.pop('padx', 5), pady=kw.pop('pady', 5))
super().__init__(master, cnf, **kw)
self.grid_columnconfigure(1, weight=1)
self.grid_rowconfigure(1, weight=1)
self.__total = 0
self.__label = tkinter.Label(self, text='Total Time:')
self.__time = tkinter.StringVar(self, '0.000000')
self.__display = tkinter.Label(self, textvariable=self.__time)
self.__button = tkinter.Button(self, text='Start', command=self.click)
self.__label.grid(row=0, column=0, sticky=tkinter.E, **padding)
self.__display.grid(row=0, column=1, sticky=tkinter.EW, **padding)
self.__button.grid(row=1, column=0, columnspan=2,
sticky=tkinter.NSEW, **padding)
def click(self):
if self.__button['text'] == 'Start':
self.__button['text'] = 'Stop'
self.__start = time.perf_counter()  # time.clock() was removed in Python 3.8
self.__counter = self.after_idle(self.__update)
else:
self.__button['text'] = 'Start'
self.after_cancel(self.__counter)
def __update(self):
now = time.perf_counter()
diff = now - self.__start
self.__start = now
self.__total += diff
self.__time.set('{:.6f}'.format(self.__total))
self.__counter = self.after_idle(self.__update)
class ManyStopWatch(tkinter.Tk):
def __init__(self, count):
super().__init__()
self.title('Stopwatches')
padding = dict(padx=5, pady=5)
tkinter.Button(self, text='Toggle All', command=self.click).grid(
sticky=tkinter.NSEW, **padding)
for _ in range(count):
StopWatch(self, **padding).grid(sticky=tkinter.NSEW, **padding)
def click(self):
for child in self.children.values():
if isinstance(child, StopWatch):
child.click()
if __name__ == '__main__':
ManyStopWatch(4).mainloop()
|
Working with dates in Access using pyodbc giving "Too few parameters" error
Question: I am using Python with a pyodbc import.
I am using Microsoft Office 2013 64bit.
I am attempting to query an accdb database to select distinct dates within a
range and assign them to a cursor so I can then append them to a list.
My Access database has a table named Closing_prices, and a column named Date_,
which has the data type "Date/Time".
My code is as follows:
cursor=conx.cursor()
query="select distinct Date_ FROM Closing_prices where Date_ >= '10/8/2011' and Date_ < '30/04/2014'"
cursor.execute(query)
dates=list()
for date in cursor:
dates.append(date[0])
However I am receiving the error message:
Traceback (most recent call last):
File "C:/Users/Stuart/PycharmProjects/untitled/Apache - Copy.py", line 20, in <module>
cursor.execute(query)
pyodbc.Error: ('07002', '[07002] [Microsoft][ODBC Microsoft Access Driver] Too few parameters. Expected 1. (-3010) (SQLExecDirectW)')
As Date_ is a datetime, I have also tried:
query="select distinct Date_ FROM Closing_prices where Date_ >= '10/8/2011 00:00:00' and Date_ < '30/04/2014 00:00:00'"
When I run:
cursor = conx.cursor()
query="select Date_ FROM Closing_prices"
cursor.execute(query)
for row in cursor:
print row
print type(row[0])
I get the following output as an example:
(datetime.datetime(2014, 3, 24, 0, 0), )
(datetime.datetime(2014, 3, 25, 0, 0), )
(datetime.datetime(2014, 3, 26, 0, 0), )
(datetime.datetime(2014, 3, 27, 0, 0), )
I am relatively new to Python and even newer to SQL queries, so could someone
please point out where I am going wrong, and perhaps how I can change my code
to help me append the distinct dates into a list as desired.
Many thanks.
Answer: To
1. save yourself the hassle of finding the applicable date delimiter, and
2. promote good coding practice
you should simply use a _parameterized query_ like this:
import datetime
import pyodbc

db = pyodbc.connect(connStr)
crsr = db.cursor()
sql = """
SELECT DISTINCT Date_ FROM Closing_prices WHERE Date_ >= ? AND Date_ < ?
"""
params = (datetime.date(2011, 8, 10), datetime.date(2014, 4, 30))
crsr.execute(sql, params)
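The distinct dates can then be collected into a list with `fetchall()`. Here is a sketch of the pattern, using a stand-in list of row tuples shaped like the pyodbc rows printed in the question (with a real connection you would use `crsr.fetchall()` directly):

```python
import datetime

# Stand-in for crsr.fetchall(): for an Access Date/Time column, pyodbc
# returns row tuples whose first element is a datetime.datetime.
rows = [(datetime.datetime(2014, 3, 24, 0, 0),),
        (datetime.datetime(2014, 3, 25, 0, 0),)]

# With the real cursor this would be:
#   dates = [row[0] for row in crsr.fetchall()]
dates = [row[0] for row in rows]
print(dates)
```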
|
Problems saving cookies when making HTTP requests using Python
Question: I'm trying to make a web spider using Python, but I've got some problems when I
tried to log in to the website Pixiv. My code is as below:
import sys
import urllib
import urllib2
import cookielib
url="https://www.secure.pixiv.net/login.php"
cookiename='123.txt'
cookie = cookielib.MozillaCookieJar(cookiename)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))
cookie.save()
values={'model':'login',
'return_to':'/',
'pixiv_id':'username',
'pass':'password',
'skip':'1'}
headers = { 'User-Agent' : 'User-Agent' }
data=urllib.urlencode(values)
req=urllib2.Request(url,data)
response=urllib2.urlopen(req)
the_page=response.read()
cookie.save()
To make sure it works, I used cookielib to save the cookies as a txt file. I
ran the code and got a "cookie.txt", but when I opened the file I found that it
was empty; in other words, my code didn't work. I don't know what's wrong with
it.
Answer: The problem is you're not using the `opener` that you created with the
cookiejar attached to it in order to make the request. `urllib2.urlopen` has
no way of knowing that you want to use that opener to start the request.
You can either use the opener's
[`open`](https://docs.python.org/2/library/urllib2.html#urllib2.OpenerDirector.open)
method directly or, if you want to use this by default for the rest of your
application, you can install it as the default opener for all requests made
with `urllib2` using
[`urllib2.install_opener`](https://docs.python.org/2/library/urllib2.html#urllib2.install_opener).
So give that a try and see if it does the trick.
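A sketch of the fix follows. The actual request is commented out since it needs real credentials; the Python 3 fallback names are an assumption added for anyone running this on a newer interpreter (the question itself uses Python 2):

```python
try:  # Python 2, as in the question
    import cookielib
    import urllib2 as request
except ImportError:  # Python 3 equivalents
    import http.cookiejar as cookielib
    from urllib import request

jar = cookielib.MozillaCookieJar('123.txt')
opener = request.build_opener(request.HTTPCookieProcessor(jar))

# Either use the opener directly for the login request...
#   response = opener.open(req)
# ...or install it so every urlopen() call uses the cookiejar:
request.install_opener(opener)
#   response = request.urlopen(req)
# jar.save()  # only save after a request, otherwise the file stays empty
```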
|
Write numpy array to wave file in buffers using wave (not scipy.io.wavfile) module
Question: This caused me a day's worth of headache, but since I've figured it out I
wanted to post it somewhere in case it's helpful.
I am using python's wave module to write data to a wave file. I'm NOT using
scipy.io.wavfile because the data can be a huge vector (hours of audio at
16kHz) that I don't want to / can't load into memory all at once. My
understanding is that scipy.io.wavfile only gives you full-file interface,
while wave can allow you to read and write in buffers. I'd love to be
corrected on that if I'm wrong.
The problem I was running into comes down to how to convert the float data
into bytes for the wave.writeframes function. My data were not being written
in the correct order. This is because I was using the numpy.getbuffer()
function to convert the data into bytes, which does not respect the
orientation of the data:
x0 = np.array([[0,1],[2,3],[4,5]],dtype='int8')
x1 = np.array([[0,2,4],[1,3,5]],dtype='int8').transpose()
if np.array_equal(x0, x1):
print "Data are equal"
else:
print "Data are not equal"
b0 = np.getbuffer(x0)
b1 = np.getbuffer(x1)
result:
Data are equal
In [453]: [b for b in b0]
Out[453]: ['\x00', '\x01', '\x02', '\x03', '\x04', '\x05']
In [454]: [b for b in b1]
Out[454]: ['\x00', '\x02', '\x04', '\x01', '\x03', '\x05']
I assume the order of bytes is determined by the initial allocation in memory,
as numpy.transpose() does not rewrite data but just returns a view. However
since this fact is buried by the interface to numpy arrays, debugging this
before knowing that this was the issue was a doozy.
A solution is to use numpy's tostring() function:
s0 = x0.tostring()
s1 = x1.tostring()
In [455]: s0
Out[455]: '\x00\x01\x02\x03\x04\x05'
In [456]: s1
Out[456]: '\x00\x01\x02\x03\x04\x05'
This is probably obvious to anyone who saw the tostring() function first, but
somehow my search did not dig up any good documentation on how to format an
entire numpy array for wave file writing other than to use scipy.io.wavfile.
So here it is. Just for completion (note that "features" is originally
n_channels x n_samples, which is why I had this data order issue to begin
with:
outfile = wave.open(output_file, mode='w')
outfile.setnchannels(features.shape[0])
outfile.setframerate(fs)
outfile.setsampwidth(2)
bytes = (features*(2**15-1)).astype('i2').transpose().tostring()
outfile.writeframes(bytes)
outfile.close()
Answer: For me `tostring` works fine. Note that in WAVE an 8-bit file must be unsigned,
whereas others (16- or 32-bit) must be signed.
Some dirty demo code that works for me:
import wave
import numpy as np
SAMPLERATE=44100
BITWIDTH=8
CHANNELS=2
def gensine(freq, dur):
t = np.linspace(0, dur, round(dur*SAMPLERATE))
x = np.sin(2.0*np.pi*freq*t)
if BITWIDTH==8:
x = x+abs(min(x))
x = np.array( np.round( (x/max(x)) * 255) , dtype=np.dtype('<u1'))
else:
x = np.array(np.round(x * ((2**(BITWIDTH-1))-1)), dtype=np.dtype('<i%d' % (BITWIDTH//8)))
return np.repeat(x,CHANNELS).reshape((len(x),CHANNELS))
output_file="test.wav"
outfile = wave.open(output_file, mode='wb')
outfile.setparams((CHANNELS, BITWIDTH//8, SAMPLERATE, 0, 'NONE', 'not compressed'))
outfile.writeframes(gensine(440, 1).tostring())
outfile.writeframes(gensine(880, 1).tostring())
outfile.close()
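Since the whole point of `wave` over `scipy.io.wavfile` here is incremental writing, this is a minimal buffered-write sketch (using `tobytes()`, the current name for `tostring()`; the chunk size and the 440 Hz test tone are arbitrary stand-ins):

```python
import wave

import numpy as np

FS = 16000    # sample rate
CHUNK = 4096  # frames per write, so hours of audio never sit in memory at once

with wave.open('out.wav', mode='wb') as outfile:
    outfile.setnchannels(1)
    outfile.setsampwidth(2)  # 16-bit signed samples
    outfile.setframerate(FS)
    # Stand-in source: one second of a 440 Hz tone, generated chunk by chunk.
    for start in range(0, FS, CHUNK):
        t = np.arange(start, min(start + CHUNK, FS)) / FS
        chunk = np.sin(2 * np.pi * 440 * t)
        outfile.writeframes((chunk * (2 ** 15 - 1)).astype('<i2').tobytes())
```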
|
random number and tkinter not working
Question: I'm new to Python, as I've only used it for 4 months. I'm trying to program a
tkinter window in which a label displays a random number every second. I have
this so far:
from tkinter import *
from random import *
testy = "0"
root = Tk()
lbl = Label(root,text="0")
def callback():
global testy
lbl.configure(text=testy)
testy = str(randint(0,10))
root.after(2000,callback)
lbl.pack()
root.after(2000,callback)
root.mainloop()
Any help is appreciated.
Answer: Yours seems to work for me. This is a slightly more condensed version, however,
which is called every 1 second instead of every 2, as that is what you outlined
wanting in your question.
from tkinter import *
import random
root = Tk()
lbl = Label(root)
lbl.pack()
def replace_text():
lbl.config(text=str(random.random()))
root.after(1000, replace_text)
replace_text()
root.mainloop()
|
Not understanding the row-key in Python API of Hbase
Question: I have a hbase table (`customers`) in the following form:
hbase(main):004:0> scan 'customers'
ROW COLUMN+CELL
4000001 column=customers_data:age, timestamp=1424123059769, value=55
4000001 column=customers_data:firstname, timestamp=1424123059769, value=Kristina
4000001 column=customers_data:lastname, timestamp=1424123059769, value=Chung
4000001 column=customers_data:profession, timestamp=1424123059769, value=Pilot
I tried to extract these data using python API
<http://happybase.readthedocs.org/en/latest/>:
import happybase
connection = happybase.Connection('localhost',autoconnect=False)
connection.open()
table = connection.table('customers')
print table.families()
row = table.row('ROW')
print row
It prints `table.families()` as:
{'customers_data': {'block_cache_enabled': True,
'bloom_filter_nb_hashes': 0,
'bloom_filter_type': 'ROW',
'bloom_filter_vector_size': 0,
'compression': 'NONE',
'in_memory': False,
'max_versions': 1,
'name': 'customers_data:',
'time_to_live': 2147483647}}
But it prints {} (nothing) for the row. I am just confused about my
understanding of the row-key value in an HBase table. Do you have any
suggestion on how to get data from an HBase table using the Python API? Thanks.
Answer: Your rowkey is "4000001", not "ROW".
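With the correct rowkey, `table.row()` returns a dict mapping `'family:qualifier'` to the cell value. Since running it needs a live HBase Thrift server, the shape is illustrated here with a plain-dict stand-in built from the scan output above (note that happybase on Python 3 returns bytes keys and values):

```python
# Real call (requires the connection from the question):
#   row = table.row('4000001')
# Stand-in with the same mapping happybase would return:
row = {'customers_data:age': '55',
       'customers_data:firstname': 'Kristina',
       'customers_data:lastname': 'Chung',
       'customers_data:profession': 'Pilot'}
print(row['customers_data:firstname'])
```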
|
Walk through each column in a numpy matrix efficiently in Python
Question: I have a very big two-dimensional array in Python, using the `numpy` library. I
want to walk through each column efficiently and check each time whether the
elements are different from 0, to count their number in every column.
Suppose I have the following matrix.
M = array([[1,2], [3,4]])
The following code enables us to walk through each row efficiently, for
example (it is not what I intend to do of course!):
for row_idx, row in enumerate(M):
print "row_idx", row_idx, "row", row
for col_idx, element in enumerate(row):
print "col_idx", col_idx, "element", element
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
However, in my case I want to walk through each column efficiently, since I
have a very big matrix.
I've heard that there is a very efficient way to achieve this using numpy,
instead of my current code:
curr_col, curr_row = 0, 0
while (curr_col < numb_colonnes):
result = 0
while (curr_row < numb_rows):
# If different from 0
if (M[curr_row][curr_col] != 0):
result += 1
curr_row += 1
.... using result value ...
curr_col += 1
curr_row = 0
Thanks in advance!
Answer: In the code you showed us, you treat `numpy`'s arrays as lists and, as you can
see, it works! But arrays are not lists, and while you can treat them as such,
it wouldn't make sense to use arrays, or even `numpy`.
To really exploit the usefulness of `numpy` you have to operate directly on
arrays, writing, e.g.,
M = M*M
when you want to square the elements of an array and using the rich set of
`numpy` functions to operate directly on arrays.
That said, I'll try to get a bit closer to your problem... If your intent is
to count the elements of an array that are different from zero, you can use
the `numpy` function `sum`.
Using `sum`, you can obtain the sum of all the elements in an array, or you
can sum across a particular axis.
import numpy as np
a = np.array(((3,4),(5,6)))
print np.sum(a) # 18
print np.sum(a, axis=0) # [8, 10]
print np.sum(a, axis=1) # [7, 11]
Now you are protesting: I don't want to sum the elements, I want to count the
non-zero elements... but
1. if you write a logical test on an array, you obtain an array of booleans, e.g, we want to test which elements of `a` are even
print a%2==0
# [[False True]
# [False True]]
2. `False` is zero and `True` is one, at least when we sum it...
print np.sum(a%2==0) # 2
or, if you want to sum over a column, i.e., the index that changes is the 0-th
print np.sum(a%2==0, axis=0) # [0 2]
or sum across a row
print np.sum(a%2==0, axis=1) # [1 1]
To summarize, for your particular use case
by_col = np.sum(M!=0, axis=0)
# use the counts of non-zero terms in each column, stored in an array
...
# if you need the grand total, use sum again
total = np.sum(by_col)
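As an aside, numpy also has a dedicated function for this exact task, `count_nonzero`, which accepts an `axis` argument on reasonably recent versions (1.12 and later):

```python
import numpy as np

M = np.array([[1, 0, 2],
              [0, 0, 3]])

by_col = np.count_nonzero(M, axis=0)  # non-zero count per column
print(by_col)        # [1 0 2]
print(by_col.sum())  # 3
```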
|
Saving a Python dictionary in external file?
Question: I'm working on a code that is essentially a super basic AI system (basically a
simple Python version of Cleverbot).
As part of the code, I've got a starting dictionary with a couple keys that
have lists as the values. As the file runs, the dictionary is modified - keys
are created and items are added to the associated lists.
So what I want to do is have the dictionary saved as an external file in the
same file folder, so that the program doesn't have to "re-learn" the data each
time I start the file. So it will load it at the start of running the file,
and at the end it will save the new dictionary in the external file. How can I
do this?
Do I have to do this using JSON, and if so, how do I do it? Can I do it using
the built-in json module, or do I need to download JSON? I tried to look up
how to use it but couldn't really find any good explanations.
I have my main file saved in C:/Users/Alex/Dropbox/Coding/AI-Chat/AI-Chat.py
The phraselist is saved in C:/Users/Alex/Dropbox/Coding/AI-Chat/phraselist.py
I'm running Python 2.7 through Canopy.
When I run the code, this is the output:
In [1]: %run "C:\Users\Alex\Dropbox\Coding\AI-Chat.py"
File "C:\Users\Alex\Dropbox\Coding\phraselist.py", line 2
S'How are you?'
^
SyntaxError: invalid syntax
EDIT: I got it now. I had to specify the sys.path to import phrase from
phraselist.py
Here's the full code I have:
############################################
################ HELPER CODE ###############
############################################
import sys
import random
import json
sys.path = ['C:\\Users\\Alex\\Dropbox\\Coding\\AI-Chat'] #needed to specify path
from phraselist import phrase
def chooseResponse(prev,resp):
'''Chooses a response from previously learned responses in phrase[resp]
resp: str
returns str'''
if len(phrase[resp])==0: #if no known responses, randomly choose new phrase
key=random.choice(phrase.keys())
keyPhrase=phrase[key]
while len(keyPhrase)==0:
key=random.choice(phrase.keys())
keyPhrase=phrase[key]
else:
return random.choice(keyPhrase)
else:
return random.choice(phrase[resp])
def learnPhrase(prev, resp):
'''prev is previous computer phrase, resp is human response
learns that resp is good response to prev
learns that resp is a possible computer phrase, with no known responses
returns None
'''
#learn resp is good response to prev
if prev not in phrase.keys():
phrase[prev]=[]
phrase[prev].append(resp)
else:
phrase[prev].append(resp) #repeat entries to weight good responses
#learn resp is computer phrase
if resp not in phrase.keys():
phrase[resp]=[]
############################################
############## END HELPER CODE #############
############################################
def chat():
'''runs a chat with Alan'''
keys = phrase.keys()
vals = phrase.values()
print("My name is Alan.")
print("I am an Artifical Intelligence Machine.")
print("As realistic as my responses may seem, you are talking to a machine.")
print("I learn from my conversations, so I get better every time.")
print("Please forgive any incorrect punctuation, spelling, and grammar.")
print("If you want to quit, please type 'QUIT' as your response.")
resp = raw_input("Hello! ")
prev = "Hello!"
while resp != "QUIT":
learnPhrase(prev,resp)
prev = chooseResponse(prev,resp)
resp = raw_input(prev+' ')
else:
with open('phraselist.py','w') as f:
f.write('phrase = '+json.dumps(phrase))
print("Goodbye!")
chat()
And phraselist.py looks like:
phrase = {
'Hello!':['Hi!'],
'How are you?':['Not too bad.'],
'What is your name?':['Alex'],
}
Answer: You can use the `pickle` module for that. This module has two methods:
1. **Pickling (dump)**: converts Python objects into a string representation.
2. **Unpickling (load)**: retrieves the original objects from the stored string representation.
<https://docs.python.org/3.3/library/pickle.html> code:
>>> import pickle
>>> l = [1,2,3,4]
>>> with open("test.txt", "wb") as fp: #Pickling
... pickle.dump(l, fp)
...
>>> with open("test.txt", "rb") as fp: # Unpickling
... b = pickle.load(fp)
...
>>> b
[1, 2, 3, 4]
* * *
Following is sample code for our problem:
1. Define the phrase file name, and use the same file name when creating/updating the phrase data and when reading it back.
2. Use exception handling when getting the phrase data, i.e. check whether the file is present on disk with the `os.path.isfile(file_path)` method.
3. Use the `dump` and `load` pickle methods to save and load the phrase data.
code:
import os
import pickle
file_path = "/home/vivek/Desktop/stackoverflow/phrase.json"
def setPhrase():
phrase = {
'Hello!':['Hi!'],
'How are you?':['Not too bad.'],
'What is your name?':['Alex'],
}
with open(file_path, "wb") as fp:
pickle.dump(phrase, fp)
return
def getPhrase():
if os.path.isfile(file_path):
with open(file_path, "rb") as fp:
phrase = pickle.load(fp)
else:
phrase = {}
return phrase
if __name__=="__main__":
setPhrase()
#- Get values.
phrase = getPhrase()
print "phrase:", phrase
output:
vivek@vivek:~/Desktop/stackoverflow$ python 22.py
phrase: {'How are you?': ['Not too bad.'], 'What is your name?': ['Alex'], 'Hello!': ['Hi!']}
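Since the question specifically mentions JSON: the built-in `json` module (nothing extra to download) follows the same dump/load pattern and produces a human-readable file. One caveat is that JSON only handles basic types, and dictionary keys are always stored as strings:

```python
import json

phrase = {'Hello!': ['Hi!'],
          'How are you?': ['Not too bad.'],
          'What is your name?': ['Alex']}

with open('phrase.json', 'w') as fp:   # save at the end of a session
    json.dump(phrase, fp)

with open('phrase.json') as fp:        # load at the start of the next one
    phrase = json.load(fp)

print(phrase['Hello!'])  # ['Hi!']
```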
|
How to make python setuptools find top level modules
Question: I have a package with a structure that would (simplified) look like:
mypackage/
__init__.py
setup.py
module1.py
module2.py
mysubpackage/
__init__.py
mysubmodule1.py
mysubmodule2.py
I'm using a configuration for setup.py like this:
from setuptools import setup, find_packages
setup(
name = "mypackage",
version = "0.1",
author = "Foo",
author_email = "[email protected]",
description = ("My description"),
packages=find_packages(),
)
The default `where` argument for `find_packages()` is `'.'`, but it doesn't
include my top-level modules (neither module1.py nor module2.py). However, all
child submodules and subpackages are added when running `python setup.py build`.
How could I get top-level Python modules added too, without moving setup.py
one level higher?
Answer: Thank you all for your responses.
Finally, I added a directory (not Python package) containing mypackage and the
setup.py module. The structure now looks as follows:
myapp/
setup.py
mypackage/
__init__.py
module1.py
module2.py
mysubpackage/
__init__.py
mysubmodule1.py
mysubmodule2.py
Now using `find_packages()` works as expected. Thanks!
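For reference, there is also a way that keeps setup.py where it was: `find_packages()` only discovers packages (directories containing `__init__.py`), but top-level modules sitting next to setup.py can be listed explicitly with `py_modules` (a sketch of the setup.py, assuming module1.py and module2.py sit beside it):

```python
from setuptools import setup, find_packages

setup(
    name="mypackage",
    version="0.1",
    py_modules=["module1", "module2"],  # top-level .py files next to setup.py
    packages=find_packages(),
)
```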
|
Python: Remove the certain string from list if string includes certain keyword v.2
Question: I am trying to write code that excludes strings containing certain keywords from
a list of strings. This morning, with help from Stack Overflow, I was able to
add code that excludes strings which include certain keywords.
[Python: Remove the certain string from list if string includes certain
keyword](http://stackoverflow.com/questions/28565920/python-remove-the-
certain-string-from-list-if-string-includes-certain-keyword/28567010#28567010)
However, when I change the data set, it does not work.
# -*- coding: utf-8 -*-
import nltk, json, os, csv, matplotlib, pylab, re
from matplotlib import *
from nltk import *
from pylab import *
from re import *
'Start with empty list'
tweets=[]
tweetsF=[]
for line in open('apple.json'):
try:
tweets.append(json.loads(line))
except:
pass
keywordFilter=['pie','juice','cinnamon']
for tweet in tweets:
for key, value in tweet.items():
if key=='text':
tweetsF.append(value)
print(tweetsF[:50])
original_list=tweetsF[:50]
tweetsFBK=[str for str in original_list if not any(word in str for word in keywordFilter)]
print (tweetsFBK)
From this morning's code to the code above, I only changed the **tweetsF** part,
which returns the list of strings from the data source. However, I don't think
that really matters, because it is a list of strings just like in this
morning's question.
Do you have any idea why the excluding part does not return any values (i.e. returns 0)?
[EDITED]
original_list=['RT @haussera: Access to Apple Pay customer data, no, but another way? everybody wins - MarketWatch http://t.co/Fm3LE2iTkY', "Landed in the US, tired w horrible migrane. The only thing helping- Connie's new song on repeat. #SoGood #Nashville https://t.co/AscR4VUkMP", 'I wish jacob would be my cinnamon apple', "I've collected 9,112 gold coins! http://t.co/T62o8NoP09 #iphone, #iphonegames, #gameinsight", 'HAHAHA THEY USED THE SAME ARTICLE AS INDEPENDENT http://t.co/mC7nfnhqSw', '@hot1079atl Let me know what you think of the new single "Mirage "\nhttps://t.co/k8DJ7oxkyg', 'RT @SWNProductions: Hey All so we have a new iTunes listing due to our old one getting messed up please resubscribe via the following https…', 'Shawty go them apple bottoms jeans and the boots with the furrrr with furrrr the whole club is looking at her', 'I highly recommend you use MyMedia - a powerfull download manager for the iPhone/iPad. http://t.co/TWmYhgKwBH', 'Alusckが失われた時間の異常を解消しました http://t.co/peYgajYvQY http://t.co/sN3jAJnd1I', 'Театр радует туземцев! Теперь мой остров стал еще круче! http://t.co/EApBrIGghO #iphone, #iphonegames, #gameinsight', 'RT @AppIeOfficiel: Our iPhone 7 http://t.co/d2vCOCOTqt', 'Я выполнил задание "Подключаем резервы"! Заходите ко мне в гости! http://t.co/ZReExwwbxh #iphone #iphonegames #gameinsight', "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", "I've collected 4,100 gold coins! http://t.co/JZLQJdRtLG #iphone, #iphonegames, #gameinsight", "I've collected 28,800 gold coins! http://t.co/r3qXNHwUdp #iphone, #iphonegames, #gameinsight", 'RT @AppIeOfficiel: Our iPhone 7 http://t.co/d2vCOCOTqt', '“@EleanorDiamonds: truth hurts doesnt it” i still wonder why u didnt tweet the apple shirt pic funny how u only tweet whats convenient for u', "I'm now an E-List celebrity in Kim Kardashian: Hollywood. You can be famous too by playing on iPhone! 
http://t.co/HUZSnzu8pO", "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", '【朗報】ぱるると乃木坂生田ちゃんが相思相愛 https://t.co/5QacaMdASN', '【ONE PIECE ドンジャラ】ワンピースのドンジャラがアプリで登場!登場キャ・・・URL→[https://t.co/QVlDXfOG7S] http://t.co/YlV9pwoVZT', "RT @leedsparadise: people@connecting tis shit with larry wtf the apple wasn't about larry this is about supporting the community get that i…", 'RT @AppIeOfficiel: Our iPhone 7 http://t.co/d2vCOCOTqt', "RT @Real_Liam_Payne: Hey everyone I have a new track out for Cheryl it's called I won't break https://t.co/2rUQbKZkSn enjoy!! ", 'Apple pulls <b>Fitbit</b> trackers fr... https://t.co/IDhDv6w8lA via @fitbit_fan #fitbit | https://t.co/w8dEhQjEf3', "RT @lunaesio: @wyfesio If you were a tropical fruit, you'd be a fine-apple", 'Apple Removes <b>FitBit</b> Fitness T... https://t.co/gpFeYj8heh via @fitbit_fan #fitbit | https://t.co/w8dEhQjEf3', 'Emily_alicexx gathered the Animal Tracks collection http://t.co/aztmTe7rrN http://t.co/FNMNSzDYkB', "RT @leedsparadise: people@connecting tis shit with larry wtf the apple wasn't about larry this is about supporting the community get that i…", "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", 'fogo, ate o apple quicktime falta ao meu pc, ah bom...', '#Kiahnassong https://t.co/GxYyyzcAwT Raising money for sick children at #birminghamchildrenhospital #BBCCiN', "RT @Real_Liam_Payne: Hey everyone I have a new track out for Cheryl it's called I won't break https://t.co/2rUQbKZkSn enjoy!! ", "I've collected $17844! Who can collect more? It's a challenge! http://t.co/NV4KzSF9zX #gameinsight #iphonegames #iphone", "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", 'RT @ZaynMalikx69: @Louis_Tomlinson is this also a apple logo?. 
http://t.co/QHlcZpxhc2', 'Emily_alicexx completed the Zigzag of a snake quest http://t.co/aztmTe7rrN http://t.co/dt4m4ifNDV', '【サカつくシュート】カップ戦<サカつくスターズカップ>をクリア!\nhttps://t.co/X528wy2tcx', "Apple: Call It the iWatch and We'll Kill You http://t.co/cgNp0DusYw", "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", 'Урожай собран - 1 300 еды! Ты тоже проверь свои грядки! http://t.co/kZlFe1lmFM #iphone, #iphonegames, #gameinsight', 'RT @AppIeOfficiel: Our iPhone 7 http://t.co/d2vCOCOTqt', "RT @Louis_Tomlinson: @JennSelby Google 'original apple logo' and you will see the one printed on my shirt that you reported on. Trying to l…", "RT @AppIeOfflciaI: WE'RE GIVING A NEW IPHONE 6\nRULES:\n1. Follow @comedyortruth\n2. Fav this.\n3. 15 winners will be chosen! http://t.co/4Y0y7…", '5,000 SUBSCRIBERS Give away! iphone 6 6+ Nexus 6 Note 4 http://t.co/LiauOBl0gw', 'RT @ZaynMalikx69: @Louis_Tomlinson is this also a apple logo?. http://t.co/QHlcZpxhc2', "There's no limit to perfection, now Administrative building is better then it was! http://t.co/i5X4hGU6Mg #gameinsight #iphonegames #iphone", 'Alusckがクエスト癒やしの水をクリアしました http://t.co/peYgajYvQY http://t.co/o9jF6iyhRn', 'Emily_alicexx completed the Connoisseur achievement and received rewards http://t.co/aztmTe7rrN http://t.co/kR6N6auxYn']
Answer: EDIT: Seeing your edit this will probably not fix your problem. You might
still want to parse the json file as a whole.
Original Message:
How does your input file `apple.json` look like? It's probably a json file and
you should not read and parse it line for line. Instead try something like:
with open('apple.json') as f:
jsonlist = json.loads(f.read())
for tweet in jsonlist:
#do things
Also, you have some bad practices in your code. With `open('apple.json')` you
are opening a file but never closing it (that's not too bad in this case,
because it will be closed automatically when your script reaches its end, but if
you open lots of files this might cause problems; also, explicit is better than
implicit).
Second you are using `try: [...] except:` which mutes **all** errors. Errors
normally want to tell you something, so you want to either deal with them or
get them reported so you can act on them. If you get errors and you are 100%
sure you can ignore them, do something like:
try:
stuff() #that might throw IndexError
except IndexError as e:
pass
#or deal with the error e.g. log
Code is not tested.
|
Python global empty in class
Question: I tried to get the array parameter values in another class. Where and why am I
wrong here?
> My second python file => myModule.py :
parameters = ([])
class MyFirstClass():
def __init__(self, params):
global parameters
parameters = params
class MySecondClass():
def __init__(self):
global parameters
print parameters
class MyClassWhereIHaveAProblem(http.HTTPFactory):
proto = .....
global parameters
print parameters  # array is empty here
class start_server():
def __init__(self, params):
self.x_params = params[0]  # ip
self.y_params = int(params[1])  # port
global parameters
parameters = params[2]
def start():
reactor.listenTCP(self.y, MyClassWhereIHaveAProblem(), interface=self.x)
> My first python file => Handle.py :
from myModule import MyFisrtClass
from myModule import MySecondClass
from myModule import MyClassWhereIHaveAProblem
from myModule import start_server
class Handle():
def __init__(self):
params = (["vector1", "vector2"])
self.params = (["127.0.0.1","3128", params])
def go_to_another(self):
s = start_server(self.params)
s.start()
if __name__ == '__main__':
H = Handle()
H.go_to_another()
Answer: It looks like you are simply:
* forgetting the second set of double underscores for the special method names
* making typos in the class names, you had "First" spelled "Fisrt"
* you never did anything to use the `MySecondClass` class, so I initialized one in your main routine with: ` y = MySecondClass()`
> Handle.py:
#!/usr/bin/env python
from myModule import MyFirstClass
from myModule import MySecondClass
class Handle():
def __init__(self):
self.params = (["ele1","ele2","ele3"])
def go_to_another(self):
X = MyFirstClass(self.params)
if __name__ == '__main__':
H = Handle()
H.go_to_another()
y = MySecondClass()
> myModule.py:
#!/usr/bin/env python
parameters = ([])
class MyFirstClass():
def __init__(self, params):
global parameters
parameters = params
class MySecondClass():
def __init__(self):
global parameters
print 'this should not be empty: %s' % parameters # array is no longer empty here
> Output:
this should not be empty: ['ele1', 'ele2', 'ele3']
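A minimal, self-contained version of the same pattern (module-level state set by one class and read by another); the class names here are invented for the demo:

```python
parameters = []

class Setter:
    def __init__(self, params):
        global parameters         # needed because we rebind the module-level name
        parameters = params

class Getter:
    def __init__(self):
        self.params = parameters  # reading a global needs no declaration

Setter(['ele1', 'ele2'])
g = Getter()
```

Note that order matters: if `Getter()` ran before `Setter(...)`, it would see the initial empty list. That is likely what happens in the question, since the `print parameters` in the factory class body executes at import time, before `start_server` ever sets the global.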
|
urllib not taking context as a parameter
Question: I'm trying to pass an `ssl.SSLContext` to a `urlopen` call but keep getting the
error:
TypeError: urlopen() got an unexpected keyword argument 'context'
I'm using python 3 and urllib. This has a context parameter defined -
<https://docs.python.org/2/library/urllib.html>. So I don't understand why it
is throwing the error. But either way this is the code:
try:
# For Python 3.0 and later
from urllib.request import urlopen, Request
except ImportError:
# Fall back to Python 2's urllib2
from urllib2 import urlopen, Request
request = Request(url, content, headers)
request.get_method = lambda: method
if sys.version_info[0] == 2 and sys.version_info[1] < 8:
result = urlopen(request)
else:
gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
result = urlopen(request, context=gcontext)
Can someone explain what I am doing wrong?
Answer: According to [`urllib.request.urlopen`
documentation](https://docs.python.org/3/library/urllib.request.html#module-
urllib.request):
> Changed in version 3.4.3: context was added.
the parameter `context` was added in Python 3.4.3. You need to fall back
for lower versions.
* * *
In Python 2.x, it's added in Python 2.7.9.
([`urllib.urlopen`](https://docs.python.org/2/library/urllib.html#urllib.urlopen),
[`urllib2.urlopen`](https://docs.python.org/2/library/urllib2.html#urllib2.urlopen))
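One way to make the fallback explicit is a small version check; `supports_ssl_context` is an invented helper, sketched for both Python lines:

```python
import sys

def supports_ssl_context():
    """True if urlopen() accepts the `context` keyword.

    The keyword was added in CPython 2.7.9 and 3.4.3.
    """
    v = sys.version_info
    if v[0] == 2:
        return v >= (2, 7, 9)
    return v >= (3, 4, 3)
```

Note that the question's check `sys.version_info[1] < 8` would wrongly pass `context` on Python 2.7.8, where the keyword does not exist yet.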
|
How can I plot an axis on MatPlotLib in terms of text, not numbers, taken from a CSV file?
Question: I have a .csv file that looks something like this (small sample):
Year | Month | Carrier | Elapsed Time
1987 | 10    | UN      | 15
1987 | 11    | AM      | 17
1987 | 12    | HK      | 20
I'm plotting a 3D graph in MatPlotLib (Python), where the z-axis (vertical
axis) is Elapsed time, the x-axis is the month, and the y-axis is the carrier.
If I'm not mistaken MatPlotLib only allows the values of each axis to be an
integer and not a string. That's where the problem lies: the carrier values
are strings, or letters, such as UN, AM, and HK. My code so far is:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
readFile = open('/home/andrew/Downloads/1987.txt', 'r')
sepFile = readFile.readlines()[1:50000]
readFile.close()
rect = fig.patch
rect.set_facecolor('white')
X = []
Y = []
Z = []
for plotPair in sepFile:
xAndY = plotPair.split(',')
X.append(int(xAndY[1]))
Y.append(str(xAndY[2])) #Putting str() instead of int() didn't solve the problem
Z.append(int(xAndY[3]))
ax.scatter(X,Y,Z, c ='r', marker='o')
ax.set_xlabel('x axis')
ax.set_ylabel('y axis')
ax.set_zlabel('z axis')
plt.show()
I understand that I could just say x = [UN, AM, HK] but the problem with this
is that the x list would not be taken from the .csv file. The Python program
wouldn't know which point belongs to which carrier name. I want to be able to
tell Python to search the column with the name of the carrier for each point
and then be able to extract that information so it can plot successfully from
the csv file as shown in the picture:
[3D Graph Skeleton](http://postimg.org/image/6u9cnpf9h/)
I'm still a newbie and getting the hang of Python so I thank you so much for
taking the time to answer. Your help is seriously appreciated.
Answer: Credit goes to the answer by Tom, this is just an adaptation.
Also, I'm pretty sure there are better answers out there but this should work
for what you're doing.
    from mpl_toolkits.mplot3d import Axes3D
    import matplotlib.pyplot as plt
    import numpy as np
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    X = {}
    Z = {}
    for plotPair in sepFile:
        xAndY = plotPair.split(',')
        label = str(xAndY[2])
        if label in X:
            X[label].append(int(xAndY[1]))
            Z[label].append(int(xAndY[3]))
        else:
            X[label] = [int(xAndY[1])]
            Z[label] = [int(xAndY[3])]
    Y = [0]
    for label in X:
        ax.scatter(X[label], Y[-1] * np.ones_like(X[label]), Z[label], c='r', marker='o')
        Y.append(Y[-1] + 1)
    ax.set_xlabel('x axis')
    ax.set_ylabel('y axis')
    ax.set_zlabel('z axis')
    ax.set_yticks(Y[:-1])
    ax.set_yticklabels(list(X))
One bonus is that each carrier can have a different colour.
|
django <-> node.js fast communication
Question: I tried `requests`, but it seems slow because establishing the TCP connection
takes long (I don't know how to keep the socket open).
I'm now trying `zerorpc` and it has notion of `persistent connection`.
Django <-> node.js communication works fine for the first message but it fails
with `Lost remote after 10s heartbeat` error from the second attempt.
I am probably missing something obvious.
# following connection step is done in python a module so that it gets called only one time
import zerorpc
client = zerorpc.Client()
client.connect("tcp://127.0.0.1:7015")
def something(...):
# this gets called for a http request, and we are messaging node.js using the zerorpc client.
...
client.call_rpc(message)
Other clients (from command line) can still talk to server and get a response,
so I guess it has to do with the above django code.
Answer: zerorpc uses gevent for cooperative asynchronous IO, while django handles one
request at a time. When django is handling some IO, zerorpc doesn't get its
fair share of CPU time and cannot answer the heartbeat. Turning off the
heartbeat is possible in zerorpc-python (for this very reason) but not in
zerorpc-node!
One solution is to run django on top of the gevent ioloop, it looks like
<http://gunicorn.org/> can be of some help.
|
Dictionary for finding orientation of the words
Question: I am looking for a dictionary that gives the
orientation (positive/negative/neutral) of words, as part of analyzing the
sentiment of a phrase; preferably a source that can be imported into Python
code.
Answer: You seem to be looking for something like
[OpinionFinder](http://mpqa.cs.pitt.edu/lexicons/subj_lexicon/).
This particular link points to a lexicon of 8233 adjectives, verbs and nouns
and their orientation.
You can download it, so you'll be able to simply read the file into python.
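The lexicon ships as plain text, one clue per line. A hedged parser sketch; the field names assume the common `subjclues` format, and `parse_mpqa_line` is an invented helper:

```python
def parse_mpqa_line(line):
    """Extract (word, polarity) from one lexicon line, e.g.
    type=weaksubj len=1 word1=abandoned pos1=adj stemmed1=n priorpolarity=negative
    """
    fields = dict(part.split('=', 1) for part in line.split() if '=' in part)
    return fields.get('word1'), fields.get('priorpolarity')
```

Applied to the whole file, this yields a `{word: polarity}` mapping you can look words up in while scoring a phrase.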
|
Generate an OAuth2 token in a view
Question: Let's say I have an AngularJS application that consumes the REST API of a
Django application.
The Django application has got a built-in OAuth2 provider that can be called
to retrieve an access token and use the protected endpoints of the API. This
provider is using `django-oauth-toolkit`.
Let's assume there is a registered client with "password" grant type, so that
the end users only need to provide their credentials in the front-end in order
to get an access token from the back-end.
At some point we want to add some support for social networks login and we
decide to use `python-social-auth` (PSA) to that end. Here is the workflow I
want to achieve:
1. The user logs in on Facebook from the front-end (via the Facebook SDK) and we get an access token back from the OAuth2 provider of Facebook.
2. We send the Facebook token to an endpoint of our REST API. This endpoint uses the Facebook token and `django-social-auth` to authenticate the user in our Django application (basically matching a Facebook account to a standard account within the app).
3. If the authentication succeeds, the API endpoint **requests an access token from the OAuth2 provider for this newly authenticated user**.
4. The Django access token is sent back to the front-end and can be used to access the REST API in exactly the same way that a regular user (i.e. logged in with his credentials) would do.
Now my problem is: **how do I achieve step 3?** I first thought I would
register a separate OAuth2 client with [Client Credentials
Grant](https://tools.ietf.org/html/rfc6749#section-4.4) but then the generated
token is not user-specific so it does not make sense. Another option is to use
the `TokenAuthentication` from DRF but that would add too much complexity to
my project. I already have an OAuth server and I don't want to set up a second
token provider to circumvent my problem, unless this is the only solution.
I think my understanding of PSA and `django-oauth-toolkit` is not deep enough
to find the best way of reaching my goal, but there must be a way.
[Help!](https://www.youtube.com/watch?v=XvbWL3TlaRU)
Answer: I managed to get something working using urllib2. I can't speak towards
whether or not this is good practice, but I can successfully generate an
OAuth2 token within a view.
Normally when I'd generate an access token with cURL, it'd look like this:
curl -X POST -d "grant_type=password&username=<user_name>&password=<password>" -u"<client_id>:<client_secret>" http://localhost:8000/o/token/
So we're tasked with making urllib2 accomplish this. After playing around for
some bit, it is fairly straightforward.
    import urllib, urllib2, base64, json
# Housekeeping
    token_url = 'http://localhost:8000/o/token/'
data = urllib.urlencode({'grant_type':'password', 'username':<username>, 'password':<password>})
authentication = base64.b64encode('%s:%s' % (<client_id>, <client_secret>))
# Down to Business
request = urllib2.Request(token_url, data)
request.add_header("Authorization", "Basic %s" % authentication)
access_credentials = urllib2.urlopen(request)
json_credentials = json.load(access_credentials)
I reiterate, I do not know whether this is bad practice, and I have not looked
into whether it causes any issues with Django. AFAIK this will do
the trick (as it did for me).
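For reference, the `Authorization` header that curl builds from `-u` can be reproduced by hand. A sketch (`basic_auth_header` is an invented name; the encoding works the same on Python 2 and 3):

```python
import base64

def basic_auth_header(client_id, client_secret):
    """Build the header value curl sends for -u"<client_id>:<client_secret>"."""
    raw = '{}:{}'.format(client_id, client_secret).encode('ascii')
    return 'Basic ' + base64.b64encode(raw).decode('ascii')
```

This is exactly what `request.add_header("Authorization", ...)` receives in the snippet above.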
|
Can I take a screenshot on a windows machine that is running without a monitor?
Question: I have a bank of virtual machines (running windows) that I remote into. As
such, none of these machines have a monitor attached, they are only accessed
by Remote Desktop.
I want to get a screenshot of an application that is running on the desktop.
What I have found is that if I am not connected via Remote Desktop, then the
screen is not rendering and I am unable capture the screen (the best I've
managed is getting a black image).
Is there any way to force the desktop to render for the purpose of my screen
grab?
EDIT: OK to be more specific, here is some Python code that takes a screenshot
provided I am remoted in to the machines:
import win32ui
import win32gui
hwnd = win32gui.FindWindow(None, window_name)
wDC = win32gui.GetWindowDC(hwnd)
dcObj = win32ui.CreateDCFromHandle(wDC)
cDC=dcObj.CreateCompatibleDC()
dataBitMap = win32ui.CreateBitmap()
dataBitMap.CreateCompatibleBitmap(dcObj, width, height)
cDC.SelectObject(dataBitMap)
cDC.BitBlt((0, 0), (width, height), dcObj, (0, 0), win32con.SRCCOPY)
dataBitMap.SaveBitmapFile(cDC, image_name)
# Free Resources
dcObj.DeleteDC()
cDC.DeleteDC()
win32gui.ReleaseDC(hwnd, wDC)
win32gui.DeleteObject(dataBitMap.GetHandle())
If I run this while I am remoted in, it works fine. As soon as I am not
remoted in, I get the following error:
> win32ui.error: BitBlt failed
This error is a result of the screen not being rendered when no one is remoted
in.
I need a solution that will allow me to get a screenshot in this scenario,
when I am not connected via remote desktop.
EDIT 2: To be clear, the code is running on the VM itself. But it is running
when no-one is remoted in to the machine.
Answer: An obvious workaround is to use two virtual machines: the master host runs a
remote session to the target one. This also allows input actions like `mouse_event` or
`keybd_event`. The only requirement is not to minimize the RDP window (or VNC
software, it doesn't matter), though it may be out of focus.
It's a widely used method for build/test machine pools. I worked on a big testing
team for several years and never heard of other approaches.
P.S. How about [Pillow](https://pypi.python.org/pypi/Pillow/2.7.0) or
[pyscreenshot](https://pypi.python.org/pypi/pyscreenshot)?
|
Prefer BytesIO or bytes for internal interface in Python?
Question: I'm trying to decide on the best internal interface to use in my code,
specifically around how to handle file contents. Really, the file contents are
just binary data, so bytes is sufficient to represent them.
I'm storing files in different remote locations, so have a couple of different
classes for reading and writing. I'm trying to figure out the best interface
to use for my functions. Originally I was using file paths, but that was
suboptimal because it meant that disk was always used (which meant lots of
clumsy tempfiles).
There are several areas of the code that have the same requirement, and would
directly use whatever was returned from this interface. As a result whatever
abstraction I choose will touch a fair bit of code.
What are the various tradeoffs to using BytesIO vs bytes?
def put_file(location, contents_as_bytes):
def put_file(location, contents_as_fp):
def get_file_contents(location):
def get_file_contents(location, fp):
Playing around I've found that using the File-Like interfaces (BytesIO, etc)
requires a bit of administration overhead in terms of `seek(0)` etc. That
raises a questions like:
* is it better to `seek` before you start, or after you've finished?
* do you `seek` to the start or just operate from the position the file is in?
* should you `tell()` to maintain the position?
* looking at something like `shutil.copyfileobj` it doesn't do any seeking
One advantage I've found with using file-like interfaces instead is that it
allows for passing in the fp to write into when you're retrieving data. Which
seems to give a good deal of flexibility.
def get_file_contents(location, write_into=None):
if not write_into:
write_into = io.BytesIO()
# get the contents and put it into write_into
return write_into
get_file_contents('blah', file_on_disk)
get_file_contents('blah', gzip_file)
get_file_contents('blah', temp_file)
get_file_contents('blah', bytes_io)
new_bytes_io = get_file_contents('blah')
# etc
Is there a good reason to prefer BytesIO over just using fixed bytes when
designing an interface in python?
Answer: The benefit of `io.BytesIO` objects is that they implement a common-ish
interface (commonly known as a 'file-like' object). `BytesIO` objects have an
internal pointer (whose position is returned by `tell()`) and for every call
to `read(n)` the pointer advances `n` bytes. Ex.
import io
buf = io.BytesIO(b'Hello world!')
buf.read(1) # Returns b'H'
buf.tell() # Returns 1
buf.read(1) # Returns b'e'
buf.tell() # Returns 2
    # Set the pointer back to 0.
    buf.seek(0)
    buf.read(1) # Returns b'H' again, like the first call.
In your use case, neither the `bytes` object nor the `io.BytesIO` object may be
the best solution. Both hold the complete contents of your
files in memory.
Instead, you could look at `tempfile.TemporaryFile`
(<https://docs.python.org/3/library/tempfile.html>).
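If holding whole files in memory is the concern, `tempfile.SpooledTemporaryFile` is a useful middle ground: it behaves like `BytesIO` until the data grows past `max_size`, then transparently spills to disk. A short sketch:

```python
import tempfile

# Stays in memory up to max_size bytes, then rolls over to a real temp file
with tempfile.SpooledTemporaryFile(max_size=1024) as f:
    f.write(b'file contents here')
    f.seek(0)            # rewind before reading, as with BytesIO
    data = f.read()
```

Because it implements the same file-like interface, it can be passed to the `write_into=` parameter in the question's `get_file_contents` design unchanged.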
|
Numpy Not Allowing Use of Python 'Sum' Function?
Question: I just installed Pylab and Matplotlib to create a graph that is all working
fine. Then I went to open another python file for my program and noticed an
error:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\numpy\core\fromnumeric.py", line 1708, in sum
sum = a.sum
AttributeError: 'list' object has no attribute 'sum'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\JD\git\ComputingCoursework\Coursework\Implementation\Files\AddDataGUI.py", line 768, in <module>
launcher = AddDataWindow('Hardware')
File "C:\Users\JD\git\ComputingCoursework\Coursework\Implementation\Files\AddDataGUI.py", line 33, in __init__
self.col = sum([[i,''] for i in self.col],[]) ## adds a space in between each item in self.col tuple
File "C:\Python34\lib\site-packages\numpy\core\fromnumeric.py", line 1711, in sum
out=out, keepdims=keepdims)
File "C:\Python34\lib\site-packages\numpy\core\_methods.py", line 32, in _sum
return umr_sum(a, axis, dtype, out, keepdims)
TypeError: cannot perform reduce with flexible type
Here is the code that seems to be causing an error
self.col = sum([[i,''] for i in self.col],[])
It had been working fine before numpy was installed. Help!
Answer: Never use `from pylab import *` or `from numpy import *` since these imports
would overwrite the builtin definition of sum with NumPy's sum function. Use
of one of these imports might be the cause of the problem you are seeing.
Instead use `import numpy as np` so you can control access to NumPy functions
explicitly (e.g. with `np.sum`).
Other reasons to avoid using `from module import *` in general are that it
pollutes the global namespace and makes it harder to trace where variables
have been defined.
|
Write image to webpage with python 3
Question: I have currently got this code set up:
import time
import http.server
import socketserver
import mimetypes
import os
HOST_NAME = 'localhost'
PORT = 8000
def load(self):
with open(self, 'r') as self:
self = self.read()
return self.encode('UTF-8')
class MyHandler(http.server.BaseHTTPRequestHandler):
def do_HEAD(self):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
def do_GET(self):
if self.path == "/":
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(load('index.html'))
if self.path.endswith('.css'):
self.send_response(200)
self.send_header("Content-type", "text/css")
self.end_headers()
dest = self.path.replace("/", "")
self.wfile.write(load(dest))
if self.path == "/bg.jpg":
self.send_response(200)
self.send_header("Content-type", "image/jpeg")
self.end_headers()
if __name__ == '__main__':
httpd = socketserver.TCPServer(("", PORT), MyHandler)
print(time.asctime(), "Server Starts - %s:%s" % ("", PORT))
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
print(time.asctime(), "Server Stops - %s:%s" % ("", PORT))
My webpage seems to be working: index.html and the CSS load when I
open the page, but the image is not coming through. Has anyone got
any idea why?
Answer: To send an image over HTTP, just write the image data directly to the socket,
like for any other type of file. You can't use your `load()` function for
this, since that encodes the text as UTF-8. Instead, you should open the file
with mode `rb` or similar, read a `bytes` from that filehandle, then write it
directly to `self.wfile` in the HTTP handler.
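A minimal sketch of such a binary loader (`load_bytes` is an invented name, mirroring the question's `load()` but without any decoding):

```python
def load_bytes(path):
    """Read a file as raw bytes -- no decoding, suitable for images."""
    with open(path, 'rb') as f:
        return f.read()
```

In `do_GET`, the `/bg.jpg` branch would then end with `self.wfile.write(load_bytes('bg.jpg'))` after the headers are sent.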
|
importing package and modules from another directory in python
Question: I have a bunch of python scripts in the path
/home/yotam/Applications/pyFoo
one of them is `__init__.py` which, to my understanding, turns the `pyFoo`
folder into a package. The folder also has a `src` subfolder, which stores an
`__init__.py` file of its own, as well as some other helpful scripts.
I want to import one of the files (modules?), called `Bar.py`, that uses
scripts from `/home/yotam/Applications/pyFoo/src`. If I try to load it from
the python interpreter while in the folder `/home/yotam/Applications`, using
>>> from pyFoo import Bar as B
everything is fine. If, however I want to run it from other folders, e.g. my
home directory, I get the error
ValueError: Attempted relative import in non-package
How can I import `Bar.py` from anyplace on my machine?
Answer: Just add the directory to your `sys.path`:
import sys
sys.path.append( '/path/to/libs' )
import my_lib_in_another_dir
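A self-contained demonstration of the idea (the package name `pyfoo_demo` is invented so the example runs anywhere; substitute `/home/yotam/Applications` and `pyFoo` in practice):

```python
import os
import sys
import tempfile

# Build a throwaway package so the example is runnable anywhere
parent = tempfile.mkdtemp()
pkg = os.path.join(parent, 'pyfoo_demo')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'bar.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.append(parent)        # append the directory *containing* the package
from pyfoo_demo import bar
```

The key point is that `sys.path` must contain the parent of the package directory, not the package directory itself.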
|
Save results in the files with same name as input files
Question: I am working on a python code which is as follows:
import os
count = 0
for doc in os.listdir('/home/krupa/Krupa/Mirellas_Image_Annotation_Data/Test/Html_Files/Texts'):
if doc.endswith(".txt"):
with open(doc, 'r') as f:
single_line = ''.join([line for line in f])
single_space = ' '.join(single_line.split())
with open(doc.format(count) , "w") as doc:
doc.write(single_space)
count += 1
else:
continue
Here I want to write the output in the same file name but with different
extension (say .key). How do I do it? Please help. Thanks in advance
Answer: Use `os.path.splitext` to swap the extension, and join the directory onto each
filename so the script works from anywhere:
    import os
    src_dir = '/home/krupa/Krupa/Mirellas_Image_Annotation_Data/Test/Html_Files/Texts'
    for doc in os.listdir(src_dir):
        if not doc.endswith('.txt'):
            continue
        with open(os.path.join(src_dir, doc), 'r') as f:
            single_space = ' '.join(f.read().split())
        # Same base name, new extension: foo.txt -> foo.key
        key_path = os.path.join(src_dir, os.path.splitext(doc)[0] + '.key')
        with open(key_path, 'w') as out:
            out.write(single_space)
|