how do I run command line from python
Question: I have the following command I run in terminal:
mongoexport --db database_name --collection agents --type=csv --fieldFile agFieldsTest.txt --out file/path/agTestInfo.csv
I tried to run it using:
>>> import subprocess
>>> subprocess.call(["mongoexport --db database_name --collection agents --type=csv --fieldFile agFieldsTest.txt --out file/path/agTestInfo.csv"])
I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 711, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1308, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Answer: The best way is to break up the command into individual "words":
>>> subprocess.call(["mongoexport", "--db", "database_name", "--collection", "agents", "--type=csv", "--fieldFile", "agFieldsTest.txt", "--out", "file/path/agTestInfo.csv"])
Alternatively, you can pass the whole command as a single string with `shell=True` and let the shell do the splitting for you:
>>> subprocess.call("mongoexport --db database_name --collection agents --type=csv --fieldFile agFieldsTest.txt --out file/path/agTestInfo.csv", shell=True)
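If you would rather not split the command up by hand, `shlex.split` can do it for you (a small sketch of the same call):
import shlex
import subprocess

cmd = ("mongoexport --db database_name --collection agents --type=csv "
       "--fieldFile agFieldsTest.txt --out file/path/agTestInfo.csv")
subprocess.call(shlex.split(cmd))  # shlex.split breaks the string on shell-like word boundaries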
|
Sqlite using Python3 error?
Question: I am trying to insert some data into a sqlite database as follows:
param = '("593863695396044801","Ivan F. Siahaan","307783731","None","65","83","Thu Apr 30 19:45:13 +0000 2015","xyz")'
>>> conn.execute("INSERT INTO data "+param)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
sqlite3.OperationalError: near ")": syntax error
I initialized the connection to the database as follows:
from sqlite3 import dbapi2 as sqlite
conn = sqlite.connect('dataStore')
What am I doing wrong here?
Answer: You are getting your SQL syntax wrong; both the quoting and the `INSERT`
statement syntax are incorrect.
SQLite doesn't support double quotes around string values, only single quotes.
From the [SQLite keywords
documentation](https://www.sqlite.org/lang_keywords.html):
> `'keyword'` A keyword in single quotes is a string literal.
> `"keyword"` A keyword in double-quotes is an identifier.
You are using double quotes, but for string literals you need to use single
quotes instead.
You are also missing the `VALUES` keyword; see the [`INSERT` statement
documentation](https://www.sqlite.org/lang_insert.html); the pattern is
`INSERT INTO <table> VALUES (<value list>)`.
You really want to leave quoting to the database itself, however. Use SQL
parameters here:
statement = "INSERT INTO data VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
params = (593863695396044801, "Ivan F. Siahaan", 307783731, None, 65, 83, "Thu Apr 30 19:45:13 +0000 2015", "xyz")
conn.execute(statement, params)
Note that this would also let you use the Python `None` singleton to insert a
`NULL` in the database.
|
search regex line by line and print multiple lines after match
Question: just started writing scripts.
I am trying to write 4 lines after matching a regex in the first line and do
this through a very large file with thousands of lines.
#!/usr/local/bin/python
import sys
import string
import re
print"what file would you like to search?"
filename = sys.stdin.readline()
filename = filename.rstrip()
print "enter a barcode"
barcode=sys.stdin.readline()
barcode=barcode.rstrip()
regex=":0:"+barcode
infile = open(filename, "r")
outfile = open("sample_write.fastq", "w")
regex_object = re.compile(regex, re.I)
for line in infile:
    m = regex_object.search(line)
    if m:
        outfile.write(line)
exit
This writes the lines that match the string (barcode) into my outfile; however, I
need to write both the line with the barcode and the following 3 lines after the
match into my target file. Any ideas?
Answer: You can use the following to match
.*(BARCODE).*\n.*\n.*\n.* //replace BARCODE with your variable
and write the matches (`\0`) to your target file
See [DEMO](https://regex101.com/r/tC0lJ0/4)
**Edit:** Use `regex =".*("+barcode+").*\n.*\n.*\n.*"`
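If you prefer to keep the line-by-line loop (handy for FASTQ files, which come in 4-line records), a small sketch that writes the matching line plus the next three, reusing `infile`, `outfile` and `regex_object` from the question:
for line in infile:
    if regex_object.search(line):
        outfile.write(line)
        # the file object is an iterator, so pull the next 3 lines of the record
        for _ in range(3):
            outfile.write(next(infile, ""))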
|
Python draw rectangle and color in range
Question:
import brewer2mpl
import numpy as np
a = np.random.rand(3) # a[0] represents value of rect1, a[1] of rect[2]....
def get_colors():
"""
Get colorbrewer colors, which are nicer
"""
bmap = brewer2mpl.get_map('Accent','qualitative',6)
return bmap.mpl_colors
rect1 = matplotlib.patches.Rectangle((2,2), 1, 1, color='yellow')
ax.add_patch(rect1)
rect2 = matplotlib.patches.Rectangle((3,3), 1, 1, color='green')
ax.add_patch(rect2)
rect3 = matplotlib.patches.Rectangle((5,5), 1, 1, color='red')
ax.add_patch(rect3)
I would like the color of the rectangle to vary based on the value of the
vector 'a'. Instead of pure yellow/green/red color, select a color from a
range, preferably the brewer2mpl colors
Answer: From what I can read, mpl_colors is a list of colors. The colors are 3-tuples,
with a range in (0,1) representing the rgb amount. You can set a color to be
such a triple.
import pylab as py
py.plot([0,1],[2,3], color = (1,0,0.))
py.savefig('tmp.png')
So all you have to do is take the entry in the vector `a` (I'll call it
`avalue` below, with `0<avalue<1`) and map it to an appropriate integer. For
example
colorlist = get_colors()
colorlength = len(colorlist)
py.plot([0,1],[2,3], color = colorlist[int(avalue*colorlength)])
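Applied to the rectangles from the question, this could look like the following sketch (it assumes `ax` and the random vector `a` exist as in the question):
import matplotlib.patches

colorlist = get_colors()
for value, (x, y) in zip(a, [(2, 2), (3, 3), (5, 5)]):
    # clamp the index so a value of 1.0 doesn't fall off the end of the list
    idx = min(int(value * len(colorlist)), len(colorlist) - 1)
    ax.add_patch(matplotlib.patches.Rectangle((x, y), 1, 1, color=colorlist[idx]))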
|
Python script that launches MPV
Question: I wrote a python script to make the use of mpv easier (cim is the title).
Here's the script:
from sh import mpv
cim=input("Cím: ")
a=int(input("with start(1) | without start (2) "))
b=int(input("with sub (1) | without sub(2) "))
if a == 1:
    #w/ start
    c=input("xx:yy:zz : ")
    if b == 1:
        #w/ sub
        sh.mpv(cim,"--sub-file=",d,"start=",c)
    elif b == 2:
        #w/ sub
        sh.mpv(cim,"start=",c)
elif a == 2:
    #nincs start
    if b == 1:
        #w/ sub
        d=input("sub: ")
        sh.mpv(cim,"--sub-file=",d)
    if b == 2:
        sh.mpv(cim)
When I try to run it:
RAN:
'/usr/bin/mpv Red Museum.avi --sub-file= eng.srt'
STDOUT:
Error parsing option sub-file (option requires parameter)
Setting commandline option --sub-file= failed.
Answer: The problem appears to be the extra space between `--sub-file=` and `eng.srt`.
You could fix it by removing the `=` so that `mpv` expects them to be
separated by a space. i.e. replace the line
sh.mpv(cim,"--sub-file=",d)
with
sh.mpv(cim,"--sub-file", d)
* * *
If that doesn't work you could get rid of the extra space by using a string
concatenation:
sh.mpv(cim,"--sub-file=" + d)
|
python how to store a list in a file and get it back
Question: I have a list with a lot of things in it. At the end of my script I would like to
store it in a file.
And when I start another script another day, I would like to extract my list
from my file and then use it.
I don't know if it's the best way to do this.
Here is what I would like to do, in "code", for those who didn't understand:
#script1
l = ('hey', 'ho', 'hello', 'world')
#save list into myfile
#script2
l = getmylistfrommyfile()
print(l)
>>>('hey', 'ho', 'hello', 'world')
#I can now use my list !
Answer: If you are looking for the best and most pythonic way of doing this then
[Pickling](https://docs.python.org/2/library/pickle.html) is a better idea. It
is as simple as :
#Save a dictionary into a pickle file.
import pickle
favorite_color = { "lion": "yellow", "kitty": "red" }
pickle.dump( favorite_color, open( "save.p", "wb" ) )
# Load the dictionary back from the pickle file.
favorite_color = pickle.load( open( "save.p", "rb" ) )
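Applied to the two scripts from the question, a sketch (the filename `mylist.p` is just an example):
#script1
import pickle
l = ['hey', 'ho', 'hello', 'world']
with open("mylist.p", "wb") as f:
    pickle.dump(l, f)

#script2
import pickle
with open("mylist.p", "rb") as f:
    l = pickle.load(f)
print(l)   # ['hey', 'ho', 'hello', 'world']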
|
Python 3.x - toggling fullscreen in tkinter
Question: So far, I have a command that makes my window fullscreen. Now, predictably, I
want to be able to exit fullscreen also.
This is the code I have:
def toggFullscreen(self, win):
    def exitFullscreen(event=None):
        win.withdraw()
        win.deiconify()
        win.overrideredirect(False)
        win.geometry('1024x700')
    w = win.winfo_screenwidth()
    h = win.winfo_screenheight()
    win.overrideredirect(True)
    win.geometry('%dx%d+0+0' % (w, h))
    win.focus_set()
    win.bind('<Escape>', exitFullscreen)
But the issue is that I can't get the window frame to reappear. I thought that
doing `win.overrideredirect(False)` would work, but it didn't.
Answer: Not sure why it isn't working on your computer, but try this code sample:
#!python3
import tkinter as tk

geom = ""

def fullscreen():
    global geom
    geom = root.geometry()
    w = root.winfo_screenwidth()
    h = root.winfo_screenheight()
    root.overrideredirect(True)
    root.geometry('%dx%d+0+0' % (w, h))

def exitfullscreen():
    global geom
    root.overrideredirect(False)
    root.geometry(geom)

root = tk.Tk()
tk.Button(root, text="Fullscreen", command=fullscreen).pack()
tk.Button(root, text="Normal", command=exitfullscreen).pack()
root.mainloop()
The one thing I'm making sure I do is to store the geometry before going
fullscreen, and then re-apply it when I exit fullscreen. The global
statement was needed because without it the `fullscreen` function
stored the geometry in a local variable instead of the one I created at the
top.
|
How to Prepare Data for DecisionTreeClassifier Scikit
Question: I have the following data in CSV; the top row indicates column headers, the
data is indexed, and all the data is discretized. I need to build a decision tree
classifier model. Could someone guide me with this?
,age,workclass,fnlwgt,education,education-num,marital-status,occupation,relationship,race,sex,capital-gain,capital-loss,hours-per-week,native-country,class
0,"(16.927, 41.333]", State-gov,"(10806.885, 504990]", Bachelors,"(12, 16]", Never-married, Adm-clerical, Not-in-family, White, Male,"(0, 5000]",,"(30, 50]", United-States, <=50K
1,"(41.333, 65.667]", Self-emp-not-inc,"(10806.885, 504990]", Bachelors,"(12, 16]", Married-civ-spouse, Exec-managerial, Husband, White, Male,,,"(0, 30]", United-States, <=50K
2,"(16.927, 41.333]", Private,"(10806.885, 504990]", HS-grad,"(8, 12]", Divorced, Handlers-cleaners, Not-in-family, White, Male,,,"(30, 50]", United-States, <=50K
3,"(41.333, 65.667]", Private,"(10806.885, 504990]", 11th,"(-1, 8]", Married-civ-spouse, Handlers-cleaners, Husband, Black, Male,,,"(30, 50]", United-States, <=50K
4,"(16.927, 41.333]", Private,"(10806.885, 504990]", Bachelors,"(12, 16]", Married-civ-spouse, Prof-specialty, Wife, Black, Female,,,"(30, 50]", Cuba, <=50K
My approach so far:
df, filen = decision_tree.readCSVFile("../Data/discretized.csv")
print df[:3]
newdf = decision_tree.catToInt(df)
print newdf[:3]
model = DecisionTreeClassifier(random_state=0)
print cross_val_score(model, newdf, newdf[:,14], cv=10)
catToInt function:
def catToInt(df):
    mapper = {}
    categorical_list = list(df.columns.values)
    newdf = pd.DataFrame(columns=categorical_list)
    #Converting Categorical Data
    for x in categorical_list:
        mapper[x] = preprocessing.LabelEncoder()
    for x in categorical_list:
        someinput = df.__getattr__(x)
        newcol = mapper[x].fit_transform(someinput)
        newdf[x] = newcol
    return newdf
The error :
print cross_val_score(model, newdf, newdf[:,14], cv=10)
File "C:\Python27\lib\site-packages\pandas\core\frame.py", line 1787, in __getitem__
return self._getitem_column(key)
File "C:\Python27\lib\site-packages\pandas\core\frame.py", line 1794, in _getitem_column
return self._get_item_cache(key)
File "C:\Python27\lib\site-packages\pandas\core\generic.py", line 1077, in _get_item_cache
res = cache.get(item)
TypeError: unhashable type
So I am able to transform the categorical data to int, but I think I am missing
something in the next step.
Answer: This is what I got as the solution by following the comments above and more
searching. I got the intended result, but I understand there would be a more
refined way to do this.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import cross_val_score
import pandas as pd
from sklearn import preprocessing

def main():
    df, _ = readCSVFile("../Data/discretized.csv")
    newdf, classl = catToInt(df)
    model = DecisionTreeClassifier()
    print cross_val_score(model, newdf, classl, cv=10)

def readCSVFile(filepath):
    df = pd.read_csv(filepath, index_col=0)
    (_, _, sufix) = filepath.rpartition('\\')
    (prefix, _, _) = sufix.rpartition('.')
    print "csv read and converted to dataframe !!"
    # df['class'] = df['class'].apply(replaceLabel)
    return df, prefix

def catToInt(df):
    # replace the Nan with "NA" which acts as a unique category
    df.fillna("NA", inplace=True)
    mapper = {}
    # make list of all column headers
    categorical_list = list(df.columns.values)
    # exclude the class column
    categorical_list.remove('class')
    newdf = pd.DataFrame(columns=categorical_list)
    # Converting Categorical Data to integer labels
    for x in categorical_list:
        mapper[x] = preprocessing.LabelEncoder()
    for x in categorical_list:
        newdf[x] = mapper[x].fit_transform(df.__getattr__(x))
    # make an encoded class series:
    le = preprocessing.LabelEncoder()
    myclass = le.fit_transform(df.__getattr__('class'))
    # newdf is the dataframe with all columns except the class column, and myclass is the class column
    return newdf, myclass

main()
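For reference, a more compact way to label-encode every categorical column at once is to `apply` a `LabelEncoder` over the whole frame (a sketch, assuming the same `df` with a `class` column; not the exact code used above):
from sklearn import preprocessing

encoded = df.fillna("NA").apply(preprocessing.LabelEncoder().fit_transform)
X = encoded.drop('class', axis=1)   # features
y = encoded['class']                # labels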
Some links other than the comments above that helped me:
1. <http://fastml.com/converting-categorical-data-into-numbers-with-pandas-and-scikit-learn/>
2. <http://biggyani.blogspot.com/2014/08/using-onehot-with-categorical.html>
Output:
csv read and converted to dataframe !!
[ 0.83418628 0.83930399 0.83172979 0.82804504 0.83930399 0.84254709
0.82985258 0.83022732 0.82428835 0.83678067]
It might help some novice user of sklearn like me. Suggestions/edits and
better answers are welcome.
|
Equation solver webapp in Python
Question: I am making an equation solver webapp in Python (Flask framework). The app takes
user input [a, b, c], sends it to the server by calling a server-side Python
function, and gets the results displayed using AJAX.
Part of Client JavaScript:
$.getJSON($SCRIPT_ROOT + '/_serverFunction', { form data });
Server Python function:
@app.route('/_solve')
def solve():
    a = request.args.get('a')
    b = request.args.get('b')
    c = request.args.get('c')
    cof = [a, b, c]
    roots = np.roots(cof)
    return jsonify(roots)
The above works just fine and I get the results displayed as per the input.
However, I would also like to show a graph representing the solution next to
the above results.
I am able to generate a dummy static graph at server using matplotlib and send
to the client with the following:
<img src="/image.jpg" id='bg'>
Server side python code:
@app.route('/image.jpg')
def image_jpeg():
    image = StringIO()
    plot(image)
    image.seek(0)
    return send_file(image,
                     attachment_filename="image.jpg",
                     as_attachment=True)

def plot(image):
    x = np.linspace(0, 100)
    y = np.sin(x)
    pyplot.plot(x, y)
    pyplot.savefig(image, format='jpg')
The above works and I get a static curve when I load the page. But I would like
this graph to change every time the user submits input to the server to solve the
equation.
How do I make both the calls to solve() and image_jpeg() from the same
AJAX call?
How do I share the user data [a, b, c] across functions so the graph can
be generated and sent back?
**_Update:_**
$('img').attr('src',"data:image/png;base64"+ "," + data.imgdata);
On server:
imgdata = image.getvalue()
import base64
imgdata = base64.b64encode(imgdata)
imgData is passed along with json.
Answer: One request can't become two, unfortunately, but there is a way of doing what
you want.
In order to achieve this, you could send the image as a [data
url](https://css-tricks.com/data-uris/) inside the JSON you request from your
server, then in JS you simply set the image element's **src** attribute the
data.
You would start by moving the **image_jpeg** function into your solve method,
since you have the image data in a string already all you need to do is encode
it into base64 and prefix it with **"data:image/png;base64,"** changing the
mime to suit the image format.
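A minimal sketch of such a combined endpoint, reusing the `plot` helper from the question (Python 2's `StringIO`; the float conversions and the `jsonify` keywords are assumptions, not the asker's exact code):
import base64
from StringIO import StringIO

@app.route('/_solve')
def solve():
    a = float(request.args.get('a'))
    b = float(request.args.get('b'))
    c = float(request.args.get('c'))
    roots = np.roots([a, b, c])

    image = StringIO()
    plot(image)  # could be changed to plot the polynomial a*x**2 + b*x + c
    imgdata = "data:image/jpeg;base64," + base64.b64encode(image.getvalue())

    return jsonify(roots=[str(r) for r in roots], imgdata=imgdata)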
Also, to complete the answer, you can share user data across functions. You
would usually use Cookies, in Flask they wrap cookies in Sessions to make it
even easier for you, [take a
look](http://flask.pocoo.org/docs/0.10/quickstart/#sessions). That being said,
I don't think they fit this problem.
|
python querying a json objectpath
Question: I have a nested JSON structure and I'm using objectpath (the Python API version), but I
don't understand how to select and filter some of the information (more precisely the
nested information in the structure).
E.g. I want to select the "description" of the action "reading" for the user
"John".
JSON:
{
"user":
[
"actions":
[
{
"name": "reading",
"description": "blablabla"
}
]
"name": "John"
]
}
CODE:
$.user[@.name is 'John' and @.actions.name is 'reading'].(actions.description)
but it doesn't work (it returns an empty set, although my JSON is not empty). Any suggestions?
Answer: Is this what you are trying to do?
import objectpath
data = {
    "user": {
        "actions": {
            "name": "reading",
            "description": "blablabla"
        },
        "name": "John"
    }
}
tree = objectpath.Tree(data)
result = tree.execute("$.user[@.name is 'John'].actions[@.name is 'reading'].description")
for entry in result:
    print entry
**Output**
blablabla
I had to fix your JSON. Also, `tree.execute` returns a generator. You could
replace the `for` loop with `print result.next()`, but the `for` loop seemed
more clear.
|
"struct.error: unpack requires a string argument of length 4" when slicing [:3]
Question: I have a problem with a little server-client assignment in python 2.7.
The client can send 5 types of requests to the server:
1. get the server's IP
2. get contents of a directory on the server
3. run cmd command on the server and get the output
4. open a calculator on the server
5. disconnect
This is the error I get:
error:
msg_type, data_len = unpack("BH", client_structs[:3])
struct.error: unpack requires a string argument of length 4
Code:
client_structs = client_soc.recv(1024)
msg_type, data_len = unpack("BH", client_structs[:3])
Doesn't the substring contain 4 chars including the null?
Would appreciate explanation about this error + how to solve it.
Entire server code:
__author__ = 'eyal'
from struct import pack, unpack, calcsize
import socket
from os import listdir
from subprocess import check_output, call
def server():
    ser_soc = socket.socket()
    ser_soc.bind(("0.0.0.0", 8080))
    ser_soc.listen(1)
    while True:
        accept_flag = raw_input("Would you like to wait for a client? (y/n) ")
        if accept_flag == "y":
            client_soc, client_address = ser_soc.accept()
            while True:
                client_structs = client_soc.recv(1024)
                data_size = calcsize(client_structs) - 3
                data_str = 'c' * data_size
                unpacked_data = unpack("BH" + data_str, client_structs)
                if unpacked_data[0] == 1:
                    ip = socket.gethostbyname(socket.gethostname())
                    ip_data = 'c' * len(ip)
                    to_send = pack("BH" + str(len(ip)) + ip_data, unpacked_data[0], len(ip), ip)
                elif unpacked_data[0] == 2:
                    content = listdir(str(unpacked_data[2]))
                    content_str = "\r\n".join(content)
                    content_data = 'c' * len(content_str)
                    to_send = pack("BH" + str(len(content_str)) + content_data, unpacked_data[0],
                                   len(content_str), content_str)
                elif unpacked_data[0] == 3:
                    command = str(unpacked_data[2:]).split()
                    output = check_output(command)
                    message_data = 'c' * len(output)
                    to_send = pack("BH" + message_data, unpacked_data[0], len(output), output)
                elif unpacked_data[0] == 4:
                    call("gnome-calculator")
                    msg_data = 'c' * len("The calculator is open.")
                    to_send = pack("BH" + msg_data, unpacked_data[0], len("The calculator is open."),
                                   "The calculator is open.")
                elif unpacked_data[0] == 5:
                    client_soc.close()
                    break
                else:
                    to_send = pack("BH" + 'c' * len("invalid message type, try again"),
                                   unpacked_data[0], len("invalid message type, try again"),
                                   "invalid message type, try again")
                if unpacked_data[0] != 5:
                    client_soc.send(to_send)
        else:
            break
    ser_soc.close()

def main():
    server()

if __name__ == "__main__":
    main()
Entire client code:
__author__ = 'eyal'
from struct import pack, unpack, calcsize
import socket
def client():
    my_soc = socket.socket()
    my_soc.connect(("127.0.0.1", 8080))
    while True:
        send_flag = raw_input("Would you like to send the server a request? (y/n) ")
        if send_flag == "y":
            msg_code = input("What type of request would you like to send?\n"
                             "1. Get the server's IP address.\n"
                             "2. Get content of a directory on the server.\n"
                             "3. Run a terminal command on the server and get the output.\n"
                             "4. Open a calculator on the server.\n"
                             "5. Disconnect from the server.\n"
                             "Your choice: ")
            if msg_code == 1 or msg_code == 4 or msg_code == 5:
                to_send = pack("BH", msg_code, 0)
            elif msg_code == 2:
                path = raw_input("Enter path of wanted directory to get content of: ")
                to_send = pack("BH" + 'c' * len(path), msg_code, len(path), path)
            elif msg_code == 3:
                command = raw_input("Enter the wanted terminal command, including arguments: ")
                to_send = pack("BH" + 'c' * len(command), msg_code, len(command), command)
            else:
                print "Invalid message code, try again\n"
            if 1 <= msg_code <= 5:
                my_soc.send(to_send)
        else:
            break
        data = my_soc.recv(1024)
        unpacked_data = unpack("BH" + 'c' * (calcsize(data) - 3), data)
        print "The server's response to your type-" + str(msg_code) + " request:"
        print unpacked_data[2]
    my_soc.close()

def main():
    client()

if __name__ == "__main__":
    main()
Answer: Why would there be any null included? The slice includes 3 characters, which
is exactly how many you specified -- indexed 0 to 2.
Instead, slice it with `client_structs[:4]`. (Or, as [abarnert points
out](http://stackoverflow.com/a/29997265/4099598), slice `[:3]` or
`[:struct.calcsize('>BH')]` **and** pack/unpack with `">BH"` to avoid
endianness problems.)
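For illustration, the size difference is easy to check interactively:
>>> import struct
>>> struct.calcsize("BH")    # native alignment pads the B so the H starts on a 2-byte boundary
4
>>> struct.calcsize(">BH")   # standard size, no padding
3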
Python is not as tricky about most fencepost errors as most C-ish languages,
so you may have inadvertently gotten too clever for yourself.
|
Call Rails 4 param in Python script
Question: I am trying to use a form input from a Rails controller in a Python script. I
have a form on my Rails (version 4.2.1) app that takes in a URL, and I want to
use that URL in a Python script. I'm new to Rails and have no idea how to do
this. My app has gotten as far as taking in the form inputs and rendering them
on a page, as well as being able to call a Python script and run it, but I need
to link them together.
Here is the controller code so far:
class ContestsController < ApplicationController
  def index
    value = %x(python /Users/my/Desktop/rails_test.py 2>&1)
    render :text => value
    @contests = Contest.all
  end

  def new
    @contest = Contest.new
  end

  def create
    @contest = Contest.new(contest_params)
    if @contest.save
      redirect_to contests_url
    else
      render 'new'
    end
  end

  private

  def contest_params
    params.require(:contest).permit(:site, :contest_url)
  end
end
My Python `rails_test.py` script is:
#!/bin/bash
print "Python script works!"
#url = last :contest_url param from rails app
#print url
**Try #1:**
I modified the rails code to:
value = %x(python /Users/jdesilvio/Desktop/rails_test.py #{Shellwords.escape(params[:contest_url])} 2>&1)
I modified the Python script to:
#!/Users/me/anaconda/bin/python2.7 python
import sys
print "Python script works!"
print "Url: ", sys.argv[1]
The output is:
Python script works! Url:
**My form is:**
<%= form_for @contest do |f| %>
  <div>
    <%= f.label :site %>
    <%= f.text_field :site %>
  </div>
  <div>
    <%= f.label :contest_url %>
    <%= f.text_field :contest_url %>
  </div>
  <%= f.submit %>
<% end %>
Answer: Well, you could just pass them as command line arguments to your script. `%x`
does string interpolation, but you need to be careful and verify the input,
since someone could pass `params[:contest_url] = " && rm -rf / "` or something
similar into your script and cause you problems (never trust user input). The
[Shellwords](http://ruby-doc.org/stdlib-2.0/libdoc/shellwords/rdoc/Shellwords.html)
class can help.
Perhaps something like:
value = %x(/Users/my/Desktop/rails_test.py #{Shellwords.escape(params[:site])} #{Shellwords.escape(params[:contest_url])} 2>&1)
Then just make your Python script read the values from `sys.argv`:
#!/usr/bin/env python
import sys
print "Site: ", sys.argv[1]
print "Url: ", sys.argv[2]
I made your Python script's shebang call `/usr/bin/env python`, but if python
isn't in the path of the user that your rails app is running as you might need
the full path. Also you don't need a `/bin/bash` at the top of your python
script if you are calling it as an argument to the python executable.
|
'Communicate' in Python does not work
Question: I'm trying to write a python program to test a java program that takes input
from stdin using Scanner.
All other posts point to using communicate with popen, but for me it
absolutely does not work. When I run my Python program, it just calls Popen
and then stops while the java program waits for input. I wrote a print
statement after popen to check. It never prints.
It's very simple. I just want to give some input to this program that waits for
input.
Here is the code:
import os.path, subprocess
from subprocess import PIPE
p = subprocess.Popen(['java', 'Main'], stdin=PIPE, stdout=PIPE)
print 'after subprocess' #this never get's printed
output = p.communicate(input='5 5 4 3 2 1'.encode())[0]
print output
Answer: Without more information (like some sample Java code) it's hard to be sure,
but I'll bet the problem is that the Java code is waiting for a complete line,
and you haven't sent one.
If so, the fix is simple:
output = p.communicate(input='5 5 4 3 2 1\n'.encode())[0]
* * *
As a side note, why exactly are you calling `encode` on that string? It's
already encoded in whatever character set your source code uses. So, when you
call `encode`, it has to first _decode_ that to Unicode. And then, because you
didn't pass an argument to `encode`, it's going to encode it to your default
character set (`sys.getdefaultencoding()`), which doesn't seem any more likely
to match what the Java code is expecting than what you already have. It's
rarely worth calling `encode` with an argument, and you should almost* never
call it on a `str`, only a `unicode`.
* In case you're wondering, the exception is when you're using a handful of special codecs like `hex` or `gzip`. In Python 3, they decided that the occasional usefulness of those special cases was nowhere near as much as the frequent bug-magnet of calling `encode` on already-encoded strings, so they took it out of the language.
|
Ipython notebook widgets not showing values
Question: For some reason my Ipython notebook widgets stopped working after I upgraded
to pandas 0.16 (seems unrelated but thought I'd mention it). I'd love to post
a screenshot but StackOverflow won't let me because I'm new and don't have
enough "reputation" apparently. I'm trying this basic code, and neither the
dropdown menu values nor the radio button values populate the widgets. I'm
running the notebook in Python 2 mode. Is this a known issue?!
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import clear_output, display, HTML
temp_w = widgets.Dropdown(values={'1':1, '2':2})
display(temp_w)
mysecondwidget = widgets.RadioButtons(values=["Item A", "Item B", "Item C"])
display(mysecondwidget)
Answer: Closing this. When I used the [newer] `options=` parameter instead of `values`
it started working again. Possibly when I upgraded pandas it upgraded other
dependencies under the hood, which require the widget objects to only
accept the newer `options` param.
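For reference, the same widgets written with the newer `options=` parameter (a sketch):
temp_w = widgets.Dropdown(options={'1': 1, '2': 2})
display(temp_w)
mysecondwidget = widgets.RadioButtons(options=["Item A", "Item B", "Item C"])
display(mysecondwidget)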
|
How do I best parse a date string in the Python C API?
Question: Unfortunately there seems to be no `PyDateTime_FromString()` (equivalent to
e.g. `PyFloat_FromString()`) in the Python C API. Has anybody figured out what
would be the ideal workaround if I have to parse a date string into a
`datetime.datetime` type?
Answer: If
[`PyDateTime_FromTimestamp`](https://docs.python.org/2/c-api/datetime.html#c.PyDateTime_FromTimestamp)
does not suffice, you can just call any Python method like
[`strptime`](https://docs.python.org/2/library/datetime.html#datetime.datetime.strptime)
directly:
PyObject *datetimeType = (PyObject *) PyDateTimeAPI->DateTimeType;
PyObject *datetime = PyObject_CallMethod(datetimeType, "strptime", "ss",
                                         "11-11-11 11:11:11",   // date_string
                                         "%y-%m-%d %H:%M:%S");  // format
But keep in mind that `PyDateTimeAPI->DateTimeType` is not part of the
documented API, so it could break at any time in the future. I've
tested this with Python versions 2.7.9 and 3.4.2 only. Instead you should do
the equivalent of `from datetime import datetime` in your C code.
|
How to print pandas dataframe values in a sentence
Question: I have created a database using sqlite within python 2.7, and loaded the data
into the pandas dataframe as below. What I'm trying to do is I would like to
print the result as "The cities that are warmest in July are: Istanbul,
Ankara, Izmir, Bursa". The code that I have written in Python is as below:
import sqlite3 as lite
import pandas as pd

con = lite.connect("project_warmest.db")
with con:
    cur = con.cursor()
    cur.execute("DROP TABLE IF EXISTS cities;")
    cur.execute("DROP TABLE IF EXISTS weather;")
    cur.execute("CREATE TABLE cities (name text, region text)")
    cur.execute("CREATE TABLE weather (city text, warm_month text, average_high integer)")
    cur.execute("INSERT INTO cities VALUES('Istanbul', 'Marmara')")
    cur.execute("INSERT INTO cities VALUES('Ankara', 'Ic Anadolu')")
    cur.execute("INSERT INTO cities VALUES('Izmir', 'Ege')")
    cur.execute("INSERT INTO cities VALUES('Antalya', 'Akdeniz')")
    cur.execute("INSERT INTO cities VALUES('Bursa', 'Marmara')")
    cur.execute("INSERT INTO weather VALUES('Istanbul', 'July',24)")
    cur.execute("INSERT INTO weather VALUES('Ankara', 'July',21)")
    cur.execute("INSERT INTO weather VALUES('Izmir', 'July',27)")
    cur.execute("INSERT INTO weather VALUES('Antalya', 'August',30)")
    cur.execute("INSERT INTO weather VALUES('Bursa', 'July',23)")
    cur.execute("SELECT city FROM weather INNER JOIN cities ON name = city WHERE warm_month = 'July'")
    rows = cur.fetchall()
    cols = [desc[0] for desc in cur.description]
    df = pd.DataFrame(rows, columns = cols)
    print "The cities that are warmest in July are: %s, " %df.iloc[0]["city"]
Answer: You could join array of elements from `df["city"]` like
In [53]: print "The cities warmest in July are: %s" % ', '.join(df["city"].values)
The cities warmest in July are: Istanbul, Ankara, Izmir, Bursa
`', '.join(df["city"].values)` will return a comma-separated string.
* * *
Also, you could use `pd.read_sql()` or `pd.read_sql_query` to directly read
the sql results to dataframe.
In [54]: pd.read_sql("SELECT city FROM weather INNER JOIN cities ON name = city"
....: " WHERE warm_month = 'July'", con)
Out[54]:
city
0 Istanbul
1 Ankara
2 Izmir
3 Bursa
|
python soaplib ImportError: No module named core.service
Question: I'm developing a Python application and using soaplib for use with .NET, but when I
run the code I ran into this error:
Traceback (most recent call last):
File "soap.py", line 2, in <module>
from soaplib.core.service import rpc, DefinitionBase
ImportError: No module named core.service
How do I solve this?
Answer: I solved my problem like this: download the soaplib tar.gz file from
<https://pypi.python.org/pypi/soaplib/2.0.0-beta2> and install it with the
command `sudo python setup.py install`.
|
match a specific string in excel and print those rows using python
Question: I am trying to print values from 4 columns in an Excel/CSV file. I need to match a
particular string in the 4th column and then print only those rows which have
that string (e.g. if the 4th column in the 2nd row contains 'Sweet', I will
print the entire row).
sample excel:
name; number; fruit; comment
test1 ;1 ; apple ; healthy
test2; 2; banana ;sweet and healthy
Here row 2 should be printed.
So far I have this, but I couldn't work out the exact way to match the string.
import gzip
import csv, io

with gzip.open("/test.csv.gz", "r") as file:
    datareader = csv.reader(file)
    included_cols = [9, 10, 11]
    for row in datareader:
        content = list(row[i] for i in included_cols if row[i])
        print content
Answer: Making use of the fact that you are using a range (9 through 11):
with gzip.open("/test.csv.gz", "r") as file:
    datareader = csv.reader(file)
    for row in datareader:
        if 'Sweet' in row[9:12]:
            print(row[9:12])
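Note that `'Sweet' in row[9:12]` only matches when a cell equals `'Sweet'` exactly; if you need a case-insensitive substring match (the sample comment is "sweet and healthy"), a sketch:
for row in datareader:
    if any('sweet' in cell.lower() for cell in row[9:12]):
        print(row[9:12])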
|
How to delete certain words from a variable or a list python
Question:
common_words = set(['je', 'tek', 'u', 'još', 'a', 'i', 'bi',
's', 'sa', 'za', 'o', 'kojeg', 'koju', 'kojom', 'kojoj',
'kojega', 'kojemu', 'će', 'što', 'li', 'da', 'od', 'do',
'su', 'ali', 'nego', 'već', 'no', 'pri', 'se', 'li',
'ili', 'ako', 'iako', 'bismo', 'koji', 'što', 'da', 'nije',
'te', 'ovo', 'samo', 'ga', 'kako', 'će', 'dobro',
'to', 'sam', 'sve', 'smo', 'kao'])
all = []
for (item_content, item_title, item_url, fetch_date) in cursor:
    #text = "{}".format(item_content)
    text = item_content
    text = re.sub('[,.?";:\-!@#$%^&*()]', '', text)
    text = text.lower()
    #text = [w for w in text if not w in common_words]
    all.append(text)
I want to delete certain words/stopwords either from the variable `text`, or
later from the list `all` that I put all the `text` variables from the iteration
in.
I tried it like this, but it doesn't delete just whole words: it also deletes
those letters when they appear inside other words, so the output ends up like
'd', 'f' for every word. I want the format to stay the same; I just need the
words in the common_words list deleted from the variable (or the list). How would
I achieve that?
Answer: As a pythonic way to remove the punctuation from a text you can use the
`str.translate` method:
>>> from string import punctuation
>>> "this is224$# a ths".translate(None,punctuation)
'this is224 a ths'
And to remove the words use `re.sub`; first create the regex by joining the
words with a pipe (`|`):
reg='|'.join(common_words)
new_text=re.sub(reg,'',text)
example :
>>> s="this is224$# a ths"
>>> import re
>>> from string import punctuation
>>> w=['this','a']
>>> boundary_words=[r'\b{}\b'.format(i) for i in w]
>>> reg='|'.join(boundary_words)
>>> new_text=re.sub(reg,'',s).translate(None,punctuation)
>>> new_text
' is224 ths'
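Applied to the loop from the question, splitting on whitespace and keeping only whole words that are not stopwords also avoids the letter-by-letter problem; the loop body could become (a sketch):
text = re.sub('[,.?";:\-!@#$%^&*()]', '', item_content).lower()
text = ' '.join(w for w in text.split() if w not in common_words)
all.append(text)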
|
Python Doubly Linked List Node Removal
Question: I created a basic doubly linked list class in Python, and it has three
methods: append, remove, and show. I fully understand the append method, and I
fully understand the show method. However, I am mildly confused about the way
my remove method works.
These are my two classes - my Node class and my Doubly Linked List class:
class ListNode:
    def __init__(self, data, prev, next):
        self.data = data
        self.prev = prev
        self.next = next

class DoubleList(object):
    head = None
    tail = None

    def append(self, data):
        new_node = ListNode(data, None, None)
        if self.head is None:
            self.head = self.tail = new_node
        else:
            new_node.prev = self.tail
            new_node.next = None
            self.tail.next = new_node
            self.tail = new_node

    def remove(self, node_value):
        current_node = self.head
        while current_node is not None:
            if current_node.data == node_value:
                if current_node.prev is not None:
                    current_node.prev.next = current_node.next
                    current_node.next.prev = current_node.prev
                else:
                    self.head = current_node.next
                    current_node.next.prev = None
            current_node = current_node.next

    def show(self):
        print("Show list data:")
        current_node = self.head
        while current_node is not None:
            print(current_node.prev.data if hasattr(current_node.prev, "data") else None,)
            print(current_node.data)
            print(current_node.next.data if hasattr(current_node.next, "data") else None)
            current_node = current_node.next
        print("*"*50)
So, when I use my remove method that is part of my DoubleList class, the
element is removed from the list as expected. However, the Node instance is
also gone, and I verified this by using this bit of code before and after
removing two nodes.
import gc
for obj in gc.get_objects():
    if isinstance(obj, ListNode):
        print(obj.data)
Now, I assume I simply don't understand exactly what my remove method is
doing.
My understanding is this:
I thought that the node would still exist, as the remove method only reassigns
the previous node's `next` attribute and the following node's `prev` attribute.
However, the current node is not altered, and it still holds references to the
nodes that were next to it in the list.
Obviously, my understanding is wrong, and I want to know why.
Why does the instance of a Node I removed from my linked list disappear?
Answer: While the node holds references to the next and previous, it does not have
anything pointing to it and is garbage collected like any object in Python.
You had to use gc just to check that it was gone for the very reason it had no
referrers!
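A quick way to see this with the classes above (a sketch):
import gc

dll = DoubleList()
for value in (1, 2, 3):
    dll.append(value)

dll.remove(2)
gc.collect()  # not strictly needed; CPython frees the node as soon as its refcount hits zero

remaining = [obj.data for obj in gc.get_objects() if isinstance(obj, ListNode)]
print(remaining)  # the removed node (data 2) is gone because nothing refers to it any more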
|
python http requests; browser not detected
Question: I want to login to my account with the following structure:
import requests
session=requests.Session()
resp=session.get('https://mywebsite.com/login')
cont=resp.content
post_data={'user':'username', 'pass':'password'}
post_response=session.post(url='https://mywebsite.com/login', data=post_data)
print cont
The following error occurs:
**Browser Error: Your browser version looks incompatible**
Answer: Change your `User-Agent` to something that the website thinks is compatible.
You can do it like this,
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0'
}
r = session.post('https://mywebsite.com/login', data=post_data, headers=headers)
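You can also set the header once on the session so that every request (including the initial GET) sends it (a sketch):
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0'
})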
|
Flask Application - How to link a javascript file to website
Question: I am having some trouble getting started with using javascript files on my
website (A Flask application). I start the website by running run.py which
looks like this:
#!flask/bin/python
from app import app
app.run(debug=True)
In my HTML page I have a `<script>` tag linking the JavaScript file, which gives a 404 error (screenshots of the code, the error message, and my file structure are omitted).
Any hints to where I'm going wrong?
Answer: Ah yes, luckily I am currently developing a Flask application myself.
You are missing the **static** folder, which Flask looks into by
default; you want a folder structure something like this:
|FlaskApp
----|FlaskApp
--------|templates
- html files are here
--------|static
- css and javascript files are here
Two important default folders that Flask will look into: **templates** and
**static**. Once you got that sorted you use this to link up with your
javascript files from your html page:
<script src="{{url_for('static', filename='somejavascriptfile.js')}}"></script>
Hope that helps any questions just ask.
Plus - a good article to read (not super related, but it talks about the
folder structure of Flask):
<https://www.digitalocean.com/community/tutorials/how-to-deploy-a-flask-application-on-an-ubuntu-vps>
|
*Efficiently* moving dataframes from Pandas to R with RPy (or other means)
Question: I have a dataframe in Pandas, and I want to do some statistics on it using R
functions. No problem! RPy makes it easy to send a dataframe from Pandas into
R:
import pandas as pd
df = pd.DataFrame(index=range(100000),columns=range(100))
from rpy2 import robjects as ro
ro.globalenv['df'] = df
And if we're in IPython:
%load_ext rmagic
%R -i df
For some reason the `ro.globalenv` route is slightly slower than the `rmagic`
route, but no matter. What matters is this: The dataframe I will ultimately be
using is ~100GB. This presents a few problems:
1. Even with just 1GB of data, the transfer is rather slow.
2. If I understand correctly, this creates two copies of the dataframe in memory: one in Python, and one in R. That means I'll have just doubled my memory requirements, and I haven't even gotten to running statistical tests!
Is there any way to:
1. transfer a large dataframe between Python and R more quickly?
2. Access the same object in memory? I suspect this asking for the moon.
Answer: `rpy2` is using a conversion mechanism that is trying to avoid copying objects
when moving between Python and R. However, this is currently only working in
the direction R -> Python.
Python has an interface called the "buffer interface" that is used by `rpy2`
and that lets it minimize the number of copies for the C-level compatible
between R and Python (see
<http://rpy.sourceforge.net/rpy2/doc-2.5/html/numpy.html#from-rpy2-to-numpy>
- the doc seems outdated as the `__array_struct__` interface is no longer the
primary choice).
There is no equivalent to the buffer interface in R, and the current concern
holding me back from providing an equivalent functionality in `rpy2` is the
handling of borrowed references during garbage collection (and the lack of
time to think sufficiently carefully about it).
So in summary there is a way to share data between Python and R without
copying but this will require to have the data created in R.
|
Why doesn't my Python script recognize a class from an imported module?
Question: **collection.py**
import sys
import os
import pymongo
from pymongo import MongoClient
class Collection():
    """returns a collection cursor from mongodb"""
    client = MongoClient()

    def __init__(self, db, collection_name):
        self.db = db
        self.collection_name = collection_name

    def getCollection(self):
        data_base = getattr(self.client, self.db)
        collObject = getattr(data_base, self.collection_name)
        return collObject
**main.py**
import sys
import os
import collection
def main():
    pass

if __name__ == '__main__':
    print "Begin Main"
    agents = Collection('hkpr_restore','agents')
    print "agents is", agents
These files are in the same directory. When I run `main.py`, however, I get an
error:
Begin Main
Traceback (most recent call last):
File "main.py", line 23, in <module>
agents = Collection('hkpr_restore','agents')
NameError: name 'Collection' is not defined
From what I've read, if the files are in the same directory, all I need to do
is use `import collection`.
Am I missing something?
Answer: You've only imported `collection`, not `Collection`.
Either do `from collection import Collection`, or use the full qualified name
when instantiating: `agents = collection.Collection('hkpr_restore','agents')`.
|
Sympy failed to render matrix
Question: I'm using IPython's Qtconsole with the default printing settings. It
works well for polynomials, but does not work for Matrix:
from sympy import init_printing, Matrix
init_printing()
a=Matrix([1,2])
a
the error is
ValueError:
\left[\begin{smallmatrix}1\\2\end{smallmatrix}\right]
^
Expected "\right" (at char 6), (line:1, col:7)
I have tried <http://www.codecogs.com/latex/eqneditor.php> and the LaTeX code
seems correct. I have tried the dev version of sympy and it still doesn't
work. I did not try the dev version of matplotlib yet, because only the
source is available for the dev version.
Answer: TLDR: It is a known issue, yet to be solved. You need to use a proper LaTeX.
Your problem might be related to
[this](https://github.com/sympy/sympy/issues/9799). The problem is due to
`matplotlib`'s very limited understanding of LaTeX. In this case the
`\begin{...}` flag cannot be interpreted by `matplotlib`, although it is valid
LaTeX.
|
Abaqus Python Scripting - Create ODB without submitting Job
Question: I'm looking to create a simple ODB file using my model in session so that I
can display an orientation tensor in a custom field. I'm able to create an ODB
for a 2d part (made of s4 elements), but my system crashes whenever I load the
ODB for my 3d part (made of c3d8 elements).
Here is my script. Any help would be greatly appreciated!
from abaqusConstants import *
from odbAccess import *
from textRepr import *
odb = Odb(name='4',
analysisTitle='derived data',
description='test problem',
path='4.odb')
sCat = odb.SectionCategory(name='solid',
description='Test')
part1 = odb.Part(name='part-1',embeddedSpace=THREE_D, type=DEFORMABLE_BODY)
nodeData = [(1, -5.0, -5.0, 10.0), (2, -5.0, 5.0, 10.0), (3, -5.0, -5.0, 0.0), (4, -5.0, 5.0, 0.0), (5, 5.0, -5.0, 10.0), (6, 5.0, 5.0, 10.0), (7, 5.0, -5.0, 0.0), (8, 5.0, 5.0, 0.0)]
part1.addNodes(nodeData=nodeData, nodeSetName='nset-1')
elementData = [(1, 4, 5, 7, 6, 0, 1, 3, 2)]
part1.addElements(elementData=elementData, type='C3D8',
elementSetName='eset-1', sectionCategory=sCat)
assembly = odb.rootAssembly
instance1 = assembly.Instance(name='part-1-1', object=part1)
# An element set on an instance
eLabels = [1]
elementSet = instance1.ElementSetFromElementLabels(
name='eall',elementLabels=eLabels)
# A node set on the rootAssembly
instance1.NodeSetFromNodeLabels('nall', (1,2,3,4,5,6,7,8))
step1 = odb.Step(name='step-1', description='', domain=TIME, timePeriod=1.0)
frame1 = step1.Frame(incrementNumber=1, frameValue=0.1, description='')
fieldout = frame1.FieldOutput(name="FO", description="Fiber Orientation Data", type=TENSOR_3D_FULL, isEngineeringTensor=TRUE, validInvariants=[MAX_PRINCIPAL,])
elist = [1]
strs = [[0.72539, 0.19255, 0.082066, -0.12808, -0.1158, 0.042058]]
fieldout.addData(position=CENTROID, instance=instance1, labels=elist, data=strs)
odb.save()
odb.close()
Answer: In your command "odb= Odb(...)", I think there is a problem. Odb(...) method
is a member of session object, so I would modify the code like the following:
odb = session.Odb(name='4', analysisTitle='derived data', description='test problem', path='C:\\temp\\4.odb')
Also, note the file location where you want to create it. It is a good practice
to write the complete file path.
|
I want to sort a list in a method with respect to the method's argument
Question: I have a method whose arguments are the x and y coordinates of a point. Inside it
I calculate the power that reaches that [x, y] coordinate from other points and
sort the values from highest power reached to lowest:
def power_at_each_point(x_cord, y_cord):
    nodez_list = [nodes_in_room for nodes_in_room in range(1, len(Node_Positions_Ascending) + 1)]
    powers_list = []
    for each_node in nodez_list:
        powers_list.append(cal_pow_rec_plandwall(each_node, [x_cord, y_cord]))
    return max(powers_list)
I want to do that in a more pythonic way like `key = cal_pow_rec_plandwall`
but this method takes two arguments and not one. So how can I do it?
Answer: You just need a single call to `max` which takes a generator as an argument.
The `lambda` expression is just to make things more readable.
def power_at_each_point(x_coord, y_coord):
    f = lambda x: cal_pow_rec_plandwall(x, [x_coord, y_coord])
    return max(f(each_node) for each_node in xrange(1, len(Node_Positions_Ascending) + 1))
You can replace the generator with a call to `itertools.imap`:
from itertools import imap

def power_at_each_point(x_coord, y_coord):
    f = lambda x: cal_pow_rec_plandwall(x, [x_coord, y_coord])
    return max(imap(f, xrange(1, len(Node_Positions_Ascending) + 1)))
|
Node add-on with C++ on Windows 8
Question: I am trying to run a basic "Hello World" example the Node addon C++ way. Nothing
here differs from the official Node addon standard described at
<https://nodejs.org/api/addons.html>
I first created the **hello.cc** file with the below mentioned code:
#include <node.h>
#include <v8.h>
using namespace v8;
void Method(const v8::FunctionCallbackInfo<Value>& args)
{
    Isolate* isolate = Isolate::GetCurrent();
    HandleScope scope(isolate);
    args.GetReturnValue().Set(String::NewFromUtf8(isolate, "world"));
}
void init(Handle<Object> exports)
{
    NODE_SET_METHOD(exports, "hello", Method);
}
NODE_MODULE(hello, init)
The next part of the code shown below is binding.gyp, the core file used by the
node-gyp library, which **in the ideal, successful case generates a *.node file**,
the binary built from the C++ code against Google's v8.h and the required node.h.
You can think of this file as something similar to a *.so or *.dll file; in this
case it is a *.node file and the Node runtime is its consumer. The front-end
JavaScript communicates with this file, and since Node is the common layer, the
whole arrangement works together as one architecture. binding.gyp is JSON-like
and contains the below mentioned code:
{
  "targets": [
    {
      "target_name": "hello",
      "sources": [ "hello.cc" ]
    }
  ]
}
All noders know the importance of package.json; just for quick understanding,
it is the master reference for Node, telling it what your package is all about.
The code is mentioned below:
{
  "name": "hello",
  "version": "0.6.5",
  "description": "Node.js Addons Example #1",
  "main": "hello.js",
  "private": true,
  "scripts": {
    "test": "node hello.js"
  },
  "gypfile": true,
  "dependencies": {
    "bindings": "~1.2.1"
  }
}
I am trying to run the whole thing after pre-installing node-gyp with the
command npm install -g node-gyp.
The catch is the below mentioned error when I try to build my code by first
running **node-gyp configure** followed by the **node-gyp build** command:
C:\Node_Js_progs\addon_first>node-gyp configure
gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | win32 | ia32
gyp ERR! configure error
gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
gyp ERR! stack at failNoPython (C:\Users\Devanjan\AppData\Roaming\npm\node_modules\node-gyp\lib\configure.js:103:14)
gyp ERR! stack at C:\Users\Devanjan\AppData\Roaming\npm\node_modules\node-gyp\lib\configure.js:64:11
gyp ERR! stack at FSReqWrap.oncomplete (evalmachine.<anonymous>:99:15)
gyp ERR! System Windows_NT 6.3.9600
gyp ERR! command "node" "C:\\Users\\Devanjan\\AppData\\Roaming\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "configure"
gyp ERR! cwd C:\Node_Js_progs\addon_first
gyp ERR! node -v v0.12.0
gyp ERR! node-gyp -v v1.0.3
gyp ERR! not ok
Please help and guide me on why I am not able to make significant progress, since the idea
is to bridge **node & C++**, which I have created as part of my work!
Answer: node-gyp requires Python 2.7 and Visual Studio 2010 or 2012 for compilation.
Note: VS is for Windows users only.
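For example, once Python 2.7 is installed you can point node-gyp at it with `npm config set python C:\Python27\python.exe`, or set the `PYTHON` environment variable to that path (the install path here is an assumption; adjust it to your machine).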
|
TypeError: Collection(Database(MongoClient("localhost", 27017), u'demo_database'), u'entries) is not JSON serializable
Question: Good afternoon.
I'm trying to combine Python, MongoDB (via pymongo) and Flask to create a
client-server application. I want one of the methods to return the entire
collection, like here:
@app.route('/entries', methods = ['GET'])
def get_entries():
    client = MongoClient(db_host, db_port)
    db_demo = client['demo_database']
    entries = db_demo.entries
    return JSONEncoder().encode(entries)
I also have an **Encoder** class, as advised
[here](http://stackoverflow.com/questions/16586180/typeerror-objectid-is-not-json-serializable):
class JSONEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, ObjectId):
            return str(o)
        return json.JSONEncoder.default(self, o)
The data collection is very simple - actually only one item with a few fields. What
am I doing wrong? Perhaps I should develop a more sophisticated encoder class?
Answer: Use
[bson.json_util.dumps](http://api.mongodb.org/python/current/api/bson/json_util.html#bson.json_util.dumps),
which already supports all the MongoDB extended JSON types:
>>> from bson.json_util import dumps
>>> c.test.test.insert_many([{} for _ in range(3)])
<pymongo.results.InsertManyResult object at 0x7f6ed3189550>
>>> dumps(c.test.test.find())
'[{"_id": {"$oid": "554faa99fa5bd8782e1698cf"}}, {"_id": {"$oid": "554faa99fa5bd8782e1698d0"}}, {"_id": {"$oid": "554faa99fa5bd8782e1698d1"}}]'
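Applied to the Flask route from the question, a sketch (note the explicit `find()`; returning the collection object itself is what triggered the error):
from bson.json_util import dumps

@app.route('/entries', methods=['GET'])
def get_entries():
    client = MongoClient(db_host, db_port)
    db_demo = client['demo_database']
    return dumps(db_demo.entries.find())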
|
parsing string with specific name in python
Question: I have a string like this:
<name:john student male age=23 subject=\computer\sience_{20092973}>
I am confused by the ":" and "=".
I want to parse this string and split it into a list like this:
name:john
job:student
sex:male
age:23
subject:{20092973}
How can I parse the string by specific names (name, job, sex, etc.) in Python?
I have already searched but I can't find anything, sorry.
How can I do this? Thank you.
Answer: It's generally a good idea to give more than one example of the strings you're
trying to parse. But I'll take a guess. It looks like your format is pretty
simple, and primarily whitespace-separated. It's simple enough that using
regular expressions should work, like this, where `line_to_parse` is the
string you want to parse:
import re
matchval = re.match("<name:(\S+)\s+(\S+)\s+(\S+)\s+age=(\S+)\s+subject=[^\{]*(\{\S+\})", line_to_parse)
matchgroups = matchval.groups()
Now matchgroups will be a tuple of the values you want. It should be trivial
for you to take those and get them into the desired format.
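For the example line from the question, this gives (a hypothetical interactive session):
>>> line_to_parse = "<name:john student male age=23 subject=\computer\sience_{20092973}>"
>>> matchval = re.match("<name:(\S+)\s+(\S+)\s+(\S+)\s+age=(\S+)\s+subject=[^\{]*(\{\S+\})", line_to_parse)
>>> matchval.groups()
('john', 'student', 'male', '23', '{20092973}')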
If you want to do many of these, it may be worth compiling the regular
expression; take a look at the `re` documentation for more on this.
As for the way the expression works: I won't go into regular expressions in
general (that's what the `re` docs are for) but in this case, we want to get a
bunch of strings that don't have any whitespace in them, and have whitespace
between them, and we want to do something odd with the subject, ignoring all
the text except the part between { and }.
Each "(...)" in the expression saves whatever is inside it as a group. Each
"\S+" stands for one or more ("+") characters that aren't whitespace ("\S"),
so "(\S+)" will match and save a string of length at least one that has no
whitespace in it. Each "\s+" does the opposite: it has no parentheses around
it, so it doesn't save what it matches, and it matches at one or more ("+")
whitespace characters ("\s"). This suffices for most of what we want. At the
end, though, we need to deal with the subject. "[...]" allows us to list
multiple types of characters. "[^...]" is special, and matches anything that
isn't in there. {, like [, (, and so on, needs to be escaped to be normal in
the string, so we escape it with \, and in the end, that means "[^{]*" matches
_zero_ or more ("*") characters that aren't "{" ("[^{]"). Since "*" and "+"
are "greedy", and will try to match as much as they can and still have the
expression match, we now only need to deal with the last part. From what I've
talked about before, it should be pretty clear what "({\S+})" does.
|
Python: Access a function within another class
Question: I'm trying to access a function within another class, going through a
"master" class. I have successfully accessed the "master" class
through the Column 4 button. Now I'm trying to access the master class through the
Column 5 function, and then have that go to the reaction function in the Window
class; however, when I try this, it fails.
from tkinter import *

class Window():
    def __init__(self, parent, parent_class2):
        self.parent = parent
        self.parent_class = parent_class2
        self.canvas = Canvas(self.parent, width=420, height=360)
        self.canvas.pack(side="top", fill="both", expand="true")
        self.cellwidth = 60
        self.cellheight = 60
        self.rows = 6
        self.columns = 7
        self.rect = {}
        self.oval = {}
        self.piece = []
        #creates the grid
        for row in range(6):
            for column in range(7):
                x1 = column*self.cellwidth
                y1 = row * self.cellheight
                x2 = x1 + self.cellwidth
                y2 = y1 + self.cellheight
                self.piece.append(Piece(self.canvas, x1,y1,x2,y2))
        self.canvas.itemconfig(self.piece[8].oval, fill="deep pink")

    def reaction(self):
        print("In WIndow Class - SUCCESS!")

class ButtonsExampleGUI:
    def __init__(self, parent, parent_class):
        self.parent_class = parent_class
        self.parent = parent
        #self.buttons = 7
        c4 = Button(parent, text=("Column 4"), command=self.c4_played)
        c4.pack(side=LEFT)
        c5 = Button(parent, text=("Column 5"), command=self.c5_played)
        c5.pack(side=LEFT)

    def c4_played(self):
        self.parent_class.test()
        print("Col 4")

    def c5_played(self):
        print("Col 5")
        self.parent_class.towindow()

class Piece:
    def __init__(self, parent_canvas, x1,y1,x2,y2):
        self.parent_canvas = parent_canvas
        self.rect = self.parent_canvas.create_rectangle(x1,y1,x2,y2, fill="grey", tags="rect")
        self.oval = self.parent_canvas.create_oval(x1+2,y1+2,x2-2,y2-2, fill="white", tags="oval")

#self class here is being taken as the "parent_class" of the ButtonExampleGUI class
class Game:
    def __init__(self, parent):
        self.parent = parent
        self.window = Window(self.parent, self)
        self.buttons = ButtonsExampleGUI(self.parent, self)

    #test being accessed by c4 function calling this within a different class
    def test(self):
        print("from parent class")

    def towindow(self):
        print("In Game Class")
        self.parent_class2.reaction()

if __name__ == "__main__":
    root = Tk()
    game = Game(root)
    root.mainloop()
Answer: Maybe because you didn't define parent_class2 in Game? Try changing
`towindow` from:
def towindow(self):
    print("In Game Class")
    self.parent_class2.reaction()
to:
def towindow(self):
    print("In Game Class")
    self.window.reaction()
|
How to make a python program open itself as text in safari?
Question: I was trying to make a program in Python that would use os.system to open a
file in Safari. In this case, I was trying to have it open a text copy of
itself. The file's name is foo.py.
import os, socket
os.system("python -m SimpleHTTPServer 4000")
IP = socket.gethostbyname(socket.gethostname())
osCommand = "open -a safari http://"+IP+":4000/foo.py"
os.system(osCommand)
Answer: [`system`](https://docs.python.org/2/library/os.html#os.system) runs a
program, then waits for it to finish, before returning.
So it won't get to the next line of your code until the server has finished
serving. Which will never happen. (Well, you can hit ^C, and then it will stop
serving—but then when you get to the next line that opens `safari`, it'll have
no server to connect to anymore.)
This is one of the many reasons the docs for `system` basically tell you not
to use it:
> The [`subprocess`](https://docs.python.org/2/library/subprocess.html#module-
> subprocess) module provides more powerful facilities for spawning new
> processes and retrieving their results; using that module is preferable to
> using this function. See the [_Replacing Older Functions with
> the`subprocess`
> Module_](https://docs.python.org/2/library/subprocess.html#subprocess-
> replacements) section in the `subprocess` documentation for some helpful
> recipes.
For example:
import subprocess, socket
server = subprocess.Popen(['python', '-m', 'SimpleHTTPServer', '4000'])
IP = socket.gethostbyname(socket.gethostname())
safari = subprocess.Popen(['open', '-a', 'safari', 'http://'+IP+':4000/foo.py'])
server.wait()
safari.wait()
That will start both programs in the background, and then wait for both to
finish, instead of starting one, waiting for it to finish, starting the other,
and waiting for it to finish.
* * *
All that being said, this is kind of a silly way to do what you want. What's
wrong with just opening a file URL (like
`'file:///{}'.format(os.path.abspath(sys.argv[0]))`) in Safari? Or in the
default web browser (which would presumably be Safari for you, but would also
work on other platform, and for Mac users who used Chrome or Firefox, and so
on) by using `webbrowser.open` on that URL?
|
Python: Tkinter: Multi line scrolling entry box
Question: I would like to make a Tkinter window able to ask for a multi-line entry (so
 the user will add one or more lines of text), and then, when we click on the
 button, be able to retrieve the values entered by the user for further use.
Until now I have this script:
from Tkinter import *
import ScrolledText
class EntryDemo:
def __init__(self, rootWin):
#Create a entry and button to put in the root window
self.textfield = ScrolledText(rootWin)
#Add some text:
self.textfield.delete(0,END)
self.textfield.insert(0, "Change this text!")
self.textfield.pack()
self.button = Button(rootWin, text="Click Me!", command=self.clicked)
self.button.pack()
def clicked(self):
print("Button was clicked!")
eText = self.textfield.get()
print("The Entry has the following text in it:", eText)
#Create the main root window, instantiate the object, and run the main loop
rootWin = Tk()
#app = EntryDemo( rootWin )
rootWin.mainloop()
But it didn't seem to work; a window appears with nothing inside. Could you
help me?
#########EDIT
New code:
from Tkinter import *
import ScrolledText
class EntryDemo:
def __init__(self, rootWin):
self.textfield = ScrolledText.ScrolledText(rootWin)
#Add some text:
#self.textfield.delete(0,END)
self.textfield.insert(INSERT, "Change this text!")
self.textfield.pack()
self.button = Button(rootWin, text="Click Me!", command=self.clicked)
self.button.pack()
def clicked(self):
eText = self.textfield.get(1.0, END)
print(eText)
rootWin = Tk()
app = EntryDemo( rootWin )
rootWin.mainloop()
Sorry if it looks to some downvoters like this was done with no effort (even
though I spent more than a day on it), but the multi-line text entry is not
exactly what we can call well documented to learn by ourselves.
Answer: Your first problem is that you commented out the `app = EntryDemo( rootWin )`
call, so you're not actually doing anything but creating a `Tk()` root window,
then starting its main loop.
If you fix that, your next problem is that you're trying to use the
`ScrolledText` module as if it were a class. You need the
`ScrolledText.ScrolledText` class.
If you fix that, your next problem is that you're trying to `delete` from an
empty text field, which is going to raise some kind of Tcl index error, and
then you're also trying to `insert` at position 0 in an empty text field,
which will raise the same error. There's no reason to do the `delete` at all,
and for the `insert` you probably want to use `INSERT` as the position.
You still have multiple problems after that, but fixing these three will get
your edit box up and displayed so you can start debugging everything else.
|
TypeError: get_params() missing 1 required positional argument: 'self'
Question: I was trying to use `scikit-learn` package with python-3.4 to do a grid
search,
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model.logistic import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.metrics import precision_score, recall_score, accuracy_score
from sklearn.preprocessing import LabelBinarizer
import numpy as np
pipeline = Pipeline([
('vect', TfidfVectorizer(stop_words='english')),
('clf', LogisticRegression)
])
parameters = {
'vect__max_df': (0.25, 0.5, 0.75),
'vect__stop_words': ('english', None),
'vect__max_features': (2500, 5000, 10000, None),
'vect__ngram_range': ((1, 1), (1, 2)),
'vect__use_idf': (True, False),
'vect__norm': ('l1', 'l2'),
'clf__penalty': ('l1', 'l2'),
'clf__C': (0.01, 0.1, 1, 10)
}
if __name__ == '__main__':
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv = 3)
df = pd.read_csv('SMS Spam Collection/SMSSpamCollection', delimiter='\t', header=None)
lb = LabelBinarizer()
X, y = df[1], np.array([number[0] for number in lb.fit_transform(df[0])])
X_train, X_test, y_train, y_test = train_test_split(X, y)
grid_search.fit(X_train, y_train)
print('Best score: ', grid_search.best_score_)
print('Best parameter set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(best_parameters):
print(param_name, best_parameters[param_name])
However, it does not run successfully, the error message looks like this:
Fitting 3 folds for each of 1536 candidates, totalling 4608 fits
Traceback (most recent call last):
File "/home/xiangru/PycharmProjects/machine_learning_note_with_sklearn/grid search.py", line 36, in <module>
grid_search.fit(X_train, y_train)
File "/usr/local/lib/python3.4/dist-packages/sklearn/grid_search.py", line 732, in fit
return self._fit(X, y, ParameterGrid(self.param_grid))
File "/usr/local/lib/python3.4/dist-packages/sklearn/grid_search.py", line 493, in _fit
base_estimator = clone(self.estimator)
File "/usr/local/lib/python3.4/dist-packages/sklearn/base.py", line 47, in clone
new_object_params[name] = clone(param, safe=False)
File "/usr/local/lib/python3.4/dist-packages/sklearn/base.py", line 35, in clone
return estimator_type([clone(e, safe=safe) for e in estimator])
File "/usr/local/lib/python3.4/dist-packages/sklearn/base.py", line 35, in <listcomp>
return estimator_type([clone(e, safe=safe) for e in estimator])
File "/usr/local/lib/python3.4/dist-packages/sklearn/base.py", line 35, in clone
return estimator_type([clone(e, safe=safe) for e in estimator])
File "/usr/local/lib/python3.4/dist-packages/sklearn/base.py", line 35, in <listcomp>
return estimator_type([clone(e, safe=safe) for e in estimator])
File "/usr/local/lib/python3.4/dist-packages/sklearn/base.py", line 45, in clone
new_object_params = estimator.get_params(deep=False)
TypeError: get_params() missing 1 required positional argument: 'self'
I also tried to use only
if __name__ == '__main__':
pipeline.get_params()
It gives the same error message. Who knows how to fix this?
Answer: This error is almost always misleading, and _actually_ means that you're
calling an instance method on the class, rather than the instance (like
calling `dict.keys()` instead of `d.keys()` on a `dict` named `d`).*
And that's exactly what's going on here. [The docs](http://scikit-
learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html)
imply that the `best_estimator_` attribute, like the `estimator` parameter to
the initializer, is not an estimator _instance_ , it's an estimator _type_ ,
and "A object of that type is instantiated for each grid point."
So, if you want to call methods, you have to construct an object of that type,
for some particular grid point.
However, from a quick glance at the docs, if you're trying to get the params
that were used for the particular instance of the best estimator that returned
the best score, isn't that just going to be `best_params_`? (I apologize that
this part is a bit of a guess…)
* * *
For the `Pipeline` call, you definitely have an instance there. And the only
[documentation](http://scikit-
learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) for that
method is a param spec which shows that it takes one optional argument,
`deep`. But under the covers, it's probably forwarding the `get_params()` call
to one of its attributes. And with `('clf', LogisticRegression)`, it looks
like you're constructing it with the _class_ `LogisticRegression`, rather than
an instance of that class, so if that's what it ends up forwarding to, that
would explain the problem.
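If that is indeed the cause, the most likely fix is simply to instantiate the
classifier inside the pipeline; a minimal sketch, keeping everything else from
the question unchanged:
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    pipeline = Pipeline([
        ('vect', TfidfVectorizer(stop_words='english')),
        ('clf', LogisticRegression()),  # note the parentheses: an instance, not the class
    ])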
* * *
* The reason the error says "missing 1 required positional argument: 'self'" instead of "must be called on an instance" or something is that in Python, `d.keys()` is effectively turned into `dict.keys(d)`, and it's perfectly legal (and sometimes useful) to call it that way explicitly, so Python can't really tell you that `dict.keys()` is illegal, just that it's missing the `self` argument.
|
incrementing list values Python 3
Question:
infile = open('numbers.txt','r')
c = 0
for i in infile:
if int(i) > c:
c = int(i)
hist = [0]*c
for i in infile: #checking
ii = int (i)-1
hist[ii] += 1 # problem area seemingly
for i in range(c):
print(str(i+1),':',end=' ')
print(str(hist[i]))
The purpose of the code above is to open numbers.txt, which has 100 numbers,
all within range(1,101), all with '\n' characters at their ends, and to count
the number of times each number has been encountered.
First the greatest number in the file is found, then a list whose number of
elements equals that greatest number is made, with all elements initially set
to zero. Then the numbers are checked. All numbers in the file are integers.
If the greatest number contained in the file is 100, then the hist[] list has
initially 100 elements which are all 0.
During checking, say if a number 78 is encountered, the element with index
[77] ie hist[77] is updated from value 0 to 1. If 78 is encountered again,
hist[77] is changed from 1 to 2.
This way, whatever numbers occur in the file, each occurring number has a
counter (numbers that don't appear but are less than the greatest occurring
number also have counters, but that's not a problem).
Having checked that the correct file is being opened and that the hist list is
initially set up properly, the problem is that the values of the hist[]
list are not getting incremented when the corresponding number is
encountered. When I print the list at the end, all values are still zero.
I am using Python 3.4.3. Both the script and 'numbers.txt' are on my desktop.
Any help appreciated.
Answer: You are looping over the file twice, but did not rewind the file reading
pointer to the start. Add a `file.seek()` call before the second loop:
infile.seek(0)
for i in infile: #checking
Without the `seek` iteration over `infile` will not yield any further lines;
you reached the end of the file _already_.
You'd be better off using a [`collections.Counter()`
object](https://docs.python.org/2/library/collections.html#collections.Counter)
here though:
from collections import Counter
with open('numbers.txt','r') as infile:
hist = Counter(int(line) for line in infile)
You can then either get the highest counted number using `max(hist)` or
instead retrieve the numbers in most common to least order with the
`Counter.most_common()` method:
# In numerical order, setting missing numbers to 0
for i in range(1, max(hist) + 1):
print('{}: {}'.format(i, hist.get(i, 0)))
        # In most common to least order
for number, count in hist.most_common():
print('{}: {}'.format(number, count))
|
Accessing URLs from a list in Python
Question: I'm trying to search an HTML document for links to articles, store them in a
list and then use that list to search each one individually for its title.
Answer: This is possibly not a direct answer for the OP, but it should not be considered
off-topic: **you should not scrape web pages for HTML data when an API is available**.
HTML web pages are not optimized to answer a large number of requests, especially
requests that don't come from browsers. A lot of generated traffic can overload
the servers and look like a DDoS.
So first try to find an available API for the site you are interested in, and only
if nothing suitable is found, fall back to parsing the web content, caching your
requests so you don't overload the target resource.
At first glance, `The Guardian` has an [Open API](http://open-
platform.theguardian.com/) with [documentation](http://open-
platform.theguardian.com/documentation/) on how to use it.
Using that API you can work with the site content in a simple manner, making the
requests you are interested in much easier, with the answers available without any parsing.
For example, searching by the tag "technology" gives this [API output](http://content.guardianapis.com/search?q=technology&api-key=test):
from urllib.request import urlopen, URLError, HTTPError
import json
import sys
def safeprint(s):
try:
print(s)
except UnicodeEncodeError:
if sys.version_info >= (3,):
print(s.encode('utf8').decode(sys.stdout.encoding))
else:
print(s.encode('utf8'))
url = "http://content.guardianapis.com/search?q=technology&api-key=test"
try:
content = urlopen(url).read().decode("utf-8")
json_data = json.loads(content)
if "response" in json_data and "results" in json_data["response"]:
for item in json_data["response"]["results"]:
safeprint(item["webTitle"])
except URLError as e:
if isinstance(e, HTTPError):
print("Error appear: " + str(e))
else:
raise e
Using that approach you can walk through all the publications in depth without any
problem.
|
How to print c_type byte array as set of hexadecimal values in python?
Question: I have data structure defined as follows:
class Factory_Params_Get_Command(Structure):
_pack_ = 1
_fields_ = [("SN",c_byte * 32),
("Voltage",c_byte),
("Reserved",c_byte * 30)]
# Print the fields
def __str__(self):
return "Serial Number: %s" % (list(self.SN))
This prints the serial number like:
[0, 32, 58, 73.....]
I would like to print the serial number as set of hexadecimal values, where
each byte is represented by 2 hex numbers, and , if possible , without commas
and without spaces. Something like this:
03C8A0D6.....
Would appreciate your help
Answer: Possibly something like:
hexstring = ''.join('%02X' % b for b in self.SN)
That applies the formatting string `%02X` to every byte in the array, and then
concatenates everything into a single string. For example:
>>> import ctypes
>>> sn = (ctypes.c_byte*32)(*range(1,32))
>>> hexstring = ''.join('%02X' % b for b in sn)
>>> print hexstring
0102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F00
>>>
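One caveat worth noting: `c_byte` is signed, so byte values of 0x80 and above come
out negative and `'%02X'` would print a minus sign for them. Masking each byte with
0xFF avoids that; a small variation on the snippet above:
    hexstring = ''.join('%02X' % (b & 0xFF) for b in self.SN)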
|
Python TypeError: unorderable types: builtin_function_or_method() < int()
Question: I keep getting this error while trying to write a Python program. The point of
 the program is to keep on rolling a pair of dice until the user enters a number
 less than 0.
import random
num_sixes = 0
num_sevens = 0
num_rolls = int(input('Enter number of rolls:\n'))
while (input <= 1):
for i in range(num_rolls):
die1 = random.randint(1,6)
die2 = random.randint(1,6)
roll_total = die1 + die2
#Count number of sixes and sevens
if roll_total == 6:
num_sixes = num_sixes + 1
if roll_total == 7:
num_sevens = num_sevens + 1
print('Roll %d is %d (%d + %d)' % (i, roll_total, die1, die2))
print('\nDice roll statistics:')
print('6s:', num_sixes)
print('7s:', num_sevens)
else:
print('Invalid number of rolls. Try again.')
Here's the traceback:
Traceback (most recent call last):
File "unorderable.py", line 7, in <module>
while (input <= 1):
TypeError: unorderable types: builtin_function_or_method() <= int()
Answer: You are comparing whether the built-in function `input` is less than or equal
to 1
while (input <= 1):
Did you mean to do this instead?
while (num_rolls > 0):
|
How to copy a folder (divided into subfolders) using Python?
Question: I am trying to copy a directory (divided into folders and subfolders) to a new
 folder that will be created. I work with Python 2.7.
* dir_src = an exist folder
* dir_dst = a new folder (not exist) that all the folders will be copied to
I read <https://docs.python.org/2/library/shutil.html> and tried this code:
import os,shutil
dir_src = r"C:\Project\layers"
dir_dst = r"C:\Project\new"
for file in os.listdir(dir_src):
print file
src_file = os.path.join(dir_src, file)
dst_file = os.path.join(dir_dst, file)
shutil.copytree(src_file, dst_file,symlinks=False, ignore=None)
print 'copytree'
But i get an error:
WindowsError: [Error 267] : 'C:\\Project\\layers\\abc.cpg/*.*'
Thank you very much for any help.
Answer: The error you are getting (Error 267, "The directory name is invalid") tells you
what the problem is: `shutil.copytree` expects its source argument to be a directory,
but `os.listdir` also yields plain files (such as `abc.cpg`), and calling `copytree`
on a file fails. Since the destination folder does not exist yet, the simplest fix is
to call `copytree` once on the source directory itself and let it create the destination.
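A minimal sketch of that approach, reusing the paths from the question (note that
`copytree` creates the destination folder itself, so it must not exist yet):
    import shutil
    dir_src = r"C:\Project\layers"
    dir_dst = r"C:\Project\new"
    shutil.copytree(dir_src, dir_dst, symlinks=False, ignore=None)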
|
Twitter Stream not working for Tweepy
Question: Using the code below, I'm trying to get a hash tag. It works fine for larger
searches like #StarWars, but when i ask for smaller ones it doesn't seem to
return anything.
Ideas?
_'code' is used instead of the actual strings for authentication_
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
from textwrap import TextWrapper
import json
access_token = "code"
access_token_secret = "code"
consumer_key = "code"
consumer_secret = "code"
class StdOutListener(StreamListener):
''' Handles data received from the stream. '''
status_wrapper = TextWrapper(width=60, initial_indent=' ', subsequent_indent=' ')
def on_status(self, status):
try:
print self.status_wrapper.fill(status.text)
print '\n %s %s via %s\n' % (status.author.screen_name, status.created_at, status.source)
except:
# Catch any unicode errors while printing to console
# and just ignore them to avoid breaking application.
pass
def on_error(self, status_code):
print('Got an error with status code: ' + str(status_code))
return True # To continue listening
def on_timeout(self):
print('Timeout...')
return True # To continue listening
if __name__ == '__main__':
listener = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, listener)
stream.filter(track=['#TestingPythonTweet'])
Answer: Ok, so it turns out the answer to this is that I was expecting it to work retro-
actively. This was a fundamental error on my part. Instead, what actually
happens is that it gets what's _currently_ being tweeted, not what has been
tweeted previously.
|
Subtract List B from List A, but keeping the List A index and using difflib string similarity
Question: I need some help with Python. This is not the classic subtract List B from
List A to make List C. Instead I would like to look at the indexes of the
items in List A (city names in a single word) that are not in List B, and
store them into a new List C. Also, matched items in List B are not exactly
the same that the ones in List A, they come from OCR, so they are little
misspelled, I would like to consider a match if they are 90% similar.
e.g.
List A: #all list items are citynames in just a single word
0. Corneria
1. klandasco
2. Blue_Mars
3. Setiro
4. Jeti_lo
5. Neo_Tokyo
List B: #citynames are little misspelled
0. lcandasco
1. Ne0_Tolcyo
So, the result should be...
List C:
[0, 2, 3, 4]
The result items are not important (Corneria, Blue_Mars, Setiro, Jeti_lo),
instead I need to keep the original indexes of the items in List A, once the
subtraction has been made.
So far Im doing this...
a = ["aaa", "bbb", "ccc", "ddd", "ccc", "eee"]
b = ["bbb", "eee"]
c = [i for i, v in enumerate(a) if v not in b]
print(c)
output...
[0, 2, 3, 4]
But I need to implement the difflib part in order to match items with 90%
similarity. How could I do this using only a pure Python script (preferably
using only difflib)?
Answer: How about this:
from difflib import SequenceMatcher
max_ratio = 0.9
c = [i for i, v in enumerate(a)
if not any(map(lambda x: SequenceMatcher(None, v, x).ratio()>=max_ratio, b))]
Snippet which used `fuzzywuzzy`:
from fuzzywuzzy import fuzz
max_ratio = 90
c = [i for i, v in enumerate(a)
if not any(map(lambda x: fuzz.ratio(v, x)>=max_ratio, b))]
Note. Before using `fuzzywuzzy` you should install it.
|
Using Django/python to JSON posts
Question: I'm new to using Django so please forgive my ignorance with such a general
question. How would you go about displaying JSON posts from a python script? I
have a python script which updates an sqlite3 database and creates a JSON
object. I want to display this data on a Django frontend. Bonus points if you
can give advice on how to make it live update as the database is updated. Any
guidance would be helpful.
Answer: See Django's [JsonResponse](https://docs.djangoproject.com/en/1.8/ref/request-
response/#jsonresponse-objects). Use this in your views.py and return the json
as a response in your view.
from django.http import JsonResponse
...
return JsonResponse(<your_JSON_object>)
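One detail to be aware of: by default `JsonResponse` only accepts a `dict`. If your
script produces a list of posts, pass `safe=False`; a sketch with hypothetical data:
    from django.http import JsonResponse
    def latest_posts(request):
        # hypothetical rows produced by your script / sqlite3 database
        data = [{"id": 1, "title": "first post"}, {"id": 2, "title": "second post"}]
        return JsonResponse(data, safe=False)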
|
Use with..open to read a list of xls files to copy into a single workbook with multiple sheets
Question: First timer here. My overall goal is to copy exactly, the data in 3 different
xls files into one xls workbook with a sheet for each original xls file. In a
non-programmatic sense, I want to copy and paste the data from each xls file
into it's own sheet in a new workbook (xls or xlsx file). I've been using xlrd
& xlwt to do this and with the help of searching around SO, I've been able to
get most of the code set. However, I'm having difficult time comprehending how
to use the with...open command to read the data from each original xls file
and put it on it's own sheet. As you will probably learn from my code block,
my Python skills are limited. Thx!
import xlwt as xlwt
from os.path import join
import xlrd
wb = xlwt.Workbook()
path = r'C:\data_path\\'
xls1 = 'file1.xls'
xls2 = 'file2.xls'
xls3 = 'file3.xls'
Sheet1 = 'file1_data'
Sheet2 = 'file2_data'
Sheet3 = 'file3_data'
names = [Sheet1, Sheet2, Sheet3]
dataset = [path + xls1, path + xls2, path + xls3]
for name in names:
wb.add_sheet(name)
for n, data in enumerate(dataset):
**I feel there should be some type of with..open statement here**
ws = wb.get_sheet(n)
ws.write(0,0, data)
wb.save(join(path,'test.xls'))
Answer: Assuming that each source file has only one sheet:
import xlwt as xlwt
from os.path import join
import xlrd
output = xlwt.Workbook()
path = r'C:\data_path\\'
xls1 = 'file1.xls'
xls2 = 'file2.xls'
xls3 = 'file3.xls'
Sheet1 = 'file1_data'
Sheet2 = 'file2_data'
Sheet3 = 'file3_data'
names = [Sheet1, Sheet2, Sheet3]
dataset = [path + xls1, path + xls2, path + xls3]
for n, data in enumerate(dataset):
book = xlrd.open_workbook(data, formatting_info=True)
sheet = book.sheet_by_index(0)
r = output.add_sheet(names[n])
for row in range(sheet.nrows):
for column in range(sheet.ncols):
cell_val = sheet.cell_value(rowx=row, colx=column)
r.write(row, column, cell_val)
output.save(join(path,'test.xls'))
|
How to check if a word is more common in its plural form rather than in its singular form in an array of words (with Python/NLTK)?
Question: I'm trying to do the NLTK exercises but I can't do this one. "Which nouns are
more common in their plural form, rather than their singular form? (Only
consider regular plurals, formed with the -s suffix.)". I spent a day thinking
about this and trying things but I just can't get it. Thank you.
Answer: Take a corpus, do a count:
>>> from collections import Counter
>>> from nltk.corpus import brown
>>> texts = brown.words()[:10000]
>>> word_counts = Counter(texts)
>>> word_counts['dollar']
5
>>> word_counts['dollars']
15
But do note that sometimes it's unclear when you use only surface strings in
counting, e.g.
>>> texts = brown.words()[:10000]
>>> word_counts = Counter(texts)
>>> word_counts['hits']
14
>>> word_counts['hit']
34
>>> word_counts['needs']
14
>>> word_counts['need']
30
POS sensitive counts (see types vs tokens):
>>> texts = brown.tagged_words()[:10000]
>>> word_counts = Counter(texts)
>>> word_counts[('need', 'NN')]
6
>>> word_counts[('needs', 'NNS')]
3
>>> word_counts[('hit', 'NN')]
0
>>> word_counts[('hits', 'NNS')]
0
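Putting those pieces together, a rough sketch (assuming the Brown tagset, where `NN`
is a singular common noun and `NNS` its regular plural) that lists nouns seen more
often in the plural:
    from collections import Counter
    from nltk.corpus import brown
    tagged = brown.tagged_words()[:10000]
    singular = Counter(w.lower() for w, t in tagged if t == 'NN')
    plural = Counter(w.lower() for w, t in tagged if t == 'NNS')
    # a noun "wins" if its -s form is counted more often than its singular form
    more_common_plural = sorted(w for w in plural
                                if w.endswith('s') and plural[w] > singular[w[:-1]])
    print(more_common_plural[:20])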
Let's reverse engineer a little, `brown` corpus is nice and it's tokenized and
tagged in NLTK but if you want to use your own corpus, then you have to
consider the follow:
* Which corpus to use? How to tokenize? How to POS-tag?
* What are you counting? Types or tokens?
* How to handle POS ambiguity? How to differentiate Nouns from non-Nouns?
Finally, consider this:
* Is there really a way to find out whether plural or singular is more common for a word in language? Or will it always be relative to the corpus you chose to analyze?
* Are there cases where plural or singular don't exists for certain nouns? (Most probably the answer is yes).
|
Parsing text event file in Python
Question: I have a large text file with Event data that I am trying to parse to a csv.
The structure looks like this:
START
USER: a
TIME: 1000
CLICKS: 1
COMMAND A: 2
COMMAND B: 1
END
START
USER: b
TIME: 00
CLICKS: 1
COMMAND A: 2
COMMAND B: 1
COMMAND C: 1
END
The events are separated using the START and END tags and I am trying to parse
it to create a csv file that has each event as a row, and the other attributes
as columns, so in the example above, the columns would be USER, TIME, CLICKS,
COMMAND A, COMMAND B, COMMAND C and the values for each would be the value
after the :
I know that this code will read an individual event:
with open('sampleIVTtxt.txt', 'r') as input_data:
for line in input_data:
if line.strip() == 'START REPORT':
break
for line in input_data:
if line.strip() == 'END':
Where I am stuck is how to parse the lines within the event block and store
them as columns and values in a csv. I'm thinking for each line within the
event block I need to parse out the column name using regex and then store
those names in an array and use writerow(namesarray) to create the columns.
But I'm not sure how to loop through the whole txt file and store subsequent
event values in those columns.
I am new to python, so any help would be appreciated.
Answer: Something like:
import csv
with open('sampleIVTtxt.csv', 'w') as csvfile:
fieldnames = ['USER', 'TIME','CLICKS','COMMAND_A','COMMAND_B','COMMAND_C']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
with open('sampleIVTtxt.txt', 'r') as input_data:
for line in input_data:
thisLine=line.strip()
if thisLine == 'START':
myDict={}
elif "USER" in thisLine:
myDict['USER'] = thisLine[6:]
....and so on....
elif thisLine == 'END':
writer.writerow(myDict)
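To avoid writing one `elif` per field, here is a slightly more generic sketch, assuming
every line inside a block has the `KEY: value` form shown in the sample:
    import csv
    fieldnames = ['USER', 'TIME', 'CLICKS', 'COMMAND A', 'COMMAND B', 'COMMAND C']
    with open('sampleIVTtxt.csv', 'w') as csvfile, open('sampleIVTtxt.txt') as input_data:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval=0)
        writer.writeheader()
        record = {}
        for line in input_data:
            line = line.strip()
            if line == 'START':
                record = {}
            elif line == 'END':
                writer.writerow(record)
            elif ':' in line:
                key, _, value = line.partition(':')
                record[key.strip()] = value.strip()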
|
Echoing messages received through UDP back through a different TCP port
Question: I use Python 2.7.x (2.7.8) and am trying to write a program using twisted
python to function like this:
* Program waits for a messaged received through TCP 7001 or UDP 7000.
* Messages received through UDP 7000 are bridged over to TCP 7001 going out.
I couldn't figure out how to bridge UDP to TCP, so I looked up an example on
this site, like this one, [Twisted UDP to TCP
Bridge](http://stackoverflow.com/questions/20008281/twisted-udp-to-tcp-bridge)
but the problem is that the example is lying because it doesn't work. I added
a "print datagram" on datagramReceived to see if UDP responds to receiving
**anything at all** and it does not. This is totally frustrating.
Here's my current test code altered **a little bit** from that example:
from twisted.internet.protocol import Protocol, Factory, DatagramProtocol
from twisted.internet import reactor
class TCPServer(Protocol):
def connectionMade(self):
self.port = reactor.listenUDP(7000, UDPServer(self))
def connectionLost(self, reason):
self.port.stopListening()
def dataReceived(self, data):
print "Server said:", data
class UDPServer(DatagramProtocol):
def __init__(self, stream):
self.stream = stream
def datagramReceived(self, datagram, address):
print datagram
self.stream.transport.write(datagram)
def main():
f = Factory()
f.protocol = TCPServer
reactor.listenTCP(7001, f)
reactor.run()
if __name__ == '__main__':
main()
As you can see I changed the ports to comply with my test environment and
added a print datagram to see if anything calls datagramReceived. I have no
problems sending TCP things to this program, TCPServer works just fine because
dataReceived can be called.
Answer: I ran the code from your question (with one slight modification: I enabled
logging). I used `telnet` to connect to the TCP server on port 7001. I used a
Python REPL to create a UDP socket and send some datagrams to port 7000.
Here is my REPL transcript:
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>>> s.sendto('hello', ('127.0.0.1', 7000))
5
>>> s.sendto('world', ('127.0.0.1', 7000))
5
>>>
Here is my server log (formatted to fit on your screen):
... Log opened.
... Factory starting on 7001
... Starting factory <twisted.internet.protocol.Factory instance at 0x2b9b128>
... UDPServer starting on 7000
... Starting protocol <__main__.UDPServer instance at 0x2e8f8c0>
... hello
... world
And here's my telnet session transcript:
$ telnet localhost 7001
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
helloworld
My interpretation of these results is that the program actually works as
specified. I wonder what you did differently when you tried it that produced
different, non-working results.
|
PYTHON 3: Read a file and map the coordinates to the turtle screen
Question: So I have to write a program that reads a `csv` file which contains earthquake
 data; it then takes only the latitude and longitude from the file and maps them
 to the turtle screen in Python 3.
This is the [CSV file](http://textuploader.com/gayg)
My program:
import turtle
def drawQuakes():
filename = input("Please enter the quake file: ")
readfile = open(filename, "r")
readlines = readfile.read()
start = readlines.find("type")
g = readlines[start]
Type = g.split(",")
tracy = turtle.Turtle()
tracy.up()
for points in Type:
print(points)
x = float(Type[1])
y = float(Type[2])
tracy.goto(x,y)
tracy.write(".")
drawQuakes()
I know this program is fairly easy, but I keep getting this error:
x = float(Type[1])IndexError: list index out of range
Answer: You aren't using the file correctly, let's break it down:
readlines = readfile.read()
# readlines is now the entire file contents
start = readlines.find("type")
# start is now 80
g = readlines[start]
# g is now 't'
Type = g.split(",")
# Type is now ['t']
tracy = turtle.Turtle()
tracy.up()
for points in Type:
print(points)
# points is 't'
x = float(Type[1])
# IndexError: list index out of range
y = float(Type[2])
tracy.goto(x,y)
tracy.write(".")
I would use the csv.DictReader:
import turtle
import csv
def drawQuakes():
filename = input("Please enter the quake file: ")
tracy = turtle.Turtle()
tracy.up()
with open(filename, 'r') as csvfile:
            reader = csv.DictReader(csvfile)
for row in reader:
if row['type'] == 'earthquake':
x = float(row['latitude'])
y = float(row['longitude'])
tracy.goto(x,y)
tracy.write(".")
drawQuakes()

|
Is there any function in Python similar to 'which' and 'open' in matlab?
Question: In matlab, it's easy to find the path to a '.m' file by 'which XX.m' and also
convenient to view the code by 'open XXX.m'.
In Python, is there any similar command?
Answer: If you've already imported the module (or can do so without any harm), the
[`inspect`](https://docs.python.org/3/library/inspect.html) module is probably
what you want. For example, `getsourcefile` will give you the path to the file
the module was loaded from, while `getsource` will give you the source code.
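For example, a quick sketch using the standard-library `json` module as a stand-in
for `XX`:
    import inspect
    import json
    print(inspect.getsourcefile(json))    # path to the module's source file, like `which`
    print(inspect.getsource(json.dumps))  # the function's source code, like `open`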
Of course these don't always work, because some Python modules can be
extension modules written in C, or cached `.pyc` files that someone installed
without the `.py` file, or bundled up in a `.zip` file instead of a flat
directory, or anything else you can think up and write a loader for… But in
those cases, _nothing_ could work; when something reasonable could work,
`inspect` will.
There's nothing quite like the `open` function because Matlab is a GUI
environment, whereas Python is just a language that can run in a wide variety
of different GUIs or none at all, but once you know the path, presumably you
can figure out how to open it in your IDE session or in your favorite text
editor or whatever.
* * *
If you can't actually import the module (maybe the reason you're asking is
because `import XX` is failing and you want to find the code to fix the
problem…), but want to know which module _would_ be imported, that's not quite
as easy in Python 2.7 as in 3.4, but the
[`imp`](https://docs.python.org/2.7/library/imp.html) module is often good
enough—in particular, `imp.find_module('XX')` will usually get you what you
want (as long as you pay attention to the explanation about packages in the
docs).
|
How to check if linux host is up-to-date with python yum module
Question: The yum commandline tool has an option to check if a host is up-to-date or if
there are any (security-) updates available.
$ yum check-update
Python has a "yum" Module - it offers functions like:
import yum
yb = yum.YumBase()
if yb.rpmdb.searchNevra(name='httpd'):
print "httpd installed"
But I didn't find any documentation / function to check if updates are
available with the python module (like "yum check-updates" on the command
line).
Does anyone know how to do this using the yum python module (not calling the
commandline tool and scraping the output).
Answer: After debugging the commandline (python) scripts I found a solution how to
check for updates ... see [here](http://slash4.net/blog/python/python-yum-
check-update-howto.html)
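For reference, a minimal sketch of what that boils down to, assuming yum 3.x's
`doPackageLists` API (refreshing the repository metadata may require root privileges):
    import yum
    yb = yum.YumBase()
    pkgs = yb.doPackageLists(pkgnarrow='updates')  # packages with updates available
    if pkgs.updates:
        for pkg in pkgs.updates:
            print('%s %s-%s' % (pkg.name, pkg.version, pkg.release))
    else:
        print('host is up-to-date')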
|
With python multiprocessing how can child process terminate another child process?
Question: I have two child processes created by my main application process.
A = Process(target=a, args=(aaa,))
B = Process(target=b,args=())
For the sake of argument they are called A and B; How could process A
terminate Process B?
Answer: I would seriously not recommend letting child processes nuke each other;
personally I would have the main process manage the child processes.
If you want to go down the route you are currently on, one way you could do
what you want would be to get the main process to pass the process id of
child `A` to child `B` (or however you want to do it). Using the process ID
that was passed in, the receiving process can then terminate the other one.
Option 1: If only one process needs to be able to terminate the other, then the simplest
solution is for process `B` to be started first, and for process `A` to be
passed process `B`'s process ID. Using this you can then terminate the process.
    import os, signal
    def a(pid):
        os.kill(pid, signal.SIGTERM)
    B = Process(target=b, args=())
    B.start()  # B.pid is only set once the process has actually started
    A = Process(target=a, args=(B.pid,))
Option 2: To do this you can use Queues or Pipes (from the [multiprocessing
library](https://docs.python.org/2/library/multiprocessing.html#exchanging-
objects-between-processes)).
from multiprocessing import Process, Queue
def f(q):
# will send the process id of the alternative child process
print q.get()
if __name__ == '__main__':
queue_A = Queue()
queue_B = Queue()
A = Process(target=f, args=(queue_A,))
B = Process(target=f,args=(queue_B,))
A.start()
B.start()
queue_A.put(B.pid)
queue_B.put(A.pid)
# do stuff....
Option 3 (my preferred method): Get the main process to do the terminating.
A = Process(target=a, args=())
B = Process(target=b,args=())
A.terminate()
B.terminate()
|
Python error when executing shell script
Question: I'm executing a line of shell script using python. It isn't working with
`which` to test of the existence of homebrew
#!/usr/bin/python
import sys, subprocess
subprocess.call(["which python"])
throws the long error
Traceback (most recent call last):
File "installations.py", line 5, in <module>
subprocess.call(["which python"])
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 709, in __init__
errread, errwrite)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 1326, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
But I know that the shell execution is working correctly on some level
#!/usr/bin/python
import sys, subprocess
subprocess.call(["whoami])
prints my username just fine. Am I doing something wrong, or is which for some
reason not supported. Is there some better supported way of detecting the
existence of an installation?
Answer: The failing call is trying to find an executable named 'which python' not
running `which` with an argument of `python` as you likely intended. The list
that you pass to call (unless `shell=True` is set) is the list of the command
and all the arguments. Doing
subprocess.call(['which', 'python'])
will probably give you what you are looking for.
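If the goal is just to detect whether something like homebrew is installed, another
option is to skip `which` entirely and stay in Python; `distutils.spawn.find_executable`
returns the path of the executable or `None` (a sketch):
    from distutils.spawn import find_executable
    brew = find_executable('brew')
    if brew is None:
        print('homebrew is not installed')
    else:
        print('homebrew found at %s' % brew)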
|
Batch Gradient Descent for Logistic Regression
Question: I've been following Andrew Ng CSC229 machine learning course, and am now
covering logistic regression. The goal is to maximize the log likelihood
function and find the optimal values of theta to do so. The link to the
lecture notes is: [<http://cs229.stanford.edu/notes/cs229-notes1.ps][1]>
-pages 16-19. Now the code below was shown on the course homepage (in matlab
though--I converted it to python).
I'm applying it to a data set with 100 training examples (a data set given on
the Coursera homepage for a introductory machine learning course). The data
has two features which are two scores on two exams. The output is a 1 if the
student received admission and 0 if the student did not receive admission. I
have shown all of the code below. The following code causes the likelihood
function to converge to maximum of about -62. The corresponding values of
theta are [-0.05560301 0.01081111 0.00088362]. Using these values when I test
out a training example like [1, 30.28671077, 43.89499752] which should give a
value of 0 as output, I obtain 0.576 which makes no sense to me. If I test the
hypothesis function with input [1, 10, 10] I obtain 0.515 which once again
makes no sense. These values should correspond to a lower probability. This
has me quite confused.
import numpy as np
import sig as s
def batchlogreg(X, y):
max_iterations = 800
alpha = 0.00001
(m,n) = np.shape(X)
X = np.insert(X, 0, 1, 1)
theta = np.array([0] * (n+1), 'float')
ll = np.array([0] * max_iterations, 'float')
for i in range(max_iterations):
hx = s.sigmoid(np.dot(X, theta))
d = y - hx
theta = theta + alpha*np.dot(np.transpose(X),d)
ll[i] = sum(y * np.log(hx) + (1-y) * np.log(1- hx))
return (theta, ll)
Answer: Note that the sigmoid function has:
sig(0) = 0.5
sig(x > 0) > 0.5
sig(x < 0) < 0.5
Since you get all probabilities above `0.5`, this suggests that you never make
`X * theta` negative, or that you do, but your learning rate is too small to
make it matter.
for i in range(max_iterations):
hx = s.sigmoid(np.dot(X, theta)) # this will probably be > 0.5 initially
d = y - hx # then this will be "very" negative when y is 0
theta = theta + alpha*np.dot(np.transpose(X),d) # (1)
ll[i] = sum(y * np.log(hx) + (1-y) * np.log(1- hx))
The problem is most likely at `(1)`. The dot product will be very negative,
but your `alpha` is very small and will negate its effect. So `theta` will
never decrease enough to properly handle correctly classifying labels that are
`0`.
Positive instances are then only barely correctly classified for the same
reason: your algorithm does not discover a reasonable hypothesis under your
number of iterations and learning rate.
**Possible solution:** increase `alpha` and / or the number of iterations, or
use [momentum](http://cs231n.github.io/neural-networks-3/#sgd).
|
How to get the authenticated user name under apache while in python code?
Question: Basically I just need to get the authenticated username from apache2, (for
basic / similar authentication schemes) but while in python.
In php, this is simple, and looks like this:
$username = $_SERVER['PHP_AUTH_USER'];
    $password = $_SERVER['PHP_AUTH_PW'];
Here is what the relevent section of the config in the site's sites-enabled
config looks like:
#Provides Authentication for the admin pages
<Location /admin>
AuthType Basic
AuthName "example.com"
AuthUserFile /data/passwd_admin
Require valid-user
</Location>`
However, I cannot seem to find any method of retrieving this information from
python.
Answer:
from flask import request,Flask
import json
app = Flask(__name__)
@app.route("/")
def index():
auth = request.authorization
if not auth:return "No Auth!"
return auth.username+":"+auth.password
app.run(debug=True)
see : <http://flask.pocoo.org/snippets/8/>
|
Extracting data with Headless Selenium browser with Python
Question: I'm trying to find a way to extract the GA dataLayer object from a website
 with a headless browser via Python, so I've followed the instructions
 from [here](https://realpython.com/blog/python/headless-selenium-testing-with-python-and-phantomjs/) and...
from selenium import webdriver
driver = webdriver.PhantomJS()
driver.get("http://www.example.com/")
# DL = currentURL.dataLayer
# Do something with DL
What is the keyword I'm looking for here? `driver.something` should give me
the dataLayer object. As a reminder: dataLayer object is a javascript object.
Answer: Since, from what I understand, `dataLayer` is a global variable, use
[`execute_script()`](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.remote.webdriver.WebDriver.execute_script):
driver.execute_script("return dataLayer;")
Note, that you may need to [explicitly wait](https://selenium-
python.readthedocs.org/waits.html#explicit-waits) for the page to load before
executing the script.
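A minimal sketch of such a wait, assuming the presence of the `<body>` element (or any
element you know appears once the tag-manager script has run) is a good-enough
readiness signal:
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "body"))
    )
    data_layer = driver.execute_script("return dataLayer;")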
Or, at least, increase the [page load timeout](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.remote.webdriver.WebDriver.set_page_load_timeout):
driver.set_page_load_timeout(10)
|
Modifying html files in terms of tags?
Question: I have several html files, whose contents are like this:
<html>
<header>
<title>A test</title>
</header>
<body>
<table>
<tr>
<td id="MenuTD" style="vertical-align: top;">
Stuff here <a>with a link</a>
<p>Or paragraph tags</p>
<div>Or a DIV</div>
</td>
<td>Another TD element, without the MenuTD id</td>
</tr>
</table>
<div>
<link rel="stylesheet" href="\d\d\d\d_files/zannotationtargettoggle.css" type="text/css">
</div>
</body>
</html>
where `\d` is a placeholder of a digit, and the exact digit varies from file
to file.
I want to write a Python program to convert each html file to the following
form:
<html>
<header>
<title>A test</title>
</header>
<body>
<link rel="stylesheet" href="\d\d\d\d_files/zannotationtargettoggle.css" type="text/css">
<td id="MenuTD" style="vertical-align: top;">
Stuff here <a>with a link</a>
<p>Or paragraph tags</p>
<div>Or a DIV</div>
</td>
</body>
</html>
Specifically,
1. How can we extract the header tag `<header>...</header>` and `<link rel="stylesheet" href="\d\d\d\d_files/zannotationtargettoggle.css" type="text/css">` , given that they don't have IDs?
2. If the body tag has attributes e.g. `<body style="margin-left: 6px; cursor: default;" onload="InitBody();">...</body>`, How shall we first empty the content `...` inside its beginning and ending tags, and then add `<link rel="stylesheet" href="\d\d\d\d_files/zannotationtargettoggle.css" type="text/css">` and the content of `menu_td` inside?
Thanks!
Answer: You can use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/) to
modify the input document:
import bs4
doc = bs4.BeautifulSoup(s) # s your input html
td = doc.find('td')
doc.find('table').replace_with(doc.find('link'))
doc.find('div').replace_with(td)
Testing the resulting document:
>>> print str(doc)
<html>
<body><header>
<title>A test</title>
</header>
<link href="\d\d\d\d_files/zannotationtargettoggle.css" rel="stylesheet" type="text/css"/>
<td id="MenuTD" style="vertical-align: top;">
Stuff here <a>with a link</a>
<p>Or paragraph tags</p>
<div>Or a DIV</div>
</td>
</body></html>
or you could construct a new document:
doc = bs4.BeautifulSoup(s)
doc2 = bs4.BeautifulSoup('<html />')
doc2.html.append(doc.header)
doc2.html.append(doc2.new_tag('body'))
doc2.body.append(doc.link)
doc2.body.append(doc.find('td'))
|
What's the difference between jython-standalone-2.7.0.jar and jython-2.7.0.jar
Question: I wrote a Java example, the code is:
import org.python.core.PyObject;
import org.python.util.PythonInterpreter;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import java.util.List;
class JythonExample {
public static void main(String args[]) throws ScriptException {
listEngines();
ScriptEngineManager mgr = new ScriptEngineManager();
ScriptEngine pyEngine = mgr.getEngineByName("python");
try {
pyEngine.eval("print \"Python - Hello, world!\"");
} catch (Exception ex) {
ex.printStackTrace();
}
final PythonInterpreter interpreter = new PythonInterpreter();
interpreter.exec("print \"Python - Hello, world!\"");
PyObject result = interpreter.eval("2 + 3");
System.out.println(result.toString());
}
public static void listEngines(){
ScriptEngineManager mgr = new ScriptEngineManager();
List<ScriptEngineFactory> factories =
mgr.getEngineFactories();
for (ScriptEngineFactory factory: factories) {
System.out.println("ScriptEngineFactory Info");
String engName = factory.getEngineName();
String engVersion = factory.getEngineVersion();
String langName = factory.getLanguageName();
String langVersion = factory.getLanguageVersion();
System.out.printf("\tScript Engine: %s (%s)\n",
engName, engVersion);
List<String> engNames = factory.getNames();
for(String name: engNames) {
System.out.printf("\tEngine Alias: %s\n", name);
}
System.out.printf("\tLanguage: %s (%s)\n",
langName, langVersion);
}
}
}
In my `pom.xml`, if I use:
<dependency>
<groupId>org.python</groupId>
<artifactId>jython-standalone</artifactId>
<version>2.7.0</version>
</dependency>
then I can run `java -jar target/jython-example-1.0-SNAPSHOT.jar` successfuly,
by the way, I used `maven-assembly-plugin` to build a runnable jar.
if I use:
<dependency>
<groupId>org.python</groupId>
<artifactId>jython</artifactId>
<version>2.7.0</version>
</dependency>
then when I run `java -jar target/jython-example-1.0-SNAPSHOT.jar`, I'll
always get the following error:
ScriptEngineFactory Info
Script Engine: jython (2.7.0)
Engine Alias: python
Engine Alias: jython
Language: python (2.7)
ScriptEngineFactory Info
Script Engine: Oracle Nashorn (1.8.0_31)
Engine Alias: nashorn
Engine Alias: Nashorn
Engine Alias: js
Engine Alias: JS
Engine Alias: JavaScript
Engine Alias: javascript
Engine Alias: ECMAScript
Engine Alias: ecmascript
Language: ECMAScript (ECMA - 262 Edition 5.1)
java.lang.NullPointerException
at me.soulmachine.JythonExample.main(JythonExample.java:21)
Exception in thread "main" ImportError: Cannot import site module and its dependencies: No module named site
Determine if the following attributes are correct:
* sys.path: ['/home/programmer/src/github/JythonExample/JythonExample/target/Lib', '__classpath__', '__pyclasspath__/']
This attribute might be including the wrong directories, such as from CPython
* sys.prefix: /home/programmer/src/github/JythonExample/JythonExample/target
This attribute is set by the system property python.home, although it can
be often automatically determined by the location of the Jython jar file
You can use the -S option or python.import.site=false to not import the site module
It seems the `pyEngine` is `null`.
So I wonder what's the difference between `jython-standalone-2.7.0.jar` and
`jython-2.7.0.jar`
Answer: One problem I've just discovered with the same error is that the maven build
2.7.0 does not include the lib folder. This is probably a build error for the
release build. I had to move up to the b2 build, which does properly include the
lib folder in the supplied jar.
Problem maven 2.7.0 jar:
`<dependency> <groupId>org.python</groupId> <artifactId>jython-
standalone</artifactId> <version>2.7.0</version> </dependency>`
Working maven 2.7.1b2 that includes the lib folder:
`<dependency> <groupId>org.python</groupId> <artifactId>jython-
standalone</artifactId> <version>2.7.1b2</version> </dependency>`
NOTE: If you download the jar directly from the Jython site it does correctly
include the lib folder. It's just the maven repository version.
|
Python client write to Java's writeUTF method
Question: I have the following code snippet for a **Java** server that can't be changed:
....
while(true){
try {
System.out.println("Will listen . . .");
instruction = this.dis.readUTF(); // Socket's dataInputStream
System.out.println("Got instruction: " + instruction);
} catch (IOException ex) {
Logger.getLogger(ServerThread.class.getName()).log(Level.SEVERE, null, ex);
}
....
} // end while(true)
And I have the following client code for **python** :
....
self.socket.send(b"GET~CARD\n")
print self.socket.recv(1024)
....
The problem that I'm facing is that I can get the client to send information
to the server but the server won't stop listening, so it stays on the blocking
call `this.dis.readUTF();`.
As you can see I tried using the `\n` character at the end of the string but
it stays listening. Does anybody knows how to write from a python client to a
java server using `readUTF()`?
Answer: Have a look at the docs for the `readUTF` function
[here](https://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#readUTF\(\))
Primarily, this stands out
> First, two bytes are read and used to construct an unsigned 16-bit integer
> in exactly the manner of the readUnsignedShort method . This integer value
> is called the UTF length and specifies the number of additional bytes to be
> read. These bytes are then converted to characters by considering them in
> groups. The length of each group is computed from the value of the first
> byte of the group. The byte following a group, if any, is the first byte of
> the next group.
Try something like this in your python code
    import struct
    message = u"GET~CARD\n".encode("utf-8")
    size = len(message)  # length in bytes, which is what readUTF expects
    ...
    sock.send(struct.pack("!H", size))
    sock.send(message)
|
Why default JSONEncoder in python cannot serialize abc.Sequence and abc.Mappings?
Question: So, if we create some class, implementing `collections.abc.MutableSequence`,
and `list` is such an example,
class Woof(collections.MutableSequence):
def __init__(self, data=[]):
self.s = list(data)
def __setitem__(self, k, v):
self.s[k] = v
def __getitem__(self,k):
return self.s[k]
def __delitem__(self,k):
del self.s[k]
def __repr__(self):
return repr(self.s)
def __len__(self):
return len(self.s)
def insert(self,k,v):
self.s[k] = v
we cannot pass its instance to default JSONEncoder.
>>> w = Woof(range(5,10))
>>> json.dumps(w)
TypeError: [5, 6, 7, 8, 9] is not JSON serializable
Question is not **how** to serialize custom classes, but **why** python can
serialize only its own implementations of `Sequence` and `Mapping`, but not
arbitrary ones, implementing exactly same interface.
Answer: As far as I know, Bob Ippolito (the primary author of
[`simplejson`](http://simplejson.readthedocs.org/), who also did the changes
to stdlibify it as [`json`](http://docs.python.org/library/json) in the
stdlib, and does most of the maintenance on the stdlib version) has never
given his rationale for this, so if you really wanted to know for sure, the
only thing you could do is to ask him.
But I think there's an obvious answer: If you encoded all `Sequence` types to
JSON arrays, there'd be no way to distinguish the difference `Sequence` types
at decode time. And, since your code is presumably using something other than
`list` for some reason, and some reason that the `json` module can't predict,
it may be throwing away important information.
That said, wanting to treat other sequence types—or even arbitrary
iterables—the same as `list` is a common-enough use case that it was made easy
to do if you want it, and it's [the example the docs use for
`JSONEncode.default`](https://docs.python.org/3/library/json.html#json.JSONEncoder.default).
|
Import scip to python
Question: I've tried unsuccessfully to get scip running with python. I'm using Yosemite
 (10.10.3), python 2.7 and have installed the SCIP Optimization Suite
(<http://scip.zib.de/download.php?fname=scipoptsuite-3.1.1.tgz>) with make.
I can start scip after the installation via the terminal.
    sages-MBP:~ sage$ scip
    SCIP version 3.1.1 [precision: 8 byte] [memory: block] [mode: optimized] [LP solver: SoPlex 2.0.1] [GitHash: bade511]
But when I try to use `import scip` in python there appears the message
No module named scip
The same error message appears for `from zibopt import scip`
If I print the system path with `print sys.path` in python, the folder
`scipoptsuite-3.1.1/scip-3.1.1/bin` is included where the file
`scip-3.1.1.darwin.x86_64.gnu.opt.spx` is situated. Is it possible that the
reason for the error message is that I haven't linked correctly necessary
libraries?
Or which folders have to be included in the pythonpath to get scip working?
I Hope someone can help me out!
Answer: You need to install the python interface that comes with SCIP. Go to
`scip/interfaces/python/` and read the instructions in `README` and `INSTALL`.
This interface is using [Cython](http://cython.org/) to communicate with the
C-Code of SCIP.
`make` will only install the native Linux or Mac binaries/libraries.
The environment variable `DYLD_LIBRARY_PATH` needs to be set to contain the
`lib/` directory of the SCIPoptSuite installation (see comment by @саша)
|
How to upgrade Django app from 1.6.11 to 1.8 when makemigrations crashes?
Question: I'm following the instructions to upgrade a Django app from Django 1.6.11 to
 1.8.1. I've deleted all the migrations save for the `__init__.py`. I then run:
manage.py makemigrations
and it crashes with the following traceback:
Traceback (most recent call last):
File "manage.py", line 12, in <module>
execute_from_command_line(sys.argv)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/core/management/base.py", line 390, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/core/management/base.py", line 441, in execute
output = self.handle(*args, **options)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/core/management/commands/makemigrations.py", line 143, in handle
self.write_migration_files(changes)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/core/management/commands/makemigrations.py", line 171, in write_migration_files
migration_string = writer.as_string()
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 166, in as_string
operation_string, operation_imports = OperationWriter(operation).serialize()
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 124, in serialize
_write(arg_name, arg_value)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 75, in _write
arg_string, arg_imports = MigrationWriter.serialize(item)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 303, in serialize
item_string, item_imports = cls.serialize(item)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 377, in serialize
return cls.serialize_deconstructed(path, args, kwargs)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 268, in serialize_deconstructed
arg_string, arg_imports = cls.serialize(arg)
File "/Users/dwatson/Environments/website/lib/python2.7/site-packages/django/db/migrations/writer.py", line 465, in serialize
"topics/migrations/#migration-serializing" % (value, get_docs_version())
ValueError: Cannot serialize: <bound method AccountManager.gen_num of <coreapi.models.AccountManager object at 0x103925450>>
There are some values Django cannot serialize into migration files.
For more, see https://docs.djangoproject.com/en/1.8/topics/migrations/#migration-serializing
This function is a method defined in the AccountManager class, which derives
from models.Manager:
def gen_num(self):
max_digits = 9
candidate = random.randint(0, 10**max_digits)
return candidate
I've inspected the return candidate line and as expected, it's an int.
I assume that this method is being introspected because it is set as the
default for the associated field:
num = models.PositiveIntegerField(
_('Num'), help_text=_('Number, automatically-generated'),
unique=True, blank=True, null=True, default=AccountManager().gen_num
)
Answer: The solution is to follow the advice given here:
<https://docs.djangoproject.com/en/1.8/topics/migrations/#migration-
serializing>
That is, move the function from the class body of the Manager to the module
body:
def gen_num():
max_digits = 9
candidate = random.randint(0, 10**max_digits)
return candidate
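The field default should then reference the module-level function directly (a sketch, assuming `gen_num` ends up in the same module as the model):

    num = models.PositiveIntegerField(
        _('Num'), help_text=_('Number, automatically-generated'),
        unique=True, blank=True, null=True, default=gen_num
    )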
After this change, makemigrations completes successfully.
|
Basic Auth for CherryPy webapp
Question: I'm trying to create a very basic cherrypy webapp that will ask the user for a
username and password before the first (and only) page loads. I used the
example set forth in CherryPy docs here:
<http://cherrypy.readthedocs.org/en/latest/basics.html#authentication>
Here's my specific code for wsgi.py:
import cherrypy
from cherrypy.lib import auth_basic
from myapp import myapp
USERS = {'jon': 'secret'}
def validate_password(username, password):
if username in USERS and USERS[username] == password:
return True
return False
conf = {
'/': {
'tools.auth_basic.on': True,
'tools.auth_basic.realm': 'localhost',
'tools.auth_basic.checkpassword': validate_password
}
}
if __name__ == '__main__':
cherrypy.config.update({
'server.socket_host': '127.0.0.1',
'server.socket_port': 8080,
})
# Run the application using CherryPy's HTTP Web Server
cherrypy.quickstart(myapp(), '/', conf)
The above code gets me a browser user/pass prompt; however, when I click OK
on the prompt, I get the following error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 667, in respond
self.hooks.run('before_handler')
File "/usr/local/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 114, in run
raise exc
TypeError: validate_password() takes exactly 2 arguments (3 given)
I'm not sure where it thinks it's getting that 3rd argument from. Any
thoughts? Thanks!
Answer: From the documentation of cherrypy
checkpassword: a callable which checks the authentication credentials.
Its signature is checkpassword(realm, username, password). where
username and password are the values obtained from the request's
'authorization' header. If authentication succeeds, checkpassword
returns True, else it returns False.
So your implementation of checkpassword has to follow the same API, which is
`checkpassword(realm, username, password)`, and what you've shown us is missing
the first parameter: realm.
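A minimal sketch of the corrected checker (the realm argument just needs to be accepted, even if it is unused):

    def validate_password(realm, username, password):
        if username in USERS and USERS[username] == password:
            return True
        return False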
|
Why wouldn't Selenium be able to load a firefox profile?
Question: The following code:
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
import os.path
import os
FIREFOX_PATH = os.path.join("C:",
"FirefoxPortable", "FirefoxPortable.exe")
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
binary = FirefoxBinary(FIREFOX_PATH)
driver = webdriver.Firefox(firefox_binary=binary)
# go to the google home page
driver.get("http://www.google.com")
Fails with a complaint that the profile can't be loaded:
$ python test_selenium.py
Traceback (most recent call last):
File "test_selenium.py", line 17, in <module>
driver = webdriver.Firefox(firefox_profile=profile,firefox_binary=binary)
File "/usr/lib/python2.7/site-packages/selenium-2.45.0-py2.7.egg/selenium/webdriver/firefox/webdriver.py", line 59, in __init__
self.binary, timeout),
File "/usr/lib/python2.7/site-packages/selenium-2.45.0-py2.7.egg/selenium/webdriver/firefox/extension_connection.py", line 47, in __init__
self.binary.launch_browser(self.profile)
File "/usr/lib/python2.7/site-packages/selenium-2.45.0-py2.7.egg/selenium/webdriver/firefox/firefox_binary.py", line 66, in launch_browser
self._wait_until_connectable()
File "/usr/lib/python2.7/site-packages/selenium-2.45.0-py2.7.egg/selenium/webdriver/firefox/firefox_binary.py", line 105, in _wait_until_connectable
raise WebDriverException("Can't load the profile. Profile "
selenium.common.exceptions.WebDriverException: Message: Can't load the profile. Profile Dir: %s If you specified a log_file in the FirefoxBinary constructor, check it for details.
Any thoughts what is going on?
I confess I care not a whit about profiles (and have only the vaguest sense
what they are) - I just want a basic browser loaded so I can automate some web
scraping.
Many thanks! /YGA
Answer: As per [this](http://stackoverflow.com/questions/2422798/python-os-path-join-
on-windows) question, specifying the path as below worked. This searches for
the `FirefoxPortable` folder in `D:/`.
FIREFOX_PATH = os.path.join("D:/", "FirefoxPortable", "FirefoxPortable.exe")
Also, the below code is working fine. I removed the `import os` line as it was
redundant.
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait # available since 2.4.0
from selenium.webdriver.support import expected_conditions as EC # available since 2.26.0
import os.path
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
FIREFOX_PATH = os.path.join("D:/", "FirefoxPortable", "FirefoxPortable.exe")
binary = FirefoxBinary(FIREFOX_PATH)
driver = webdriver.Firefox(firefox_binary=binary)
# go to the google home page
driver.get("http://www.google.com")
driver.quit()
|
file parsing using hive query issue
Question: I have a complex text file to parse and load for analysis.
I started off with a simple Hive query to parse a text file and load as a
table in HDFS.
I am using beeswax to run this query.
## name_area.txt
arun:salem anand:vnr Cheeli:guntur
Hive Query
CREATE TABLE test( name STRING, area STRING) ROW FORMAT SERDE
'org.apache.hadoop.hive.contrib.serde2.RegexSerDe' WITH SERDEPROPERTIES
("input.regex" = "^(._):(._)$","output.format.string" = "%1$s %2$s") LOCATION
'/user/name_area.txt';
The file is copied to HDFS.
When I execute the query, I get the following exception.
NoReverseMatch at /beeswax/execute/6 Reverse for ‘execute_parameterized_query’
with arguments ‘(6,)’ and keyword arguments ‘{}’ not found. Request Method:
POST Request URL: <http://192.168.58.128:8000/beeswax/execute/6> Django
Version: 1.2.3 Exception Type: NoReverseMatch Exception Value: Reverse for
‘execute_parameterized_query’ with arguments ‘(6,)’ and keyword arguments ‘{}’
not found. Exception Location: /usr/lib/hue/build/env/lib/python2.6/site-
packages/Django-1.2.3-py2.6.egg/django/core/urlresolvers.py in reverse, line
297 Python Executable: /usr/bin/python2.6 Python Version: 2.6.6 Python Path:
[”, ‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/setuptools-0.6c11-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pip-0.6.3-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Babel-0.9.6-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/BabelDjango-0.2.2-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Beaker-1.4.2-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.7.2-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Markdown-2.0.3-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/MarkupSafe-0.9.3-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/MySQL_python-1.2.3c1-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Paste-1.7.2-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/PyYAML-3.09-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Pygments-1.3.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/South-0.7-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/Spawning-0.9.6-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/Twisted-8.2.0-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/anyjson-0.3.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/avro-1.5.0-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/billiard-2.7.3.28-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/celery-3.0.19-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/configobj-4.6.0-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/django_auth_ldap-1.2.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/django_celery-3.0.17-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/django_extensions-0.5-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/django_nose-0.5-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/elementtree-1.2.6_20050316-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/enum-0.4.4-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/eventlet-0.9.14-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/greenlet-0.3.1-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/httplib2-0.8-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/importlib-1.0.2-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/kerberos-1.1.1-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/kombu-2.5.10-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/lockfile-0.8-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/lxml-3.3.5-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/moxy-1.0.0-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/openpyxl-1.6.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/ordereddict-1.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/pam-0.1.3-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/processing-0.52-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/pyOpenSSL-0.13-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/pycrypto-2.6-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/pysqlite-2.5.5-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/python_daemon-1.5.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/python_dateutil-2.0-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/python_ldap-2.3.13-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/pytidylib-0.2.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/requests-2.2.1-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/requests_kerberos-0.4-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/sasl-0.1.1-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/sh-1.08-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/simplejson-2.0.9-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/threadframe-0.2-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/thrift-0.9.1-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/urllib2_kerberos-0.1.6-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages/xlrd-0.9.0-py2.6.egg’,
‘/usr/lib/hue/build/env/lib/python2.6/site-
packages/zope.interface-3.5.2-py2.6-linux-x86_64.egg’,
‘/usr/lib/hue/desktop/core/src’, ‘/usr/lib/hue/desktop/libs/hadoop/src’,
‘/usr/lib/hue/desktop/libs/liboozie/src’,
‘/usr/lib/hue/build/env/lib/python2.6/site-packages’,
‘/usr/lib/hue/apps/about/src’, ‘/usr/lib/hue/apps/beeswax/src’,
‘/usr/lib/hue/apps/filebrowser/src’, ‘/usr/lib/hue/apps/hcatalog/src’,
‘/usr/lib/hue/apps/help/src’, ‘/usr/lib/hue/apps/jobbrowser/src’,
‘/usr/lib/hue/apps/jobsub/src’, ‘/usr/lib/hue/apps/oozie/src’,
‘/usr/lib/hue/apps/pig/src’, ‘/usr/lib/hue/apps/proxy/src’,
‘/usr/lib/hue/apps/useradmin/src’, ‘/usr/lib/hue/build/env/bin’,
‘/usr/lib64/python2.6′, ‘/usr/lib64/python2.6/plat-linux2′,
‘/usr/lib64/python2.6/lib-dynload’, ‘/usr/lib64/python2.6/site-packages’,
‘/usr/lib/python2.6/site-packages’, ‘/usr/lib/python2.6/site-
packages/setuptools-0.6c11-py2.6.egg-info’, ‘/usr/lib/hue/apps/beeswax/gen-
py’, ‘/usr/lib/hue’, ‘/usr/lib64/python26.zip’, ‘/usr/lib64/python2.6/lib-tk’,
‘/usr/lib64/python2.6/lib-old’, ‘/usr/lib/python2.6/site-
packages/setuptools-0.6c11-py2.6.egg-info’, ‘/usr/lib/python2.6/site-
packages/setuptools-0.6c11-py2.6.egg-info’,
‘/usr/lib/hue/apps/beeswax/src/beeswax/../../gen-py’,
‘/usr/lib/hue/apps/jobbrowser/src/jobbrowser/../../gen-py’,
‘/usr/lib/hue/apps/proxy/src/proxy/../../gen-py’] Server time: Fri, 24 Apr
2015 07:37:07 -0700
Appreciate your help on this.
Answer: Your CREATE statement does not seem right. The LOCATION should be the
directory that contains the input file, not the file name itself.
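A sketch of the corrected statement, assuming the file is moved into a hypothetical HDFS directory `/user/name_area/` (the `(.*)` capture groups are an assumption about the intended regex):

    CREATE TABLE test (name STRING, area STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    WITH SERDEPROPERTIES ("input.regex" = "^(.*):(.*)$", "output.format.string" = "%1$s %2$s")
    LOCATION '/user/name_area/';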
|
How to Bind two different EVENT to one ListCtrl in wxpython without confliction?
Question: **I want to bind two events to one ListCtrl widget in wxPython.**
**For example, left click and right click.** The former will refresh the content
of some area, and the latter will create a PopupMenu containing entries such as
rename and settings.
How should I do this?
I tried `wx.EVT_LIST_ITEM_SELECTED`, `wx.EVT_LIST_COL_CLICK`. It works!
**But, when I use `wx.EVT_LIST_ITEM_RIGHT_CLICK`, it also triggers
`wx.EVT_LIST_ITEM_SELECTED`.**
So, how can I do this without conflict? Thank you!
Here is my code!
import wx
class ListCtrlLeft(wx.ListCtrl):
def __init__(self, parent, i):
wx.ListCtrl.__init__(self, parent, i, style=wx.LC_REPORT | wx.LC_HRULES | wx.LC_NO_HEADER | wx.LC_SINGLE_SEL)
self.parent = parent
self.Bind(wx.EVT_SIZE, self.on_size)
self.InsertColumn(0, '')
self.InsertStringItem(0, 'library-one')
self.InsertStringItem(0, 'library-two')
self.Bind(wx.EVT_LIST_ITEM_SELECTED, self.on_lib_select)
self.Bind(wx.EVT_LIST_ITEM_RIGHT_CLICK, self.on_lib_right_click)
def on_size(self, event):
size = self.parent.GetSize()
self.SetColumnWidth(0, size.x - 5)
def on_lib_select(self, evt):
print "Item selected"
def on_lib_right_click(self, evt):
print "Item right-clicked"
class Memo(wx.Frame):
def __init__(self, parent, i, title, size):
wx.Frame.__init__(self, parent, i, title=title, size=size)
self._create_splitter_windows()
self.Centre()
self.Show(True)
def _create_splitter_windows(self):
horizontal_box = wx.BoxSizer(wx.HORIZONTAL)
splitter = wx.SplitterWindow(self, -1, style=wx.SP_LIVE_UPDATE | wx.SP_NOBORDER)
splitter.SetMinimumPaneSize(250)
vertical_box_left = wx.BoxSizer(wx.VERTICAL)
panel_left = wx.Panel(splitter, -1)
panel_left_top = wx.Panel(panel_left, -1, size=(-1, 30))
panel_left_top.SetBackgroundColour('#53728c')
panel_left_str = wx.StaticText(panel_left_top, -1, 'Libraries', (5, 5))
panel_left_str.SetForegroundColour('white')
panel_left_bottom = wx.Panel(panel_left, -1, style=wx.BORDER_NONE)
vertical_box_left_bottom = wx.BoxSizer(wx.VERTICAL)
# Here!!!!
list_1 = ListCtrlLeft(panel_left_bottom, -1)
# ----------
vertical_box_left_bottom.Add(list_1, 1, wx.EXPAND)
panel_left_bottom.SetSizer(vertical_box_left_bottom)
vertical_box_left.Add(panel_left_top, 0, wx.EXPAND)
vertical_box_left.Add(panel_left_bottom, 1, wx.EXPAND)
panel_left.SetSizer(vertical_box_left)
# right
vertical_box_right = wx.BoxSizer(wx.VERTICAL)
panel_right = wx.Panel(splitter, -1)
# ......
panel_right.SetSizer(vertical_box_right)
horizontal_box.Add(splitter, -1, wx.EXPAND | wx.TOP, 1)
self.SetSizer(horizontal_box)
splitter.SplitVertically(panel_left, panel_right, 250)
def on_quit(self, evt):
self.Close()
evt.Skip()
if __name__ == "__main__":
app = wx.App()
Memo(None, -1, 'PyMemo', (500, 300))
app.MainLoop()
Answer: You can bind the LIST_COL_CLICK event to your listctrl and distinguish the
mouse buttons in your event handler:
    def OnColClick(self, event):
        if event.LeftDown():
            print 'left clicked!'
        elif event.RightDown():
            print 'right clicked!'
I did not check if it works but this will be the direction I would take.
Basically I took the strategy for key event handling explained here:
<http://www.blog.pythonlibrary.org/2009/08/29/wxpython-catching-key-and-char-
events/>
For other methods of mouse event: <http://wxpython.org/docs/api/wx.MouseEvent-
class.html>
|
Python: My csv.reader imports csv file as text
Question: After importing a `CSV` file, the output appears like `[['1', '2', '3']]`. How
can I make sure the values are imported as numbers?
Answer: With the csv module's `reader` function, a row read from the csv file is returned
as a list of strings. If you want unquoted numbers not to be returned as
strings use the `quoting=csv.QUOTE_NONNUMERIC` option. However, be aware that
this option will convert to float. If you want to process or display these
numbers as `int`s then you will have to cast them as `int`.
This is your csv file:
1,2,3
4,5,6
7,8,9
Assuming you want a list of lists output:
import csv
tst = open('test.csv')
l = []
reader = csv.reader(tst, quoting=csv.QUOTE_NONNUMERIC)
for line in reader:
l.append(line)
print l
Yields:
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
To output integers, you might as well convert the strings to integers
directly:
l = []
tst = open('test.csv')
reader = csv.reader(tst)
for line in reader:
l.append([int(i) for i in line])
print l
Yields:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
|
How to split dates from weather data file
Question: I am new to Python and I would greatly appreciate some help. I have data
generated from a weather station (`rawdate`) in the format `2015-04-26
00:00:48`, like this:
Date,Ambient Temperature (C),Wind Speed (m/s)
2015-04-26 00:00:48,10.75,0.00
2015-04-26 00:01:48,10.81,0.43
2015-04-26 00:02:48,10.81,0.32
and i would like to split them into `year` `month` `day` `hour` and `minute`.
My attempt so far is this:
for i in range(len(rawdate)):
x=rawdate[1].split()
date.append(x)
but it gives me a list full of empty lists. My target is to convert this into
a list of lists (using the command `split`) where the new data will be stored
into x in the form of `[date, time]`. Then I want to split further using split
with "-" and ":". Can someone offer some advice?
Answer:
>>> from datetime import datetime
>>> str_date = '2015-04-26 00:00:48'
>>> datte = datetime.strptime(str_date, '%Y-%m-%d %H:%M:%S')
>>> t = datte.timetuple()
>>> y, m, d, h, min, sec, wd, yd, i = t
>>> y
2015
>>> m
4
>>> min
0
>>> sec
48
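To apply this to the whole file, a minimal sketch (assuming the data is saved in a file called `weather.csv`, a hypothetical name, with the header line shown above):

    import csv
    from datetime import datetime

    rows = []
    with open('weather.csv') as f:
        reader = csv.reader(f)
        next(reader)  # skip the "Date,Ambient Temperature (C),Wind Speed (m/s)" header
        for date_str, temp, wind in reader:
            dt = datetime.strptime(date_str, '%Y-%m-%d %H:%M:%S')
            # [year, month, day, hour, minute] plus the two measurements
            rows.append([dt.year, dt.month, dt.day, dt.hour, dt.minute,
                         float(temp), float(wind)])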
|
Import problems with Python namespace packages
Question: I am trying to use Python [namespace packages
concept](https://www.python.org/dev/peps/pep-0382/) to split my library across
multiple directories. In general, it works, but I have a problem regarding
importing names to projects package level.
My project structure is following:

> **`project1/coollibrary/__init__.py`**
from __future__ import absolute_import
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
from .foomodule import foo
> **`project1/coollibrary/foomodule.py`**
def foo():
print ('foo')
> **`project2/coollibrary/__init__.py`**
from __future__ import absolute_import
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
from .barmodule import bar
> **`project2/coollibrary/barmodule.py`**
def bar():
print ('bar')
Both projects are on the PYTHONPATH:
$ echo ${PYTHONPATH}
/home/timo/Desktop/example/project1:/home/timo/Desktop/example/project2
And I am running code from here:
$ pwd
/home/timo/Desktop/example
$ python3
>>> import coollibrary
>>> coollibrary.foo() # works
foo
>>> coollibrary.bar() # does not work (the problem)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'bar'
>>> import coollibrary.barmodule
>>> coollibrary.barmodule.bar() # works
bar
How can I fix the code so that I can import both `foo` and `bar` directly from
the `coollibrary` package? Also, is there a solution that works for both Python 2.7
and Python 3.4 (other versions are not required)?
Answer: Starting with Python 3.3, you can use [PEP 420 -- Implicit Namespace
Packages](https://www.python.org/dev/peps/pep-0420/).
Basically, you would remove your `__init__.py` file in both repositories, and
add:
setup(
...
packages=['coollibrary.{foomodule/barmodule}'],
namespace_packages=['coollibrary'],
...
)
to your `setup.py`.
Can't help you with Python 2.7 though...
|
Image Gradient Vector Field in Python
Question: I am trying to get the [Gradient Vector
Field](http://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/Gradient2.svg/566px-
Gradient2.svg.png) of an image using Python (similar to [this matlab
question](http://stackoverflow.com/questions/10416918/compute-gradient-vector-
field-of-an-image)).
This is the original image: 
Here is my code:
import numpy as np
import matplotlib.pyplot as plt
import Image
from PIL import ImageFilter
I = Image.open('test.png').transpose(Image.FLIP_TOP_BOTTOM)
I = I.filter(ImageFilter.BLUR)
p = np.asarray(I)
w,h = I.size
y, x = np.mgrid[0:h:500j, 0:w:500j]
dy, dx = np.gradient(p)
skip = (slice(None, None, 3), slice(None, None, 3))
fig, ax = plt.subplots()
im = ax.imshow(I, extent=[x.min(), x.max(), y.min(), y.max()])
ax.quiver(x[skip], y[skip], dx[skip], dy[skip])
ax.set(aspect=1, title='Quiver Plot')
plt.show()
This is the result:

The problem is that the vectors seem to be incorrect. This becomes clearer
when you zoom into the image:

Why do some of the vectors point to the center as expected, while others do
not?
Maybe there is an issue with the result of the call to `np.gradient`?
Answer: I think your strange results are, at least in part, because p is of type
`uint8`. Even numpy diff results in clearly incorrect values for an array of
this dtype. If you convert to signed integer by replacing the definition of
`p` with the following: `p = np.asarray(I).astype('int8')`, then the results of
diff are correct. The following code gives me what looks like a reasonable
field,
import numpy as np
import matplotlib.pyplot as plt
import Image
from PIL import ImageFilter
I = Image.open('./test.png')
I = I.filter(ImageFilter.BLUR)
p = np.asarray(I).astype('int8')
w,h = I.size
x, y = np.mgrid[0:h:500j, 0:w:500j]
dy, dx = np.gradient(p)
skip = (slice(None, None, 3), slice(None, None, 3))
fig, ax = plt.subplots()
im = ax.imshow(I.transpose(Image.FLIP_TOP_BOTTOM),
extent=[x.min(), x.max(), y.min(), y.max()])
plt.colorbar(im)
ax.quiver(x[skip], y[skip], dx[skip].T, dy[skip].T)
ax.set(aspect=1, title='Quiver Plot')
plt.show()
This gives the following:

and close up this looks like you'd expect,

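To see why the unsigned dtype matters, here is a quick check of how differences wrap around on `uint8` data (a minimal sketch, independent of the image):

    import numpy as np

    a = np.array([10, 250, 10], dtype=np.uint8)
    print np.diff(a)               # [240  16] -- the second difference wrapped around
    print np.diff(a.astype(int))   # [ 240 -240] -- the expected signed result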
|
TransportError(503, u'') when trying to use a recently created elasticsearch index
Question: I'm creating an Elasticsearch index using the Python API like this:
from elasticsearch import Elasticsearch
es = Elasticsearch()
index_body = {"mappings": {".percolator": {"properties": {"message": {"type": "string", "analyzer": "english"}}}}}
# Creates the index if it doesn't exist
if not es.indices.exists('test'):
es.indices.create(index='test', body=index_body)
print es.exists(index='test', id='1')
The index is created successfully, but when I check for the existence of a
document inside the index it fails with this error:
Traceback (most recent call last):
File "main.py", line 12, in <module>
print es.exists(index='test', id='1')
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 68, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 282, in exists
self.transport.perform_request('HEAD', _make_path(index, doc_type, id), params=params)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 307, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 86, in perform_request
self._raise_error(response.status, raw_data)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/base.py", line 102, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
elasticsearch.exceptions.TransportError: TransportError(503, u'')
If I run this script a second time, with the index already created it works
just fine. Does anyone have an idea of what could be going wrong?
Answer: When creating a new index, you need to wait until all shards are allocated.
The best way I know of doing this is:
1. fetch `<your_index>/_status`
2. iterate over all `indices.<your_index>.shards` and verify that `routing.state = STARTED` everywhere
3. Go to 1) unless all shards are started
[Here's a (PHP)
project](https://github.com/ruflin/Elastica/blob/master/test/lib/Elastica/Test/Base.php#L81-L92)
which did this for unit tests:
protected function _waitForAllocation(Index $index)
{
do {
$settings = $index->getStatus()->get();
$allocated = true;
foreach ($settings['shards'] as $shard) {
if ($shard[0]['routing']['state'] != 'STARTED') {
$allocated = false;
}
}
} while (!$allocated);
}
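From Python, one way to wait is the cluster health API with `wait_for_status` (an alternative to polling `_status` by hand; a sketch using the `es` client from the question):

    # Block until at least all primary shards of the new index are started
    # (status 'yellow') before issuing the first exists/get/search call.
    es.cluster.health(index='test', wait_for_status='yellow', timeout='30s')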
**Wrong answer:**
You have to give ES _some space_ before you continue. ES is _near realtime_,
so delays are to be expected, especially when you're running your code
sequentially with next to no delay.
I think you simply need to call the [`_refresh`
endpoint](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-
refresh.html) and you're covered.
In practice I've to do the same in my unit tests. They execute very fast and
creating/pumping data/destroying indices takes time so in my `setUp()` I've a
call to `_refresh` before handing over to the respective `test*()` method. And
in some tests, where I index data, I also have to place `_refresh` calls.
Usually, during _normal operation_, you shouldn't need to call it. Keep
in mind that the default `refresh_interval` is `1s`. If you regularly
update your index and expect sub-second updates to be reflected (I'm talking
about e.g. `_search`), that's where you need to start.
|
Accessing a file path saved in a .txt file. (Python)
Question: I have a text file that contains file paths of files that I wish to open.
The text file looks like this:
28.2 -1.0 46 14 10 .\temp_109.17621\voltage_28.200\power_-1.txt
28.2 -2.0 46 16 10 .\temp_109.17621\voltage_28.200\power_-2.txt
...
I would like to open the files at this filepath.
The first step is to load each filepath from the text file.
I've tried this using:
path = np.loadtxt('NonLorentzianData.txt',usecols=[5],dtype='S16')
which generates a `path[1]` that looks like:
`.\\temp_109.17621` ...
rather than the entire file path.
Am I using the wrong `dtype` or is this not possible with `loadtxt`?
Answer: If you change the data type to `np.str_` it will work:
path = np.loadtxt('NonLorentzianData.txt',usecols=[5],dtype=np.str_)
print(path[1])
.\temp_109.17621\voltage_28.200\power_-2.txt
Or using `dtype="S44"` will also work, since 44 is the length of the longer of
your two paths; anything shorter truncates. You are specifying a _16 character
string_, so you only get the _first 16 characters_.
In [17]: s = ".\\temp_109.17621"
In [18]: len(s)
Out[18]: 16
# 43 character string
In [26]: path = np.loadtxt('words.txt',usecols=[5],dtype=("S43"))
In [27]: path[1]
Out[27]: '.\\temp_109.17621\\voltage_28.200\\power_-2.tx'
In [28]: len(path[1])
Out[28]: 43
# 38 character string
In [29]: path = np.loadtxt('words.txt',usecols=[5],dtype=("S38"))
In [30]: path[1]
Out[30]: '.\\temp_109.17621\\voltage_28.200\\power_'
In [31]: len(path[1])
Out[31]: 38
In [32]: path = np.loadtxt('words.txt',usecols=[5],dtype=np.str_)
In [33]: path[1]
Out[33]: '.\\temp_109.17621\\voltage_28.200\\power_-2.txt'
If you look at the
[docs](http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html) you will
see what every dtype does and how to use them.
If you just want all the file paths you can also use `csv.reader`:
import csv
with open("NonLorentzianData.txt") as f:
reader = csv.reader(f,delimiter=" ")
for row in reader:
with open(row[-1]) as f:
.....
|
How to check whether a connection for a specific port is open using python
Question: I need to check whether a specific port is open on Windows. I
can do it using netstat, but how can I check it using Python?
Answer: Try like this:
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    res = s.connect_ex(('192.1.1.1', 80))  # connect_ex returns 0 when the port accepted the connection
    if res == 0:
        print "Open"
    else:
        print "Closed"
    s.close()
|
series object not callable with linear regression in python
Question: I am new to Python and I am trying to build a simple linear regression model.
I am able to build the model and see the results, but when I try to look at
the parameters I get an error and I am not sure where I am going wrong.
Code:
import statsmodels.formula.api as smf
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm = smf.ols(formula='medv ~ lstat', data=data).fit()
lm.describe
produces results
Dep. Variable: medv R-squared: 0.544
Model: OLS Adj. R-squared: 0.543
Method: Least Squares F-statistic: 601.6
Date: Wed, 06 May 2015 Prob (F-statistic): 5.08e-88
Time: 15:01:03 Log-Likelihood: -1641.5
No. Observations: 506 AIC: 3287.
Df Residuals: 504 BIC: 3295.
Df Model: 1
But when I try to call the parameters
lm.params()
I receive this
Series object is not callable
I must be missing something but I am not sure what it is. The model is being
produced correctly. Thanks!
Answer: Try
lm.params
instead of
lm.params()
The latter tries to call the params as a function (which it isn't)
|
Python FFT & Magnitude Spectrum of two similar signals have different frequencies
Question: My goal is to compare the FFT of similar signals. For some reason, when I take
the magnitude spectrum of two signals of the same length, the frequencies are
different... I can't do a simple side by side comparison of two signals
because of this. Anyone have any tips on how to get the same FFT on the
signals?
So for instance Signal1 provides the following:
 
Update: Here are the two signals plotted from 0-400Hz 
Here's my code: The logic behind the code is to import the signal, find where
the sound starts, chop the signal to be 1 second in length, perform FFT on
signal for comparison.
import numpy as np
from scipy.io.wavfile import read
from pylab import plot
from pylab import plot, psd, magnitude_spectrum
import matplotlib.pyplot as plt
#Hello Signal!!!
(fs, x) = read('C:\Desktop\Spectral Work\EB_AB_1_2.wav')
#Remove silence out of beginning of signal with threshold of 1000
def indices(a, func):
#This allows to use the lambda function for equivalent of find() in matlab
return [i for (i, val) in enumerate(a) if func(val)]
#Make the signal smaller so it uses less resources
x_tiny = x[0:100000]
#threshold is 1000, 0 is calling the first index greater than 1000
thresh = indices(x_tiny, lambda y: y > 1000)[1]
# backs signal up 20 bins, so to not ignore the initial pluck sound...
thresh_start = thresh-20
#starts at threshstart ends at end of signal (-1 is just a referencing thing)
analysis_signal = x[thresh_start-1:]
#Split signal so it is 1 second long
one_sec = 1*fs
onesec = x[thresh_start-1:one_sec]
#***unsure is just a placeholder because it spits out a weird error if I don't use
#a third variable
(xsig, ysig, unsure) = magnitude_spectrum(onesec, Fs=fs)
xsig is the amplitude and ysig is the Frequencies.
Here's links to the .wav files if you're interested in trying it out yourself:
[.wav1](https://drive.google.com/file/d/0BxHc4PVaRU7obTZ5TDlhTXdMNW8/view?usp=sharing
"wav1")
[.wav2](https://drive.google.com/file/d/0BxHc4PVaRU7oWE5YZGttU3lhcEU/view?usp=sharing
"wav2") Note: originally i uploaded the wrong .wav1 file... the correct one is
now up.
Answer: I'm guessing your signals aren't actually the same length. If you're
thresholding them independently, your `thresh_start` value won't be the same,
so:
onesec = x[thresh_start-1:one_sec]
will give you different-length arrays for the two files. You can either
calculate the `threshold` value separately and then provide that number to
this module as a constant, or make your `onesec` array be the same length from
the start of _each_ threshold value:
onesec = x[thresh_start-1:one_sec+thresh_start-1]
(Remember that slice notation is `[start:stop]`, not `[start:length]`)
|
how to load python script in interactive shell
Question: I am trying `sudo python get_gps.py -c` and expecting it to load the script
and then present the interactive shell to debug the script live as opposed to
typing it in manually.
Answer: From the docs:
$ python --help
usage: /usr/bin/python2.7 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-B : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d : debug output from parser; also PYTHONDEBUG=x
-E : ignore PYTHON* environment variables (such as PYTHONPATH)
-h : print this help message and exit (also --help)
-i : inspect interactively after running script; forces a prompt even
if stdin does not appear to be a terminal; also PYTHONINSPECT=x
Use the `-i` option.
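So for the command in the question, something like this (a sketch) loads the script and then drops into the interactive prompt; any arguments your script needs still go after the file name:

    sudo python -i get_gps.py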
|
Pillow is not working
Question: I just started a new Django project, and when I try to run makemigrations I get an
error saying that I have to install Pillow, even though I have already installed
Pillow.
ERRORS:
shop.ProductImages.product_img: (fields.E210) Cannot use ImageField because Pillow is not installed.
HINT: Get pillow at https://pypi.python.org/pypi/Pillow or run command "pip install pillow".
When I run pip freeze I can see that Pillow is already installed:
Pillow==2.7.0
I'm using Python 3.4.3 and Django 1.8. I didn't have this problem when I was
using Python 2.7.
UPDATE:
When I try to import Image from PIL outside my virtualenv everything is fine,
but when I try that in the virtualenv I get this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\Lib\site-packages\PIL\Image.py", line 63, in <module>
from PIL import _imaging as core
ImportError: DLL load failed: The specified module could not be found.
I have Python 2 installed outside my virtualenv, so does this mean that Python
is not looking in my virtualenv\Lib\site-packages? I have also just noticed
that when I run pip freeze I get a list of packages that are installed on my
system and not in the virtualenv; and when I try to install something that is
already installed on my system, I have to run pip with --upgrade.
Answer: I overcame this problem with `easy_install --upgrade pillow`; if I want to
install a package in the virtualenv that I already have on my system, I can just
use pip with `--upgrade`. If someone has an explanation for this problem and a better
solution, please let me know!
|
Traceback (most recent call last) error:
Question: Please help me with this; I am not able to run my code and I am getting this error:
Error: File "C:/Python34/pyt.py", line 6, in <module> class Animal1: File "C:/Python34/pyt.py", line 30, in Animal1 cat = Animal1("cat",7) NameError: name 'Animal1' is not defined
Code:
import random
import sys
import os
class Animal1:
__name = ""
__age = 0
def __init__(self,name,age):
self.__nae = name
self.__age = age
def set_name(self,__name):
self.__name = __name
def get_name(self):
return self.__name
def set_age(self,__age):
self.__age = age
def get_age(self):
self.__age
def toPrint(self):
return "Animal is {} and age is {}"
cat = Animal1("cat",7)
print(cat.toPrint())
Answer: The traceback shows `cat = Animal1("cat",7)` being executed at line 30 _inside_ the `Animal1` class body, so those last two lines are probably indented under the class; the name `Animal1` does not exist until the class statement has finished, so dedent them to module level. I don't know what your code is really like, but it should be like this:
class Animal1(object):
def __init__(self, name, age):
self.__name = name
self.__age = age
def set_name(self,__name):
self.__name = __name
def get_name(self):
return self.__name
    def set_age(self, __age):
        self.__age = __age

    def get_age(self):
        return self.__age
def toPrint(self):
return "Animal is %s and age is %s" % (self.__name, self.__age)
cat = Animal1("cat", 7)
print(cat.toPrint())
|
Iterating through file multiple times (Python)
Question: I have a file that looks like this:
1,var1
2,var2
3,var3
4,var1_val1
5,var2_val2
6,var1_val2
7,var3_val1
8,var2_val1
9,var3_val2
Output file should look like:
var1 1 4 6
var2 2 8 5
var3 3 7 9
My code is quite complicated. It works, but it's very inefficient. Can this be
done more efficiently:
def findv(var):
with open(inputfile) as f:
for line in f:
elems=line.split(',')
name=elems[0]
if var!=name:
continue
field=elems[0]
f.seek(0)
for line in f:
elems2=line.split(',')
if elems2[1].endswith(var+'_val1'):
first=elems2[0]
f.seek(0)
for line in f:
elems3=line.split(',')
if elems3[1].endswith(var+'_val3'):
second=elems3[0]
return var,field,first,second
main part of the code:
with open(inputfile) as f:
with open(outputfile) as fout:
for line in f:
tmp=line.split(',')
if current[1].endswith('val1') or current[1].endswith('val2'):
continue
v=tmp[1]
result=findv(v)
f2.write(result)
My function findv(var) is called each time a line in input file starts with
varx and then searches through the file multiple times until it finds fields
that correspond to varx_val1 and varx_val2.
EDIT: I need to preserve the order of the input file, so var1 has to appear
first in the output file, then var2, then var3 etc.
Answer: Use a dictionary, with the keys being your labels and a list to store your
values. This way, you only have to loop over your file once.
from collections import defaultdict
results = defaultdict(list)
with open('somefile.txt') as f:
for line in f:
if line.strip():
value, key = line.split(',')
if '_' in key:
key = key.split('_')[0] # returns var1 from var1_val1
results[key].append(value)
for k,v in results.iteritems():
print('{} {}'.format(k, ' '.join(v)))
Here is a version that takes the comments into account (preserving the input order):
from collections import OrderedDict
    results = OrderedDict()
with open('somefile.txt') as f:
for line in f:
line = line.strip()
if line:
value, key = line.split(',')
key = key.split('_')[0] # returns var1 from var1_val1
results.setdefault(key, []).append(value)
for k,v in results.iteritems():
print('{} {}'.format(k, ' '.join(v)))
|
How to declare a module deprecated in python
Question: How to declare a module deprecated in python?
I want a warning to be printed whenever a particular module is imported or any
of its functions are called.
Answer: You want to
[`warn`](https://docs.python.org/2/library/warnings.html#warnings.warn) with a
[`DeprecationWarning`](https://docs.python.org/2/library/exceptions.html#exceptions.DeprecationWarning).
Exactly how you call it doesn't matter that much, but the stdlib has a
standard pattern for deprecated modules, like this:
# doc string, top-level comments, imports, __all__ =
import warnings
warnings.warn("the spam module is deprecated", DeprecationWarning,
stacklevel=2)
# normal module code
See [the 2.7 `sets`
source](https://hg.python.org/cpython/file/2.7/Lib/sets.py#l57) for an
example.
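The module-level warning only fires on import. For the other half of the question (warning when individual functions are called), a small decorator sketch, separate from the stdlib pattern above:

    import functools
    import warnings

    def deprecated(func):
        """Emit a DeprecationWarning every time the wrapped function is called."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated" % func.__name__,
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper

    @deprecated
    def old_function():
        pass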
|
How to get system information in Go?
Question: Can anyone recommend a module which can be used to get system information,
like Python's `psutil`?
When I tried `go get github.com/golang/sys` to get `sys`, I received the
following:
Report Error:
package github.com/golang/sys
imports github.com/golang/sys
imports github.com/golang/sys: no buildable Go source files in D:\go_source\src\github.com\golang\sys
This my system environment:
# native compiler windows amd64
GOROOT=D:\Go
#GOBIN=
GOARCH=amd64
GOOS=windows
CGO_ENABLED=1
PATH=c:\mingw64\bin;%GOROOT%\bin;%PATH%
LITEIDE_GDB=gdb64
LITEIDE_MAKE=mingw32-make
LITEIDE_TERM=%COMSPEC%
LITEIDE_TERMARGS=
LITEIDE_EXEC=%COMSPEC%
LITEIDE_EXECOPT=/C
Answer: You would need actually to do ([following
godoc](https://godoc.org/golang.org/x/sys)):
go get golang.org/x/sys/unix
# or
go get golang.org/x/sys/windows
# or
go get golang.org/x/sys/plan9
(depending on your OS)
|
Python loop in the directory inside directory
Question: I currently have Python code which loops over a directory; if it finds
another directory inside it, it goes inside and loops again to get all the files
within. The traversal goes only one level deep.
Here is the code:
import io
import sys, os
import glob
reload(sys)
sys.setdefaultencoding("utf-8")
directory = u"F:\\songs"
count=0;
songs = glob.glob(os.path.join(directory, u"*"))
with io.open('filenames.txt', encoding='utf-8') as newnames:
for filename in newnames:
try:
print directory + "\\" + songs[count]
if "####" in filename:
innersongs = glob.glob(os.path.join(directory, songs[count]))
for innerfilename in innersongs:
print directory + "\\" + songs[count] + "\\" + innerfilename
os.rename(oldpath, newpath)
else:
print "hhy"
os.rename(oldpath, newpath)
count += 1
except IOError as (errno, strerror):
print "I/O error({0}): {1}".format(errno, strerror)
except:
print "Unexpected error:", sys.exc_info()[0]
raise
Above is the code that does what I mentioned. Below are the names that the
files get renamed to. The filename is filenames.txt.
abcd sdfdd dfdcbv
####amcdn dfghfgn yrtyrt
@@@@abcdk eruifgf oirer
@@@@eprouri kdfjdkfd soieeure dlfkdl
@@@@weoerto oeruoture oitretre
@@@@ewpoire gkjdfkg sdfdsfl sdfsdhjfd
tieruoigd s,dfkdsjf dsfjbdjsf
dfrkewire dgngfkdgdf fdgkjdfg
gmfdghjfdhgdfg
dfgfdgdf skdfhds sdfhsdjhf sdfitertre
weikds dgkjksdf asouew sdfhds sdfj
Above is the sample file. In the current scenario, whenever the script finds `####`
in the file, it knows that this is the name of a directory, so it goes inside it,
and all the `@@@@` entries are the names of the subdirectories or files in that
inner directory. After the `@@@@` entries are over, we get out of the inner-directory
loop and loop over the current directory to change the file names.
But I am stuck at the inner loop that should move inside and rename the inner
directory's files. What am I doing wrong here? What else can I improve in the
code?
Answer: Such a weird format. I don't really understand what it's for, so doing a wild
guess – I recommend using json, e.g.
[tree.json](http://pastebin.com/RHBRRFxc).
To use this in Python:
## tree.py
import json
def process_tree(prefix, element):
for key, val in zip(element.keys(), element.values()):
if type(val) is dict:
process_tree(key, val)
elif type(val) is list:
if val:
print('Renaming {0}/{1} to {0}/{2}'.format(prefix, val[0], val[1]))
if __name__=='__main__':
tree = json.load(open('tree.json'))
process_tree('.', tree)
In this example, `json.load` converts a json into `dict` with nested `dict`s
and `list`s, so you can iterate over them. Also, this example is based on
recursion – process_tree calls itself to process a child `dict` while
processing a parent `dict`.
Its output:
Renaming ./sdfdd to ./dfdcbv
Renaming ./skdfhds to ./sdfhsdjhf
Renaming amcdn/gkjdfkg to amcdn/sdfdsfl
Renaming amcdn/eruifgf to amcdn/oirer
Renaming amcdn/dfghfgn to amcdn/yrtyrt
Renaming amcdn/oeruoture to amcdn/oitretre
Renaming amcdn/kdfjdkfd to amcdn/soieeure
Renaming ./dgkjksdf to ./asouew
Renaming ./s,dfkdsjf to ./dsfjbdjsf
Renaming ./dgngfkdgdf to ./fdgkjdfg
Again, I don't really understand what your code was meant to do, so my code
has obvious limitations. Elements with empty values (e.g. empty list) are
ignored (`if val:` checks if value is empty). If element value (`val`) is not
a `dict`, element name (`key`) is not used at all (otherwise, it becomes a
prefix). If `list` has more than two elements, only first two are used.
Anyway, you know better how to rewrite this, if this _slightly_ fits your
needs. :)
|
Python multiple domain crawler InvalidSchema exception
Question: This is my code. My goal is to crawl multiple domains. I set the domains in a
url array, but I can't crawl them.
The code can find URLs but doesn't parse or crawl them.
These are the results when my code runs: ('Total Link Number : ', 387) ('Links for
News: ', 146)
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
import codecs
headers = {
"user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5)",
"accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"accept-charset": "cp1254,ISO-8859-9,utf-8;q=0.7,*;q=0.3",
"accept-encoding": "gzip,deflate,sdch",
"accept-language": "tr,tr-TR,en-US,en;q=0.8",
}
def haber_oku(haber_url):
r = requests.get(haber_url, headers=headers)
if r.status_code != 200:
return
soup = BeautifulSoup(r.content)
result = soup.find("div", {'itemprop': 'articleBody'})
if result:
return result.get_text()
else:
result = soup.find("div", {'itemprop': 'description'})
if result:
return result.get_text()
return
def scrape_hurriyet(keywords, detay_goster, url):
if len(keywords) > 0:
keywords = keywords.split(',')
s = 0
r = requests.get(url, headers=headers)
if r.status_code != 200:
print("request reddedildi")
return
soup = BeautifulSoup(r.content)
results = soup.findAll("a")
print ("Toplam link sayisi : ", len(results))
liste_link = []
liste_text = []
haberler = []
for result in results:
h = result.get('href')
t = result.get_text()
if h is not None:
if str(h).find('http://www.hurriyet.com.tr/') or str(h).find('http://www.milliyet.com.tr/spor') >= 0:
if h not in liste_link:
if h.find('.asp') or h.find('.htm') > 0:
liste_link.append(h)
liste_text.append(t)
print ("Tekil linkler: ", len(liste_link))
i = 0
while i < len(liste_link):
h = liste_link[i]
t = liste_text[i]
haber = haber_oku(h)
if haber is not None:
haber = BeautifulSoup(haber).get_text()
ok = 0
found = ""
if len(keywords) == 0:
haberler.append(haber)
else:
for keyword in keywords:
print ('----------------------')
if haber.find(keyword) >= 0:
found = found + " " + keyword
ok += 1
if ok > 0:
print ("3", h, t, found)
if detay_goster is True:
haberler.append(haber)
i += 1
k = 0
while k < len(haberler):
f = codecs.open("abc" + str(k+1) + ".txt", encoding='utf-8', mode='w+')
f.write(haberler[k])
k += 1
f.close()
keywords = ''
url = ['http://www.hurriyet.com.tr/', 'http://www.milliyet.com.tr/']
s = 0
while s < len(url):
scrape_hurriyet(keywords, True, url[s])
s += 1
These are the exceptions:
Traceback (most recent call last):
File "C:/Users/KerimCaner/PycharmProjects/Hurriyet/hurriyet.py", line 94, in <module>
scrape_hurriyet(keywords, True, url[s])
File "C:/Users/KerimCaner/PycharmProjects/Hurriyet/hurriyet.py", line 62, in scrape_hurriyet
haber = haber_oku(h)
File "C:/Users/KerimCaner/PycharmProjects/Hurriyet/hurriyet.py", line 17, in haber_oku
r = requests.get(haber_url, headers=headers)
File "C:\Users\KerimCaner\AppData\Roaming\Python\Python27\site-packages\requests\api.py", line 69, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\KerimCaner\AppData\Roaming\Python\Python27\site-packages\requests\api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "C:\Users\KerimCaner\AppData\Roaming\Python\Python27\site-packages\requests\sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\KerimCaner\AppData\Roaming\Python\Python27\site-packages\requests\sessions.py", line 567, in send
adapter = self.get_adapter(url=request.url)
File "C:\Users\KerimCaner\AppData\Roaming\Python\Python27\site-packages\requests\sessions.py", line 641, in get_adapter
raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'javascript:;'
Answer: The error you got: `requests.exceptions.InvalidSchema: No connection adapters
were found for 'javascript:;'` states that you are trying to crawl a piece of
javascript. You are currently crawling all URLs in anchor tags, but you need
to filter out javascript URLs. You should replace the following line:
if h is not None:
with something like this:
if h is not None and not(h.startswith("javascript")):
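A slightly more general alternative (a sketch, not what the original code does) is to keep only links whose scheme is http or https; this also drops mailto: links and bare fragments (note that it skips relative links as well):

    from urlparse import urlparse  # Python 2; on Python 3 use urllib.parse

    if h is not None and urlparse(h).scheme in ('http', 'https'):
        # proceed with the existing duplicate check and appends
        ...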
|
Segmentation fault while using a C module in a .so file
Question: I got a segmentation fault – probably caused by the long loop – while running
my python script at the command line under Linux. I know exactly _where_ the
problem is, but I don't know _why_. I've tried some of the methods I found
online, including on this site, but I still cannot solve it. So, please help me –
thank you in advance. Here follows some of the code:
### 0
`analyzer.py`, where the program begins:
from classify import BayesClassifier
class Analyzer:
def __init__(self):
self.classify = BayesClassifier('/home/user/yakamoz/srcanalyzer/classification/training.tab')
if __name__ == '__main__':
a = Analyzer()
# the following is a string of Chinese character, which, I am sure,
# has no influence on the Segmentation fault, you can just suppose
# it as a paragraph in English.
text = "市委常委、纪委书记杨娟高度评价我县基层党风廉政建设:\
务实创新成效显著作者:县纪委办公室发布时间:11月27日下午,\
市委常委、纪委书记杨娟率领市纪委副书记蒋玉平、王友富,\
市纪委常委、秘书长任斌,市纪委机关党委书记刘林建一行来我\
县调研基层党风廉政建设。调研中,杨娟高度评价我县基层党风廉政建设,\
认为工作务实创新,成效显著。县委书记陈朝先,县委副书记季代双,县委常委、\
纪委书记韩忠明陪同调研。杨娟一行先后来到两河镇、西部花都、两江广场、\
工业园区等地实地调研我县基层党风廉政建设,检阅我县“两化”互动、“三化”\
联动发展成果。查阅相关资料在两河镇,杨娟认真听取了两河片区纪工委\
日常工作开展情况的汇报,仔细翻阅了巡查工作日记和接访记录。杨娟指出,\
设置乡镇片区纪工委是加强基层纪检组织建设的创新举措。\
盐亭在全市率先设置、运行纪工委以来,在化解农村信访矛盾,理顺群众情绪,\
强化基层办案工作等方面取得了明显成效。她要求,要总结提炼片区纪工委的经验,\
进一步明确职能职责,在机构设置、人员配备、制度建设等方面进行探索实践,\
为全市基层纪检组织建设提供有益经验借鉴。杨娟还饶有兴趣地参观了两河镇\
的机关廉政文化建设"
print str(a.classify.classify_text(text)[0])
### 1
`classify.py`; this file is used by the `analyzer.py`, presented above:
# -*- coding:utf-8 -*-
from match import WordMatch
import cPickle
import math
class BayesClassifier:
__trainingdata = {}
__classifywordscount = {}
__classifydoccount = {}
def __init__(self, table_name):
self.trainingtable = cPickle.load(open(table_name, 'r'))
for x in self.trainingtable:
self.train(x[1], x[0])
print 'training finished'
self.matrix = self.get_matrix()
self.vector_count = len(self.matrix)
self.doc_count = len(self.trainingtable)
self.match = WordMatch(self.matrix)
def get_matrix(self):
matrix = {}
for x in self.trainingtable:
for k in x[0]:
matrix[k] = 0
return matrix
def doc_to_vector(self, content):
matrix = {word:value for (word, value) in self.match.find(content).items()}
return matrix
def train(self, cls, vector):
if cls not in self.__trainingdata:
self.__trainingdata[cls] = {}
if cls not in self.__classifywordscount:
self.__classifywordscount[cls] = 0
if cls not in self.__classifydoccount:
self.__classifydoccount[cls] = 0
self.__classifydoccount[cls] += 1
for word in vector.keys():
self.__classifywordscount[cls] += vector[word]
if word not in self.__trainingdata[cls]:
self.__trainingdata[cls][word] = vector[word]
else:
self.__trainingdata[cls][word] += vector[word]
def classify_text(self, content):
t = -1 << 32
res = "unknown classification"
for cls in self.__trainingdata.keys():
prob = self.__count(cls, self.doc_to_vector(content))
if prob > t:
res = cls
t = prob
return res, t
### 2
`match.py`; this code is referenced by the `classify.py`
# -*- coding:utf-8 -*-
import os
import re
import util.ahocorasick.x64 as ahocorasick
# util.ahocorasick.x64 is a folder where .so file locates
class WordMatch(object):
def __init__(self, arg):
self.__tree = ahocorasick.KeywordTree()
if isinstance(arg, (list, dict)):
for item in arg:
if item:
self.__tree.add(item)
elif isinstance(arg, basestring):
if os.path.isfile(arg):
fp = open(arg)
for line in fp:
line = line.strip()
if line:
self.__tree.add(line)
fp.close()
else:
print 'the path of the input file does not exist'
return
else:
print 'parameter fault'
return
self.__tree.make()
def _findall(self, content):
'''return the list of keywords that is found
'''
hit_list = []
if isinstance(content, basestring):
for start, end in self.__tree.findall(content):
if len(content[start:end]):
hit_list.append(content[start:end])
else:
print 'AC automation requires string '
return hit_list
def find(self, content):
'''return those matched keywords and the corresponding count
'''
hit_list = self._findall(content)
mydict = {}
for item in hit_list:
if item in mydict:
mydict[item] += 1
else:
mydict[item] = 1
return mydict
### 3
`__init__.py`, under the folder `util.ahocorasick.x64`:
import _ahocorasick
__all__ = ['KeywordTree']
# A high level version of the keyword tree. Most of the methods here
# are just delegated over to the underlying C KeywordTree
#(in the .so file, which is not shown here).
class KeywordTree(object):
def __init__(self):
self.__tree = _ahocorasick.KeywordTree();
def add(self, s):
return self.__tree.add(s)
def make(self):
return self.__tree.make()
def zerostate(self):
return self.__tree.zerostate()
##### !! I found this is where the segmentation fault occurs
def __findall_helper(self, sourceBlock, allow_overlaps, search_function):
"""Helper function that captures the common logic behind the
two findall methods."""
startpos = 0
startstate = self.zerostate()
loop_times = 0
while True:
#print spot_1
match = search_function(sourceBlock, startpos, startstate)
#print spot_2
if not match:
break
yield match[0:2]
startpos = match[1]
if allow_overlaps: #which in my case is always false
startstate = match[2]
else:
loop_times = loop_times + 1
#print spot_3
startstate = self.zerostate()
#print spot_4
#print loop_times
def findall(self, sourceBlock, allow_overlaps=0):
return self.__findall_helper(sourceBlock, allow_overlaps,self.__tree.search)
I am confused by the different results given: I've found that the problem lies
in section 3's `__init__.py`, or rather in `__findall_helper(self, sourceBlock,
allow_overlaps, search_function)`.
By uncommenting one of the following commented print statements:
#print spot_1
#print spot_2
#print spot_4
one can eliminate the segmentation fault and the loop is finite (the match can
be `None`), but by uncommenting the `#print spot_3`, one can _not_ (it seems
like an infinite loop). Here comes my question:
Does the `print` statement have a side effect in Python? I found that only
a `print` statement at one of the three spots mentioned above
(`spot_1`, `spot_2` or `spot_4`) can eliminate the fault. By the way, I
found this by accident; there were no `print` statements at first.
### 4
Here is the `backtrace` using `gdb`.
(gdb) r analyzer.py
Starting program: /usr/local/bin/python analyzer.py
[Thread debugging using libthread_db enabled]
Detaching after fork from child process 11499.
training finished
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff178956d in ahocorasick_KeywordTree_search_helper (state=0x85c730,
string=0x8967d4 "【中国环保在线 市场行情】“我国将在2016年启动全国碳市场。全国碳交 易市场的首批行业企业将由电力、冶金、有色、建材、化工5个传统制造业和航", <incomplete sequence \347\251>..., n=140733193395828, startpos=118366835,
out_start=0x7fffffffd468, out_end=0x7fffffffd460, out_last_state=0x7fffffffd458) at aho-corasick.c:216
216 aho-corasick.c: No such file or directory.
in aho-corasick.c
Missing separate debuginfos, use: debuginfo-install glibc-2.12- 1.149.el6.x86_64
(gdb) bt
#0 0x00007ffff178956d in ahocorasick_KeywordTree_search_helper (state=0x85c730, string=0x8967d4 "【中国环保在线 市场行情】“我国将在2016年启动全国碳市场。全国碳交 易市场的首批行业企业将由电力、冶金、有色、建材、化工5个传统制造业和航", <incomplete sequence \347\251>..., n=140733193395828, startpos=118366835, out_start=0x7fffffffd468, out_end=0x7fffffffd460, out_last_state=0x7fffffffd458) at aho-corasick.c:216
#1 0x00007ffff178a2b1 in ahocorasick_KeywordTree_basesearch (self=0x7ffff7f6c230, args=0x7ffff0ca1a50, kwargs=0x0, helper=0x7ffff1789525<ahocorasick_KeywordTree_search_helper>) at py_wrapper.c:190
#2 0x00007ffff178a358 in ahocorasick_KeywordTree_search (self=0x7ffff7f6c230, args=0x7ffff0ca1a50, kwargs=0x0) at py_wrapper.c:212
#3 0x00000000004a7103 in call_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4013
#4 PyEval_EvalFrameEx (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:2666
#5 0x0000000000507e8d in gen_send_ex (gen=0x7904640, arg=0x0, exc=<value optimized out>) at Objects/genobject.c:84
#6 0x00000000004a25da in PyEval_EvalFrameEx (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:2497
#7 0x00000000004a805b in fast_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4099
#8 call_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4034
#9 PyEval_EvalFrameEx (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:2666
#10 0x00000000004a8bd7 in PyEval_EvalCodeEx (co=0x7ffff1ff54b0, globals=<value optimized out>, locals=<value optimized out>, args=<value optimized out>, argcount=3, kws=0x9984520, kwcount=0, defs=0x7ffff2016968, defcount=1, closure=0x0) at Python/ceval.c:3253
#11 0x00000000004a6dce in fast_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4109
#12 call_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4034
#13 PyEval_EvalFrameEx (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:2666
#14 0x00000000004a805b in fast_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4099
#15 call_function (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:4034
#16 PyEval_EvalFrameEx (f=<value optimized out>, throwflag=<value optimized out>) at Python/ceval.c:2666
#17 0x00000000004a8bd7 in PyEval_EvalCodeEx (co=0x7ffff7ec4130, globals=<value optimized out>, locals=<value optimized out>, args=<value optimized out>, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3253
#18 0x00000000004a8ce2 in PyEval_EvalCode (co=<value optimized out>, globals=<value optimized out>, locals=<value optimized out>) at Python/ceval.c:667
#19 0x00000000004c91fe in run_mod (fp=0x880ee0, filename=0x7fffffffe30c "analyzer.py", start=<value optimized out>, globals=0x7fc140, locals=0x7fc140, closeit=1, flags=0x7fffffffdea0) at Python/pythonrun.c:1346
#20 PyRun_FileExFlags (fp=0x880ee0, filename=0x7fffffffe30c "analyzer.py", start=<value optimized out>, globals=0x7fc140, locals=0x7fc140, closeit=1,flags=0x7fffffffdea0) at Python/pythonrun.c:1332
#21 0x00000000004c9414 in PyRun_SimpleFileExFlags (fp=0x880ee0, filename=0x7fffffffe30c "analyzer.py", closeit=1, flags=0x7fffffffdea0)at Python/pythonrun.c:936
#22 0x0000000000414a4f in Py_Main (argc=<value optimized out>, argv=<value optimized out>) at Modules/main.c:599
#23 0x0000003fd281ed5d in __libc_start_main () from /lib64/libc.so.6
#24 0x0000000000413bc9 in _start ()
Answer: I see
self.__tree = _ahocorasick.KeywordTree();
Then
self.__tree.zerostate()
and finally
return self.__findall_helper(sourceBlock, allow_overlaps,self.__tree.search_long)
So my guess is that the function `search_long` is invalidated when you do
`__tree.zerostate()`, so you get undefined behaviour which in some cases leads
to a segfault. It's a lot of code and there's an opaque library, so it's hard to
tell for sure. The best thing is to go to the documentation and make sure you're
using the library correctly.
The `print` is a red herring: by allocating something it merely changes the
memory layout, which changes when (or whether) the crash shows up.
Hope it helps.
|
Warning "InsecurePlatformWarning" while connecting to Rally with python (using pyral)
Question: When I connect to Rally via the python REST API (pyral), I get the following
warning.
`C:\PYTHON27\lib\site-
packages\requests-2.6.0-py2.7.egg\requests\packages\urllib3\util\ssl_.py:79:
InsecurePlatformWarning: A true SSLContext object is not available. This
prevents urllib3 from configuring SSL appropriately and may cause certain SSL
connections to fail. For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning`
The rest works fine; however, it is slightly annoying to have this warning
each time.
Any idea on how to resolve the connection "issue" or hiding the warning?
The code I use is the following:
#!/usr/bin/env python
    import sys
    from pyral import Rally, rallySettings, RallyRESTAPIError
#####################################
### CONNECTION TO RALLY ###
#####################################
#Configuration parameters
    my_server = "rally1.rallydev.com"
my_user = "[email protected]"
my_password = "toto"
my_workspace = "Sandbox"
my_project = "My Project"
#Connect to Rally
try:
rally = Rally(my_server, my_user, my_password, workspace=my_workspace, project=my_project)
except RallyRESTAPIError, details:
sys.stderr.write('ERROR: %s \n\n' % details)
rally.enableLogging('rally.simple-use.log')
print "\n"
Answer: The solution was in front of my eyes the whole time (thanks abarnert!)
Just needed to add:
import logging
logging.basicConfig(filename='Rally.log',level=logging.NOTSET)
logging.captureWarnings(True)
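As an alternative (my own suggestion, not part of the original answer, and assuming the urllib3 bundled with `requests` is being used), the warning can also be silenced at the source instead of being redirected to a log file:

    import requests
    requests.packages.urllib3.disable_warnings()

The cleaner long-term fix is to give `requests` a proper SSLContext, e.g. by installing the `requests[security]` extras or moving to a newer Python, as the linked urllib3 page suggests.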
|
Tox and django_toolbar: ImportError
Question: When trying to use Tox to better streamline testing across multiple
environments, I'm running into the following error when testing for Python 3.4:
`ImportError: No module named 'debug_toolbar'`
However, `django-debug-toolbar==1.3.0` is listed in my `requirements.txt`
file, and my `tox.ini` file looks as follows:
[tox]
envlist = py27,py34
skipsdist = True
[testenv]
deps = -r{toxinidir}/requirements.txt
setenv =
PYTHONPATH = {toxinidir}:{toxinidir}
commands = python manage.py test
It seems as if it is not properly installing the requirements. Oddly enough,
the py27 environment does not throw this error, and is able to perform the
tests just fine.
What could be causing this?
EDIT: for reference, here's my current setup. When I run tox the first time
(i.e. without `.tox`), it works fine, but any time after that it fails. The
`.tox` directory does seem to get built up correctly; all dependencies are
installed in `.tox/py34/lib/python3.4/site-packages`. Compared to earlier, I
have now also uninstalled `django`, and indeed that is now the first dependency
that fails.
(venv)joost@thorin:myproject/ (master✗) % rm -rf .tox
(venv)joost@thorin:myproject/ (master✗) % cat tox.ini
[tox]
envlist = py27,py34
skipsdist = True
[testenv]
deps = -r{toxinidir}/requirements.txt
commands = python manage.py test
(venv)joost@thorin:myproject/ (master✗) % tox
py27 create: /Users/Joost/myproject/.tox/py27
py27 installdeps: -r/Users/Joost/myproject/requirements.txt
py27 runtests: PYTHONHASHSEED='4248725049'
py27 runtests: commands[0] | python manage.py test
Creating test database for alias 'default'...
................
----------------------------------------------------------------------
Ran 16 tests in 0.093s
OK
Destroying test database for alias 'default'...
py34 create: /Users/Joost/myproject/.tox/py34
py34 installdeps: -r/Users/Joost/myproject/requirements.txt
py34 runtests: PYTHONHASHSEED='4248725049'
py34 runtests: commands[0] | python manage.py test
Creating test database for alias 'default'...
................
----------------------------------------------------------------------
Ran 16 tests in 0.093s
OK
Destroying test database for alias 'default'...
____________________________________________________________________ summary _____________________________________________________________________
py27: commands succeeded
py34: commands succeeded
congratulations :)
(venv)joost@thorin:myproject/ (master✗) % tox
py27 runtests: PYTHONHASHSEED='3259360769'
py27 runtests: commands[0] | python manage.py test
Creating test database for alias 'default'...
................
----------------------------------------------------------------------
Ran 16 tests in 0.088s
OK
Destroying test database for alias 'default'...
py34 runtests: PYTHONHASHSEED='3259360769'
py34 runtests: commands[0] | python manage.py test
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named 'django'
ERROR: InvocationError: '/Users/Joost/myproject/.tox/py34/bin/python manage.py test'
____________________________________________________________________ summary _____________________________________________________________________
py27: commands succeeded
ERROR: py34: commands failed
(venv)joost@thorin:myproject/ (master✗) %
Answer: In the end, this was solved by upgrading `tox`. I'm not sure yet when this was
fixed exactly, or what bug it was listed as, but using version `2.1.1` I do
not get this error any longer.
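As a side note (my own assumption, not something stated in the original answer): when a `.tox` environment appears to be out of sync with `requirements.txt`, forcing tox to rebuild the environment is often enough:

    tox --recreate          # rebuild all environments from scratch
    tox -r -e py34          # or rebuild only the failing one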
|
Flask register_blueprint error (Python)
Question: Sorry for my English, but.. I need help.
I have a project on Flask, and when I include a Blueprint in my Flask app I get
errors. The code is from Miguel Grinberg's book about Flask.
Project tree:
.
├── app
│ ├── __init__.py
│ ├── main
│ │ ├── errors.py
│ │ ├── __init__.py
│ │ └── views.py
│ ├── static
│ │ ├── static_files
│ └── templates
│ └── html_files
├── config.py
├── manage.py
create_app() in **app/__init__.py**
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(config[config_name])
config[config_name].init_app(app)
bootstrap.init_app(app)
db.init_app(app)
from .main import main as main_blueprint
app.register_blueprint(main_blueprint)
return app
listing **app/main/__init__.py**
from flask import Blueprint
main = Blueprint('main', __name__)
from . import views, errors
Passenger process output
Traceback (most recent call last):
File "/opt/passenger/passenger-4.0.57/helper-scripts/wsgi-loader.py", line 320, in <module>
app_module = load_app()
File "/opt/passenger/passenger-4.0.57/helper-scripts/wsgi-loader.py", line 61, in load_app
return imp.load_source('passenger_wsgi', startup_file)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/imp.py", line 171, in load_source
module = methods.load()
File "<frozen importlib._bootstrap>", line 1220, in load
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "passenger_wsgi.py", line 16, in <module>
from manage import app as application
File "/home/m/mallts/dev.wget-studio.ru/fikls/manage.py", line 7, in <module>
app = create_app('default') #(os.getenv('FLASK_CONFIG') or 'default')
File "/home/m/mallts/dev.wget-studio.ru/fikls/app/__init__.py", line 19, in create_app
app.register_blueprint(main_blueprint)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/app.py", line 62, in wrapper_func
return f(self, *args, **kwargs)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/app.py", line 889, in register_blueprint
blueprint.register(self, options, first_registration)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/blueprints.py", line 153, in register
deferred(state)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/blueprints.py", line 128, in wrapper
func(state)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/blueprints.py", line 399, in <lambda>
self.name, code_or_exception, f))
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/app.py", line 62, in wrapper_func
return f(self, *args, **kwargs)
File "/home/m/mallts/dev.wget-studio.ru/myenv/lib/python3.4/site-packages/flask/app.py", line 1090, in _register_error_handler
'It is currently not possible to register a 500 internal ' \
AssertionError: It is currently not possible to register a 500 internal server error on a per-blueprint level.
My **app/main/errors.py**
from flask import render_template
from . import main
@main.errorhandler(404)
def page_not_found(e):
return render_template('404.html'), 404
@main.errorhandler(500)
def internal_server_error(e):
return render_template('500.html'), 500
My **app/main/views.py**
from flask import render_template, url_for, session, redirect
from . import main
@main.route('/')
def index():
return render_template('index.html')
Thx for your help
Answer: I just ran into this same issue with another application that I was
refactoring to use blueprints. There's no need for a workaround, since Flask
offers a decorator for just such a case:
[`app_errorhandler`](http://flask.pocoo.org/docs/0.10/api/#flask.Blueprint.app_errorhandler).
It works exactly like `errorhandler` in that it registers an error route for
the entire app, but it works with blueprints. Like so:
from flask import render_template
from . import main
@main.app_errorhandler(404)
def page_not_found(e):
return render_template('404.html'), 404
@main.app_errorhandler(500)
def internal_server_error(e):
return render_template('500.html'), 500
Grinberg's book, _Flask Web Development_ -- an excellent read -- uses this
decorator for error pages registered on the `main` blueprint. You can review
the companion code
[here](https://github.com/miguelgrinberg/flasky/blob/master/app/main/errors.py).
I missed it at first, too. :P
|
How does str(list) work?
Question: **Why does `str(list)` return what we see for the list on the console? How does
`str(list)` work? (Any reference to the CPython code for `str(list)`?)**
>>> x = ['abc', 'def', 'ghi']
>>> str(x)
"['abc', 'def', 'ghi']"
To get the original list back from the `str(list)` I have to:
>>> from ast import literal_eval
>>> x = ['abc', 'def', 'ghi']
>>> str(x)
"['abc', 'def', 'ghi']"
>>> list(str(x))
['[', "'", 'a', 'b', 'c', "'", ',', ' ', "'", 'd', 'e', 'f', "'", ',', ' ', "'", 'g', 'h', 'i', "'", ']']
>>> literal_eval(str(x))
['abc', 'def', 'ghi']
**Why doesn't `list(str(list))` turn the `str(list)` back into the original
list?**
Or I could use:
>>> eval(str(x))
['abc', 'def', 'ghi']
**Is `literal_eval` the same as `eval`? Is `eval` safe to use?**
**How many times can I do the following? Does the code break if I keep on
doing `str(list(str(list)))`?** E.g.
>>> x = 'abc'
>>> list(x)
['a', 'b', 'c']
>>> str(list(x))
"['a', 'b', 'c']"
>>> list(str(list(x)))
['[', "'", 'a', "'", ',', ' ', "'", 'b', "'", ',', ' ', "'", 'c', "'", ']']
>>> str(list(str(list(x))))
'[\'[\', "\'", \'a\', "\'", \',\', \' \', "\'", \'b\', "\'", \',\', \' \', "\'", \'c\', "\'", \']\']'
>>> list(str(list(str(list(x)))))
['[', "'", '[', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'a', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'b', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'c', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ']', "'", ']']
>>> str(list(str(list(str(list(x))))))
'[\'[\', "\'", \'[\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \'a\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \',\', "\'", \',\', \' \', "\'", \' \', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \'b\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \',\', "\'", \',\', \' \', "\'", \' \', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \'c\', "\'", \',\', \' \', \'"\', "\'", \'"\', \',\', \' \', "\'", \']\', "\'", \']\']'
>>> list(str(list(str(list(str(list(x)))))))
['[', "'", '[', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '[', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'a', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'b', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", 'c', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', "'", '"', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", '"', "'", ',', ' ', "'", ',', "'", ',', ' ', "'", ' ', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ']', "'", ',', ' ', '"', "'", '"', ',', ' ', "'", ']', "'", ']']
Answer: Well you have a total of 4 questions, let us go one by one.
> **1\. Why does `str(list)` return what we see for the `list` on the console? How does
> `str(list)` work?**
# What is [`str()`](https://docs.python.org/3/library/functions.html#str) and
[`__str__()`](https://docs.python.org/3/reference/datamodel.html#object.__str__)?
The `str()` callable is to return a _printable_ form of the object only! From
the [docs](https://docs.python.org/3/library/functions.html#str)
> `str(object)` does not always attempt to return a string that is acceptable
> to `eval()`; its goal is to return a printable string.
The `__str__()` function in a class is called whenever you call `str()` on an
object. Again from the
[documentation](https://docs.python.org/3/reference/datamodel.html#object.__str__)
> `object.__str__(self)`
>
> Called by the `str()` built-in function and by the `print` statement to
> compute the “informal” string representation of an object.
# What is the [`list`](https://docs.python.org/3/library/functions.html#list)
callable?
The `list()` callable is to create a list from an iterable passed as an
argument. Again from the
[docs](https://docs.python.org/3/library/functions.html#list)
> Return a `list` whose items are the same and in the same order as iterable's
> items
Thus, `str(list)` gives you a printable form and `list(str(list))` will
iterate over the string. That is `list(str(list))` will give you a list of the
individual characters of the printable form of the argument passed.
A small walk-through of the nested calls:
Given the list `l = ['a','b']` (apologies for taking a smaller example than that
in your question).
When you call `str(l)`, it returns a printable form of the list `l`, that is
`"['a', 'b']"`.
Now you can see clearly that `"['a', 'b']"` is a string and is indeed an
_iterable_. Now when you call `list` on this, i.e. `list("['a', 'b']")`, you get
a weird list like `['[', "'", 'a', "'", ',', ' ', "'", 'b', "'", ']']`. _Why does
this happen?_ This happens because the string iterates over its characters,
you can test this by using a dummy string,
>>> 'dummy'
'dummy'
>>> list('dummy')
['d', 'u', 'm', 'm', 'y']
Thus when you call `list` on a string you get a list of characters. Note
that again here, when you call `str()` on `list('dummy')`, you will not get
back your original string `'dummy'`, so again you will have to use
[`join`](https://docs.python.org/3/library/stdtypes.html#str.join)! Thus
recalling the same function will **NOT** get you back your original object!
**So, calling `str()` on a list calls the built-in `__str__()` method of the
list?**
**_The answer is NO!_**
## What happens internally when you call `str()` on a list?
Whenever you call `str()` on an list object, the steps followed are
1. Call the `repr()` of each of the list element.
2. Add a fancy `[` at the front and another `]` at the end of the list.
3. Join all of them with a comma.
~~As you can see from the source code of the list object in [cpython on
github](https://github.com/python/cpython/blob/master/Objects/listobject.c).~~
Going through the source code of cpython in
[hg.python](https://hg.python.org/cpython/file/e8783c581928/Objects/listobject.c#l362),
which is clearer, you can see the following three comments. (Thanks to
Ashwini for the link on that particular
[code](http://stackoverflow.com/questions/30109030/how-does-strlist-
work#comment48330380_30109030))
>
> /* Do repr() on each element. Note that this may mutate the list,
> so must refetch the list size on each iteration. */ line (382)
>
> /* Add "[]" decorations to the first and last items. */ line (398)
>
> /* Paste them all together with ", " between. */ line (418)
>
These correspond to the points I mentioned above.
# Now what is
[`repr()`](https://docs.python.org/3/library/functions.html#repr)?
`repr()` prints the string representation of all the objects. Again from the
[documentation](https://docs.python.org/3/library/functions.html#repr)
> Return a string containing a printable representation of an object.
and also note this sentence!
> For many types, this function makes an attempt to return a string that would
> yield an object with the same value when passed to `eval()`, otherwise the
> representation is a string enclosed in angle brackets that contains the name
> of the type of the object together with additional information often
> including the name and address of the object.
And now your second question here,
> **2\. Why doesn't `list(str(list))` turn the `str(list)` back into the
> original list?**
Internally, `str(list)` actually creates the `repr()` representation of the
list object. So to get back the list after calling `str` on the list, you
actually need to do
[`eval`](https://docs.python.org/3/library/functions.html#eval) on it and not
a `list` call.
# Workarounds
But we all know that [`eval` is
_evil_](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html),
so what is/are the workaround(s)?
## 1\. Using
[`literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval)
The first work-around would be to use
[`ast.literal_eval`](https://docs.python.org/3/library/ast.html#ast.literal_eval).
That brings us to your 3rd question,
> **3\. Is `literal_eval()` the same as `eval()`? Is `eval()` safe to use?**
[`ast.literal_eval()`](https://docs.python.org/3/library/ast.html#ast.literal_eval)
is safe
[unlike](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html)
the `eval()` function. The docs themselves mention that it is safe --
> _Safely_ evaluate an expression node or a string containing a Python literal
> or container display
## 2\. Using string functions and builtins
Another workaround can be done using
[`str.split()`](https://docs.python.org/3/library/stdtypes.html#str.split)
>>> x = ['abc', 'def', 'ghi']
>>> a = str(x)
>>> a[2:-2].split("', '")
['abc', 'def', 'ghi']
This is just a simple way to do that for a list of strings. For a list of
integers you will need
[`map`](https://docs.python.org/3/library/functions.html#map).
>>> x = [1,2,3]
>>> a =str(x)
>>> list(map(int,a[1:-1].split(', '))) # No need for list call in Py2
[1, 2, 3]
Thus unlike `literal_eval` these are simple hacks given that you know the
elements of the list. If they are heterogeneous in nature like `[1, "a",
True]` then you will have to loop through the split list and discover the
element type and then convert it and append the converted element to a final
list.
And for your final question,
> **4\. Does the code break if you do `str(list(str(list)))` again and
> again?**
Not really. The output will grow longer and longer, since each time you are
creating a `list` of a `str` and then again getting the printable version of
it. The only real limitation is your machine's memory (which will be reached
fairly soon, as at each step the string length is roughly multiplied by 5).
|
Parallel processing a large number of tasks
Question: I have 10,000 csv files which I have to open in Pandas,
manipulate/transform using some of Pandas's functions, and save the new output
to csv. Could I use parallel processing (for Windows) to make the work faster?
I tried the following but no luck:
    import pandas as pd
import multiprocessing
def proc_file(file):
df = pd.read_csv(file)
        df = df.resample('1S', how='sum')
df.to_csv('C:\\newfile.csv')
if __name__ == '__main__':
files = ['C:\\file1.csv', ... 'C:\\file2.csv']
for i in files:
p = multiprocessing.Process(target=proc_file(i))
p.start()
I don't think I have a good understanding of multiprocessing in Python.
Answer: Maybe something like this:
p = multiprocessing.Pool()
    p.map(proc_file, files)
For this size, you really need a process pool, so that the cost of launching a
process is offset by the work it does.
[multiprocessing.Pool](https://docs.python.org/2/library/multiprocessing.html#module-
multiprocessing.pool) does exactly that: instead of spawning one process per file
(which is what you were doing), it distributes the files as tasks across a fixed
pool of worker processes ([task
parallelism](http://en.wikipedia.org/wiki/Task_parallelism)).
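Putting the pieces together, a minimal sketch (my own consolidation; the transform body is copied from the question and the file paths are placeholders) might look like this:

    import multiprocessing

    import pandas as pd

    def proc_file(path):
        # read, transform and write a single csv file (transform as in the question)
        df = pd.read_csv(path)
        df = df.resample('1S', how='sum')
        df.to_csv(path + '.out.csv')

    if __name__ == '__main__':                       # required on Windows for multiprocessing
        files = ['C:\\file1.csv', 'C:\\file2.csv']   # ... your 10,000 paths
        pool = multiprocessing.Pool()                # defaults to one worker per CPU core
        pool.map(proc_file, files)                   # pass the function itself, do not call it
        pool.close()
        pool.join()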
|
Python2.7 - pip install pymssql fails on CentOS 6.3
Question: I am trying to install pymssql for Python on a CentOS machine however it keeps
failing on me.
I have already installed the following:
> freetds-devel
>
> python-devel
Which seems to be the fix I keep coming across in searches, however I have
already installed both of these and I am still getting the following error:
pip install pymssql
Collecting pymssql
/usr/local/lib/python2.7/site-packages/pip-6.1.1-py2.7.egg/pip/_vendor/requests/packages/urllib3/util/ssl_.py:79:
Using cached pymssql-2.1.1.tar.gz
Installing collected packages: pymssql
Running setup.py install for pymssql
Complete output from command /usr/local/bin/python2.7 -c "import setuptools, tokenize;__file__='/tmp/pip-build-oU7MKZ/pymssql/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-PE9Gxh-record/install-record.txt --single-version-externally-managed --compile:
setup.py: platform.system() => 'Linux'
setup.py: platform.architecture() => ('64bit', 'ELF')
setup.py: platform.linux_distribution() => ('Red Hat Enterprise Linux Server', '6.3', '*******')
setup.py: platform.libc_ver() => ('glibc', '2.3')
setup.py: Not using bundled FreeTDS
setup.py: include_dirs = ['/usr/local/include']
setup.py: library_dirs = ['/usr/local/lib']
running install
running build
running build_ext
building '_mssql' extension
creating build
creating build/temp.linux-x86_64-2.7
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/include -I/usr/local/include/python2.7 -c _mssql.c -o build/temp.linux-x86_64-2.7/_mssql.o -DMSDBLIB
_mssql.c:314:22: error: sqlfront.h: No such file or directory
In file included from _mssql.c:316:
cpp_helpers.h:34:19: error: sybdb.h: No such file or directory
_mssql.c:532: error: expected specifier-qualifier-list before ‘BYTE’
_mssql.c:683: error: expected specifier-qualifier-list before ‘DBPROCESS’
.............[Lots of errors removed from here]
:22123: error: ‘SYBVARBINARY’ undeclared (first use in this function)
_mssql.c:22135: error: ‘SYBVARCHAR’ undeclared (first use in this function)
_mssql.c: At top level:
_mssql.c:23607: error: expected ‘)’ before ‘val’
_mssql.c:23689: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘__Pyx_PyInt_from_py_DBINT’
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/usr/local/bin/python2.7 -c "import setuptools, tokenize;__file__='/tmp/pip-build-oU7MKZ/pymssql/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-PE9Gxh-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-oU7MKZ/pymssql
So I see this error in the log:
> _mssql.c:314:22: error: sqlfront.h: No such file or directory In file
> included from _mssql.c:316:
Which from searches suggests that I need to install freetds-devel and/or
python-devel but I already have.
I also notice this line:
> setup.py: Not using bundled FreeTDS
I am new to Linux-based operating systems, so I'm not sure if this means that it's not
using FreeTDS at all, or if it is just using the one I downloaded instead
of a bundled version or something? Does this suggest that maybe the freetds-
devel I downloaded is not being used correctly? If so how can I make setup.py
use the freetds-devel I downloaded?
If the freetds-devel is not an issue then is there something else I am missing
to install pymssql?
EDIT : More info
When I run the following find command:
> sudo find / -name "sqlfront.h"
The file it complains about is found here:
> /usr/include/freetds/sqlfront.h
So is it just that my FreeTDS install is messed up or what is wrong?
Answer: You didn't mention what version of CentOS you were using. Since you said
Python 2.7 I'm going to assume CentOS 7. If you're on CentOS 6 and using a
locally built Python, please update your question.
In any case, on my CentOS 7 system:
# rpm -q centos-release
centos-release-7-1.1503.el7.centos.2.8.x86_64
After installing the EPEL repositories:
# yum -y install epel-release
And then installing the requirements:
# yum -y install gcc python-pip python-devel freetds-devel
I was able to successfully `pip install` `pymssql`:
# pip install pymssql
Downloading/unpacking pymssql
Downloading pymssql-2.1.1.tar.gz (2.4MB): 2.4MB downloaded
Running setup.py (path:/tmp/pip-build-iWrHta/pymssql/setup.py) egg_info for package pymssql
setup.py: platform.system() => 'Linux'
setup.py: platform.architecture() => ('64bit', 'ELF')
setup.py: platform.linux_distribution() => ('CentOS Linux', '7.1.1503', 'Core')
setup.py: platform.libc_ver() => ('glibc', '2.2.5')
setup.py: Not using bundled FreeTDS
setup.py: include_dirs = ['/usr/local/include']
setup.py: library_dirs = ['/usr/local/lib']
Installed /tmp/pip-build-iWrHta/pymssql/setuptools_git-1.1-py2.7.egg
Installing collected packages: pymssql
Running setup.py install for pymssql
setup.py: platform.system() => 'Linux'
setup.py: platform.architecture() => ('64bit', 'ELF')
setup.py: platform.linux_distribution() => ('CentOS Linux', '7.1.1503', 'Core')
setup.py: platform.libc_ver() => ('glibc', '2.2.5')
setup.py: Not using bundled FreeTDS
setup.py: include_dirs = ['/usr/local/include']
setup.py: library_dirs = ['/usr/local/lib']
building '_mssql' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/local/include -I/usr/include/python2.7 -c _mssql.c -o build/temp.linux-x86_64-2.7/_mssql.o -DMSDBLIB
gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/_mssql.o -L/usr/local/lib -L/usr/lib64 -lsybdb -lrt -lpython2.7 -o build/lib.linux-x86_64-2.7/_mssql.so
building 'pymssql' extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/local/include -I/usr/include/python2.7 -c pymssql.c -o build/temp.linux-x86_64-2.7/pymssql.o -DMSDBLIB
gcc -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/pymssql.o -L/usr/local/lib -L/usr/lib64 -lsybdb -lrt -lpython2.7 -o build/lib.linux-x86_64-2.7/pymssql.so
Successfully installed pymssql
Cleaning up...
|
Python utility for parsing blocks?
Question: I have a file that starts something like:
databaseCons = {
main = {
database = "readable_name",
hostname = "hostdb1.serv.com",
instances= {
slaves = {
conns = "8"
}
}
maxconns = "5",
user = "user",
pass = "pass"
}
}
So, what I'd like to do is parse this out into a `dict` of sub-`dict`s,
something like:
`{'main': {'database': 'readable_name', 'hostname': 'hostdb1.serv.com',
'maxconns': '5', 'instances': {'slave': {'maxCount': '8'}}, 'user': 'user',
'pass': 'pass'}} `
I think the above makes sense... but please feel free to edit this if it
doesn't. Basically I want the equivalent of:
conns = '8'
slave = dict()
slave['maxCount'] = conns
instances = dict()
instances['slave'] = slave
database = 'readable_name'
hostname = 'hostdb1.serv.com'
maxconns = '5'
user = 'user'
pas = 'pass'
main = dict()
main['database'] = database
main['hostname'] = hostname
main['instances'] = instances
main['maxconns'] = maxconns
main['user'] = user
main['pass'] = pas
databaseCons = dict()
databaseCons['main'] = main
Are there any modules out there that can handle this sort of parsing? Even
what I've suggested above looks messy.. there's got to be a better way I'd
imagine.
Answer: Here is a [pyparsing](https://pyparsing.wikispaces.com/) parser for your
config file:
from pyparsing import *
def to_dict(t):
return {k:v for k,v in t}
series = Forward()
struct = Suppress('{') + series + Suppress('}')
value = quotedString.setParseAction(removeQuotes) | struct
token = Word(alphanums)
assignment = Group(token + Suppress('=') + value + Suppress(Optional(",")))
series << ZeroOrMore(assignment).setParseAction(to_dict)
language = series + stringEnd
def config_file_to_dict(filename):
return language.parseFile(filename)[0]
if __name__=="__main__":
from pprint import pprint
pprint(config_file_to_dict('config.txt'))
|
How many connections will open in mongodb in python?
Question: I am working with mongodb and python; for accessing the database I am doing
this:
#pymongo connection settings
from pymongo import Connection
connection = Connection('localhost', 27017)
db = connection['MyDB']
I am inserting documents like this:
db_data = db.mycollection.insert({'name': 'Mulagala', 'age': 24})
and finding like this:
db_data = db.mycollection.find()
When I am creating multiple users or getting `mycollection` details
multiple times, how many connections will be opened in mongodb? Do I need to close
any open connection before returning the result?
Answer: No matter how many `db.coll.find({...})` and `db.coll.insert({...})` will you
do, you will still have only one connection. You do not need to close open
connection (at the end of the script it will be closed automatically)
**P.S.** `MongoClient` is the preferred method to [connect to
mongo](https://api.mongodb.org/python/current/tutorial.html) (`Connection` is
deprecated)
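For reference, a minimal sketch of the `MongoClient` style (using the same database and collection names as in the question):

    from pymongo import MongoClient

    client = MongoClient('localhost', 27017)   # one client, one connection pool
    db = client['MyDB']

    db.mycollection.insert({'name': 'Mulagala', 'age': 24})
    for doc in db.mycollection.find():
        print(doc)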
|
Python, I want to make list json.dumped But, error 'dict' object has no attribute 'dumps'
Question: below is my python code
r = requests.get("https://www.googleapis.com/youtube/v3/search?part=snippet&maxResults=50&channelId="+CHANNELID+"&order=date&key="+DEVELOPER_KEY)
json = r.json()
items = json.get("items")
videos = []
for x in items:
title = x["snippet"]["title"]
videoId = x["id"]["videoId"]
channelTitle = x["snippet"]["channelTitle"]
cam_thumbnails = x["snippet"]["thumbnails"]["medium"]["url"]
publishedAt = x["snippet"]["publishedAt"]
data = { "title" : title,
"videoId" : videoId,
"channelTitle" : channelTitle,
"cam_thumbnails" : cam_thumbnails,
"publishedAt" : publishedAt,
}
videos.append(data)
    print json.dumps(videos) # this line causes the problem
I inserted the dicts into the list and then called json.dumps(), but an error was raised.
The error message is 'dict' object has no attribute 'dumps'.
What is the problem, and how can I solve it?
Answer: Previously, you must have imported the json module with `import json`,
which creates a variable named `json` in your namespace. Then you do
`json = r.json()`, i.e. you bind the name `json` to a new object, so it no
longer refers to the _module_ json but instead to the result of the
`r.json()` method. Thus, you can no longer use the json module through the
syntax `json.function()`, because `json` is now the result of `r.json()`. To resolve
your problem, you must rename the variable called `json` in your
example to, for example, `json_dict` or anything else.
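A sketch of the fix (keeping the rest of the code from the question unchanged, with the response variable renamed so the module stays accessible):

    import json                       # the module keeps its name
    import requests

    # CHANNELID and DEVELOPER_KEY are defined elsewhere, as in the question
    r = requests.get("https://www.googleapis.com/youtube/v3/search?part=snippet&maxResults=50&channelId="+CHANNELID+"&order=date&key="+DEVELOPER_KEY)
    response_data = r.json()          # renamed: don't shadow the json module
    items = response_data.get("items")

    videos = []
    # ... build the videos list exactly as in the question ...

    print json.dumps(videos)          # json still refers to the module here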
|
Replicating request to Chef with Python RSA
Question: Goal: I need a Python 3 wrapper for Chef's REST API. Because it has to be Python 3,
PyChef is out of the question.
Problem: I am trying to replicate the Chef request with Python RSA. But the
wrapper results in an error message: _"Invalid signature for user or client
'XXX'"_.
I approached the wrapper by trying to replicate the cURL script shown in [Chef
Authentication and Authorization with
cURL](https://docs.chef.io/auth.html#curl) using a Python RSA package: [RSA
Signing and verification](http://stuvel.eu/files/python-rsa-
doc/usage.html#signing-and-verification).
Here's my rewrite. It could be simpler but I started getting paranoid about
line breaks and headers order, so added a few unnecessary things:
import base64
import hashlib
import datetime
import rsa
import requests
import os
from collections import OrderedDict
body = ""
path = "/nodes"
client_name = "anton"
client_key = "/Users/velvetbaldmime/.chef/anton.pem"
# client_pub_key = "/Users/velvetbaldmime/.chef/anton.pub"
hashed_body = base64.b64encode(hashlib.sha1(body.encode()).digest()).decode("ASCII")
hashed_path = base64.b64encode(hashlib.sha1(path.encode()).digest()).decode("ASCII")
timestamp = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%SZ")
canonical_request = 'Method:GET\\nHashed Path:{hashed_path}\\nX-Ops-Content-Hash:{hashed_body}\\nX-Ops-Timestamp:{timestamp}\\nX-Ops-UserId:{client_name}'
canonical_request = canonical_request.format(
hashed_body=hashed_body, hashed_path=hashed_path, timestamp=timestamp, client_name=client_name)
headers = "X-Ops-Timestamp:{timestamp}\nX-Ops-Userid:{client_name}\nX-Chef-Version:0.10.4\nAccept:application/json\nX-Ops-Content-Hash:{hashed_body}\nX-Ops-Sign:version=1.0"
headers = headers.format(
hashed_body=hashed_body, hashed_path=hashed_path, timestamp=timestamp, client_name=client_name)
headers = OrderedDict((a.split(":", 2)[0], a.split(":", 2)[1]) for a in headers.split("\n"))
headers["X-Ops-Timestamp"] = timestamp
with open(client_key, 'rb') as privatefile:
keydata = privatefile.read()
privkey = rsa.PrivateKey.load_pkcs1(keydata)
with open("pubkey.pem", 'rb') as pubfile:
keydata = pubfile.read()
pubkey = rsa.PublicKey.load_pkcs1_openssl_pem(keydata)
signed_request = base64.b64encode(rsa.sign(canonical_request.encode(), privkey, "SHA-1"))
dummy_sign = base64.b64encode(rsa.sign("hello".encode(), privkey, "SHA-1"))
print(dummy_sign)
def chunks(l, n):
n = max(1, n)
return [l[i:i + n] for i in range(0, len(l), n)]
auth_headers = OrderedDict(("X-Ops-Authorization-{0}".format(i+1), chunk) for i, chunk in enumerate(chunks(signed_request, 60)))
all_headers = OrderedDict(headers)
all_headers.update(auth_headers)
# print('curl '+' \\\n'.join("-H {0}: {1}".format(i[0], i[1]) for i in all_headers.items())+" \\\nhttps://chef.local/nodes")
print(requests.get("https://chef.local"+path, headers=all_headers).text)
At each step I tried to check if the variables have the same result as their
counterparts in the curl script.
The problem seems to be at the signing stage - there's an obvious discrepancy
between the output of Python's packages and my Mac's openssl tools. Due to
this discrepancy, Chef returns `{"error":["Invalid signature for user or
client 'anton'"]}`. The cURL script with the same values and keys works fine.
`dummy_sign = base64.b64encode(rsa.sign("hello".encode(), privkey, "SHA-1"))`
from Python has the value of
N7QSZRD495vV9cC35vQsDyxfOvbMN3TcnU78in911R54IwhzPUKnJTdFZ4D/KpzyTVmVBPoR4nY5um9QVcihhqTJQKy+oPF+8w61HyR7YyXZRqmx6sjiJRffC4uOGb5Wjot8csAuRSeUuHaNTl6HCcfRKnwUZnB7SctKoK6fXv0skWN2CzV9CjfHByct3oiy/xAdTz6IB+fLIwSQUf1k7lJ4/CmLJLP/Gu/qALkvWOYDAKxmavv3vYX/kNhzApKgTYPMw6l5k1aDJGRVm9Ch/BNQbg1WfZiT6LK+m4KAMFbTORfEH45KGWBCj9zsyETyMCAtUycebjqMujMqEwzv7w==
while the output of `echo -n "hello" | openssl rsautl -sign -inkey ~/.chef/anton.pem | openssl enc -base64` is
WfoASF1f5DPT3CVPlWDrIiTwuEnjr5yCV+WIlbQLFmwm3nfhIqfTPLyTM56SwTSg
CKdboVU4EBFxC3RsU2aPpELqRH6+Fnl2Tl273vo6kLzvC/8+tUBTdNZdzSPhx6S8
x+6wzVFXsd3QeGAWoHkEgTKodSByFzARnZFxO2JzUe4dnygijwruHdf9S4ldrRo6
eaShwaxuNzM0cIl+Umz5iym3cCD6GFL13njmXZs3cHRLesBtLKA7pNxJ1UDf2WN2
OK09aK+bHaM4jl5HeQ2SdNzBQIKvyDcxX4Divnf2I/0tzD16J6BEMGCfTfsI2f3K
TVGulq81+sH9zo8lGnpDrw==
I couldn't find information on the default hashing algorithm in openssl for
`rsautl`, but I guess it's SHA-1.
At this point I don't really know which way to look, hope anyone can help make
it right.
Answer: From [Chef Authentication and Authorization with
cURL](https://docs.chef.io/auth.html#curl),
timestamp=$(date -u "+%Y-%m-%dT%H:%M:%SZ")
time is in UTC, so in Python, it has to be
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
`openssl` equivalent of Python,
dummy_sign = base64.b64encode(rsa.sign("hello".encode(), privkey, "SHA-1"))
is
echo -n hello|openssl dgst -sha1 -sign ~/.chef/anton.pem -keyform PEM|openssl enc -base64
In Python code, you're signing the message digest, SHA-1, of the message.
That's called detached signature.
`echo -n "hello" | openssl rsautl -sign -inkey ~/.chef/anton.pem | openssl enc -base64` but this one signs the whole message, without making digest.
Python `rsa` module has no equivalent of `openssl rsautl -sign`. So I defined
a function to fill that space.
from rsa import common, transform, core, varblock
from rsa.pkcs1 import _pad_for_signing
def pure_sign(message, priv_key):
'''Signs the message with the private key.
:param message: the message to sign. Can be an 8-bit string or a file-like
object. If ``message`` has a ``read()`` method, it is assumed to be a
file-like object.
:param priv_key: the :py:class:`rsa.PrivateKey` to sign with
:return: a message signature block.
:raise OverflowError: if the private key is too small to contain the
requested hash.
'''
keylength = common.byte_size(priv_key.n)
padded = _pad_for_signing(message, keylength)
payload = transform.bytes2int(padded)
encrypted = core.encrypt_int(payload, priv_key.d, priv_key.n)
block = transform.int2bytes(encrypted, keylength)
return block
Test;
openssl
echo -n hello|openssl rsautl -sign -inkey .chef/anton.pem |base64
foIy6HVpfIpNk4hMYg8YWCEZwZ7w4Qexr6KXDbJ7/vr5Jym56joofkn1qUak57iSercqQ1xqBsIT
fo6bDs2suYUKu15nj3FRQ54+LcVKjDrUUEyl2kfJgVtXLsdhzYj1SBFJZnbz32irVMVytARWQusy
b2f2GQKLTogGhCywFFyhw5YpAHmKc2CQIHw+SsVngcPrmVAAtvCZQRNV5zR61ICipckNEXnya8/J
Ga34ntyELxWDradY74726OlJSgszpHbAOMK02C4yx7OU32GWlPlsZBUGAqS5Tu4MSjlD1f/eQBsF
x/pn8deP4yuR1294DTP7dsZ9ml64ZlcIlg==
Python
        base64.b64encode(pure_sign(b'hello', privkey)).decode()
'foIy6HVpfIpNk4hMYg8YWCEZwZ7w4Qexr6KXDbJ7/vr5Jym56joofkn1qUak57iSercqQ1xqBsITfo6bDs2suYUKu15nj3FRQ54+LcVKjDrUUEyl2kfJgVtXLsdhzYj1SBFJZnbz32irVMVytARWQusyb2f2GQKLTogGhCywFFyhw5YpAHmKc2CQIHw+SsVngcPrmVAAtvCZQRNV5zR61ICipckNEXnya8/JGa34ntyELxWDradY74726OlJSgszpHbAOMK02C4yx7OU32GWlPlsZBUGAqS5Tu4MSjlD1f/eQBsFx/pn8deP4yuR1294DTP7dsZ9ml64ZlcIlg=='
Change the line;
signed_request = base64.b64encode(rsa.sign(canonical_request.encode(), privkey, "SHA-1"))
to
signed_request = base64.b64encode(pure_sign(canonical_request.encode(), privkey))
|
Get all pixels of an Image [Wand]
Question: I am using the [Wand Python
Library](https://wand.readthedocs.org/en/0.4.0/index.html), and trying to
solve the [Python Challenge](http://pythonchallenge.com), and my current
problem asks me to get all even/odd pixels.
This is, obviously, a very simple task. However, I found the Wand library
quite slow in loading pixels/copying pixels (Maybe it's because I also change
fill_color to the color of the pixel for each?), and I was wondering if I
could just load them all in at once.
My current solution to loading all pixels is something like this:
from wand.image import Image
img = Image(filename="5808.jpg")
pixels = []
    for x in range(img.width):
        for y in range(img.height):
            pixels.append(img[x, y])
print(pixels)
I'd prefer something like this:
from wand.image import Image
img = Image(filename="5808.jpg")
print(img.pixels)
Is there anything akin to that? Thanks in advance.
Answer: Without attempting to read the Python Challenge, and just focusing on the
question..
> I'd prefer something like this: `img.pixels`
Iterating over the image size to collect `wand.color.Color` objects would be
slow as this invokes repeated uses of MagickWand's internal pixel iteration &
pixel structure. As said challenge is for Python, and not C buffers, I'd
recommend getting a buffer of pixel data once, after which you're freely able to
iterate, compare, evaluate, etc. without needing ImageMagick resources.
For this example, I'm assuming the image is a 2x2 PNG sample _(shown scaled up
for visibility in the original post)_.

from wand.image import Image
# Prototype local variables
pixels = []
width, height = 0, 0
blob = None
# Load image
with Image(filename='2x2.png') as image:
# Enforce pixels quantum between 0 & 255
image.depth = 8
# Save width & height for later use
width, height = image.width, image.height
# Copy raw image data into blob string
blob = image.make_blob(format='RGB')
# Iterate over blob and collect pixels
for cursor in range(0, width * height * 3, 3):
# Save tuple of color values
pixels.append((blob[cursor], # Red
blob[cursor + 1], # Green
blob[cursor + 2])) # Blue
print(pixels)
#=> [(255, 0, 0), (0, 0, 255), (0, 128, 0), (255, 255, 255)]
This example is pretty quick, but remember Python does objects very well.
Let's extend the `Image` object, and create the method to satisfy
`img.pixels`.
# Lets create a generic pixel class for easy color management
class MyPixel(object):
red = 0
green = 0
blue = 0
def __init__(self, red=0, green=0, blue=0):
self.red = red
self.green = green
self.blue = blue
def __repr__(self):
return u'#{0.red:02X}{0.green:02X}{0.blue:02X}'.format(self)
# Extend wand.image.Image and add a new `img.pixels` pseudo-attribute
class MyImage(Image):
# Repeat above example
@property
def pixels(self):
pixels = []
self.depth = 8
blob = self.make_blob(format='RGB')
for cursor in range(0, self.width * self.height * 3, 3):
pixel = MyPixel(red=blob[cursor],
green=blob[cursor + 1],
blue=blob[cursor + 2])
pixels.append(pixel)
return pixels
# Call your custom class; which, collects your custom pixels
with MyImage(filename=filename) as img:
print(img.pixels)
#=> [#FF0000, #0000FF, #008000, #FFFFFF]
|
Python requests session keeps creating connection without reusing the connection
Question: I have the following code:
protocol = "http"
if self.protocol == PROTOCOL_HTTPS:
protocol = "https"
if self.session is None:
self.session = Session()
self.session.get(protocol+'://'+self.ip)
url = protocol+"://"+self.ip+requestURL
response = None
if requestType == GET_METHOD:
response = self.session.get(url, headers=(header),stream=False)
elif requestType == POST_METHOD:
response = self.session.post(url, payload, headers=(header), stream=False)
This code works but it opens too many connections to the device. I want only
one connection to be opened in the session. I tried the following code but it
doesn't seem to work. It is still creating more than 1 connection.
adapter = requests.adapters.HTTPAdapter(pool_connections=1, pool_maxsize=1)
self.session.mount('http://', adapter)
url = protocol+"://"+self.ip+requestURL
resp = self.session.get(url)
What am I doing wrong?
How do I ensure that only one connection is opened in the session?
Answer: You don't need to use an `HTTPAdapter` for one connection. You may just try
>>> import requests
>>> s = requests.Session()
>>> s.get('http://httpbin.org/get')
from <http://docs.python-requests.org/en/latest/api/#requests.Session>
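A quick way to check that the session really reuses its connection (my own sketch; it relies on urllib3's debug logging, so the exact message wording may differ between versions):

    import logging
    import requests

    logging.basicConfig(level=logging.DEBUG)   # urllib3 logs when it opens a new connection

    s = requests.Session()
    s.get('http://httpbin.org/get')   # a "new HTTP connection" line should appear once here
    s.get('http://httpbin.org/get')   # no new connection should be logged for this call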
|
python just import fails but works with from
Question: Why does this not work:
import matplotlib.pyplot as plt
import os
import skimage
camera = skimage.io.imread(os.path.join(skimage.data_dir, 'camera.png'))
#plt.show(io.imshow(camera))
But using `from skimage import io` does. So this works:
import matplotlib.pyplot as plt
import os
import skimage # I still need to import skimage to get the data_dir
from skimage import io
camera = io.imread(os.path.join(skimage.data_dir, 'camera.png'))
#plt.show(io.imshow(camera))
I thought
import skimage
skimage.io."something"
Was equivalent to
from skimage import io
io."something"
Answer: > I thought
>
>
> import skimage
> skimage.io."something"
>
>
> Was equivalent to
>
>
> from skimage import io
> io."something"
>
It's not.
import skimage
causes Python to import the skimage package, i.e. to run its `__init__.py`.
That `__init__.py` decides what becomes visible, and it does not necessarily
import the package's submodules, so `skimage.io` only exists as an attribute of
`skimage` once the submodule has been imported explicitly (via `import skimage.io`
or `from skimage import io`).
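So, keeping the rest of the first snippet the same, importing the submodule explicitly should make it work:

    import os
    import skimage          # still needed for skimage.data_dir
    import skimage.io       # makes skimage.io available as an attribute

    camera = skimage.io.imread(os.path.join(skimage.data_dir, 'camera.png'))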
|
Web application with python
Question: I have a Python program that takes one input (string) and prints out several
lines as an output. I was hoping to get it online, so that the web page only
has a text-area and a button. Inserting a string and pressing the button
should send the string to python and make the original page print out the
output from Python.
Assuming I have only dealt with Python and a little bit of HTML and have very
limited knowledge about the web, how might I approach this problem?
For some reason, my first instinct was PHP, but Googling "PHP Python
application" didn't really help.
That is, assuming what I want IS a 'web application', right? Not a web server,
web page or something else?
How could I get a really simple version (like in the first paragraph) up and
running, and where to learn more?
Answer: Yes, the term you're looking for is "web application". Specifically, you want
to host a Python web application.
Exactly what this means, can vary. There are two main types of application -
`cgi` and `wsgi` (or stand-alone, which is what most people talk about when
they say "web app").
Almost nobody uses cgi nowadays, you're much more likely to come across a site
using a web application framework, like Django or Flask.
If you're worried about easily making your page accessible to other people, a
good option would be to use something like the free version of Heroku do to
something simple like this. They have a guide available here:
<https://devcenter.heroku.com/articles/getting-started-with-
python#introduction>
If you're more technically inclined and you're aware of the dangers of opening
up your own machine to the world, you could make a simple Flask application
that does this and just run it locally:
from flask import Flask, request
app = Flask(__name__)
@app.route('/')
def main():
return 'You could import render_template at the top, and use that here instead'
@app.route('/do_stuff', methods=['POST'])
def do_stuff():
return 'You posted' + request.form.get('your string')
app.run('127.0.0.1', port=5555, debug=True)
|
How should I handle an error in Python 2.7 in the initialization method for a C extension?
Question: In Python 2.7 (which we need to support), the initialization function for a
C/C++ extension should be declared with the PyMODINIT_FUNC macro, which
effectively makes the function void. However, I'm not sure how we should
handle errors that occurs during this function. We could throw a C++ exception
within the function, but I'm not thrilled about that idea. Is there a better
way?
Here's the background: In order to work around an architectural problem that
we cannot address in a single release, we need to have the user call Python
via a script that we provide rather than directly from the Python executable.
By checking the process name, we can detect the situation where the user calls
via the executable rather than the script. In this case, we would like to
issue an error message and then terminate gracefully.
Answer: You can use one of the
[PyErr_Set*](https://docs.python.org/2/c-api/exceptions.html#c.PyErr_SetString)
methods.
Exceptions are always checked after calling `init_module_name()`.
It's not explicitly stated in the documentation, but if you look at the
examples, or if you [read the
source](https://hg.python.org/releasing/2.7.9/file/753a8f457ddc/Python/importdl.c),
you'll see that it is true.
|
PyQT4 and Threading: Closing MainWidget after QThread finished and ListWidget updated
Question: I'm trying to understand how threading works in Python. Therefore I wrote an
application that installs some packages in a separate QThread.
Once the worker is finished (finished signal emitted), I'd like to wait a few
seconds and then close the MainWidget (see below):
def finished(self):
self.installbutton.setEnabled(True)
self.listwidget.addItem(QtCore.QString("Process complete, have a nice day!"))
time.sleep(5)
self.close()
The problem now seems to be that calling self.close within the finished callback
leads to a situation where the ListWidget is not updated any more before
the window closes.
My guess is that the ListWidget and the callback function live on the same
stack and therefore the ListWidget has no chance to update any more. But
how do I solve this kind of issue?
Any suggestions how I can overcome this?
By the way: I'm "forced" to use Python 2.7.5 and PyQt4 in order to be
compliant with the rest of my Company... that's why I'm using the old-style
signals
Here the code:
import sys
import os
import time
import subprocess
import logging
log = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)
from PyQt4 import QtGui, QtCore
def main():
app = QtGui.QApplication(sys.argv)
w = InstallWizzard()
w.show()
sys.exit(app.exec_())
class InstallWizzard(QtGui.QMainWindow):
def __init__(self, parent=None):
QtGui.QMainWindow.__init__(self,parent)
self.centralwidget = QtGui.QWidget(self)
self.setCentralWidget(self.centralwidget)
self.thread = Worker()
hlayout = QtGui.QVBoxLayout()
self.listwidget = QtGui.QListWidget()
self.installbutton = QtGui.QPushButton()
self.installbutton.setText("Install Brother now...")
hlayout.addWidget(self.installbutton)
self.listwidget.addItem(QtCore.QString(r'Click "Install Brother now... to start installation"'))
self.centralwidget.setLayout(hlayout)
self.connect(self.installbutton, QtCore.SIGNAL("clicked()"),self._on_installation_start)
self.connect(self.thread, QtCore.SIGNAL("finished()"), self.finished)
# self.connect(self.thread, QtCore.SIGNAL("terminated()"), self.updateUI)
self.connect(self.thread, QtCore.SIGNAL("install_mssg(QString)"), self._cmd_processed)
hlayout.addWidget(self.listwidget)
def _cmd_processed(self, mssg):
self.listwidget.addItem(QtCore.QString(mssg))
def _on_installation_start(self):
self.installbutton.setEnabled(False)
cmds = [("Installing comtypes", r'easy_install comtypes')]
self.thread.install(cmds)
def finished(self):
self.installbutton.setEnabled(True)
self.listwidget.addItem(QtCore.QString("Process complete, have a nice day!"))
time.sleep(5)
self.close()
class Worker(QtCore.QThread):
def __init__(self, parent = None):
QtCore.QThread.__init__(self,parent)
print("Started Worker...")
def install(self, cmds):
self.cmds = cmds
self.start()
def run(self):
for desc, cmd in self.cmds:
self.emit(QtCore.SIGNAL("install_mssg(QString)"),QtCore.QString(desc + ": ..."))
try:
self._write_cmd_line(cmd)
mssg = "... Successful"
except Exception as e:
mssg = QtCore.QString(str("... Faillure: " + e))
self.emit(QtCore.SIGNAL("install_mssg(QString)"),mssg)
print("ond tschuss from worker")
def __del__(self):
self.quit()
self.wait()
def _write_cmd_line(self, cmd):
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
a,b = p.communicate()
if "cannot" in a or "error" in a or "not recognized" in a:
errormssg = "Could not proccess cmd!"
logging.error(errormssg)
raise Exception(errormssg)
else:
logging.debug(str(cmd) + " successful")
if __name__ == '__main__':
main()
Answer: After reading your question and your code, I understand that you have two
threads, the main thread and the worker thread, and when your worker is done (I mean
here that the thread has reached the last line of its code) you want to notify the
main thread to close the worker and switch to another window (probably) or do
something else.
To do this you obviously need to communicate with the UI thread (the main thread), so
you'll need to add this line to your **Worker (QThread) class**:
signal = QtCore.pyqtSignal(str) # Create a signal
Then go to the last line of your thread's `run()`, making sure that when your worker
reaches that line the job is done, and add this line to notify the main
thread:
self.signal.emit('Some String here') # this will send the string over the signal to your UI thread
Now jump to your **InstallWizzard** class and add this line inside your
`__init__` method, below **self.thread = Worker()**:
self.thread.signal.connect(self.finished) # finished is a method inside your class
Now inside your finished method you can quit/close the worker thread
easily using something like this:
self.thread.quit() # as you might guessed this will quit your thread.
You can also replace quit() with terminate() if you want to force the worker
to be closed, but please read this warning from the Qt documentation for
QThread::terminate:
> **Warning:** This function is dangerous and its use is discouraged. The
> thread can be terminated at any point in its code path. Threads can be
> terminated while modifying data. There is no chance for the thread to clean
> up after itself, unlock any held mutexes, etc. In short, use this function
> only if absolutely necessary.
And if you care about the string that is sent over the signal from the
Worker, add a new parameter to your finished method to get access to the string.
And this is it :)
I know that I am too late to answer your question (1 year ago) but it might
be helpful for other developers... anyway cheers
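Putting the pieces of this answer together, a minimal sketch (my own consolidation, assuming PyQt4 new-style signals; class and method names mirror the question) could look like this:

    from PyQt4 import QtCore

    class Worker(QtCore.QThread):
        signal = QtCore.pyqtSignal(str)            # new-style signal carrying a string

        def run(self):
            # ... do the installation work ...
            self.signal.emit("Process complete")   # last line: notify the UI thread

    # inside InstallWizzard.__init__, right after self.thread = Worker():
    #     self.thread.signal.connect(self.finished)
    #
    # and on the UI side:
    #     def finished(self, message):
    #         self.listwidget.addItem(QtCore.QString(message))
    #         self.thread.quit()
    #         self.close()
    # Note (my own suggestion): a blocking time.sleep(5) inside finished() freezes the
    # event loop, which is why the list never repaints; using
    # QtCore.QTimer.singleShot(5000, self.close) delays the close without blocking.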
|
Reading an image in python - experimenting with images
Question: I'm experimenting a little bit working with images in Python for a project I'm
working on.
This is the first time ever for me programming in Python and I haven't found a
tutorial that deals with the issues I'm facing.
I'm experimenting with different image decompositions, and I want to set some
variable `A` to an image loaded from a specified folder. Basically I'm looking
for Python's analog of Matlab's `imread`.
After googling for a bit, I found many solutions but none seem to work for me
for some reason.
For example even this simple code
import numpy as np
import cv2
# Load an color image in grayscale
img = cv2.imread('messi5.jpg',0)
which is supposed to work (taken from <http://opencv-python-
tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html>)
yields the error "No module named cv2".
Why does this happen? How can I read an image?
Another thing I tried is
import numpy as np
import skimage.io as io
A=io.imread('C:\Users\Oria\Desktop\test.jpg')
io.imshow(A)
which yields the error "SyntaxError: (unicode error) 'unicodeescape' codec
can't decode bytes in position 2-3: truncated \UXXXXXXXX escape"
All I want to do is read an image from a specified folder, which shouldn't be
hard... It should also be noted that the database I work with consists of ppm
files, so I want to read and show ppm images.
Edit: My environment is Pyzo, if that matters.
Edit2: Changing the backslashes into forward slashes changes the error to
Traceback (most recent call last):
File "<tmp 1>", line 3, in <module>
A=io.imread('C:/Users/Oria/Desktop/test.jpg')
File "F:\pyzo2015a\lib\site-packages\skimage\io\_io.py", line 97, in imread
img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
File "F:\pyzo2015a\lib\site-packages\skimage\io\manage_plugins.py", line 209, in call_plugin
return func(*args, **kwargs)
File "F:\pyzo2015a\lib\site-packages\matplotlib\pyplot.py", line 2215, in imread
return _imread(*args, **kwargs)
File "F:\pyzo2015a\lib\site-packages\matplotlib\image.py", line 1258, in imread
'more images' % list(six.iterkeys(handlers.keys)))
File "F:\pyzo2015a\lib\site-packages\six.py", line 552, in iterkeys
return iter(d.keys(**kw))
AttributeError: 'builtin_function_or_method' object has no attribute 'keys'
Answer: The closest analogue to Matlab's imread is
[scipy.misc.imread](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.misc.imread.html),
part of the [scipy](http://scipy.org/) package. I would write this code as:
import scipy.misc
image_array = scipy.misc.imread('filename.jpg')
Now to your broader questions. The reason this seems hard is because you're
coming from Matlab, which uses a different philosophy. Matlab is a monolithic
install that comes out of the box with a huge number of functions. Python is
modular. The built-in library is relatively small, and then you install
packages depending on what you want to do. For instance, the packages `scipy`
(scientific computing), `cv2` (computer vision), and `PIL` (image processing)
can all read simple images from disk, so you choose between them depending on
what else from the package you might want to use.
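For example, the same file could be loaded with any of the three (a quick sketch,
assuming the respective packages are installed; the filename is just a placeholder
for your own ppm/jpg file):

import scipy.misc
import cv2
from PIL import Image

img_scipy = scipy.misc.imread('test.ppm')   # numpy array
img_pil = Image.open('test.ppm')            # PIL Image object
img_cv2 = cv2.imread('test.ppm')            # numpy array, BGR channel order

As a side note on your second error: when writing Windows paths, use forward
slashes ('C:/Users/Oria/Desktop/test.ppm') or a raw string
(r'C:\Users\Oria\Desktop\test.ppm'); in Python 3 string literals a backslash
followed by U starts a unicode escape, which is what produced the unicodeescape
error in your traceback.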
This provides a lot more flexibility, but it does require you to become
comfortable installing packages. Sadly this is much more difficult on Windows
than on Linux-like systems, due to the lack of a "package manager". On Linux I
can `sudo apt-get install scipy` and install all of scipy in one line. In
Windows, you might be better off installing something like
[conda](http://conda.pydata.org/miniconda.html) that smooths the package
installation process.
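For instance, once a package manager is set up, installing the packages mentioned
above is typically a one-liner (a sketch; exact package names can vary between
distributions):

conda install scipy pillow
pip install opencv-python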
|
python multiprocessing/threading cleanup
Question: I have a python tool, that has basically this kind of setup:
main process (P1)   -> spawns a process (P2) that starts a tcp connection
                    -> spawns a thread (T1) that starts a loop to receive messages
                       that are sent from P2 to P1 via a Queue (Q1)

server process (P2) -> spawns two threads (T2 and T3) that start loops to receive
                       messages that are sent from P1 to P2 via Queues (Q2 and Q3)
The problem I'm having is that when I stop my program (with Ctrl+C), it
doesn't quit. The server process is ended, but the main process just hangs
there and I have to kill it.
The thread loop functions all look the same:
def _loop(self):
    while self.running:
        res = self.Q1.get()
        if res is None:
            break
        self._handle_msg(res)
All threads are started as daemon:
t = Thread(target=self._loop)
t.setDaemon(True)
t.start()
In my main process, I use atexit to perform clean-up tasks:
atexit.register(self.on_exit)
Those clean-up tasks are essentially the following:
1) Set `self.running` in P1 to `False` and send `None` to Q1, so that
thread T1 finishes
self.running = False
self.Q1.put(None)
2) send a message to P2 via Q2 to inform this process that it is ending
self.Q2.put("stop")
3) In P2, react to the "stop" message and do what we did in P1
self.running = False
self.Q2.put(None)
self.Q3.put(None)
That is it, and in my understanding that should make everything shut down
nicely, but it doesn't.
The main code of P1 also contains the following endless loop, because
otherwise the program would end prematurely:
while running:
sleep(1)
Maybe that has something to do with the problem, but I cannot see why it
should.
So what did I do wrong? Does my setup have major design flaws? Did I forget to
shut down something?
**EDIT**
Ok, I modified my code and managed to make it shut down correctly most of the
time. Unfortunately, every now and then it still gets stuck.
I managed to write a small working example of my code. To demonstrate what
happens, you simply need to start the script and then use `Ctrl + C` to stop
it. The issue now usually appears if you press `Ctrl + C` as soon as possible
after starting the tool.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import signal
import sys
import logging
from multiprocessing import Process, Queue
from threading import Thread
from time import sleep

logger = logging.getLogger("mepy-client")

class SocketClientProtocol(object):

    def __init__(self, q_in, q_out, q_binary):
        self.q_in = q_in
        self.q_out = q_out
        self.q_binary = q_binary
        self.running = True
        t = Thread(target=self._loop)
        #t.setDaemon(True)
        t.start()
        t = Thread(target=self._loop_binary)
        #t.setDaemon(True)
        t.start()

    def _loop(self):
        print "start of loop 2"
        while self.running:
            res = self.q_in.get()
            if res is None:
                break
            self._handle_msg(res)
        print "end of loop 2"

    def _loop_binary(self):
        print "start of loop 3"
        while self.running:
            res = self.q_binary.get()
            if res is None:
                break
            self._handle_binary(res)
        print "end of loop 3"

    def _handle_msg(self, msg):
        msg_type = msg[0]
        if msg_type == "stop2":
            print "STOP RECEIVED"
            self.running = False
            self.q_in.put(None)
            self.q_binary.put(None)

    def _put_msg(self, msg):
        self.q_out.put(msg)

    def _handle_binary(self, data):
        pass

    def handle_element(self):
        self._put_msg(["something"])

def run_twisted(q_in, q_out, q_binary):
    s = SocketClientProtocol(q_in, q_out, q_binary)
    while s.running:
        sleep(2)
        s.handle_element()

class MediatorSender(object):

    def __init__(self):
        self.q_in = None
        self.q_out = None
        self.q_binary = None
        self.p = None
        self.running = False

    def start(self):
        if self.running:
            return
        self.running = True
        self.q_in = Queue()
        self.q_out = Queue()
        self.q_binary = Queue()
        print "!!!!START"
        self.p = Process(target=run_twisted, args=(self.q_in, self.q_out, self.q_binary))
        self.p.start()
        t = Thread(target=self._loop)
        #t.setDaemon(True)
        t.start()

    def stop(self):
        print "!!!!STOP"
        if not self.running:
            return
        print "STOP2"
        self.running = False
        self.q_out.put(None)
        self.q_in.put(["stop2"])
        #self.q_in.put(None)
        #self.q_binary.put(None)
        try:
            if self.p and self.p.is_alive():
                self.p.terminate()
        except:
            pass

    def _loop(self):
        print "start of loop 1"
        while self.running:
            res = self.q_out.get()
            if res is None:
                break
            self._handle_msg(res)
        print "end of loop 1"

    def _handle_msg(self, msg):
        self._put_msg(msg)

    def _put_msg(self, msg):
        self.q_in.put(msg)

    def _put_binary(self, msg):
        self.q_binary.put(msg)

    def send_chunk(self, chunk):
        self._put_binary(chunk)

running = True

def signal_handler(signal, frame):
    global running
    if running:
        running = False
        ms.stop()
    else:
        sys.exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGINT, signal_handler)
    ms = MediatorSender()
    ms.start()
    for i in range(100):
        ms.send_chunk("some chunk of data")
    while running:
        sleep(1)
Answer: Maybe you should try to capture the `SIGINT` signal, which is generated by
`Ctrl + C`, using `signal.signal` like this:
#!/usr/bin/env python
import signal
import sys
def signal_handler(signal, frame):
    print('You pressed Ctrl+C!')
    sys.exit(0)
signal.signal(signal.SIGINT, signal_handler)
print('Press Ctrl+C')
signal.pause()
Code stolen from [here](http://stackoverflow.com/questions/1112343/how-do-i-
capture-sigint-in-python)
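One additional caveat, stated as an assumption about your platform (it applies to
the usual fork-based multiprocessing on Linux/macOS): `Ctrl + C` delivers SIGINT
to the whole foreground process group, so the server process receives it too and
may inherit the parent's handler. If you want the main process alone to coordinate
the shutdown through the queues, you can make the child ignore SIGINT, for example
by adapting the `run_twisted` function from your example like this:

import signal

def run_twisted(q_in, q_out, q_binary):
    # let the parent handle Ctrl+C; this process is told to stop via its queues
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    s = SocketClientProtocol(q_in, q_out, q_binary)
    while s.running:
        sleep(2)
        s.handle_element()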
|