Post xml data with auth using python
Question: I have this XML string I want to POST to an API URL. I've been checking the
docs and came up with something like this:
import urllib.request as ur
import urllib.parse as up
auth_handler = ur.HTTPBasicAuthHandler()
auth_handler.add_password(realm='something',
uri='http://api/api',
user=username,
passwd=passw)
opener = ur.build_opener(auth_handler)
opener.addheaders = [('User-agent', 'api-id'), ("Content-Type","application/xml;charset=utf-8")]
data = up.urlencode(('<?xml version="1.0" encoding="UTF-8"?>'
"<entry>"
"<episode>"+ep_no+"</episode>"
"<status></status>"
"<score></score>"
"<tags></tags>"
"</entry>"))
bin_data = data.encode('utf-8')
opener.open("http://api/api/add/"+id+".xml", data=bin_data)
However, I'm getting:
...
File "/home/hairo/sandbox/post_test.py", line 124, in post
data = up.urlencode(('<?xml version="1.0" encoding="UTF-8"?>'
...
raise TypeError
TypeError: not a valid non-string sequence or mapping object
It looks like I'm missing something obvious, but I can't figure out what it
is. Any help?
Answer: That call to
[urlencode](https://docs.python.org/2/library/urllib.html#urllib.urlencode) is
actually passing a single string: the adjacent string literals are concatenated,
and the parentheses do not create a tuple.
[Here](http://stackoverflow.com/questions/5607551/python-urlencode-string)'s
an example of the type of argument urlencode works with:
"Convert a mapping object or a sequence of two-element tuples".
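A minimal sketch of what urlencode does accept, and of the likely actual fix for a raw XML body, which should not be urlencoded at all (Python 3 `urllib.parse`, matching the question's imports; the field names and values here are invented for illustration):

```python
from urllib.parse import urlencode

# urlencode wants a mapping or a sequence of two-element tuples ...
params = urlencode({"episode": "12", "status": "1"})
pairs = urlencode([("episode", "12"), ("status", "1")])
print(params)  # -> episode=12&status=1
print(pairs)   # -> episode=12&status=1

# ... but a raw XML body should simply be encoded to bytes and passed
# directly as the `data` argument of opener.open():
xml_body = '<?xml version="1.0" encoding="UTF-8"?><entry><episode>12</episode></entry>'
bin_data = xml_body.encode("utf-8")
```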
|
How do you import files into the Python shell?
Question: I made a sample file that uses a class called `Names`. It has its
initialization function and a few methods. When I create an instance of the
class it retrieves the instance's first name and last name. The other methods
greet the instance and say a departure message. My question is: how do I
import this file into the Python shell without having to run the module
itself?
The name of my file is classNames.py and the location is
C:\Users\Darian\Desktop\Python_Programs\Experimenting
Here is what my code looks like:
class Names(object):
#first function called when creating an instance of the class
def __init__(self, first_name, last_name):
self.first_name = first_name
self.last_name = last_name
#class method that greets the instance of the class.
def intro(self):
print "Hello {} {}!".format(self.first_name, self.last_name)
def departure(self):
print "Goodbye {} {}!".format(self.first_name, self.last_name)
But I get the error:
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import classNames.py
ImportError: No module named classNames.py
Answer: I am not clear about what you expect and what you see instead, but your
module is handled exactly like `math` and any other module: you just import it.
When this happens for the first time, the file is taken and executed.
Everything that is left in its namespace after running it is available to you
from outside.
If your code contains only `def` and `class` statements and assignments, you
won't notice that anything happens, because, well, nothing "really" happens at
that time. But you have the classes, functions and other names available for
use.
However, if you have `print` statements at top level, you'll see that it is
indeed executed.
If you have this file anywhere in your Python path (be it explicitly or
because it is in the current working directory), you can use it like
import classNames
and use its contents such as
n = classNames.Names("John", "Doe")
or you do
from classNames import Names
n = Names("John", "Doe")
**Don't do`import classNames.py`**, as this would try to import module `py.py`
from the package `classNames/`.
|
How do I get a function inside of a while loop in a function to work properly?
Question: I have looked and looked for an answer, but I am new to python and the answers
I find seem to be over my head or not quite what I need. I am trying to take
my code and turn it into multiple functions to complete the simple task of
receiving some grades from the user and displaying the input back to the user
as they want it. Hopefully this will be more clear when you see my code.
import sys
gradeList = []
def validate_input():
try:
gradeList.append(int(grades))
except:
return False
else:
return True
def average(count, total):
ave = total / count
return ave
def exit(gradeList):
if gradeList == []:
print "You must enter at least one grade for this program to work. Good Bye."
sys.exit()
#def get_info():
while True:
grades = raw_input("Please input your grades.(or Enter once you have entered all of your grades): ")
if not grades:
break
elif validate_input() == True:
continue
else:
print "Please enter a number."
continue
def give_answers(gradeList):
while True:
print "\n",
print "Please select what you would like to do with your grades."
print "Press 1 for the Highest grade."
print "Press 2 for the Lowest grade."
print "Press 3 for the Average of the grades you entered."
print "\n",
print "Or you can press 'q' to quit."
choice = raw_input("> ")
if choice == 'q':
break
elif choice == '1':
print "\n",
print "The highest grade that you entered was %d.\n" % highest,
elif choice == '2':
print "\n",
print "The lowest grade that you entered was %d.\n" % lowest,
elif choice == '3':
print "\n",
print "Your average grade was %d.\n" % average,
else:
print "Please enter 1, 2, 3 or 'q'."
#get_info()
exit(gradeList)
gradeList.sort()
highest = gradeList[-1]
lowest = gradeList[0]
count = len(gradeList)
total = sum(gradeList)
average = average( count, total)
give_answers(gradeList)
Everything works correctly as I have pasted the code, but when I try to define
`get_info` the nested `validate_input()` function stops working. It no longer
populates the list so `exit()` catches and ends the program. The loop seems to
just pass over the `validate_input()`function altogether because the else
statement trips after each time through. My question is how do I get the
`validate_input()` function to work properly while continuing to accept input
until the user presses enter?
Answer: Pass the grade into `validate_input` so it can append it to the list.
Once the loop is moved inside `get_info`, `grades` becomes a local variable of
that function, so the lookup of `grades` inside `validate_input` raises a
`NameError`, which the bare `except` silently turns into `False`:
def validate_input(grades):
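A fuller sketch of that fix, with the loop passing the entered text explicitly (names follow the question's code; the interactive `raw_input` loop is shown as a comment since it needs a terminal):

```python
grade_list = []

def validate_input(grades):
    """Append one grade to the list; return False if it isn't a number."""
    try:
        grade_list.append(int(grades))
    except ValueError:
        return False
    return True

# Inside get_info() the loop then becomes:
#     grades = raw_input("Please input your grades ...")
#     if not grades:
#         break
#     if not validate_input(grades):
#         print "Please enter a number."

print(validate_input("90"))   # a valid grade is stored
print(validate_input("abc"))  # a non-number is rejected
print(grade_list)
```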
|
Feed google charts custom properties like color through gviz_api
Question: I'm having trouble propagating custom_properties like color into my google
chart through the python layer gviz_api.
I would like to create a bar chart with individually colored bars such as in
the example here:
<https://developers.google.com/chart/interactive/docs/gallery/barchart#BarStyles>
But I can't figure out how to set this up through the gviz_api
(<http://code.google.com/p/google-visualization-python/>).
I'm fine feeding the data in any way, dictionaries, lists, tuplets, one row at
a time, as long as I can color the bars individually. Here's my latest non-
working attempt, generate.py:
import gviz_api
def main():
# Creating the data
description = {"test" : ("string", "Test name"),
"duration" : ("number", "Duration")}
data = [dict(test="test A", duration=1000, custom_properties={"role":"color:green"}),
{"test": "test B", "duration": 4000}]
# Loading it into gviz_api.DataTable
data_table = gviz_api.DataTable(description, custom_properties={"role":"style"})
data_table.LoadData(data)
# Creating a JSon string
json = data_table.ToJSon(columns_order=("test", "duration"), order_by="test")
# Read page_template from file
f = open('template.html', 'r')
page_template = f.read()
# Putting the JSon string into the template
print page_template.format(json)
if __name__ == '__main__':
main()
And the corresponding template.html:
<html>
<script src="https://www.google.com/jsapi" type="text/javascript"></script>
<script>
google.load('visualization', '1', {{packages:['corechart']}});
google.setOnLoadCallback(drawChart);
function drawChart() {{
var options = {{
title: 'Test results',
legend: 'none',
chartArea: {{ width: "50%", height: "70%" }}
}}
var json_chart = new google.visualization.BarChart(document.getElementById('chart_div'));
var json_data = new google.visualization.DataTable({0}, 0.6);
json_chart.draw(json_data, options);
}}
</script>
<body>
<div id="chart_div"></div>
</body>
</html>
Answer: After struggling some more with gviz_api and taking a peek at its
implementation I gave up and decided not to use the gviz_api wrapper. Instead
I transferred the data to the template via an array and got the individually
colored bars I was after. With the gviz_api dependency out of the way, [Google
Chart, different color for each
bar](http://stackoverflow.com/questions/6375248/google-chart-different-color-
for-each-bar) held good information.
generate.py:
f = open('template.html', 'r')
page_template = f.read()
f.close()
testData = ['Test 1', 43, 'PASS', 'Test 2', 54, 'FAIL']
print page_template.format(testData)
template.html:
<html>
<script src="https://www.google.com/jsapi" type="text/javascript"></script>
<script type="text/javascript">
google.load("visualization", '1.1', {{packages:['corechart']}});
google.setOnLoadCallback(drawChart);
function drawChart() {{
var options = {{
title: 'Test results',
legend: 'none',
chartArea: {{ width: "50%", height: "70%" }}
}}
var barChart = new google.visualization.BarChart(document.getElementById('chart_div'));
var dataTable = new google.visualization.DataTable();
dataTable.addColumn('string', 'Test name');
dataTable.addColumn('number', 'Duration');
dataTable.addColumn({{ type: 'string', role: 'style' }});
// Import array with contents: [<test name>, <duration>, <result>, ...]
testData = {0}
dataTable.addRows(testData.length/3);
for (i = 0; i < testData.length/3;i++) {{
dataTable.setValue(i, 0, testData[i*3]);
dataTable.setValue(i, 1, testData[i*3+1]);
if (testData[i*3+2] == 'PASS') {{
dataTable.setValue(i, 2, 'color:green');
}} else {{
dataTable.setValue(i, 2, 'color:red');
}}
}}
barChart.draw(dataTable, options);
}}
</script>
<body>
<div id="chart_div"></div>
</body>
</html>
The doubled curly braces in the template are there to enable use of the python
string.format() method.
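The escaping works because `str.format` treats `{{` and `}}` as single literal braces while substituting `{0}` as usual; a tiny illustration:

```python
# Doubled braces survive .format() as single literal braces,
# while the positional placeholder is substituted as usual.
template = "var options = {{ title: '{0}' }};"
print(template.format("Test results"))
# -> var options = { title: 'Test results' };
```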
|
Parsing data for xml file in python
Question: I have the following xml file:
<address addr="x.x.x.x" addrtype="ipv4"/>
<hostnames>
</hostnames>
<ports><port protocol="tcp" portid="1"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="tcpmux" method="table" conf="3"/></port>
<port protocol="tcp" portid="64623"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="unknown" method="table" conf="3"/></port>
</ports>
<times srtt="621179" rttvar="35357" to="762607"/>
</host>
<host starttime="1418707433" endtime="1418707742"><status state="up" reason="syn-ack" reason_ttl="0"/>
<address addr="y.y.y.y" addrtype="ipv4"/>
<hostnames>
</hostnames>
<ports><port protocol="tcp" portid="1"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="tcpmux" method="table" conf="3"/></port>
<port protocol="tcp" portid="64680"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="unknown" method="table" conf="3"/></port>
</ports>
<times srtt="834906" rttvar="92971" to="1206790"/>
</host>
<host starttime="1418707433" endtime="1418707699"><status state="up" reason="syn-ack" reason_ttl="0"/>
<address addr="w.w.w.w" addrtype="ipv4"/>
<hostnames>
</hostnames>
<ports><extraports state="filtered" count="997">
<extrareasons reason="no-responses" count="997"/>
</extraports>
<port protocol="tcp" portid="25"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="smtp" method="table" conf="3"/></port>
<port protocol="tcp" portid="443"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="https" method="table" conf="3"/></port>
<port protocol="tcp" portid="7443"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="oracleas-https" method="table" conf="3"/></port>
</ports>
<times srtt="690288" rttvar="110249" to="1131284"/>
</host>
What I tried for extracting data for each ip is:
import sys
import xml.etree.ElementTree as ET
input=sys.argv[1]
tree=ET.parse(input)
root=tree.getroot()
for host in root.findall('host'):
updown=host.find('status').get('state')
if updown=='up':
print 'IP Address: '+host.find('address').get('addr')
ports=[port.get('portid') for port in root.findall('.//port')]
state=[port.get('state') for port in root.findall('.//port/state')]
name=[port.get('name') for port in root.findall('.//port/service')]
But it returns the information for all IPs. How can I get the specific
information for each IP?
I think I should change the `root.findall` calls, but I don't know how to do
that.
Answer: Within the loop just change `root.findall()` to `host.findall()`:
for host in root.findall('host'):
updown=host.find('status').get('state')
if updown=='up':
print 'IP Address: '+host.find('address').get('addr')
ports=[port.get('portid') for port in host.findall('.//port')]
state=[port.get('state') for port in host.findall('.//port/state')]
name=[port.get('name') for port in host.findall('.//port/service')]
This will limit finding ports, states and names to those within each host,
rather than those within the whole XML document.
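A self-contained version of the corrected loop, run against a small well-formed sample (the XML here is invented for illustration, since the question's snippet is only a fragment):

```python
import xml.etree.ElementTree as ET

sample = """<nmaprun>
  <host><status state="up"/><address addr="1.2.3.4" addrtype="ipv4"/>
    <ports>
      <port protocol="tcp" portid="25"><state state="open"/><service name="smtp"/></port>
      <port protocol="tcp" portid="443"><state state="open"/><service name="https"/></port>
    </ports>
  </host>
</nmaprun>"""

root = ET.fromstring(sample)
results = {}
for host in root.findall('host'):
    if host.find('status').get('state') == 'up':
        ip = host.find('address').get('addr')
        # searching from `host`, not `root`, keeps the data per-IP
        results[ip] = [port.get('portid') for port in host.findall('.//port')]
print(results)
```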
|
Hit http url using python getting IOError
Question: I tried hitting the URL like this:
import urllib
second_query="http://example.com"
pw = urllib.urlopen(second_query)
pw = pw.read()
print pw
I am trying to hit the Jira API at <http://example.com>, but I am getting the
following error:
Traceback (most recent call last):
File "abc.py", line 7, in <module>
pw = urllib.urlopen(second_query)
File "/abc/xyz/pqr/thirdparty/python/2.6/lib/python2.6/urllib.py", line 87, in urlopen
return opener.open(url)
File "/abc/xyz/pqr/thirdparty/python/2.6/lib/python2.6/urllib.py", line 203, in open
return getattr(self, name)(url)
File "/abc/xyz/pqr/thirdparty/python/2.6/lib/python2.6/urllib.py", line 358, in open_http
return self.http_error(url, fp, errcode, errmsg, headers)
File "/abc/xyz/pqr/thirdparty/python/2.6/lib/python2.6/urllib.py", line 371, in http_error
result = method(url, fp, errcode, errmsg, headers)
File "/abc/xyz/pqr/thirdparty/python/2.6/lib/python2.6/urllib.py", line 683, in http_error_401
errcode, errmsg, headers)
File "/abc/xyz/pqr/thirdparty/python/2.6/lib/python2.6/urllib.py", line 381, in http_error_default
raise IOError, ('http error', errcode, errmsg, headers)
IOError: ('http error', 401, 'Unauthorized', <httplib.HTTPMessage instance at 0x138e26c8>)
Please suggest a solution.
Answer: Reading the error makes it clear that you don't have the required
authorization:
IOError: ('http error', 401, 'Unauthorized', <httplib.HTTPMessage instance at 0x138e26c8>
You somehow have to provide the necessary credentials. I'm quite sure, however,
that Jira provides a high-level API for its tools, so you don't have to use
such a low-level approach.
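One way to attach credentials with only the standard library is an HTTP Basic `Authorization` header; a sketch (the helper name and credentials are made up, and the request lines are commented out since they need a live server):

```python
import base64

def basic_auth_header(user, password):
    # HTTP Basic auth is just "Basic " + base64("user:password")
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")

header = basic_auth_header("user", "secret")
print(header)

# With Python 2's urllib2 (matching the question's interpreter):
# req = urllib2.Request(second_query)
# req.add_header("Authorization", basic_auth_header("user", "secret"))
# print urllib2.urlopen(req).read()
```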
|
Gather constant output from process (Python)
Question: I have a program that generates constant output (`hcitool lescan`):
CC:41:00:4D:AA:AA Name1
CC:41:00:4D:AA:BB Name2
CC:41:00:4D:AA:CC Name3
I would like to parse this output in `Python` continuously, but **after a few
seconds I want to kill the process**. As it has to be terminated manually (by
pressing `CTRL-C`), I cannot use `subprocess.check_output(["prog"])`. Calling
`p = subprocess.Popen(["prog"], stdout=subprocess.PIPE)` and reading isn't good
either, as that blocks until `EOF` is reached, so the `Python` script hangs.
My question is: How can I start a program in `Python` (answers can be limited
to a `Linux` environment) and terminate it after a few seconds while gathering
its output (from `stdout`)?
Answer: Depending on what your program is doing, you could use a couple of methods.
The first one would be to put the process in a while loop and check for the
MAC address in the lescan output file.
import os
tag = 0
while tag != "00:11:22:33:44:55":
open("file.txt","w").close()
os.system("hcitool lescan> file.txt"+" & pkill --signal SIGINT hcitool")
f = open("file.txt","r")
read = f.read()
if "00:11:22:33:44:55" in read:
print "found"
tag = "00:11:22:33:44:55"
print "program finished"
os._exit(0)
The second solution would be if you are using `lescan` and `hcidump` at the
same time.
import time
import os
import subprocess
d = subprocess.Popen(["sudo hcitool lescan --duplicates & sudo hcidump -w dump.txt"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
time.sleep(3)
dump = os.system("sudo hcidump -r dump.txt>scan.txt")
#read the file like the previous solution I shown you
print "program finished"
os._exit(0)
The best way I found to do this was to redirect your readings to a file and
then scan the file for the specific thing you are looking for.
Comment back if you need anything specific, though I think I have covered your
question.
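An alternative sketch using `subprocess` directly: start the process, let it run for a few seconds, terminate it, then read what it produced. A small Python child stands in for `hcitool lescan` so the sketch runs anywhere; on the real system the `cmd` list would be `["hcitool", "lescan"]` (assuming `hcitool` is on the PATH).

```python
import subprocess
import sys
import time

# Stand-in producer that prints scan-like lines forever, as lescan would.
cmd = [sys.executable, "-u", "-c",
       "import time\n"
       "while True:\n"
       "    print('CC:41:00:4D:AA:AA Name1')\n"
       "    time.sleep(0.1)"]

proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
time.sleep(1)          # gather output for a short while
proc.terminate()       # then stop the process (SIGTERM)
output, _ = proc.communicate()

for line in output.decode().splitlines():
    print(line)        # each line holds one "MAC name" pair to parse
```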
|
Utf-8 issues with python and nltk
Question: I have this little script, which is basically just a test I'm doing
for a larger program. I have a problem with the encoding: when I write to the
file, UTF-8 characters such as øæå are not encoded properly. Why is that, and
how can I solve this issue?
#!/usr/bin/python
# -*- coding: utf-8 -*-
import nltk
from nltk.collocations import *
collocations = open('bam.txt', 'w')
bigram_measures = nltk.collocations.BigramAssocMeasures()
tokens = nltk.wordpunct_tokenize("Hei på deg øyerusk, du er meg en gammel dust, neida neida, det er ikke helt sant da."
"Men du, hvorfor så brusk, ikke klut i din susk på en runkete lust")
finder = BigramCollocationFinder.from_words(tokens)
# finder.apply_freq_filter(3)
scored = finder.score_ngrams(bigram_measures.raw_freq)
for i in scored:
print i[0][0] + ' ' + i[0][1] + ': ' + str(i[1]) + '\n'
collocations.write(i[0][0] + ' ' + i[0][1] + ': ' + str(i[1]) + '\n')
collocations.close()
Answer: The thing is, `nltk.wordpunct_tokenize` doesn't work with non-ASCII
data. It is better to use `PunktWordTokenizer` from `nltk.tokenize.punkt`, so
import it as:
from nltk.tokenize.punkt import PunktWordTokenizer as PT
and replace:
tokens = nltk.wordpunct_tokenize("Hei på deg øyerusk, du er meg en gammel dust, neida neida, det er ikke helt sant da."
"Men du, hvorfor så brusk, ikke klut i din susk på en runkete lust")
with:
tokens = PT().tokenize("Hei på deg øyerusk, du er meg en gammel dust, neida neida, det er ikke helt sant da."
"Men du, hvorfor så brusk, ikke klut i din susk på en runkete lust")
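Separately, the write itself is safer if the output file is opened with an explicit encoding; `io.open` behaves the same in Python 2 and 3 (a general sketch, independent of nltk; the sample score is invented):

```python
import io

# Write unicode text as UTF-8 explicitly instead of relying on defaults.
with io.open("bam.txt", "w", encoding="utf-8") as collocations:
    collocations.write(u"Hei p\u00e5 deg \u00f8yerusk: 0.05\n")

with io.open("bam.txt", "r", encoding="utf-8") as f:
    text = f.read()
print(text)
```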
|
Python: Iterate through object executing code both at certain places and also at end
Question: Here is some samplecode to explain:
outputText=""
counter=0
for obj in specialObjects:
if (obj.id < 400) or (obj.name.startswith("he")) or (obj.deliberateBreak==True):
print "The object %s is causing a section break."%obj.details
outputText = outputText.rjust(80)
open("file%d.txt"%counter,"w").write(outputText)
outputText=""
outputText+=obj.shortValue()
# THIS CODE IS DUPLICATED
outputText = outputText.rjust(80)
open("file%d.txt"%counter,"w").write(outputText)
What I need to do is iterate over a list of these special objects and check a
few different conditions each time. If any of the conditions are met (as seen
here) then I need to take the current output buffer, write it to a file, then
start a new output buffer and keep processing.
The problem here is the code duplication. Notice how the two lines
(outputText= and open) are duplicated. If I fail to put in the second set of
lines, the last set of objects will be processed but their output will never
be written.
I can think of two possible solutions to prevent the code duplication. Both of
them seem slightly inelegant, so I was wondering if there was an even better
way.
1) Wrap the code that would be repeated in a function.
outputText=""
counter=0
for obj in specialObjects:
if (obj.id < 400) or (obj.name.startswith("he")) or (obj.deliberateBreak==True):
print "The object %s is causing a section break."%obj.details
counter = writeData(outputText)
outputText=""
outputText+=obj.shortValue()
writeData(outputText,counter)
def writeData(outputText,counter):
outputText = outputText.rjust(80)
open("file%d.txt"%counter,"w").write(outputText)
return counter+1
2) Use a numeric for loop instead, and count to one higher than the length of
the object list; use that value as a flag to mean "write, but now exit":
outputText=""
counter=0
for obj in range(len(specialObjects)+1):
if (obj == len(specialObjects)) or (specialObjects[obj].id < 400) or (specialObjects[obj].name.startswith("he")) or (specialObjects[obj].deliberateBreak==True):
print "The object %s is causing a section break."%specialObjects[obj].details
outputText = outputText.rjust(80)
open("file%d.txt"%counter,"w").write(outputText)
outputText=""
if (obj==len(specialObjects)):
break
outputText+=specialObjects[obj].shortValue()
If I had to choose one, I'd probably pick #2, but this could end up creating
some weird edge cases with the 'if' statement if any more complex boolean
logic ever needs to be used.
Is there an even cleaner or more "Pythonic" way to accomplish this without
code duplication?
Thanks!
Answer: When I find myself writing code like this, where I'm iterating over a
collection and repeating code after the end of the loop, I usually take it as
a sign that I'm not iterating over the right thing.
In this case, you're iterating over a list of objects. But what you _really_
want to iterate over, I think, is a list of _groups_ of objects. That's what
[`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby)
is useful for.
Your code has a lot going on, so I'm going to use a simplified example to
illustrate how you can get rid of that duplicate code. Say, for (a very
contrived) example, that I have a list of things like this:
things = ["apples", "oranges", "pears", None,
"potatoes", "tomatoes", None,
"oatmeal", "eggs"]
This is a list of objects. Looking carefully, there are several groups of
objects delimited by `None` (note that you'd typically represent `things` as a
nested list, but let's ignore that for the purpose of the example). My goal is
to print out each group on a separate line:
apples, oranges, pears
potatoes, tomatoes
oatmeal, eggs
Here's the "ugly" way of doing this:
current_things = []
for thing in things:
if thing is None:
print ", ".join(current_things)
current_things = []
else:
current_things.append(thing)
print ", ".join(current_things)
As you can see, we have that duplicated `print` after the loop. Nasty!
Here's the solution using `groupby`:
from itertools import groupby
for key, group in groupby(things, key=lambda x: x is not None):
if key:
print ", ".join(group)
`groupby` takes an iterable (`things`) and a key function. It looks at each
element of the iterable and applies the key function. When the key changes
value, a new group is formed. The result is an iterator that returns `(key,
group)` pairs.
In this case, we'll use the check for `None` to be our key function. That's
why we need the `if key:`, since there will be groups of size one
corresponding to the `None` elements of our list. We'll just skip those.
As you can see, `groupby` allows us to iterate over the things we _really_
want to iterate over: _groups_ of objects. This is more natural for our
problem, and the code simplifies as a result. It looks like your code is very
similar to the above example, except that your key function will check the
various properties of the object (`obj.id < 400 ...`). I'll leave the
implementation details up to you...
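To adapt this to the question, the key function can count how many break objects have been seen so far, so each run of equal keys is one section. A simplified, self-contained sketch (plain ints stand in for the question's objects, and `is_break` stands in for the `obj.id < 400 ...` test):

```python
from itertools import groupby

def make_section_key(is_break):
    # Each break object bumps the counter, starting a new group;
    # the break object itself belongs to the new section, as in the question.
    counter = [0]
    def key(obj):
        if is_break(obj):
            counter[0] += 1
        return counter[0]
    return key

objects = [10, 20, 500, 30, 40, 600, 50]
is_break = lambda x: x >= 400
sections = [list(group) for _, group in
            groupby(objects, key=make_section_key(is_break))]
print(sections)  # -> [[10, 20], [500, 30, 40], [600, 50]]
```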
|
python 2.7 interpreter exits when calling opencv's SURF detect keypoints
Question: I'm trying to compute an image's keypoints and features using SURF.
I wrote the following script for Python 2.7:
import cv2
# load image and convert to grayscale
ima=cv2.imread('image.png')
ima2 = cv2.cvtColor(ima, cv2.COLOR_BGR2GRAY)
# create object to extract keypoints
surf_detector=cv2.FeatureDetector_create("SURF")
# extract those keypoints
keypoints = surf_detector.detect(ima2)
On reaching the last line of the script, Python exits without a warning or
error message. Any clues as to what might be happening and how to fix this?
Answer: The problem apparently "fixed itself" after (yet another) reinstallation
of OpenCV from scratch, following the instructions at
<http://danwin.com/2014/12/compile-opencv-2-4-10-ubuntu-14-04-14-10/>
|
Python tkinter: Replacing an image button with an image label
Question: Hi, I'm trying to write code that replaces an image button with an
image label when the button is pressed. But the window isn't updating, so the
new image doesn't become visible. Can anybody help me, if it is even possible
to do it that way?
Here is the code I'm trying:
from tkinter import *
import time
gifdir = "./"
class Game:
def __init__(self):
self.__window = Tk()
igm = PhotoImage(file=gifdir+"empty.gif")
Button(self.__window, image=igm, command= self.change_picture)\
.grid(row=1, column=2, sticky=E)
def change_picture():
igm = PhotoImage(file=gifdir+"new.gif")
Label(self.__window, image=igm,)\
.grid(row=1, column=2, sticky=E)
self.__window.mainloop()
def main():
Game()
main()
When I add this code to the end:
self.__window.update_idletasks()
time.sleep(1)
the new picture is shown for one second, but I need to see it all the time
and still be able to press other buttons.
Answer: I modified your code, as your code is very strangely designed, and incorrect
IMO. This is the modified version:
from tkinter import *
import time
class Game:
def __init__(self):
self.__window = Tk()
self.gifdir = "./"
self.igm = PhotoImage(file=self.gifdir+"empty.gif")
self.btn = Button(self.__window, image=self.igm, command = self.change_picture)
self.btn.grid(row=1, column=2, sticky=E)
self.__window.mainloop()
def change_picture(self):
self.igm = PhotoImage(file=self.gifdir+"new.gif")
self.btn.configure(image = self.igm)
def main():
Game()
main()
In this new version, pressing the button will change the image on it.
Basically, in your class, you need to keep references to created widgets.
Keeping a reference to the `PhotoImage` is especially important: if the
reference is not kept, the garbage collector will remove the image once the
`PhotoImage` instance goes out of scope in `change_picture`.
|
Python tkinter: Replacing an image button with an image label by pressing the button
Question: I'm trying to change an image button to an image label. Pressing the
button should replace the button's image with a label showing another image.
After that, it should still be possible to press the other image buttons.
I have code which I got from here: [Python tkinter: Replacing an image button
with an image label](http://stackoverflow.com/questions/27516600/python-
tkinter-replacing-an-image-button-with-an-image-label)
from tkinter import *
class Game:
def __init__(self):
self.__window = Tk()
self.gifdir = "./"
self.igm = PhotoImage(file=self.gifdir+"empty.gif")
self.btn = Button(self.__window, image=self.igm, command = self.change_picture)
self.btn.grid(row=1, column=2, sticky=E)
self.btn2 = Button(self.__window, image=self.igm, command = self.change_picture)
self.btn2.grid(row=1, column=1, sticky=E)
self.__window.mainloop()
def change_picture(self):
self.igm = PhotoImage(file=self.gifdir+"new.gif")
self.btn.configure(image = self.igm)
def main():
Game()
main()
If I press one of the buttons, I'm no longer able to press the other one, and
I would like the pressed button to change into a label.
Answer: I modified the code to use multiple references for buttons and images:
from tkinter import *
class Game:
def __init__(self):
self.__window = Tk()
self.gifdir = "./"
self.imgs = [PhotoImage(file=self.gifdir+"empty.gif"),
PhotoImage(file=self.gifdir+"empty.gif")]
self.btns = []
btn1 = Button(self.__window, image=self.imgs[0],
command = lambda: self.change_picture(0))
btn1.grid(row=1, column=2, sticky=E)
self.btns.append(btn1)
btn2 = Button(self.__window, image=self.imgs[1],
command = lambda: self.change_picture(1))
btn2.grid(row=1, column=1, sticky=E)
self.btns.append(btn2)
self.__window.mainloop()
def change_picture(self, btn_no):
self.imgs[btn_no] = PhotoImage(file=self.gifdir+"new.gif")
self.btns[btn_no].configure(image = self.imgs[btn_no])
def main():
Game()
main()
The references to the buttons and images are stored in lists. `change_picture`
was changed to take the button number as an argument, so that you can
distinguish which button was pressed.
With these changes, each button can be pressed independently, and its image
changes when pressed.
|
Python How to generate file name
Question: I was wondering if the community can help me out; I'm a newbie to
programming. I'm trying to SSH to a list of devices contained in "list.txt".
Once it logs into the router, I send the same command to each one and write
the output to a file. However, the code below overwrites the output of each
device. I need to create a unique file for each output, named after the IP
address contained in the "list.txt" file. If anybody can help me out I would
really appreciate it.
import paramiko
import time
import os
def disable_paging(remote_conn):
'''Disable paging on a Cisco router'''
remote_conn.send("terminal length 0\n")
time.sleep(1)
# Clear the buffer on the screen
output = remote_conn.recv(1000)
return output
#Create variables
f = open('list.txt')
filepath = ('test/tas.txt')
username = 'test'
password = 'test'
#Create a for loop
for i in f:
remote_conn_pre = paramiko.SSHClient()
remote_conn_pre.set_missing_host_key_policy(
paramiko.AutoAddPolicy())
remote_conn_pre.connect(i, username=username, password=password)
remote_conn = remote_conn_pre.invoke_shell()
output = remote_conn.recv(1000)
disable_paging(remote_conn)
# Now let's try to send the router a command
remote_conn.send("\n")
remote_conn.send("show int des\n")
# Wait for the command to complete
time.sleep(2)
output = remote_conn.recv(88880000)
# print output
if not os.path.exists(os.path.dirname(filepath)):
os.makedirs(os.path.dirname(filepath))
with open(filepath, "w") as f:
f.write(output)
f.close()
Answer: You could just define your 'filepath' variable inside the for loop and
use the 'i' string to create it, stripping the trailing newline that each line
read from the file carries:
for i in f:
    ip = i.strip()
    filename = 'tas_%s.txt' % ip
    filepath = os.path.join('test', filename)
    # everything else remains unchanged
Hope it works (difficult without knowing the content of 'list.txt').
|
ImportError: No module named 'PySide.QtCore' in python3.3 on Ubuntu
Question: I just upgraded my ubuntu to version 14.04, and since then `python3.3` cannot
import `QtCore` when calling my script; the full error output is as follows:
Traceback (most recent call last):
File "testcode.py", line 8, in <module>
from PySide.QtCore import *
ImportError: No module named 'PySide.QtCore'
Interestingly, the modules seem to be present:
>ls /usr/lib/python3/dist-packages/PySide/
__init__.py QtGui.cpython-34m-x86_64-linux-gnu.so QtScriptTools.cpython-34m-x86_64-linux-gnu.so QtWebKit.cpython-34m-x86_64-linux-gnu.so
phonon.cpython-34m-x86_64-linux-gnu.so QtHelp.cpython-34m-x86_64-linux-gnu.so QtSql.cpython-34m-x86_64-linux-gnu.so QtXml.cpython-34m-x86_64-linux-gnu.so
__pycache__ QtNetwork.cpython-34m-x86_64-linux-gnu.so QtSvg.cpython-34m-x86_64-linux-gnu.so QtXmlPatterns.cpython-34m-x86_64-linux-gnu.so
QtCore.cpython-34m-x86_64-linux-gnu.so QtOpenGL.cpython-34m-x86_64-linux-gnu.so QtTest.cpython-34m-x86_64-linux-gnu.so _utils.py
QtDeclarative.cpython-34m-x86_64-linux-gnu.so QtScript.cpython-34m-x86_64-linux-gnu.so QtUiTools.cpython-34m-x86_64-linux-gnu.so
The above path to `dist-packages` is included in `sys.path`. Any idea how to
fix this import issue?
Answer: The reason for this problem seems to be the naming of the modules. `python`
seems to look for a file named `QtCore.so` instead of
`QtCore.cpython-34m-x86_64-linux-gnu.so`, for example. Creating symlinks like
sudo ln -s QtCore.cpython-34m-x86_64-linux-gnu.so QtCore.so
for each library seems to solve the problem.
However, the ideal way would probably be to remove `PySide` for `python3.3`
and re-install it fresh. But I do not know the exact name of the package to
be removed...
Maybe someone knows the exact package identifier?
|
Read lines starting with numbers
Question: How can I only read lines in a file that start with numbers in Python, i.e.
Hello World <--ignore
Lovely day
blah43
blah
1234 <--read
blah12 <--ignore
blah
3124 <--read
Not matter how many words there are <--ignore
blah
0832 8423984 234892304 8239048324 8023948<--read
blah132 <--ignore
Answer:
import re
with open("filename") as f:
for line in f:
if re.match(r"^\d+.*$",line):
print line
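A regex isn't strictly necessary here; checking whether the first character is a digit does the same job. A minimal sketch (`sample` stands in for the lines of the open file):

```python
def lines_starting_with_digit(lines):
    # line[:1] is '' for empty lines, so this is safe on blank input
    return [line for line in lines if line[:1].isdigit()]

sample = ["Hello World", "1234", "blah12", "", "0832 8423984"]
print(lines_starting_with_digit(sample))  # ['1234', '0832 8423984']
```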
|
Added lines in csv file python - "wb" doesnt work
Question: The code works, except that after running it the CSV file now has a
blank line between each row. When I googled the problem, the usual suggestion
was to open the file with "wb", but that returns the error
TypeError: 'str' does not support the buffer interface
The code for the keypad is 2580 and the CSV is in the format of
bakerg,ict,George Baker,11HM,NORMAL
Code
from tkinter import *
import csv
import os
def upgradetoadmin():
global masterpassword
masterpassword = []
def one():
masterpassword.append("1")
arraycheck()
def two():
masterpassword.append("2")
arraycheck()
def three():
masterpassword.append("3")
arraycheck()
def four():
masterpassword.append("4")
arraycheck()
def five():
masterpassword.append("5")
arraycheck()
def six():
masterpassword.append("6")
arraycheck()
def seven():
masterpassword.append("7")
arraycheck()
def eight():
masterpassword.append("8")
arraycheck()
def nine():
masterpassword.append("9")
arraycheck()
def zero():
masterpassword.append("0")
arraycheck()
def clear():
global masterpassword
masterpassword = []
def blankremover():
input = open('Student Data.csv', 'rb')
output = open('Student Data2.csv', 'wb')
writer = csv.writer(output)
for row in csv.reader(input):
if row:
writer.writerow(row)
input.close()
output.close()
def arraycheck():
global masterpassword
if len(masterpassword) == 4:
if masterpassword == ['2','5','8','0']:
print("Success")
my_file = open('Student Data.csv', 'r')
r = csv.reader(my_file)
lines = [l for l in r]
my_file.close()
print(lines)
i = 0
for item in lines:
admininfy = whotomakeanadmin.get()
if item[0] == admininfy:
print(item)
print("YAY")
item[4] = "ADMIN"
print(item)
print(lines)
os.remove('Student Data.csv')
writer = csv.writer(open('Student Data.csv', 'w'))
writer.writerows(lines)
print(admininfy + " is now an admin")
else:
print("Invalid Code")
masterpassword = []
keypadwindow = Tk()
keypadwindow.iconbitmap("hXYTZdJy.ico")
keypadwindow.title("ADMIN UPGRADER")
whotomakeanadmin = Entry(keypadwindow, width = 30)
whotomakeanadmin.grid(column = 0, row = 0, columnspan = 3, pady = 10)
Button(keypadwindow, text="1", height = 4, width = 10, command = one).grid(column = 0, row = 1)
Button(keypadwindow, text="2", height = 4, width = 10, command = two).grid(column = 1, row = 1)
Button(keypadwindow, text="3", height = 4, width = 10, command = three).grid(column = 2, row = 1)
Button(keypadwindow, text="4", height = 4, width = 10, command = four).grid(column = 0, row = 2)
Button(keypadwindow, text="5", height = 4, width = 10, command = five).grid(column = 1, row = 2)
Button(keypadwindow, text="6", height = 4, width = 10, command = six).grid(column = 2, row = 2)
Button(keypadwindow, text="7", height = 4, width = 10, command = seven).grid(column = 0, row = 3)
Button(keypadwindow, text="8", height = 4, width = 10, command = eight).grid(column = 1, row = 3)
Button(keypadwindow, text="9", height = 4, width = 10, command = nine).grid(column = 2, row = 3)
Button(keypadwindow, text="0", height = 4, width = 10, command = zero).grid(column = 1, row = 4)
Button(keypadwindow, text="CLEAR", height = 4, width = 10, command = clear).grid(column = 2, row = 4)
keypadwindow.mainloop()
upgradetoadmin()
Answer: try
    open('Student Data.csv', 'w', newline='')
In Python 3 the file must be opened in _text_ mode (`'w'`, not `'wb'`; binary
mode is what triggers the `TypeError`), and passing `newline=''` is what stops
the extra blank lines. The official
[documentation](https://docs.python.org/3/library/csv.html#csv.writer) asks us
to open csv files exactly this way
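A minimal Python 3 round-trip showing the idiom (the filename `students_demo.csv` is hypothetical): open CSV files in text mode with `newline=''` and let the `csv` module manage line endings itself.

```python
import csv

rows = [["bakerg", "ict", "George Baker", "11HM", "NORMAL"],
        ["smithj", "ict", "Jane Smith", "11HM", "ADMIN"]]

# 'w' + newline='' (not 'wb') is the mode the csv docs prescribe in Python 3
with open("students_demo.csv", "w", newline="") as out:
    csv.writer(out).writerows(rows)

with open("students_demo.csv", "r", newline="") as inp:
    round_trip = list(csv.reader(inp))
print(round_trip == rows)  # True, with no blank lines between rows
```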
|
Python socket.create() raises an errno 10061
Question: When the client tries to connect to the server it always raise an exception. I
have disabled Windows firewall and my antivirus and I have also opened the
port 50100 on the router configuration panel.
Server:
import socket
HOST = ''
PORT = 8882
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
s.close()
Client:
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = 'localhost'
port = 50100
remote_ip = s.gethostbyname(host)
s.connect((remote_ip, port))
s.close()
Answer: Specify the correct _remote_ port in the client (i.e., the same port the
server script binds: 8882, not 50100); errno 10061 means the connection was
refused because nothing is listening on that port. Note also that
`gethostbyname` is a module-level function, so it is `socket.gethostbyname(host)`,
not `s.gethostbyname(host)` (the latter raises an `AttributeError`).
# Client
import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = 'localhost'
port = 8882
remote_ip = socket.gethostbyname(host)
s.connect((remote_ip, port))
s.close()
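A self-contained loopback sketch of the same fix: the client must target the exact port the server is listening on (here port 0 is used so the OS picks a free one, an assumption for demonstration only):

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0 lets the OS choose a free port
server.listen(1)
port = server.getsockname()[1]     # the port the client must use

def accept_one():
    conn, _ = server.accept()
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((socket.gethostbyname("localhost"), port))  # module function, not a method
client.close()
t.join()
server.close()
print("connected on port", port)
```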
|
Can't pass variables to function with Tkinter in python
Question: I'm trying to take the textvariable from an entry(a) and then put it in a
label in a new window(w).
from Tkinter import *
def abind(avar):
print avar
w=Toplevel()
at=Label(w, text=avar).pack()
w.mainloop()
app=Tk()
at=StringVar()
a=Entry(app,textvariable=at)
avar=at.get()
a.pack()
a.focus()
b=Button(app, command=abind(avar)).pack()
app.mainloop()
It either prints blank, if I take the avar out of the parentheses after abind,
or opens a new window immediately and doesn't display the button widget, if I
leave the avar in.
Answer: There are two main problems with your code:
* with `avar=at.get()`, the `avar` variable has the value of the text variable _at that point in time_ , i.e. it is just the empty string
* with `Button(app, command=abind(avar))`, you are _calling_ the function `abind(avar)` and using its _result_ as a command, i.e. `None`
Also, by doing `b=Button(...).pack()`, `b` is the result of `pack()`, i.e.
`None`. This is not related to your problem, but it's probably not what you
intended, either. Try this:
b = Button(app, command=lambda: abind(at.get()))
b.pack()
This uses `lambda` to create a new anonymous function that will get the
current value from `at` using `at.get()` and call `abind` with that value,
setting the text of the `Label` accordingly.
If you want the `Label` to be updated as you type additional text into the
`Entry`, try this:
def abind(avar):
    ...
    at = Label(w, textvariable=avar)  # use textvariable instead of text
    at.pack()                         # note the parentheses: call pack
    ...
...
b = Button(app, command=lambda: abind(at))  # pass at itself
...
|
pySerial modem result codes
Question: I'm using pySerial and python 2.7 on a Debian box. I'm writing python code to
control a USRobotics USB USR5637 modem. I want to check the responses from the
modem for issues/errors but I'm getting anomalous results from my code. I've
written some basic code for testing. I do have the phone line disconnected so
I can see a result code of "NO CARRIER." If anyone has access to pySerial and
a modem I would appreciate any help that can be provided.
I created the following code for testing:
import serial
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=5)
ser.write("ATE0\r") # Attention - Echo disable
response = ser.read(4)
print "ATE0 %s" % response
ser.write("ATX0\r") # Attention - Select result code subset X0
response2 = ser.read(8)
print "ATX0 %s" % response2
ser.write("ATDT411\r") # Attention - Dial - Tone - 411
response3 = ser.read(32)
print "ATDT %s" % response3
ser.write("ATH\r") # Attention - Hang up line
response4 = ser.read(16)
print "ATH %s" % response4
ser.write("ATZ\r") # Reset
response5 = ser.read(16)
print "ATZ %s" % response5
print "================================================="
print "%s %s %s %s %s" % (response, response2, response3, response4, response5)
ser.close()
The response I get is:
ATE0
OK
ATX0
OK
ATDT
ATH
NO CARRIER
ATZ
OK
=================================================
OK
OK
NO CARRIER
OK
My questions are:
1. What is the number in ser.read(4) or ser.read(8). Is it a timeout?
2. I don't seem to get the "NO CARRIER" until after the ATH section of the code. I would expect to get it directly after the ATDT section.
3. Why do I end up with only four results?
4. Why the anomalous spacing in the printed results?
Any help would be greatly appreciated. Thanks.
Answer: `ser.read(4)` means to read 4 bytes from the serial port. You have configured
a 5 second timeout with your constructor call:
serial.Serial('/dev/ttyACM0', 9600, timeout=5)
so `ser.read(4)` may return less than 4 characters. Use `timeout=None` for a
blocking read in which case it won't return until exactly 4 characters have
been received.
You are getting the weird spacing because each response sent by the modem ends
in a CR character (and it may even be a CR/LF pair).
You might find it easier to interact with the modem using this `getline()`
function:
def getline(ser):
buf = ""
while True:
ch = ser.read(1)
if ch == '\r': # or perhaps '\n'
break
buf += ch
return buf
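The same loop works against anything that exposes a `read(1)` method, which makes the framing easy to see with a stand-in port object (`FakePort` is hypothetical, not part of pySerial):

```python
def getline(ser):
    # accumulate characters until the modem's CR terminator (or EOF)
    buf = ""
    while True:
        ch = ser.read(1)
        if ch == "\r" or ch == "":
            break
        buf += ch
    return buf

class FakePort:
    """Stand-in for serial.Serial that replays canned modem output."""
    def __init__(self, data):
        self.data = list(data)
    def read(self, n):
        return self.data.pop(0) if self.data else ""

port = FakePort("OK\rNO CARRIER\r")
print(getline(port))  # OK
print(getline(port))  # NO CARRIER
```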
As for #2, I believe you are getting "NO CARRIER" due to the ATH command you
sent.
See the "Making a call" section in this MS Support Note
[(link)](http://support.microsoft.com/kb/164659)
According to the note sending a hangup command will result in a NO CARRIER
response. The modem is probably still trying to establish a connection when
you send the ATH.
|
Creating a class in Python
Question: I am working a program for school. We are to create a "class" and I keep
getting an attribute error but I can not figure out what I wrong can someone
please help. The directions are as followed. I have also included the code
that I have written in hopes someone can help me.
> Write a class named `Car` that has the following data attributes:
>
> * `__year_model`( for the car’s year model)
> * `__make`(for the car’s make of the car)
> * `__speed`( for the car’s current speed)
>
>
> The Car class should have a `__init__` method that accepts the car’s year
> model and make as arguments. It should also assign `0` to the `__speed` data
> attribute.
>
> The class should also have the following methods:
>
> * Accelerate: The `accelerate` method should add 5 to the speed data
> attribute each time it is called.
> * Brake: The `brake` method should subtract 5 from the speed data
> attribute each time it is called.
> * The `get_speed` method should return the current speed
>
>
> Next, Design a program that creates a car object, and then calls the
> accelerate method five times. After each call to the accelerate method, get
> the current speed of the car and display it. Then call the brake method five
> times. After each call to the brake method, get the current speed of the car
> and display it.
class Car:
def __init__(self, year_model, make, speed):
self.year_model = year_model
self.make = make
self.speed = 0
############# year_model################
def setYear_model(self, year_model):
self.year_model = year_model
def getYear_model(self):
return self.year_model
############# Make################
def setMake(self, make):
self.make = make
def getMake(self):
return self.make
############# speed################
def setSpeed(self, speed):
if speed < 0:
print("Speed cannot be negative")
else:
self.speed = speed
def getSpeed(self):
return self.speed
def accelerate(self, speed):
self.speed += 5
return self.speed
def brake(self, speed):
self.speed -= 5
return self.speed
############# str ############
def __str__(self):
return "Make : " + self.make + ", Model Year :" + \
self.year_model + ", speed =" + str(self.speed)
my actual program
import CarDefinition
def main():
my_car = CarDefinition.Car("2008", "Honda Accord")
print(my_car)
# Accelerate 5 times
print ("Car is Accelerating: ")
for i in range(5):
my_car.accelerate()
print ("Current speed: ", my_car.getSpeed())
print()
# Break 7 times
print ("Car is braking: ")
for i in range(7):
my_car.brake()
print ("Current speed: ", my_car.getSpeed())
print("my_car values at prgram end:\n", my_car)
main()
Answer: Drop the `speed` parameter from the method's signature; the method is
called as `my_car.accelerate()` with no arguments:
    def accelerate(self):
        self.speed += 5
        return self.speed
Do the same for `brake`, and for `__init__`, which is handed only two values
(`Car("2008", "Honda Accord")`) but currently declares three parameters.
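Putting the pieces together, a trimmed sketch of the class as the assignment describes it (using the name-mangled `__` attributes the instructions ask for):

```python
class Car:
    def __init__(self, year_model, make):
        self.__year_model = year_model
        self.__make = make
        self.__speed = 0            # speed always starts at zero

    def accelerate(self):
        self.__speed += 5

    def brake(self):
        self.__speed -= 5

    def get_speed(self):
        return self.__speed

car = Car("2008", "Honda Accord")
for _ in range(5):
    car.accelerate()
print(car.get_speed())  # 25
for _ in range(5):
    car.brake()
print(car.get_speed())  # 0
```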
|
looping for a sum versus summing a list comprehension
Question: In the sort of calculations I do, I often encounter the need to sum some
expression over (at least) one list. Let's say I want to find the sum of x^2
for x in some list L. Is there a significant difference between run times of
sum([val**2 for val in L])
versus
total = 0
for val in L:
total+=val**2
and would this change if I had a more complicated expression to sum? I'm not
concerned about the memory issues associated with creating a list, but cleaner
code and faster run time matter.
I'm basically wondering whether the optimization that people have probably done
to list comprehensions and to `sum` would get me faster code, by doing the loop
in C rather than Python.
**edit** for anyone searching for this, the answer turns out to be that
converting to numpy arrays and then doing the calculation is fastest of all.
Answer: The easiest and most certain answer to do is just test it:
import timeit
import numpy as np
L = range(1000)
M = np.arange(1000)
def f0():
sum([val**2 for val in L])
def f1():
total = 0
for val in L:
total+=val**2
def f2():
np.sum(M*M)
print timeit.timeit("f0()", setup="from __main__ import f0, f1, f2, L, M", number=100)
print timeit.timeit("f1()", setup="from __main__ import f0, f1, f2, L, M", number=100)
print timeit.timeit("f2()", setup="from __main__ import f0, f1, f2, L, M", number=100)
# 0.015289068222
# 0.00959897041321
# 0.000958919525146
The times have a similar ratio if 1M is used instead of 1K (here I also used
`number=10` so I didn't have to wait):
# 1.21456193924
# 1.08847117424
# 0.0474879741669
That is, the two pure Python approaches are about the same, and using numpy
speeds up the calculation 10-20x.
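As a footnote, `sum()` also accepts a generator expression, which skips building the intermediate list; in my quick tests it lands in the same ballpark as the other two pure-Python versions:

```python
import timeit

L = range(1000)

# time the list-comprehension form against the generator-expression form
list_time = timeit.timeit(lambda: sum([v**2 for v in L]), number=100)
gen_time = timeit.timeit(lambda: sum(v**2 for v in L), number=100)
print(list_time, gen_time)
```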
|
Problems on Windows Service written in python and py2exe
Question: I've written a service for Windows:
**agentservice.py**
import win32serviceutil
import win32service
import win32event
import win32evtlogutil
import agent
class AgentService(win32serviceutil.ServiceFramework):
_svc_name_ = "AgentService"
_svc_display_name_ = "AgentService"
_svc_deps_ = ["EventLog"]
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
def SvcRun(self):
import servicemanager
agent.verify()
# Write a 'started' event to the event log...
win32evtlogutil.ReportEvent(self._svc_name_,servicemanager.PYS_SERVICE_STARTED,0, servicemanager.EVENTLOG_INFORMATION_TYPE,(self._svc_name_, ''))
# wait for beeing stopped...
win32event.WaitForSingleObject(self.hWaitStop, win32event.INFINITE)
# and write a 'stopped' event to the event log.
win32evtlogutil.ReportEvent(self._svc_name_,servicemanager.PYS_SERVICE_STOPPED,0,
servicemanager.EVENTLOG_INFORMATION_TYPE,(self._svc_name_, ''))
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
if __name__ == '__main__':
win32serviceutil.HandleCommandLine(AgentService)
then **agent.py**
import os
import socket
import time
import json
import platform
PLATFORM = platform.system()
import uuid
import sys
HOST = 'highwe.net'
PORT = 8302
USERKEY = None
def getHoldHost():
hold_host = os.environ.get('HOLDHOST')
if hold_host is None:
return HOST
return hold_host
HOST = getHoldHost()
def macAddress():
return ':'.join(['{:02x}'.format((uuid.getnode() >> i) & 0xff) for i in range(0, 8 * 6, 8)][::-1])
def getRelease():
'''Get OS info'''
release = ''
if PLATFORM == 'Windows':
release = osAction("ver").decode('gbk')
return release
def getExpInfo(just_info=False):
'''Get Exception'''
import traceback
if just_info:
info = sys.exc_info()
return info[0].__name__ + ':' + str(info[1])
else:
return traceback.format_exc()
def osAction(command):
'''
run command
'''
try:
p = os.popen(command)
content = p.read()
p.close()
except Exception:
content = 'djoin_error:' + getExpInfo(True)
return content
def socketAgent():
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))
return sock
def diskMon():
mon_data = None
if PLATFORM == 'Windows':
disk = osAction("wmic logicaldisk get caption, size, freespace, drivetype")
mon_data = dict(disk=disk)
else:
pass
return mon_data
def send():
mac = macAddress()
release = getRelease()
try:
sock = socketAgent()
        while True:
            disk = diskMon()  # poll the disk stats on every iteration
            if disk:
                message = json.dumps(dict(user_key=USERKEY, platform=PLATFORM, mac=mac, release=release, mon_data=disk, type="disk")) + '\u7ed3\u675f'
sock.send(message)
print '%s send disk' % PLATFORM
time.sleep(5)
except socket.error:
error_info = getExpInfo(True)
print HOST
print error_info
time.sleep(5)
send()
def verify():
global USERKEY
with open('agent.conf', 'r') as f:
out_data = f.read()
USERKEY = json.loads(out_data).get('user_key')
#print 'start...'
agentPid = os.getpid()
writePid(agentPid)
send()
def writePid(pid):
pid = str(pid)
with open('pid.config','w') as f:
f.write("%s\n" % pid)
if __name__ == '__main__':
pass
Note: agent.conf is also in current directory.
**agent.conf** :
{"user_key": "cd7eab88-3055-4b1d-95a4-2ad80731d226"}
and my **setup.py** is:
from distutils.core import setup
import py2exe
import sys
sys.argv.append("py2exe")
setup(service = ["agentservice"])
after I am run :
python setup.py
there is a agentservice.exe in ./dist directory. And run:
agentservice.exe -install
and everything is fine and the service appears in the Windows service list .it
success installed.
But what confuses me is: why can't my service start and stop normally? Are
there any bugs in my code?
Any help would be greatly appreciated.
Answer: > Note: agent.conf is also in current directory.
How? The current directory is the directory you're in when you start the
program. The working directory for a service is usually your System32
directory, the working directory at the time you install is the `dist`
directory under your project, and presumably the working directory for the
client script is the top level of your project. Unless you've created a
hardlink/junction, the file isn't in all three places.
If you want to find a file that's in the _script's_ directory, you have to do
that explicitly. You can change the current directory to the script's
directory at startup, or configure the service to run from the script's
directory instead of the default location—or, better, you can ignore the
current working directory; get the script directory at startup with
`os.path.abspath(os.path.dirname(sys.argv[0]))` and then `os.path.join` that
to the path.
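For example, resolving `agent.conf` relative to the script instead of the working directory might look like this (a sketch; the paths are assumptions):

```python
import os
import sys

# sys.argv[0] is the script/executable path, regardless of which working
# directory the service manager happens to use
script_dir = os.path.abspath(os.path.dirname(sys.argv[0]))
conf_path = os.path.join(script_dir, "agent.conf")
print(conf_path)
```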
But in your case, you're already using a proper `setup.py`, so what you really
want to do is include the file as a data or extra file in your package and use
[`pkg_resources`](https://pythonhosted.org/setuptools/pkg_resources.html) to
locate it at runtime. (It's possible that `py2exe` has its own variation that
supersedes the `setuptools` stuff. If so, use that instead, of course.)
|
pywinauto error in Python 3.X
Question: I have installed "pywinauto" in Python 3.4.1 32 bit (on a Windows 7 64 bit
machine) using the command:
pip.exe install pywinauto
which gave me the following output:
> C:\Python34\Scripts>pip.exe install pywinauto
> Downloading/unpacking pywinauto
>   Running setup.py (path:C:\Users\arun_m\AppData\Local\Temp\pip_build_arun_m\pywinauto\setup.py) egg_info for package pywinauto
> Installing collected packages: pywinauto
>   Running setup.py install for pywinauto
>
>     File "C:\Python34\Lib\site-packages\pywinauto\clipboard.py", line 94
>         print formats
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\controls\common_controls.py", line 356
>         print "##### not dealing with that TVN_GETDISPINFO stuff yet"
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\controls\HwndWrapper.py", line 461
>         print "dialog not found"
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\controls\wraphandle.py", line 43
>         except AttributeError, e:
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\controls\__init__.py", line 39
>         print "blah"
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\findbestmatch.py", line 137
>         _after_tab = re.compile(ur"\t.*", re.UNICODE)
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\findwindows.py", line 221
>         print "==" * 20
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\handleprops.py", line 323
>         print "%15s\t%s" % (name, value)
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\tests\missingextrastring.py", line 160
>         print num_found, num_bugs, loc, ref
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\tests\__init__.py", line 79
>         print "BugType:", bug_type, is_in_ref,
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\test_application.py", line 36
>         app.connect_(path = ur"No process with this please")
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\win32defines.py", line 50
>         HKEY_CLASSES_ROOT = 2147483648L # Variable POINTER(HKEY__)
>     SyntaxError: invalid syntax
>
>     File "C:\Python34\Lib\site-packages\pywinauto\win32structures.py", line 43
>         print "%20s "% name, getattr(struct, name)
>     SyntaxError: invalid syntax
>
> Successfully installed pywinauto
> Cleaning up...
After this, when I execute the following in Python's IDLE:
import pywinauto
it gives no error. But when I try:
from pywinauto import application
it gives me the following output:
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>     from pywinauto import application
>   File "C:\Python34\lib\site-packages\pywinauto\application.py", line 64, in <module>
>     import win32structures
> ImportError: No module named 'win32structures'
I searched in Python3.4 folder and found "win32structures.py" file in the
location:
> C:\Python34\Lib\site-packages\pywinauto\
I don't know why it's giving "ImportError" when the file is present.
Can you please tell me what's going wrong?
Thanks!
Answer: Official `pywinauto 0.4.2` version is compatible with 32-bit Python 2.x only.
You can install `pywinauto` on `Python 2.7.8 32-bit`, for example (I use
Python 2.6.6). Also you can find some unofficial modifications which are
compatible with 64-bit Python 2.x (which is absolutely necessary for automating
64-bit apps). I didn't see any Python 3.x compatible versions. Maybe you will
have more luck.
EDIT:
`pywinauto` project has been moved to [GitHub
repo](https://github.com/pywinauto/pywinauto). It's Python 3.x compatible now.
Use 64-bit Python for 64-bit apps and 32-bit Python for 32-bit ones.
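A quick way to check which interpreter you are actually running, since the bitness and version must match the pywinauto build (a generic sketch, not pywinauto-specific):

```python
import struct
import sys

bits = struct.calcsize("P") * 8   # pointer size in bits: 32 or 64
print("Python %d.%d, %d-bit" % (sys.version_info[0], sys.version_info[1], bits))
```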
|
bug of autocorrelation plot in matplotlib‘s plt.acorr?
Question: I am plotting autocorrelation with python. I used three ways to do it: 1.
pandas, 2. matplotlib, 3. statsmodels. I found the graph I got from matplotlib
is not consistent with the other two. The code is:
from statsmodels.graphics.tsaplots import *
# print out data
print mydata.values
#1. pandas
p=autocorrelation_plot(mydata)
plt.title('mydata')
#2. matplotlib
fig=plt.figure()
plt.acorr(mydata,maxlags=150)
plt.title('mydata')
#3. statsmodels.graphics.tsaplots.plot_acf
plot_acf(mydata)
plt.title('mydata')
The graph is here:
[http://quant365.com/viewtopic.php?f=4&t=33](http://quant365.com/viewtopic.php?f=4&t=33)
Answer: This is a result of different common definitions between statistics and signal
processing. Basically, the signal processing definition assumes that you're
going to handle the detrending. The statistical definition assumes that
subtracting the mean is all the detrending you'll do, and does it for you.
First off, let's demonstrate the problem with a stand-alone example:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from statsmodels.graphics import tsaplots
def label(ax, string):
ax.annotate(string, (1, 1), xytext=(-8, -8), ha='right', va='top',
size=14, xycoords='axes fraction', textcoords='offset points')
np.random.seed(1977)
data = np.random.normal(0, 1, 100).cumsum()
fig, axes = plt.subplots(nrows=4, figsize=(8, 12))
fig.tight_layout()
axes[0].plot(data)
label(axes[0], 'Raw Data')
axes[1].acorr(data, maxlags=data.size-1)
label(axes[1], 'Matplotlib Autocorrelation')
tsaplots.plot_acf(data, axes[2])
label(axes[2], 'Statsmodels Autocorrelation')
pd.tools.plotting.autocorrelation_plot(data, ax=axes[3])
label(axes[3], 'Pandas Autocorrelation')
# Remove some of the titles and labels that were automatically added
for ax in axes.flat:
ax.set(title='', xlabel='')
plt.show()

So, why the heck am I saying that they're all correct? They're clearly
different!
Let's write our own autocorrelation function to demonstrate what `plt.acorr`
is doing:
def acorr(x, ax=None):
if ax is None:
ax = plt.gca()
autocorr = np.correlate(x, x, mode='full')
autocorr /= autocorr.max()
return ax.stem(autocorr)
If we plot this with our data, we'll get a more-or-less identical result to
`plt.acorr` (I'm leaving out properly labeling the lags, simply because I'm
lazy):
fig, ax = plt.subplots()
acorr(data)
plt.show()

This is a perfectly valid autocorrelation. It's all a matter of whether your
background is signal processing or statistics.
This is the definition used in signal processing. The assumption is that
you're going to handle detrending your data (note the `detrend` kwarg in
`plt.acorr`). If you want it detrended, you'll explictly ask for it (and
probably do something better than just subtracting the mean), and otherwise it
shouldn't be assumed.
In statistics, simply subtracting the mean is assumed to be what you wanted to
do for detrending.
All of the other functions are subtracting the mean of the data before the
correlation, similar to this:
def acorr(x, ax=None):
if ax is None:
ax = plt.gca()
x = x - x.mean()
autocorr = np.correlate(x, x, mode='full')
autocorr /= autocorr.max()
return ax.stem(autocorr)
fig, ax = plt.subplots()
acorr(data)
plt.show()

However, we still have one large difference. This one is purely a plotting
convention.
In most signal processing textbooks (that I've seen, anyway), the "full"
autocorrelation is displayed, such that zero lag is in the center, and the
result is symmetric on each side. R, on the other hand, has the very
reasonable convention to display only one side of it. (After all, the other
side is completely redundant.) The statistical plotting functions follow the R
convention, and `plt.acorr` follows what Matlab does, which is the opposite
convention.
Basically, you'd want this:
def acorr(x, ax=None):
if ax is None:
ax = plt.gca()
x = x - x.mean()
autocorr = np.correlate(x, x, mode='full')
autocorr = autocorr[x.size:]
autocorr /= autocorr.max()
return ax.stem(autocorr)
fig, ax = plt.subplots()
acorr(data)
plt.show()

|
Using GHC's profiling stats/charts to identify trouble-areas / improve performance of Haskell code
Question: **TL;DR:** Based on the Haskell code and it's associated profiling data below,
what conclusions can we draw that let us modify/improve it so we can narrow
the performance gap vs. the same algorithm written in imperative languages
(namely C++ / Python / C# but the specific language isn't important)?
## Background
I wrote the following piece of code as an answer to a question on a popular
site which contains many questions of a programming and/or mathematical
nature. (You've probably heard of this site, whose name is pronounced "oiler"
by some, "yoolurr" by others.) Since the code below is a solution to one of
the problems, I'm intentionally avoiding any mention of the site's name or any
specific terms in the problem. That said, I'm talking about problem one
hundred and three.
(In fact, I've seen many solutions in the site's forums from resident Haskell
wizards :P)
## Why did I choose to profile this code?
This was the first problem (on said site) in which I encountered a difference
in performance (as measured by execution time) between Haskell code vs.
C++/Python/C# code (when both use a similar algorithm). In fact, it was the
case for all of the problems (thus far; I've done ~100 problems but not
sequentially) that an optimized Haskell code was pretty much neck-and-neck
with the fastest C++ solutions, ceteris paribus for the algorithm, of course.
However, the posts in the forum for this particular problem would indicate
that the same algorithm in these other languages typically require at most one
or two seconds, with the longest taking 10-15 sec (assuming the same starting
parameters; I'm ignoring the very naive algorithms that take 2-3 min+). In
contrast, the Haskell code below required ~50 sec on my (decent) computer
(with profiling disabled; with profiling enabled, it takes ~2 min, as you can
see below; note: the exec time was identical when compiling with `-fllvm`).
Specs: i5 2.4ghz laptop, 8gb RAM.
In an effort to learn Haskell in a way that it can become a viable substitute
to the imperative languages, one of my aims in solving these problems is
learning to write code that, to the extent possible, has performance that's on
par with those imperative languages. In that context, I still consider the
problem as yet unsolved by me (since there's nearly a ~25x difference in
performance!)
## What have I done so far?
In addition to the obvious step of streamlining the code itself (to the best
of my ability), I've also performed the standard profiling exercises that are
recommended in "Real World Haskell".
But I'm having a hard time drawing conclusions that that tell me which pieces
need to be modified. That's where I'm hoping folks might be able to help
provide some guidance.
## Description of the problem:
I'd refer you to the website of problem one hundred and three on the
aforementioned site but here's a brief summary: the goal is to find a group of
seven numbers such that any two disjoint subgroups (of that group) satisfy the
following two properties (I'm trying to avoid using the 's-e-t' word for
reasons mentioned above...):
* no two subgroups sum to the same amount
* the subgroup with more elements has a larger sum (in other words, the sum of the smallest four elements must be greater than the sum of the largest three elements).
In particular, we are trying to find the group of seven numbers with the
smallest sum.
## My (admittedly weak) observations
A warning: some of these comments may well be totally wrong but I wanted to
at least take a stab at interpreting the profiling data based on what I read in
Real World Haskell and other profiling-related posts on SO.
* There does indeed seem to be an efficiency issue seeing as how one-third of the time is spent doing garbage collection (37.1%). The first table of figures shows that ~172gb is allocated in the heap, which seems horrible... (Maybe there's a better structure / function to use for implementing the dynamic programming solution?)
* Not surprisingly, the vast majority (83.1%) of time is spent checking rule 1: (i) 41.6% in the `value` sub-function, which determines values to fill in the dynamic programming ("DP") table, (ii) 29.1% in the `table` function, which generates the DP table and (iii) 12.4% in the `rule1` function, which checks the resulting DP table to make sure that a given sum can only be calculated in one way (i.e., from one subgroup).
* However, I did find it surprising that more time was spent in the `value` function relative to the `table` and `rule1` functions given that it's the only one of the three which doesn't construct an array or filter through a large number of elements (it's really only performing O(1) lookups and making comparisons between `Int` types, which you'd think would be relatively quick). So this is a potential problem area. That said, it's unlikely that the `value` function is driving the high heap-allocation.
Frankly, I'm not sure what to make of the three charts.
Heap profile chart (i.e., the first chart below):
* I'm honestly not sure what is represented by the red area marked as `Pinned`. It makes sense that the `dynamic` function has a "spiky" memory allocation because it's called every time the `construct` function generates a tuple that meets the first three criteria and, each time it's called, it creates a decently large DP array. Also, I'd think that the allocation of memory to store the tuples (generated by construct) wouldn't be flat over the course of the program.
* Pending clarification of the "Pinned" red area, I'm not sure this one tells us anything useful.
Allocation by type and allocation by constructor:
* I suspect that the `ARR_WORDS` (which represents a ByteString or unboxed Array according to the GHC docs) represents the low-level execution of the construction of the DP array (in the `table` function). But I'm not 100% sure.
* I'm not sure what the `FROZEN` and `STATIC` pointer categories correspond to.
* Like I said, I'm really not sure how to interpret the charts as nothing jumps out (to me) as unexpected.
## The code and the profiling results
Without further ado, here's **the code** with comments explaining my
algorithm. I've tried to make sure the code doesn't run off of the right-side
of the code-box - but some of the comments do require scrolling (sorry).
{-# LANGUAGE NoImplicitPrelude #-}
{-# OPTIONS_GHC -Wall #-}
import CorePrelude
import Data.Array
import Data.List
import Data.Bool.HT ((?:))
import Control.Monad (guard)
main = print (minimum construct)
cap = 55 :: Int
flr = 20 :: Int
step = 1 :: Int
--we enumerate tuples that are potentially valid and then
--filter for valid ones; we perform the most computationally
--expensive step (i.e., rule 1) at the very end
construct :: [[Int]]
construct = {-# SCC "construct" #-} do
a <- [flr..cap] --1st: we construct potentially valid tuples while applying a
b <- [a+step..cap] --constraint on the upper bound of any element as implied by rule 2
c <- [b+step..a+b-1]
d <- [c+step..a+b-1]
e <- [d+step..a+b-1]
f <- [e+step..a+b-1]
g <- [f+step..a+b-1]
guard (a + b + c + d - e - f - g > 0) --2nd: we screen for tuples that completely conform to rule 2
let nn = [g,f,e,d,c,b,a]
guard (sum nn < 285) --3rd: we screen for tuples of a certain size (a guess to speed things up)
guard (rule1 nn) --4th: we screen for tuples that conform to rule 1
return nn
rule1 :: [Int] -> Bool
rule1 nn = {-# SCC "rule1" #-}
null . filter ((>1) . snd) --confirm that there's only one subgroup that sums to any given sum
. filter ((length nn==) . snd . fst) --the last column is how many subgroups sum to a given sum
. assocs --run the dynamic programming algorithm and generate a table
$ dynamic nn
dynamic :: [Int] -> Array (Int,Int) Int
dynamic ns = {-# SCC "dynamic" #-} table
where
(len, maxSum) = (length &&& sum) ns
table = array ((0,0),(maxSum,len))
[ ((s,i),x) | s <- [0..maxSum], i <- [0..len], let x = value (s,i) ]
elements = listArray (0,len) (0:ns)
value (s,i)
| i == 0 || s == 0 = 0
| s == m = table ! (s,i-1) + 1
| s > m = s <= sum (take i ns) ?:
(table ! (s,i-1) + table ! ((s-m),i-1), 0)
| otherwise = 0
where
m = elements ! i
**Stats on heap allocation, garbage collection and time elapsed:**
% ghc -O2 --make 103_specialsubset2.hs -rtsopts -prof -auto-all -caf-all -fforce-recomp
[1 of 1] Compiling Main ( 103_specialsubset2.hs, 103_specialsubset2.o )
Linking 103_specialsubset2 ...
% time ./103_specialsubset2.hs +RTS -p -sstderr
zsh: permission denied: ./103_specialsubset2.hs
./103_specialsubset2.hs +RTS -p -sstderr 0.00s user 0.00s system 86% cpu 0.002 total
% time ./103_specialsubset2 +RTS -p -sstderr
SOLUTION REDACTED
172,449,596,840 bytes allocated in the heap
21,738,677,624 bytes copied during GC
261,128 bytes maximum residency (74 sample(s))
55,464 bytes maximum slop
2 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 327548 colls, 0 par 27.34s 41.64s 0.0001s 0.0092s
Gen 1 74 colls, 0 par 0.02s 0.02s 0.0003s 0.0013s
INIT time 0.00s ( 0.01s elapsed)
MUT time 53.91s ( 70.60s elapsed)
GC time 27.35s ( 41.66s elapsed)
RP time 0.00s ( 0.00s elapsed)
PROF time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 81.26s (112.27s elapsed)
%GC time 33.7% (37.1% elapsed)
Alloc rate 3,199,123,974 bytes per MUT second
Productivity 66.3% of total user, 48.0% of total elapsed
./103_specialsubset2 +RTS -p -sstderr 81.26s user 30.90s system 99% cpu 1:52.29 total
**Stats on time spent per cost-centre:**
Wed Dec 17 23:21 2014 Time and Allocation Profiling Report (Final)
103_specialsubset2 +RTS -p -sstderr -RTS
total time = 15.56 secs (15565 ticks @ 1000 us, 1 processor)
total alloc = 118,221,354,488 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc
dynamic.value Main 41.6 17.7
dynamic.table Main 29.1 37.8
construct Main 12.9 37.4
rule1 Main 12.4 7.0
dynamic.table.x Main 1.9 0.0
individual inherited
COST CENTRE MODULE no. entries %time %alloc %time %alloc
MAIN MAIN 55 0 0.0 0.0 100.0 100.0
main Main 111 0 0.0 0.0 0.0 0.0
CAF:main1 Main 108 0 0.0 0.0 0.0 0.0
main Main 110 1 0.0 0.0 0.0 0.0
CAF:main2 Main 107 0 0.0 0.0 0.0 0.0
main Main 112 0 0.0 0.0 0.0 0.0
CAF:main3 Main 106 0 0.0 0.0 0.0 0.0
main Main 113 0 0.0 0.0 0.0 0.0
CAF:construct Main 105 0 0.0 0.0 100.0 100.0
construct Main 114 1 0.6 0.0 100.0 100.0
construct Main 115 1 12.9 37.4 99.4 100.0
rule1 Main 123 282235 0.6 0.0 86.5 62.6
rule1 Main 124 282235 12.4 7.0 85.9 62.6
dynamic Main 125 282235 0.2 0.0 73.5 55.6
dynamic.elements Main 133 282235 0.3 0.1 0.3 0.1
dynamic.len Main 129 282235 0.0 0.0 0.0 0.0
dynamic.table Main 128 282235 29.1 37.8 72.9 55.5
dynamic.table.x Main 130 133204473 1.9 0.0 43.8 17.7
dynamic.value Main 131 133204473 41.6 17.7 41.9 17.7
dynamic.value.m Main 132 132640003 0.3 0.0 0.3 0.0
dynamic.maxSum Main 127 282235 0.0 0.0 0.0 0.0
dynamic.(...) Main 126 282235 0.1 0.0 0.1 0.0
dynamic Main 122 282235 0.0 0.0 0.0 0.0
construct.nn Main 121 12683926 0.0 0.0 0.0 0.0
CAF:main4 Main 102 0 0.0 0.0 0.0 0.0
construct Main 116 0 0.0 0.0 0.0 0.0
construct Main 117 0 0.0 0.0 0.0 0.0
CAF:cap Main 101 0 0.0 0.0 0.0 0.0
cap Main 119 1 0.0 0.0 0.0 0.0
CAF:flr Main 100 0 0.0 0.0 0.0 0.0
flr Main 118 1 0.0 0.0 0.0 0.0
CAF:step_r1dD Main 99 0 0.0 0.0 0.0 0.0
step Main 120 1 0.0 0.0 0.0 0.0
CAF GHC.IO.Handle.FD 96 0 0.0 0.0 0.0 0.0
CAF GHC.Conc.Signal 93 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Encoding 91 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Encoding.Iconv 82 0 0.0 0.0 0.0 0.0
**Heap profile:**

**Allocation by type:**

**Allocation by constructors:**

Answer: There is a lot that can be said. In this answer I'll just comment on the
nested list comprehensions in the `construct` function.
To get an idea on what's going on in `construct` we'll isolate it and compare
it to a nested loop version that you would write in an imperative language.
We've removed the `rule1` guard to test only the generation of lists.
-- List.hs -- using list comprehensions
import Control.Monad
cap = 55 :: Int
flr = 20 :: Int
step = 1 :: Int
construct :: [[Int]]
construct = do
a <- [flr..cap]
b <- [a+step..cap]
c <- [b+step..a+b-1]
d <- [c+step..a+b-1]
e <- [d+step..a+b-1]
f <- [e+step..a+b-1]
g <- [f+step..a+b-1]
guard (a + b + c + d - e - f - g > 0)
guard (a + b + c + d + e + f + g < 285)
return [g,f,e,d,c,b,a]
-- guard (rule1 nn)
main = do
forM_ construct print
-- Loops.hs -- using imperative looping
import Control.Monad
loop a b f = go a
where go i | i > b = return ()
| otherwise = do f i; go (i+1)
cap = 55 :: Int
flr = 20 :: Int
step = 1 :: Int
main =
loop flr cap $ \a ->
loop (a+step) cap $ \b ->
loop (b+step) (a+b-1) $ \c ->
loop (c+step) (a+b-1) $ \d ->
loop (d+step) (a+b-1) $ \e ->
loop (e+step) (a+b-1) $ \f ->
loop (f+step) (a+b-1) $ \g ->
if (a+b+c+d-e-f-g > 0) && (a+b+c+d+e+f+g < 285)
then print [g,f,e,d,c,b,a]
else return ()
Both programs were compiled with `ghc -O2 -rtsopts` and run with `prog +RTS -s
> out`.
Here is a summary of the results:
Lists.hs Loops.hs
Heap allocation 44,913 MB 2,740 MB
Max. Residency 44,312 44,312
%GC 5.8 % 1.7 %
Total Time 9.48 secs 1.43 secs
As you can see, the loop version, which is the way you would write this in a
language like C, wins in every category.
The list comprehension version is cleaner and more composable but also less
performant than direct iteration.
|
Combining numpy arrays to create a value field
Question: i have coded so far:
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from scipy.special import *
import matplotlib.pyplot as plt
import numpy as np
from math import *
import csv
## Globale Variablen ##
# kommen später ins main file, MÜSSEN vor dem import von diesem Modul definiert werden!)
rhof = 1000 # Dichte Flüssigkeit [kg/m³]
lameu = 11.2*10**9 # Lamé-Parameter, undrained [GPa]
lame = 8.4*10**9 # Lamé-Parameter, drained [GPa]
pi # durch Pythonmodul "math" gegeben
alpha = 0.65 # Biot-Willis-Koeffizient
G = 8.4*10**9 # Schermodul [GPa]
k = 1.0e-15 # Permeabilität [m²] bzw. [Darcy]
eta = 0.001 # Viskosität des Fluids [Pa*s]
## Berechnung der Parameter ##
# Berechnung der Diffusivität
kappa = k/eta
# Berechnung der Diffusivität
c = ((kappa*(lameu-lame)*(lame+2*G))/((alpha**2)*(lameu+2*G)))
## Wertebereich ##
xmin = 0
xmax = 50
xsteps = 1
x = np.arange(xmin,xmax,xsteps)
ymin = 0
ymax = 50
ysteps = 1
y = np.arange(ymin,ymax,ysteps)
#X, Y = np.meshgrid(x,y)
## Klassendefinition ##
# hier wird eine Bohrloch-Klasse definiert
class Bohrloch(object):
# __init__ erzeugt das eigentliche Objekt mit seinen Parametern
def __init__(self, xlage, ylage, tstart, q):
self.xlage = xlage # x-Lage der Bohrung
self.ylage = ylage # y-Lage der Bohrung
self.tstart = tstart # Start der Injektion/Produktion
self.q = q # Fluidmenge
#==============================================================================
# Drücke
#==============================================================================
# getPressure erzeugt einen Array mit allen Werten für x,y im Zeitraum t
def getPressure(self, t):
if (t-self.tstart<0): # Förderzeit muss in Förderzeitraum liegen
print "Startpunkt liegt außerhalb des Förderzeitraumes!"
return ()
else:
# erzeugen des Abstandsvektors
self.r = np.sqrt((x-self.xlage)**2+(y-self.ylage)**2)
# Druckformel nach Rudnicki (1986)
self.P = (self.q/(rhof*4*pi*kappa))*(expn(1,self.r**2/(4*c*(t-self.tstart))))
# Druck wird direkt bei der Bohrung als "not a number" definiert
self.P[self.xlage] = np.nan
self.P[self.ylage] = np.nan
# Umrechnung des Druckes in [MPa]
self.z = self.P/1e6
# return gibt die Druckwerte aus (und speichert sie zwischen)
return self.z
The issue I have is that I get back a 1-dimensional array for `self.z`. The
idea behind the code is the following:
I have an x- and a y-axis, e.g. with the ranges above. I try to get pressure
values (depending on x,y) for each possible combination of these 1d-arrays,
e.g. values for x,y = 1,1 ; 1,2 ; 1,3 and so on until 50 on each axis. If I
test the dimension of my array self.z with `ndim`, I see that I have only
covered one dimension. The desired output should be a two-dimensional array
with 50x50 entries (or 49x49), or am I wrong? I'm really stuck on this problem;
I think it should be simple to solve but I can't get it right. Later on I
want to plot a 3D plot with these values; I know I have to use `np.meshgrid`
for this. Do I have to use it here as well? I hope you guys get my idea and
can help!
Answer: According to your explanation, the problem is in your definition of `self.r`.
You want a list of the distances r between (x,y) and (xlage,ylage) for all
combinations of x and y, i.e. a matrix. However, you have a vector which
contains r belonging to (x1,y1), (x2,y2),(x3,y3) ... , i.e. you are missing
the combinations (x1,y2), ... etc.
The solution is
# put this into section 'Wertebereich'
xx,yy = np.meshgrid(x,y)
# and change the function to
def getPressure(self, t):
if (t-self.tstart<0): # Förderzeit muss in Förderzeitraum liegen
print "Startpunkt liegt außerhalb des Förderzeitraumes!"
return ()
else:
# erzeugen des Abstandsvektors
self.r = np.sqrt((xx-self.xlage)**2+(yy-self.ylage)**2)
# Druckformel nach Rudnicki (1986)
self.P = (self.q/(rhof*4*pi*kappa))*(expn(1,self.r**2/(4*c*(t-self.tstart))))
# Druck wird direkt bei der Bohrung als "not a number" definiert
self.P = np.where((xx==self.xlage) & (yy==self.ylage), np.nan, self.P)
# Umrechnung des Druckes in [MPa]
self.z = self.P/1e6
# return gibt die Druckwerte aus (und speichert sie zwischen)
return self.z
I can't check the code right now but it should do the job.
You can then plot it by using xx,yy and the output z, as described
[here](http://matplotlib.org/examples/mplot3d/surface3d_demo2.html)
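To make the difference concrete, here is a minimal, self-contained sketch (the 5x5 grid and the borehole position (2, 2) are made-up stand-ins for the real values):

```python
import numpy as np

x = np.arange(0, 5)
y = np.arange(0, 5)

# Without meshgrid: element-wise pairing gives only 5 distances,
# one per pair (x1,y1), (x2,y2), ... -- this is the 1d result you saw
r_1d = np.sqrt((x - 2)**2 + (y - 2)**2)
print(r_1d.shape)  # (5,)

# With meshgrid: xx and yy are both 5x5, so r covers every (x, y) combination
xx, yy = np.meshgrid(x, y)
r = np.sqrt((xx - 2)**2 + (yy - 2)**2)
print(r.shape)  # (5, 5)
```

The distance at the borehole position itself is 0, which is exactly the point that then gets masked with `np.nan`.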
|
How to exchange time between C and Python
Question: I'm writing some C code that needs to embed the current time in its (binary)
output file. Later, this file will be read by some other C code (possibly
compiled for different architecture) and/or some python code. In both cases
calculations may be required on the time.
What I'd like to know is:
1. How do I get the current UTC time in C? Is time() the right call?
2. What format should I write this to file in? ASN1? ISO?
3. How do I convert to that format?
4. How do I read that format in C and Python and convert it into something useful?
Answer: You could use [rfc 3339 datetime
format](https://tools.ietf.org/html/rfc3339#section-5.6) (a profile of
ISO8601). It avoids many pitfalls of unconstrained ISO8601 timestamps.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main(void) {
char buf[21];
time_t ts = time(NULL);
struct tm *tp = gmtime(&ts);
if (tp == NULL || tp->tm_year > 8099 || tp->tm_year < 0) {
perror("gmtime");
exit(EXIT_FAILURE);
}
if (strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", tp) == 0) {
fprintf(stderr, "strftime returned 0\n");
exit(EXIT_FAILURE);
}
exit(puts(buf) != EOF ? EXIT_SUCCESS : EXIT_FAILURE);
}
### Output
2014-12-20T11:08:44Z
To read it in Python:
>>> from datetime import datetime, timezone
>>> dt = datetime.strptime('2014-12-20T11:08:44Z', '%Y-%m-%dT%H:%M:%SZ')
>>> dt = dt.replace(tzinfo=timezone.utc)
>>> print(dt)
2014-12-20 11:08:44+00:00
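If the Python side also needs to do calculations on the time, one option (a sketch, not part of the original answer) is to convert the parsed datetime to a Unix epoch value using only the standard library:

```python
from datetime import datetime, timezone
import calendar

s = '2014-12-20T11:08:44Z'
dt = datetime.strptime(s, '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)

# Seconds since 1970-01-01T00:00:00Z -- convenient for duration arithmetic
epoch = calendar.timegm(dt.timetuple())

# Round-trip back to an aware datetime to check nothing was lost
assert datetime.fromtimestamp(epoch, timezone.utc) == dt
print(epoch)
```

`calendar.timegm` is the UTC counterpart of `time.mktime`, so the conversion does not depend on the local timezone of the machine doing the parsing.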
|
Python C-Api 32bit PyDict_Check Access Violation, Win7, Python 2.66
Question: I am trying to use the Python C-Api for 32bit Project in VS2010.
I have installed the 64bit and 32bit Python. To distinguish the two Versions,
I have renamed the 32bit dll to 'python26_32.dll'. I have created a corresponding
import .lib file 'python26_32.lib' with the VS dumpbin and lib tools (see
<https://adrianhenke.wordpress.com/2008/12/05/create-lib-file-from-dll/>).
I have adjusted 'pyconfig.h' and uncommented the `#pragma comment(lib...)`
Statements.
The project compiles fine for 32bit. Running the exe gives me an Access
Violation when calling `PyDict_Check`. Other method calls before worked fine
(e.g. `Py_Initialize`, `PyRun_SimpleString`, `PyDict_New`...)
Here is my small example:
#include "stdafx.h"
#include "python.h" //include in stdafx.h has the same result
#include <iostream>
using std::cout; using std::endl;
int _tmain(int argc, _TCHAR* argv[])
{
cout << "Tryin to initialize Python." << endl;
Py_SetPythonHome("D:\\Python26_32");
Py_Initialize();
char *codeString =
"import platform\n"
"import sys\n"
"print platform.architecture()[0]\n"
"print sys.path\n"
"sys.path.insert(0,'D:\\Python26_32')\n"
"print sys.path\n";
PyRun_SimpleString(codeString);
PyObject *pDict;
pDict = PyDict_New();
PyDict_SetItemString(pDict, "Key1", PyString_FromString("Value1"));
int i = PyDict_Check(pDict);
cout << "working" << endl;
return 0;
}
I have noticed that 'PyDict_Check' is not in the dll's exports. It is defined
in the python header files.
I have tried to adjust the 'Path' (in Windows and in via the api (see
example)) but that did not help.
The 64bit version works fine (after changing the relevant VC++ directories and
the `Py_SetPythonHome` Statement.
Most confusing to me is that parts of the C-API work and others do not.
Just tried `PyDict_CheckExact`, this works.
Many thanks for your help.
Answer: After trying other versions of Python (2.7.9) with the same result, I recognized
that building the Release version always worked. After disabling
`Py_DEBUG` in `pyconfig.h` the Debug build worked for my needs too.
Although I can not explain the behavior, this solved the problem for me.
Therefore I would mark the problem as solved.
|
IJ.close() - Scripting python in ImageJ/FIJI
Question: I'm exceptionally new to python/scripting and I'm having a problem. I'm
writing the following in Fiji (shortened version of the script is below...)
from ij import IJ, ImagePlus
from java.lang import Runtime, Runnable
import os
filepaths = []
for folder, subs, files in os.walk('location/of/files/'):
for filename in files:
#the next part stops it appending DS files
if not filename.startswith('.'):
filepaths.append(os.path.abspath(os.path.join(folder, filename,)))
for i in filepaths:
IJ.open(i);
IJ.close();
Basically I want to open an image, do stuff, and then close the processed
image using `IJ.close()`. However it gives the following error:
`AttributeError: type object 'ij.IJ' has no attribute 'close'`
Any idea how to get around this?
Thanks!
Answer: The `IJ` class does not have a `close()` method. You probably want to call the
`close()` method of `ImagePlus`, which is the class for the image objects
themselves.
Try something like:
IJ.open(i)
imp = IJ.getImage()
imp.getProcessor().setf(100, 100, 3.14159) # or whatever
IJ.save(imp, "/path/to/myShinyModifiedImage.tif")
imp.close()
If you need to operate over multiple slices of a multi-plane image, see also
the "Loop over slices" template (Templates > Python menu of the [Script
Editor](http://imagej.net/Script_Editor)).
Note also that Jython does not need trailing semicolons on statements.
|
Multiprocessing HTTP get requests in Python
Question: I have to make numerous (thousands) of HTTP GET requests to a great deal of
websites. This is pretty slow, because some websites may not respond
(or take long to do so), while others time out. As I need as many responses as
I can get, setting a small timeout (3-5 seconds) is not in my favour.
I have yet to do any kind of multiprocessing or multi-threading in Python, and
I've been reading the documentation for a good while. Here's what I have so
far:
import requests
from bs4 import BeautifulSoup
from multiprocessing import Process, Pool
errors = 0
def get_site_content(site):
try :
# start = time.time()
response = requests.get(site, allow_redirects = True, timeout=5)
response.raise_for_status()
content = response.text
except Exception as e:
global errors
errors += 1
return ''
soup = BeautifulSoup(content)
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
return text
sites = ["http://www.example.net", ...]
pool = Pool(processes=5)
results = pool.map(get_site_content, sites)
print results
Now, I want the results that are returned to be joined somehow. This allows
for two variations:
1. Each process has a local list/queue that contains the content it has accumulated; the queues are then joined to form a single result containing all the content for all sites.
2. Each process writes to a single global queue as it goes along. This would entail some locking mechanism for concurrency checks.
Would multiprocessing or multithreading be the better choice here? How would I
accomplish the above with either of the approaches in Python?
* * *
**Edit** :
I did attempt something like the following:
# global
queue = []
with Pool(processes = 5) as pool:
queue.append(pool.map(get_site_contents, sites))
print queue
However, this gives me the following error:
with Pool(processes = 4) as pool:
AttributeError: __exit__
Which I don't quite understand. I'm having a little trouble understanding
_what_ exactly pool.map does, beyond applying the function to every object in
the iterable second parameter. Does it return anything? If not, do I append to
the global queue from within the function?
Answer: `pool.map` starts 'n' processes that each take the function and run it
on an item from the iterable. When such a process finishes and returns, the
returned value is stored in the result list at the same position as the input
item in the input iterable.
e.g.: if a function is written to calculate the square of a number, and then
`pool.map` is used to run this function on a list of numbers:

def square_this(x):
    square = x**2
    return square
input_iterable = [2, 3, 4]
pool = Pool(processes=2) # Initalize a pool of 2 processes
result = pool.map(square_this, input_iterable) # Use the pool to run the function on the items in the iterable
pool.close() # this means that no more tasks will be added to the pool
pool.join() # this blocks the program till function is run on all the items
# print the result
print result
# prints [4, 9, 16]
The `Pool.map` technique may not be ideal in your case since it will block
till all the processes finish. I.e., if a website does not respond or takes
too long to respond your program will be stuck waiting for it. Instead try
sub-classing the `multiprocessing.Process` in your own class which polls these
websites and use Queues to access the results. When you have a satisfactory
number of responses you can stop all the processes without having to wait for
the remaining requests to finish.
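As a small, self-contained illustration of `pool.map`'s ordering guarantee for I/O-bound work, here is a sketch using a thread pool (the URLs are made up, and `fetch` is a stand-in for the real `requests.get` call):

```python
from multiprocessing.dummy import Pool  # thread-based pool: same API as Pool, suits I/O-bound work

def fetch(url):
    # stand-in for requests.get(url).text -- here we just return the URL length
    return len(url)

urls = ['http://a.example', 'http://bb.example', 'http://ccc.example']
pool = Pool(3)
results = pool.map(fetch, urls)  # results line up with the input order
pool.close()
pool.join()
print(results)  # [16, 17, 18]
```

Because HTTP requests spend most of their time waiting on the network, threads (via `multiprocessing.dummy`) avoid the process-spawning overhead while keeping the exact same `Pool` interface.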
|
Cannot import nltk under OSX
Question: I used `pip install -U nltk` to install it. After `pip list` I can see nltk
(3.0.0). Also, `sys.path` includes the location of nltk.
But when I run `import nltk`, it shows `ImportError: No module named nltk`
I use OSX Yosemite, python2.7
I cannot figure out why it happen. Thank you for any suggestion.
Answer: You might want to try the free Anaconda distribution of Python instead. NLTK
comes pre-installed in Anaconda, along with many other popular packages for
data analysis, etc. It's a lot easier than having to install & manage hundreds
of separate packages individually.
<https://store.continuum.io/cshop/anaconda/>
|
Python Pandas read_csv issue
Question: I have simple CSV file that looks like this:
inches,12,3,56,80,45
tempF,60,45,32,80,52
I read in the CSV using this command:
import pandas as pd
pd_obj = pd.read_csv('test_csv.csv', header=None, index_col=0)
Which results in this structure:
1 2 3 4 5
0
inches 12 3 56 80 45
tempF 60 45 32 80 52
But I want this (unnamed index column):
0 1 2 3 4
inches 12 3 56 80 45
tempF 60 45 32 80 52
**EDIT:** As @joris pointed out additional methods can be run on the resulting
DataFrame to achieve the wanted structure. My question is specifically about
whether or not this structure could be achieved through `read_csv` arguments.
Answer: from the documentation of the function:
names : array-like
List of column names to use. If file contains no header row, then you
should explicitly pass header=None
so, apparently:
pd_obj = pd.read_csv('test_csv.csv', header=None, index_col=0, names=range(5))
|
Is it a good idea to make use different Python scripts for programs?
Question: I am wondering if it would be a good idea to use different `.py` scripts for
different parts of a Python program, like one `.py` file for a calculator and
another for class files, etc. If it is a good idea, is it possible?
If possible, where can I find how to do so?
I am asking this because I find it confusing to have so much code in a single
file, and to have to find things any time a fix is needed.
Answer: If your program starts to grow large, then yes, you could split your
classes, or simply your functions, according to what they do. Usually functions
that do similar tasks or that work on the same data are put together.
To import a file containing a set of functions that you defined, if the file
is in the same folder as your main script, you can simply use an import
statement. If, for example, the name of the script containing the functions
you want to import is called `functions.py`, you can simply do `import
functions` or `from functions import *`, or, better, `from functions import
function_1`.
Let's talk about the 3 ways of importing files that I have just mentioned:
_`import functions`_
Here, in order to use one of your functions, for example `foo`, you have to
put the name of the module in front of the name of the function followed by a
`.`:
`functions.foo('this is a string')`
_`from functions import *`_
In this case, you can directly call foo just typing `foo('this is a new method
of importing files')`. `*` means that you have just imported everything from
the module `functions`.
_`from functions import function_1`_
In this last case, you have imported a specific function `function_1` from the
module `functions`, and you can use just the `function_1` from the same
module:
`function_1('I cannot use the function "foo" here, because I did not import
it')`
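Here is a self-contained sketch that demonstrates two of these styles; the module `functions` and the function `foo` are created on the fly in a temporary directory purely for illustration:

```python
import os
import sys
import tempfile

# Write a throwaway functions.py so the import statements below have something to find
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'functions.py'), 'w') as f:
    f.write("def foo(msg):\n    return 'foo: ' + msg\n")
sys.path.insert(0, tmp)

import functions                 # style 1: access via the qualified module name
print(functions.foo('qualified'))

from functions import foo        # style 3: import a single name directly
print(foo('direct'))
```

In a real project you would of course keep `functions.py` next to your main script instead of generating it at runtime.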
|
creating dynamic nested dictionary
Question: I have in file **abc.txt**
abc/pqr/lmn/xyz:pass
abc/pqr/lmn/bcd:pass
I need to parse these statements, and the output should be a nested dictionary as
below
{'abc': {'pqr': {'lmn': {'xyz': {'pass': 1}, 'bcd': {'pass': 1}}}}}
where 1 is the pass count. I have only been able to do as much as this
import re
d={}
p=re.compile('[a-zA-z]+')
for line in open('abc.txt'):
for key in p.findall(line):
d['key']={}
I am new to Python, so can anyone help me through this?
Answer: Here's an updated version of my answer in which leaves of the tree data-
structure are now different from those in rest of it. Instead of the tree
being strictly a`defaultdict`-of-`defaultdict`s, the "leaves" on each branch
are now instances of a different subclass of`dict`named
[`collections.Counter`](https://docs.python.org/2/library/collections.html?highlight=counter#collections.Counter)which
are useful for counting the number of times their different keys occur. I did
this because of your response to my question about what should happen if the
last part of each line was something other than`":pass"` (which was "we have
to put new count for that key").
Nested dictionaries are often called tree data-structures and can be defined
recursively -- the root is a dictionary along with each of its branches. The
following
uses [`collections.defaultdict`](https://docs.python.org/2/library/collections.html?highlight=defaultdict#collections.defaultdict) instead
of a plain `dict` because it makes constructing them easier since you don't
have to special-case the creation of the first branch of the next level down
(except I still do, but only when adding "leaves").
from collections import defaultdict, Counter
import re
# optional trick to make each dict subclass print just like a regular one
class defaultdict(defaultdict):
def __repr__(self):
return dict(self).__repr__()
class Counter(Counter):
def __repr__(self):
return dict(self).__repr__()
# borrowed from my answer @ http://stackoverflow.com/a/23711788/355230
Tree = lambda: defaultdict(Tree) # recursive definition - a dict of dicts
# some functions based on answer @ http://stackoverflow.com/a/14692747/355230
def nested_dict_get(nested_dict, keys):
return reduce(lambda d, k: d[k], keys, nested_dict)
def nested_dict_set(nested_dict, keys, value):
nested_dict_get(nested_dict, keys[:-1])[keys[-1]] = value
def nested_dict_update_count(nested_dict, keys):
if nested_dict_get(nested_dict, keys[:-1]): # exists, so update
nested_dict_get(nested_dict, keys[:-1]).update([keys[-1]])
else: # doesn't exist, so create
nested_dict_set(nested_dict, keys[:-1], Counter([keys[-1]]))
d = Tree()
pat = re.compile(r'[a-zA-z]+')
with open('abc.txt') as file:
for line in file:
nested_dict_update_count(d, [w for w in pat.findall(line.rstrip())])
print d # prints like a regular dict, not defaultdict
To test the leaf-counter capabilities of the revised code I used the following
test file, which repeats the `xyz` line: twice ending with `:pass` and once
ending with `:fail`.
Expanded **abc.txt** file:
abc/pqr/lmn/xyz:pass
abc/pqr/lmn/bcd:pass
abc/pqr/lmn/xyz:fail
abc/pqr/lmn/xyz:pass
Output:
{'abc': {'pqr': {'lmn': {'bcd': {'pass': 1}, 'xyz': {'fail': 1, 'pass': 2}}}}}
Please let me know if this is a correct interpretation of your comment about
counting the last word on each line.
|
read a .txt and output a .xml in python3.2
Question: I wanted to know how to read a .txt file and the output should be in .xml
format.
I have the input file as
Paper 1 / White Spaces are included
Single Correct Answer Type
1. Text of question 1
a) Option 1.a b) Option 1.b
c) Option 1.c d) Option 1.d
2. Text of question 2
a) This is an example of Option 2.a
b) Option 2.b has a special char α
c) Option 2.c
d) Option 2.d
3. Text of question 3
a) Option 3.a can span multiple
lines.
b) Option 3b
c) Option 3c
d) Option 3d
My code:
from lxml import etree
import csv
root = etree.Element('data')
#f = open('input1.txt','rb')
rdr = csv.reader(open("input1.txt",newline='\n'))
header = next(rdr)
for row in rdr:
eg = etree.SubElement(root, 'eg')
for h, v in zip(header, row):
etree.SubElement(eg, h).text = v
f = open(r"C:\temp\input1.xml", "w")
f.write(etree.tostring(root))
f.close()
I'm getting an error like:
Traceback (most recent call last):
File "E:\python3.2\input1.py", line 11, in <module>
etree.SubElement(eg, h).text = v
File "lxml.etree.pyx", line 2995, in lxml.etree.SubElement (src\lxml\lxml.etree.c:69677)
File "apihelpers.pxi", line 188, in lxml.etree._makeSubElement (src\lxml\lxml.etree.c:15691)
File "apihelpers.pxi", line 1571, in lxml.etree._tagValidOrRaise (src\lxml\lxml.etree.c:29249)
ValueError: Invalid tag name ' Paper 1'
And I want it to consider the white spaces also. I'm using Python 3.2. Any
suggestions?
Answer: You can read this info from the txt file, organize it into object classes and
then serialize them.
How to De/serialize: <http://code.activestate.com/recipes/577266-xml-to-
python-data-structure-de-serialization/>
Example:
f = open('file.txt')
lines = f.readlines()
f.close()
# do something to organize these lines into objects
xmlStrings = [serialize(pythonObj) for pythonObj in txtInfoObjs]
g = open('file.xml', 'w') # open for writing, not reading
g.write(xmlStrings[0])
g.close()
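As an alternative sketch using only the Python 3.2 standard library (the question list below is a hard-coded stand-in for whatever you parse out of the .txt file):

```python
import xml.etree.ElementTree as ET

# Stand-in for the parsed questions; a real version would fill this
# from the lines read out of the .txt file
questions = [('1', 'Text of question 1'), ('2', 'Text of question 2')]

root = ET.Element('paper')
for num, text in questions:
    q = ET.SubElement(root, 'question', number=num)  # keyword args become attributes
    q.text = text

xml_str = ET.tostring(root).decode()  # tostring returns bytes in Python 3
print(xml_str)
```

This sidesteps `lxml`'s tag-name validation error entirely, since the tag names here are fixed rather than taken from the (whitespace-containing) file content.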
|
python ngram reducer not returning any output?
Question: I am trying to build a mapreduce job for ngrams of google books. My mapper
works fine when tested locally but the reducer does not return any value. The
reducer is something like below:
#! /usr/bin/env python
import sys
current_word = ''
word_in_progress = ''
target_year_count = 0
prior_year_count = 0
target_year = 1999
for line in sys.stdin:
line = line.strip().split('\t')
if len(line) !=3:
continue
current_word, year, occurances = line
if current_word != word_in_progress:
if target_year_count > 0:
if prior_year_count ==0:
print '%s\t%s' % (word_in_progress, target_year_count)
try:
year = int(year)
except ValueError:
continue
try:
occurances = int(occurances)
except ValueError:
continue
if year == target_year:
target_year_count += occurances
if year < target_year:
prior_year_count += occurances
print '%s\t%s' % (word_in_progress, target_year_count)
if target_year_count > 0:
if prior_year_count ==0:
print '%s\t%s' % (word_in_progress, target_year_count)
And when I type the below command in Ubuntu command line:
hduser@bharti-desktop:~/hadoop$ cat /home/hduser/Documents/test1.1.txt | /home/hduser /hadoop/mapper-ngram.py | sort -k1,1 | /home/hduser/hadoop/reducerngram1.py| sort -k2,2n
I get nothing. Could somebody tell me what I am doing wrong?
Answer: First thing I'd check:
cat /home/hduser/Documents/test1.1.txt | /home/hduser /hadoop/mapper-ngram.py | sort -k1,1 | /home/hduser/hadoop/reducerngram1.py| sort -k2,2n
^ # Is this space a typo?
Is this space a typo in your post, or did you really run the command that way?
If this is an accurate representation of your command, I'm guessing the
input's not actually getting to your mapper (and therefore nothing will get to
your reducer).
|
Using Regular Expressions on my Raspberry Pi
Question: Is there an editor that I could install on the Raspbian OS to practice Regex
with? If not, what about through Python? If so, is there a good python IDE out
there for Raspbian that supports Regexs?
Answer: Python itself supports regexes (via a built-in module). If you're just
interested in playing around with them, `import re` in an interactive shell,
and you have access to Python's regular expression engine.
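A quick warm-up session might look like this (the sample text and pattern are arbitrary):

```python
import re

text = 'The Raspberry Pi runs Raspbian'
# \b[A-Z]\w* matches each word that starts with a capital letter
matches = re.findall(r'\b[A-Z]\w*', text)
print(matches)  # ['The', 'Raspberry', 'Pi', 'Raspbian']
```

Any plain-text editor on Raspbian (nano, vim, the bundled IDLE) is enough to save such experiments as scripts once they grow beyond one-liners.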
|
Python: Create Dictionary using list of list, list of tuples and list
Question: I have a list `l1` as shown below:
l1 = [[(1,"INC1"), (1, "INC2")],[(3, "INC4"),(4, "INC6")]]
I have a list `names` as shown below:
names = ['John', 'Marcia']
I have another list `label_issue_list` as shown below:
label_issue_list = [["Problem1","Problem1"],["Problem3","Problem4"]]
The indices of all the three lists are connected as shown in the below output.
I'm trying to use all these three lists and get a dictionary output which
would look like the below:
{
"John" : {
"Problem1" : 2,
"tick_info" : {"1": ["INC1", "INC2"]}
},
"Marcia" : {
"Problem3" : 1,
"Problem4" : 1,
"tick_info" : {"3":["INC4"], "4":["INC6"]}
}
}
I have tried the below code but this doesn't seem to work:
clust_stack = {}
for i in range(len(l1)):
fq= defaultdict( list )
for n,v in l1[i]:
fq[str(n)].append(v)
for name_, data in zip(names, label_issue_list)[i]:
clust_stack[name_] = Counter(map(str,data))
clust_stack[name_]["tick_info"] = {}
clust_stack[name_]["tick_info"] = dict(fq)
I get the below error in the `for name_, data in zip(names,
label_issue_list)[i]:` line:
ValueError: too many values to unpack
What do I need to do to get my desired output?
Answer: Start with zipping all 3 lists:
clust_stack = {}
for name, data, fq in zip(names, label_issue_list, l1):
clust_stack[name] = dict(Counter(data)) # turn back into regular dictionary
tick_info = clust_stack[name]['tick_info'] = {}
for num, tick in fq:
tick_info.setdefault(num, []).append(tick)
Using a `Counter` to count the information in each `l1` element is excellent,
but you do need to turn that result back to a regular dictionary so you can
add other keys to it.
You can then add the `tick_info` key to that, collecting the information in
those lists in list values per key.
Demo:
>>> from collections import Counter
>>> from pprint import pprint
>>> l1 = [[(1,"INC1"), (1, "INC2")],[(3, "INC4"),(4, "INC6")]]
>>> names = ['John', 'Marcia']
>>> label_issue_list = [["Problem1","Problem1"],["Problem3","Problem4"]]
>>> clust_stack = {}
>>> for name, data, fq in zip(names, label_issue_list, l1):
... clust_stack[name] = dict(Counter(data)) # turn back into regular dictionary
... tick_info = clust_stack[name]['tick_info'] = {}
... for num, tick in fq:
... tick_info.setdefault(num, []).append(tick)
...
>>> pprint(clust_stack)
{'John': {'Problem1': 2, 'tick_info': {1: ['INC1', 'INC2']}},
'Marcia': {'Problem3': 1,
'Problem4': 1,
'tick_info': {3: ['INC4'], 4: ['INC6']}}}
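Note that the demo produces integer keys in `tick_info`, while the desired output shows string keys; a small variation of the same loop converts them with `str()` when storing:

```python
from collections import Counter

l1 = [[(1, "INC1"), (1, "INC2")], [(3, "INC4"), (4, "INC6")]]
names = ['John', 'Marcia']
label_issue_list = [["Problem1", "Problem1"], ["Problem3", "Problem4"]]

clust_stack = {}
for name, data, fq in zip(names, label_issue_list, l1):
    clust_stack[name] = dict(Counter(data))
    tick_info = clust_stack[name]['tick_info'] = {}
    for num, tick in fq:
        # str(num) yields the string keys shown in the desired output
        tick_info.setdefault(str(num), []).append(tick)
```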
|
What kind of python magic does dir() perform with __getattr__?
Question: The following is in python 2.7 with MySQLdb 1.2.3.
I needed a class wrapper to add some attributes to objects which didn't
support it (classes with `__slots__` and/or some class written in C) so I came
out with something like this:
class Wrapper(object):
def __init__(self, obj):
self._wrapped_obj = obj
def __getattr__(self, attr):
return getattr(self._wrapped_obj, attr)
I was expecting that the `dir()` builtin called on my instance of `Wrapper`
should have returned just the names inherited by object plus `wrapped_obj`,
and I discovered that this is actually the case for _most_ cases, but not for
all. I tried this with a custom old style class, a custom new style class, and
some builtin classes, it always worked this way: the only exception that i
found is when the wrapped object was an instance of the class
`_mysql.connection`. In this case, `dir()` on my object happens to know also
all the method names attached to the wrapped connection object.
I read in the python documentation about `dir`, and this behaviour appears to
be legit: `dir` is supposed to return a list of "interesting names", not the
"real" content of the instance. But I really can't figure how it does this: it
actually understands the implementation of my `__getattr__` and resolves to
the attached item? If this is true, why only with that `connection` class and
not for instance with a simpler `dict`?
Here is some pasted code as an example of this curious behaviour:
>>> from _mysql import connection
>>> c = connection(**connection_parameters)
>>> c
<_mysql.connection open to '127.0.0.1' at a16920>
>>>
>>> dir(c)
['affected_rows', 'autocommit', 'change_user', 'character_set_name', 'close', 'commit', 'dump_debug_info', 'errno', 'error', 'escape', 'escape_string', 'field_count', 'get_character_set_info', 'get_host_info', 'get_proto_info', 'get_server_info', 'info', 'insert_id', 'kill', 'next_result', 'ping', 'query', 'rollback', 'select_db', 'set_character_set', 'set_server_option', 'shutdown', 'sqlstate', 'stat', 'store_result', 'string_literal', 'thread_id', 'use_result', 'warning_count']
>>>
>>> w = Wrapper(c)
>>> dir(w)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_wrapped_obj', 'affected_rows', 'autocommit', 'change_user', 'character_set_name', 'close', 'commit', 'dump_debug_info', 'errno', 'error', 'escape', 'escape_string', 'field_count', 'get_character_set_info', 'get_host_info', 'get_proto_info', 'get_server_info', 'info', 'insert_id', 'kill', 'next_result', 'ping', 'query', 'rollback', 'select_db', 'set_character_set', 'set_server_option', 'shutdown', 'sqlstate', 'stat', 'store_result', 'string_literal', 'thread_id', 'use_result', 'warning_count']
>>>
>>> d = Wrapper({})
>>> dir(d)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_wrapped_obj']
>>>
Answer: There are two deprecated attributes in Python 2, [`object.__members__` and
`object.__methods__`](https://docs.python.org/2/library/stdtypes.html#object.__methods__);
these were aimed at supporting `dir()` on extension types (C-defined objects):
> `object.__methods__`
> Deprecated since version 2.2: Use the built-in function `dir()` to get a
> list of an object’s attributes. This attribute is no longer available.
>
> `object.__members__`
> Deprecated since version 2.2: Use the built-in function dir() to get a list
> of an object’s attributes. This attribute is no longer available.
These were removed from Python 3, but your connection object (at least in the
older version you are using) still provides a `__methods__` attribute, which
is found _through your `__getattr__` hook_ and used by `dir()`
here.
If you add a `print` statement to the `__getattr__` method you'll see the
attributes being accessed:
>>> class Wrapper(object):
... def __init__(self, obj):
... self._wrapped_obj = obj
... def __getattr__(self, attr):
... print 'getattr', attr
... return getattr(self._wrapped_obj, attr)
...
>>> dir(Wrapper({}))
getattr __members__
getattr __methods__
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_wrapped_obj']
For new-style objects, the newer [`__dir__`
method](https://docs.python.org/2/library/functions.html#dir) supported by
`dir()` is properly looked up on the type only so you don't see that being
accessed here.
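As an aside, a wrapper can opt in to richer `dir()` output itself by defining `__dir__` (a sketch, not from the original post; `__dir__` is honored for new-style classes from Python 2.6 on):

```python
class Wrapper(object):
    def __init__(self, obj):
        self._wrapped_obj = obj

    def __getattr__(self, attr):
        return getattr(self._wrapped_obj, attr)

    def __dir__(self):
        # Combine the wrapper's own names with the wrapped object's
        return sorted(set(dir(type(self)) + list(self.__dict__) +
                          dir(self._wrapped_obj)))

w = Wrapper({})
names = dir(w)  # now includes dict methods like 'keys' as well as '_wrapped_obj'
```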
The [project HISTORY
file](https://github.com/farcepest/MySQLdb1/blob/master/HISTORY) suggests the
attributes were removed in the big Python 3 compatibility update for 1.2.4
beta 1.
|
ERROR: 'ogr2ogr' is not recognized as an internal or external command, operable program or batch file when running ogr2ogr in python script
Question: I get an error when trying to run ogr2ogr thru subprocess but I am able to run
it using just the windows command prompt. The script will be part of a series
of processes that start with batch importing gpx files into a postgres db. Can
somebody please tell me what's wrong? Thanks!
:::::::::::::::::::::::::::: Running THIS script gives me an ERROR: 'ogr2ogr'
is not recognized as an internal or external command, operable program or
batch file.
> import subprocess
>
> import sys
>
> print sys.executable
>
> track= "20131007.gpx"
>
> subprocess.call(["ogr2ogr", "-f", "PostgreSQL", "PG:dbname=TTBASEMain
> host=localhost port=5432 user=postgres password=minda", track], shell=True)
::::::::::::::::::::::::::::: THIS CODE does its job well.
> ogr2ogr -f PostgreSQL PG:"dbname='TTBASEMain' host='localhost' port='5432'
> user='postgres' password='minda'" "20131007.gpx"
::::::::::::::::::::::::::::: THIS is what I have in my environment path:
> C:\Users\User>path PATH=C:\Program Files (x86)\Intel\iCLS Client\;C:\Program
> Files\Intel\iCLS Client\;C:\Program Files (x86)\NVIDIA
> Corporation\PhysX\Common;C:\windows\system32;C:\windows;C:\windows\System32\Wbem;C:\windows\System32\WindowsPowerShell\v1.0\;C:\Program
> Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL
> SDK\3.0\bin\x64;C:\Program Files\Intel\Intel(R) Management Engine
> Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine
> Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine
> Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine C
> omponents\IPT;C:\Program Files\Lenovo\Bluetooth Software\;C:\Program
> Files\Lenovo\Bluetooth
> Software\syswow64;C:\lastools\bin;C:\Python27;C:\Python27\Scripts;C:\Python27\DLLs;C:\Python27\Lib\site-
> packages;C:\Users\User\AppData\Roaming.local\bin;C:\Program Files
> (x86)\Windows Kits\8.1\Windows Performance Toolkit\;C:\Program
> Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\GDAL
Answer: REINSTALLING the python bindings resolved my issue. I don't see GDAL on the
paths below, but it's working now. Is it supposed to be there? Since it's not,
I'll probably have another round of GDAL head-scratching in the future.
::::::::::::::::::::::::::::::::::::::: THIS is what I currently have when I
type in sys.path on python:
> Microsoft Windows [Version 6.2.9200] (c) 2012 Microsoft Corporation. All
> rights reserved.
>
> C:\Users\User>python Python 2.7.8 (default, Jun 30 2014, 16:08:48) [MSC
> v.1500 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or
> "license" for more information.
>
> >>> import sys
> >>> sys.path
> ['', 'C:\\windows\\SYSTEM32\\python27.zip', 'C:\\Python27\\DLLs', 'C:\\Python27\\lib', 'C:\\Python27\\lib\\plat-win', 'C:\\Python27\\lib\\lib-tk', 'C:\\Python27', 'C:\\Python27\\lib\\site-packages', 'C:\\Python27\\lib\\site-packages\\wx-3.0-msw']
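As an aside, `sys.path` only lists where Python looks for _modules_ ; executables like `ogr2ogr` are found through the `PATH` environment variable instead. A generic way to check what a child process will see (nothing here is GDAL-specific):

```python
import os
import shutil

# Directories that subprocess calls will search for executables
path_dirs = os.environ.get('PATH', '').split(os.pathsep)

# Python 3.3+ has shutil.which; on Python 2,
# distutils.spawn.find_executable does roughly the same lookup
located = shutil.which('sh')  # replace 'sh' with 'ogr2ogr' on your machine
```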
|
Is there an easy way to get all common module extensions?
Question: I am making a library that deals with Python modules. Without getting into
details, I need a list of the common Python module extensions.
Obviously, I want `.py`, but I'd also like to include ones such as `.pyw`,
`.pyd`, etc. In other words, I want anything that you can import.
Is there a tool in the standard library which will make this list for me? Or
do I have to make it myself (and hardcode all of the values)?
extensions = ['.py', '.pyw', ...]
Answer: This functionality can be found in the [`importlib.machinery`
module](https://docs.python.org/3/library/importlib.html#module-
importlib.machinery). Inside, there are numerous constants which relate to the
various Python module extensions:
>>> import importlib
>>> importlib.machinery.SOURCE_SUFFIXES
['.py', '.pyw']
>>> importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES
['.pyo']
>>> importlib.machinery.EXTENSION_SUFFIXES
['.pyd']
>>> importlib.machinery.DEBUG_BYTECODE_SUFFIXES
['.pyc']
>>>
So, you could very easily join these into a global set1:
>>> set(importlib.machinery.SOURCE_SUFFIXES +
... importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES +
... importlib.machinery.EXTENSION_SUFFIXES +
... importlib.machinery.DEBUG_BYTECODE_SUFFIXES)
{'.pyw', '.py', '.pyd', '.pyc', '.pyo'}
>>>
* * *
You might also be interested in the [`all_suffixes`
function](https://docs.python.org/3/library/importlib.html#importlib.machinery.all_suffixes):
>>> importlib.machinery.all_suffixes()
['.py', '.pyw', '.pyc', '.pyd']
>>>
Note however that this function will replace `.pyc` with `.pyo` if Python is
launched with either the [`-O` or `-OO`
options](https://docs.python.org/3/using/cmdline.html#miscellaneous-options).
To avoid this, you can do:
>>> set(importlib.machinery.all_suffixes() +
... importlib.machinery.OPTIMIZED_BYTECODE_SUFFIXES +
... importlib.machinery.DEBUG_BYTECODE_SUFFIXES)
{'.pyw', '.py', '.pyd', '.pyc', '.pyo'}
>>>
This will ensure that both `.pyc` and `.pyo` are in the set.
* * *
Finally, you should be wary of `importlib.machinery.BYTECODE_SUFFIXES`. As
@MartijnPieters noted in the comments, it will always be equal to either
`OPTIMIZED_BYTECODE_SUFFIXES` or `DEBUG_BYTECODE_SUFFIXES`. This means that if
you add it to the collection, you will get either a duplicated `.pyc` or a
duplicated `.pyo` value (unless you use a set of course).
From the
[docs](https://docs.python.org/3/library/importlib.html#importlib.machinery.BYTECODE_SUFFIXES):
> `importlib.machinery.BYTECODE_SUFFIXES`
>
> A list of strings representing the recognized file suffixes for bytecode
> modules. Set to either `DEBUG_BYTECODE_SUFFIXES` or
> `OPTIMIZED_BYTECODE_SUFFIXES` based on whether `__debug__` is true.
I didn't bother using this constant however because I want both
`OPTIMIZED_BYTECODE_SUFFIXES` and `DEBUG_BYTECODE_SUFFIXES` in the collection.
So, there is no reason to add it.
* * *
1I decided to use a set because they have a faster lookup time than lists.
Meaning, they are better suited for a global collection of values that will
not change and which needs no particular order. In addition, they will ensure
that we do not accidentally add duplicate extensions to the collection.
|
GDAL reprojection error: in method 'Geometry_Transform', argument 2 of type 'OSRCoordinateTransformationShadow *'
Question: Using Python 2.7.9 with GDAL 1.11.1, with miniconda for package management --
Performing this a simple reprojection of a coordinate point causes the error
described below.
I am relatively new to GDAL, so I checked to see if the code from the [Python
GDAL/OGR 1.0 Cookbook](http://pcjericks.github.io/py-gdalogr-
cookbook/projection.html#reproject-a-geometry) produces the same issue, and it
does:
from osgeo import ogr
from osgeo import osr
source = osr.SpatialReference()
source.ImportFromEPSG(2927)
target = osr.SpatialReference()
target.ImportFromEPSG(4326)
transform = osr.CoordinateTransformation(source, target)
point = ogr.CreateGeometryFromWkt("POINT (1120351.57 741921.42)")
point.Transform(transform)
print point.ExportToWkt()
This is the error:
/opt/miniconda/envs/pygeo/lib/python2.7/site-packages/osgeo/ogr.pyc in Transform(self, *args)
4880 OGRERR_NONE on success or an error code.
4881 """
-> 4882 return _ogr.Geometry_Transform(self, *args)
4883
4884 def GetSpatialReference(self, *args):
TypeError: in method 'Geometry_Transform', argument 2 of type 'OSRCoordinateTransformationShadow *'
CoordinateTransform is a proxy for the C++ OSRCoordinateTransformationShadow
class, generated by SWIG. Per the [source code for
osgeo.ogr.Geometry](http://gdal.org/python/osgeo.ogr-
pysrc.html#Geometry.Transform) (what Point is), the correct types were passed
to the Transform method.
Best guess: Could this be caused by using a version of _ogr that is too old,
and so the implementation of _ogr.Geometry_Transform(self, *args) is expecting
a different? _ogr is another SWIG-generated proxy, I'm guessing for the OGR
class?
Answer: What everyone new to GDAL must learn: assign an error handler. (example:
<http://pcjericks.github.io/py-gdalogr-cookbook/gdal_general.html#install-
gdal-ogr-error-handler>)
With an error handler assigned, the output includes the explanation for the
error. In this case, it was: "Unable to load PROJ.4 library (libproj.so),
creation of OGRCoordinateTransformation failed."
Hopefully, imparting the knowledge of enabling GDAL error handling will help
others who may stumble upon this very issue.
|
Is there a pythonic way of assigning values to variables when passed in from the command line?
Question: I have written to following code but it feels very clunky and I was wondering
if there was a pythonic way of writing the following code:
import argparse
foo = 0
bar = 1
parser = argparse.ArgumentParser()
parser.add_argument("-a", "--foo", type=int,
help="foo")
parser.add_argument("-b", "--bar", type=int,
help="bar")
args = parser.parse_args()
if args.foo: # This is the bit that I think is clunky
foo = args.foo #
if args.bar: #
bar = args.bar #
In my code I have about 7 different arguments and having a list of `if`
statements doesn't seem like the best method. Is there a better way of writing
this section?
Answer: `argparse` supports default values, so there is no need for the `if`s. Also,
you should separate argument parsing from processing; then you don't need
local variables for your args, but can pass them to a function as parameters.
You would end up with something like this:
import argparse
def some_function(foo, bar):
pass
def main():
parser = argparse.ArgumentParser()
parser.add_argument("-a", "--foo", type=int, default=0,
help="foo")
parser.add_argument("-b", "--bar", type=int, default=1,
help="bar")
args = parser.parse_args()
some_function(args.foo, args.bar)
if __name__ == '__main__':
main()
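To see the defaults in action without touching the real command line, you can pass an explicit argument list to `parse_args`:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-a", "--foo", type=int, default=0)
parser.add_argument("-b", "--bar", type=int, default=1)

# No flags given: the defaults are filled in
args = parser.parse_args([])

# -a overrides foo; bar keeps its default
args2 = parser.parse_args(["-a", "5"])
```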
|
APP.YAML: Combine different URLs-same file, static file serving & source code access in app.yaml
Question: I know there are a lot of questions about app.yaml, and I searched and
searched, but this one I couldn't find.
**TLDR** : PLEASE read full answer first, this isn't your standard
application_readable:true. **I basically want to access the same file via
different routes e.g. /static/img/pic.jpg AND /img/pic.jpg**
## USE CASE
I build a flask application (upon the work of
[fsouza](https://github.com/fsouza/gaeseries) ) and i try to build a thumbnail
extension for flask which will work on gae (because it's a read-only FS, I
forked [flask-thumbnails](https://github.com/silentsokolov/flask-thumbnails)
and currently try to expand it.)
## So I need:
* access to my static files via python so I can read the img and make thumbnails on the fly. URL is eg. **/STATIC/IMG/PIC.JPG**
* still deliver other images, css, js via app.yaml. URL is eg. **/IMG/PIC.JPG**
## What's not working:
**It's working locally** but won't work after deploying. I think app.yaml is
not as strictly enforced by dev_appserver.py as it should be.
I can either get one of this scenarios to work. This is how my app.yaml
currently looks:
builtins:
- appstats: on
- admin_redirect: on
- deferred: on
- remote_api: on
- url: /css
static_dir: application/static/css
- url: /js
static_dir: application/static/js
- url: /img
static_dir: application/static/img
- url: /static
static_dir: application/static
application_readable: true
- url: .*
script: run.application.app
I also tried this instead:
- url: /css/(.*)
static_files: css/\1
upload: css/(.*)
- url: /js/(.*)
static_files: js/\1
upload: js/(.*)
- url: /img/(.*)
static_files: img/\1
upload: img/(.*)
When I comment the specific js,css,img stuff out, the application can access
the img in application/static/img and make a thumbnail out of it. But the urls
with e.g. /img/dont.need.thumbnail.jpg won't be served.
When I comment this part:
- url: /static
static_dir: application/static
application_readable: true
img,css,js get served like they should.
Can anybody help me? What am I doing wrong?
**_Are app.yaml urls recursive?_**
## Current Workaround:
My current workaround is, that I simply add several url route through in
python application. But that is not efficient and I suspect it costs me more
CPU time and is slower. e.g.
app.add_url_rule('/img/<path>', 'static_img_files', view_func=views.static_img_files)
def static_img_files(path):
return static_files("img/"+path)
**Bonus hint:**
If you just git push-to-deploy
application_readable: true
won't work, so the python application won't have access to the static images
as soon as you test it on the gae servers and not locally anymore. you have to
deploy it via the app engine launcher (this alone took me ages to find out)
Answer: The answer you're looking for is in the [Static directory
handlers](https://cloud.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Static_file_pattern_handlers)
section of the [Configuring with
app.yaml](https://cloud.google.com/appengine/docs/python/config/appconfig)
doc.
Look for `application_readable`. Setting that attribute to `true` gives you
the best of both worlds, at the expense of quota (since static files marked
this way need to be uploaded to two different places).
**Updated with a working example**
I've stripped this down to the essentials.
app.yaml
application: example
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /dir1
static_dir: dir1
mime_type: text/plain
- url: /dir2
static_dir: dir2
mime_type: text/plain
application_readable: true
- url: .*
script: main.app
main.py
import webapp2
class Test(webapp2.RequestHandler):
def get(self):
self.response.headers['Content-Type'] = 'text/plain'
for path in ['dir1/file1.txt', 'dir2/file2.txt']:
try:
with open(path) as f:
self.response.out.write(f.read())
except IOError:
pass
app = webapp2.WSGIApplication([(r'/', Test)])
dir1/file1.txt
Content1
dir2/file2.txt
Content2
You should be able to navigate to `/dir1/file1.txt` and `/dir2/file2.txt` and
see their contents, but navigating to `/` sees only the latter file. I went
with text files instead of images for simplicity of example; that detail
shouldn't matter.
(I'm using GAE SDK 1.9.17 on Linux)
|
How to make every object in a list a float
Question: I'm using Python 2. When I try to multiply objects in a list, it just repeats
the same thing twice, even though I have tried to use this to resolve the
issue:
map(float, prices)
The code i'm using is:
import urllib
from bs4 import BeautifulSoup
prices = []
htmlfile = urllib.urlopen("http://www.fifacoin.com/default/quick/listwithcategoryid?category_id=6").read()
soup = BeautifulSoup(htmlfile)
for item in soup.find_all('tr', {'data-price': True}):
prices.append(item['data-price'])
map(float, prices)
print prices[1] * 2
This code just outputs the value of `prices[1]` twice. I'm new to Python, so it's
probably something obvious.
Answer: `map` does not change the original list; it simply returns a new list. Try:
prices = map(float, prices)
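As a side note, `map` returns a list in Python 2 but a lazy iterator in Python 3; wrapping the result in `list()` (or using a list comprehension) works in both:

```python
prices = ['1.99', '2.50', '3.25']

# map() never changes the list in place; rebind the name to the result
prices = list(map(float, prices))

# Equivalent list comprehension
doubled = [p * 2 for p in prices]
```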
|
Deterministic python script behaves in non-deterministic way
Question: I have a script which uses no randomisation that gives me different answers
when I run it. I expect the answer to be the same, every time I run the
script. The problem appears to only happen for certain (ill-conditioned) input
data.
The snippet comes from an algorithm to compute a specific type of controller
for a linear system, and it mostly consists of doing linear algebra (matrix
inversions, Riccati equation, eigenvalues).
Obviously, this is a major worry for me, as I now cannot trust my code to give
me the right results. I know the result can be wrong for poorly conditioned
data, but I expect consistently wrong. Why is the answer not always the same
on my Windows machine? Why do the Linux & Windows machine not give the same
results?
I'm using `Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit
(Intel)] on win 32`, with Numpy version 1.8.2 and Scipy 0.14.0. (Windows 8,
64bit).
The code is below. I've also tried running the code on two Linux machines, and
there the script always gives the same answer (but the machines gave differing
answers). One was running Python 2.7.8, with Numpy 1.8.2 and Scipy 0.14.0. The
second was running Python 2.7.3 with Numpy 1.6.1 and Scipy 0.12.0.
I solve the Riccati equation three times, and then print the answers. I expect
the same answer every time, instead I get the sequence '1.75305103767e-09;
3.25501787302e-07; 3.25501787302e-07'.
import numpy as np
import scipy.linalg
matrix = np.matrix
A = matrix([[ 0.00000000e+00, 2.96156260e+01, 0.00000000e+00,
-1.00000000e+00],
[ -2.96156260e+01, -6.77626358e-21, 1.00000000e+00,
-2.11758237e-22],
[ 0.00000000e+00, 0.00000000e+00, 2.06196064e+00,
5.59422224e+01],
[ 0.00000000e+00, 0.00000000e+00, 2.12407340e+01,
-2.06195974e+00]])
B = matrix([[ 0. , 0. , 0. ],
[ 0. , 0. , 0. ],
[ -342.35401351, -14204.86532216, 31.22469724],
[ 1390.44997337, 342.33745324, -126.81720597]])
Q = matrix([[ 5.00000001, 0. , 0. , 0. ],
[ 0. , 5.00000001, 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ]])
R = matrix([[ -3.75632852e+04, -0.00000000e+00, 0.00000000e+00],
[ -0.00000000e+00, -3.75632852e+04, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 4.00000000e+00]])
counter = 0
while counter < 3:
counter +=1
X = scipy.linalg.solve_continuous_are(A, B, Q, R)
print(-3449.15531628 - X[0,0])
My numpy config is as below `print np.show_config()`
lapack_opt_info:
libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
blas_opt_info:
libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
openblas_info:
NOT AVAILABLE
lapack_mkl_info:
libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md', 'mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
blas_mkl_info:
libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
mkl_info:
libraries = ['mkl_blas95', 'mkl_lapack95', 'mkl_intel_c', 'mkl_intel_thread', 'mkl_core', 'libiomp5md']
library_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/lib/ia32', 'C:/Program Files (x86)/Intel/Composer XE 2013 SP1/compiler/lib/ia32']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['c:/Program Files (x86)/Intel/Composer XE 2013 SP1/mkl/include']
None
(edits to trim the question down)
Answer: In general, linalg libraries on Windows give different answers on different
runs at the machine-precision level. I have never heard an explanation for why
this happens only, or mainly, on Windows.
If your matrix is ill conditioned, then the inv will be largely numerical
noise. On Windows the noise is not always the same in consecutive runs, on
other operating systems the noise might be always the same but can differ
depending on the details of the linear algebra library, on threading options,
cache usage and so on.
I've seen, and posted to the scipy mailing list, several examples of this on
Windows; I was mostly using the official 32-bit binaries with ATLAS
BLAS/LAPACK.
The only solution is to make the outcome of your calculation not depend so
much on floating-point precision issues and numerical noise: for example, by
regularizing the matrix inverse, using a generalized inverse (pinv),
reparameterizing, or similar.
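As an illustration of that last suggestion, `numpy.linalg.pinv` drops singular values below a relative cutoff instead of amplifying them (a contrived example, not the OP's matrices):

```python
import numpy as np

# A nearly singular matrix: a plain inverse would amplify the tiny
# second singular value into large numerical noise
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-15]])
b = np.array([2.0, 2.0])

# pinv truncates singular values below rcond * largest, giving a
# stable least-squares solution instead of a noisy exact inverse
x = np.linalg.pinv(A) @ b
```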
|
EPFImporter from Itunes shows undefined symbol: _PyObject_NextNotImplemented after just installing and running
Question: I've just installed python on a new Centos 6.6 server so I can run EPFImporter
from itunes. Then I run on commandline the command:
`./EPFImporter/EPFImporter.py -D mysql itunes20141208`
I am not sure if this helps but these are versions installed on the server:
Python 2.6.6 Centos6.6 pip 1.5.6 from /usr/local/lib/python2.7/site-packages
(python 2.7)
Did anyone encounter this error before?
Traceback (most recent call last):
File "/root/EPFImporter/EPFImporter.py", line 43, in <module>
import EPFIngester
File "/root/EPFImporter/EPFIngester.py", line 40, in <module>
import psycopg2
File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 50, in <module>
from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: /usr/local/lib/python2.7/site-packages/psycopg2/_psycopg.so: undefined symbol: _PyObject_NextNotImplemented
Answer: It could be that you have mismatching libraries: If you look at the [psycopg
documents](http://initd.org/psycopg/docs/install.html), it says:
> Note The libpq header files used to compile psycopg2 should match the
> version of the library linked at runtime. If you get errors about missing or
> mismatching libraries when importing psycopg2 check (e.g. using ldd) if the
> module psycopg2/_psycopg.so is linked to the right libpq.so.
|
Bad dataset layout
Question: I have three sets of 140+ dataset that are laid out in a weird way and I
cannot figure out how to rearrange them with Python. The files are arranged
with 4 rows on top and then 4 blanks lines followed by 5 columns. There are no
headers, rows 1 and 2 are one column, row 3 is 2 columns, and row 4 is
garbage. The first three rows are my identifiers for the dataset. Each data
Set has multiple records. Example:
xx4 <--ID
070414 <--DateStrong
5.6 10 <--Force Ratio
Sample Rate: 50/s <--Garbage
220.68 0.14 17.80 92.20
220.80 0.02 9.40 9.40
224.32 0.14 14.60 72.20
227.08 0.14 26.60 130.60
227.78 0.08 19.60 62.00
228.04 0.18 40.40 257.20
231.22 0.12 14.00 61.20
I'm trying to arrange the set to be:
xx4, 070414, 5.6, 10, 220.68, 0.14, 17.80, 92.20
xx4, 070414, 5.6, 10, 220.80, 0.02, 9.40, 9.40
xx4, 070414, 5.6, 10, 224.32, 0.14, 14.60, 72.20
My current working code is:
import os
import sys
import csv
import pandas as pd
import numpy as np
import itertools as it
import benFuncts.BenFuncts as bf #My own functions
import matplotlib.pyplot as plt
ID = []
ID_dict = {}
DATE = []
FORCE = []
RATIO = []
TIME = []
DURR = []
pF = []
TOF = []
ED7 = []
ED6 = []
ED5 = []
ED4 = []
h = 'DATE', 'DAYNUM', 'RATIO', 'CRIT', 'TOTRESP', 'CRITRESP', 'PELLETS', 'AVG_PF', 'AVG_TOF'
Crit = {}
MastList = []
rd_files = [] # List of file strings
# Makes the main file path in this case:
# /Users/benlibman/Desktop/EffortDemandTests/EffortDemandPyTests/
path = str(os.getcwd()) + '/'
# List of files in the working directory (see path above)
mainDir = os.listdir(str(os.getcwd()) + '/')
# Pulls the list files from the mainDir (above)
ID = [i for i in mainDir if len(i) <= 3 and 'ED' in i]
# f_Out = csv.writer(open('MainFile', 'wa'), delimiter=',')
# f_Out = open('MainFile', 'wa')
# , quoting=csv.QUOTE_NONE)
f_In = csv.reader(open('ED7', 'rb'), delimiter='\t')
def mkPath():
for row in f_In:
for i in row:
if len(i) > 1:
rd_files.append(path + str(i))
mP = mkPath()
# pdmF = pd.read_csv('MainFile', sep='\t', engine='python')
# with open('ED7120214', 'r') as f:
df = pd.read_csv(open('ED7120214', 'r'), sep='\t', skiprows=5, usecols=(
0, 1, 2, 3), names=('TIME', 'DURR', 'pF', 'TOF'))
frCR = pd.read_csv(open('ED7120214', 'r'), sep=' ', skiprows=(0, 1, 3), skipfooter=(
len(df)), engine='python', index_col=False, names=('FORCE', 'RATIO'))
date_index = pd.read_csv(open('ED7120214', 'r'), squeeze=True, sep=' ', skiprows=(
0, 2, 3), skipfooter=(len(df)), engine='python', index_col=False, names=('DATE', 'NaN'))
id_index = pd.read_csv(open('ED7120214', 'r'), squeeze=True, sep=' ', skiprows=(
1, 2, 3), skipfooter=(len(df)), engine='python', index_col=False, names=('ID', 'NaN'))
pDF = pd.DataFrame(df)
for row in pDF.TIME:
TIME.append(row)
for row in pDF.DURR:
DURR.append(row)
for row in pDF.pF:
pF.append(row)
for row in pDF.TOF:
TOF.append(row)
print pDF.pF.mean()
FORCE.append(frCR.FORCE)
RATIO.append(frCR.RATIO)
DATE.append(list(date_index.DATE))
ID_dict.update(id_index.ID)
DATE = [str(i).strip('[]') for i in DATE]
# ED7.append(FORCE)
# ED7.append(DATE)
# ED7.append(RATIO)
ED7.append(TIME)
ED7.append(DURR)
ED7.append(pF)
ED7.append(TOF)
Dt = bf.addCol(range(len(TIME)), DATE)
with open('MainFile', 'wa') as mf:
pDF.to_csv(mf, header=True, index_names=True, names=(
'DATE', 'DAYNUM', 'TIME', 'DURR', 'pF', 'TOF'))
Answer: If all you are trying to do is reformat your data and write it back to a file,
this should work for the file format in your example:
with open('data.txt') as in_file, open('new.txt', 'w') as out_file:
# get the dataset identifiers
ID = in_file.next().strip()
date_strong = in_file.next().strip()
force_ratio = in_file.next().strip()
force_ratio1, force_ratio2 = force_ratio.split()
in_file.next() # Garbage line
# example data has two blank lines
in_file.next()
in_file.next()
dataset_id = (ID, date_strong, force_ratio1, force_ratio2)
# iterate over the records
for line in in_file:
# prepend the dataset id
record = list(dataset_id)
record.extend(line.split())
# write to the new file
out_file.write(','.join(record) + '\n')
|
unexpected unicode values in dataframe?
Question: I can't find why this is happening. Can i get a justification ?
using pandas in python, if I write in the console:
pd.io.json.read_json('{"rCF":{"values":0.05}}')
I got printed a dataframe that looks like this
rCF
values 0.05
This if fine.
But if I write in the console:
pd.io.json.read_json('[{"rCF":{"values":0.05}}]')
I got printed a dataframe that looks like this
rCF
0 {u'values': 0.05}
in particular, why the key is u'values' and not just 'values'
Answer: `json` always decodes to Unicode:
>>> import json
>>> json.loads('{"a":"b"}')
{u'a': u'b'}
It's just that in your former case a `print` or the equivalent somewhere
inside `pandas` is hiding this, as would, e.g:
>>> x = _
>>> for k in x: print k
...
a
after the above snippet; but when you `print` (or the like) a **container** ,
you get to see the more precise `repr` of the container's items.
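A short snippet makes the effect concrete (run under Python 2, decoded strings carry the `u''` prefix that shows up in the container's repr; on Python 3 all strings are unicode and the prefix is gone):

```python
import json

parsed = json.loads('{"values": 0.05}')
key = next(iter(parsed))

# The decoded key is always text: unicode on Python 2, str on Python 3.
print(type(key))

# Printing the string itself hides any u'' prefix...
print(key)
# ...but printing the container shows each item's repr, which is
# where u'values' appears under Python 2.
print(parsed)
```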
|
ImportError: No module named grappellidjango.contrib
Question: **i set following<http://django-
grappelli.readthedocs.org/en/2.6.3/index.html>**
settings.py
INSTALLED_APPS = (
'grappelli',
'django.contrib.admin',
)
url.py
urlpatterns = patterns('',
(r'^grappelli/', include('grappelli.urls')), # grappelli URLS
(r'^admin/', include(admin.site.urls)), # admin site
)
$ python manage.py collectstatic
**as a result my pycharm3.4.1 with django1.7.1 and grappelli2.6.3 tell me:**
> Traceback (most recent call last):
> File "D:\PyCharm 3.4.1\helpers\pycharm\django_manage.py", line 23, in <module>
> run_module(manage_file, None, '__main__', True)
> File "D:\Python27\lib\runpy.py", line 176, in run_module
> fname, loader, pkg_name)
> File "D:\Python27\lib\runpy.py", line 82, in _run_module_code
> mod_name, mod_fname, mod_loader, pkg_name)
> File "D:\Python27\lib\runpy.py", line 72, in _run_code
> exec code in run_globals
> File "D:\Documents\programe\python\django\mysite\manage.py", line 10, in <module>
> execute_from_command_line(sys.argv)
> File "D:\Python27\lib\site-packages\django-1.7.1-py2.7.egg\django\core\management\__init__.py", line 385, in execute_from_command_line
> utility.execute()
> File "D:\Python27\lib\site-packages\django-1.7.1-py2.7.egg\django\core\management\__init__.py", line 354, in execute
> django.setup()
> File "D:\Python27\lib\site-packages\django-1.7.1-py2.7.egg\django\__init__.py", line 21, in setup
> apps.populate(settings.INSTALLED_APPS)
> File "D:\Python27\lib\site-packages\django-1.7.1-py2.7.egg\django\apps\registry.py", line 85, in populate
> app_config = AppConfig.create(entry)
> File "D:\Python27\lib\site-packages\django-1.7.1-py2.7.egg\django\apps\config.py", line 116, in create
> mod = import_module(mod_path)
> File "D:\Python27\lib\importlib\__init__.py", line 37, in import_module
> __import__(name)
> ImportError: No module named grappellidjango.contrib
Answer: Kindly set the following settings in settings.py file:-
INSTALLED_APPS = (
'grappelli',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
I hope above solution will resolve your issue.
|
OpenCV + Python + GigE Vision Camera
Question: I want to ask if someone has an idea whether it's possible to implement
VideoCapture using OpenCV+Python with a GigE Vision camera. I tried
cv2.VideoCapture(0), but I always get the video from the integrated
webcam instead. I was trying the new beta of OpenCV on Windows.
import numpy as np
import cv2
capture = cv2.VideoCapture(0)
while (True):
frame = capture.read()
cv2.imshow('camera',frame)
if cv2.waitKey(1) & 0xFF == odd('q'):
break
capture.release()
cv2.destroyAllWindows()
Thanks in advance.
Answer: Passing a different integer to cv2.VideoCapture(int) instead of 0
selects another connected camera, e.g. cv2.VideoCapture(1). Note also two
fixes to the loop: capture.read() returns a (retval, frame) tuple, and the
key check should use the built-in ord(), not odd():

    import numpy as np
    import cv2

    capture = cv2.VideoCapture(1)

    while (True):
        ret, frame = capture.read()
        cv2.imshow('camera', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    capture.release()
    cv2.destroyAllWindows()
|
Python - Observer pattern - Object has no attribute
Question: I am trying to run an example from the book "Python Essential Reference"
involving the observer pattern, but there is a problem with an attribute. When the
AccountObserver executes `__del__`, an error is raised - Object has no attribute
'observers'. I don't know what is wrong with the code, so any help would be
much appreciated.
class Account(object):
def __init__(self, name, balance):
self.name = name
self.balance = balance
self.observers = set()
def __del__(self):
for ob in self.observers:
ob.close()
del self.observers
def register(self, observer):
self.observers.add(observer)
def unregister(self, observer):
self.observers.remove(observer)
def notify(self):
for ob in self.observers:
ob.update()
def withdraw(self, amt):
self.balance -= amt
self.notify()
class AccountObserver(object):
def __init__(self, theaccount):
self.theaccount = theaccount
self.theaccount.register(self)
def __del__(self):
self.theaccount.unregister(self)
del self.theaccount
def update(self):
print("Balance is %0.2f" % self.theaccount.balance)
def close(self):
print("Account no longer in use")
a = Account("Ketty", 200000)
a_mama = AccountObserver(a)
a_tata = AccountObserver(a)
a.unregister(a_mama)
a.withdraw(10)
And the output:
Balance is 199990.00
Account no longer in use
Exception ignored in: <bound method AccountObserver.__del__ of <__main__.AccountObserver object at 0x024BF9F0>>
Traceback (most recent call last):
File "F:\Projects\TestP\src\main.py", line 28, in __del__
File "F:\Projects\TestP\src\main.py", line 13, in unregister
AttributeError: 'Account' object has no attribute 'observers'
Exception ignored in: <bound method AccountObserver.__del__ of <__main__.AccountObserver object at 0x024BFEB0>>
Traceback (most recent call last):
File "F:\Projects\TestP\src\main.py", line 28, in __del__
File "F:\Projects\TestP\src\main.py", line 13, in unregister
AttributeError: 'Account' object has no attribute 'observers'
Answer: Python cleans out the module when the interpreter exits. At that point all
instances and classes are deleted, and that means that `Account.__del__` can
run **before** `AccountObserver.__del__`. The order in which the classes are
cleared depends on the global namespace dictionary order, which is random
thanks to the [random hash
seed](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED)
used. `Account.__del__` deletes `self.observers` so any later call to
`account.unregister()` will raise an `AttributeError`.
Your code relies on the classes and attributes all still being there when the
module exits. That means you can get both `KeyError` errors (as `a_mama` was
already unregistered), or `AttributeError` as the `self.observers` attribute
is already cleared (because `Account.__del__` cleared it).
There is a big fat warning in the [`object.__del__`
documentation](https://docs.python.org/3/reference/datamodel.html#object.__del__):
> **Warning** : Due to the precarious circumstances under which `__del__()`
> methods are invoked, exceptions that occur during their execution are
> ignored, and a warning is printed to `sys.stderr` instead. Also, when
> `__del__()` is invoked in response to a module being deleted (e.g., when
> execution of the program is done), other globals referenced by the
> `__del__()` method may already have been deleted or in the process of being
> torn down (e.g. the import machinery shutting down). For this reason,
> `__del__()` methods should do the absolute minimum needed to maintain
> external invariants. Starting with version 1.5, Python guarantees that
> globals whose name begins with a single underscore are deleted from their
> module before other globals are deleted; if no other references to such
> globals exist, this may help in assuring that imported modules are still
> available at the time when the `__del__()` method is called.
The work-around is to make your `__del__` method more robust in the face of
such exceptions:
    def unregister(self, observer):
        try:
            self.observers.remove(observer)
        except (KeyError, AttributeError):
            # no such observer, or the observers set has already been cleared
            pass
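An alternative that sidesteps `__del__` ordering entirely is to hold the observers in a `weakref.WeakSet`, so observers drop out automatically when they are garbage collected. A minimal sketch of that variant (not the book's original design):

```python
import weakref

class Account(object):
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        # Weak references: observers vanish from the set when collected,
        # so no unregister/__del__ bookkeeping is needed.
        self.observers = weakref.WeakSet()

    def register(self, observer):
        self.observers.add(observer)

    def notify(self):
        for ob in self.observers:
            ob.update()

    def withdraw(self, amt):
        self.balance -= amt
        self.notify()

class AccountObserver(object):
    def __init__(self, theaccount):
        self.theaccount = theaccount
        theaccount.register(self)

    def update(self):
        print("Balance is %0.2f" % self.theaccount.balance)

a = Account("Ketty", 200000)
ob = AccountObserver(a)
a.withdraw(10)      # notifies ob: "Balance is 199990.00"
del ob              # on CPython the observer is collected immediately
a.withdraw(10)      # no observers left; nothing to notify, no errors
```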
|
OpenCV python - Distance from Origin to Harris Corners
Question: I'm writing a python library to do basic processing on geometric images. One
of the desired functions is to return a list of distance of all corners'
distance from the origin. I'm struggling with how to do this after looking at
the print of my corner objects;
[[ 9.20031281e+01 9.20031281e+01 9.20031281e+01 ..., 6.66796863e-01
1.01710939e+01 1.01710939e+01]
[ 1.36668701e+02 1.36668701e+02 1.36668701e+02 ..., 1.33374023e+00
1.07448441e+02 1.07448441e+02]
[ 1.36668701e+02 1.36668701e+02 1.36668701e+02 ..., 1.33374023e+00
1.07448441e+02 1.07448441e+02]
...,
[ -7.81250012e-04 3.12500005e-03 1.83593743e-02 ..., 3.36616707e+01
2.24355469e+01 2.24355469e+01]
[ -4.88281257e-05 3.12500005e-03 5.41992206e-03 ..., 3.67563972e+01
2.24355469e+01 2.24355469e+01]
[ -4.88281257e-05 5.37109387e-04 5.37109387e-04 ..., 3.67563972e+01
2.24355469e+01 2.24355469e+01]]
This image looked like this (note the two pink detected corners);

How can I find the distance (angle is not required although would be useful
too) from origin to the corners?
Thanks!
Answer: Here is an example of retrieving the `(x,y)` location of each match. I'll be
honest, it's not the neatest way to do things nor probably the quickest, but it
works (also I haven't fine-tuned the parameters for detection).
import cv2
import numpy as np
import math
filename = "/Users/me/Downloads/square.jpg"
img = cv2.imread(filename)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
dst = cv2.cornerHarris(gray,2,3,0.04)
for y in range(0, gray.shape[0]):
for x in range(0, gray.shape[1]):
harris = cv2.cv.Get2D(cv2.cv.fromarray(dst), y, x) # get the x,y value
# check the corner detector response
if harris[0] > 0.01*dst.max():
print x,y # these are the locations of the matches
print 'Distance in pixels from origin: %d' % math.sqrt(x**2+y**2)
# draw a small circle on the original image
cv2.circle(img,(x,y),2,(155, 0, 25))
cv2.imshow('Harris', img) # show the image
cv2.waitKey()
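Newer OpenCV builds drop the old `cv2.cv` module, but since `dst` is just a NumPy array, the same thresholding (plus distance and angle from the origin) can be done with NumPy alone. A sketch, using a synthetic response array in place of the real `cv2.cornerHarris` output:

```python
import numpy as np

# Synthetic stand-in for the Harris response returned by cv2.cornerHarris.
dst = np.zeros((100, 100), dtype=np.float32)
dst[30, 40] = 1.0   # pretend a strong corner at (x=40, y=30)
dst[60, 80] = 0.9   # pretend a strong corner at (x=80, y=60)

# Same threshold as the loop above, vectorised over the whole array.
ys, xs = np.where(dst > 0.01 * dst.max())

distances = np.hypot(xs, ys)             # Euclidean distance from origin
angles = np.degrees(np.arctan2(ys, xs))  # angle from origin, in degrees

for x, y, d, ang in zip(xs, ys, distances, angles):
    print("corner (%d, %d): %.1f px, %.1f deg" % (x, y, d, ang))
```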
|
streaming md5sum of contents of a large remote tarball
Question: I need to fetch a .tar.gz archive from an HTTP server and perform an MD5sum of
each file it contains. Since the archive is 4.5GB compressed, 12GB
decompressed, I'd like to do so without touching the hard drive. Of course I
can't keep everything in RAM either.
I'm trying to use python for it, but my problem is that for some weird reason
the tarfile module tries to seek() to the end of the input file handle - which
is something that you can't do with piped streams. Ideas?
import tarfile
import hashlib
import subprocess
URL = 'http://myhost/myfile.tar.gz'
url_fh = subprocess.Popen('curl %s | gzip -cd' % URL, shell=True, stdout=subprocess.PIPE)
tar_fh = tarfile.open(mode='r', fileobj=url_fh.stdout)
for tar_info in tar_fh:
content_fh = tar_fh.extractfile(tar_info)
print hashlib.md5(content_fh.read()).hexdigest(), tar_info.name
tar_fh.close()
The above fails with:
Traceback (most recent call last):
File "gzip_pipe.py", line 13, in <module>
tar_fh = tarfile.open(mode='r', fileobj=url_fh.stdout)
File "/algo/algos2dev4/AlgoOne-EC/third-party-apps/python/lib/python2.6/tarfile.py", line 1644, in open
saved_pos = fileobj.tell()
IOError: [Errno 29] Illegal seek
Answer: To find md5 sums of all files in a remote archive on-the-fly:
#!/usr/bin/env python
import tarfile
import sys
import hashlib
from contextlib import closing
from functools import partial
try:
from urllib.request import urlopen
except ImportError: # Python 2
from urllib2 import urlopen
def md5sum(file, bufsize=1<<15):
d = hashlib.md5()
for buf in iter(partial(file.read, bufsize), b''):
d.update(buf)
return d.hexdigest()
url = sys.argv[1] # url to download
with closing(urlopen(url)) as r, tarfile.open(fileobj=r, mode='r|*') as archive:
for member in archive:
if member.isreg(): # extract only regular files from the archive
with closing(archive.extractfile(member)) as file:
print("{name}\t{sum}".format(name=member.name, sum=md5sum(file)))
|
invalid elf header error for .so file
Question: I'm trying to run a script in python, and I'm getting an "invalid elf header"
error. I have a series of scripts that call one another, I'll go ahead and
include those as well:
import sys
sys.path.append("/home/smh/Linux/Desktop/gras-03-03/python")
from python_utilities import GRASTestor
which calls GRASTestor.py
from BaseTestor import BaseTestor
import numpy as np
import hepunit as unit
from gdml_writer import gdml_writer
from GDMLGeometryBuilder import GDMLGeometryBuilder
from GRASMacroBuilder import GRASMacroBuilder,GRASRMCMacroBuilder
from Plotters import Plotter
import os
import sys
import SpenvisCSVFileHandler
which calls SpenvisCSVFileHandler.py
import string
import Spenvis.so
import os
from numpy import *
which is where we get our error, specifically with the line "import
Spenvis.so"
/home/smh/Linux/Desktop/gras-03-03/python/python_utilities
Traceback (most recent call last):
File "perform_gras_rmc_tests.py", line 6, in <module>
from python_utilities import GRASTestor
File "/home/smh/Linux/Desktop/gras-03-03/python/python_utilities/GRASTestor.py", line 19, in <module>
import SpenvisCSVFileHandler
File "/home/smh/Linux/Desktop/gras-03-03/python/python_utilities/SpenvisCSVFileHandler.py", line 8, in <module>
import Spenvis.so
ImportError: /home/smh/Linux/Desktop/gras-03-03/python/python_utilities/Spenvis.so: invalid ELF header
And I'm not certain why it's not working. Any suggestions would be
appreciated!
Answer: Never mind. Upon looking at the file architecture, it appears the file
Spenvis.so is mac specific for some reason. Just need to get the correct file,
then.
|
timestamp column in sqlite return string in python
Question: I created a table with the `SQLite Date Browse` app ...
when I want to retrieve a `datetime` value from a `timestamp` column, SQLite returns
a unicode type ...

this is my insert code :
def Insert(self,mode,path,vname,stime,ftime):
con = sqlite3.connect(PATH_DataBase) # @UndefinedVariable
con.execute('INSERT INTO SendList VALUES(?,?,?,?,?)',(mode,path,vname,stime,ftime))
con.commit()
con.close()
dt1 = datetime.datetime(2013,01,01,01,01,01,0)
dt2 = datetime.datetime(2015,01,01,01,01,01,0)
c = 0
for f in os.listdir('/home/abbas/test/'):
c += 1
slist.Insert(common.MODE_Bluetooth_JAVA, '/home/abbas/test/'+f,'flower'+str(c) , dt1, dt2)
and now this is my table :

but when I want to compare `starttime` with datetime.now(), python gives me this error:
`TypeError: can't compare datetime.datetime to unicode`
Answer: "SQLite does not have a storage class set aside for storing dates and/or
times." Reference: <https://www.sqlite.org/datatype3.html>
Python's sqlite3 module offers "default adapters for the date and datetime
types in the datetime module." Reference:
<https://docs.python.org/2/library/sqlite3.html#default-adapters-and-
converters>
The only catch is that you must be sure to define the columns appropriately.
Example DDL:
import sqlite3
con = sqlite3.connect(PATH_DataBase, detect_types=sqlite3.PARSE_DECLTYPES)
con.execute('''create table if not exists SendList (
cid primary key,
mode text,
path text,
vname text,
starttime timestamp,
endtime timestamp);''')
con.commit()
con.close()
Any subsequent connections to insert or select data must pass
`sqlite3.PARSE_DECLTYPES` as the value for the keyword argument (aka kwarg)
`detect_types`. Example:
import datetime as dt
con = sqlite3.connect(PATH_DataBase, detect_types=sqlite3.PARSE_DECLTYPES)
cur = con.cursor()
cur.execute('''select
*
from
SendList
where
starttime between ? and ?
limit 10;''',
(dt.datetime(2013,1,1,0,0,0), dt.datetime(2014,12,31,23,59,59)))
results = cur.fetchall()
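A self-contained round-trip (in-memory database, illustrative table) shows that with `detect_types` set, the `timestamp` column comes back as a real `datetime` object, so comparing it against `datetime.now()` works:

```python
import datetime as dt
import sqlite3

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("create table SendList (vname text, starttime timestamp)")
con.execute("insert into SendList values (?, ?)",
            ("flower1", dt.datetime(2013, 1, 1, 1, 1, 1)))

row = con.execute("select starttime from SendList").fetchone()
# The column declared as `timestamp` is converted back to datetime,
# so the comparison below no longer raises TypeError.
print(type(row[0]))
print(row[0] < dt.datetime.now())
```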
|
How to display text of listbox items in the canvas with Python3x tkiner?
Question: I am trying to display the text of the selected item of a listbox into the
canvas. When I bind the listbox to a helper event handler, it throws Attribute
Error: CLASS object has no attribute "HELPER EVENT HANDLER".
What I want is as follows:
1) When double-clicking an item in the listbox to the left, its text should be
displayed on the canvas. This particular line of code is causing all the
trouble:
lstbox.bind("<Double-Button-1>", self.OnDouble)
Could you please help me fixing this error?
2) I believe that there must be a way to make the lines' height on the listbox
larger than they appear in my application. However, I don't know how to do it.
I tried providing several options but these options are not recognized by
tkinter. Could you please suggest to me how to do it?
Here is the code:
import tkinter as tk
languages = ['Mandarin', 'English', 'French']
class LanguageFamilies(tk.Frame):
def __init__(self, *args, **kwargs):
tk.Frame.__init__(self, *args, **kwargs)
canv = tk.Canvas(self, width=675, height=530, bg="white", relief="sunken")
canv.config(scrollregion=(0,0,300,650), highlightthickness=0)
canv.pack(side="right", expand=True, fill="both")
# Create scroll bar
sbar = tk.Scrollbar(self)
canv.config(yscrollcommand=sbar.set)
sbar.config(command=canv.yview)
sbar.pack(side="right", fill="both")
# Create Scroll List
lstbox = tk.Listbox(self, width=240, height=530, relief="sunken", font="Courier")
lst_scrollbar = tk.Scrollbar(self)
lstbox.config(yscrollcommand=lst_scrollbar.set)
lst_scrollbar.config(command=lstbox.yview)
lstbox.pack(side="left", expand=True, fill="both")
lst_scrollbar.pack(side="right", fill="both")
lstbox.bind("<Double-Button-1>", self.OnDouble) # THIS IS THE LINE CAUSING THE ERROR
# Add items to the lstbox
i = 0
while i < len(languages):
lstbox.insert(i, languages[i])
i += 1
# Create a text inside canvas
canv_id = canv.create_text(50, 50, font="Times 14", anchor="nw")
msg = "This is a text inside canvas."
canv.insert(canv_id, "end", msg)
#Binding Handler
def OnDouble(self, event):
self.widget = event.widget
selection = self.widget.curselection()
content = self.widget.get(selection[0])
print("You selected", content)
if __name__ == "__main__":
root = tk.Tk()
root.geometry("930x530")
root.title("Language Families")
LanguageFamilies(root).pack(fill="both", expand=True)
root.mainloop()
And this is the error message:
Traceback (most recent call last):
File "C:/Python33/new_stack_overflow.py", line 43, in <module>
LanguageFamilies(root).pack(fill="both", expand=True)
File "C:/Python33/new_stack_overflow.py", line 23, in __init__
lstbox.bind("<Double-Button-1>", self.OnDouble)
AttributeError: 'LanguageFamilies' object has no attribute 'OnDouble'
Your help is highly appreciated!
Answer: The problem is that the def for `OnDouble` is defined _inside_ `__init__`,
making it not a method of the class but a local function in the scope of `__init__`.
You need to remove one level of indentation for `OnDouble`, so that the `def` sits
at class level alongside `__init__`.
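The difference is easy to demonstrate without tkinter at all: a `def` nested inside `__init__` is only a local function that is discarded when `__init__` returns, so it never becomes an attribute of the instance:

```python
class Broken(object):
    def __init__(self):
        def on_double(event):      # local to __init__, discarded on return
            pass

class Fixed(object):
    def __init__(self):
        pass

    def on_double(self, event):    # class-level: a real method
        return "handled"

print(hasattr(Broken(), "on_double"))   # False
print(hasattr(Fixed(), "on_double"))    # True
```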
|
Name Error on MySQLDb
Question: I am working on this Python Credits on your account system, but it appears my
last for loop isn't running leading to a time_set not being defined. Does
anybody know why my last for loop isn't running. Database details have been
removed for security reasons. Here is my code:
from time import sleep
import MySQLdb
db = MySQLdb.connect(host="", # your host, usually localhost
user="", # your username
passwd="", # your password
db="") # name of the data base
cursor = db.cursor()#Defines a cursor allowing me to execute SQL commands
def timer(time, _print=False):
if time == 0:
cursor.execute("SELECT * FROM Accounts WHERE Username='{0}';".format(name))
cursor.execute("UPDATE Accounts SET Credits = 0 WHERE Username = '{1}';".format(creds, name))
return ''
else:
if _print: print time
sleep(1)
return timer(time - 1, _print)
name = raw_input("What username would you like to add credits to?:")
cursor.execute("SELECT * FROM Accounts WHERE Username='{0}';".format(name))
creds = input("How many credits would you like to add to the account?")
for test in cursor.fetchall():
creds = creds + test[3]
cursor.execute("UPDATE Accounts SET Credits = {0} WHERE Username = '{1}';".format(creds, name))
for row in cursor.fetchall():
time_set = creds * 60
print time_set
timer(time_set, True)
Answer: It appears that you are trying to pull rows from an UPDATE query, which will
get you nowhere since UPDATE does not return any rows.
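The behaviour is the same across DB-API drivers; sqlite3 (standing in for MySQLdb here, with an illustrative table) shows it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table Accounts (Username text, Credits integer)")
cur.execute("insert into Accounts values ('bob', 5)")

cur.execute("update Accounts set Credits = 10 where Username = 'bob'")
rows_after_update = cur.fetchall()
print(rows_after_update)     # [] -- UPDATE yields no result rows

# Fetch via a SELECT instead, then derive the timer value from it.
cur.execute("select Credits from Accounts where Username = 'bob'")
creds = cur.fetchone()[0]
print(creds * 60)            # 600
```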
|
Scrapy, with Python 3.4 is failing in terminal
Question:
File "/usr/local/bin/scrapy", line 7, in <module>
from scrapy.cmdline import execute
ImportError: No module named scrapy.cmdline
I am getting this when attempting to make a new project in mac terminal.
Any help would be appreciated.
Answer: Scrapy [doesn't work on Python
3](http://doc.scrapy.org/en/latest/faq.html#does-scrapy-work-with-python-3):
> Scrapy is supported under Python 2.7 only. Python 2.6 support was dropped
> starting at Scrapy 0.20.
|
converting a list into a HTML table in python
Question: I have a text file; I read its contents and assign them to a variable.
I need to put the contents into an HTML table. As of now, I wrote a python
snippet to put the contents into a variable. How can I convert it to an HTML table?
I need to loop through the file, assign the variables, and write the data as
field values in a table. How can I do it? Help needed.
My .txt file is
34,52,l,ma,cgb,eta
45,52,X,,lta,tox
67,52,V,nm,,oyt
My code to read is :
imp_auth = []
asn_eng = []
with open ("FR_list.txt","rt") as f:
for line in f:
(id,rel,sta,sev,ia,ae,iam,aem) = line.split(',')
imp_auth.append(iam)
asn_eng.append(aem)
Answer: You could use one of the available HTML templating engines and render the
template.
For example, by using the [Jinja2](http://jinja.pocoo.org/docs/dev/) templating
engine:
from jinja2 import Template
table = """
<table>
{% for line in lines %}
<tr>
<td>{{ line[0] }}</td>
<td>{{ line[1] }}</td>
<td>{{ line[2] }}</td>
...
</tr>
{% endfor %}
</table>
"""
with open("FR_list.txt", "rt") as f:
template = Template(table)
t = template.render(lines=[line.split(',') for line in f])
print(t)
"""
<table>
<tr>
<td>34</td>
<td>52</td>
<td>l</td>
...
</tr>
...
</table>
"""
|
Value accessibility across files in Python
Question: I have a file structure as follows
Folder1:
Folder2
Class-with-global-variables-and-static-methods-to-modify-them.py
Folder2:
Myfile-1.py
Myfile-2.py
Master.py
Master.py imports the other 2 files in its folder. The " Class-with-global-
variables-and-static-methods-to-modify-them.py" is never instantiated and the
global values in it are modified via static methods. But I want these
modifications to persist globally.
ex:
class Class-with-global-variables-and-static-methods-to-modify-them:
...
...
importance = 0
...
...
@staticmethod
def modifyImportance(value):
Class-with-global-variables-and-static-methods-to-modify-them.imortance = 1
If Myfile-1.py modified the value of importance I want the new value of
importance to be available to Myfile-2.py if It tried to access it.
Answer: What you have already would work, but you're doing it wrong.
There is no point in Python in having a class that is never instantiated and
which only contains static variables. Python is not Java, and doesn't require
classes. Instead, just define the variables at module level and modify them
directly.
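A quick way to see why module-level state is enough: every importer gets the same module object (cached in `sys.modules`), so a change made through one reference is visible through all of them. Here a dynamically created module stands in for a shared `settings.py` (the module name is hypothetical):

```python
import sys
import types

# Stand-in for a real settings.py on disk.
settings = types.ModuleType("shared_settings")
settings.importance = 0
sys.modules["shared_settings"] = settings

# "Myfile-1" imports it and modifies the value...
import shared_settings as seen_by_file1
seen_by_file1.importance = 1

# ..."Myfile-2" imports it too and sees the updated value,
# because both names refer to the one cached module object.
import shared_settings as seen_by_file2
print(seen_by_file2.importance)   # 1
```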
|
Writing and reading a new file by passing the files as an argument to a function in a python script
Question: I have a python script that gets input file names from the command prompt. I
created a list to store all the input files and pass that to a function to
create a new file with all the input files merged at once. Now, I pass this
newly written file as an input to another function. I am getting an error
message
**TypeError: coercing to Unicode: need string or buffer, list found**
Code:
file_list = []
for arg in range(1,len(sys.argv)-2):
file_list.append(sys.argv[arg])
process_name = sys.argv[len(sys.argv)-1]
integrate_files(file_list,process_name)
def integrate_files(file_list,process_name):
with open('result.log', 'w' ) as result:
for file_ in file_list:
for line in open( file_, 'r' ):
result.write( line )
start_process(result,process_name)
def start_process(result,process_name):
with open(result,'r') as mainFile:
content = mainFile.readlines()
I am getting this error highlighted at the lines containing `with open()`. I tried
to print the abspath of the result.log file. It printed **closed file
'result.log', mode 'w' at 0x000000000227578**. Where am I going wrong? How
should I create a new file and pass it to a function?
Answer: Your problem is that `result` is a closed file object:
start_process(result,process_name)
I think you want
start_process('result.log', process_name)
You could clean the script up a bit with
import shutil
file_list = sys.argv[1:-1]
process_name = sys.argv[-1]
integrate_files(file_list,process_name)
def integrate_files(file_list,process_name):
with open('result.log', 'w' ) as result:
for file_ in file_list:
with open(file_) as infile:
shutil.copyfileobj(infile, result)
start_process('result.log',process_name)
def start_process(result,process_name):
with open(result,'r') as mainFile:
content = mainFile.readlines()
|
Launch a completely independent process
Question: I wanted to initiate a process from my python script `(main.py)`, specifically
I want to run the below command
`nohup python ./myfile.py &`
and this file `myfile.py` should keep running even after my main python script
exits. Additionally I wish to get the `pid` of the new process.
I tried `os.spawnl*`, `os.exec*` & `subprocess.Popen` methods, all are
terminating my `myfile.py` if my main.py script exits.
I may be missing something.
Update: Can I use `os.startfile` with `xdg-open`? Is it a right approach?
**Example**
a = subprocess.Popen([sys.executable, "nohup /usr/bin/python25 /long_process.py &"],\
stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
print a.pid
If I check `ps aux | grep long_process`, I cannot see any process running.
**long_process.py** just keeps printing some text and never exits.
Am I doing anything wrong here?
Answer: You open your long-running process and _keep a pipe to it_. So you expect to
talk to it. When your launcher script exits, you can no longer talk to it.
~~The long-running process receives a `SIGPIPE` and exits.~~
The following just worked for me (Linux, Python 2.7).
Create a long-running executable:
$ echo "sleep 100" > ~/tmp/sleeper.sh
Run Python REPL:
$ python
>>>
import subprocess
import os
p = subprocess.Popen(['/bin/sh', os.path.expanduser('~/tmp/sleeper.sh')])
# look ma, no pipes!
print p.pid
# prints 29893
Exit the REPL and see the process still running:
>>> ^D
$ ps ax | grep sleeper
29893 pts/0 S 0:00 /bin/sh .../tmp/sleeper.sh
29917 pts/0 S+ 0:00 grep --color=auto sleeper
If you want to first communicate to the started process and then leave it
alone to run further, you have a few options:
* Handle `SIGPIPE` in your long-running process, do not die on it. Live without stdin after the launcher process exits.
* Pass whatever you wanted using arguments, environment, or a temporary file.
* If you want bidirectional communication, consider using a named pipe ([man mkfifo](http://linux.die.net/man/3/mkfifo)) or a socket, or writing a proper server.
* Make the long-running process fork after the initial bi-direcional communication phase is done.
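On POSIX, the detachment can also be made explicit with `start_new_session=True`, which runs `setsid()` in the child so it is not tied to the launcher's session. A sketch that writes a stand-in long-running script (hypothetical payload) to a temporary file:

```python
import subprocess
import sys
import tempfile

# Write a stand-in for long_process.py.
script = tempfile.NamedTemporaryFile("w", suffix=".py", delete=False)
script.write("import time\ntime.sleep(60)\n")
script.close()

# No pipes are kept open, and setsid() detaches the child (POSIX only),
# so it keeps running after this launcher exits.
child = subprocess.Popen(
    [sys.executable, script.name],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
)
print(child.pid)          # pid of the detached process
```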
|
python-user-agents library is not working
Question: I am trying to use the [python-user-agents](https://github.com/selwin/python-
user-agents/blob/master/user_agents/parsers.py). I keep running into a number
of bugs within the library itself.
First it referred to a `from ua_parser import user_agent_parser` that it never
defined. So after banging my head, I looked online to see what that might be
and found that `ua_parser` is yet another library that this project was using.
So I downloaded `ua_parser`. But now I am getting an error that
TypeError: parse_device() got an unexpected keyword argument 'model'
Sure enough, `ua_parser` has a model variable that the python-user-agents
library is not expecting. Has anyone done a better job with this library?
Whoever wrote it clearly did a terrible job. But it seems to be the only thing
out there that I could find. Any help fixing it to work well? I am looking to
use it so to identify if a browser's device is mobile or touchable or a tablet
as in: `user_agent.is_mobile` or `user_agent.is_touch_capable` or
`user_agent.is_tablet`
Answer: if you look at the readme from the github link it tells you what to install
and how to use the lib:
You need pyyaml and ua-parser:
pip install pyyaml ua-parser user-agents
A working example:
In [1]: from user_agents import parse
In [2]: ua_string = 'Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B179 Safari/7534.48.3'
In [3]: user_agent = parse(ua_string)
In [4]: user_agent.is_mobile
Out[4]: True
In [5]: user_agent.is_touch_capable
Out[5]: True
In [6]: user_agent.is_tablet
Out[6]: False
|
Python/Selenium incognito/private mode
Question: I can not seem to find any documentation on how to make Selenium open the
browser in incognito mode.
Do I have to setup a custom profile in the browser or?
Answer: First of all, since `selenium` by default starts up a browser with a clean,
brand-new profile, _you are actually already browsing privately_. Referring
to:
* [Python - Start firefox with Selenium in private mode](http://stackoverflow.com/questions/27425116/python-start-firefox-with-selenium-in-private-mode)
* [How might I simulate a private browsing experience in Watir? (Selenium)](http://stackoverflow.com/questions/13082656/how-might-i-simulate-a-private-browsing-experience-in-watir-selenium)
* * *
But you can strictly enforce/turn on incognito/private mode anyway.
For chrome pass [`--incognito` command-line
argument](http://peter.sh/experiments/chromium-command-line-
switches/#incognito):
> `--incognito` Causes the browser to launch directly in incognito mode.
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--incognito")
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.get('https://google.com')
FYI, here is what it would open up:

For firefox, set `browser.privatebrowsing.autostart` to `True`:
from selenium import webdriver
firefox_profile = webdriver.FirefoxProfile()
firefox_profile.set_preference("browser.privatebrowsing.autostart", True)
driver = webdriver.Firefox(firefox_profile=firefox_profile)
FYI, this corresponds to the following checkbox in settings:

|
python user input without console
Question: I have a process that runs in the background using pythonw (.pyw instead of
.py) so it does not show a console. As I am reformatting said process for
distribution, I need it to ask some first-run questions that need answering.
Making the user fill out a generated text file would work, but is not user
friendly. It is sadly not as simple as
config = {}
config['user'] = input('new user data: ')
because there is no console to request the input, so it will hang unanswered.
As I am trying to make this for systems that may not have custom modules, I'm
trying not to add dependencies. Is there any good way to ask multiple questions
without a console to host the input, using only the base Python install?
Answer: To emulate builtin `input()` using `tkinter`, you could use `askstring()`:
from tkinter import Tk
from tkinter.simpledialog import askstring
root = Tk()
root.withdraw() # hide main window
config = {}
config['user'] = askstring("Title", 'new user data: ')
|
Extracting href ID from block of code using Python
Question: From the following code, I want to target the href and collect the ID number
at the end of the link, setting that number as a variable named "VARIABLE" (in
this block of code, the number would be "2019786"). I've used requests to
collect the page in my program and I also have imported BeautifulSoup4. Also,
this block of code could change from page to page, thus changing the href ID
number, so I was thinking when collecting this number, use something like:
/d2l/lp/ouHome/home.d2l?ou=%
<a class="d2l-link d2l-outline d2l-left" href="/d2l/lp/ouHome/home.d2l?ou=2019786" title="Enter CHEM-1315-018 - Lab-CHEM 1315-001">CHEM-1315-018 - Lab-CHEM 1315-001</a>
Thanks in advance, Jackson
Answer: You can use BeautifulSoup and a simple regex to get the id if the pattern of
the URLs is going to be the same.
The code would look something like this:
from bs4 import BeautifulSoup
import re
text = """<a class="d2l-link d2l-outline d2l-left" href="/d2l/lp/ouHome/home.d2l?ou=2019786" title="Enter CHEM-1315-018 - Lab-CHEM 1315-001">CHEM-1315-018 - Lab-CHEM 1315-001</a>"""
    soup = BeautifulSoup(text, 'html.parser')  # name a parser to avoid the bs4 warning
href_link = soup.a['href']
match = re.search(r'ou=(\d*)', href_link)
if match:
print int(match.group(1)) # do whatever you want with it here
else:
print "no id found"
Note that you don't need to specify the whole URL to get the id, as the id
would be always sent as a parameter after "ou=".
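Since the number is just the value of the `ou` query-string parameter, the standard library's URL parser is an alternative to a regex once you have the href. A sketch (the try/except import keeps it working on both Python 2 and 3):

```python
try:
    from urllib.parse import urlparse, parse_qs   # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs       # Python 2

href = "/d2l/lp/ouHome/home.d2l?ou=2019786"
params = parse_qs(urlparse(href).query)   # {'ou': ['2019786']}
VARIABLE = int(params['ou'][0])
print(VARIABLE)  # -> 2019786
```

This fails loudly (with a `KeyError`) if a link has no `ou` parameter, rather than silently matching nothing.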
|
Python script to compress all pdf files in a directory on Windows 7
Question: Following the code sample
[here](http://stackoverflow.com/questions/27111760/script-to-compress-all-pdf-
files-in-a-directory), I have the code below that is supposed to compress
files in a directory using Ghostscript. The code compresses the files but then
when I open the files, I do not see any content at all. I only want the files
to be compressed. I don't want the content to be erased.
from __future__ import print_function
import os
import subprocess
for root, dirs, files in os.walk("C:\comp"):
for file in files:
if file.endswith(".pdf"):
filename = os.path.join(root, file)
print (filename)
arg1= '-sOutputFile=' + file
p = subprocess.Popen(['C:/Program Files/gs/gs9.15/bin/gswin64c.exe',
'-sDEVICE=pdfwrite',
'-dCompatibilityLevel=1.4',
'-dPDFSETTINGS=/screen', '-dNOPAUSE',
'-dBATCH', '-dQUIET', str(arg1), filename],
stdout=subprocess.PIPE)
print (p.communicate())
Answer: I fixed the issue following @KenS's suggestion of giving the output file a
different name, and this worked.
from __future__ import print_function
import os
import subprocess
for root, dirs, files in os.walk("C:\comp"):
for file in files:
if file.endswith(".pdf"):
filename = os.path.join(root, file)
print (filename)
arg1= '-sOutputFile=' + "c" + file #added a c to the filename
p = subprocess.Popen(['C:/Program Files/gs/gs9.15/bin/gswin64c.exe',
'-sDEVICE=pdfwrite',
'-dCompatibilityLevel=1.4',
'-dPDFSETTINGS=/screen', '-dNOPAUSE',
'-dBATCH', '-dQUIET', str(arg1), filename],
stdout=subprocess.PIPE)
print (p.communicate())
|
Serving static file in django on Apache with mod_wsgi
Question:
<VirtualHost *:80>
ServerName test_site.com
ServerAdmin [email protected]
#<IfModule mpm_itk_module>
#AssignUserID USERNAME USERNAME
#</IfModule>
DocumentRoot /var/www/test_site.com
<Directory />
Options All
AllowOverride All
Require all granted
</Directory>
#Alias /static/ /var/www/test_site.com/testproject/static/
#<Location "/static/">
#Options -Indexes
#</Location>
Alias /robots.txt /var/www/test_site.com/testproject/static/robots.txt
Alias /favicon.ico /var/www/test_site.com/testproject/static/favicon.ico
Alias /media/ /var/www/test_site.com/testproject/media/
Alias /static/ /var/www/test_site.com/testproject/static/
<Directory /var/www/test_site.com/testproject/static>
Require all granted
Options -Indexes
</Directory>
<Directory /var/www/test_site.com/testproject/media>
Require all granted
Options -Indexes
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
WSGIScriptAlias / /var/www/test_site.com/testproject/testproject/wsgi.py
WSGIDaemonProcess test_site.com python-path=/var/www/test_site.com processes=2 threads=15 display-name=test_site.com
WSGIProcessGroup test_site.com
</VirtualHost>
This is my virtual host file on Apache. I am having problems serving static
files. My Apache version is the newest one and the OS is Ubuntu 14.04 Server.
Here is my settings file:
"""
Django settings for testproject project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '20uuf(=u2+a&(!pbij6ka%_-m(l24baz^4+o^c@!#rfczhq=_r'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'temp'
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'testproject.urls'
WSGI_APPLICATION = 'testproject.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
"""DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}"""
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'temp',
'USER': 'root',
'PASSWORD': 'pAssw0rd',
'HOST': 'localhost',
'PORT': '5432',
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
Please someone guide me on what I did wrong and what I am missing here.
Answer: Solve the issue by following this link:
<https://docs.djangoproject.com/en/1.7/howto/deployment/wsgi/modwsgi/#serving-files>
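In short, per that deployment guide: under mod_wsgi, Apache (not Django) must serve `/static/`, regardless of the `DEBUG` setting (Django's automatic static serving only works with `runserver`). The essential pieces are an `Alias` plus a matching `Directory` grant, conventionally listed before `WSGIScriptAlias` so static requests are never handed to Django. A minimal sketch using the paths from the vhost above (run `python manage.py collectstatic` first if you serve out of `STATIC_ROOT`):

```apache
Alias /static/ /var/www/test_site.com/testproject/static/

<Directory /var/www/test_site.com/testproject/static>
    Require all granted
</Directory>

# Listed after the Alias so /static/ requests never reach Django.
WSGIScriptAlias / /var/www/test_site.com/testproject/testproject/wsgi.py
```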
|
InfogainLoss layer
Question: I wish to use a loss layer of type
[`InfogainLoss`](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1InfogainLossLayer.html#details)
in my model. But I am having difficulties defining it properly.
1. Is there any tutorial/example on the usage of `INFOGAIN_LOSS` layer?
2. Should the input to this layer, the class probabilities, be the output of a `SOFTMAX` layer, or is it enough to input the "top" of a fully connected layer?
`INFOGAIN_LOSS` requires three inputs: class probabilities, labels and the
matrix `H`. The matrix `H` can be provided either as a layer parameter
`infogain_loss_param { source: "filename" }`, or as a third input (bottom) to the layer.
Suppose I have a python script that computes `H` as a `numpy.array` of shape
`(L,L)` with `dtype='f4'` (where `L` is the number of labels in my model).
3. How can I convert my `numpy.array` into a `binproto` file that can be provided as a `infogain_loss_param { source }` to the model?
4. Suppose I want `H` to be provided as the third input (bottom) to the loss layer (rather than as a model parameter). How can I do this?
Do I define a new data layer which "top" is `H`? If so, wouldn't the data of
this layer be incremented every training iteration like the training data is
incremented? How can I define multiple unrelated input "data" layers, and how
does caffe know to read from the training/testing "data" layer batch after
batch, while from the `H` "data" layer it knows to read only once for all the
training process?
Answer: I still do not have a full answer to my question. This answer only covers my
third and fourth parts:
_3\. How can I convert a numpy.array into a binproto file_ :
In python
H = np.eye( L, dtype = 'f4' )
import caffe
blob = caffe.io.array_to_blobproto( H.reshape( (1,1,L,L) ) )
with open( 'infogainH.binaryproto', 'wb' ) as f :
f.write( blob.SerializeToString() )
Now you can add the `INFOGAIN_LOSS` layer to the model prototxt with `H` as a
parameter:
layer {
bottom: "topOfPrevLayer"
bottom: "label"
top: "infoGainLoss"
name: "infoGainLoss"
type: "InfogainLoss"
infogain_loss_param {
source: "infogainH.binaryproto"
}
}
* * *
_4\. How to load`H` as part of a DATA layer_
Quoting [Evan Shelhamer's post](https://github.com/BVLC/caffe/issues/1640):
> There's no way at present to make data layers load input at different rates.
> Every forward pass all data layers will advance. However, the constant H
> input could be done by making an input lmdb / leveldb / hdf5 file that is
> only H since the data layer will loop and keep loading the same H. This
> obviously wastes disk IO.
* * *
As for the first two parts of my question:
_1\. Is there any tutorial/example on the usage of**InfogainLoss** layer?_:
A nice example can be found
[here](http://stackoverflow.com/questions/30486033/tackling-class-imbalance-
scaling-contribution-to-loss-and-sgd?lq=1): using **InfogainLoss** to tackle
class imbalance.
_2\. Should the input to this layer, the class probabilities, be the output of
a**Softmax** layer?_
According to [Yair's answer](http://stackoverflow.com/a/30037614/1714410) the
answer is _YES_ it should be the output of **Softmax** layer (or any other
layer that makes sure the input values are in range [0..1]).
* * *
Recently, I noticed that using `"InfogainLoss"` on top of `"Softmax"` layer
can lead to numerical instability. Thus I suggest combining these two layers
into a single one (much like `"SoftmaxWithLoss"` layer). The mathematics of
this combined layer are given
[here](http://stackoverflow.com/a/34917052/1714410). An implementation of this
"combined" infogainLoss + Softmax can be found in [this pull
request](https://github.com/BVLC/caffe/pull/3855).
|
How to invoke super when the parent method may not be defined?
Question: Certain classes in the Python standard library (and more generally) make use
of dynamic dispatch to call specialised methods in subclasses.
For example, the
[`ast.NodeVisitor`](https://docs.python.org/3/library/ast.html) class defines
a `visit` method. This method calls `visit_classname` methods where
appropriate. These methods are not defined on `ast.NodeVisitor` itself, but
may be provided by interested subclasses.
In other words, subclasses override only the methods that they wish to handle,
eg:
class SpecialNodeVisitor(ast.NodeVisitor):
def visit_FunctionDef(self, node):
print(node) # prints any node of type FunctionDef
Things get more complicated if `SpecialNodeVisitor` is itself subclassed.
`super()` may be used if `visit_FunctionDef` is overriden, but not in other
cases, ie:
class EvenMoreSpecialNodeVisitor(SpecialNodeVisitor):
def visit_FunctionDef(self, node):
super().visit_FunctionDef(node) # works fine
# ...
def visit_Call(self, node):
super().visit_Call(node) # AttributeError
# ...
Specifically, the second example causes `AttributeError: 'super' object has no
attribute 'visit_Call'`.
* * *
The above behaviour makes sense: the parent class _doesn't_ have the method in
question. However, it causes two problems:
* When writing the subclass, some dynamic methods need to call `super()`, but some don't. This inconsistency makes it _really_ easy to make mistakes.
* If a new dynamic method is later added to the parent class, all of the subclasses have to be altered to call `super()`. This breaks a really fundamental rule of object-oriented programming.
**Ideally, all subclass methods should be able to make use of`super()`, with
the call being a no-op if the method is not defined. Is there a 'pythonic' way
to achieve this?**
I am particularly after a solution which is transparent to the subclass (for
example, I don't want to try/except on `AttributeError` in every single
method, as this would be just as easy to forget, and is ugly as hell).
(It's worth noting that in many cases, and indeed in this particular example,
it's not possible to simply define all the possible methods on the parent
class, as doing so may have side effects.)
Answer: You cannot have what you want; the most readable method is to simply use
`try..except` on that `AttributeError`:
def visit_Call(self, node):
try:
super().visit_Call(node)
except AttributeError:
pass
The alternative would be for you to add aliases for
`NodeVisitor.generic_visit` for every node type to `SpecialNodeVisitor`:
import inspect
class SpecialNodeVisitor(ast.NodeVisitor):
def visit_FunctionDef(self, node):
print(node) # prints any node of type FunctionDef
_ast_nodes = inspect.getmembers(
ast,
lambda t: isinstance(t, type) and issubclass(t, ast.AST) and t is not ast.AST)
for name, node in _ast_nodes:
name = 'visit_' + name
if not hasattr(SpecialNodeVisitor, name):
setattr(SpecialNodeVisitor, name, ast.NodeVisitor.generic_visit)
You could encapsulate that into a meta class if you want to. Since `super()`
looks directly into the class `__dict__` namespaces you cannot simply define a
`__getattr__` method on the meta class to do the lookup dynamically,
unfortunately.
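The metaclass encapsulation mentioned above might look like this (a sketch, Python 3 only): every class built with it automatically gains a `generic_visit` alias for any `visit_<Node>` method it doesn't define, so `super()` always resolves.

```python
import ast
import inspect

class VisitorMeta(type):
    """Alias every missing visit_<Node> method to generic_visit."""
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        for node_name, node_type in inspect.getmembers(ast):
            if (isinstance(node_type, type)
                    and issubclass(node_type, ast.AST)
                    and node_type is not ast.AST
                    and not hasattr(cls, 'visit_' + node_name)):
                setattr(cls, 'visit_' + node_name, ast.NodeVisitor.generic_visit)
        return cls

class SpecialNodeVisitor(ast.NodeVisitor, metaclass=VisitorMeta):
    def __init__(self):
        self.seen = []
    def visit_FunctionDef(self, node):
        self.seen.append(node.name)   # record instead of print, for testing
        self.generic_visit(node)

class EvenMoreSpecialNodeVisitor(SpecialNodeVisitor):
    def visit_Call(self, node):
        super().visit_Call(node)      # no AttributeError: the alias exists
```

`EvenMoreSpecialNodeVisitor().visit(tree)` now works even though `SpecialNodeVisitor` never defined `visit_Call` itself.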
|
Python PIL import Mac compatiblity
Question: I have wondered for a long time: can I redirect Python imports? On my machine
I did:
pip install pil
and then I can do:
from PIL import Image
That works great but the problem is many programs just want Image, they want:
import Image
Now my question is can I "redirect" the above statement so that I can use
import Image? If that does not work, how could I make it work?
Answer: Basically you can use any of the methods mentioned
[here](https://docs.python.org/2/library/site.html).
On my linux installation, PIL already uses one of those - there's a directory
called PILcompat which has files like Image.py, and a file called
PILcompat.pth which is used to tell python to add this directory to its module
search path. This way when
import Image
is executed, python finds PILcompat/Image.py which executes its contents -
from PIL.Image import *
I'm guessing either the .pth file or the directory are missing on your
machine.
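As a concrete illustration of the mechanism (not real PIL — `FakePIL` and its `MAGIC` constant are made up for the demo), here is the compat-module trick built by hand in a temp directory. On a real system you would instead drop an `Image.py` containing `from PIL.Image import *` into a directory on `sys.path`, or name that directory in a `.pth` file under site-packages:

```python
import os
import sys
import tempfile

base = tempfile.mkdtemp()

# A stand-in package playing the role of PIL.
pkg = os.path.join(base, 'FakePIL')
os.mkdir(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'Image.py'), 'w') as f:
    f.write("MAGIC = 42\n")

# The compat shim: a top-level Image.py that just re-exports.
with open(os.path.join(base, 'Image.py'), 'w') as f:
    f.write("from FakePIL.Image import *\n")

sys.path.insert(0, base)

import Image                 # resolved via the shim
print(Image.MAGIC)           # -> 42
```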
|
How to reference a string in a list in python
Question: I've been writing a program (a hangman game) today, and have come across a few
snags, I know you guys don't like people just using this as answers to their
homework, but I have searched quite a while, and while I have posted this
program earlier in the development, I have come to new ground. I am getting to
grips with lists, tuples, dictionaries, etc. and this program has been great
practice. Sorry for the wall of text!
_The Question:_
Is there a way to refer to a string when using a list?
Context - defining a function (last line being a test to see if working)
def click_1 (text):
key_1 = word.index [letter]
hidden_word[key_1] = letter
print (hidden_word)
This throws up a long error, which I won't post in full; here is the 'important'
part in my eyes:
File "/Users/foo/Desktop/Hangman.py", line 19, in click_1
key_1 = word.index [letter]
TypeError: 'builtin_function_or_method' object is not subscriptable
If someone could help me, that would be great. I've tried using the 'in'
statement, but I have not been able to make it work - currently, though I
suspect it may be part of the solution. I've tried:
key_1 = word.index [letter in word]
and:
key_1 = word.index([letter] in word)
both of which do not work.
Cheers in advance from a young programmer!
Answer: I have no idea what your function is SUPPOSED to do, but here's the issue
you're having:
def click_1(text):
key_1 = word.index [letter]
# word.index(letter) calls the function,
# word.index[letter] tries to treat the function like a dictionary
# and pull up its value attached to the key `letter`
hidden_word[key_1] = letter
print (hidden_word)
Alternatively I'd suggest looking at a different design pattern. Try instead:
the_word = generate_random_word_from(word_list)
# which is probably as simple as random.choice(word_list)
guessed_letters = set()
def display_word():
global the_word
global guessed_letters
# globals are bad in practice, but this is actually a great use case
# for a global if you haven't wrapped this all up in OOP yet!
masked_list = [letter if letter in guessed_letters else "_" for
letter in the_word]
# this is literally:
# # masked_list = []
# # for letter in the_word:
# # if letter in guessed_letters:
# # masked_list.append(letter)
# # else:
# # masked_list.append("_")
masked_word = ''.join(masked_list)
# join the list on an empty string, e.g.:
# ['s','_','m','e','w','_','r','d'] becomes 's_mew_rd'
print(masked_word)
# this wouldn't be a bad place to trigger your win condition, too.
# Could be as simple as
# # if "_" not in masked_word:
# # you_win()
def guess_letter():
global the_word
global guessed_letters
guessed_letter = prompt_for_letter_somehow()
if guessed_letter not in the_word:
add_hangman_piece() # put another arm/leg/whatever on your dude
guessed_letters.add(guessed_letter)
# add that letter to the guessed_letters set anyway
# validation would be nice here so you can't just keep guessing the same letters
|
Trying to crack XOR key for project (python2.7)
Question: Hi im trying to to encrypt a code then try and find the key for the code but
not getting the best results with dont know what i can do here
from itertools import izip, cycle
import itertools
import binascii
a = 0
message = "Hello friend"
length = len(message)
key = "s"
c = 0
def xor_crypt_string(data, key):
return "".join(chr(ord(x) ^ ord(y)) for (x,y) in izip(data, cycle(key)))
encrypt = xor_crypt_string(message, key)
while (c <= length):
res = itertools.permutations('abcdefghijklmnopqrstuvwxyz', c) # 3 is the length of your result.
c = c + 1
for i in res:
keys = ''.join(i)
decrypt = xor_crypt_string(encrypt, keys)
for d in decrypt:
if (ord(d) > 47 and ord(d) < 58) or (ord(d) == 32) or (ord(d) > 64 and ord(d) < 91) or (ord(d) > 96 and ord(d) <123):
print decrypt
else:
a = 0
Answer: I suspect that you want a call to `all` rather than a regular `for` loop when
you're checking if your decryption is valid. That will test all the characters
and only print the decryption if they're all valid:
if all(x == 32 or 47 < x < 58 or 64 < x < 91 or 96 < x < 123
for x in (ord(c) for c in decrypt)):
print decrypt
You can make your test even clearer than I have above by using a string or set
membership test, rather than explicitly checking ordinal ranges (e.g. `if
all(c in string.printable for c in decrypt)` is only a little more broad than
your test).
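Putting it together as a self-contained sketch (Python 3; brute-forces single-letter keys only, and uses a letters/digits/space whitelist equivalent to your `ord()` ranges — note that on short messages other keys could in principle pass the test too):

```python
from itertools import cycle
import string

ALLOWED = set(string.ascii_letters + string.digits + ' ')

def xor_crypt(data, key):
    return ''.join(chr(ord(x) ^ ord(y)) for x, y in zip(data, cycle(key)))

encrypted = xor_crypt("Hello friend", "s")

matches = []
for candidate in string.ascii_lowercase:
    decrypted = xor_crypt(encrypted, candidate)
    # all() rejects a key as soon as one character falls outside the
    # whitelist, instead of printing once per valid character.
    if all(c in ALLOWED for c in decrypted):
        matches.append((candidate, decrypted))

print(matches)
```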
|
How to calculate Time Between Day one and Day two (Python)
Question:
from datetime import datetime
FMT = '%H%M'
rate1 = 35.34
Midnight = "0000"
signOnSun1 = raw_input("What time did you sign on Sunday: ");
signOffSun1 = raw_input("What time did you sign off Sunday: ");
totalShift = (datetime.strptime(signOffSun1, FMT) - datetime.strptime (signOnSun1, FMT))
midnightToSignOff = (datetime.strptime(signOffSun1, FMT) - datetime.strptime (Midnight, FMT))
midnightToSignOff = midnightToSignOff.total_seconds()/60.0/60
Basically this is what I have. If I sign on at 1800 and sign off that night at
0200, I can't return a proper answer of 8 hours.
Answer: The problem is the program doesn't know that the shift ended on the next day.
To account for that, you can check if the sign on time is later than the sign
off time, if that's the case, add a day to the sign off time.
from datetime import datetime, timedelta
FMT = '%H%M'
sign_on_time_str = raw_input("What time did you sign on Sunday: ");
sign_off_time_str = raw_input("What time did you sign off Sunday: ");
sign_on_time = datetime.strptime(sign_on_time_str, FMT)
sign_off_time = datetime.strptime(sign_off_time_str, FMT)
if sign_on_time > sign_off_time:
sign_off_time += timedelta(days=1)
total_shift = sign_off_time - sign_on_time
print total_shift.total_seconds() / 60 / 60
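The same logic wrapped as a reusable function (Python 3 shown), confirming the 1800 → 0200 case:

```python
from datetime import datetime, timedelta

FMT = '%H%M'

def shift_hours(sign_on, sign_off):
    # Clock times only: if the shift "ends" before it starts, it
    # crossed midnight, so move sign-off to the next day.
    on = datetime.strptime(sign_on, FMT)
    off = datetime.strptime(sign_off, FMT)
    if on > off:
        off += timedelta(days=1)
    return (off - on).total_seconds() / 3600

print(shift_hours("1800", "0200"))  # -> 8.0
print(shift_hours("0900", "1730"))  # -> 8.5
```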
|
Python dateutil parser.parse("On") as today
Question: Under both Python 2.7 and 3.3.5, I am trying to parse a string to date.
>>> from dateutil import parser
>>> parser.parse("On")
datetime.datetime(2014, 12, 25, 0, 0)
Today is 2014-12-25, so it appears the result is today when parsing "On" (case
insensitive). Is this behavior correct?
In fact, I would like this parse to raise an exception, as I don't think "On"
is a valid date. How should I "correct" the behavior to my expectation? I mean
not checking the input as "On", because I don't know if any other string like
'On' will surprise me again.
In some special cases, even with `fuzzy=False`, the parse returns today without
an exception. For example:
>>> parser.parse("' '", fuzzy=False)
datetime.datetime(2014, 12, 25, 0, 0)
Based on the feedback, a possible workaround is to pass a rarely used default
date, then compare the result to see whether the parse succeeded or not:
>>> parser.parse("' '", fuzzy=False, default=datetime(1900,1,1))
Answer: "on" is in
[dateutil.parser.parserinfo.JUMP](http://dateutil.readthedocs.org/en/latest/parser.html#dateutil.parser.parserinfo).
When dateutil's parser parses a timestr, it checks whether each component of
the timestr is in the JUMP list. The only component of "On" is "on", which is
in the JUMP list, so the default datetime is used.
When checking, the component is lowercased first, so the match is case insensitive.
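If you'd rather reject such inputs up front than compare against a sentinel default date, a stdlib-only pre-check is possible. The `JUMP` set below is a hand-copied subset of `dateutil.parser.parserinfo.JUMP` (an assumption — check your dateutil version for the full list):

```python
import re

# Subset of dateutil's JUMP words (lowercase).
JUMP = {"at", "on", "and", "ad", "m", "t", "of", "st", "nd", "rd", "th"}

def has_real_content(timestr):
    """True if at least one token is NOT a jump word, i.e. the
    string contains something dateutil would actually parse."""
    tokens = re.findall(r"[A-Za-z]+|\d+", timestr)
    return any(t.lower() not in JUMP for t in tokens)

print(has_real_content("On"))          # -> False: would return today
print(has_real_content("' '"))         # -> False
print(has_real_content("2014-12-25"))  # -> True
```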
|
Haystack-django SyntaxError: 'return' outside function
Question: **My Search_Indexes**
from datetime import datetime
from haystack import indexes
import json
from dezign.models import dezign
class dezignIndex(indexes.SearchIndex,indexes.Indexable):
text=indexes.CharField(document=True,use_template=True)
post_date=indexes.DateTimeField(model_attr='post_date')
like=indexes.IntegerField(model_attr='like',indexed=False)
#content_auto=indexes.EdgeNgramField(model_attr='title')
#r= indexes.CharField(indexed=False)
def get_model(self):
return dezign
def index_queryset(self,using=None):
return self.get_model().objects.filter(like__exact=0)
# Error in prepare method
def prepare(self, object):
self.prepared_data = super(dezignIndex, self).prepare(object)
self.dezign_jsonformat=[]
select_dezign = dezign.objects.filter(like=self.prepared_data['like'])
for i in select_dezign:
dezign_jsonformat.append({'title':i.title,'body':i.body,'like':i.like,'date':i.post_date})
self.prepared_data['list']=json.dumps(dezign_jsonformat)
return self.prepared_data
When i run in command prompt
**I am using Haystack with Whoosh**
**python .\manage.py rebuild_index**
**_Error_**
PS C:\Python27\Scripts\dezignup_django> python .\manage.py rebuild_index
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
Traceback (most recent call last):
File ".\manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 285, in execute
output = self.handle(*args, **options)
File "C:\Python27\lib\site-packages\haystack\management\commands\rebuild_index.py", line 15, in handle
call_command('clear_index', **options)
File "C:\Python27\lib\site-packages\django\core\management\__init__.py", line 159, in call_command
return klass.execute(*args, **defaults)
File "C:\Python27\lib\site-packages\django\core\management\base.py", line 285, in execute
output = self.handle(*args, **options)
File "C:\Python27\lib\site-packages\haystack\management\commands\clear_index.py", line 50, in handle
backend.clear()
File "C:\Python27\lib\site-packages\haystack\backends\whoosh_backend.py", line 239, in clear
self.setup()
File "C:\Python27\lib\site-packages\haystack\backends\whoosh_backend.py", line 126, in setup
self.content_field_name, self.schema = self.build_schema(connections[self.connection_alias].get_unified_index().all_
searchfields())
File "C:\Python27\lib\site-packages\haystack\utils\loading.py", line 316, in all_searchfields
self.build()
File "C:\Python27\lib\site-packages\haystack\utils\loading.py", line 200, in build
indexes = self.collect_indexes()
File "C:\Python27\lib\site-packages\haystack\utils\loading.py", line 169, in collect_indexes
search_index_module = importlib.import_module("%s.search_indexes" % app)
File "C:\Python27\lib\site-packages\django\utils\importlib.py", line 40, in import_module
__import__(name)
File "C:\Python27\Scripts\dezignup_django\dezign\search_indexes.py", line 28
return self.prepared_data
SyntaxError: 'return' outside function
I am doing a small web search project where we need data in JSON format.
Please help; I will be very thankful to you.
Answer: Make sure you're not mixing spaces and tabs for indentation.
One possible reason for the error is that the line containing the `return`
statement is indented with tab, while other statements are indented with
spaces.
Choose spaces (preferred) or tab only for indentation.
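A quick way to locate the offender without eyeballing whitespace — note this sketch only catches tabs and spaces mixed within a single line's indentation (Python 2's `python -tt script.py` flag catches inconsistent use across lines as well):

```python
def mixed_indent_lines(lines):
    """Return 1-based numbers of lines whose leading whitespace
    mixes tabs and spaces."""
    bad = []
    for n, line in enumerate(lines, 1):
        indent = line[:len(line) - len(line.lstrip())]
        if ' ' in indent and '\t' in indent:
            bad.append(n)
    return bad

src = ["def f():\n", "    x = 1\n", " \treturn x\n"]
print(mixed_indent_lines(src))  # -> [3]
```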
|
Python - Launch external program and quit script?
Question: I've written a little script to launch a program and send a mail if the program
has quit:
#!/usr/bin/env python
# coding: utf8
import psutil
from email.mime.text import MIMEText
from subprocess import Popen, PIPE
import time
import os
program = "transmission-gtk"
if not program in [psutil.Process(i).name for i in psutil.get_pid_list()]:
try:
os.system(program)
text = "but it has been restarted"
except IOError as e:
text = "The restart failed!\n" + e
time.sleep(2)
msg = MIMEText("Transmission has been closed !!\n" + text)
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "subject"
p = Popen(["/usr/sbin/sendmail", "-t"], stdin=PIPE)
p.communicate(msg.as_string())
The script is launched by cron, every 15 min. All works, except one thing:
**transmission quits at the same moment the script quits**...
I don't want to write a permanent script (raspberry-pi), so a `while` loop is
not what I want.
So, how should I launch my program?
Answer: You may use `os.system(program+'&')` to background the execution of your
program.
Also, even if your program works, you should have a look at the `subprocess`
module (which intends to replace `os.system()`, among other things).
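With `subprocess` the same idea looks like this (Python 3 shown; `transmission-gtk` in the comment is the program from the question — the demo launches the Python interpreter itself so the sketch is runnable anywhere):

```python
import subprocess
import sys

def launch_detached(argv):
    # Popen returns immediately; as long as we never call .wait() or
    # .communicate(), the child keeps running on its own.
    return subprocess.Popen(argv,
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)

# Real use would be: launch_detached(["transmission-gtk"])
proc = launch_detached([sys.executable, "-c", "print('child ran')"])
print(proc.pid > 0)  # -> True
```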
|
Computing frequencies fast in Python
Question: I need to count the frequencies of words in a corpus. Usually I use the
Counter class from the collections package.
from collections import Counter
list_of_words = ['one', 'two', 'three', 'three']
freqs = Counter(list_of_words)
However, the corpus I am analysing consists of several million words, so it
would be great if there was a faster way to compute these scores?
Here is the code that reads in the words:
from read_cg3 import read_cg3
test = read_cg3('/Users/arashsaidi/Work/Corpus/DUO_Corpus/Bokmaal-tagged-random/DUO_BM_0.txt')
count = 0
word_list = []
for sentence in test:
for word in sentence:
count += 1
word_list.append(word)
print count
The read_cg3 is a module that reads parsed files and returns a list of
sentences. Here is the module:
import re
def is_number(s):
try:
float(s)
return True
except ValueError:
return False
def read_cg3(cg3_file):
"""
Reads a cg3 file and returns a list of each sentence with Token, parsed, and one tag
:param cg3_file: path to file
:return: list of words + attributes
"""
rx_token = re.compile("^\"<(.+?)>\"$")
rx_attributes = re.compile("^\s+\".+?\"\s+.+$")
rx_eos = re.compile("^\s*$")
curr_token = None
curr_word = []
curr_sentence = []
result = []
with open(cg3_file) as cg3_file:
for line in cg3_file:
if rx_token.match(line):
curr_token = "\"%s\"" % rx_token.match(line).group(1)
# print curr_token
if rx_attributes.match(line):
curr_word = line.split()
# print curr_word[0], curr_word[1]
# print curr_word
if curr_token and curr_word:
# to get more tags uncomment this and comment below
# curr_sentence += [[curr_token] + curr_word]
if '$' not in curr_word[0] and not is_number(curr_word[0].strip('"').replace('.', '')) \
and len(curr_word[0]) < 30:
# curr_sentence += [[curr_token.strip('"')] +
# [curr_word[0].lower().strip('"')] + [curr_word[1]]]
curr_sentence += [curr_word[0].lower().strip('"')]
curr_token = None
curr_word = []
if rx_eos.match(line):
# print curr_sentence
if curr_sentence:
result += [curr_sentence]
curr_sentence = []
curr_token = None
curr_word = []
# cleanup if last sentence not EOL
if curr_token and curr_word:
print 'cg3 reached end of file and did some cleanup on file {}'.format(cg3_file)
curr_sentence += [[curr_token] + curr_word]
if curr_sentence:
print 'cg3 reached end of file and did some cleanup on file {}'.format(cg3_file)
result += curr_sentence
return result
Here is the way the files read by the read_cg3 look like:
"<TEKNOLOGI>"
"teknologi" subst appell mask ub ent
"<OG>"
"og" konj <*>
"<UNDERVISNING>"
"undervisning" subst appell fem ub ent <*>
"<|>"
"$|" clb <overskrift> <<<
"<En>"
"en" det mask ent kvant
"<intervjuunders¯kelse>"
"intervjuunders¯kelse" subst appell mask ub ent
"<av>"
"av" prep
"<musikklÊreres>"
"musikklÊrer" subst appell mask ub fl gen
"<didaktiske>"
"didaktisk" adj fl pos
"<bruk>"
"bruk" subst appell mask ub ent
"<av>"
"av" prep
"<digitale>"
"digital" adj fl pos
"<verkt¯y>"
"verkt¯y" subst appell n¯yt ub fl <*¯y>
"<i>"
"i" prep
"<undervisningsfaget>"
"undervisningsfag" subst appell n¯yt be ent
"<komposisjon>"
"komposisjon" subst appell mask ub ent
"<i>"
"i" prep
"<videregÂende>"
"videregÂende" adj ub m/f ent pos
"<skole>"
"skole" subst appell mask ub ent
"<|>"
"$|" clb <overskrift> <<<
"<Markus>"
"Markus" subst prop mask
"<A.>"
"A." subst prop fork <*>
"<SkjÊrstad>"
"SkjÊrstad" subst prop <*stad> <*>
"<|>"
"$|" clb <overskrift> <<<
My method just reads in one file, this is for testing, the corpus consists of
around 30000 files.
Answer: You can use the built-in `count` method of lists, which is implemented in C:
dict((i,test_list.count(i)) for i in set(test_list))
For a better understanding, you can look at the following benchmark:
from timeit import timeit
s1="""l=[1, 1, 1, 2, 3, 4, 1, 2, 5, 7, 2, 3]
from collections import Counter
Counter(l)"""
s2="""l=[1, 1, 1, 2, 3, 4, 1, 2, 5, 7, 2, 3]
dict((i,l.count(i)) for i in set(l))"""
print 'using Counter : ' ,timeit(stmt=s1, number=1000000)
print 'using built-in : ',timeit(stmt=s2, number=1000000)
result :
using Counter : 8.78281712532
using built-in : 2.91788387299
|
How to make a ping client in Python socket?
Question: I'm learning Python and, as practice, decided to write a server for the
network game "Backgammon". The question is: how do I make sure that an opponent
found in the queue is still online? How do I send a ping and receive the answer?
I'm trying to do it on line 71, but this is incorrect.
import pymongo, socket, threading, string, json, MySQLdb, random, time
stat = True
conn = {}
#gamers = {}
games = {}
diceroll = []
tempconn = {}
c = pymongo.Connection()
db = c.nardy
nru = db.notregusers
uwait = db.userwait
class ClientThread(threading.Thread):
def __init__(self, channel, details):
self.channel = channel
self.details = details
threading.Thread.__init__(self)
def getUserId(self, data):
json_data = json.loads(data)
if json_data['t'] == 'n':
print ' - User not register'
if nru.find({'identuser':json_data['i'], 'platform':json_data['p']}).count() == 0:
print ' - New user'
count = nru.find().count();
newid = count + 1;
nru.save({'id':newid, 'identuser':json_data['i'], 'platform':json_data['p'], 'level':1})
return nru.find_one({'id':newid})
else:
print ' - Old user'
return nru.find_one({'identuser':json_data['i'], 'platform':json_data['p']})
elif json_data['t'] == 'r':
return 9999
def mySQLConnect(self):
db = MySQLdb.connect(host="localhost", user="", passwd="", db="", charset='utf8')
return db
def run(self):
while True:
data = self.channel.recv(1024)
try:
json_data = json.loads(data)
comm = json_data['c']
if comm == 'x':
if conn.has_key(gamer['level']) and self in conn[gamer['level']]:
conn[gamer['level']].remove(self)
self.channel.close()
break
elif comm == 'h':
gamer = self.getUserId(data)
self.gamer = gamer
elif comm == 'f':
lev = 0
findenemy = 1
while findenemy == 1:
if conn.has_key(gamer['level']):
lev = gamer['level']
elif conn.has_key((gamer['level'] - 1)):
lev = (gamer['level'] - 1)
elif conn.has_key((gamer['level'] + 1)):
lev = (gamer['level'] + 1)
if lev != 0:
if len(conn[lev]) > 0:
enemy = conn[lev].pop(0)
firsttime = time.time()
enemy.channel.send('{"o":"p"}')
while (firsttime + 30) > time.time() and findenemy == 1:
if comm == 'k':
findenemy = 0
print ' - Ping enemy: ok'
if findenemy == 0:
self.enemy = enemy
gameid = str(self.gamer['id']) + '_' + str(self.enemy.gamer['id'])
games[gameid] = {'dice':[], 'nextstep':0}
vrag = self.enemy
a = random.sample(range(1,7),2)
self.channel.send('{"o":"s", "u":"' + str(a[0]) + '", "e":"' + str(a[1]) + '"}')
vrag.channel.send('{"o":"s", "u":"' + str(a[1]) + '", "e":"' + str(a[0]) + '"}')
if a[0] > a[1]:
step = 1
games[gameid]['nextstep'] = self.gamer['id']
self.channel.send('{"o":"u"}')
vrag.channel.send('{"o":"w"}')
else:
step = 2
games[gameid]['nextstep'] = self.enemy.gamer['id']
self.channel.send('{"o":"w"}')
vrag.channel.send('{"o":"u"}')
tempconn[self.enemy] = self
else:
conn[lev].append(self)
findenemy = 0
self.channel.send('{"o":"q"}')
else:
conn[gamer['level']] = [self]
self.channel.send('{"o":"q"}')
elif comm == 'w':
if not hasattr(self, 'enemy'):
self.enemy = tempconn[self]
vrag = self.enemy
gameid = str(self.enemy.gamer['id']) + '_' + str(self.gamer['id'])
self.channel.send('{"o":"o"}')
elif comm == 'r' and games[gameid]['nextstep'] == self.gamer['id']:
dice = []
dice.append(self.gamer['id'])
dice.append(random.randint(1,6))
dice.append(random.randint(1,6))
games[gameid]['dice'].append(dice)
self.channel.send('{"o":"r", "f":"' + str(dice[1]) + '", "s":"' + str(dice[2]) + '"}')
vrag.channel.send('{"o":"r", "f":"' + str(dice[1]) + '", "s":"' + str(dice[2]) + '"}')
games[gameid]['nextstep'] = self.enemy.gamer['id']
elif comm == 'r' and games[gameid]['nextstep'] != self.gamer['id']:
self.channel.send('{"o":"w"}')
elif comm == 'm' and 't' in json_data:
self.channel.send('{"o":"y", "t":"' + str(json_data['text']) + '"}')
vrag.channel.send('{"o":"e", "t":"' + str(json_data['text']) + '"}')
elif comm == 's':
self.channel.send('{"o":"g", "countgame":"' + str(len(games)) + '"}')
elif comm == 'q' and 's' in json_data and 'e' in json_data:
print " - Player step"
elif comm == 'b' and self in conn[gamer['level']]:
conn[gamer['level']].remove(self)
except ValueError:
continue
print 'Closed connection:', self.details[0]
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 8088))
server.listen(5)
while stat:
channel, details = server.accept()
ClientThread(channel, details).start()
Perhaps, in addition to answering the question, you give me some
recommendations ...
Answer: There are two options for you to send out a ping and get a response.
1. Use `scapy`: build an ICMP echo request and send it with `sr1`, which sends the packet and waits for the reply.
2. Use `os.system` to invoke the command line to send a ping, like `os.system('ping 127.0.0.1')`
Hope it helps.:)
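Since the check in the question is really an application-level liveness test over an existing TCP connection, a third option is to set a timeout on the socket and wait for the peer's reply directly, instead of busy-polling `comm` in a loop. A minimal sketch (the JSON framing mirrors the question's protocol; the function name and timeout are illustrative choices):

```python
import json
import socket

def ping_peer(sock, timeout=5.0):
    # Ask the peer to confirm it is alive and wait up to `timeout`
    # seconds for the {"c": "k"} reply, instead of busy-polling.
    sock.settimeout(timeout)
    try:
        sock.send(b'{"o":"p"}')
        reply = sock.recv(1024)
        return json.loads(reply.decode()).get('c') == 'k'
    except (socket.timeout, socket.error, ValueError):
        return False
```

In the server above this would replace the `while (firsttime + 30) > time.time()` loop; restore blocking mode afterwards with `sock.settimeout(None)` if the rest of the code expects blocking reads.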
|
Need a string comparison solution for partial string comparison in Python
Question: **Scenario:** I have some tasks performed for a respective "Section
Header" (stored as a string); the result of each task has to be saved against
the same respective "Existing Section Header" (stored as a string).
While mapping, if the task's "Section Header" matches one of the "Existing
Section Headers", the task results are added to it; if not, the new Section
Header gets appended to the Existing Section Header list.
Existing Section Header Looks Like This:
> [ "Activity (Last 3 Days)", "Activity (Last 7 days)", "Executable running
> from disk", "Actions from File"]
For the below set of strings the expected behaviour is as follows:
"Activity (Last 30 Days)" - a new section should be added.
"Executables running from disk" - the existing "Executable running from disk"
should be referred to [considering the extra "s" in "Executables"].
"Actions from a file" - the existing "Actions from File" should be referred to
[considering the extra article "a"].
Is there any built-in function available in Python that may help implement this
logic? Any suggestion regarding an algorithm for this is highly appreciated.
Answer: This is a case where you may find [regular
expressions](https://docs.python.org/2/library/re.html) helpful. You can use
[`re.sub()`](https://docs.python.org/2/library/re.html#re.sub) to find
specific substrings and replace them. It searches for _non-overlapping_
matches to a regular expression and replaces each one with the specified string.
import re #this will allow you to use regular expressions
def modifyHeader(header):
#change the # of days to 30 (the parentheses must be escaped, or they form a group)
modifiedHeader = re.sub(r"Activity \(Last \d+ Days?\)", "Activity (Last 30 Days)", header)
#add an s to "executable"
modifiedHeader = re.sub(r"Executable running from disk", "Executables running from disk", modifiedHeader)
#add "a"
modifiedHeader = re.sub(r"Actions from File", "Actions from a file", modifiedHeader)
return modifiedHeader
The `r""` refers to [raw
strings](https://docs.python.org/2/library/re.html#raw-string-notation) which
make it a bit easier to deal with the `\` characters needed for regular
expressions, `\d` matches any digit character, and `+` means "1 or more". Read
the page I linked above for more information.
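For the fuzzy part of the matching (the extra "s", the extra article "a"), the standard library's `difflib.get_close_matches` can also help. A sketch, where the function name and the 0.8 cutoff are illustrative choices:

```python
import difflib

existing = ["Activity (Last 3 Days)", "Activity (Last 7 days)",
            "Executable running from disk", "Actions from File"]

def find_section(header, headers, cutoff=0.8):
    # Compare case-insensitively; return the matching existing header,
    # or None when a new section should be appended instead.
    lowered = [h.lower() for h in headers]
    matches = difflib.get_close_matches(header.lower(), lowered,
                                        n=1, cutoff=cutoff)
    return headers[lowered.index(matches[0])] if matches else None

print(find_section("Executables running from disk", existing))
# -> Executable running from disk
print(find_section("Some brand new section", existing))
# -> None
```

One caveat: purely character-based similarity scores "Last 3 Days" and "Last 30 Days" as near-identical (they differ by one character), so numeric fields may still need an explicit check, such as a regular expression.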
|
Learning Python The Hard Way Example 15 Python shell problems
Question: As the title says, I'm having issues with running commands through python
shell, more specifically, I can't seem to figure out how to open and read
files like it tells me to do in the study drill.
here's everything I've done so far:
PS C:\Users\NikoSuave\Desktop\learn python the hard way\works in progress or finished> python
Python 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from sys import argv
>>> script, filename = argv
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 1 value to unpack
>>> txt = open(filename)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'filename' is not defined
>>> filename = argv
>>> script = argv
>>> txt = open(filename)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: coercing to Unicode: need string or buffer, list found
what am I doing wrong? and If i'm way off, could one of you please point me in
the right direction?
Answer: `sys.argv` is a list in Python which contains the command-line arguments
passed to the script, so it is generally used when you run a Python program
from the command line, like:
python prog.py arg1 arg2
Here `arg1` and `arg2` are present in the `argv` list. In the REPL no script
arguments are passed, so `argv` has nothing useful to unpack. This is the reason
you keep getting `ValueError`, `NameError`...
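Putting the same code in a file and running it from the shell makes the unpacking succeed. A sketch (the name `ex15.py` and the sample file are placeholders):

```python
import sys

def main(argv):
    # When run as `python ex15.py sample.txt`, argv is
    # ['ex15.py', 'sample.txt'], so the unpacking succeeds.
    script, filename = argv
    with open(filename) as txt:
        return txt.read()

# Guarded so the sketch can also be imported without needing arguments:
if __name__ == "__main__" and len(sys.argv) == 2:
    print(main(sys.argv))
```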
As for opening a file, it looks like: `file_object = open(filename, mode)` where
`mode` can be `r, w, a, r+` (read, write, append and both read-write). An
example would be:
file = open("newfile.txt", "w")
file.write("hello world in the new file\n")
file.write("and another line\n")
file.close()
The code above opens `newfile.txt` for write and adds the content shown.
Finally closes the file. Similar thing is available while reading the file:
file = open("newfile.txt", "r")
print file.read()
This reads the file `newfile.txt` and prints the content.
|
how to place ".py" file into the search path of your python installation
Question: I have two files say `Geometry.py` and `PeptideBuilder.py` for their
installation They need to be placed into the search path of my python
installation on ubuntu linux machine.
how can I know the search path of my python installation and how can I place
them .
Answer: If you use these for a project, update the Python path in your main script:
import sys
sys.path.insert(0, '/path/to/folder/with/scripts')
If you need them for any instance of Python, you need sudo rights to put them
in site-packages (check the search path for your Python version first):
import sys
print(sys.path)
Usually the standard module path is something like `/usr/lib/pythonX.Y/site-packages`
(that's the standard on my Arch, but it may be different on Ubuntu, so it's
better to check).
EDIT: if the scripts are strictly related to your project, just put them on the
same level as the script that imports them, but I think you already do that.
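You can also ask the interpreter itself where it searches; `sysconfig.get_paths()['purelib']` is the directory where pure-Python packages are installed:

```python
import sys
import sysconfig

# The directory where pure-Python packages are installed
# (site-packages on most systems, dist-packages on Debian/Ubuntu):
print(sysconfig.get_paths()["purelib"])

# Every directory searched on import, in order:
for entry in sys.path:
    print(entry)
```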
|
Unable to launch Selenium with Python in Eclipse
Question: I'm getting **Unresolved import: webdriver Found at:
selenium.webdriver.__init__ 2.44.0** issue when i'm trying to launch selenium
with **python** in **Eclipse**
My sample code
from selenium import webdriver
from selenium.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.python.org")
driver.close()
Using **Python 2.7** , **Pydev 2.2**
**Installed Selenium through PIP**
pip install selenium
Downloading/unpacking selenium
Running setup.py (path:c:\users\ajay_t~1\appdata\local\temp\pip_build_ajay_talpur\selenium\setup.py) egg_info for package selenium
Installing collected packages: selenium
Running setup.py install for selenium
Successfully installed selenium
**When updating Packages**
pip install -U selenium
Requirement already up-to-date: selenium in c:\python27\lib\site-packages
What else i missed please tell me so that i can start executions.
Answer: Please update the PATH environment variable by adding the Python directory
path & site-packages path, then retry: C:\Python27\Scripts;C:\Python27\Lib\site-packages;
|
issues with six under django?
Question: I'm trying to use a package called vcrpy to accelerate the execution of my
django application test suite. I'm using django 1.7 on Mac, with Python 2.7.
I added the following couple of lines to one of my tests:
import vcr
with vcr.use_cassette('recording.yaml'):
The result is an import error:
import vcr
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/vcr/__init__.py", line 2, in <module>
from .config import VCR
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/vcr/config.py", line 6, in <module>
from .cassette import Cassette
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/vcr/cassette.py", line 12, in <module>
from .patch import CassettePatcherBuilder
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/vcr/patch.py", line 8, in <module>
from .stubs import VCRHTTPConnection, VCRHTTPSConnection
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/vcr/stubs/__init__.py", line 9, in <module>
from six.moves.http_client import (
ImportError: No module named http_client
The problematic code in the VCR package itself is:
import six
from six.moves.http_client import (
HTTPConnection,
HTTPSConnection,
HTTPMessage,
HTTPResponse,
)
The funny thing: this code seems to run fine when I'm just running it from a
plain python console, but it results in the above ImportError under Django or
under the django manage.py shell.
Any idea what might be wrong?
( some additional details about the location of the six module:
When I'm running plain python console, I get the following:
Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import six
>>> print six.__file__
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/six.pyc
Doing the same thing, with `import django; django.setup()`, from `manage.py
shell` results in exactly the same directory and same six.pyc file.
)
Answer: Maybe a little bit too late for the original question, but I came here
through Google, so for future reference, here's my solution:
### Problem
The problem I found is that mac os comes with not only python but also some
packages pre-installed. Six is one of those packages and therein lies the
conflict. The pre-installed six takes priority over the pip installed six, but
pip still gives information based on what it itself has installed (e.g. six
1.9.0 vs 1.4.1).
pre-installed (version 1.4.1):
`/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/`
pip installed (whatever you have installed, for me it was 1.9.0):
`/Library/Python/2.7/site-packages/`
You can check if this is the case for you by running:
$ python
>>> import six
>>> print six.__file__
'/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six.py'
### Fixing it
The solution is actually quite simple, just put
`export PYTHONPATH="/Library/Python/2.7/site-packages:$PYTHONPATH"`
in your `~/.bashrc` (or whatever file your shell uses). If you've configured
your pip to install somewhere else, put that in the pythonpath instead.
|
How to convert bytearray with non-ASCII bytes to string in python?
Question: I don't know how to convert Python's bytearray to a string if it
contains non-ASCII bytes. Example:
>>> string='\x9f'
>>> array=bytearray(string)
>>> array
bytearray(b'\x9f')
>>> array.decode()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9f in position 0: ordinal not in range(128)
In my example, I just want to somehow get a string '\x9f' back from the
bytearray. Is that possible?
Answer: In Python 2, just pass it to `str()`:
>>> import sys; sys.version_info
sys.version_info(major=2, minor=7, micro=8, releaselevel='final', serial=0)
>>> string='\x9f'
>>> array=bytearray(string)
>>> array
bytearray(b'\x9f')
>>> str(array)
'\x9f'
In Python 3, you'd want to convert it back to a `bytes` object:
>>> bytes(array)
b'\x9f'
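In Python 3, if you specifically want a `str` with the same code points as the bytes, decode with an 8-bit codec such as `latin-1`, which maps every byte 0-255 to the identical code point:

```python
array = bytearray(b'\x9f')

# latin-1 maps byte 0x9f to code point U+009F, so no UnicodeDecodeError:
s = array.decode('latin-1')
print(repr(s))   # '\x9f'
```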
|
Python: Append Multiple Values for One Key in Nested Dictionary
Question: I have the below list of tuples:
p = [("01","Master"),("02","Node"),("03","Node"),("04","Server")]
I want my output to look like:
y = {
"Master":{"number":["01"]},
"Node":{"number":["02", "03"]},
"Server":{"number":["04"]}
}
I have tried the below code:
y = {}
for line in p:
if line[1] in y:
y[line[1]] = {}
y[line[1]]["number"].append(line[0])
else:
y[line[1]] = {}
y[line[1]]["number"] = [line[0]]
And I get the below error:
Traceback (most recent call last):
File "<stdin>", line 4, in <module>
KeyError: 'number'
How do I solve this?
Answer:
from collections import defaultdict
d = defaultdict(lambda: defaultdict(list))
for v, k in p:
d[k]["number"].append(v)
print(d)
defaultdict(<function <lambda> at 0x7f8005097578>, {'Node': defaultdict(<type 'list'>, {'number': ['02', '03']}), 'Master': defaultdict(<type 'list'>, {'number': ['01']}), 'Server': defaultdict(<type 'list'>, {'number': ['04']})})
without defaultdict:
d = {}
from pprint import pprint as pp
for v, k in p:
d.setdefault(k,{"number":[]})
d[k]["number"].append(v)
pp(d)
{'Master': {'number': ['01']},
'Node': {'number': ['02', '03']},
'Server': {'number': ['04']}}
|
python - Selectively choosing nucleotide sequences from a fasta file?
Question: Using biopython how can I snip genes of my interest from a fasta file if the
gene names are stored in a text file?
#extract genes
f1 = open('ortholog1.txt','r')
f2 = open('all.fasta','r')
f3 = open('ortholog1.fasta','w')
genes = [line.rstrip('\n') for line in f1.readlines()]
i=0
for seq_record in SeqIO.parse(f2, "fasta"):
if genes[i] == seq_record.id:
print genes[i]
f3.write('>'+genes[i])
i=i+1
if i==18:
break
f3.write('\n')
f3.write(str(seq_record.seq))
f3.write('\n')
f2.close()
f3.close()
I was trying the above code. But it has some mistakes and is not generic,
since like `ortholog1.txt` (which contain gene names) there are 5 more similar
files. Also the number of genes in each file varies (not 18 always as here).
Here `all.fasta` is the file which contains all genes. `ortholog1.fasta` must
contain the snipped nucleotide sequence.
Answer: Basically, you can make Biopython do all the work.
I'm going to guess that the gene names in "ortholog1.txt" are exactly the same
as in the fasta file, and that there is one gene name per line. If not, you'd
need to tweak them as necessary to make them line up.
from Bio import SeqIO
with open('ortholog1.txt','r') as f:
orthologs_txt = f.read()
orthologs = orthologs_txt.splitlines()
genes_to_keep = []
for record in SeqIO.parse(open('all.fasta','r'), 'fasta'):
if record.description in orthologs:
genes_to_keep.append(record)
with open('ortholog1.fasta','w') as f:
SeqIO.write(genes_to_keep, f, 'fasta')
Edit: Here is one way to keep the output genes in the same order as in the
orthologs file:
from Bio import SeqIO
with open('all.fasta','r') as fasta_file:
record_dict = SeqIO.to_dict(SeqIO.parse(fasta_file, 'fasta'))
with open('ortholog1.txt','r') as text_file:
orthologs_txt = text_file.read()
genes_to_keep = []
for ortholog in orthologs_txt.splitlines():
try:
genes_to_keep.append( record_dict[ortholog] )
except KeyError:
pass
with open('ortholog1.fasta','w') as output_file:
SeqIO.write(genes_to_keep, output_file, 'fasta')
|
Python Sympy MathML: mathml vs print_mathml
Question: What am I not understanding?
The following code works fine:
from sympy.printing.mathml import print_mathml
s = "x**2 + 8*x + 16"
print_mathml(s)
But this generates an error:
from sympy.printing.mathml import mathml
s = "x**2 + 8*x + 16"
print mathml(s)
Ultimately, what I'm trying to do is get "x**2 + 8*x + 16" converted into
presentation MathML for output on the web. So my plan was to use the mathml()
function on the string, then send the output through c2p, such as the
following:
from sympy.printing.mathml import mathml
from sympy.utilities.mathml import c2p
s = "x**2 + 8*x + 16"
print(c2p(mathml(s)))
But as stated, the mathml() function throws up an error.
Answer: I do not have enough reputation points to comment yet, so I am answering
instead.
According to the Sympy docs, `sympy.printing.mathml`'s `mathml` function expects
an expression, not a string:
<http://docs.sympy.org/dev/modules/printing.html#sympy.printing.mathml.mathml>
Code part:
def mathml(expr, **settings):
"""Returns the MathML representation of expr"""
return MathMLPrinter(settings).doprint(expr)
-
def doprint(self, expr):
"""
Prints the expression as MathML.
"""
mathML = Printer._print(self, expr)
unistr = mathML.toxml()
xmlbstr = unistr.encode('ascii', 'xmlcharrefreplace')
res = xmlbstr.decode()
return res
Which error did you get?
Did you get this:
[....]
return MathMLPrinter(settings).doprint(expr) File "C:\Python27\lib\site-packages\sympy\printing\mathml.py", line 39, in doprint
unistr = mathML.toxml() AttributeError: 'str' object has no attribute 'toxml'
It's not a bug in the library itself: `mathml` was handed a plain string, so
unistr = mathML.toxml()
is called on a `str`, which has no `toxml()` method. You could take a look at
<http://docs.sympy.org/dev/_modules/sympy/printing/mathml.html> to see the
file.
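Since `mathml` expects a Sympy expression rather than a string, parsing the input with `sympify` first avoids the error. A sketch, assuming sympy is installed:

```python
from sympy import sympify
from sympy.printing.mathml import mathml

# Parse the string into a Sympy expression first, then print it:
expr = sympify("x**2 + 8*x + 16")
print(mathml(expr))
```

The resulting content MathML can then be run through `c2p` (which additionally requires lxml) to get presentation MathML for the web, as in the question.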
|
MySQL with Python Flask in Openshift: Internal Server Error
Question: The first weird thing is that it only fails when deployed on Openshift;
on my localhost it works perfectly.
The issue occurs when I try to log in a user (a MySQL query).
<http://scheduler-sonrie.rhcloud.com/testdb> -> this tests the MySQL connection and
seems OK,
<http://scheduler-sonrie.rhcloud.com/signin> -> fails, reporting the message
"Internal Server Error"
This is my code <https://github.com/rchampa/scheduler-old>
There are a few errors when I deploy:
remote: /usr/bin/ld: cannot find -lmysqlclient
remote: collect2: ld returned 1 exit status
remote: error: command 'gcc' failed with exit status 1
remote: Can't roll back MySQL-python; was not uninstalled
And this is the entire logs when I deploy app.
remote: Stopping Python 2.7 cartridge
remote: Syncing git content to other proxy gears
remote: Building git ref 'master', commit bf9510c
remote: Activating virtenv
remote: Checking for pip dependency listed in requirements.txt file..
remote: Requirement already satisfied (use --upgrade to upgrade): aniso8601==0.92 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 1))
remote: Requirement already satisfied (use --upgrade to upgrade): Flask==0.10.1 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/Flask-0.10.1-py2.7.egg (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 2))
remote: Requirement already satisfied (use --upgrade to upgrade): Flask-HTTPAuth==2.3.0 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 3))
remote: Requirement already satisfied (use --upgrade to upgrade): Flask-RESTful==0.3.1 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 4))
remote: Requirement already satisfied (use --upgrade to upgrade): Flask-SQLAlchemy==2.0 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 5))
remote: Requirement already satisfied (use --upgrade to upgrade): Flask-WTF==0.10.3 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 6))
remote: Requirement already satisfied (use --upgrade to upgrade): itsdangerous==0.24 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/itsdangerous-0.24-py2.7.egg (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 7))
remote: Requirement already satisfied (use --upgrade to upgrade): Jinja2==2.7.3 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 8))
remote: Requirement already satisfied (use --upgrade to upgrade): MarkupSafe==0.23 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 9))
remote: Downloading/unpacking MySQL-python==1.2.5 (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 10))
remote: Running setup.py egg_info for package MySQL-python
remote:
remote: Requirement already satisfied (use --upgrade to upgrade): pytz==2014.10 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 11))
remote: Requirement already satisfied (use --upgrade to upgrade): six==1.8.0 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 12))
remote: Requirement already satisfied (use --upgrade to upgrade): SQLAlchemy==0.9.8 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 13))
remote: Requirement already satisfied (use --upgrade to upgrade): Werkzeug==0.9.6 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 14))
remote: Requirement already satisfied (use --upgrade to upgrade): WTForms==2.0.1 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages (from -r /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo/requirements.txt (line 15))
remote: Installing collected packages: MySQL-python
remote: Found existing installation: MySQL-python 1.2.3
remote: Not uninstalling MySQL-python at /opt/rh/python27/root/usr/lib64/python2.7/site-packages, outside environment /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv
remote: Running setup.py install for MySQL-python
remote: building '_mysql' extension
remote: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/opt/rh/mysql55/root/usr/include/mysql -I/opt/rh/python27/root/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC -fPIC -g -static-libgcc -fno-omit-frame-pointer -fno-strict-aliasing -DMY_PTHREAD_FASTMUTEX=1
remote: In file included from /opt/rh/mysql55/root/usr/include/mysql/my_config.h:14,
remote: from _mysql.c:44:
remote: /opt/rh/mysql55/root/usr/include/mysql/my_config_x86_64.h:422:1: warning: "HAVE_WCSCOLL" redefined
remote: In file included from /opt/rh/python27/root/usr/include/python2.7/pyconfig.h:6,
remote: from /opt/rh/python27/root/usr/include/python2.7/Python.h:8,
remote: from _mysql.c:29:
remote: /opt/rh/python27/root/usr/include/python2.7/pyconfig-64.h:908:1: warning: this is the location of the previous definition
remote: gcc -pthread -shared -L/usr/lib6464 build/temp.linux-x86_64-2.7/_mysql.o -L/opt/rh/mysql55/root/usr/lib64/mysql -L/opt/rh/python27/root/usr/lib64 -lmysqlclient -lpthread -lz -lm -lrt -lssl -lcrypto -ldl -lpython2.7 -o build/lib.linux-x86_64-2.7/_mysql.so
remote: /usr/bin/ld: cannot find -lmysqlclient
remote: collect2: ld returned 1 exit status
remote: error: command 'gcc' failed with exit status 1
remote: Complete output from command /var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/python2.7 -c "import setuptools;__file__='/var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/build/MySQL-python/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-PDPMLU-record/install-record.txt --single-version-externally-managed --install-headers /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/include/site/python2.7:
remote: running install
remote:
remote: running build
remote:
remote: running build_py
remote:
remote: creating build
remote:
remote: creating build/lib.linux-x86_64-2.7
remote:
remote: copying _mysql_exceptions.py -> build/lib.linux-x86_64-2.7
remote:
remote: creating build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: copying MySQLdb/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: copying MySQLdb/converters.py -> build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: copying MySQLdb/connections.py -> build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: copying MySQLdb/cursors.py -> build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: copying MySQLdb/release.py -> build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: copying MySQLdb/times.py -> build/lib.linux-x86_64-2.7/MySQLdb
remote:
remote: creating build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/__init__.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/CR.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/ER.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/FLAG.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/REFRESH.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: copying MySQLdb/constants/CLIENT.py -> build/lib.linux-x86_64-2.7/MySQLdb/constants
remote:
remote: running build_ext
remote:
remote: building '_mysql' extension
remote:
remote: creating build/temp.linux-x86_64-2.7
remote:
remote: gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -Dversion_info=(1,2,5,'final',1) -D__version__=1.2.5 -I/opt/rh/mysql55/root/usr/include/mysql -I/opt/rh/python27/root/usr/include/python2.7 -c _mysql.c -o build/temp.linux-x86_64-2.7/_mysql.o -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -fno-strict-aliasing -fwrapv -fPIC -fPIC -g -static-libgcc -fno-omit-frame-pointer -fno-strict-aliasing -DMY_PTHREAD_FASTMUTEX=1
remote:
remote: In file included from /opt/rh/mysql55/root/usr/include/mysql/my_config.h:14,
remote:
remote: from _mysql.c:44:
remote:
remote: /opt/rh/mysql55/root/usr/include/mysql/my_config_x86_64.h:422:1: warning: "HAVE_WCSCOLL" redefined
remote:
remote: In file included from /opt/rh/python27/root/usr/include/python2.7/pyconfig.h:6,
remote:
remote: from /opt/rh/python27/root/usr/include/python2.7/Python.h:8,
remote:
remote: from _mysql.c:29:
remote:
remote: /opt/rh/python27/root/usr/include/python2.7/pyconfig-64.h:908:1: warning: this is the location of the previous definition
remote:
remote: gcc -pthread -shared -L/usr/lib6464 build/temp.linux-x86_64-2.7/_mysql.o -L/opt/rh/mysql55/root/usr/lib64/mysql -L/opt/rh/python27/root/usr/lib64 -lmysqlclient -lpthread -lz -lm -lrt -lssl -lcrypto -ldl -lpython2.7 -o build/lib.linux-x86_64-2.7/_mysql.so
remote:
remote: /usr/bin/ld: cannot find -lmysqlclient
remote:
remote: collect2: ld returned 1 exit status
remote:
remote: error: command 'gcc' failed with exit status 1
remote:
remote: ----------------------------------------
remote: Can't roll back MySQL-python; was not uninstalled
remote: Cleaning up...
remote: Command /var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/python2.7 -c "import setuptools;__file__='/var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/build/MySQL-python/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-PDPMLU-record/install-record.txt --single-version-externally-managed --install-headers /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/include/site/python2.7 failed with error code 1 in /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/build/MySQL-python
remote: Traceback (most recent call last):
remote: File "/var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/pip", line 12, in <module>
remote: load_entry_point('pip==1.4', 'console_scripts', 'pip')()
remote: File "/var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/pip/__init__.py", line 147, in main
remote: return command.main(args[1:], options)
remote: File "/var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/pip/basecommand.py", line 171, in main
remote: log_fp = open_logfile(log_fn, 'w')
remote: File "/var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/pip/basecommand.py", line 200, in open_logfile
remote: os.makedirs(dirname)
remote: File "/var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/lib64/python2.7/os.py", line 157, in makedirs
remote: mkdir(name, mode)
remote: OSError: [Errno 13] Permission denied: '/var/lib/openshift/549c86454382ecb3c400010c/.pip'
remote: Running setup.py script..
remote: running develop
remote: running egg_info
remote: creating FlaskApp.egg-info
remote: writing requirements to FlaskApp.egg-info/requires.txt
remote: writing FlaskApp.egg-info/PKG-INFO
remote: writing top-level names to FlaskApp.egg-info/top_level.txt
remote: writing dependency_links to FlaskApp.egg-info/dependency_links.txt
remote: writing manifest file 'FlaskApp.egg-info/SOURCES.txt'
remote: reading manifest file 'FlaskApp.egg-info/SOURCES.txt'
remote: writing manifest file 'FlaskApp.egg-info/SOURCES.txt'
remote: running build_ext
remote: Creating /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/FlaskApp.egg-link (link to .)
remote: FlaskApp 1.0 is already the active version in easy-install.pth
remote:
remote: Installed /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/repo
remote: Processing dependencies for FlaskApp==1.0
remote: Searching for Flask==0.10.1
remote: Best match: Flask 0.10.1
remote: Processing Flask-0.10.1-py2.7.egg
remote: Flask 0.10.1 is already the active version in easy-install.pth
remote:
remote: Using /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/Flask-0.10.1-py2.7.egg
remote: Searching for itsdangerous==0.24
remote: Best match: itsdangerous 0.24
remote: Processing itsdangerous-0.24-py2.7.egg
remote: itsdangerous 0.24 is already the active version in easy-install.pth
remote:
remote: Using /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages/itsdangerous-0.24-py2.7.egg
remote: Searching for Jinja2==2.7.3
remote: Best match: Jinja2 2.7.3
remote: Adding Jinja2 2.7.3 to easy-install.pth file
remote:
remote: Using /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages
remote: Searching for Werkzeug==0.9.6
remote: Best match: Werkzeug 0.9.6
remote: Adding Werkzeug 0.9.6 to easy-install.pth file
remote:
remote: Using /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages
remote: Searching for MarkupSafe==0.23
remote: Best match: MarkupSafe 0.23
remote: Adding MarkupSafe 0.23 to easy-install.pth file
remote:
remote: Using /var/lib/openshift/549c86454382ecb3c400010c/app-root/runtime/dependencies/python/virtenv/lib/python2.7/site-packages
remote: Finished processing dependencies for FlaskApp==1.0
remote: Script /var/lib/openshift/549c86454382ecb3c400010c/python//virtenv/bin/activate.fish cannot be made relative (it's not a normal script that starts with #!/var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/python)
remote: Script /var/lib/openshift/549c86454382ecb3c400010c/python//virtenv/bin/activate.csh cannot be made relative (it's not a normal script that starts with #!/var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/python)
remote: Preparing build for deployment
remote: Deployment id is 0aeb5dd6
remote: Activating deployment
remote: HAProxy already running
remote: HAProxy instance is started
remote: Script /var/lib/openshift/549c86454382ecb3c400010c/python//virtenv/bin/activate.fish cannot be made relative (it's not a normal script that starts with #!/var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/python)
remote: Script /var/lib/openshift/549c86454382ecb3c400010c/python//virtenv/bin/activate.csh cannot be made relative (it's not a normal script that starts with #!/var/lib/openshift/549c86454382ecb3c400010c/python/virtenv/bin/python)
remote: Starting Python 2.7 cartridge (app.py server)
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
And this is the output of the `rhc tail` command:
==> app-root/logs/python.log-20141226024118 <==
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
127.13.64.129 - - [26/Dec/2014 02:41:18] "GET / HTTP/1.0" 200 -
----------------------------------------
Exception happened during processing of request from ('127.13.64.129', 20268)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
==> app-root/logs/python.log-20141226073626 <==
127.13.64.129 - - [26/Dec/2014 07:36:26] "GET / HTTP/1.0" 200 -
----------------------------------------
Exception happened during processing of request from ('127.13.64.129', 29632)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
==> app-root/logs/python.log-20141225094557 <==
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
127.13.64.129 - - [25/Dec/2014 21:45:57] "GET / HTTP/1.0" 200 -
----------------------------------------
Exception happened during processing of request from ('127.13.64.129', 22441)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 321, in process_request
==> app-root/logs/haproxy.log <==
[WARNING] 359/153915 (493821) : Server express/local-gear is UP (leaving maintenance).
[WARNING] 359/154912 (46051) : config : log format ignored for proxy 'stats' since it has no log address.
[WARNING] 359/154912 (46051) : config : log format ignored for proxy 'express' since it has no log address.
[WARNING] 359/154912 (493821) : Stopping proxy stats in 0 ms.
[WARNING] 359/154912 (493821) : Stopping proxy express in 0 ms.
[WARNING] 359/154912 (493821) : Proxy stats stopped (FE: 4 conns, BE: 0 conns).
[WARNING] 359/154912 (493821) : Proxy express stopped (FE: 28 conns, BE: 28 conns).
[WARNING] 359/155417 (46051) : Server express/local-gear is DOWN for maintenance.
[ALERT] 359/155417 (46051) : proxy 'express' has no server available!
[WARNING] 359/155445 (46051) : Server express/local-gear is UP (leaving maintenance).
==> app-root/logs/haproxy_ctld.log <==
I, [2014-12-25T16:52:03.544221 #363070] INFO -- : Starting haproxy_ctld
I, [2014-12-26T14:54:21.950056 #454284] INFO -- : Starting haproxy_ctld
I, [2014-12-26T15:05:45.382845 #476536] INFO -- : Starting haproxy_ctld
I, [2014-12-26T15:12:54.659244 #493843] INFO -- : Starting haproxy_ctld
==> app-root/logs/python.log-20141226123259 <==
Exception happened during processing of request from ('127.13.64.129', 22475)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 651, in __init__
self.finish()
==> app-root/logs/python.log <==
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 651, in __init__
self.finish()
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 710, in finish
self.wfile.close()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 279, in close
self.flush()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
127.13.64.129 - - [26/Dec/2014 16:02:20] "GET / HTTP/1.0" 200 -
----------------------------------------
Exception happened during processing of request from ('127.13.64.129', 22815)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 651, in __init__
self.finish()
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 710, in finish
self.wfile.close()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 279, in close
self.flush()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
127.13.64.129 - - [26/Dec/2014 16:02:22] "GET / HTTP/1.0" 200 -
----------------------------------------
Exception happened during processing of request from ('127.13.64.129', 22818)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 651, in __init__
self.finish()
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 710, in finish
self.wfile.close()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 279, in close
self.flush()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
127.13.64.129 - - [26/Dec/2014 16:02:24] "GET / HTTP/1.0" 200 -
----------------------------------------
Exception happened during processing of request from ('127.13.64.129', 22829)
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 295, in _handle_request_noblock
self.process_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 321, in process_request
self.finish_request(request, client_address)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 334, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 651, in __init__
self.finish()
File "/opt/rh/python27/root/usr/lib64/python2.7/SocketServer.py", line 710, in finish
self.wfile.close()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 279, in close
self.flush()
File "/opt/rh/python27/root/usr/lib64/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
----------------------------------------
Answer: Based on the output you provided, OpenShift likely installed the MySQL core
package but didn't install the MySQL development package, which includes the
library files your Python package needs.
Try installing the `mysql-devel` package from `yum` in your OpenShift
environment:[1]
$ yum install --assumeyes mysql-devel
Then reinstall your Python packages (or redeploy, if it does this for you).
If you have success with installing this package (and compiling your Python
extension), consider adding this prerequisite to your application's [OpenShift
Action Hooks](https://developers.openshift.com/en/managing-action-hooks.html).
[1] `--assumeyes` is optional; it suppresses confirmation prompts.
|
Operators in sage notebook vs importing from Python
Question: Operators behave differently depending on how I run my Sage program. In the
notebook:
2^10
==>1024
running my program with `sage -python filename.py`:
from sage.all import *
print(2^10)
==> 8
What do I have to import in Python to replicate the behaviour of the Sage
notebook?
Edit:
Thanks everyone for the basic Python lessons. DSM answered this question in
the comments, turns out sage notebook has a preprocessor.
Answer: In Python, exponentiation uses the double asterisk `**`:
>>> print (2**10)
1024
Or you can use the built-in function **`pow`**:
>>> pow(2, 10)
1024
**_pow_**
pow(...)
pow(x, y[, z]) -> number
With two arguments, equivalent to x**y. With three arguments,
equivalent to (x**y) % z, but may be more efficient (e.g. for longs).
* * *
`^` is the bitwise XOR (`exclusive or`) operator, so `2^10` XORs the bits of 2
and 10 instead of raising 2 to the power 10.
### _For example_ :
    >>> a = [1, 2, 3]
    >>> b = [3, 4, 5]
    >>> a ^ b  # lists don't support ^
    TypeError: unsupported operand type(s) for ^: 'list' and 'list'
    >>> set(a) ^ set(b)  # sets do: symmetric difference
    set([1, 2, 4, 5])
**_[x ^ y](https://wiki.python.org/moin/BitwiseOperators)_**
Does a "bitwise exclusive or".
Each bit of the output is the same as the corresponding bit in x if that bit in y is 0,
and it's the complement of the bit in x if that bit in y is 1.
Just remember about that infinite series of 1 bits in a negative number, and these
should all make sense.
* **[You can find great explanation about Bitwise operation over here](http://stackoverflow.com/questions/1746613/bitwise-operation-and-usage)**
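To tie this back to the question, the two operators side by side:

```python
# In plain Python, ^ is bitwise XOR, not exponentiation:
#   2  = 0b0010
#   10 = 0b1010
#   XOR keeps the bits that differ -> 0b1000 == 8
print(2 ^ 10)   # 8    (what the script run with `sage -python` computes)
print(2 ** 10)  # 1024 (what the Sage notebook's preprocessor rewrites ^ into)
```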
|
Change directory and create a new file on remote using Chilkat SFTP python library
Question: <http://www.example-code.com/python/sftp_writeTextFile.asp>
I think I'm able to log in to the system using the Chilkat SFTP library (`sftp =
chilkat.CkSFtp()`, 30-day trial version).
Now I'm in the root directory (on the remote machine) and there are two folders.
I want to change into one of these two folders and create a txt file there.
How do I proceed?
import sys
import chilkat
sftp = chilkat.CkSFtp()
success = sftp.UnlockComponent("Anything for 30-day trial")
if (success != True):
print(sftp.lastErrorText())
sys.exit()
# Set some timeouts, in milliseconds:
sftp.put_ConnectTimeoutMs(15000)
sftp.put_IdleTimeoutMs(15000)
# Connect to the SSH server.
# The standard SSH port = 22
# The hostname may be a hostname or IP address.
port = 22
success = sftp.Connect(hostname,port)
if (success != True):
print(sftp.lastErrorText())
sys.exit()
# Authenticate with the SSH server. Chilkat SFTP supports
# both password-based authentication as well as public-key
# authentication. This example uses password authentication.
success = sftp.AuthenticatePw(username, password)
if (success != True):
print(sftp.lastErrorText())
sys.exit()
print("Success.")
This script executes successfully and prints "Success".
Answer: The SFTP protocol does not have a concept of a "current working directory"
(contrary to, say, FTP).
While some SFTP clients (client libraries) allow emulating the "current
working directory" locally, it does not seem to be the case with Chilkat.
So you have to use absolute paths when creating the file, e.g.
`/folder1/file.txt`
To create the file, use the
[`OpenFile`](http://www.chilkatsoft.com/refdoc/pythonCkSFtpDoc.html#method35)
method. To write its contents, use one of the `WriteFile*` methods.
|
Html response object has no attribute text
Question:
import scrapy
from bs4 import BeautifulSoup
class Spider(scrapy.Spider):
name = "crawl"
start_urls = [url]
def parse(self, response):
soup = BeautifulSoup(response.text)
print soup.prettify()
I have this chunk of code. I'm using Scrapy, and when I execute it I get this
error:
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/vagrant_data/cars_scraper/cars_scraper/spiders/__init__.py", line 17, in parse
soup = BeautifulSoup(response.text)
exceptions.AttributeError: 'HtmlResponse' object has no attribute 'text'
Answer: Use the [`Response.body`](http://doc.scrapy.org/en/latest/topics/request-
response.html#scrapy.http.Response.body) attribute instead of `text`, e.g.
`soup = BeautifulSoup(response.body)`. (Newer Scrapy releases do provide a
`text` attribute, but the version installed here evidently does not.)
|
Generate 10 balls in tkinter canvas
Question: Ok guys. I am trying to generate 10 balls of random color in a Tkinter canvas
when I click the generate button. The program works, and the random color choice works
for the ball, but I only get one ball generated at a time. Every time I click
the button it randomly moves the ball around, but all I want is 10 balls in 10
random positions at a time. I am using Python 3.4 on a Linux box. This is the
code I've got:
from tkinter import *
import random # to generate random balls
colors = ["red", "blue", "purple", "green", "violet", "black"]
class RandomBalls:
"""
Boilerplate code for window in Tkinter
window = Tk()
window.title("Random title")
window.mainloop()
"""
def __init__(self):
"""
Initialize the window and add two frames, one with button, and another one with
canvas
:return:
"""
window = Tk()
window.title("Random balls")
# A canvas frame
frame1 = Frame(window)
frame1.pack()
self.canvas = Canvas(frame1, width = 200, height = 300, bg = "white")
self.canvas.pack()
# A button frame
frame2 = Frame(window)
frame2.pack()
displayBtn = Button(frame2, text = "Display", command = self.display)
displayBtn.pack()
window.mainloop()
def display(self):
for i in range(0, 10):
self.canvas.delete("circle") # delete references to the old circle
self.x1 = random.randrange(150)
self.y1 = random.randrange(200)
self.x2 = self.x1 + 5
self.y2 = self.y1 + 5
self.coords = self.x1, self.y1, self.x2, self.y2
self.canvas.create_oval(self.coords, fill = random.choice(colors), tags = "circle")
self.canvas.update()
RandomBalls()
Answer: Every time through your loop you are deleting everything you created before,
including what you created the previous iteration. Move the delete statement
outside of the loop:
def display(self):
self.canvas.delete("circle")
for i in range(0, 10):
...
|
Best data-structure to save a word to a corresponding number?
Question: I'm pretty new to python and just need some advice. I'm basically making a
python reddit tool that will help me find the top comments, their score, and
save them so they can be sorted by highest score at a later time.
Initially I was going to save them to a dictionary with a 'score:comment'
format but was told this was not ideal. Any suggestions for the best route to
go in this situation?
Answer: If you plan on sorting it, just use an array. Arrays just make it so much
simpler when it comes to sorting.
Choosing a data structure mainly comes down to your goals for the application.
Would you like quick indexing and look-up times? Use an array. Is your input
size unknown, and search times not as important? Use a linked list. Are search
times incredibly important? Use a hash table.
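In Python terms, "array" means a plain list; a list of `(score, comment)` tuples
sorts naturally by score. A minimal sketch (the comment data here is made up):

```python
from operator import itemgetter

# Hypothetical scraped data: (score, comment) pairs.
comments = [
    (128, "Great write-up!"),
    (342, "This is the top comment."),
    (57,  "Nice tool."),
]

# Sort descending by score. Keying on itemgetter(0) (the score) avoids
# comparing the comment strings when two scores are equal.
ranked = sorted(comments, key=itemgetter(0), reverse=True)

print(ranked[0])  # (342, 'This is the top comment.')
```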
|
Vectorize nested for loop Python
Question: I have a numpy array that I'm iterating through with:
import numpy
import math
array = numpy.array([[1, 1, 2, 8, 2, 2],
[5, 5, 4, 1, 3, 2],
[5, 5, 4, 1, 3, 2],
[5, 5, 4, 1, 3, 2],
[9, 5, 8, 8, 2, 2],
[7, 3, 6, 6, 2, 2]])
Pixels = ['U','D','R','L','UL','DL','UR','DR']
for i in range (1,array.shape[0]-1):
for j in range (1,array.shape[1]-1):
list = []
while len(list) < 2:
iToMakeList = i
jToMakeList = j
if iToMakeList > array.shape[0]-1 or iToMakeList < 1 or jToMakeList> array.shape[0]-1 or jToMakeList < 1:
break
PixelCoord = {
'U' : (iToMakeList-1,jToMakeList),
'D' : (iToMakeList+1,jToMakeList),
'R' : (iToMakeList,jToMakeList+1),
'L' : (iToMakeList,jToMakeList-1),
'UL' : (iToMakeList-1,jToMakeList-1),
'DL' : (iToMakeList+1,jToMakeList-1),
'UR' : (iToMakeList-1,jToMakeList+1),
'DR' : (iToMakeList+1,jToMakeList+1)
}
Value = {
'U' : array[iToMakeList-1][jToMakeList],
'D' : array[iToMakeList+1][jToMakeList],
'R' : array[iToMakeList][jToMakeList+1],
'L' : array[iToMakeList][jToMakeList-1],
'UL' : array[iToMakeList-1][jToMakeList-1],
'DL' : array[iToMakeList+1][jToMakeList-1],
'UR' : array[iToMakeList-1][jToMakeList+1],
'DR' : array[iToMakeList+1][jToMakeList+1]
}
candidates = []
for pixel in Pixels:
candidates.append((Value[pixel],pixel))
Lightest = max(candidates)
list.append(PixelCoord[Lightest[1]])
iToMakeList = PixelCoord[Lightest[1]][0]
jToMakeList = PixelCoord[Lightest[1]][1]
I want to accelerate this process. It's very slow.
Assume that the output of this snippet is my final goal and the ONLY thing I
want to do is accelerate this code.
Answer: For your question to make sense to me, I think you need to move where `list =
[]` appears. Otherwise you'll never get to even `i=0`, `j=1` until `list` is
full. I can't imagine that it is slow as currently implemented --- list will
be full very quickly, and then the for loops should be very fast. Here is what
I believe you intended. Please clarify if this is not correct.
for i in range (0,array.shape[0]):
for j in range (0,array.shape[1]):
list = []
while len(list) < 100:
print "identity", i, j
#find neighboring entry with greatest value (e.g., assume it is [i-1, j] with value 10)
list.append((i-1,j))
i = i-1
j = j
#perform operations on list
Let's do some modifications. I'll assume there is a function
`get_max_nbr(i,j)` which returns the coordinates of the maximum neighbor. One
of the places your code is slow is that it will call get_max_nbr for the same
coordinate many times (at each step in the loop it does it 100 times). The
code below uses [memoization](http://stackoverflow.com/questions/1988804/what-
is-memoization-and-how-can-i-use-it-in-python) to get around this (down to 1
time on average). So if this is your bottleneck, this should get you close to
100 times speedup.
maxnbr = {}
for i in range(0,array.shape[0]):
for j in range (0,array.shape[1]):
list = []
current_loc = (i,j)
while len(list) < 100:
if current_loc not in maxnbr: #if this is our first time seeing current_loc
maxnbr[current_loc] = get_max_nbr(*current_loc) #note func(*(i,j)) becomes func(i,j)
current_loc = maxnbr[current_loc]
list.append(current_loc)
#process list here
This doesn't successfully vectorize, but it does create the list (I think) you
want, and it should be a significant improvement. It may be that if we knew
more about the list processing we might be able to find a better approach, but
it's not clear.
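For the neighbour lookup itself there is also a genuinely vectorized option:
stack the eight shifted views of the array and take `argmax` across them, which
finds the brightest neighbour of every interior pixel at once. This is a sketch,
not a drop-in replacement: it only computes the first step of each walk, and
`argmax` breaks ties by neighbour order rather than by the alphabetical tuple
comparison in the original.

```python
import numpy as np

array = np.array([[1, 1, 2, 8, 2, 2],
                  [5, 5, 4, 1, 3, 2],
                  [5, 5, 4, 1, 3, 2],
                  [5, 5, 4, 1, 3, 2],
                  [9, 5, 8, 8, 2, 2],
                  [7, 3, 6, 6, 2, 2]])

# (di, dj) offsets in the same order as Pixels: U, D, R, L, UL, DL, UR, DR.
offsets = np.array([(-1, 0), (1, 0), (0, 1), (0, -1),
                    (-1, -1), (1, -1), (-1, 1), (1, 1)])

n, m = array.shape
# stacked[k, i, j] is neighbour k of interior pixel (i+1, j+1).
stacked = np.stack([array[1 + di:n - 1 + di, 1 + dj:m - 1 + dj]
                    for di, dj in offsets])

best = stacked.argmax(axis=0)          # winning offset index per interior pixel
ii, jj = np.mgrid[1:n - 1, 1:m - 1]    # interior coordinates
max_nbr_i = ii + offsets[best, 0]      # row of brightest neighbour
max_nbr_j = jj + offsets[best, 1]      # column of brightest neighbour

# e.g. pixel (1, 1): its brightest neighbour (value 5, 'D') is at (2, 1)
print(max_nbr_i[0, 0], max_nbr_j[0, 0])
```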
|
Django case insensitive login, mixed case username
Question: I'm trying to do case insensitive login for a Django app, without altering
original username case. According to my research, the best approach is to
create a related field to store the username in lowercase at registration. No
reinventing the user model, it's a simple solution, right? My dilemma: how do
you take data from one model, change it, and save it in another model when
the object is created?
Here is what is in **models.py** :
from django.db import models
from django.contrib.auth.models import User
class UserProfile(models.Model):
user = models.OneToOneField(User)
lc_n = User.username
lc_n = lc_n.lower()
lowercase_name = models.CharField(max_length=30, default=lc_n)
When running `python manage.py makemigrations` there is the following error:
`AttributeError: type object 'User' has no attribute 'username'`
User does have that attribute, but it can't be accessed this way.
Please forgive me, as this must contain some simple fundamental flaw. Any
support that may be offered will be greatly appreciated.
Answer: You don't need any additional models to implement case insensitive login.
Django supports case insentisive filtering by `iexact` operator:
user = User.objects.get(username__iexact=name)
|
Can't create pdf using python PDFKIT Error : " No wkhtmltopdf executable found:"
Question: I tried installing the pdfkit Python API on my Windows 8 machine. I'm
getting issues related to the path.
Traceback (most recent call last):
File "C:\Python27\pdfcre", line 13, in <module>
pdfkit.from_url('http://google.com', 'out.pdf')
File "C:\Python27\lib\site-packages\pdfkit\api.py", line 22, in from_url
configuration=configuration)
File "C:\Python27\lib\site-packages\pdfkit\pdfkit.py", line 38, in __init__
self.configuration = (Configuration() if configuration is None
File "C:\Python27\lib\site-packages\pdfkit\configuration.py", line 27, in __init__
'https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf' % self.wkhtmltopdf)
IOError: No wkhtmltopdf executable found: ""
If this file exists please check that this process can read it. Otherwise please install wkhtmltopdf - https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf
Has anybody installed Python pdfkit on a Windows machine? How do I resolve this
error?
My sample code :
import pdfkit
import os
config = pdfkit.configuration(wkhtmltopdf='C:\\Python27\\wkhtmltopdf\bin\\wkhtmltopdf.exe')
pdfkit.from_url('http://google.com', 'out.pdf')
Answer: I am learning Python today and ran into the same problem; setting the
Windows environment variables fixed it for me.
Add the wkhtmltopdf install path to the `PATH` environment variable. For
example, "D:\developAssistTools\wkhtmltopdf\bin;" is my install path, and once
it is on `PATH` everything works:
import pdfkit
pdfkit.from_url("http://google.com", "out.pdf")
finally, I find an out.pdf.
Alternatively, pass the configuration object explicitly, e.g.
`pdfkit.from_url('http://google.com', 'out.pdf', configuration=config)`. Note
that the path string in the question also needs doubled backslashes or a raw
string, since the `\b` in `'...\bin\...'` is parsed as an escape character.
|
Got 403 error when connecting to Google Analytics with Python 2.7.x
Question: I tried to get data from the Google Analytics API with the Python client
(google-api-python-client). Here's the code I used:
from apiclient import discovery
from oauth2client.client import SignedJwtAssertionCredentials
from httplib2 import Http
with open("ManagementGate-622edd43c0dd.p12") as f:
private_key = f.read()
credentials = SignedJwtAssertionCredentials(
'[email protected]',
private_key,
'https://www.googleapis.com/auth/analytics.readonly')
http_auth = credentials.authorize(Http())
service = discovery.build('analytics', 'v3', http=http_auth)
result = service.data().ga().get(
ids='ga:79873569',
start_date='7daysAgo',
end_date='today',
metrics='ga:visits,ga:sessions,ga:pageviews').execute()
I created a Service Account on Credentials Page. However, I got an error as
below:
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/analytics/v3/data/ga?metrics=ga%3Avisits%2Cga%3Asessions%2Cga%3Apageviews&alt=json&end-date=today&ids=ga%3A79873569&start-date=7daysAgo returned "User does not have any Google Analytics account.">
The instructions I followed are from:
<https://developers.google.com/accounts/docs/OAuth2ServiceAccount> Is there
anything else I need to do? And why did I get this error? I already enabled
Analytics API on APIs page.
Answer: You are trying to access the Google Analytics API using a service account. A
service account by default does not have access to any Google Analytics
accounts.
What you need to do is take the service account email address from the Google
Developer Console and go to the Admin section of the Google Analytics website.
Give this service account access, at the **ACCOUNT** level, to the Google
Analytics account you wish to access.
It won't work at the web property or the view level; it must be the **Account**
level.
|
Print a list of words in random order - Python
Question: I have a project to complete from a book which was received as a Christmas
present (Python Programming for the Absolute Beginner, Third Edition):
_Create a program that prints a list of words in random order. The program
should print all the words and not repeat any._
I have created the following code:
import random
words = ["Please", "Help", "Me", "Merry", "Christmas"]
for i in range(len(words)):
random_index = random.randrange(len(words))
print(words[random_index])
del words[random_index]
I'd like to check if this code is the most efficient way of doing so, but
there is no forum to check against, rather frustratingly!
Is there a better way of doing this? Cheers
Answer: How about using
[`random.sample`](https://docs.python.org/3/library/random.html#random.sample):
>>> import random
>>> words = ["Please", "Help", "Me", "Merry", "Christmas"]
>>> random.sample(words, len(words))
['Merry', 'Me', 'Help', 'Please', 'Christmas']
or
[`random.shuffle`](https://docs.python.org/3/library/random.html#random.shuffle)
if it is okay to modify the original list:
>>> random.shuffle(words)
>>> words
['Me', 'Merry', 'Help', 'Please', 'Christmas']
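Both approaches print every word exactly once. A quick sanity check, using the
same word list:

```python
import random

words = ["Please", "Help", "Me", "Merry", "Christmas"]

# random.sample returns a new shuffled list and leaves `words` untouched.
shuffled = random.sample(words, len(words))
print(shuffled)

# No word is repeated and none is dropped:
assert sorted(shuffled) == sorted(words)
```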
|