How to segment text into sub-sentences based on enumerators?
Question: I am segmenting sentences for a text in python using nltk
`PunktSentenceTokenizer()`. However, many long sentences appear in an
enumerated way and I need to get the sub-sentences in those cases.
Example:
The api allows the user to achieve following goals: (a) aXXXXXX ,(b)bXXXX, (c) cXXXXX.
The required output would be:
`"The api allows the user to achieve following goals aXXXXX. "`, `"The api
allows the user to achieve following goals bXXXXX."` and `"The api allows the
user to achieve following goals cXXXXX. "`
How can I achieve this goal?
Answer: To get the sub-sequences you could use a [RegExp
Tokenizer](http://www.nltk.org/_modules/nltk/tokenize/regexp.html).
An example of how to use it to split the sentence could look like this:
from nltk.tokenize.regexp import regexp_tokenize
str1 = 'The api allows the user to achieve following goals: (a) aXXXXXX ,(b)bXXXX, (c) cXXXXX.'
parts = regexp_tokenize(str1, r'\(\w\)\s*', gaps=True)
start_of_sentence = parts.pop(0)
for part in parts:
    print(" ".join((start_of_sentence, part)))
|
Python Scikit-learn Perceptron Output Probabilities
Question: I'm using scikit-learn's Perceptron algorithm to do binary classification.
When using some of the other algorithms in the library (RandomForestClassifer,
LogisticRegression, etc.), I can use `model.predict_proba()` to have the
algorithm output the probability of getting a positive (1) for each example.
Is there a way to get a similar output for the Perceptron algorithm?
The closest I've been able to come is `model.decision_function()`, which
outputs a confidence score for the example based on the signed distance to the
hyperplane, but I'm not sure how to convert these confidence scores to the
probability figures I want.
`model.predict()` is also only returning binary values.
Answer: I think what you want is `CalibratedClassifierCV`:
from sklearn import linear_model
from sklearn.datasets import make_classification
from sklearn.calibration import CalibratedClassifierCV
# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000,
                           n_features=10,
                           n_informative=3,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=2,
                           random_state=0,
                           shuffle=False)
per = linear_model.Perceptron()
clf_isotonic = CalibratedClassifierCV(per, cv=10, method='isotonic')
clf_isotonic.fit(X[:900], y[:900])
preds = clf_isotonic.predict_proba(X[900:])
print preds
[Edit] You can also use this to make other linear models produce
probabilities for classification problems.
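As an aside (not from the original answer): `CalibratedClassifierCV` also supports `method='sigmoid'` (Platt scaling), which is often the safer choice when the calibration set is small; the same pattern as above applies:
clf_sigmoid = CalibratedClassifierCV(per, cv=10, method='sigmoid')
clf_sigmoid.fit(X[:900], y[:900])
print clf_sigmoid.predict_proba(X[900:])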
|
How to make values in list of dictionary unique?
Question: I have a list of dictionaries in Python, which looks like following:
d = [{'feature_a':1, 'feature_b':'Jul', 'feature_c':100}, {'feature_a':2, 'feature_b':'Jul', 'feature_c':150}, {'feature_a':1, 'feature_b':'Mar', 'feature_c':110}, ...]
What I want to achieve is to keep `feature_a`, `_b` and `_c` unique.
For example, if we have 3 entries which have the same `feature_a` and `_b`
but three values of `feature_c`: `100`, `100` and `150`, then after the
operation it should be `100` and `150`.
How can I achieve this?
* * *
UPDATE: Thanks for Anand's excellent answer; it works perfectly. However, I have a
further question.
Suppose we have a new `feature_d` and the dictionary looks like:
d = [{'feature_a':1, 'feature_b':'Jul', 'feature_c':100, 'feature_d':'A'}, {'feature_a':2, 'feature_b':'Jul', 'feature_c':150, 'feature_d':'B'}, {'feature_a':1, 'feature_b':'Mar', 'feature_c':110, 'feature_d':'F'}, ...]
and I only want to deduplicate `feature_a`, `_b` and `_c`, but leave
`feature_d` out. How can I achieve this?
Many thanks.
Answer: If the order of the initial `d` list is not important, you can take the
`.items()` of each dictionary and convert it into a
[`frozenset()`](https://docs.python.org/2/library/stdtypes.html#frozenset),
which is hashable; then you can convert the whole thing to a `set()` or
`frozenset()`, and finally convert each `frozenset()` back to a dictionary.
Example -
uniq_d = list(map(dict, frozenset(frozenset(i.items()) for i in d)))
`set()`s do not allow duplicate elements, though you end up losing the
order of the list. For Python 2.x the `list(...)` is not needed, as `map()`
returns a list.
* * *
Example/Demo -
>>> import pprint
>>> pprint.pprint(d)
[{'feature_a': 1, 'feature_b': 'Jul', 'feature_c': 100},
{'feature_a': 2, 'feature_b': 'Jul', 'feature_c': 150},
{'feature_a': 1, 'feature_b': 'Mar', 'feature_c': 110},
{'feature_a': 1, 'feature_b': 'Jul', 'feature_c': 100},
{'feature_a': 1, 'feature_b': 'Jul', 'feature_c': 150}]
>>> uniq_d = list(map(dict, frozenset(frozenset(i.items()) for i in d)))
>>> pprint.pprint(uniq_d)
[{'feature_a': 1, 'feature_b': 'Jul', 'feature_c': 100},
{'feature_a': 1, 'feature_b': 'Jul', 'feature_c': 150},
{'feature_a': 1, 'feature_b': 'Mar', 'feature_c': 110},
{'feature_a': 2, 'feature_b': 'Jul', 'feature_c': 150}]
* * *
For the new requirement -
> However, what if that I have another feature_d but I only want to dedup
> feature_a, _b and _c
>
> If two entries which have same feature_a, _b and _c, they are considered the
> same and duplicated, no matter what is in feature_d
A simple way to do this is to use a set and a new list, add only the features
you need to the set, and check using only the features you need. Example -
seen_set = set()
new_d = []
for i in d:
    if tuple([i['feature_a'],i['feature_b'],i['feature_c']]) not in seen_set:
        new_d.append(i)
        seen_set.add(tuple([i['feature_a'],i['feature_b'],i['feature_c']]))
Example/Demo -
>>> d = [{'feature_a':1, 'feature_b':'Jul', 'feature_c':100, 'feature_d':'A'},
... {'feature_a':2, 'feature_b':'Jul', 'feature_c':150, 'feature_d': 'B'},
... {'feature_a':1, 'feature_b':'Mar', 'feature_c':110, 'feature_d':'F'},
... {'feature_a':1, 'feature_b':'Mar', 'feature_c':110, 'feature_d':'G'}]
>>> seen_set = set()
>>> new_d = []
>>> for i in d:
... if tuple([i['feature_a'],i['feature_b'],i['feature_c']]) not in seen_set:
... new_d.append(i)
... seen_set.add(tuple([i['feature_a'],i['feature_b'],i['feature_c']]))
...
>>> pprint.pprint(new_d)
[{'feature_a': 1, 'feature_b': 'Jul', 'feature_c': 100, 'feature_d': 'A'},
{'feature_a': 2, 'feature_b': 'Jul', 'feature_c': 150, 'feature_d': 'B'},
{'feature_a': 1, 'feature_b': 'Mar', 'feature_c': 110, 'feature_d': 'F'}]
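A more compact variant of the same first-occurrence-wins idea (a sketch, not from the original answer) keys a dict on the deduplication fields; note that before Python 3.7 a plain dict does not guarantee the order of `values()`:
seen = {}
for i in d:
    seen.setdefault((i['feature_a'], i['feature_b'], i['feature_c']), i)  # keep the first item per key
new_d = list(seen.values())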
|
strip date with -07:00 timezone format python
Question: I have a variable 'd' that contains dates in this format:
2015-08-03T09:00:00-07:00
2015-08-03T10:00:00-07:00
2015-08-03T11:00:00-07:00
2015-08-03T12:00:00-07:00
2015-08-03T13:00:00-07:00
2015-08-03T14:00:00-07:00
etc.
I need to strip these dates, but I'm having trouble because of the timezone.
If I use `d = dt.datetime.strptime(d[:19],'%Y-%m-%dT%H:%M:%S')`, only the
first 19 characters are used and the rest of the dates are ignored. If I
try `d = dt.datetime.strptime(d[:-6],'%Y-%m-%dT%H:%M:%S')`, Python doesn't chop
off the timezone and I still get the error `ValueError: unconverted data
remains: -07:00`. I don't think I can use the dateutil parser because I've
only seen it used for one date instead of a whole list like I have. What
can I do? Thanks!
Answer: Since you have a list, just iterate over it and use dateutil.parser:
d = ["2015-08-03T09:00:00-07:00","2015-08-03T10:00:00-07:00","2015-08-03T11:00:00-07:00","2015-08-03T12:00:00-07:00",
"2015-08-03T13:00:00-07:00","2015-08-03T14:00:00-07:00"]
from dateutil import parser
for dte in d:
    print(parser.parse(dte))
If for some reason you actually want to ignore the timezone, you can use `rsplit`
with `datetime.strptime`:
from datetime import datetime
for dte in d:
    print(datetime.strptime(dte.rsplit("-",1)[0],"%Y-%m-%dT%H:%M:%S"))
If you had a single string delimited by commas, then just use `d.split(",")`.
You can use `strftime` to format the result in any format you want if you
actually want a string:
for dte in d:
    print(datetime.strptime(dte.rsplit("-",1)[0],"%Y-%m-%dT%H:%M:%S").strftime("%Y-%m-%d %H:%M:%S"))
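As an aside for later Python versions (3.7+, so not applicable at the time of the question): `strptime` accepts offsets written with a colon via `%z`, and `datetime.fromisoformat` parses this exact format; a minimal sketch:
from datetime import datetime

for dte in d:
    print(datetime.strptime(dte, "%Y-%m-%dT%H:%M:%S%z"))  # keeps the -07:00 offset
print(datetime.fromisoformat("2015-08-03T09:00:00-07:00"))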
|
psycopg2 module cannot be found by Python2.7
Question: I installed psycopg2 via pip, but my programs are having trouble finding it.
So, I tried to install psycopg2 via pip again:
user@ubuntu:~/Desktop/progFolder$ sudo pip install psycopg2
Requirement already satisfied (use --upgrade to upgrade): psycopg2 in /usr/local/lib/python2.7/dist-packages
Cleaning up...
Then I tried to use a program that imports it:
user@ubuntu:~/Desktop/progFolder$ python myProg.py
Traceback (most recent call last):
File "myProg.py", line 6, in <module>
import psycopg2
ImportError: No module named psycopg2
And I have tried just importing directly in python:
user@ubuntu:~/Desktop/progFolder$ python
Python 2.7.5 (default, Nov 9 2014, 14:14:12)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named psycopg2
So I printed my python path.
>>> import sys
>>> print sys.path
['', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7', '/usr/local/lib/python2.7/plat-linux2', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages']
And noticed that the path does contain the path to psycopg2.
psycopg2 in /usr/local/lib/python2.7/dist-packages
So, I have no idea on why this is happening. Any help would be appreciated.
UPDATE: I have done
>>>help()
>>>modules
And psycopg2 was not listed among the other modules. (this does not help me
but may help you help me)
Answer: From your python path print, it looks like it **doesn't** have
`/usr/local/lib/python2.7/dist-packages` included in it. One way to add it is:
`sys.path.insert(0, "/usr/local/lib/python2.7/dist-packages")`
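A more permanent fix, assuming the root cause is two Python installations each with its own packages directory, is to install with the exact interpreter you run, so that pip and python agree on the path:
sudo python -m pip install psycopg2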
|
Allow user to change default text in tkinter entry widget
Question: I'm writing a python script that requires the user to enter the name of a
folder. For most cases, the default will suffice, but I want an entry box to
appear that allows the user to over-ride the default. Here's what I have:
from Tkinter import *
import time
def main():
    #some stuff
    def getFolderName():
        master = Tk()
        folderName = Entry(master)
        folderName.pack()
        folderName.insert(END, 'dat' + time.strftime('%m%d%Y'))
        folderName.focus_set()
        createDirectoryName = folderName.get()
        def callback():
            global createDirectoryName
            createDirectoryName = folderName.get()
            return
        b = Button(master, text="OK and Close", width=10, command=callback)
        b.pack()
        mainloop()
        return createDirectoryName
    getFolderName()
    #other stuff happens....
    return

if __name__ == '__main__':
    main()
I know next to nothing about tkInter and have 2 questions.
1. Is over-riding the default entry using `global createDirectoryName` within the callback function the best way to do this?
2. How can I make the button close the window when you press it? I've tried
def callback():
    global createDirectoryName
    createDirectoryName = folderName.get()
    master.destroy
but that simply destroys the window upon running the script.
Answer: I don't know how experienced you are with Tkinter, but I suggest you use
classes.
try:
    from tkinter import * #3.x
except:
    from Tkinter import * #2.x

class anynamehere(Tk): #you can make the class inherit from Tk directly
    def __init__(self): #__init__ is a special method that gets called whenever the class is instantiated
        Tk.__init__(self) #it has to be called __init__
        #further code here e.g.
        self.frame = Frame()
        self.frame.pack()
        self.makeUI()
        self.number = 0 # this will work anywhere in the class, so you don't need global all the time
    def makeUI(self):
        #code to make the UI
        self.number = 1 # no need for global
        #answer to question No.2
        Button(self.frame, command = self.destroy).pack()

anyname = anynamehere() #remember it already has Tk
anyname.mainloop()
Also, why do you want to override the default Entry behavior? The solution
would be to make another button and bind a command to it like this:
self.enteredtext = StringVar()
self.entry = Entry(frame, textvariable = self.enteredtext)
self.entry.pack()
self.button = Button(frame, text = "Submit", command = self.getfolder) #some other options: check the tkinter documentation for the full list
self.button.pack()

def getfolder(self): #I suggest making the UI in one method and the command in another
    text = self.enteredtext.get()
    #text now has what's been entered into the entry; do what you need to with it
|
How to save pytest's results/logs to a file?
Question: I am having trouble trying to save -all- of the results shown from pytest to a
file (txt, log, doesn't matter). In the test example below, I would like to
capture what is shown in console into a text/log file of some sort:
import pytest
import os
def test_func1():
    assert True

def test_func2():
    assert 0 == 1

if __name__ == '__main__':
    pytest.main(args=['-sv', os.path.abspath(__file__)])
Console output I'd like to save to a text file:
test-mbp:hi_world ua$ python test_out.py
================================================= test session starts =================================================
platform darwin -- Python 2.7.6 -- py-1.4.28 -- pytest-2.7.1 -- /usr/bin/python
rootdir: /Users/tester/PycharmProjects/hi_world, inifile:
plugins: capturelog
collected 2 items
test_out.py::test_func1 PASSED
test_out.py::test_func2 FAILED
====================================================== FAILURES =======================================================
_____________________________________________________ test_func2 ______________________________________________________
def test_func2():
> assert 0 == 1
E assert 0 == 1
test_out.py:9: AssertionError
========================================= 1 failed, 1 passed in 0.01 seconds ==========================================
test-mbp:hi_world ua$
Answer: It appears that all of your test output is going to _stdout_, so you simply
need to "redirect" your python invocation's output there:
python test_out.py >myoutput.log
You can also “tee” the output to multiple places. E.g., you might want to log
to the file yet also see the output on your console. The above example then
becomes:
python test_out.py | tee myoutput.log
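As an aside, pytest can also produce a machine-readable report by itself, independent of shell redirection, via its junitxml option; a minimal sketch of the same invocation:
pytest.main(args=['-sv', '--junitxml=results.xml', os.path.abspath(__file__)])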
|
Returning values from event handler function wxPython
Question: So in normal python scripts you can do something like this:
def func():
    i = 1
    return i

i = func()
Generally speaking, another python program would be able to import the file
containing this, call `i = func()`, and get the value of `i`.
Now I want to do something similar, but triggered when a user
presses a button or activates an event handler from the GUI. Something like:
import wx

class MyForm(wx.Frame):

    def __init__(self):
        wx.Frame.__init__(self, None, wx.ID_ANY, "Pressing dem keyz")
        # Add a panel so it looks correct on all platforms
        panel = wx.Panel(self, wx.ID_ANY)
        self.btn = wx.ToggleButton(panel, label="TOGGLE")
        self.btn2 = wx.ToggleButton(panel, label="TOGGLE 2", pos=(85,0))
        self.btn.Bind(wx.EVT_CHAR_HOOK, self.onKeyPress)
        self.btn2.Bind(wx.EVT_CHAR_HOOK, self.onKeyPress)

    def onKeyPress(self, event):
        space = False
        keycode = event.GetKeyCode()
        print keycode
        if keycode == ord(' '):
            print "SPACEBAR!"
            space = True
            self.btn.SetValue(space)
            if space == True:
                print "Lemme return i please"
                i = 1
                print i
                return i #ALSO PART OF WHAT I WANT TO DO
        elif keycode == wx.WXK_RETURN:
            self.btn
        elif keycode == wx.WXK_LEFT:
            self.btn2
            print 'YOU MOVED LEFT'
        elif keycode == wx.WXK_RIGHT:
            self.btn
            print 'YOU MOVED RIGHT'
        elif keycode == wx.WXK_UP:
            print 'YOU MOVED UP'
        elif keycode == wx.WXK_DOWN:
            print 'YOU MOVED DOWN'
        elif keycode == wx.WXK_ESCAPE:
            self.Destroy()
        #NOW HERE'S THE PART THAT I WANT TO IMPLEMENT
        return keycode
        #I do not need the keycode necessarily but I want to do something similar to this along with the |return i| above

# Run the program
if __name__ == "__main__":
    app = wx.App(False)
    frame = MyForm()
    frame.Show()
    app.MainLoop()
I have looked at these: [Call functions from within a wxPython event
handler](http://stackoverflow.com/questions/10567726/call-functions-from-within-a-wxpython-event-handler)
and [Return value from wxpython main
frame](http://stackoverflow.com/questions/20674937/return-value-from-wxpython-main-frame),
but they don't really address what I want to do here.
Answer: Yes, you can call an event handler independently, why not?
def OnClick(self, event=None):
    if event == None: return self.LastMousePosition
    self.LastMousePosition = event.GetMousePosition()

pos = OnClick()
Returning a value from an event handler, though, doesn't make sense. A returned
value may only indicate something to the caller. For instance, if a handler
returns True, something continues; if it returns False, the process stops.
Handlers are called only when events are triggered, either by GUI input, or when
you post an event manually, thus simulating it.
What you wrote in your answer is the way of passing values out of event
handlers. That is how it is done.
There is no "returning values from event handlers".
Maybe, but only maybe, somewhere there exists an event-driven system that stores
returns on a stack or somewhere for you to pick them up later. WX does not do this.
|
Single line commands from Python
Question: I am trying to change certain entries in a file using python. This is
possible in Perl with the command below, where the string in the file is
replaced in place:
[root@das~] perl -pi -w -e 's/unlock_time=1800/#unlock_time=1900/g;' /etc/pam.d/common-auth
To start off, I tried a simple command in python, but no luck; the code below
does not give any output. Any help in this direction would be appreciated.
[root@das~] python -c 'import os ; os.uname()[1]'
Answer: You need to add a print statement (with surrounding brackets for python 3.4;
without for python 2.7):
[root@das~] python -c 'import os ; print(os.uname()[1])'
The other line could then be programmed this way (note that this will replace
the input file!):
import fileinput

for line in fileinput.input('test.txt', inplace=True):
    line = line.rstrip('\n') # strip the trailing newline, since print adds one back
    if line.startswith('unlock_time'):
        print('# {}'.format(line))
    else:
        print(line)
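For a true one-liner analogous to the Perl command, a rough sketch (unlike `perl -pi -w`, this keeps no backup and is not atomic, so test it on a copy first):
python -c "p='/etc/pam.d/common-auth'; s=open(p).read(); open(p,'w').write(s.replace('unlock_time=1800','#unlock_time=1900'))"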
|
Python/Flask Login form throws 500 error on IIS
Question: Due to restrictions at work, I have to rely on hosting my webapp on **IIS
7.5**. I configured _IIS_ to serve the application via **wfastcgi.py**.
The issue is that the login form throws an **HTTP 500** error on IIS when
clicking on the login button. The site itself loads without issue. Also, the code
works perfectly fine when I run it on the flask development server.
I have an extended version of the code, with access to a sqlite DB. I was able
to narrow the issue down to the login session creation part. I am not sure
whether the issue is in the way _IIS_ handles _POST_ and _GET_ requests
with _wfastcgi.py_, or something else.
Here's the simplified version of the code, which throws the same error:
**UPDATE** : Changed the view functions to the proper return redirect.
import wiw_ad_controller as parser
import sys
from flask import Flask, flash, render_template, redirect, url_for, request, session, abort
import os

app = Flask(__name__)

@app.route('/')
@app.route('/index', methods=['GET', 'POST'])
def index():
    if not session.get('logged_in'):
        return render_template('login.html')
    else:
        return render_template('index.html')

@app.route('/login', methods=['POST'])
def login():
    if request.form['username'] == 'admin' and request.form['password'] == 'admin':
        session['logged_in'] = True
        flash("Login was successfull")
    else:
        flash("Wrong password!")
    return redirect(url_for('index'))

@app.route('/search', methods=['GET','POST'])
def lookup_data():
    if (session.get('logged_in')==True):
        if request.method == 'POST' or request.method == 'GET':
            if (request.form['gpn'] and request.form['gpn'].isdigit()):
                #code removed - not needed
                return render_template('fetched_data.html', user_wiw=user_wiw, user_ad=user_ad)
            else:
                flash('Wrong ID')
                return index()
    return redirect(url_for('index'))

@app.route("/logout")
def logout():
    session['logged_in'] = False
    return redirect(url_for('index'))

@app.errorhandler(500)
def internal_error(exception):
    app.logger.exception(exception)
    return render_template('500.html'), 500

@app.errorhandler(404)
def internal_error(exception):
    app.logger.exception(exception)
    return render_template('404.html'), 404

if __name__ == '__main__':
    app.secret_key=os.urandom(12)
    app.run(debug=True)
Answer: Found out the issue. I didn't realize that when my script is called via
**wfastcgi.py**, the main function never runs.
The login feature did not work because it needed
`app.secret_key=os.urandom(12)` to be run first.
I used the following code by _Miguel Grinberg_ to find out what the issue was,
since `debug=True` does not provide any output in an **IIS** prod environment.
import logging
from logging.handlers import RotatingFileHandler

@app.errorhandler(500)
def internal_error(exception):
    app.logger.exception(exception)
    file_handler = RotatingFileHandler(r'C:\inetpub\wwwroot\logs.log', 'a', 1 * 1024 * 1024, 10)
    file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s: %(message)s [in %(pathname)s:%(lineno)d]'))
    app.logger.setLevel(logging.INFO)
    file_handler.setLevel(logging.INFO)
    app.logger.addHandler(file_handler)
    app.logger.info('microblog startup')
    return render_template('500.html'), 500
Hope, this will help others in the same situation as me.
**Note:** Read/Write permissions have to be set on wwwroot and all files in
order for this to work.
|
Spark + Python - how to set the system environment variables?
Question: I'm on spark-1.4.1. How can I set the system environment variables for Python?
For instance, in R,
Sys.setenv(SPARK_HOME = "C:/Apache/spark-1.4.1")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
What about in Python?
import os
import sys
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="PythonSQL")
sqlContext = SQLContext(sc)

# Set the system environment variables.
# ref: https://github.com/apache/spark/blob/master/examples/src/main/python/sql.py
if len(sys.argv) < 2:
    path = "file://" + \
        os.path.join(os.environ['SPARK_HOME'], "examples/src/main/resources/people.json")
else:
    path = sys.argv[1]

# Create the DataFrame
df = sqlContext.jsonFile(path)

# Show the content of the DataFrame
df.show()
I get this error,
df is not defined.
[](http://i.stack.imgur.com/Aduko.png)
Any ideas?
Answer: Just try it like this:
<https://spark.apache.org/docs/latest/sql-programming-guide.html#creating-dataframes>
By providing `path = "examples/src/main/resources/people.json"` as a parameter
to `df = sqlContext.jsonFile(path)`.
If you don't provide arguments when you run your python script, then it goes
into the `if len(sys.argv) < 2:` branch, which requires you to have defined
`SPARK_HOME` as a system variable. If you haven't, it won't find your specified
.json file, which seems to be your problem.
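To answer the literal question about environment variables: the Python counterpart of R's `Sys.setenv` is the `os.environ` mapping, and it must be set before the `SparkContext` is created for Spark to pick it up; a minimal sketch:
import os

os.environ["SPARK_HOME"] = "C:/Apache/spark-1.4.1"  # analogous to Sys.setenv(SPARK_HOME = ...)
print(os.environ.get("SPARK_HOME"))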
|
Exceptions: Throw in C# and except in (Iron)Python?
Question: Anybody have an idea how to achieve this?
I've been looking for a solution, and all I find are ways to 'throw in python
and catch in C#'.
Ideally I'd like to have a C# method and wrap all my py code in a try/except
block. When the C# method throws, I'd like the py except block to catch it.
Is it possible at all? My last attempt:
ScriptEngine pyEngine = Python.CreateEngine(options);
dynamic pyScope = pyEngine.CreateScope();
Action<string> fire = (s) => { throw new Exception(); };
pyScope.Fire = fire;
// ... Load the script...
compiled = source.Compile();
compiled.Execute(pyScope);
// ...
// Somewhere else from a function called by the py script itself
void calledByPy()
{
    m_pyscope.Fire("s");
}
On the python side my script looks like:
try:
    calledByPy()
except System.Exception, e:
    print str(e)
I'd like to see that line `print str(e)` called.
Answer: I don't think the trouble is in your Python. I can't quite speak to all the
C#, since I load C# assemblies into IronPython with `clr.AddReference`
rather than running an interpreter with `ScriptEngine`.
This sort of thing works fine:
import clr
import System
clr.AddReference(project_path + '\\bin\\Debug')
from Namespace import CSharpThing

try:
    foo = CSharpThing.DoStuff() # Multithreaded. Can throw an AggregateException.
except System.AggregateException as ae:
    # I want more information than "One or more exceptions occurred"
    for e in ae.InnerExceptions:
        print(e)
    raise # Rethrow the original exception, so I get a stack trace to it.
|
Plot 4th dimension with Python
Question: I would like to know whether it is possible to plot in four dimensions
using python. In particular I would like to have a three-dimensional mesh X, Y, Z
with **f(X,Y,Z) = 1** or **f(X,Y,Z) = 0**. So I need a symbol (for example "o"
or "x") for specific points (X,Y,Z). I don't need a color scale.
Note that I have 100 matrices (512*512) composed of 1s and 0s, so my mesh
should be 512*512*100.
I hope I have been clear! Thanks.
EDIT: This is my code:
X = np.arange(W.shape[2])
Y = np.arange(W.shape[1])
Z = np.arange(W.shape[0])
X, Y, Z = np.meshgrid(X, Y, Z)
fig = plt.figure()
ax = fig.gca(projection='3d')
for z in range(W.shape[0]):
    indexes = np.where(W[z])
    ax.scatter(X[indexes], Y[indexes], ???, marker='.')
ax.set_xlabel('X = columns')
ax.set_ylabel('Y = rows')
ax.set_zlabel('Z')
plt.show()
W is my three-dimensional matrix, so **W[0], W[1], etc.** are 512x512 matrices.
My question is: what do I have to write instead of ??? in my code? I know I
shouldn't ask this, but I can't grasp the idea.
Answer: You could inspect the value of f(x,y,z) for layers of z to see whether they
are non-zero or not, and then scatterplot the function based on this.
E.g. for `nz` layers of `(n,n)` matrices, each a slice of a sphere:
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

n, nz = 48, 24
x, y = np.linspace(-n//2,n//2-1,n), np.linspace(-n//2,n//2-1,n)
X, Y = np.meshgrid(x, y)

def f(x,y,z):
    return (X**2 + Y**2 + (z-nz//2)**2) < (n*0.2)**2

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for z in range(nz):
    layer = f(X, Y, z)
    indexes = np.where(layer)
    ax.scatter(X[indexes], Y[indexes], layer[indexes]*(z-nz//2), marker='.')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
[](http://i.stack.imgur.com/ShXbg.jpg)
For random non-zero elements of f(x,y,z):
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

n, nz = 12, 10
x, y, z = np.linspace(0,n-1,n), np.linspace(0,n-1,n), np.linspace(0,nz-1,nz)
X, Y, Z = np.meshgrid(x, y, z)
f = np.random.randint(2, size=(n,n,nz))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for z in range(nz):
    indexes = np.where(f[...,z])
    ax.scatter(X[indexes], Y[indexes], f[indexes]+z, marker='.')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
[](http://i.stack.imgur.com/7zFRJ.png)
But with your large arrays, you may run into problems (a) with memory and the
speed of the plotting, and (b) with being able to resolve detail in the
"central" block of the plot.
|
Parsing XML with namespaces into a dataframe
Question: I have the following simplified XML:
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <ReadResponse xmlns="ABCDEFG.com">
      <ReadResult>
        <Value>
          <Alias>x1</Alias>
          <Timestamp>2013-11-11T00:00:00</Timestamp>
          <Val>113</Val>
          <Duration>5000</Duration>
          <Quality>128</Quality>
        </Value>
        <Value>
          <Alias>x1</Alias>
          <Timestamp>2014-11-11T00:02:00</Timestamp>
          <Val>110</Val>
          <Duration>5000</Duration>
          <Quality>128</Quality>
        </Value>
        <Value>
          <Alias>x2</Alias>
          <Timestamp>2013-11-11T00:00:00</Timestamp>
          <Val>101</Val>
          <Duration>5000</Duration>
          <Quality>128</Quality>
        </Value>
        <Value>
          <Alias>x2</Alias>
          <Timestamp>2014-11-11T00:02:00</Timestamp>
          <Val>122</Val>
          <Duration>5000</Duration>
          <Quality>128</Quality>
        </Value>
      </ReadResult>
    </ReadResponse>
  </soap:Body>
</soap:Envelope>
and would like to parse it into a dataframe with the following structure
(keeping some of the tags and discarding the rest):
Timestamp x1 x2
2013-11-11T00:00:00 113 101
2014-11-11T00:02:00 110 122
The problem is that since the XML file includes namespaces, I don't know how to
proceed. I have gone through several tutorials (e.g.,
<https://docs.python.org/2/library/pyexpat.html>) and questions (e.g., [How to
open this XML file to create dataframe in
Python?](http://stackoverflow.com/questions/26077231/how-to-open-this-xml-file-to-create-dataframe-in-python)
and [Parsing XML with namespace in Python via
'ElementTree'](http://stackoverflow.com/questions/14853243/parsing-xml-with-namespace-in-python-via-elementtree))
but none of them have helped/worked. I appreciate it if anyone can help me sort
this out.
Answer: Here is an example of how to parse the xml using lxml and xpath:
from lxml import etree
namespaces = {'abc': "ABCDEFG.com"}
xmltree = etree.fromstring(xml_string)
items = xmltree.xpath('//abc:Alias/text()', namespaces=namespaces)
print items
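To go from those xpath results to the dataframe layout asked for, one possible sketch (assuming pandas is available; the pivot on Alias produces the x1/x2 columns):
import pandas as pd
from lxml import etree

ns = {'abc': 'ABCDEFG.com'}
root = etree.fromstring(xml_string)
rows = []
for value in root.xpath('//abc:Value', namespaces=ns):
    rows.append({'Alias': value.findtext('abc:Alias', namespaces=ns),
                 'Timestamp': value.findtext('abc:Timestamp', namespaces=ns),
                 'Val': int(value.findtext('abc:Val', namespaces=ns))})
# one row per Timestamp, one column per Alias
df = pd.DataFrame(rows).pivot(index='Timestamp', columns='Alias', values='Val').reset_index()
print df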
|
Can't find any info on Python's read() method (python 2.7)
Question: I'm trying to learn Python by going through Zed Shaw's "Learn Python the
Hard Way" and I'm stuck at what may seem like a very trivial thing: I'm unable to
find any info on the .read() method. This is what he says in the book:
> Run pydoc file and scroll down until you see the .read() command
> (method/function). See all the other ones you can use? Skip the ones that
> have __ (two underscores) in front because those are junk. Try some of the
> other commands.
If I call `python -m pydoc` I only get a very short list of information
regarding `pydoc.py`, `pydoc.py -k`, `pydoc.py -p`, `pydoc.py -g`, and
`pydoc.py -w`
When I tried calling `python -m pydoc read` I got the following message:
> no Python documentation found for 'read'
Is the `.read` method embedded in Python by default or does it have to be
imported first? I should note that I'm using Powershell on Windows 7. What
could be the problem here?
Answer: `read` is a method on file objects. Use:
python -m pydoc file
to get the documentation for file objects. Note that this is exactly what the
book told you to do, but it appears you left off the `file` argument.
Alternatively, ask for just the method:
python -m pydoc file.read
The official documentation, however, is far richer and more useful. See the
documentation on [_File
Objects_](https://docs.python.org/2/library/stdtypes.html#file-objects), for
example.
|
How to redirect stderr to variable in Python 2.7?
Question: I am trying to redirect the stderr of a script to a variable, to use it in
`if/else` statements. What I need is for the program to behave in one way or
another according to the stderr. I have found [this
post](https://wrongsideofmemphis.wordpress.com/2010/03/01/store-standard-output-on-a-variable-in-python/)
using StringIO which may suit my needs, but I cannot figure out how to make it
redirect stderr. I'm sure it is quite easy, but I'm quite new to python and any
detailed help will be really appreciated. Thanks!
Dani
Answer: In order to intercept writes to stderr, use the following 'hack':
import sys

STDERR = ''

def new_stderr(old):
    def new(*args):
        # put your code here; you will intercept writes to stderr
        print('Intercepted: ' + repr(args))
        global STDERR # append the new write to STDERR
        STDERR += args[0]
        old(*args)
    return new

sys.stderr.write = new_stderr(sys.stderr.write)
All writes to stderr will be stored in the STDERR variable.
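Alternatively, along the lines of the StringIO post linked in the question, you can swap out `sys.stderr` wholesale and read everything back afterwards; a minimal sketch for Python 2.7:
import sys
from StringIO import StringIO

captured = StringIO()
old_stderr = sys.stderr
sys.stderr = captured  # everything written to stderr now lands in `captured`
try:
    sys.stderr.write("some error text\n")
finally:
    sys.stderr = old_stderr  # always restore the real stderr
if 'error' in captured.getvalue():
    print 'stderr contained an error'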
|
Where comes the output message when submitting a python file to spark using spark-submit
Question: I'm trying out the spark-submit command to submit my python app to a
cluster (a 3-machine cluster on AWS EMR). Surprisingly, I cannot see any of the
intended output from the task. I then simplified my app to only print out some
fixed strings, but I still didn't see any of those printed messages. I'm attaching
the app and command below. I hope someone can help me find the reason. Many thanks!
## submit-test.py:
import sys
from pyspark import SparkContext

if __name__ == "__main__":
    sc = SparkContext(appName="sparkSubmitTest")
    for item in range(50):
        print "I love this game!"
    sc.stop()
* * *
## The command I used is:
./spark/bin/spark-submit --master yarn-cluster ./submit-test.py
## The output I got is below:
[hadoop@ip-172-31-34-124 ~]$ ./spark/bin/spark-submit --master yarn-cluster ./submit-test.py
15/08/04 23:50:25 INFO client.RMProxy: Connecting to ResourceManager at /172.31.34.124:9022
15/08/04 23:50:25 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
15/08/04 23:50:25 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
15/08/04 23:50:25 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/08/04 23:50:25 INFO yarn.Client: Setting up container launch context for our AM
15/08/04 23:50:25 INFO yarn.Client: Preparing resources for our AM container
15/08/04 23:50:25 INFO yarn.Client: Uploading resource file:/home/hadoop/.versions/spark-1.3.1.e/lib/spark-assembly-1.3.1-hadoop2.4.0.jar -> hdfs://172.31.34.124:9000/user/hadoop/.sparkStaging/application_1438724051797_0007/spark-assembly-1.3.1-hadoop2.4.0.jar
15/08/04 23:50:26 INFO metrics.MetricsSaver: MetricsConfigRecord disabledInCluster: false instanceEngineCycleSec: 60 clusterEngineCycleSec: 60 disableClusterEngine: false maxMemoryMb: 3072 maxInstanceCount: 500
15/08/04 23:50:26 INFO metrics.MetricsSaver: Created MetricsSaver j-2LU0EQ3JH58CK:i-048c1ded:SparkSubmit:24928 period:60 /mnt/var/em/raw/i-048c1ded_20150804_SparkSubmit_24928_raw.bin
15/08/04 23:50:27 INFO metrics.MetricsSaver: 1 aggregated HDFSWriteDelay 1053 raw values into 1 aggregated values, total 1
15/08/04 23:50:27 INFO yarn.Client: Uploading resource file:/home/hadoop/submit-test.py -> hdfs://172.31.34.124:9000/user/hadoop/.sparkStaging/application_1438724051797_0007/submit-test.py
15/08/04 23:50:27 INFO yarn.Client: Setting up the launch environment for our AM container
15/08/04 23:50:27 INFO spark.SecurityManager: Changing view acls to: hadoop
15/08/04 23:50:27 INFO spark.SecurityManager: Changing modify acls to: hadoop
15/08/04 23:50:27 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
15/08/04 23:50:27 INFO yarn.Client: Submitting application 7 to ResourceManager
15/08/04 23:50:27 INFO impl.YarnClientImpl: Submitted application application_1438724051797_0007
15/08/04 23:50:28 INFO yarn.Client: Application report for application_1438724051797_0007 (state: ACCEPTED)
15/08/04 23:50:28 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1438732227551
final status: UNDEFINED
tracking URL: http://172.31.34.124:9046/proxy/application_1438724051797_0007/
user: hadoop
15/08/04 23:50:29 INFO yarn.Client: Application report for application_1438724051797_0007 (state: ACCEPTED)
15/08/04 23:50:30 INFO yarn.Client: Application report for application_1438724051797_0007 (state: ACCEPTED)
15/08/04 23:50:31 INFO yarn.Client: Application report for application_1438724051797_0007 (state: ACCEPTED)
15/08/04 23:50:32 INFO yarn.Client: Application report for application_1438724051797_0007 (state: ACCEPTED)
15/08/04 23:50:33 INFO yarn.Client: Application report for application_1438724051797_0007 (state: ACCEPTED)
15/08/04 23:50:34 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:34 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: ip-172-31-39-205.ec2.internal
ApplicationMaster RPC port: 0
queue: default
start time: 1438732227551
final status: UNDEFINED
tracking URL: http://172.31.34.124:9046/proxy/application_1438724051797_0007/
user: hadoop
15/08/04 23:50:35 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:36 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:37 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:38 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:39 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:40 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:41 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:42 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:43 INFO yarn.Client: Application report for application_1438724051797_0007 (state: RUNNING)
15/08/04 23:50:44 INFO yarn.Client: Application report for application_1438724051797_0007 (state: FINISHED)
15/08/04 23:50:44 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: ip-172-31-39-205.ec2.internal
ApplicationMaster RPC port: 0
queue: default
start time: 1438732227551
final status: SUCCEEDED
tracking URL: http://172.31.34.124:9046/proxy/application_1438724051797_0007/A
user: hadoop
* * *
Answer: Posting my answer here, since I didn't find it elsewhere.
I first tried `yarn logs -applicationId application_xxxx`, but was told that "Log
aggregation has not completed or is not enabled".
Here are the steps to dig out the print messages:
1. Follow the link at the end of the execution, http://172.31.34.124:9046/proxy/application_1438724051797_0007/A (reverse ssh and a proxy need to be set up for this).
2. at the application overview page, find out the AppMaster Node id: ip-172-31-41-6.ec2.internal:9035
3. go back to AWS EMR cluster list, find out the public dns for this id.
4. ssh from the driver node into this AppMaster Node. same key_pair.
5. cd /var/log/hadoop/userlogs/application_1438796304215_0005/container_1438796304215_0005_01_000001 (always choose the first container).
6. cat stdout
As you can see, it's very convoluted. You would probably be better off writing the
output to a file hosted on S3.
|
How would I call a function within a class from the same class in python?
Question: Given this basic example, which in reality is more complex (the name of
the file is TestFile):
class Example:
    def test1():
        print("First test")
    def test2():
        print("Second test")
    def test3():
        #Call test1()
        #Call test2()
I've tried `Example.test1()` and `Example.test2()`, which work, but if
someone were to do
from TestFile import Example as MagicTrick
Wouldn't the final function break?
Answer: No, it won't break because of that sort of import. Using `as` to import
something under a different name only affects what it's called in the place
where it's imported. In your original file `TestFile`, the class will still be
called `Example` and can still refer to itself as such. In other words, using
`as` in the import just creates a "local alias"; it doesn't change any other
names the imported object might have referring to it.
In reality, you would typically call `self.test1()` in order to call the
method on an instance. That won't work for your example because you didn't
provide a `self` argument, but perhaps that was an oversight in your "basic"
example.
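For completeness, a minimal sketch of the usual pattern with `self`, so the methods can call each other on an instance:
class Example:
    def test1(self):
        print("First test")
    def test2(self):
        print("Second test")
    def test3(self):
        self.test1()  # call sibling methods through self
        self.test2()

Example().test3()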
|
sentiment analysis of Non-English tweets in python
Question: Objective: To classify each tweet as positive or negative and write it to an
output file which will contain the username, original tweet and the sentiment
of the tweet.
Code:
import re, math

input_file = "raw_data.csv"
fileout = open("Output.txt", "w")
wordFile = open("words.txt", "w")
expression = r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)"
fileAFINN = 'AFINN-111.txt'
afinn = dict(map(lambda (w, s): (w, int(s)), [ws.strip().split('\t') for ws in open(fileAFINN)]))
pattern = re.compile(r'\w+')
pattern_split = re.compile(r"\W+")
words = pattern_split.split(input_file.lower())
print "File processing started"
with open(input_file, 'r') as myfile:
    for line in myfile:
        line = line.lower()
        line = re.sub(expression, " ", line)
        words = pattern_split.split(line.lower())
        sentiments = map(lambda word: afinn.get(word, 0), words)
        #print sentiments
        # How should you weight the individual word sentiments?
        # You could do N, sqrt(N) or 1 for example. Here I use sqrt(N)
        """
        Returns a float for sentiment strength based on the input text.
        Positive values are positive valence, negative values are negative valence.
        """
        if sentiments:
            sentiment = float(sum(sentiments)) / math.sqrt(len(sentiments))
            #wordFile.write(sentiments)
        else:
            sentiment = 0
        wordFile.write(line + ',' + str(sentiment) + '\n')
        fileout.write(line + '\n')
print "File processing completed"
fileout.close()
myfile.close()
wordFile.close()
Issue: currently the output.txt file looks like
abc some tweet text 0
bcd some more tweets 1
efg some more tweet 0
Question 1: How do I add a comma between the userid, tweet-text and sentiment? The
output should be like:
abc,some tweet text,0
bcd,some other tweet,1
efg,more tweets,0
Question 2: The tweets are in Bahasa Melayu (BM) and the AFINN dictionary that
I am using is of English words. So the classification is wrong. Do you know
any BM dictionary that I can use?
Question 3: How do I pack this code in a JAR file?
Thank you.
Answer: Question 1:
`output.txt` is currently simply composed of the lines you are reading in,
because of `fileout.write(line+'\n')`. Since the data is space separated, you can
split each line pretty easily:
line_data = line.split(' ') # Split the line into a list, separated by spaces
user_id = line_data[0] # The first element of the list
tweets = line_data[1:-1] # The middle elements of the list
sentiment = line_data[-1] # The last element of the list
fileout.write(user_id + "," + " ".join(tweets) + "," + sentiment +'\n')
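If a tweet can itself contain a comma, joining with "," by hand produces ambiguous lines; a hedged alternative is the `csv` module, which quotes such fields automatically (a sketch reusing the names from the snippet above):
import csv

with open("Output.txt", "wb") as fileout:  # 'wb' mode for Python 2's csv module
    writer = csv.writer(fileout)
    writer.writerow([user_id, " ".join(tweets), sentiment])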
Question 2: A quick google search gave me this. Not sure if it has everything
you will need though:
<https://archive.org/stream/grammardictionar02craw/grammardictionar02craw_djvu.txt>
Question 3: Try Jython <http://www.jython.org/archive/21/docs/jythonc.html>
|
How to pass Command Line arguments in Robot framework to make it available for all libraries?
Question: I have developed a few libraries for Robot Framework for my feature
testing. For these libraries all variables come from a variables.py file. Below
is the code block for variables.py:
#!/usr/bin/env python
import sys
import os
import optparse
import HostProperties
import xml.etree.ElementTree as ET
from robot.api import logger
testBed = 748
tree = ET.parse('/home/p6mishra/mybkp/testLibs/TestBedProperties.xml')
class raftGetTestBedProp(object):
    def GetTestBedNumber(self):
        _attributeDict = {}
        root = tree.getroot()
        for _tbProperties in root:
            for _tbNumber in _tbProperties:
                get_tb = _tbNumber.attrib
                if get_tb['name'] == str(testBed):
                    get_tb2 = _tbNumber.attrib
                    return root, get_tb2['name']

    def GetTestBedProperties(self, root, testBedNumber):
        propertyList = []
        for _tbProperties in root:
            get_tb = _tbProperties.attrib
            for _tbProperty in _tbProperties:
                get_tb1 = _tbProperty.attrib
                if get_tb1['name'] == str(testBedNumber):
                    for _tbPropertyVal in _tbProperty:
                        get_tb2 = _tbPropertyVal.attrib
                        if 'name' in get_tb2.keys():
                            propertyList.append(get_tb2['name'])
        return propertyList

    def GetIPNodeType(self, root, testBedNumber):
        for tbNumber1 in root.findall('tbproperties'):
            for tbNumber in tbNumber1:
                ipv4support = tbNumber.find('ipv4support').text
                ipv6support = tbNumber.find('ipv6support').text
                lbSetup = tbNumber.find('lbSetup').text
                name = tbNumber.get('name')
                if name == str(testBedNumber):
                    return ipv4support, ipv6support, lbSetup

obj1, obj2 = raftGetTestBedProp().GetTestBedNumber()
ipv4support, ipv6support, lbSetup = raftGetTestBedProp().GetIPNodeType(obj1, obj2)
AlltestBedProperties = raftGetTestBedProp().GetTestBedProperties(obj1, obj2)
HostPropertyDict = {}
for testBedProperty in AlltestBedProperties:
    try:
        val1 = getattr(HostProperties, testBedProperty)
        HostPropertyDict[testBedProperty] = val1
    except:
        logger.write("Error in the Configuration data. Please correct and then proceed with the testing", 'ERROR')
for indexVal in range(len(AlltestBedProperties)):
    temp = AlltestBedProperties[indexVal]
    globals()[temp] = HostPropertyDict[temp]
This variables.py file returns all variables defined in the HostProperties.py
file based on the testbed number. If I import this library like `from variables
import *` in other libraries, it gives me the required variables.
But the problem is that I have hardcoded 748 here, so it works fine for me; I
want to pass this testbed number via the `pybot` command and make it
available to my Robot test case as well as to all the developed libraries.
Answer: Can you post the Robot Framework code you use to call these Python files? I
think you could use _pybot -v testBed:748_ and pass it as a parameter to your
class's __init__. I am not sure without seeing how you initialize your Python
variables. A slightly different way is to use environment variables:
#!/usr/bin/env python
import sys
import os
import optparse
import HostProperties
import xml.etree.ElementTree as ET
from robot.api import logger
testBed = os.environ['testbed']
tree = ET.parse('/home/p6mishra/mybkp/testLibs/TestBedProperties.xml')
Before starting pybot just define this environment parameter:
export testbed=748
pybot tests.txt
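Robot Framework also supports variable files that take arguments from the command line (`--variablefile variables.py:748`); the file then implements a `get_variables` function, so the testbed number no longer has to be hardcoded. A rough sketch of how variables.py could be restructured (the returned dict's contents are an assumption based on the code above):
def get_variables(testbed="748"):
    testBed = int(testbed)
    # ... run the raftGetTestBedProp logic above with this testBed ...
    return {"testBed": testBed}  # plus the other computed variables
You would then start the tests with `pybot --variablefile variables.py:748 tests.txt`.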
|
Scrollbar into a python tkinter discussion
Question:
from tkinter import *
window = Tk()
ia_answers= "test\n"
input_frame = LabelFrame(window, text="User :", borderwidth=4)
input_frame.pack(fill=BOTH, side=BOTTOM)
input_user = StringVar()
input_field = Entry(input_frame, text=input_user)
input_field.pack(fill=BOTH, side=BOTTOM)
def onFrameConfigure(canvas):
    '''Reset the scroll region to encompass the inner frame'''
    canvas.configure(scrollregion=canvas.bbox("all"))
canvas = Canvas(window, borderwidth=0, background="white")
ia_frame = LabelFrame(canvas, text="Discussion",borderwidth = 15, height = 100, width = 100)
ia_frame.pack(fill=BOTH, side=TOP)
scroll = Scrollbar(window, orient="vertical", command=canvas.yview)
canvas.configure(yscrollcommand=scroll.set)
scroll.pack(side=RIGHT, fill=Y)
canvas.pack(fill=BOTH, expand=True)
canvas.create_window((4,4), window=ia_frame, anchor="nw")
ia_frame.bind("<Configure>", lambda event, canvas=canvas:onFrameConfigure(canvas))
user_says = StringVar()
user_text = Label(ia_frame, textvariable=user_says, anchor = NE, justify = RIGHT, bg="white")
user_text.pack(fill=X)
ia_says = StringVar()
ia_text = Label(ia_frame, textvariable=ia_says, anchor = NW, justify = LEFT, bg="white")
ia_text.pack(fill=X)
user_texts = []
ia_texts = []
user_says_list = []
ia_says_list = []
def Enter_pressed(event):
    """Took the current string in the Entry field."""
    input_get = input_field.get()
    input_user.set("")
    user_says1 = StringVar()
    user_says1.set(input_get + "\n")
    user_text1 = Label(ia_frame, textvariable=user_says1, anchor = NE, justify = RIGHT, bg="white")
    user_text1.pack(fill=X)
    user_texts.append(user_text1)
    user_says_list.append(user_says1)
    ia_says1 = StringVar()
    ia_says1.set(ia_answers)
    ia_text1 = Label(ia_frame, textvariable=ia_says1, anchor = NW, justify = LEFT, bg="white")
    ia_text1.pack(fill=X)
    ia_texts.append(ia_text1)
    ia_says_list.append(ia_says1)
input_field.bind("<Return>", Enter_pressed)
window.mainloop()
Hi, I'm trying to build a GUI with tkinter but I've got two problems: the
LabelFrame/Canvas doesn't fill the window entirely, and I can't get the
scrollbar to automatically scroll down. Can you help me with that? Thank you
very much. Ilan Rossler.
Answer: You need to manually control the width of the inner frame since it is being
managed by the canvas. You can change the width in a binding to the
`<Configure>` event of the canvas (ie: when the canvas changes size, you must
change the size of the frame).
You'll need to be able to reference the window object on the canvas, which
means you need to save the id, or give it a tag.
Here's an example of giving it a tag:
canvas.create_window((4,4), window=ia_frame, anchor="nw", tags=("innerFrame",))
And here's how to change the width when the canvas changes size:
def onCanvasConfigure(event):
    canvas = event.widget
    canvas.itemconfigure("innerFrame", width=canvas.winfo_width() - 8)

canvas.bind("<Configure>", onCanvasConfigure)
To scroll down, call the `yview` command just like the scrollbar does. You
need to make this happen _after_ the window has had a chance to refresh.
For example, add this as the very last line in `Enter_pressed`:
def Enter_pressed(event):
    ...
    canvas.after_idle(canvas.yview_moveto, 1.0)
|
Except block not catching exception in ipython notebook
Question: When I try this simple example in my current Python environment (an
ipython notebook cell), I am not able to catch the TypeError exception:
a = (2,3)
try:
    a[0] = 0
except TypeError:
    print "catched expected error"
except Exception as ex:
    print type(ex), ex
I get:
<type 'exceptions.TypeError'> 'tuple' object does not support item assignment
When I run the same copy-pasted code in a different ipython notebook on
the same computer I get the expected output: `catched expected error`.
I understand it has something to do with my current environment, but I have no
idea where to start looking! I also tried another example with AttributeError,
and in that case the except block works.
**EDIT:** When I tried:
>>> print AttributeError
<type 'exceptions.AttributeError'>
>>> print TypeError
<type 'exceptions.AttributeError'>
I remembered that earlier in the session I made an error which rebound
TypeError:
try:
    group.apply(np.round, axis=1) #group is a pandas group
except AttributeError, TypeError:
    #it should have been: except (AttributeError, TypeError)
    print ex
which gave me:
('rint', u'occurred at index 54812')
Answer: I think it may be that TypeError has to be explicitly re-imported in some
environments:
from exceptions import TypeError
Give that a go!
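Since the broken `except` clause rebound the name at the notebook's top level, deleting that binding should also make the builtin visible again:
>>> del TypeError   # drop the accidental binding from the notebook namespace
>>> print TypeError
<type 'exceptions.TypeError'>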
|
Python 3.4 cx_freeze [WinError 5] using Selenium - Only on other machines
Question: I've recently started delving into cx_freeze and creating .exe files for
other people to use.
The script is fairly simple: it uses Selenium to scrape javascript-sensitive
content on a website, gives the user a notification when it finds a
matching href, and copies the link to the clipboard.
The main code in main.py:
from bs4 import BeautifulSoup
from selenium import webdriver
import time
import pyperclip

def check():
    browser.get(browser.current_url)
    page_html = browser.page_source.encode('utf8')
    soup = BeautifulSoup(page_html, "lxml")
    complete_list = soup.find_all('a', href=True)
    for a in complete_list:
        if LINK_TO_FIND in a['href']:
            pyperclip.copy(a['href'])
            while True:
                beep()

browser = webdriver.Chrome(executable_path=path_to_chromedriver)
browser.get(URL_TO_CHECK)
while True:
    check()
    time.sleep(5)
The cx_freeze code in setup.py:
import sys
from cx_Freeze import setup, Executable

build_exe_options = {"packages": ["os", "lxml", "gzip"], "excludes": ["tkinter"]}
base = 'Console'

setup(name = "web_scraper",
      version = "0.1",
      description = "desc",
      options = {"build_exe": build_exe_options},
      executables = [Executable("main.py", base=base)])
Until yesterday, this script was running fine on both my machine and other
machines. But starting yesterday, this error has started to pop up whenever
someone else runs a newly built .exe (new ones still work fine for me, and old
versions still work on other machines):
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\selenium\webdriver\chrome\service.py", line 68, in start
File "C:\Python34\lib\subprocess.py", line 859, in __init__
File "C:\Python34\lib\subprocess.py", line 1112, in _execute_child
PermissionError: [WinError 5] Access is denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\cx_Freeze\initscripts\Console.py", line 27, in (module)
File "main.py", line 48, in (module)
File "C:\Python34\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 62, in __init__
File "C:\Python34\lib\site-packages\selenium\webdriver\chrome\service.py", line 80, in start
selenium.common.exceptions.WebDriverException: Message: 'exe.win32-3.4' executable may have wrong permissions. Please see https://sites.google.com/a/chromium.org/chromedriver/home
Some things I've tried:
* Compiling old versions just to check if there was something wrong with the code.
* Launch the console as admin when doing `python setup.py build`
* Disable my antivirus
* Made sure chromedriver.exe is in the right place (if not, another error is raised).
Answer: Alright, so after about 4 hours of troubleshooting I realized that
`path_to_chromedriver` was missing `\chromedriver.exe` at the end in the
version I was sending over, but was correct in the version I was using
locally. Shoot me.
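One way to make this class of mistake harder to repeat is to build the path explicitly and assert on it before starting the driver (a sketch; `driver_dir` is a hypothetical name for wherever chromedriver ships next to the frozen exe):
import os

path_to_chromedriver = os.path.join(driver_dir, "chromedriver.exe")  # driver_dir is hypothetical
assert os.path.isfile(path_to_chromedriver), path_to_chromedriver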
|
File uploaded to salesforce empty using python and simple_salesforce
Question: I'm trying to upload a file to the Folder object in salesforce using
python and simple_salesforce. The file is uploaded but is empty. Can anyone tell
me why, and how to fix the problem? Thanks.
import base64
import json
import requests
from simple_salesforce import Salesforce

userName = 'username'
passWord = 'password'
securitytoken = 'securitytoken'
sf = Salesforce(username=userName, password=passWord, security_token=securitytoken, sandbox=True)
sessionId = sf.session_id

body = ""
with open("Info.txt", "r") as f:
    body = base64.b64encode(f.read())

response = requests.post('https://cs17.my.salesforce.com/services/data/v23.0/sobjects/Document/',
    headers = {'Content-type': 'application/json', 'Authorization': 'Bearer %s' % sessionId},
    data = json.dumps({
        'Description': 'Information',
        'Keywords': 'Information',
        'FolderId': '00lg0000000MQykAAG',
        'Name': 'Info',
        'Type': 'txt'
    })
)
print response.text
Answer: Based on the code above, your request is incomplete. You create the `body`
value as the b64 encoding of the file you wish to upload, but it isn't included
in your data under a 'Body' key.
Depending on your version of json, you may experience some problems getting
the output to play nicely with Salesforce. I switched from json to simplejson
and that worked better for me.
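Concretely, the Document object's base64 field is named `Body`, so the sketch below adds the encoded content to the payload (the rest of the request stays as above):
data = json.dumps({
    'Description': 'Information',
    'Keywords': 'Information',
    'FolderId': '00lg0000000MQykAAG',
    'Name': 'Info',
    'Type': 'txt',
    'Body': body  # the base64-encoded file content; without it the Document is created empty
})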
|
Python: How to open a multisheet .xlsx file (with formatting), edit a few cells and save it as another .xlsx file
Question: I have tried using openpyxl, but it seems to fail when I am saving the
file (<http://pastebin.com/VU5LTajH>) and I cannot find any info about the
problem. Are there any other modules that let you read, edit and save an .xlsx
file while retaining the style and formatting of the original file?
Answer: You can use the win32com library (I assume that you're working on Windows)
and dispatch the excel application via COM. Inside, you can use every method that
is available in VBA (so it's good to work with the [Object model
reference](https://msdn.microsoft.com/en-us/library/office/ff194068%28v=office.15%29.aspx)
as documentation).
Below is a simplified example of how I use it.
import win32com.client

excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
excel.Visible = True
workbook = excel.Workbooks.Open(filepath)
if isinstance(worksheetName, str):
    worksheet = workbook.Worksheets(worksheetName)
else: #assume that worksheetName is a number
    sheet = workbook.Worksheets[worksheetName]

def getRange(worksheet, address):
    return worksheet.Range(address)

def getValue(range_):
    """Returns values from specified range, as tuple of tuples (even if range
    is single row) or single value if range is one cell"""
    return range_.Value2

def setValue(range_, values):
    range_.Value2 = values
As I see it, the workbook object has a `SaveAs` method, so I'd use it to save the
file under another name after changing cell values, like this:
workbook.SaveAs("newFilename")
|
Is it possible to configure Python interpreter to use matplotlib and/or scipy without prepending the class names
Question: I am using a python package (SloppyCell) that relies heavily on the
plotting tools within scipy and matplotlib. In this software, on several
occasions, they have used functions from these modules without specifying
the module they came from. For example, they have used:
plot(traj.timepoints, result, 'k--', linewidth=3,zorder = 10)
which gives me a NameError.
I was under the impression this function would be part of matplotlib.pyplot;
however, they have not even imported matplotlib, only scipy, which suggests
this `plot` function is part of scipy. I have attempted to modify their module
slightly by importing matplotlib.pyplot and running the code as:
matplotlib.pyplot.plot(traj.timepoints, result, 'k--', linewidth=3,zorder = 10)
But I get the exact same `NameError: global name 'plot' is not defined`. This
is even more unusual (to me) because when I use this function outside of
SloppyCell it works fine.
The software obviously works for other people, because they have published
papers that use this code. Therefore intuition suggests that there is a way to
get the python interpreter to recognize this function (and others like it)
without prepending the name of the module it came from. Does anybody know
if this is possible? Alternatively, can anybody think of other problems that
I am missing?
Thanks in advance.
Answer: In a plain Python 2.7, I can do:
>>> from matplotlib import pyplot
>>> pyplot.plot
<function plot at 0xb4fe35a4>
In `ipython --pylab` I can do
plot??
pyplot.plot??
meaning `pyplot` has been imported both by name and by `*`. It does the same
with `numpy` (`np` and `*`).
`import numpy as np` gives access to all of `numpy`'s submodules. But with
`scipy`, you have to import them selectively (e.g. `from scipy import sparse`).
I'm not as familiar with `matplotlib`, but it looks like it follows the
`scipy` model.
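If you need the bare name `plot` available when running such code outside of `ipython --pylab`, a minimal sketch of the kind of import that provides it:

    from matplotlib.pyplot import *  # or: from pylab import *
    plot([0, 1, 2], [0, 1, 4], 'k--', linewidth=3, zorder=10)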
|
.dat file in python
Question: I'm doing a project in python using OpenCV. I have to store a large amount of
integer data (features of images in the database) in a separate file. I can use
a .txt file but it stores integer values as strings. Is there any way that I can
store integer values directly as integers in python, like a .dat file in MATLAB?
Answer: You can use [struct](https://docs.python.org/3/library/struct.html) to pack
the integers in a bytes format and write them to a dat file.
With integers, this will result in a file that contains 4 bytes per integer,
which would save a bit of space (over text format) if you have very large
numbers. If you have smaller numbers, a csv format may be better.
import struct
data = [1,2,3,4,5,6,7,8,9]
with open('data.dat', 'wb') as data_file:
data_file.write(struct.pack('i'*len(data), *data))
Then to read it back in (note that the reader needs to know how many integers were written; here we reuse `len(data)`):
with open('data.dat', 'rb') as data_file:
values = struct.unpack('i'*len(data), data_file.read())
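If `len(data)` is not available when reading, the count can be derived from the file size, since each `'i'` is a fixed number of bytes (4 on most platforms):

    import os
    count = os.path.getsize('data.dat') // struct.calcsize('i')
    with open('data.dat', 'rb') as data_file:
        values = struct.unpack('i' * count, data_file.read())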
|
Can't get NLTK-Trainer to recognize/ work with scikit-learn classifiers
Question: I've been using the (excellent) NLTK-Trainer in order to train a NaiveBayes
classifier to classify snippets of text. I see that NLTK-Trainer also supports
the scikit-learn algorithms, and I would like to use these in hopes of
decreasing memory usage/ increasing accuracy.
However, when I try to specify one of the scikit-learn classifiers when I run
train_classifier.py, it throws an error:
train_classifier.py: error: argument --classifier/--algorithm: invalid choice: 'sklearn.BernoulliNB' (choose from 'NaiveBayes', 'DecisionTree', 'Maxent', 'GIS', 'IIS', 'MEGAM', 'TADM')
I am running the 32-bit Anaconda distribution (2.20) of Python 3.4.3 on
Windows 7. "pip freeze" gives me the following: NLTK 3.0.4, scikit-learn
0.16.1. I believe I am using the latest version of NLTK-Trainer (I downloaded
it a month ago).
After doing some research, I have two theories about what is going wrong: 1.
There is some sort of argparse error that isn't passing the --classifier
sklearn.BernoulliNB to train_classifier.py correctly. After I do a traceback on
the error, it gives me this:
    nltk_data\nltk-trainer-master\nltk-trainer-master\train_classifier.py in <module>()
        131 nltk_trainer.classification.args.add_sklearn_args(parser)
        132
    --> 133 args = parser.parse_args()
        134

    AppData\Local\Continuum\Anaconda3\lib\argparse.py in parse_args(self, args, namespace)
       1726     # =====================================
       1727     def parse_args(self, args=None, namespace=None):
    -> 1728         args, argv = self.parse_known_args(args, namespace)
       1729         if argv:
       1730             msg = _('unrecognized arguments: %s')
       ...
       1765         except ArgumentError:
       1766             err = _sys.exc_info()[1]
    -> 1767             self.error(str(err))
       1768
       1769     def _parse_known_args(self, arg_strings, namespace):
2. My other hypothesis is that the scikit-learn files that were included with Anaconda are in a place where NLTK-Trainer can't find them. Per Jacob Perkins' recommendations here ([comment](http://streamhacker.com/2012/11/22/text-classification-sentiment-analysis-nltk-scikitlearn/#comment-1041386095)) I can run the 'from nltk.classify import scikitlearn' command without error. However, when I look further into the nltk-trainer/args.py code here ([code](https://github.com/japerk/nltk-trainer/blob/master/nltk_trainer/classification/args.py)), I cannot run the code following the 'import command'. All of these lines throw errors.
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.pipeline import Pipeline
    from sklearn import ensemble, feature_selection, linear_model, naive_bayes, neighbors, svm, tree
This has been really frustrating, and I can't quite put my finger on why it
isn't working. Any assistance would be much appreciated!
Answer: `argparse` is just code that takes your commandline arguments and parses
them. It does not use or act on those arguments; that's done by the code that
follows. The parser is just the gatekeeper, making sure that your inputs look
correct.
I'm not familiar with `NLTK-Trainer`, but I can see what its parser is doing.
From the error message it is clear that your argument, 'sklearn.BernoulliNB'
is getting through. But the `--classifier` argument was set up to only accept
one of the strings in the `choices` list. `['NaiveBayes',
'DecisionTree',...]`. It doesn't accept just any name or module reference.
It is likely that the program takes an accepted name and maps it onto some
other function, module or parameter.
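To illustrate, a minimal sketch of how `choices` validation behaves:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--classifier', choices=['NaiveBayes', 'DecisionTree'])

    parser.parse_args(['--classifier', 'NaiveBayes'])           # accepted
    parser.parse_args(['--classifier', 'sklearn.BernoulliNB'])  # exits with "invalid choice"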
Try calling this code with `-h` or `--help` to see what arguments it accepts,
and go to the program documentation to see what it says about the input. Maybe
there is some other way of specifying the alternative algorithms. The
`--classifier` is clearly set up to accept only a predefined set of values.
|
define Keywords in pyparsing for an interpreter
Question: So I know this may be a stupid question and is most likely impossible, but is
there a way in pyparsing to create keywords (such as print for python)? I am
trying to create an interpreter for a different language in python so that you
can write in this language on android (as python files can be run there but the
other language can't). For instance, in this language there is a PUT statement
that prints output. Is there a way in pyparsing to "define" this PUT statement
so that when I import this interpreter I can write PUT "Hello, World!" instead
of (a = 'PUT "Hello, World"', result = p.parseString(a), print result[1])?
Answer: Rather than write a whole new language, you can add keywords to Python as is
done in this example
(<http://pyparsing.wikispaces.com/file/view/stateMachine2.py/110934709/stateMachine2.py>)
on the pyparsing wiki. The parser picks out your custom keywords, and replaces
them with the expanded Python code that implements that command, then compiles
this code as a regular Python module.
If you want to write a pure DSL, look at the adventure example on that same
wiki examples page, in which I implemented a simple DSL for running an
adventure game using simple commands.
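For the PUT example specifically, here is a minimal sketch of what defining such a keyword could look like in pyparsing (the grammar and names are illustrative, not taken from any existing interpreter):

    from pyparsing import Keyword, quotedString, removeQuotes

    # grammar for: PUT "some text"
    PUT = Keyword("PUT")
    message = quotedString.setParseAction(removeQuotes)
    put_stmt = PUT + message("text")

    def run_line(line):
        result = put_stmt.parseString(line)
        print result["text"]

    run_line('PUT "Hello, World!"')  # prints: Hello, World!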
|
Python: Dynamically calling Method in separate script
Question: I'm currently working on a project to control a 6 legged robot. I've got my
scripts set up for the individual joint control and it's working fine. I have
individual scripts for the joint controllers for each leg,
leg_1_joint_control, leg_2_joint_control etc ..., all of these scripts have 3
PID controllers for the motors on each leg called PID1, PID2 & PID3. What I
want to do is dynamically call my PID1,2&3 methods in my
leg_'leg_no'_joint_control.py script (where 'leg_no' is the leg number [1 -
6]) without having to write a separate case for each leg.
Here's an ideal snippet of my leg control code to try and explain:
import leg_1_joint_control
import leg_2_joint_control
import leg_3_joint_control
...
def leg(args, leg_no):
...
e1[t] = leg_'leg_no'_joint_control.PID1(args)
...
and my leg_'leg_no'_joint_control script
...
def PID1(args):
...
return ek
So what I want to do is this: when I change the leg_no variable, I want to call PID1
in the script for the relevant leg. Is this possible? I've tried methods such
as getattr() but have had no success.
Answer: Do not try to use dynamic names for variables: it will cause headaches to
develop and nightmares to maintain. Just use a list:
import leg_1_joint_control
import leg_2_joint_control
import leg_3_joint_control
...
    legs_joint_control = [leg_1_joint_control, leg_2_joint_control, ...]

    def leg(args, leg_no):
        ...
        e1[t] = legs_joint_control[leg_no - 1].PID1(args)  # leg_no runs from 1 to 6
...
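If the modules really must be looked up by name at runtime, a sketch of the alternative using `importlib` instead of a list:

    import importlib

    def leg(args, leg_no):
        # builds the module name "leg_<n>_joint_control" and imports it
        module = importlib.import_module('leg_%d_joint_control' % leg_no)
        return module.PID1(args)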
|
trying to parse csv file in python 3.4
Question: So I'm new to parsing csv files and I'm using the PyCharm 4.5 IDE. I'm having
a problem parsing a csv file from crunchbase (the file I am dealing with is
pretty huge) and I get this UnicodeEncodeError; I want to know why this is
happening. (See below for the full error.)
return codecs.charmap_encode(input,self.errors,encoding_table[0])
UnicodeEncodeError: 'charmap' codec can't encode characters in position 273-291:
character maps to <undefined>
My code for parsing the csv file is below this line:
import csv
def parseCsvFile():
#The code below reads csv file and puts each category in a list
filename = input("Type file name: ")
input_csv_file = csv.DictReader(open(filename, encoding="utf8"))
for row in input_csv_file:
print (row)
csvfilename.close()
I didn't want to fully show my code on here because it's a private project. I'm
not desperate, just lost as to why my output appears as it does. If anybody
has a better way to code this, I am open to suggestions.
Answer: Can you please let us know why you are using the utf8 encoding? Does your
project specify that as one of the requirements? If not, then you can skip that
option, and it may solve your problem.
|
Adjust space between tick labels a in matplotlib
Question: I based my heatmap off of: [Heatmap in matplotlib with
pcolor?](http://stackoverflow.com/questions/14391959/heatmap-in-matplotlib-
with-pcolor)
I checked out [How to change separation between tick labels and axis labels in
Matplotlib](http://stackoverflow.com/questions/21539018/how-to-change-
separation-between-tick-labels-and-axis-labels-in-matplotlib) but it wasn't
what I needed
[](http://i.stack.imgur.com/ltqNl.png)
**How do I fix the positions of the labels so they align with the ticks?**
#!/usr/bin/python
import matplotlib.pyplot as plt
import numpy as np
import random
in_path = '/path/to/data'
in_file = open(in_path,'r').read().split('\r')
wd = '/'.join(in_path.split('/')[:-1]) + '/'
column_labels = [str(random.random()) + '_dsiu' for i in in_file[0].split('\t')[2:]]
row_labels = []
#Organize data for matrix population
D_cyano_counts = {}
for line in in_file[2:]:
t = line.split('\t')
D_cyano_counts[(t[0],t[1])] = [int(x) for x in t[2:]]
#Populate matrix
matrix = []
for entry in sorted(D_cyano_counts.items(), key = lambda x: (np.mean([int(bool(y)) for y in x[-1]]), np.mean(x[-1]))):#, np.mean(x[-1]))):
(taxon_id,cyano), counts = entry
normalized_counts = []
for i,j in zip([int(bool(y)) for y in counts], counts):
if i > 0:
normalized_counts.append(i * (5 + np.log(j)))
else:
normalized_counts.append(0)
#Labels
label_type = 'species'
if label_type == 'species': label = cyano
if label_type == 'taxon_id': label = taxon_id
row_labels.append(str(random.random()))
#Fill in matrix
matrix.append(normalized_counts)
matrix = np.array(matrix)
#Fig
fig, ax = plt.subplots()
heatmap = ax.pcolor(matrix, cmap=plt.cm.Greens, alpha = 0.7)
#Format
fig = plt.gcf()
#
ax.set_frame_on(False)
#
font = {'size':3}
ax.xaxis.tick_top()
ax.set_xticks([i + 0.5 for i in range(len(column_labels))])
ax.set_yticks([i + 0.5 for i in range(len(row_labels))])
ax.set_xticklabels(column_labels, rotation = (45), fontsize = 10, va='bottom')#, fontweight = 'demi')
ax.set_yticklabels(row_labels, fontsize = 9, fontstyle='italic')
cbar = plt.colorbar(heatmap)
help(ax.set_xticklabels)
ax.margins(x=0.01,y=0.01)
fig.set_size_inches(20, 13)
plt.savefig('figure.png')
Answer: You have to set the horizontal alignment of the labels to `left` in your case;
they are centered by default. [The link](http://stackoverflow.com/a/14854007/588071)
from @Jean-Sébastien contains your answer:
ax.set_xticklabels(column_labels, rotation = (45), fontsize = 10, va='bottom', ha='left')
|
Creating lists from csv files with rows with different amounts of entries
Question: I have data in a csv file which looks like this:
fromaddress, toaddress, timestamp
[email protected], [email protected], [email protected], 8-1-2015
[email protected], [email protected], 8-2-2015
[email protected], [email protected], [email protected], [email protected], [email protected], 8-3-2015
[email protected], [email protected], [email protected], [email protected], 8-4-2015
Using Python, I would like to produce a txt file that looks like:
sender1_email.com, recipient1_email.com
sender1_email.com, recipient2_email.com
sender2_email.com, recipient1_email.com
sender3_email.com, recipient1_email.com
sender3_email.com, recipient2_email.com
sender3_email.com, recipient3_email.com
sender3_email.com, recipient4_email.com
sender1_email.com, recipient1_email.com
sender1_email.com, recipient2_email.com
sender1_email.com, recipient3_email.com
Ultimately, I imagine this whole process will take several steps. After
reading in the csv file, I will need to create separate lists for fromaddress
and toaddress (I am ignoring the timestamp column altogether). There is only 1
email address per row in the fromaddress column, however there are any number
of email addresses per row in the toaddress column. I need to duplicate the
fromaddress email address for each toaddress email address listed for each
row. Once this done I need to replace all of the @ symbols with underscore (_)
symbols. Finally, when I write the txt file, I need to add an extra space
between each row so that it is "double-spaced"
I have not gotten very far as I'm a Python newbie and I'm stuck on the first
step. The following code is duplicating the fromaddress for each individual
character in the toaddress column instead of each individual email address. I
also need help with the toaddress list as well. Can anyone help?
import csv
fromaddress = []
toaddress = []
with open("filename.csv", 'r') as f:
c = csv.reader(f, delimiter = ",")
for row in c:
for item in row[1]:
fromaddress.append(row[0]);
print(fromaddress)
Everyone, thanks for all of your help! I tried all your code but unfortunately
I'm not getting the output I need. Instead of getting this (what I want):
sender1_email.com, recipient1_email.com
sender1_email.com, recipient2_email.com
sender1_email.com, recipient3_email.com
sender2_email.com, recipient1_email.com
sender3_email.com, recipient1_email.com
sender3_email.com, recipient2_email.com
I'm getting this:
sender1_email.com,"recipient1_email.com, recipient2_email.com, recipient3_email.com"
sender2_email.com,"recipient1_email.com"
sender3_email.com,"recipient1_email.com, recipient2_email.com"
There is only 1 element in each "fromaddress" row, but there are multiple
elements in each "toaddress" row. Basically, I have to pair each recipient
address with the correct sender address. I think I'm not getting the right
output because of the ( " ) double quotation marks in the csv file to surround
all of the sender addresses in each row.
Answer:
for row in c:
for item in row[1]:
fromaddress.append(row[0]);
`for item in row[1]` iterates over the characters of the second element in each row,
which is why the sender gets appended once per character. If you want to loop over
each row and then assign column elements to variables, you want this:
for row in c:
fromaddress.append(row[0]);
toaddress.append(row[1]);
# etc...
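For the full task described in the question, a minimal sketch, assuming each row is laid out as `sender, recipient..., timestamp` so the recipients are everything between the first and last columns (if the recipients instead arrive as one quoted field, split `row[1]` on commas first):

    import csv

    with open("filename.csv") as f, open("pairs.txt", "w") as out:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            sender = row[0].strip().replace("@", "_")
            for recipient in row[1:-1]:  # last column is the timestamp
                recipient = recipient.strip().replace("@", "_")
                out.write("%s, %s\n\n" % (sender, recipient))  # extra newline = "double-spaced"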
|
Solve Polynomial equation of 6th order with Python efficiently
Question: I want to solve a polynomial equation of 6th order with Python. I've tried the
"basic" version:
avgIrms = 19.61
c_val = (0.000002324*avgIrms**6) - (0.0001527*avgIrms**5) + (0.003961843*avgIrms**4) - (0.052211292*avgIrms**3) + (0.379269091*avgIrms**2) -(0.404399274*avgIrms) + 0.000682896
print(c_val)
After that I've used the numpy with the following code:
import numpy as np
avgIrms = 19.61
ppar = [0.000002324, -0.0001527, 0.003961843, -0.052211292, 0.379269091, -0.404399274, 0.000682896]
p = np.poly1d(ppar)
print(p(avgIrms))
Both ways, the Raspberry Pi takes more than five seconds to process... It's
too much! Any help to solve polynomial equations efficiently? (less than one
second...)
Thanks in advance, Daniel
Answer: First, what you want is to _evaluate_ a polynomial for a given `x`, not to
_solve_ it. Second, I still don't see where your slowdown comes from.
Find here a couple of timings:
>>> import numpy as np
>>> x = 19.61
>>> pr = [0.000002324, -0.0001527, 0.003961843, -0.052211292, 0.379269091, -0.404399274, 0.000682896]
>>> p = pr[::-1] # reverse the order
* Hardcoded solution:
>>> %timeit p[0] + x * p[1] + p[2] * x**2 + p[3] * x**3 + p[4] * x**4 + p[5] * x**5 + p[6] * x**6
809 ns
* Loopy solution:
>>> %%timeit
val = 0
for i in range(len(p)):
val += p[i] * x**i
1.24 µs
* Functional programming solution:
>>> %timeit reduce(lambda acc, i: acc + p[i] * x**i, range(len(p)))
1.61 µs
* Using numpy's `polyval`:
>>> %timeit np.polyval(pr, x)
6.12 µs
* Using numpy's `poly1d`
>>> %%timeit
c = np.poly1d(pr)
c(x)
9.46 µs
So, clearly numpy is slower, as for such a small array it adds some overhead
in the Python <-> C communication, but still, it is of the order of `6-9 µs`.
I'm using a desktop computer, but I would be pretty impressed if a Raspberry
Pi really took 5 seconds to do that operation. Are you sure you did the
timings properly?
Any way, either the hardcoded or the loopy solution seem faster than the
functional programming one (the equivalent to the one that you defined as
`horner` in your comment).
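For reference, a minimal sketch of Horner's scheme with the coefficients in increasing order of power (as in `p` above):

    def horner(coeffs, x):
        # evaluates c0 + c1*x + c2*x**2 + ... using one multiply-add per coefficient
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc

    horner(p, 19.61)  # same value as np.polyval(pr, 19.61)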
|
Scraping specific elements from page
Question: I am new to python, and I was looking into using scrapy to scrape specific
elements on a page.
I need to fetch the Name and phone number listed on a members page.
This script will fetch the entire page, what can I add/change to fetch only
those specific elements?
import scrapy
class DmozSpider(scrapy.Spider):
name = "dmoz"
allowed_domains = ["fali.org"]
start_urls = [
"http://www.fali.org/members/",
]
def parse(self, response):
filename = response.url.split("/?id=")[-2] + '%random%'
with open(filename, 'wb') as f:
f.write(response.body)
Answer: I cannot see a page: <http://www.fali.org/members/>
instead it redirects to the home page.
That makes it impossible to give specifics.
Here is an example:
article_title = response.xpath("//td[@id='HpWelcome']/h2/text()").extract()
That parses "Florida Association of Licensed Investigators (FALI)" from their
homepage.You can get browser plugins to help you figure out xpaths. XPath
Helper on chrome makes it easy.
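As a hedged sketch of what the spider's `parse` method could look like once you have the right xpaths (the selectors below are placeholders, since the members page could not be inspected):

    def parse(self, response):
        for member in response.xpath("//div[@class='member']"):  # placeholder xpath
            name = member.xpath(".//h3/text()").extract()
            phone = member.xpath(".//span[@class='phone']/text()").extract()
            yield {'name': name, 'phone': phone}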
That said -- go through the tutorials posted above. Because you are gonna have
more questions I'm sure and broad questions like this aren't taken well on
stack-overflow.
|
basic python syntax that i don't quite get
Question: I keep getting this error, I'm not sure why though.
Traceback (most recent call last):
File "/home/cambria/Main.py", line 1, in <module>
from RiotAPI import RiotAPI
File "/home/cambria/RiotAPI.py", line 6
def __init__(self, api_key, region=Consts.REGIONS['north_america'])
^
SyntaxError: invalid syntax
I have not used Python for that long, I am just using it because it
facilitates what I'm trying to do well, but I have used various other
languages and as far as I can tell you would want to close these `()'s` in
this statement `def __init__(self, api_key,
region=Consts.REGIONS['north_america'])` however I keep getting a
`SyntaxError: invalid syntax`?
the rest of that definition is as follows, if it helps.
class RiotAPI(object):
def __init__(self, api_key, region=Consts.REGIONS['north_america'])
self.api_key = api_key
self.region = region
EDIT 1: If I add a `:` at the end of `def __init__(self, api_key,
region=Consts.REGIONS['north_america']):` like so, it works. Why? And after
doing this I get a new syntax error that I will address below.
EDIT 2: new syntax error after fixing the first is,
Traceback (most recent call last):
File "/home/cambria/Main.py", line 1, in <module>
from RiotAPI import RiotAPI
File "/home/cambria/RiotAPI.py", line 11
args = ('api_key': self.api_key)
^
SyntaxError: invalid syntax
which is
def _request(self, api_url, params=()):
args = ('api_key': self.api_key)
for key, value in params.items():
if key not in args:
args[key] = value
EDIT 3: This should be the last of it... no more syntax errors, just a
Traceback (most recent call last):
File "/home/cambria/Main.py", line 10, in <module>
main()
File "/home/cambria/Main.py", line 5, in main
respons3 = api.get_summoner_by_name('hi im gosan')
File "/home/cambria/RiotAPI.py", line 31, in get_summoner_by_name
return self._request(api_url)
File "/home/cambria/RiotAPI.py", line 12, in _request
for key, value in params.items():
AttributeError: 'tuple' object has no attribute 'items'
in
def _request(self, api_url, params=()):
args = {'api_key': self.api_key}
for key, value in params.items():
if key not in args:
args[key] = value
response = requests.get(
Consts.URL['base'].format(
proxy=self.region,
region=self.region,
url=api_url
),
params=args
)
print response.url
return response.json()
This is the only error I have received that I really don't know much about. Is
this a result of there being no `.items` on my params, or because I left it
initialized as an empty tuple rather than a dictionary?
Answer: The problem is just that you're missing a : at the end of the line.
def __init__(self, api_key, region=Consts.REGIONS['north_america']):
self.api_key = api_key
self.region = region
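For the `AttributeError` in the question's later edits, a sketch of one fix: an empty tuple has no `.items()`, so default `params` to a dict instead:

    def _request(self, api_url, params=None):
        args = {'api_key': self.api_key}
        for key, value in (params or {}).items():  # works when no params are passed
            if key not in args:
                args[key] = value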
|
How to get the time in python on android?
Question: I am trying to make a program that will use the time and display it down to
the seconds. How do I do this? So far I have found the function get_time()
that is part of kivy but I am not sure how to use it. I have imported
everything but it still says "not defined".
Answer:
import datetime
datetime.datetime.now()
That should work in any python framework. You can use the
[Documentation](https://docs.python.org/2/library/datetime.html) for more
details.
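To display the time down to the seconds, as asked, a small example:

    import datetime

    now = datetime.datetime.now()
    print(now.strftime("%H:%M:%S"))  # e.g. 14:03:27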
|
Compile Cython on pip package build
Question: I'm developing a Python package, [EcoPy](https://github.com/Auerilas/ecopy),
that is mostly written in pure Python. The main folder is called ecopy.
There's a subfolder called regression that has a Cython file that's already
been built. The main setup.py file includes the code:
ext_modules = cythonize([
Extension(
'ecopy.regression.isoFunc', ['ecopy/regression/isoFunc.pyx'], **opts),
])
When I run
sudo pip install ecopy -e . --upgrade --force-reinstall
the module builds fine. It even re-compiles the isoFunc.c file if I've deleted
it. The problem is that Cython doesn't then convert the .c file to the .so
file, which I need in order to import the function. If I try loading the
module without it, I get
ImportError: No module named isoFunc
If I manually setup file using the command line
python setup.py build_ext --inplace
Cython DOES generate the .so file. How do I get it to generate the .so file
using pip? I've tried to figure out how statsmodels did it by reading their
code, but honestly, its a mystery to me.
It's almost as if the pip command misses the build_ext argument.
Answer: I can answer this question, because I just learned that I'm an idiot.
sudo pip install ecopy -e . --upgrade --force-reinstall
was using an older version from the PyPI that didn't have the new setup.py
with the Cython code. When I did it correctly
sudo pip install -e . --upgrade --force-reinstall
and used the latest version on my hard drive, it worked fine.
Little victories.
|
Installing scipy in cygwin
Question: I'm failing to install scipy in cygwin (32-bit) with any method I've tried
(pip, direct source code). Here is the error I get
from scipy/spatial/ckdtree/src/ckdtree_globals.cxx:9:
/usr/lib/python2.7/site-packages/numpy/core/include/numpy/__multiarray_api.h:1629:1: warning: ‘int _import_array()’ defined but not used [-Wunused-function]
_import_array(void)
^
error: Command "g++ -fno-strict-aliasing -ggdb -O2 -pipe -Wimplicit-function-declaration -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.i686/build=/usr/src/debug/python-2.7.10-1 -fdebug-prefix-map=/usr/src/ports/python/python-2.7.10-1.i686/src/Python-2.7.10=/usr/src/debug/python-2.7.10-1 -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/include/python2.7 -I/usr/lib/python2.7/site-packages/numpy/core/include -Iscipy/spatial/ckdtree/src -I/usr/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c scipy/spatial/ckdtree/src/ckdtree_globals.cxx -o build/temp.cygwin-2.2.0-i686-2.7/scipy/spatial/ckdtree/src/ckdtree_globals.o" failed with exit status 1
There are also a lot of warnings before this. What's the right way to get
scipy on cygwin? I also looked at the official scipy website
<http://www.scipy.org/scipylib/building/windows.html> but I found it a little
hard to follow. However, I'm willing to try that if it is the easiest way and
it still works today.
Answer: I'm getting the exact same error on scipy 0.16.0 in 64bit cygwin.
Installing 0.15.0 works for me however:
$ pip install scipy==0.15.0
Sort of annoying but probably a reasonable solution for now.
|
Why can't I print a non-integer epicycloid in the command line?
Question: I want to create a program to print an epicycloid:
import math
import sys
WIDTH=30
R=10.0
N=3.0
DELTA=0.01
pixels=[[0 for y in range(WIDTH)] for x in range(WIDTH)]
for f in range(0,(int)(2*math.pi/DELTA)):
pixels[(int)(R*math.sin(f*DELTA)-R*math.sin((N+1)*f*DELTA)/(N+1)+WIDTH/2)][(int)(R*math.cos(f*DELTA)-R*math.cos((N+1)*f*DELTA)/(N+1)+WIDTH/2)]=1
for row in pixels:
for cell in row:
sys.stdout.write('[]' if cell==1 else ' ')
sys.stdout.write('\n')
and tested with N=3, 7, and 3.5; it seems to work when N=3 and 7:
N=3
[][][][][][][][]
[][] [][][]
[][] [][]
[] [][]
[][][][][] []
[][] [][] []
[][] [][]
[][] []
[] [][]
[][] []
[] []
[] [][]
[] [][]
[] []
[][] []
[] [][]
[][] []
[][] [][]
[][][] [][] []
[][][][][] []
[] [][]
[][] [][]
[][] [][]
[][][][][][][][]
N=7
[][][]
[][][] [][][]
[][][][][][] [][]
[][] [][] []
[][] [][][]
[] [] [][][]
[] [][]
[] []
[][][][] []
[] []
[][] []
[] [][][]
[] [][]
[][] []
[] []
[][][] []
[] []
[] [][]
[] [] [][][]
[][] [][][]
[][] [][] []
[][][][][][] [][]
[][][] [][][]
[][][]
but when N=3.5 the shape is not what I expected (something star-like):
[][][]
[][][] [][][]
[][] [][]
[] [][]
[] []
[] [][]
[][] [][][][][]
[][] [][][]
[] []
[][] []
[][][] []
[] [][]
[][] []
[] [][][]
[] []
[] [][]
[] []
[] []
[][] []
[][] []
[][] [] []
[][][] [][][] []
[][][][] [][] [][]
[][][] [][][]
[][][]
I checked the formula for a long time; my parameters stay float until the result
is converted to a pixel position, but I still haven't found what is wrong.
Can anyone help?
(I want to solve it using python, but I found the original java version also
has a similar problem; I don't even know which type of problem (e.g. wrong
formula, number type casting, or wrong code) causes it.)
Answer: When you have a fractional `N=p/q`, you need to go to `2*q*math.pi`. Let's
explore what I mean.
For `N=3.5`,
parametric plot (sin(t)-sin(4.5t)/4.5 , cos(t)-cos(4.5t)/4.5) for t in [0,2pi]
in [Wolfram
Alpha](http://www.wolframalpha.com/input/?i=parametric+plot+\(sin\(t\)-sin\(4.5t\)%2F4.5+,+cos\(t\)-cos\(4.5t\)%2F4.5\)+for+t+in+%5B0,2pi%5D)
yields:
[](http://i.stack.imgur.com/G5ZEd.png)
which is similar to your output.
On the other hand, an [upper
limit](http://www.wolframalpha.com/input/?i=parametric+plot+\(sin\(t\)-sin\(4.5t\)%2F4.5+,+cos\(t\)-cos\(4.5t\)%2F4.5\)+for+t+in+%5B0,4pi%5D)
of [0,4pi] gets the other half:
[](http://i.stack.imgur.com/Rm79e.png)
Finally, if N is 1/7, we [have to
use](http://www.wolframalpha.com/input/?i=parametric+plot+\(sin\(t\)-7sin\(8t%2F7\)%2F8+,+cos\(t\)-7cos\(8t%2F7\)%2F8\)+for+t+in+%5B0,14pi%5D)
[0,14pi] to get:
[](http://i.stack.imgur.com/TgNMv.png)
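Applied to the original loop, a minimal sketch, assuming `N` is representable as a small fraction p/q:

    import math
    from fractions import Fraction

    DELTA = 0.01
    N = 3.5
    q = Fraction(N).limit_denominator(1000).denominator  # q = 2 for N = 3.5
    for f in range(0, int(2 * q * math.pi / DELTA)):
        pass  # same pixel computation as in the question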
|
Simple Pandas issue, Python
Question: I want to import a txt file, and do a few basic actions on it.
For some reason I keep getting an unhashable type error, not sure what the
issue is:
def loadAndPrepData(filepath):
import pandas as pd
pd.set_option('display.width',200)
dataFrame = pd.read_csv(filepath,header=0,sep='\t') #set header to first row and sep by tab
df = dataFrame[0:639,:]
print df
filepath = 'United States Cancer Statistics, 1999-2011 Incidencet.txt'
loadAndPrepData(filepath)
Traceback:
Traceback (most recent call last):
File "C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py", line 16, in <module>
loadAndPrepData(filepath)
File "C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py", line 12, in loadAndPrepData
df = dataFrame[0:639,:]
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\frame.py", line 1797, in __getitem__
return self._getitem_column(key)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\frame.py", line 1804, in _getitem_column
return self._get_item_cache(key)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\generic.py", line 1082, in _get_item_cache
res = cache.get(item)
TypeError: unhashable type
Answer: The problem is that the item getter (`[]`) needs hashable types. When
you provide it with `[:]` this is fine, but when you provide it with `[:,:]`,
you will get this error.
pd.DataFrame({"foo":range(1,10)})[:,:]
> TypeError: unhashable type
While this works just fine:
pd.DataFrame({"foo":range(1,10)})[:]
However, you should be using `.loc` no matter how you want to slice.
pd.DataFrame({"foo":range(1,10)}).loc[:,:]
foo
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
|
Clean up .html reports and export as .txt files
Question: I've been searching all night but I'm still not sure how to get the job done.
I'm new to python, so please forgive me first if I'm asking some simple
questions.
I have three thousand .html files (all new product descriptions downloaded from
a trusted website) stored in one folder. Now I would like to clean up these
files one by one (i.e., only keeping the content/product description and
removing tags and so on) and then store each one's content as a single .txt file.
After reading a few Q&As posted here, I think I need to use the lxml package
instead of beautiful soup because all the .html files are from a highly
trusted source. However, I don't know which command/option within lxml I
should use; could you please kindly let me know?
Answer: lxml is a good choice here, not only because your source is trusted but also
for its speed. Just take into consideration that a trusted source doesn't
guarantee properly formatted markup, which is what matters when choosing between
libraries.
If all the pages have the same structure, xpath will do the job. First you'll
need to get the [xpath](http://www.w3schools.com/xsl/) which Chrome can do for
you, just do 'Inspect Element' -> right click on the html element you need to
parse -> select 'Copy xpath'.
In your python code once you request the page. Take the html and convert it:
from lxml import html
tree = html.fromstring("htmlString") #you can switch this with the path of the html file
name = tree.xpath('XPATH GOES HERE')
This will return a list object in most cases. To get just the text from an
attribute, add '/text()' to the end of the xpath. Chrome's xpath versions
sometimes differ from what python reads (I had this issue with tables), so if
it doesn't return anything, play with the xpath a little to make sure it
works.
Alternatively,
You can "navigate" the structure of the html file instead of using xpath with
the
.find_class('css_class_Name') .getnext() .getchildren()
for example and then use
.text_content()
to extract the text from the html element. I encourage you to read these
[docs](http://lxml.de/api/lxml.etree._Element-class.html) for this to see
exactly which one you need to use.
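Putting it together for the batch task in the question, a minimal sketch (the folder names and the xpath are placeholders to replace with your own):

    import io
    import os
    from lxml import html

    in_dir, out_dir = "html_files", "txt_files"  # placeholder folder names
    for name in os.listdir(in_dir):
        if not name.endswith(".html"):
            continue
        tree = html.parse(os.path.join(in_dir, name))
        # placeholder xpath: copy the real one from your browser's inspector
        nodes = tree.xpath("//div[@class='product-description']")
        text = nodes[0].text_content() if nodes else ""
        out_path = os.path.join(out_dir, name.replace(".html", ".txt"))
        with io.open(out_path, "w", encoding="utf-8") as out:
            out.write(u"%s" % text)  # coerce to text so this runs on Python 2 and 3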
|
Pygame images won't load anymore
Question: Since I started this I've had char.png in the same folder as my .py file
(**not** a subfolder) and it would load my images, but when I added in the
ability to move left and right, I started getting an error.

    Traceback (most recent call last):
      File "C:\Users\Shiloh\Google Drive\Python\Platformer\main.py", line 25, in <module>
        characterSprite = pygame.image.load('char.png')
    pygame.error: Couldn't open char.png

I've gone back and removed everything I added since it was working, and it still
won't load the images. I have not moved any files, I've just updated them.
I've searched and tried the answers previously given on here but
none of them seem to work in my situation. Thanks in advance.
import pygame
pygame.init()
#Window Resolution
display_width = 800
display_height = 450
gameResolution = pygame.display.set_mode( (display_width,display_height) )
#Window Title
pygame.display.set_caption('Bubble Bobble')
#Colors
colorRed = (255,0,0)
colorOrange = (255,160,96)
colorYellow = (255,216,0)
colorGreen = (0,255,0)
colorBlue = (0,0,255)
colorPurple = (235,193,255)
colorBlack = (0,0,0)
colorWhite = (255,255,255)
gameClock = pygame.time.Clock()
characterSprite = pygame.image.load('char.png')
def sprite(x,y):
gameResolution.blit(characterSprite,(x,y))
x = (display_width * 0.5)
y = (display_height * 0.5)
x_change = 0
crashed = False
while not crashed:
for event in pygame.event.get():
if event.type == pygame.QUIT:
crashed = True
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
x_change = -5
elif event.key == pygame.K_RIGHT:
x_change = 5
if event.type == pygame.KEYUP:
if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT:
x_change = 0
x += x_change
#print(event) #Prints to console
gameResolution.fill(colorWhite)
sprite(x,y)
pygame.display.update() #You can also use pygame.display.flip()
gameClock.tick(60) #Sets the value of the FPS
pygame.quit()
quit()
Answer: The likely cause is that the current working directory is not the folder that
contains `char.png`, so the relative path fails. Anchor the path to the script's
own directory instead:

    import os
    mydir = os.path.dirname(os.path.abspath(__file__))
    characterSprite = pygame.image.load(os.path.join(mydir, 'char.png'))

which should fix the problem.
If it doesn't, the chosen answer
[here](http://stackoverflow.com/questions/27218125/cant-open-png-file-in-
pygame) mentions the issue with permissions. If you run the program as root,
it will start in the home directory, which may be the issue, and why you
weren't having a problem when using an absolute path. So, if you
are running it as root... stop it.
You can tell if you are in the wrong current working directory by using

    print (os.getcwd())

or check the script's own directory with

    print (os.path.dirname(os.path.realpath(__file__)))
|
Django-cms render_to_response doesn't render to template
Question: I am working on a project in django version 1.5.12 with django-cms installed.
I have a file generated by the command "pip freeze > frozen.txt" containing the
following information about what I have installed:
Django==1.5.12
MySQL-python==1.2.5
South==1.0.2
argparse==1.2.1
distribute==0.6.24
django-classy-tags==0.6.2
django-cms==2.4.3
django-mptt==0.5.2
django-sekizai==0.8.2
html5lib==1.0b7
six==1.9.0
wsgiref==0.1.2
Well, my problem is that I have an app in my project with the following code in
views.py:
from django.shortcuts import *
def test(request):
data = {'test 1': [ [1, 10] ], 'test 2': [ [1, 10] ],'test 3':[ [2,20]] }
print data # to see if function works
return render_to_response("project_pages.html",{'data':data},context)
And in my template project_pages.html I have this :
<table>
<tr>
<td>field 1</td>
<td>field 2</td>
<td>field 3</td>
</tr>
{% for author, values in data.items %}
<tr>
<td>{{author}}</td>
{% for v in values.0 %}
<td>{{v}}</td>
{% endfor %}
</tr>
{% endfor %}
</table>
and it doesn't render the response to the template. The terminal doesn't show me
any errors. The function is working because it prints `data`.
If I try to return something like this:
return render_to_response("project_pages.html",{'data':data})
without context at the end (it is a different call), it gives me the following error:
"You must enable the 'sekizai.context_processors.sekizai' template "
TemplateSyntaxError: You must enable the 'sekizai.context_processors.sekizai' template context processor or use 'sekizai.context.SekizaiContext' to render your templates.
In my settings.py I have sekizai:
TEMPLATE_CONTEXT_PROCESSORS = (
'django.contrib.auth.context_processors.auth',
'django.core.context_processors.i18n',
'django.core.context_processors.request',
'django.core.context_processors.media',
'django.core.context_processors.static',
'cms.context_processors.media',
'sekizai.context_processors.sekizai',
'sekizai.context.SekizaiContext',
)
So, what should I do?
Answer: Review what your context variable contains. It should be something like this
example:
from django.template import RequestContext
def view(request):
...
return render_to_response(
"project_pages.html",
{'data':data},
context_instance=RequestContext(request),
)
|
When comparing arrays why is "in1d" so much slower than "a==b"
Question: I need to be able to compare two images and extract any unique pixels to
create a third image. To accomplish this I did the following:
import cv2
import numpy as np
img = cv2.imread("old.jpg")
img2 = cv2.imread("new.jpg")
image2 = cv2.cvtColor(img2, cv2.COLOR_RGB2RGBA)
for (x,y,z), value in np.ndenumerate(img):
dif = img[x,y,0] == img2[x,y,0] #only checking one color for speed
diff = str(dif)
if "True" in diff:
image2[x,y,3] = 0
cv2.imwrite("result.png", image2)
This worked fairly well but it took around 10 seconds for a 640 x 480
picture and I was hoping to get it closer to about half that time. So I
changed this line:
dif = img[x,y,0] == img2[x,y,0]
to
dif = np.in1d(img[x,y,0], img2[x,y,0])
The results are identical but instead of speeding things up it now takes about
3 minutes. I'm at a complete loss as to why.
I realize that iterating over elements in large arrays is going to be time
consuming in python but why is in1d so much slower?
(As a side note I would just use the "[palette
method](http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-
numpy.html)" but I couldn't see a way to implement it for this purpose due to
my limited knowledge of numpy arrays.)
Answer: `np.in1d` checks each element of its first argument against each element of
its second argument in the worst case. For each element `i,j` in `img` it
checks if there is an element `k,l` in `img2` with the same value. Meaning
that for your `640x480` image you can end up doing `(640x480)^2` comparisons.
On the other hand, `==` checks only elementwise: it checks if element `i,j` of
`img` is equal to element `i,j` of `img2`. It will always do `640x480`
comparisons.
`np.in1d` will work if you have images of different sizes; `==` will work only
for images of the same size.
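As an aside, this is also why the Python-level loop can be dropped entirely: the `==` comparison works on whole arrays at once, so the transparency can be set in one vectorized step using the question's variables:

    # mark pixels whose first channel matches, then zero their alpha channel
    mask = img[:, :, 0] == img2[:, :, 0]
    image2[mask, 3] = 0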
|
Python -- list index out of range -- on a simple SELECT statement?
Question: I'm new to Python, so maybe I'm making a newbie mistake. But this doesn't seem
the kind of error I should get in this circumstance.
On a very simple SELECT statement, I'm getting a "list index out of range"
error.
sql = """
begin tran
-- Several update statements are in this block.
commit tran"""
sql = sql.format(tablename=self.tablename, **self.mappings)
#print(sql)
self.cursor.execute(sql, (self.catalog_id,self.catalog_id,self.catalog_id,))
self.cursor.execute("SELECT CaptionText, DisplayOrder FROM dbo.SizeOrder where CaptionText is not NULL and len(CaptionText)>0") #This is the line that breaks!
size_order = {row[0].lower(): row[1] for row in self.cursor}
It's the next to last row that breaks. It's not doing any funny formatting. I
don't do any of the substitutions in this problem query. When run directly
against the DB, it returns over 300 records.
The trace output definitely implicates that line of code. I really suspected
the next line b/c it involves indexes. But a return after the execute helped
nothing.
Traceback (most recent call last):
File "import.py", line 782, in <module>
main(sys.argv)
File "import.py", line 167, in main
db.transform_catalog((len(args)>4 and args[4] == "--skipimages") or (len(args)>5 and args[5] == "--skipimages") )
File "import.py", line 235, in transform_catalog
self.do_transform(skipImages)
File "import.py", line 264, in do_transform
self.insert_size_types()
File "import.py", line 501, in insert_size_types
self.cursor.execute("SELECT CaptionText, DisplayOrder FROM dbo.SizeOrder where CaptionText is not NULL and len(Capt
ionText)>0")
File "C:\Python33\lib\site-packages\pypyodbc.py", line 1449, in execute
self._free_stmt(SQL_CLOSE)
File "C:\Python33\lib\site-packages\pypyodbc.py", line 1971, in _free_stmt
check_success(self, ret)
File "C:\Python33\lib\site-packages\pypyodbc.py", line 986, in check_success
ctrl_err(SQL_HANDLE_STMT, ODBC_obj.stmt_h, ret, ODBC_obj.ansi)
File "C:\Python33\lib\site-packages\pypyodbc.py", line 951, in ctrl_err
state = err_list[0][0]
IndexError: list index out of range
What could I be doing wrong? Thanks!
Answer: Definitely a bug. An unrelated error during some result processing in the
library's internals, not input-related. [DeepSpace's
link](http://stackoverflow.com/questions/31864360/python-list-index-out-of-
range-on-a-simple-select-statement/31865193#comment51648070_31864360) is the
likely cause.
|
change font size of facet titles using seaborn facetgrid heatmap
Question: Note: this is a different question than "[How can I change the font size using
seaborn FacetGrid?](http://stackoverflow.com/questions/25328003/how-can-i-
change-the-font-size-using-seaborn-facetgrid)". The methods suggested there do
not work when using a heatmap inside a facetgrid.
How can I change the font size of the facet titles when plotting heatmaps
inside a facetgrid?
The code below tries two methods, passing `fontsize=` to `set_titles()` and
wrapping the whole thing in a plotting context. As far as I can tell, neither
seems to have any effect on facet titles when using heatmap, although the
fontweight did change. Are there any other options for controlling facet title
when using heatmap?
import pandas as pd
import numpy as np
import itertools
import seaborn as sns
print("seaborn version {}".format(sns.__version__))
# R expand.grid() function in Python
# http://stackoverflow.com/a/12131385/1135316
def expandgrid(*itrs):
product = list(itertools.product(*itrs))
return {'Var{}'.format(i+1):[x[i] for x in product] for i in range(len(itrs))}
methods=['method 1', 'method2', 'method 3', 'method 4']
times = range(0,100,10)
data = pd.DataFrame(expandgrid(methods, times, times))
data.columns = ['method', 'dtsi','rtsi']
data['nw_score'] = np.random.sample(data.shape[0])
def facet(data,color):
data = data.pivot(index="dtsi", columns='rtsi', values='nw_score')
g = sns.heatmap(data, cmap='Blues', cbar=False)
with sns.plotting_context(font_scale=5.5):
g = sns.FacetGrid(data, col="method", col_wrap=2, size=3, aspect=1)
g = g.map_dataframe(facet)
g.set_titles(col_template="{col_name}", fontweight='bold', fontsize=18)
[](http://i.stack.imgur.com/RBYpn.png)
Answer: Thanks @mwaskon, that is the answer: use `size=` when calling `set_titles`.
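For reference, the working call with the question's variables:

    g.set_titles(col_template="{col_name}", fontweight='bold', size=18)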
That leads to more questions, like
* Could `set_titles` use `fontweight=` and `fontsize=` instead of `fontweight=` and `size=`? `size=` is used elsewhere for the facet height in inches.
* why is `sns.plotting_context` completely ineffective in this context?
|
Python : Replacing Values in netcdf file using netCDF4
Question: I have a netcdf file with several values < 0. I would like to replace all of
them with a single value (say -1). How do I do that using netCDF4? I am
reading in the file like this:
import netCDF4
dset = netCDF4.Dataset('test.nc')
dset[dset.variables['var'] < 0] = -1
Answer: If you want the change written back into the netCDF variable, note that slicing
a variable returns a numpy copy, so modify the copy and assign it back (opening
the file in a writable mode):

    import netCDF4
    dset = netCDF4.Dataset('test.nc', 'r+')  # 'r+' so changes can be written back
    data = dset['var'][:]
    data[data < 0] = -1
    dset['var'][:] = data
    dset.close()  # flush the changes to disk
If you don't want to write back to disk, just get the numpy
array and slice/assign to it:

    data = dset['var'][:]  # data is a numpy array
    data[data < 0] = -1
|
Passwordless ssh with paramiko fails to authorize
Question: I am having trouble getting authentication working with paramiko SSHClient.
Trying to go from one virtual machine out to another box on the network. The
general idea is that I create a public/private key pair, ssh into the client
using a password given, take the clients public key and add it to my
known_hosts. Place my public key in the clients authorized_keys. Close that
connection, and then try reconnecting without the password. It fails in the
reconnection. I am using paramiko 1.15.2 and python 2.7.10.
The code goes as follows from this tutorial:
<http://www.minvolai.com/blog/2009/09/How-to-ssh-in-python-using-Paramiko/how-
to-ssh-in-python-using-paramiko/>.
import paramiko, StringIO, os
pkey = paramiko.rsakey.RSAKey.generate(1024)
pub_key = "ssh-rsa %s" % (pkey.get_base64())
file_obj = StringIO.StringIO()
pkey.write_private_key(file_obj)
priv_key = file_obj.getvalue()
server, username, password = ('host', 'username', 'password')
ssh = paramiko.SSHClient()
paramiko.util.log_to_file(log_filename)
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.load_host_keys(os.path.expanduser(os.path.join("~", ".ssh", "known_hosts")))
ssh.connect(server, username=username, password=password)
sftp = ssh.open_sftp()
sftp.get(remote_path, local_path)
sftp.put(local_path, remote_path)
sftp.close()
ssh.close()
key = StringIO.StringIO(priv_key)
privkey = paramiko.rsakey.RSAKey(key)
ssh.connect(server, username=username,pkey=privkey )
This is the debug log that I get:
DEBUG:paramiko.transport:starting thread (client mode): 0x728ac950L
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.3)
DEBUG:paramiko.transport:kex algos:[u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'[email protected]'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'[email protected]'] client mac:[u'hmac-md5', u'hmac-sha1', u'[email protected]', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'[email protected]', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5', u'hmac-sha1', u'[email protected]', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'[email protected]', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'[email protected]'] server compress:[u'none', u'[email protected]'] client lang:[u''] server lang:[u''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Trying SSH key 36f4e43a968404ef8e7f277e1429f0fd
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying discovered key 54b98c4b8ba454594e9df58bc8f9b5e7 in /home/apache/.ssh/id_rsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
DEBUG:paramiko.transport:Trying discovered key d2a34d82ebe4439672bd2c16540c5bb4 in /home/apache/.ssh/id_dsa
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/apache/miniconda/lib/python2.7/site-packages/paramiko-1.15.2-py2.7.egg/paramiko/client.py", line 307, in connect
File "/home/apache/miniconda/lib/python2.7/site-packages/paramiko-1.15.2-py2.7.egg/paramiko/client.py", line 519, in _auth
paramiko.ssh_exception.AuthenticationException: Authentication failed.
>>> DEBUG:paramiko.transport:EOF in transport thread
EDIT: What really puzzles me is that this works going between two actual
machines on the network. I can ssh into apache@virtualmachine and from apache
in the terminal. I have verified that the key is added during ftp.put().
Though I can't find anything about paramiko having issues going out form a VM.
EDIT2: Using the "look_for_keys=False' gives the same output, but only uses
the given key. Note: it is using a different key as I regenerated one today
different from yesterdays.
ssh.connect(server, username=username, pkey=rkey, look_for_keys=False)
DEBUG:paramiko.transport:starting thread (client mode): 0x84938990L
INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.3)
DEBUG:paramiko.transport:kex algos:[u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'[email protected]'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'[email protected]'] client mac:[u'hmac-md5', u'hmac-sha1', u'[email protected]', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'[email protected]', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5', u'hmac-sha1', u'[email protected]', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'[email protected]', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'[email protected]'] server compress:[u'none', u'[email protected]'] client lang:[u''] server lang:[u''] kex follows?False
DEBUG:paramiko.transport:Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
DEBUG:paramiko.transport:using kex diffie-hellman-group14-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
DEBUG:paramiko.transport:Switch to new keys ...
DEBUG:paramiko.transport:Trying SSH key eb06556f5c3461c6e8c4fe70398717e3
DEBUG:paramiko.transport:userauth is OK
INFO:paramiko.transport:Authentication (publickey) failed.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/apache/miniconda/lib/python2.7/site-packages/paramiko-1.15.2-py2.7.egg/paramiko/client.py", line 307, in connect
File "/home/apache/miniconda/lib/python2.7/site-packages/paramiko-1.15.2-py2.7.egg/paramiko/client.py", line 519, in _auth
paramiko.ssh_exception.AuthenticationException: Authentication failed.
>>> DEBUG:paramiko.transport:EOF in transport thread
UPDATE: I got the connect call to work going from the VM to the machine
hosting the VM. Not sure how that narrows down the problem :/
Answer: It looks to me from the output you provide as if paramiko is trying multiple
different keys (before getting to the right key that authorizes the login)
located in the same key file. There is usually a maximum number of attempts,
which is why authentication might fail. This often happens when you have lots
of keys in yout `.ssh/` folder and you use `ssh` to log in without the `-o
IdentitiesOnly=yes` option (this happens even if you use `-i path/to/key` to
specify a specific key file). How that translates to paramiko I don't know,
unfortunately, but I assume that library allows you to specify the key more
precisely. However, I think you'll want to set `look_for_keys` to `False` for
`ssh.connect`, and separate the different keys into different files (one file
for each server?).
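In paramiko that roughly translates to disabling key discovery and the SSH agent so that only the key you pass is tried; a sketch using the question's variables:

    ssh.connect(server, username=username, pkey=privkey,
                look_for_keys=False, allow_agent=False)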
|
calling an api concurrently in python
Question: I need to talk to an api to get information about teams. Each team has a
unique id. I call the api with that id, and I get a list of players on each
team (list of dicts). One of the keys for a player is another id that I can
use to get more information about that player. I can bundle all these
player_ids and make a call to the api to get all the additional information
for each player in one api call.
My question is this: I expect the number of teams to grow, it could be quite
large. Also, the number of players for each team could also grow large.
What is the best way to make these api calls concurrently to the api? I can
use the ThreadPool from multiprocessing.dummy, I have also seen genvent used
for something like this.
The calls to the api take some time to get a return value (1-2 seconds for
each bulk api call).
Right now, what I do is this:
for each team:
get the list of players
store the player_ids in a list
get the player information for all the players (passing the list of player_ids)
assemble and process the information
If I use ThreadPool, I can do the following:
create a ThreadPool of size x
result = pool.map(function_to_get_team_info, list of teams)
pool.close()
pool.join()
#process results
def function_to_get_team_info(team_id):
players = api.call(team_id)
player_info = get_players_information(players)
return player_info
def get_players_information(players):
player_ids = []
for player in players:
player_ids.append(player['id'])
return get_all_player_stats(player_ids)
def get_all_player_stats(players_id):
return api.call(players_id)
This processes each team concurrently, and assembles all the information back
in the ThreadPool results.
In order to make this completely concurrent, I think I would need to make my
ThreadPool the size of the number of teams. But I don't think this scales
well. So, I was wondering if I used gevent to process this information if that
would be a better approach.
Any suggestions would be very welcome
Answer: One solution would be to:
* prepare a list of tasks to perform, in your case list of teams IDs to be processed,
* create fixed pool of N thread workers,
* each worker thread pops a task from the list and processes the task (downloads team data), after completion it pops another task,
* when task list is empty, the worker thread stops.
This solution could save you from the case where processing a particular
team takes e.g. 100 time units while other teams are processed in 1 time unit
(on average).
You can tune number of thread workers depending on number of teams, average
team processing time, number of CPU cores etc.
_Extended answer_
This can be achieved with the Python
[`multiprocessing.Pool`](https://docs.python.org/2/library/multiprocessing.html):
from multiprocessing import Pool
def api_call(id):
pass # call API for given id
if __name__ == '__main__':
p = Pool(5)
p.map(api_call, [1, 2, 3])
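Since the work here is I/O-bound (waiting on HTTP responses), the same `Pool` interface is also available backed by threads, which the question already mentions; only the import changes:

    from multiprocessing.dummy import Pool  # thread pool with the same map() API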
|
handle two different erro codes by exception handler in python
Question: Hi, I am new to python, so please excuse me if this seems to be a silly question.
I have a function in my code which raises a ResponseError exception, and the
ResponseError has two error codes, 404 and 403. I want my exception handler to
give two different messages based on the error code: if the error code is 404
it should say "file does not exist", and if it is 403 then "forbidden to access".
Answer: If you are using the urllib2 package in python then you could handle it like this:

    import urllib2
    try:
        urllib2.urlopen("any url")
    except urllib2.HTTPError, err:
        if err.code == 404:
            print "File does not exist"
        elif err.code == 403:
            print "Forbidden to access"

For example, the below code will print out "File does not exist":

    import urllib2
    try:
        urllib2.urlopen("http://www.google.com/events")
    except urllib2.HTTPError, err:
        if err.code == 404:
            print "File does not exist"
        elif err.code == 403:
            print "Forbidden to access"

output - `File does not exist`, since the above url does not exist.
|
Reverse Geocoding using Python and Google API
Question: I am trying to reverse geocode 500 random lat and long points using the Google
API. I wrote the code below but I notice there are some errors that I need
help with. I want to create an output CSV that has the lat/long and complete
address of the reverse geocode, and also the JSON geo_data. One of the errors
is that my output CSV is not being written, and I don't know how to parse out
just the address into my output CSV. Can anyone help me?
Script cited below:
import pandas as pd
import json
import requests
df = pd.read_csv('/Users/albertgonzalobautista/Desktop/Amsterdam_RP500_2.csv')
# create new columns
df['geocode_data'] = ''
df['address']=''
# function that handles the geocoding requests
def reverseGeocode(latlng):
result = {}
url = 'https://maps.googleapis.com/maps/api/geocode/json?latlng={0}&key={1}'
apikey = 'XXX'
request = url.format(latlng, apikey)
data = json.loads(requests.get(request).text)
if len(data['results']) > 0:
result = data['results'][0]
return result
for i, row in df.iterrows():
df['geocode_data'][i] = reverseGeocode(df['lat'][i].astype(str) + ',' + df['lon'][i].astype(str))
for i, row in df.iterrows():
if 'address_components' in row['geocode_data']:
for component in row['geocode_data']['address_components']:
df.to_csv('testingGEO233.csv', encoding='utf-8', index=False)
Answer: You don't have any code in this for loop
for component in row['geocode_data']['address_components']:
maybe try this
for component in row['geocode_data']['address_components']:
        df.loc[i, 'address'] = row['geocode_data']['address_components']
You might actually want a string instead of a list of the components:
for component in row['geocode_data']['address_components']:
        df.loc[i, 'address'] = row['geocode_data']['formatted_address']
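A minimal sketch of the whole post-processing step, with the address filled in
row by row and the CSV written once after the loop (assuming `formatted_address`
is the field you want):

    for i, row in df.iterrows():
        geo = row['geocode_data']
        if 'formatted_address' in geo:
            df.loc[i, 'address'] = geo['formatted_address']

    # write once, after the loop, instead of on every iteration
    df.to_csv('testingGEO233.csv', encoding='utf-8', index=False)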
|
IPython: load extension automatically upon start
Question: In IPython, I can load a custom extension using simple command:
%load_ext physics
This will load the file `~/.config/ipython/extensions/physics.py`.
How can I tell IPython to load the extension automatically on startup?
I have added the line to `~/.config/ipython/profile_default/startup/import.py`,
but that does not work:
from numpy import *
%load_ext physics
When I start IPython, I get the following error:
File "~.config/ipython/profile_default/startup/import.py", line 17
%load_ext physics
^
SyntaxError: invalid syntax
Answer: From the IPython documentation:
**Using extensions**
To load an extension while IPython is running, use the %load_ext magic:
In [1]: %load_ext myextension
To load it each time IPython starts, list it in your configuration file:
c.InteractiveShellApp.extensions = [
'myextension'
    ]
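For reference, a minimal sketch of that configuration file, assuming the
default profile layout (create it with `ipython profile create` if it does not
exist yet):

    # ~/.config/ipython/profile_default/ipython_config.py
    c = get_config()
    c.InteractiveShellApp.extensions = ['physics']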
Hope that helps
|
Python- text based game not calling the correct room
Question: I am writing a text based game and I want to link each room to four other
rooms- north, south, east and west. I am starting with just north for now. The
user should be able to type 'walk north' and the north room should be called.
I have used three files- one where I will write the main story, one to call
the appropriate room within the story and one for navigation to avoid mutual
importing.
rooms.py:
import actions
class FirstRoom(object):
room_name = 'FIRST ROOM'
north = 'north_room'
def __init__(self):
pass
def start(self):
print self.room_name
while True:
next = raw_input('> ')
actions.walk(next, self.north)
actions.command(next)
class North(object):
room_name = "NORTH ROOM"
def __init__(self):
pass
def start(self):
print self.room_name
actions.py:
import navigation
def walk(next, go_north):
"""Tests for 'walk' command and calls the appropriate room"""
if next == 'walk north':
navigation.rooms(go_north)
else:
pass
navigation.py:
import rooms
first_room = rooms.FirstRoom()
north_room = rooms.North()
def rooms(room):
rooms = {
'first_room': first_room.start(),
'north_room': north_room.start(),
}
rooms[room]
When I run first_room.start() it should print 'FIRST ROOM' which it does. Then
I type in 'walk north' and I expect it to print "NORTH ROOM", but instead it
prints "FIRST ROOM" again.
I can't figure out for the life of me why it doesn't work the way I expect it
to; it's as if it's calling first_room again instead of north_room. Can anyone
figure out what I'm doing wrong?
Answer: My guess is that the issue occurs because of how the dictionary `rooms` is
defined. When you do -
rooms = {
'first_room': first_room.start(),
'north_room': north_room.start(),
}
rooms[room]
The functions get called when you define the dictionary itself, not when you
access the values from it (so both functions get called). You want to store
the function objects (without calling them) as the values and then call them as
`rooms[room]()`. Example -
def rooms(room):
rooms = {
'first_room': first_room.start,
'north_room': north_room.start,
}
rooms[room]()
|
Python RegEx to report lines a string exists on
Question: I am trying to write something up real quick that will find a specific string
and report to me which lines that string exists on. I am trying to find
exponents, so I am looking for "e+", which only occurs on lines that start with
AAA, but not ALL the lines with AAA at the start have e+ in them. Currently,
my code looks like this:
import re
file = raw_input("Enter a file to scan ")
extfile = file+".txt"
#NDoc = raw_input("Enter a file to place results ")
#log = open (NDoc, 'w')
xfile = open(file+".txt")
expcnt = 0
nl = list
for line in xfile:
line = line.strip()
n = re.findall('^PV1.+(e\+)')
if len(n) > 0:
expcnt = expcnt+1
### l = line n is on
nl.append(l)
for item in nl:
# log.write(item+"\n")
print (expcnt,"exponenets exist and occur on the following lines:"
For item in nl:
print item
#print ("Your results can be found in",extfile,".")
Basically, I want to know what I need to put on the line with three hashtags (#) in
order to record the line n was found on and put it into a list. Later, I want
to print that list out. NDoc and log at the top are commented out because I am
considering printing the results to a new page using `from __future__ import
print_function`, but that can be ignored for now. So I am trying to put all
line numbers into the list nl and all of the exponent strings into the list el
(though list el is just for me to double check things and won't be used later).
Any assistance would be much appreciated, thanks in advance!
Edit:
import re
file = raw_input("Enter a file to scan ")
extfile = file+".txt"
#NDoc = raw_input("Enter a file to place results ")
#log = open (NDoc, 'w')
xfile = open(file+".txt")
expcnt = 0
nl = list
for line_num, line in enumerate(xfile):
line = line.strip()
n = re.findall('^PV1.+(e\+)')
if len(n) > 0:
expcnt = expcnt+1
l = line_num
nl.append(l)
for item in nl:
# log.write(item+"\n")
#print (expcnt,"exponenets exist and occur on the following lines:"
    for item in nl:
print item
#print ("Your results can be found in",extfile,".")
For some reason this is returning an error "Expected an indented block" at
`for item in nl:`. Does anyone know why that would occur?
Answer: Since you're already reading the file line by line, you could use `enumerate`.
for line_num, line in enumerate(xfile):
instead of
for line in xfile:
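A minimal sketch of the loop with those pieces put together, reusing the
`extfile` name and the regex from the question. Side notes: `nl = list` only
aliases the list type (it needs to be `nl = []` before `append` will work),
`re.findall` needs the line as its second argument, and the "Expected an
indented block" error in your edit comes from the earlier `for item in nl:`
loop whose body is entirely commented out.

    import re

    nl = []      # line numbers where a match was found
    expcnt = 0
    with open(extfile) as xfile:
        for line_num, line in enumerate(xfile, start=1):
            if re.findall(r'^PV1.+(e\+)', line.strip()):
                expcnt += 1
                nl.append(line_num)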
|
While statement not evaluating to false for Selenium Webdriver
Question: I'm migrating a test I wrote in the Selenium IDE to Python WebDriver and I'm
having some issues with a **'while'** loop scenario. Here's the IDE code:
while | selenium.isElementPresent("xpath=//select[@name='servers']/option")
storeValue | //select[@name='servers']/option | myServerIP
waitForElementPresent | name=servers
addSelection | name=servers | ${myServerIP}
waitForValue | //input[@value='Delete'] | Delete
clickAndWait | //input[@value='Delete']
waitForNotText | //select[@name='servers']/option | ${myServerIP}
endWhile
I have a box that contains addresses of time servers that have been entered
(i.e. 129.6.15.30, time-d.nist.gov, etc..). While addresses are listed in the
server list box, the object **"//select[@name='servers']/option"** exists.
Once all the servers have been deleted, the object is no longer in existence.
While the server list object exists..
* Stores the name of top server in the list.
* Selects that server.
* Deletes that server from the list
* Confirms that server name has been removed from the list
As I'm trying to migrate this scenario over to WebDriver, I'm having some
issues.
while expected_conditions.visibility_of_element_located("//select[@name='servers']/option"):
myServerIP = driver.find_element_by_xpath("//select[@name='servers']/option").text
assertExpectedConditionTrue(driver, "By.NAME", "servers")
driver.find_element_by_xpath("//select[@name='servers']/option[contains(text(), '"+ myServerIP + "')]").text
driver.find_element_by_xpath("//select[@name='servers']/option[contains(text(), '"+ myServerIP + "')]").click()
assertExpectedValueConditionTrue(driver, "By.XPATH", "//input[@value='Delete']", "Delete")
driver.find_element_by_xpath("//input[@value='Delete']").click()
assertExpectedConditionFalse(driver, "By.XPATH", "//select[@name='servers']/option", myServerIP)
The server names are found and deleted just fine. However, the 'while' part
never seems to evaluate to being invisible, which causes a
**NoSuchElementException** failure (on the first line in the 'while' loop)
once all the server names are removed. I'm looking for a way to make the
'while' loop evaluate to false so it's exited gracefully once all the server
names have been removed.
Answer: Note that `expected_conditions.visibility_of_element_located(...)` only
constructs a condition object, which is always truthy; nothing is actually
checked until it is passed to `WebDriverWait.until()`, which is why your
'while' condition never evaluates to false. Make a `while True` loop instead
and exit it once you get a `TimeoutException`:
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
wait = WebDriverWait(driver, 10)
while True:
try:
wait.until(EC.presence_of_element_located((By.XPATH, "//select[@name='servers']/option")))
except TimeoutException:
break
# rest of the code
Or, alternatively, catch `NoSuchElementException`:
from selenium.common.exceptions import NoSuchElementException
while True:
try:
myServerIP = driver.find_element_by_xpath("//select[@name='servers']/option").text
except NoSuchElementException:
break
* * *
Additionally, `selenium` has this [`Select` class](https://selenium-
python.readthedocs.org/api.html#selenium.webdriver.support.select.Select)
which makes it easy to work with `select->option` HTML blocks:
from selenium.webdriver.support.select import Select
select = Select(driver.find_element_by_name("servers"))
    select.select_by_visible_text(myServerIP)
|
How to filter a big chunk of text which contains no line breaks in python?
Question: Here is my problem: I want to filter a big chunk of text with python, but all
the things I found were filtering by line, i.e. with "if line.startswith", and I
don't think I can do that here :/.
Here is my actual code :
import json
import requests
data = requests.get('http://www.reddit.com/r/todayilearned/new/.json');
print(data.json())
I want to fetch the content between `"title":` and `",)`. Do you have any
ideas? Thanks!
Answer: Look at the output of `print(data.json())`. The JSON Python module makes it
very easy to loop over JSON data with a for ... in ... loop. Try it out:
for x in data.json():
print x
See what the output is. After that, add more inner loops to loop through x,
you will see how you can access each piece of JSON data.
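As a sketch, assuming the standard reddit listing structure (a top-level `data`
key whose `children` list holds one `data` dict per post), the titles can be
pulled out directly:

    import requests

    data = requests.get('http://www.reddit.com/r/todayilearned/new/.json')
    for child in data.json()['data']['children']:
        print child['data']['title']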
|
Parse childs in XML python
Question: I have a XML code like:
<?xml version='1.0' encoding="UTF-8"?>
<coureurs>
<coureur>
<nom>Patrick</nom><hair>Inexistants</hair>
</coureur>
</coureurs>
And I want to print that:
Patrick inexistents
Etc...
For now my code is :
from lxml import etree
tree = etree.parse(file.xml)
for coureur in tree.xpath("/coureurs/coureur/nom"):
print(user.text)
but it returns blank. When I do `for user in
tree.xpath("/coureurs/coureur/hair"):` it only returns hair. What should I do?
Answer: I am still not able to reproduce the issue with the xml and code you provided.
But it seems like you have left out a lot of xml, and most probably the `XPATH`
may not be working for you if `coureurs` is not the direct root of the xml (or
its direct child).
In such cases, you can use the following XPATH to get each `coureur` node in
the xml (that is the child of a `coureurs` node) -
//coureurs/coureur
This would give you all `<coureur>` tag elements from the xml, and then you
can iterate over that to print each child's text. Example Code -
for user in tree.xpath('//coureurs/coureur'):
for child in user:
print(child.text,end=" ")
print()
* * *
Example/Demo -
In [26]: s = """<?xml version='1.0' encoding="UTF-8"?>
....: <coureurs>
....: <coureur>
....: <nom>Patrick</nom><hair>Inexistants</hair>
....: </coureur>
....: </coureurs>"""
In [28]: tree = etree.fromstring(s.encode())
In [35]: for user in tree.xpath('//coureurs/coureur'):
....: for child in user:
....: print(child.text,end=" ")
....: print()
....:
Patrick Inexistants
|
Automation Microsoft SQL Server 2008 R2 using Python(pywinauto)
Question: I am creating a **Microsoft SQL Server Management Studio** **automation tool
using python**. The problem is I can't select the **Child_tree** (Northwind)
database; it's selecting the **Parent_tree** (Databases). I need to do much
more, like clicking the child_tree (Northwind) right-click option (Ex. Tasks ->
backup). Help me to write the best automation code. Thanks in advance.
import pywinauto
import socket
import binascii
host = socket.gethostname() #Getting system host name
n2 = int('0b111000001100001011100110111001101110111011011110111001001100100', 2) #password
n1 = int('0b111010101110011011001010111001001101110011000010110110101100101', 2) # username
n = int('0b1110011011001010111001001110110011001010111001001101110011000010110110101100101', 2) #servername av
if (host == "systemhostXXX" or host == "systemhostyyy"): # checking the host name
try:
pwa_app = pywinauto.application.Application()
path = pwa_app.start_(r"C:/Program Files (x86)/Microsoft SQL Server/100/Tools/Binn/VSShell/Common7/IDE/Ssms.exe") #Opening the .exe file
print("Status: Application Launched successfully!!")
except:
print("Error: Applicatin Launching Error!!")
try:
pwa_app.ConnecttoServer.ComboBox1.Select("Database Engine") #Selecting the combobox value
pwa_app.ConnecttoServer.edit1.SetText(binascii.unhexlify('%x' % n))
pwa_app.ConnecttoServer.ComboBox3.Select("SQL Server Authentication")
pwa_app.ConnecttoServer.edit2.SetText(binascii.unhexlify('%x' % n1)) # convert binary into string
pwa_app.ConnecttoServer.edit3.SetText(binascii.unhexlify('%x' % n2))
print("Status: Log-in Process!!")
pwa_app.ConnecttoServer.Connect.Click()
except:
print("Error: Log-In Failed!!Please Relaunch!")
try:
pwa_app.ConnecttoServer.Ok.Click() #Button click (OK)
pwa_app.ConnecttoServer.Cancel.Click()
print("Error: Restoration going-on!!")
except:
print("Status: Log-in Success!!")
try:
w_handle = pywinauto.findwindows.find_windows(title=u'Microsoft SQL Server Management Studio', class_name='wndclass_desked_gsk')[0]
window = pwa_app.window_(handle=w_handle)
ctrl = window['TreeView']
ctrl.GetItem([u'SQL Server 8.0.2039']).Click()
ctrl.GetItem([u'SQL Server 8.0.2039', u'Databases', u'Northwind']).Click() #Selecting the database
except:
print("Database selection failed !!")
else: print 'Dear', host,'You are not Authorized to Run this program\n'
Answer: As I understand from the comments, you need to wait until the main
window is open after login.
window = pwa_app.Window_(title=u'Microsoft SQL Server Management Studio', class_name='wndclass_desked_gsk')
window.Wait('ready', timeout=20) # default timeout is 5 sec. if any
ctrl = window['TreeView']
ctrl.GetItem([u'SQL Server 8.0.2039']).Click()
ctrl.GetItem([u'SQL Server 8.0.2039', u'Databases', u'Northwind']).Click() #Selecting the database
Please check how it works.
**EDIT** :
It seems you generated the code for `'Microsoft SQL Server Management Studio'`
window using SWAPY. It means that the window was already open.
But in an automated workflow `Log-in` is quite a long operation (it may take up
to 10 seconds I believe). So when you click the "Connect" button, `'Microsoft
SQL Server Management Studio'` is not open yet. You may see some progress
window or even nothing for a few seconds.
The function `find_windows` doesn't wait until the window appears on the screen.
It just looks for the window at that moment. So when you execute the line
window = pwa_app.Window_(title=u'Microsoft SQL Server Management Studio', class_name='wndclass_desked_gsk')
a **WindowSpecification** object is created (the `window` variable). `ctrl =
window['TreeView']` is also a WindowSpecification object. They are just
descriptions and are not connected to a real window/control. But the following
statement
ctrl.GetItem([u'SQL Server 8.0.2039']).Click()
is equivalent to
ctrl.WrapperObject().GetItem([u'SQL Server 8.0.2039']).Click()
or
ctrl.Wait('visible').GetItem([u'SQL Server 8.0.2039']).Click()
pywinauto hides the `WrapperObject()` call using the power of Python, so it's
called automatically. The default timeout is 5 seconds in this case, which
might be insufficient for long operations like Log-in. That's why I suggest
calling `Wait('ready', timeout=20)` explicitly. `'ready'` means `'exists
visible enabled'` combined with logical `AND`.
|
Does the dill python module handle importing modules when sys.path differs?
Question: I'm evaluating dill and I want to know if this scenario is handled. I have a
case where I successfully import a module in a python process. Can I use dill
to serialize and then load that module in a different process that has a
different sys.path which doesn't include that module? Right now I get import
failures but maybe I'm doing something wrong.
Here's an example. I run this script where the foo.py module's path is in my
sys.path:
% cat dill_dump.py
import dill
import foo
myFile = "./foo.pkl"
fh = open(myFile, 'wb')
dill.dump(foo, fh)
Now, I run this script where I do not have foo.py's directory in my
PYTHONPATH:
% cat dill_load.py
import dill
myFile = "./foo.pkl"
fh = open(myFile, 'rb')
foo = dill.load(fh)
print foo
It fails with this stack trace:
Traceback (most recent call last):
File "dill_load.py", line 4, in <module>
foo = dill.load(fh)
File "/home/b/lib/python/dill-0.2.4-py2.6.egg/dill/dill.py", line 199, in load
obj = pik.load()
File "/rel/lang/python/2.6.4-8/lib/python2.6/pickle.py", line 858, in load
dispatch[key](self)
File "/rel/lang/python/2.6.4-8/lib/python2.6/pickle.py", line 1133, in load_reduce
value = func(*args)
File "/home/b/lib/python/dill-0.2.4-py2.6.egg/dill/dill.py", line 678, in _import_module
return __import__(import_name)
ImportError: No module named foo
So, if I need to have the same python path between the two processes, then
what's the point of serializing a python module? Or in other words, is there
any advantage to loading foo via dill over just having an "import foo" call?
Answer: That's an interesting failure. Notice that if you do `dill.dumps(foo)` you
will get the contents of the module `foo`… the part that fails is using
python's built-in import hook (`__import__`) to do little more than to
register the module into `sys.modules`. It should be possible to work around
that and modify `dill` so that the module could be imported if the module is
not found in the PYTHONPATH. However, I do think it's proper that the module
has to be found in the PYTHONPATH… that is what is expected of a module… so
I'm not sure if it's a good idea. But it might be...
As noted above, for a file `foo.py`, with contents: `hello = "hello world, I
am foo"`
>>> import dill
>>> import foo
>>> dill.dumps(foo)
'\x80\x02cdill.dill\n_import_module\nq\x00U\x03fooq\x01\x85q\x02Rq\x03}q\x04(U\x08__name__q\x05h\x01U\x08__file__q\x06U\x06foo.pyq\x07U\x05helloq\x08U\x15hello world, I am fooq\tU\x07__doc__q\nNU\x0b__package__q\x0bNub.'
You can see the contents of the file are preserved in the pickle.
The primarily reason to use `dill` with modules, is that `dill` can record
dynamic modifications to modules. For example, adding a function or other
object:
>>> import foo
>>> import dill
>>> foo.a = 100
>>> with open('foo.pkl', 'w') as f:
... dill.dump(foo, f)
...
>>>
Then restarting… (with `foo` in the PYTHONPATH)
Python 2.7.10 (default, May 25 2015, 13:16:30)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> with open('foo.pkl', 'r') as f:
... foo = dill.load(f)
...
>>> foo.hello
'hello world, I am foo'
>>> foo.a
100
>>>
I've added this as a bug report / feature request:
<https://github.com/uqfoundation/dill/issues/123>
|
as_formula specifier for sklearn.tree.decisiontreeclassifier in Python?
Question: I was curious if there is an as_formula specifier (like in `statsmodels`) for
`sklearn.tree.decisiontreeclassifier` in Python, or some way to hack one in.
Currently, I must use
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X, Y)
but I would prefer to have something like
clf = clf.fit(formula='Y ~ X', data=df)
The reason is that I would like to specify more than one X without having to
do a lot of array shaping. Thanks.
Answer: Thanks for the information. Although there is no current `Patsy` interface for
`sklearn`, `Patsy` easily provides the functionality I need. As an example...
from sklearn import tree
from patsy import dmatrix
red = [1,0,0,0,0,1,1,0,0,1,1,0]
green = [0,0,0,1,0,1,1,0,0,1,1,0]
blue = [0,0,1,1,0,0,0,1,0,0,0,0]
y = [0,0,0,0,0,1,1,0,0,1,1,0]
X = dmatrix('red + green + blue + 0')
dt_clf = tree.DecisionTreeClassifier()
dt_clf = dt_clf.fit(X, y)
pred_r = [1,1,0,0,1,1,0,0,0,0,0,0]
pred_g = [1,1,0,0,1,1,0,0,0,0,0,0]
pred_b = [0,0,1,1,0,0,0,1,0,0,0,0]
test = dmatrix('pred_r + pred_g + pred_b + 0')
dt_clf.predict(test)
Perhaps even more convenient is the fact that `sklearn` plays well with
`pandas`. Using the same data as above...
import pandas as pd
df = pd.DataFrame()
df['red'] = red
df['green'] = green
df['blue'] = blue
df['y'] = y
dt_clf = dt_clf.fit(df[['red','green','blue']], df['y'])
dt_clf.predict(test)
Hopefully this helps someone in the same situation as me.
note: be very careful that the sequence of Xs remains the same. For example,
don't train with df[['red','green','blue']] and then predict with
df[['blue','green','red']]. It may seem obvious, but it's an easy way to mess
things up.
|
Android Socket client unable to send and receive messages
Question: I want to send and receive messages from my socket server which is created in
python on windows with the help of twisted API. My client is going to be my
android phone through I am going send my string messages. Here is my code. can
someone please help out.
public class MainActivity extends AppCompatActivity
{
//TextView textView;
Button sendButton;
Button connect;
EditText message;
OutputStream outputStream;
InputStream inputStream;
Socket socket;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
sendButton = (Button) findViewById(R.id.sendButton);
connect = (Button) findViewById(R.id.button);
message = (EditText) findViewById(R.id.message);
connect.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View view)
{
connect.setText("Disconnect");
AsyncTask asyncTask = new AsyncTask() {
@Override
protected Object doInBackground(Object[] objects)
{
try {
socket = new Socket("192.168.100.106",8888);
try {
outputStream = socket.getOutputStream();
inputStream = new DataInputStream(socket.getInputStream());
} catch (IOException e) {
e.printStackTrace();
}
connect.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View view)
{
try {
socket.close();
} catch (IOException e) {
e.printStackTrace();
}
}
});
sendButton.setOnClickListener( new View.OnClickListener() {
@Override
public void onClick(View view)
{
PrintWriter out = new PrintWriter(outputStream);
String mes = message.getText().toString();
out.print(mes);
}
});
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
};
asyncTask.execute();
}
});
}
}
And here is my socket server script coded in python with the help of twisted
API.
from twisted.internet.protocol import Protocol, Factory
from twisted.internet import reactor
import ctypes # An included library with Python install.
class DataTransfer(Protocol):
def connectionMade(self):
#self.transport.write("""connected""")
self.factory.clients.append(self)
print "clients are ", self.factory.clients
self.username = ""
self.password = ""
self.auth = False
self.ipaddress = self.transport.getPeer()
print self.ipaddress
def connectionLost(self, reason):
self.factory.clients.remove(self)
print reason
def dataReceived(self, data):
print data
a = data.split(':')
if len(a) > 1:
command = a[0]
content = a[1]
msg = ""
self.message(msg)
def message(self, message):
self.transport.write(message + '\n')
factory = Factory()
factory.protocol = DataTransfer
factory.clients = []
reactor.listenTCP(8888, factory)
print "Server started"
reactor.run()
Presently I am able to communicate (i.e. connect and disconnect with the
server), but it's just that I am not able to send and receive messages.
Answer: The `PrintWriter` wraps the stream in a buffer and is never flushed, so
nothing actually reaches the server. Either call `out.flush()` after
`out.print(mes)`, or instead of `PrintWriter out = new PrintWriter(outputStream);`
write directly to the outputStream, and it should work. :)
|
Running bash commands from python script in certain directory
Question: I am trying to write a Python script to recursively run a program from bash in
multiple directories and save (for now just display) the output in a log file.
The problem is that when I try to run that app from the home directory just giving
the full path of the input files it crashes, so I usually run it directly in the
directory containing the input files. I have tried to do this with os.popen() and
pexpect, but I think the most suitable here will be subprocess.
My code:
    import subprocess
    from subprocess import Popen, PIPE
subprocess.Popen("$fp "+pcr+" "+dat, cwd=path+dirs[0], stdout=PIPE)
print output.stdout.read()
where pcr and dat are the names of input files.
As I try to run it I obtain following error:
Traceback (most recent call last):
File "file_ed.py", line 47, in <module>
subprocess.Popen("$fp "+pcr+" "+dat, cwd=retval, stdout=PIPE)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
What could be the cause of this problem and how to fix it?
Answer: I think your problem is that you want to pass `pcr` and `dat` as arguments to
the program and you think that the whitespace will do that for you – but
Python is not bash. Just use an array and it should work:
subprocess.Popen([fp, pcr, dat], cwd=path+dirs[0], stdout=PIPE)
The reason you got "no such file or directory": Python was looking for a
program named `"$fp "+pcr+" "+dat`, i.e. it assumed that the whitespace (and
the dollar sign, unless you meant it as kind of a pseudo-code placeholder) is
part of the program name. The error tells you that there is no program with
such a name.
However it is hard to guess what you mean because your snippet is incomplete.
I don't really know what `$fp` and `pcr` and `dat` mean.
Also have a look at the examples in the documentation of the subprocess
module.
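A minimal sketch of the corrected call with output capture; the `fp`, `pcr`,
`dat` and `workdir` values here are hypothetical placeholders for the ones in
your script:

    import subprocess
    from subprocess import PIPE

    fp = '/path/to/program'              # the program to run
    pcr, dat = 'input.pcr', 'input.dat'  # the two input file names
    workdir = '/path/to/inputs'          # directory containing the input files

    proc = subprocess.Popen([fp, pcr, dat], cwd=workdir, stdout=PIPE)
    out, _ = proc.communicate()  # wait for the program and collect its stdout
    print(out)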
|
Download CSV from an iPython Notebook
Question: I run an iPython Notebook server, and would like users to be able to download
a pandas dataframe as a csv file so that they can use it in their own
environment. There's no personal data, so if the solution involves writing the
file at the server (which I can do) and then downloading that file, I'd be
happy with that.
Answer: How about using the FileLinks class from IPython? I use this to provide access
to data directly from Jupyter notebooks. Assuming your data is in the pandas
dataframe p_df:
    from IPython.display import FileLink, FileLinks
p_df.to_csv('/path/to/data.csv', index=False)
p_df.to_excel('/path/to/data.xlsx', index=False)
FileLinks('/path/to/')
Run this as a notebook cell and the result will be a list of links to files
downloadable directly from the notebook. `'/path/to'` needs to be accessible
for the notebook user of course.
|
Configuration file for Flask application
Question: _I'm new to Python, so please bear with me._
I am attempting to create a file in which to store my configuration settings
in a Flask project. However, I seem to be getting errors when I attempt to
import the file.
Here's my configuration file (location: `app/config.py`):
database_uri = 'something here'
    secret_key = 'something here'
And here's where I'm using it (location: `app/models.py`):
from app import config
...
app.config['SQLALCHEMY_DATABASE_URI'] = config.database_uri
However, I seem to be getting this error when launching the application:
[Sat Aug 08 19:00:15.539773 2015] [:error] [pid 29784] [client 188.183.57.54:64122] mod_wsgi (pid=29784): Target WSGI script '/var/www/pwforum/pwforum.wsgi' cannot be loaded as Python module.
[Sat Aug 08 19:00:15.540014 2015] [:error] [pid 29784] [client 188.183.57.54:64122] mod_wsgi (pid=29784): Exception occurred processing WSGI script '/var/www/pwforum/pwforum.wsgi'.
[Sat Aug 08 19:00:15.540146 2015] [:error] [pid 29784] [client 188.183.57.54:64122] Traceback (most recent call last):
[Sat Aug 08 19:00:15.540250 2015] [:error] [pid 29784] [client 188.183.57.54:64122] File "/var/www/pwforum/pwforum.wsgi", line 7, in <module>
[Sat Aug 08 19:00:15.540448 2015] [:error] [pid 29784] [client 188.183.57.54:64122] from app import app as application
[Sat Aug 08 19:00:15.540537 2015] [:error] [pid 29784] [client 188.183.57.54:64122] File "/var/www/pwforum/app/__init__.py", line 12, in <module>
[Sat Aug 08 19:00:15.540685 2015] [:error] [pid 29784] [client 188.183.57.54:64122] from app import views, models
[Sat Aug 08 19:00:15.540773 2015] [:error] [pid 29784] [client 188.183.57.54:64122] File "/var/www/pwforum/app/views.py", line 3, in <module>
[Sat Aug 08 19:00:15.541061 2015] [:error] [pid 29784] [client 188.183.57.54:64122] from app.models import db, User, Category, Topic, Post
[Sat Aug 08 19:00:15.541154 2015] [:error] [pid 29784] [client 188.183.57.54:64122] File "/var/www/pwforum/app/models.py", line 11, in <module>
[Sat Aug 08 19:00:15.541333 2015] [:error] [pid 29784] [client 188.183.57.54:64122] app.config['SQLALCHEMY_DATABASE_URI'] = config.database_uri
[Sat Aug 08 19:00:15.541413 2015] [:error] [pid 29784] [client 188.183.57.54:64122] AttributeError: 'module' object has no attribute 'database_uri'
Answer: Your config file should look like this:
SQLALCHEMY_DATABASE_URI = '<your-db-driver>://<user>:<pw>@<db-url>'
SECRET_KEY = '<your-very-secret-key>'
Then you can do:

    app = Flask(__name__)
    app.config.from_object('config')

Note that `app.config.from_object` only picks up uppercase attributes, which is
why the keys are renamed; Flask and Flask-SQLAlchemy then read the standard
`SECRET_KEY` and `SQLALCHEMY_DATABASE_URI` names automatically.
|
PS1() in Python like in Octave?
Question: I am taking an online machine learning course in Octave, and I am looking for
Python equivalents to Octave's commands. One such command is PS1(), which is a
function for changing the characters of the command prompt in Octave to a
passed string.
For example, the default prompt in my Octave command line interface is '> ',
but I could change it to '>> ' by entering the following command:
PS1('>> ')
I've tried using the Google search engine, but I didn't find what I was
looking for.
Is there a Python equivalent for the PS1() function in Octave and, if so, what
is it?
Answer: Yes, Python has them; just set the
[`sys.ps1`](https://docs.python.org/2/library/sys.html#sys.ps1) and `sys.ps2`
variables:
>>> import sys
>>> sys.ps1 = '$$$ '
$$$ sys.ps2 = '!!! '
$$$
$$$ while 0:
!!! True
!!!
$$$
`sys.ps1` is the prompt for normal lines, while `sys.ps2` is the prompt for blocks
that should be indented (and thus the interpreter allows you to pass multiple
lines before executing them), as you can see in the `while` example.
BTW, the `sys` module contains many helpful interpreter internals.
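To make the change persistent, a minimal sketch using a startup file pointed to
by the `PYTHONSTARTUP` environment variable (the file name is just an example):

    # ~/.pythonrc.py, enabled with: export PYTHONSTARTUP=~/.pythonrc.py
    import sys
    sys.ps1 = '>> '  # prompt for new statements
    sys.ps2 = '.. '  # prompt for continuation lines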
|
Assign part of a string to a variable [Python]
Question: So I have the string:
[53,2]
And I want to separate it so that:

    x = 53
    y = 2
Answer: Easy!
    x, y = [53, 2]
Isn't Python fun?
If your object is actually a string and not a list, you can safely convert it
to a list:
import ast
x, y = ast.literal_eval("[53, 2]")
|
Unexpected error from sparse.spdiags()
Question: In Python 3 I am trying to run the following line of code to get a particular
sparse matrix.
`sparse.spdiags(np.concatenate((-np.ones((9,1)), np.ones((9,1))), axis=1), [0,
1], 9, 10)`
This gives the following error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/scipy/sparse/construct.py", line 61, in spdiags
return dia_matrix((data, diags), shape=(m,n)).asformat(format)
File "/usr/lib/python3/dist-packages/scipy/sparse/dia.py", line 138, in __init__
% (self.data.shape[0], len(self.offsets)))
ValueError: number of diagonals (9) does not match the number of offsets (2)
Running what I understand to be the equivalent code in Octave seems to get me
a sparse matrix.
spdiags([-ones(9,1) ones(9,1)],[0 1],9,10)
Compressed Column Sparse (rows = 9, cols = 10, nnz = 18 [20%])
(1, 1) -> -1
(1, 2) -> 1
(2, 2) -> -1
(2, 3) -> 1
(3, 3) -> -1
(3, 4) -> 1
(4, 4) -> -1
(4, 5) -> 1
(5, 5) -> -1
(5, 6) -> 1
(6, 6) -> -1
(6, 7) -> 1
(7, 7) -> -1
(7, 8) -> 1
(8, 8) -> -1
(8, 9) -> 1
(9, 9) -> -1
(9, 10) -> 1
Any ideas on why they are behaving differently, and how to fix it?
**ADDITION**
I'm having an additional problem with Scipy.sparse's output vs Octave's.
PYTHON
>>> sparse.spdiags(np.concatenate((-np.ones((9,1)),np.ones((9,1))), axis=1).T, [0,1],9,10).A
array([[-1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., -1., 1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., -1., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., -1., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., -1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., -1., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., -1., 1., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., -1., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., -1., 0.]])
>>> sparse.spdiags(np.concatenate((-np.ones((9,1)),np.ones((9,1))), axis=1).T, [0,1],9,10).A.shape
(9, 10)
>>> sparse.spdiags(np.vstack([-np.ones(9),np.ones(9)]), [0,1],9,10).A
array([[-1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., -1., 1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., -1., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., -1., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., -1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., -1., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., -1., 1., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., -1., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., -1., 0.]])
>>> sparse.spdiags(np.vstack([-np.ones(9),np.ones(9)]), [0,1],9,10).A.shape
(9, 10)
>>> sparse.spdiags(np.ones(9)*[[-1],[1]], [0,1],9,10).A
array([[-1., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., -1., 1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., -1., 1., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., -1., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., -1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., -1., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., -1., 1., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., -1., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., -1., 0.]])
>>> sparse.spdiags(np.ones(9)*[[-1],[1]], [0,1],9,10).A.shape
(9, 10)
OCTAVE
>full(spdiags([-ones(9,1) ones(9,1)],[0 1],9,10))
ans =
-1 1 0 0 0 0 0 0 0 0
0 -1 1 0 0 0 0 0 0 0
0 0 -1 1 0 0 0 0 0 0
0 0 0 -1 1 0 0 0 0 0
0 0 0 0 -1 1 0 0 0 0
0 0 0 0 0 -1 1 0 0 0
0 0 0 0 0 0 -1 1 0 0
0 0 0 0 0 0 0 -1 1 0
0 0 0 0 0 0 0 0 -1 1
>size(full(spdiags([-ones(9,1) ones(9,1)],[0 1],9,10)))
ans =
9 10
Why do scipy and Octave not give the same value in the last column of the
last row?
Answer: Your `concatenate` produces a (9,2) matrix:
In [310]: np.concatenate((-np.ones((9,1)), np.ones((9,1))), axis=1)
Out[310]:
array([[-1., 1.],
[-1., 1.],
[-1., 1.],
[-1., 1.],
[-1., 1.],
[-1., 1.],
[-1., 1.],
[-1., 1.],
[-1., 1.]])
In [311]: _.shape
Out[311]: (9, 2)
`spdiags` doc describes this `data` parameter as `matrix diagonals stored row-
wise`. That is, each row of the matrix corresponds to a diagonal. 9 rows, but
only 2 values in `[0,1]`.
This is an important difference, that I alluded to in my previous answer,
though maybe I didn't stress it enough.
If you want 2 diagonals, you need give it a `(2,9)` array, such as the
transpose of this matrix:
In [317]: sparse.spdiags(np.concatenate((-np.ones((9,1)),
np.ones((9,1))), axis=1).T, [0,1],9,10)
Out[317]:
<9x10 sparse matrix of type '<class 'numpy.float64'>'
with 18 stored elements (2 diagonals) in DIAgonal format>
You could also construct the diagonals with:
In [321]: np.concatenate([-np.ones((1,9)), np.ones((1,9))],axis=0)
Out[321]:
array([[-1., -1., -1., -1., -1., -1., -1., -1., -1.],
[ 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
Or `np.vstack([-np.ones(9),np.ones(9)])` or `np.ones(9)*[[-1],[1]]`.
* * *
Look at my previous answer, but change the final shape (to make more columns
than rows):
octave:17> reshape (1:12, 4, 3)
ans =
1 5 9
2 6 10
3 7 11
4 8 12
octave:18> full(spdiags (reshape (1:12, 4, 3), [-1 0 1], 4,5))
ans =
5 9 0 0 0
1 6 10 0 0
0 2 7 11 0
0 0 3 8 12
In [327]: np.arange(1,13).reshape(3,4)
Out[327]:
array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 11, 12]])
In [328]: sparse.spdiags(np.arange(1,13).reshape(3,4), [-1, 0, 1], 4,5).A
Out[328]:
array([[ 5, 10, 0, 0, 0],
[ 1, 6, 11, 0, 0],
[ 0, 2, 7, 12, 0],
[ 0, 0, 3, 8, 0]])
In Octave (and presumably MATLAB) the +1 diagonal starts with 9, ends with 12,
the full last column from the input matrix. Look at [2,6,10] - in a right
angle arrangement.
The scipy diagonal starts with 10, ends with an added 0. The `9` is invisible
in the nonexistent row above. Look at [2,6,10] - in one column.
They are both consistent - in their own way. So at least when there are more
columns than rows, you'll need to take the difference into account when
creating the input matrix.
Another `scipy` function takes the ambiguity out, by expecting the correct
number of elements for each diagonal (as a list of lists):
In [337]: sparse.diags([[1,2,3],[5,6,7,8],[9,10,11,12]],[-1,0,1],(4,5),dtype=int).A
Out[337]:
array([[ 5, 9, 0, 0, 0],
[ 1, 6, 10, 0, 0],
[ 0, 2, 7, 11, 0],
[ 0, 0, 3, 8, 12]])
Note that I had to omit the `4`.
Look at the `tocoo` method in `scipy/sparse/dia.py` to see more about how a
`dia_matrix` maps the diagonals data on to the sparse coordinates (`coo`
format).
|
Include Python in Qt Creator
Question: I am trying to embed python in my c++ code in my qt project as per [this
tutorial](https://docs.python.org/2/extending/embedding.html). I am now
getting this error code: "error: undefined reference to `_imp__Py_Initialize'"
Before this, I had the same problem in CodeBlocks and fixed it by with these
additional arguments "-IC:\Python27\include\ -IC:\Python27\libs\" and
"C:\Python27\libs\python27.lib"
Adding the same commands to my .pro file as such:
QMAKE_CXXFLAGS += -Wall -fexceptions -g -IC:\Python27\include\ -IC:\Python27\libs\ C:\Python27\libs\python27.lib
Allows me to import python.h but nothing more.
I know that questions like this have been posted before, and they helped me
get running in CodeBlocks, but the same information doesn't apply to Qt, or I
am implementing it wrong.
Answer: To build on HeyYO's answer, the .pro arguments to fix the problem are:
INCLUDEPATH = c:\Python27\include\ c:\Python27\libs\
LIBS += C:\Python27\libs\python27.lib
QMAKE_CXXFLAGS += C:\Python27\libs\python27.lib
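Equivalently, a sketch using qmake's `-L`/`-l` linker flags instead of passing
the .lib file through the compiler flags (paths assume the same Python 2.7
install):

    INCLUDEPATH += C:\Python27\include
    LIBS += -LC:\Python27\libs -lpython27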
|
Python and Shapefile: Very large coordinates after importing shapefile
Question: I downloaded a shapefile of Boston and wants to plot it out using the code
below. However it's giving me an error `ValueError: lat_0 must be between
-90.000000 and 90.000000 degrees `
Turns out `coords` has the values `(33869.92130000144, 777617.2998000011,
330800.31099999696, 959741.1853)`. Why are they so large?
[Boston shapefile is obtained here](http://www.mass.gov/anf/research-and-
tech/it-serv-and-support/application-serv/office-of-geographic-information-
massgis/datalayers/zipcodes.html)
**Code**
# Import Boston shapefile
shapefilename = 'ZIPCODES_NT_POLY'
shp = fiona.open(shapefilename + '.shp')
coords = shp.bounds
shp.close()
w, h = coords[2] - coords[0], coords[3] - coords[1]
extra = 0.01
m = Basemap(
projection='tmerc', ellps='WGS84',
lon_0 = np.mean([coords[0], coords[2]]),
lat_0 = np.mean([coords[1], coords[3]]),
llcrnrlon = coords[0] - extra * w,
llcrnrlat = coords[1] - extra * h,
urcrnrlon = coords[2] + extra * w,
urcrnrlat = coords[3] + extra * h,
resolution = 'i', suppress_ticks = True)
**Error**
ValueError: lat_0 must be between -90.000000 and 90.000000 degrees
Answer: You need to reproject from the native projection of Eastings and Northings to
another coordinate reference system if you want degrees of Latitude and
Longitude, which usually means
[WGS84](https://en.wikipedia.org/wiki/World_Geodetic_System#WGS84) or
[EPSG:4326](http://epsg.io/4326). Here's how to reproject it:
import fiona
import pyproj
from functools import partial
from shapely.geometry import box
from shapely.ops import transform
shp = fiona.open('ZIPCODES_NT_POLY.shp', 'r')
p_in = pyproj.Proj(shp.crs)
bound_box = box(*shp.bounds)
shp.close()
p_out = pyproj.Proj({'init': 'EPSG:4326'}) # aka WGS84
project = partial(pyproj.transform, p_in, p_out)
bound_box_wgs84 = transform(project, bound_box)
print('native box: ' + str(bound_box))
print('WGS84 box: ' + str(bound_box_wgs84))
> native box: POLYGON ((330800.310999997 777617.2998000011, 330800.310999997
> 959741.1853, 33869.92130000144 959741.1853, 33869.92130000144
> 777617.2998000011, 330800.310999997 777617.2998000011))
>
> WGS84 box: POLYGON ((-69.93980848410942 41.23787282321487, -69.899038698261
> 42.87724537285449, -73.53324195423785 42.8704709990465, -73.48147096070339
> 41.2312695091688, -69.93980848410942 41.23787282321487))
Otherwise, most of the parameters that are required by Basemap are in
`shp.crs` (take a look).
|
Installed beignet to use OpenCL on Intel, but OpenCL programs only work when run as root
Question: I have an Intel HD graphics 4000 3rd Gen Processor, and my OS is Linux Mint
17.1 64 bit. I installed `beignet` to be able to use `OpenCL` and thus run
programs on the GPU. I had been having lots of problems using the `pyOpenCL`
bindings, so I just decided to uninstall my current `beignet` version and
install the latest one (You can see the previous question I asked and answered
myself about it [here](http://stackoverflow.com/questions/31900363/pyopencl-
returns-errors-the-first-run-then-only-invalid-program-errors-examp)).
Upgrading `beignet` worked and I can now run `OpenCL` code on my GPU through
`python` and `C/C++` bindings. However, I can only run the programs as root,
otherwise they don't detect my GPU as a valid device.
The programs _work_, which is great, but now I'm trying to solve the
annoyance of having to run everything as root.
When I don't run them as root, I get the following error:
/dev/dri/card0 not authenticated
Device open failed, aborting...
/dev/dri/card0 not authenticated
Device open failed, aborting...
cl_get_gt_device(): error, unknown device: ffffffff
When I run them as root, they _work_ , but first they show the following
message:
modprobe: FATAL: Module nvidia not found.
I tried `sudo apt-get purge nvidia*` but for some reason that also uninstalled
`pyOpenCL` and of course my python programs stopped working. I also found an
answer here suggesting to check the permissions of the `/dev/nvidia*/` folder,
but it doesn't exist on my computer.
Thanks in advance.
**EDIT:** adding some requested outputs.
Output from `lspci | grep -i vga`:
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
Output from `glxinfo`:
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: SGI
server glx version string: 1.4
server glx extensions:
GLX_ARB_create_context, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_framebuffer_sRGB, GLX_ARB_multisample,
GLX_EXT_create_context_es2_profile, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer,
GLX_OML_swap_method, GLX_SGIS_multisample, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_SGI_swap_control
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions:
GLX_ARB_create_context, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_framebuffer_sRGB, GLX_ARB_get_proc_address, GLX_ARB_multisample,
GLX_EXT_create_context_es2_profile, GLX_EXT_fbconfig_packed_float,
GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context,
GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer,
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGIX_visual_select_group, GLX_SGI_make_current_read,
GLX_SGI_swap_control, GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
GLX_ARB_create_context, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_framebuffer_sRGB, GLX_ARB_get_proc_address, GLX_ARB_multisample,
GLX_EXT_create_context_es2_profile, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer,
GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control,
GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGIX_visual_select_group, GLX_SGI_make_current_read,
GLX_SGI_swap_control, GLX_SGI_video_sync
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 3.3 (Core Profile) Mesa 10.1.3
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
GL_3DFX_texture_compression_FXT1, GL_AMD_conservative_depth,
GL_AMD_draw_buffers_blend, GL_AMD_performance_monitor,
GL_AMD_seamless_cubemap_per_texture, GL_AMD_shader_trinary_minmax,
GL_AMD_vertex_shader_layer, GL_ANGLE_texture_compression_dxt3,
GL_ANGLE_texture_compression_dxt5, GL_APPLE_object_purgeable,
GL_ARB_ES2_compatibility, GL_ARB_ES3_compatibility, GL_ARB_base_instance,
GL_ARB_blend_func_extended, GL_ARB_clear_buffer_object,
GL_ARB_conservative_depth, GL_ARB_copy_buffer, GL_ARB_debug_output,
GL_ARB_depth_buffer_float, GL_ARB_depth_clamp, GL_ARB_draw_buffers,
GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex,
GL_ARB_draw_indirect, GL_ARB_draw_instanced,
GL_ARB_explicit_attrib_location, GL_ARB_fragment_coord_conventions,
GL_ARB_fragment_shader, GL_ARB_framebuffer_object,
GL_ARB_framebuffer_sRGB, GL_ARB_get_program_binary,
GL_ARB_half_float_pixel, GL_ARB_half_float_vertex,
GL_ARB_instanced_arrays, GL_ARB_internalformat_query,
GL_ARB_invalidate_subdata, GL_ARB_map_buffer_alignment,
GL_ARB_map_buffer_range, GL_ARB_multi_draw_indirect,
GL_ARB_occlusion_query2, GL_ARB_pixel_buffer_object, GL_ARB_point_sprite,
GL_ARB_provoking_vertex, GL_ARB_robustness, GL_ARB_sample_shading,
GL_ARB_sampler_objects, GL_ARB_seamless_cube_map,
GL_ARB_shader_atomic_counters, GL_ARB_shader_bit_encoding,
GL_ARB_shader_objects, GL_ARB_shader_texture_lod,
GL_ARB_shading_language_420pack, GL_ARB_shading_language_packing,
GL_ARB_sync, GL_ARB_texture_buffer_object,
GL_ARB_texture_buffer_object_rgb32, GL_ARB_texture_buffer_range,
GL_ARB_texture_compression_rgtc, GL_ARB_texture_cube_map_array,
GL_ARB_texture_float, GL_ARB_texture_gather,
GL_ARB_texture_mirror_clamp_to_edge, GL_ARB_texture_multisample,
GL_ARB_texture_non_power_of_two, GL_ARB_texture_query_levels,
GL_ARB_texture_query_lod, GL_ARB_texture_rectangle, GL_ARB_texture_rg,
GL_ARB_texture_rgb10_a2ui, GL_ARB_texture_storage,
GL_ARB_texture_storage_multisample, GL_ARB_texture_swizzle,
GL_ARB_timer_query, GL_ARB_transform_feedback2,
GL_ARB_transform_feedback3, GL_ARB_transform_feedback_instanced,
GL_ARB_uniform_buffer_object, GL_ARB_vertex_array_bgra,
GL_ARB_vertex_array_object, GL_ARB_vertex_attrib_binding,
GL_ARB_vertex_shader, GL_ARB_vertex_type_10f_11f_11f_rev,
GL_ARB_vertex_type_2_10_10_10_rev, GL_ARB_viewport_array,
GL_ATI_blend_equation_separate, GL_ATI_texture_float, GL_EXT_abgr,
GL_EXT_blend_equation_separate, GL_EXT_draw_buffers2,
GL_EXT_draw_instanced, GL_EXT_framebuffer_blit,
GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_multisample_blit_scaled,
GL_EXT_framebuffer_sRGB, GL_EXT_packed_depth_stencil, GL_EXT_packed_float,
GL_EXT_pixel_buffer_object, GL_EXT_provoking_vertex,
GL_EXT_shader_integer_mix, GL_EXT_texture_array,
GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_rgtc,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_integer,
GL_EXT_texture_sRGB, GL_EXT_texture_sRGB_decode,
GL_EXT_texture_shared_exponent, GL_EXT_texture_snorm,
GL_EXT_texture_swizzle, GL_EXT_timer_query, GL_EXT_transform_feedback,
GL_EXT_vertex_array_bgra, GL_IBM_multimode_draw_arrays, GL_KHR_debug,
GL_MESA_pack_invert, GL_MESA_texture_signed_rgba,
GL_NV_conditional_render, GL_NV_depth_clamp, GL_NV_packed_depth_stencil,
GL_OES_EGL_image, GL_OES_read_format, GL_S3_s3tc
OpenGL version string: 3.0 Mesa 10.1.3
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
GL_3DFX_texture_compression_FXT1, GL_AMD_conservative_depth,
GL_AMD_draw_buffers_blend, GL_AMD_performance_monitor,
GL_AMD_seamless_cubemap_per_texture, GL_AMD_shader_trinary_minmax,
GL_ANGLE_texture_compression_dxt3, GL_ANGLE_texture_compression_dxt5,
GL_APPLE_object_purgeable, GL_APPLE_packed_pixels,
GL_APPLE_vertex_array_object, GL_ARB_ES2_compatibility,
GL_ARB_ES3_compatibility, GL_ARB_blend_func_extended,
GL_ARB_clear_buffer_object, GL_ARB_color_buffer_float,
GL_ARB_conservative_depth, GL_ARB_copy_buffer, GL_ARB_debug_output,
GL_ARB_depth_buffer_float, GL_ARB_depth_clamp, GL_ARB_depth_texture,
GL_ARB_draw_buffers, GL_ARB_draw_buffers_blend,
GL_ARB_draw_elements_base_vertex, GL_ARB_draw_instanced,
GL_ARB_explicit_attrib_location, GL_ARB_fragment_coord_conventions,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow,
GL_ARB_fragment_shader, GL_ARB_framebuffer_object,
GL_ARB_framebuffer_sRGB, GL_ARB_get_program_binary,
GL_ARB_half_float_pixel, GL_ARB_half_float_vertex,
GL_ARB_instanced_arrays, GL_ARB_internalformat_query,
GL_ARB_invalidate_subdata, GL_ARB_map_buffer_alignment,
GL_ARB_map_buffer_range, GL_ARB_multisample, GL_ARB_multitexture,
GL_ARB_occlusion_query, GL_ARB_occlusion_query2,
GL_ARB_pixel_buffer_object, GL_ARB_point_parameters, GL_ARB_point_sprite,
GL_ARB_provoking_vertex, GL_ARB_robustness, GL_ARB_sample_shading,
GL_ARB_sampler_objects, GL_ARB_seamless_cube_map,
GL_ARB_shader_atomic_counters, GL_ARB_shader_bit_encoding,
GL_ARB_shader_objects, GL_ARB_shader_texture_lod,
GL_ARB_shading_language_100, GL_ARB_shading_language_420pack,
GL_ARB_shading_language_packing, GL_ARB_shadow, GL_ARB_sync,
GL_ARB_texture_border_clamp, GL_ARB_texture_compression,
GL_ARB_texture_compression_rgtc, GL_ARB_texture_cube_map,
GL_ARB_texture_cube_map_array, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_float, GL_ARB_texture_gather,
GL_ARB_texture_mirror_clamp_to_edge, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_multisample, GL_ARB_texture_non_power_of_two,
GL_ARB_texture_query_levels, GL_ARB_texture_query_lod,
GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui,
GL_ARB_texture_storage, GL_ARB_texture_storage_multisample,
GL_ARB_texture_swizzle, GL_ARB_timer_query, GL_ARB_transform_feedback2,
GL_ARB_transform_feedback3, GL_ARB_transform_feedback_instanced,
GL_ARB_transpose_matrix, GL_ARB_uniform_buffer_object,
GL_ARB_vertex_array_bgra, GL_ARB_vertex_array_object,
GL_ARB_vertex_attrib_binding, GL_ARB_vertex_buffer_object,
GL_ARB_vertex_program, GL_ARB_vertex_shader,
GL_ARB_vertex_type_10f_11f_11f_rev, GL_ARB_vertex_type_2_10_10_10_rev,
GL_ARB_window_pos, GL_ATI_blend_equation_separate, GL_ATI_draw_buffers,
GL_ATI_envmap_bumpmap, GL_ATI_separate_stencil,
GL_ATI_texture_env_combine3, GL_ATI_texture_float, GL_EXT_abgr,
GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate,
GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract,
GL_EXT_compiled_vertex_array, GL_EXT_copy_texture, GL_EXT_draw_buffers2,
GL_EXT_draw_instanced, GL_EXT_draw_range_elements, GL_EXT_fog_coord,
GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample,
GL_EXT_framebuffer_multisample_blit_scaled, GL_EXT_framebuffer_object,
GL_EXT_framebuffer_sRGB, GL_EXT_gpu_program_parameters,
GL_EXT_multi_draw_arrays, GL_EXT_packed_depth_stencil,
GL_EXT_packed_float, GL_EXT_packed_pixels, GL_EXT_pixel_buffer_object,
GL_EXT_point_parameters, GL_EXT_polygon_offset, GL_EXT_provoking_vertex,
GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_separate_shader_objects, GL_EXT_separate_specular_color,
GL_EXT_shader_integer_mix, GL_EXT_shadow_funcs, GL_EXT_stencil_two_side,
GL_EXT_stencil_wrap, GL_EXT_subtexture, GL_EXT_texture, GL_EXT_texture3D,
GL_EXT_texture_array, GL_EXT_texture_compression_dxt1,
GL_EXT_texture_compression_rgtc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_integer,
GL_EXT_texture_lod_bias, GL_EXT_texture_object, GL_EXT_texture_rectangle,
GL_EXT_texture_sRGB, GL_EXT_texture_sRGB_decode,
GL_EXT_texture_shared_exponent, GL_EXT_texture_snorm,
GL_EXT_texture_swizzle, GL_EXT_timer_query, GL_EXT_transform_feedback,
GL_EXT_vertex_array, GL_EXT_vertex_array_bgra,
GL_IBM_multimode_draw_arrays, GL_IBM_rasterpos_clip,
GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate, GL_KHR_debug,
GL_MESA_pack_invert, GL_MESA_texture_signed_rgba, GL_MESA_window_pos,
GL_NV_blend_square, GL_NV_conditional_render, GL_NV_depth_clamp,
GL_NV_light_max_exponent, GL_NV_packed_depth_stencil,
GL_NV_primitive_restart, GL_NV_texgen_reflection,
GL_NV_texture_env_combine4, GL_NV_texture_rectangle, GL_OES_EGL_image,
GL_OES_read_format, GL_S3_s3tc, GL_SGIS_generate_mipmap,
GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp,
GL_SGIS_texture_lod, GL_SUN_multi_draw_arrays
20 GLX Visuals
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x020 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x021 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x08b 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x08c 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x08d 24 tc 0 32 0 r . . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x08e 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 16 16 16 16 0 0 Slow
0x08f 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x090 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 8 1 None
0x091 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 4 1 None
0x092 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 8 1 None
0x093 24 dc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x094 24 dc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x095 24 dc 0 32 0 r . . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x096 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x097 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 16 16 16 16 0 0 Slow
0x098 24 dc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x099 24 dc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 8 1 None
0x09a 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 4 1 None
0x09b 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 8 1 None
0x05e 32 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
44 GLXFBConfigs:
visual x bf lv rg d st colorbuffer sr ax dp st accumbuffer ms cav
id dep cl sp sz l ci b ro r g b a F gb bf th cl r g b a ns b eat
----------------------------------------------------------------------------
0x05f 0 tc 0 16 0 r y . 5 6 5 0 . . 0 0 0 0 0 0 0 0 0 None
0x060 0 tc 0 16 0 r . . 5 6 5 0 . . 0 0 0 0 0 0 0 0 0 None
0x061 0 tc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 0 0 None
0x062 0 tc 0 16 0 r . . 5 6 5 0 . . 0 16 0 0 0 0 0 0 0 None
0x063 0 tc 0 16 0 r y . 5 6 5 0 . . 0 24 8 0 0 0 0 0 0 None
0x064 0 tc 0 16 0 r . . 5 6 5 0 . . 0 24 8 0 0 0 0 0 0 None
0x065 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x066 24 tc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x067 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x068 24 tc 0 32 0 r . . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x069 0 tc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 0 0 None
0x06a 0 tc 0 16 0 r y . 5 6 5 0 . . 0 16 0 16 16 16 0 0 0 Slow
0x06b 32 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x06c 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 16 16 16 16 0 0 Slow
0x06d 0 tc 0 16 0 r y . 5 6 5 0 . . 0 0 0 0 0 0 0 4 1 None
0x06e 0 tc 0 16 0 r y . 5 6 5 0 . . 0 0 0 0 0 0 0 8 1 None
0x06f 0 tc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 4 1 None
0x070 0 tc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 8 1 None
0x071 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x072 24 tc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 8 1 None
0x073 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 4 1 None
0x074 24 tc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 8 1 None
0x075 0 dc 0 16 0 r y . 5 6 5 0 . . 0 0 0 0 0 0 0 0 0 None
0x076 0 dc 0 16 0 r . . 5 6 5 0 . . 0 0 0 0 0 0 0 0 0 None
0x077 0 dc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 0 0 None
0x078 0 dc 0 16 0 r . . 5 6 5 0 . . 0 16 0 0 0 0 0 0 0 None
0x079 0 dc 0 16 0 r y . 5 6 5 0 . . 0 24 8 0 0 0 0 0 0 None
0x07a 0 dc 0 16 0 r . . 5 6 5 0 . . 0 24 8 0 0 0 0 0 0 None
0x07b 24 dc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x07c 24 dc 0 32 0 r . . 8 8 8 8 . . 0 0 0 0 0 0 0 0 0 None
0x07d 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x07e 24 dc 0 32 0 r . . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x07f 0 dc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 0 0 None
0x080 0 dc 0 16 0 r y . 5 6 5 0 . . 0 16 0 16 16 16 0 0 0 Slow
0x081 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 0 0 None
0x082 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 16 16 16 16 0 0 Slow
0x083 0 dc 0 16 0 r y . 5 6 5 0 . . 0 0 0 0 0 0 0 4 1 None
0x084 0 dc 0 16 0 r y . 5 6 5 0 . . 0 0 0 0 0 0 0 8 1 None
0x085 0 dc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 4 1 None
0x086 0 dc 0 16 0 r y . 5 6 5 0 . . 0 16 0 0 0 0 0 8 1 None
0x087 24 dc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 4 1 None
0x088 24 dc 0 32 0 r y . 8 8 8 8 . . 0 0 0 0 0 0 0 8 1 None
0x089 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 4 1 None
0x08a 24 dc 0 32 0 r y . 8 8 8 8 . . 0 24 8 0 0 0 0 8 1 None
Answer: I'm no expert on beignet, but I know a thing or two about direct rendering, so
this is what I could extract of reading some of the documentation.
beignet supports three modes:
1. connect directly to the device with root (_this is what you are doing_)
2. use dri2 to connect via the X server as a normal user
3. use the device directly over a direct render node as a normal user
**Before going any further** , you need to check if direct rendering is
enabled in your video card driver and in the environment you're running your
scripts on. This is done with the following command in the same terminal
you're running your scripts:
glxinfo | grep render
It should output something like `direct rendering: Yes`, which is great. glxinfo
gives you all sorts of other info that could be useful in finding the source of
your problem, so be sure to _update your question_ with the complete output of
the command (without the `| grep render` part).
**Method 2** is the easiest way of using direct rendering, but requires the
environment running the script to have a `DISPLAY` variable set.
If you're running it on a terminal emulator (like the Terminal app in Mint) it
should be already set. You can check this by entering the following command:
echo $DISPLAY
it should print `:0` or something like that. If it doesn't show anything, then
your script doesn't know which screen (or video card) to render to. This could
happen if you're working on a remote machine via SSH, or using the terminal
screen directly without a desktop environment. You can of course set this
variable with this command:
export DISPLAY=:0
You just have to be sure that's the correct display to render to (the one
running your X server); you can check this in the output of glxinfo.
**Method 3** requires that your video card supports KMS (which apparently it
does) and that render nodes are enabled. This is done when the system is booting
up, so you need to modify your boot loader parameters to enable it. If you're
booting using GRUB2, this is fairly simple (but, you know, back up any
important thing in case anything goes wrong):
**As root** , you must edit the file `/etc/default/grub` and look for the
following line (the … is because it can contain more parameters than just
those two):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash …"
Then, you must add the parameter that enables the rendering nodes at the end
of that variable:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash … drm.rnodes=1"
After that, you save the file and execute the following command as root:
sudo update-grub
Then reboot the computer, and your scripts should run as a normal user.
**UPDATE:** About the NVIDIA error that you're getting when running as root:
could it be that your laptop (because it's a laptop, right?) has hybrid
graphics cards? If so, then you should make sure OpenCL runs on the right card.
Please update your question with the output of the following command:
lspci | grep -i vga
That should tell us exactly what hardware are you dealing with.
|
Learning Django: unit test of a JsonResponse POST
Question: I've written a **view** to handle post and get request:
from django.http import JsonResponse, request
import json
def Dati(request):
if request.method == 'GET':
dati = externalfunction()
return JsonResponse(dati)
elif request.method == 'POST':
data = json.loads(request.POST['json'])
for item in data:
print " POST DA clIENT:", item, data[item]
# FEEDBACK
resp = {"ok":"ricevuto"}
return JsonResponse(resp)
Here is the **test.py**:
from rest_framework.test import APIClient
from myapp.views import Dati
class ComPageTest(TestCase):
def test_url_resolves_to_vista_Dati(self):
found = resolve('/mixd/')
self.assertEqual(found.func, Dati)
def test_Dati_GET(self):
client = APIClient()
response = client.get('/mixd/')
self.assertIn(b'power', response.content)
def test_Dati_POST(self):
client = APIClient()
response = client.post('/mixd/', {'power': 'on'}, format='json')
self.assertIn(b'ok', response.content)
The GET test runs fine; the POST returns this error:
2015-08-09 10:32:49,131 - ERROR - Internal Server Error: /mixd/
Traceback (most recent call last):
File "/home/x/.virtualenvs/venvv/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
/datastructures.py", line 322, in __getitem__
raise MultiValueDictKeyError(repr(key))
MultiValueDictKeyError: "'json'"
*** MultiValueDictKeyError: "'json'"
If I access the page manually, everything seems fine and POST seems to work. Is
there some special syntax to follow?
Answer: Your `post data` is sent as
{'power': 'on'}
so `request.POST['json']` raises `MultiValueDictKeyError: "'json'"`. Try
client.post('/mixd/', json.dumps({'power': 'on'}),
content_type="application/json")
And then change
data = json.loads(request.POST['json'])
to
    data = json.loads(request.body)  # request.raw_post_data on Django < 1.4
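Putting both changes together, a minimal sketch of the adjusted test (same
endpoint and names as above; remember to `import json` in the test module):

    def test_Dati_POST(self):
        client = APIClient()
        # send a real JSON body instead of form-encoded data
        response = client.post('/mixd/', json.dumps({'power': 'on'}),
                               content_type="application/json")
        self.assertIn(b'ok', response.content)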
|
IndentationError in python script for organising Music
Question: I made this script in the past and want to use it now, but an error
occurs when trying to run it. The script is about organising my music. I have
a directory organised by label, and I want to grab the artist name from the
directory names inside the label and year directories, then create new
directories inside the artist and year directories.
The names of directories inside label are like this
LabelName_[Artist-AlbumName]_2015-08-09
And I want to create symlinks inside the artist directories and year
directories (by date) like this:
2015-08-09_[Artist-AlbumName]_LabelName
import os
basedir = "/home/zab/Music/#01.Label"
artist_parent_dir = "/home/zab/Music/#03.Artist"
date_parent_dir = "/home/zab/Music/#04.ReleaseDate"
for fn in os.listdir(basedir):
label_path = os.path.join( basedir, fn)
for album in os.listdir(label_path):
i = 1
words = album.split("_")
for word in words:
if i == 1:
label = word
elif i == 2:
name = word
else:
date = word
i = i + 1
artist_album = name.split("-")
j = 1
for part in artist_album:
if j == 1:
artist = part.replace("[","")
j = j + 1
date_parts = date.split("-")
z = 1
for part_two in date_parts:
if z == 1:
year = part_two
z = z + 1
if not os.path.isdir(os.path.join(artist_parent_dir,artist)):
os.mkdir(os.path.join(artist_parent_dir,artist))
if not os.path.isdir(os.path.join(date_parent_dir,year)):
os.mkdir(os.path.join(date_parent_dir,year))
src = os.path.join(label_path,album)
artist_dst = os.path.join(artist_parent_dir, artist, name + "_" + label + "_" + date)
year_dst = os.path.join(date_parent_dir,year, date + "_" + name + "_" + label)
if not os.path.exists(artist_dst):
os.symlink(src, artist_dst)
if not os.path.exists(year_dst):
os.symlink(src, year_dst)
File "/home/zab/Music/_Scripts/OrganizeByArtist.py", line 22
artist = part.replace("[","")
^
IndentationError: expected an indented block
What is going wrong? Is part.replace outdated or something? Any suggestion to
improve this script would be appreciated.
Answer: You are mixing tabs and spaces; taking the source from your post here shows:
>>> '''\
... for part in artist_album:
... if j == 1:
... artist = part.replace("[","")
... '''.splitlines()
[' for part in artist_album:', '\t if j == 1:', ' artist = part.replace("[","")']
>>> from pprint import pprint
>>> pprint(_)
[' for part in artist_album:',
'\t if j == 1:',
' artist = part.replace("[","")']
Note the `\t` at the start of the `if` line. Python expands tabs to _8 spaces_
, but you probably have your editor set to use 4 spaces instead. So Python
sees this:
for part in artist_album:
if j == 1:
    artist = part.replace("[","")
where your editor shows you this:
for part in artist_album:
if j == 1:
            artist = part.replace("[","")
Configure your editor to use _spaces only_ for indentation. A good editor will
convert `TAB` keys into spaces for you if so configured.
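On Python 2 you can also make the interpreter itself flag the problem: running
the script with the `-tt` option turns inconsistent tab/space indentation into
an error instead of silently expanding tabs:

    python -tt OrganizeByArtist.py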
Quoting from the [Python Style
Guide](http://www.python.org/dev/peps/pep-0008/#tabs-or-spaces) (PEP 8):
> Spaces are the preferred indentation method.
>
> Tabs should be used solely to remain consistent with code that is already
> indented with tabs.
|
python inheritance call new url after overriding
Question: In the following code, even after overriding the init function it
still calls the old url. I want the contents of the new url instead; how can I do this?
import urllib
import json
class process():
def processdata(self):
for di in results:
super(processcontent,self).__init__(new_url)
new_obj = getcontent.getdata()
print new_obj
break
if __name__ == '__main__':
url = "someurl"
p = processcontent(url)
#print p.getdata()
p.processdata()
Answer: First, you should not call `__init__` after initialization. Second, you
don't use the attribute `self.url` but the global variable `url`. Since `url`
isn't a real attribute, you should call `getdata` with a `url` parameter:
import urllib
import json
class processcontent(Basecontent,getcontent):
def processdata(self):
obj = self.getdata(self.url)
results = obj["response"]
for di in results:
new_url = di["clusterUrl"]
new_obj = self.getdata(new_url)
print new_obj
break
def main():
url = "someurl"
p = processcontent(url)
#print p.getdata()
p.processdata()
if __name__ == '__main__':
main()
|
Python Only last line is written to file
Question: When I use print, all my text is shown, but when writing to a file
only the last step is written.
import json, urllib
from urllib import urlencode
import googlemaps
start = "Bridgewater, Sa, Australia"
finish = "Stirling, SA, Australia"
url = 'http://maps.googleapis.com/maps/api/directions/json?%s' % urlencode((
('origin', start),
('destination', finish)
))
ur = urllib.urlopen(url)
result = json.load(ur)
for i in range (0, len (result['routes'][0]['legs'][0]['steps'])):
j = result['routes'][0]['legs'][0]['steps'][i]['html_instructions']
print j
output = open("output.html", "w")
output.write(j)
output.close()
I have also tried
output.write("%s\n" % j)
I get an error and
output.write(result['routes'][0]['legs'][0]['steps'][i]['html_instructions'])
Shows only last step
What am I missing?
Answer: That is because you are writing to the file outside the for loop, so `j`
holds only the value from the last iteration, and hence only the last line is
written to the file.
You should instead open the file before the loop and write to it inside the
loop. It would also be good to use a `with` statement to open the file, so that
closing is handled automatically. Example -
with open("output.html", "w") as output:
for i in range (0, len (result['routes'][0]['legs'][0]['steps'])):
j = result['routes'][0]['legs'][0]['steps'][i]['html_instructions']
print j
output.write(j + '\n')
|
Python not showing the [matplotlib] window with openCV results
Question: I'm working on OpenCV 2.4.9. Below is my code -
import cv2
import numpy as np
from matplotlib import pyplot as plt
BLUE = [255,0,0]
img1 = cv2.imread( 'sachin.png' )
replicate = cv2.copyMakeBorder( img1, 10, 10, 10, 10, cv2.BORDER_REPLICATE )
reflect = cv2.copyMakeBorder( img1, 10, 10, 10, 10, cv2.BORDER_REFLECT )
reflect101 = cv2.copyMakeBorder( img1, 10, 10, 10, 10, cv2.BORDER_REFLECT_101 )
wrap = cv2.copyMakeBorder( img1, 10, 10, 10, 10, cv2.BORDER_WRAP )
constant = cv2.copyMakeBorder( img1, 10, 10, 10, 10, cv2.BORDER_CONSTANT, value = BLUE )
plt.subplot(231), plt.imshow( img1, 'gray' ), plt.title( 'ORIGINAL' )
plt.subplot(232), plt.imshow( replicate, 'gray' ), plt.title( 'REPLICATE')
plt.subplot(233), plt.imshow( reflect, 'gray' ), plt.title( 'REFLECT' )
plt.subplot(234), plt.imshow( reflect101, 'gray' ), plt.title( 'REFLECT_101' )
plt.subplot(235), plt.imshow( wrap, 'gray' ), plt.title( 'WRAP' )
plt.subplot(236), plt.imshow( constant, 'gray' ), plt.title( 'CONSTANT' )
plt.show()
When I run the above code I get the following error -
Traceback (most recent call last):
File "border.py", line 3, in <module>
from matplotlib import pyplot as plt
ImportError: No module named matplotlib
After googling, I found that the Matplotlib module was missing, so I installed
it with the following commands: **sudo python get-pip.py** to install **pip**,
then **sudo pip install matplotlib** to install **matplotlib**. After that, the
code ran successfully with no errors reported, but no windows are displayed.
Can anyone tell me where I'm going wrong?
Answer: ## Step 0: test if `openCV` part is fully working
This should give you reasonable confidence that `cv2` works independently of
`matplotlib`.
Try a simple scenario like this. It opens a new **`cv2`** -window **silently**,
which need not get O/S focus, so find it in your TaskBar and pick it manually.
This simple **`GUI_openCV()`** test will prove that your `cv2` installation is
working :
import numpy
import cv2
def nothing_asCallback( x ):
pass
def GUI_openCV():
# Create a black image, a window
img = numpy.zeros( ( 300, 512, 3 ), numpy.uint8 )
cv2.namedWindow( 'cv2-image' )
# ----------------------------------------------------------------GUI-<state>
s = 1
r = 0
g = 0
b = 0
# -------------------------------------------------------------GUI-<ACTOR>-s
# create trackbars for color change
cv2.createTrackbar( 'R', 'cv2-image', 0, 255, nothing_asCallback )
cv2.createTrackbar( 'G', 'cv2-image', 0, 255, nothing_asCallback )
cv2.createTrackbar( 'B', 'cv2-image', 0, 255, nothing_asCallback )
# create switch for ON/OFF functionality
switch = '0 : OFF \n1 : ON'
cv2.createTrackbar( switch, 'cv2-image', 0, 1, nothing_asCallback )
# --------------------------------------------------------------------
print " ------------------------------------------------- press [ESC] to exit "
while( 1 ): # GUI-mainloop()
cv2.imshow( 'cv2-image', img )
k = cv2.waitKey( 1 ) & 0xFF
if k == 27:
break
# get current positions of four trackbars ----------- # GUI-<vars>-DETECT >>>>>>>>>>>>>>>> can be done "internally" ...cv2.createTrackbar( 'R', 'windowName', aGuiStateVariable_R, 255, nothing_asCallback )
new_r = cv2.getTrackbarPos( 'R', 'cv2-image' )
new_g = cv2.getTrackbarPos( 'G', 'cv2-image' )
new_b = cv2.getTrackbarPos( 'B', 'cv2-image' )
new_s = cv2.getTrackbarPos( switch, 'cv2-image' )
if ( new_s != s or new_r != r or new_g != g or new_b != b ):
#------------------------------------------------ # DUMB-<state>-UPDATE
s = new_s
r = new_r
g = new_g
b = new_b
#------- ---------------------------------------- # DUMB-ACTOR
if s == 0:
img[:] = 0
else:
img[:] = [b,g,r]
pass
else:
pass
pass
# -----------------------------------------------------------------
cv2.destroyAllWindows()
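To actually launch the demo, add a call at the end of the script (the snippet
above only defines the function, it never invokes it):

    if __name__ == '__main__':
        GUI_openCV()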
## Step 1: test if `matplotlib` ad-hoc installation works
Next comes your ad-hoc **`matplotlib`** installation.
Use any trivially simple script to validate the **`matplotlib`** part:
"""
Simple demo of the fill function.
"""
import numpy
import matplotlib.pyplot as plt
x = numpy.linspace( 0, 1 )
y = numpy.sin( 4 * numpy.pi * x ) * numpy.exp( -5 * x )
plt.fill( x, y, 'b' )
plt.grid( True )
plt.show()
Or run any matplotlib demo from the documentation to see if your module was
installed correctly.
## Step 2: test `openCV` / `matplotlib` post-processing integration
For successful integration, keep in mind that OpenCV stores images in BGR
channel order while matplotlib expects RGB, and follow the documentation about
the different RGB(A)-slicing indices in each part of your code. This and
colourmap-datatype mismatches account for most of the troubles in prototyping.
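For example, a single `cv2.cvtColor()` call (standard OpenCV API) re-orders the
channels so an image loaded by OpenCV displays with correct colours in
matplotlib:

    import cv2
    from matplotlib import pyplot as plt

    img_bgr = cv2.imread( 'sachin.png' )                   # OpenCV loads images as BGR
    img_rgb = cv2.cvtColor( img_bgr, cv2.COLOR_BGR2RGB )   # matplotlib expects RGB
    plt.imshow( img_rgb )
    plt.show()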
Do not hesitate to use as many **`cv2.imshow( CV2WINDOW, interimPhaseDATA )`**
calls as is comfortable during your ComputerVision projects.
Do not hesitate to **split your code into the smallest possible
`{syntax|processing}-bug-tracking` sprints**, which you cross-validate step by
step in both **`openCV`** and **`matplotlib`** visualisations (
`cv2.imshow()` and `plt.show()` ).
This is a common bug-hunting practice called _"isolation"_
|
Evaluating float('inf') times a symbolic variable with Python
Question: I need to integrate `exp(-c1*r)` with respect to `r`, going from 0 to
infinity, where `c1` and `r` are symbols.
The problem seems to occur when evaluating `inf*c1`, which does not equal `inf`:
import sympy
from sympy import *
# Defining the variable
c1 = Symbol('c1', positive=True)
r = Symbol('r')
print float('inf')*2
print float('inf')*c1
print integrate(exp(-c1*r),(r,0,10000))
print integrate(exp(-c1*r),(r,0,float('inf')))
How can I make `-inf*c1` evaluate to `-inf`?
Answer: You probably want to use `sympy.oo`, which is Sympy's symbol for infinity.
Try:
from IPython.display import display
import sympy as sy
sy.init_printing() # LaTeX like pretty printing for IPython
r = sy.Symbol('r', real=True)
c1 = sy.Symbol('c_1', positive=True)
f = sy.exp(-c1*r)
F = sy.integrate(f, (r, 0, sy.oo))
display(F)
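With `c1` declared positive, `F` comes out as `1/c1`, the expected closed form
of this exponential integral.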
|
Initialise or append dictionary list in python
Question: Can the following code snippet be simplified into one statement somehow?
if aKey not in aDict:
aDict[aKey] = [someValue]
else:
aDict[aKey].append(someValue)
I could write a function accepting the `aDict`, `aKey` and `someValue`, but is
there a way to do this using only native python stuff?
Answer: [`collections.defaultdict`](https://docs.python.org/2/library/collections.html#collections.defaultdict)
is made just with this purpose in mind:
In [1]: import collections
In [2]: d = collections.defaultdict(list)
In [3]: d['key'].append(1)
In [4]: d
Out[4]: defaultdict(<type 'list'>, {'key': [1]})
In [5]: d['key'].append(2)
In [6]: d
Out[6]: defaultdict(<type 'list'>, {'key': [1, 2]})
Here, when you access a key that does not exist, it is automatically
initialised with an empty list.
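If you'd rather stay with a plain `dict`, the one-statement alternative built
into every dictionary is
[`dict.setdefault`](https://docs.python.org/2/library/stdtypes.html#dict.setdefault):

    aDict.setdefault(aKey, []).append(someValue)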
|
Python random generator not so random?
Question: Is the python random generator considered "good"? As in, does it simulate
randomness very well? I made a small program which simulates a person starting
at (0, 0) and taking a random step either east, west, north and south. As you
run simulations with larger and larger number of steps, you would expect the
final position of the person to be closer and closer to the origin. However,
as I increased the number of steps in my simulation, the final position was
getting farther and farther away from the origin. The program I used is here:
import random
def walk():
north = 0
east = 0
for i in range(10000):
direction = random.randint(0, 100)
if direction <= 25:
north = north + 1
elif direction > 25 and direction <= 50:
north = north - 1
elif direction > 50 and direction <= 75:
east = east + 1
else:
east = east - 1
return north, east
def main():
for i in range(500):
north, east = walk()
print("NE : ", north, east)
main()
Answer: > As you run simulations with larger and larger number of steps, you would
> expect the final position of the person to be closer and closer to the
> origin.
This simply is not true. Imagine you start at (0,0). You take one step east,
taking you to (1,0). Now, in order to get closer to the origin, you need to
take a step west. Taking a step in any other direction will take you further
from the origin. So, you have a 1/4 chance of going closer to the origin and a
3/4 chance of going further away from the origin. This is true no matter where
you are; more often than not, a step will take you further from the origin.
Thus, your simulation is behaving as expected; the longer you go, the further
from the origin you get.
* * *
To see why this is true, consider a one-dimensional walk, where you can step
left or right. If you take ten steps, the expected value is 50:50 left and
right; 5 left and 5 right, taking you back to the origin. Of course, you might
end up with 6:4 or 7:3, leaving you two or four steps away from the origin.
It is a small sample size, after all. If you take 1000 steps, you might end up
with something like 520 left to 480 right, pretty close to that 50:50 ratio.
If you take 10000 steps, you'll be even closer to the 50:50 ratio, perhaps
with something like 5050:4950.
However, there is something very important to note; as your sample size gets
larger, the _proportion_ of left to right steps is closer to 50:50, but the
absolute _difference_ in number between left and right steps gets _larger_. In
that last case, you have a 50.5:49.5 ratio, but you're _a hundred_ steps away
from the origin, compared to being just two steps away in the case where you
have a 6:4 ratio over ten steps.
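A quick simulation (a sketch, not part of the original question) shows the
average distance from the origin growing roughly like the square root of the
number of steps:

    import random, math

    def distance_after(steps):
        # take `steps` random unit steps in 2D and return the final distance
        x = y = 0
        for _ in range(steps):
            dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
            x += dx
            y += dy
        return math.hypot(x, y)

    for n in (100, 1000, 10000):
        avg = sum(distance_after(n) for _ in range(200)) / 200.0
        print n, avg  # grows roughly like sqrt(n)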
|
Python quits in PhotoImage
Question: The following code makes Python "quit unexpectedly" when trying to create the
PhotoImage instance (it prints 1 and quits). I'm on OS X 10.9.5, using Python
2.7.10, ActiveTcl 8.6.4 from ActiveState, running the script from IDLE using
Run / Run Module. Any clue? I'm totally new to Python and all the GUI modules.
import numpy as np
import collections
import math
import Tkinter
from PIL import Image, ImageTk
# A root window for displaying objects
root = Tkinter.Tk()
# Convert the Image object into a TkPhoto object
im = Image.open('samples.png')
print 1
imgtk = ImageTk.PhotoImage(image=im)
print 2
# Put it in the display window
Tkinter.Label(root, image=imgtk).pack()
root.mainloop()
Answer: Did you try something like this:
import Tkinter
from PIL import Image, ImageTk
root = Tkinter.Tk()
imgtk = ImageTk.PhotoImage(file='test.jpg')
Tkinter.Label(root, image=imgtk).pack()
root.mainloop()
|
Python3 - installing Scapy on OS X
Question: I installed the networking module **Scapy**. When I import scapy (`import
scapy`) everything works fine. When I import all from scapy (`from scapy.all
import *`), it brings up this error:
Traceback (most recent call last):
File "/Users/***/Downloads/test.py", line 5, in <module>
from scapy.all import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/all.py", line 16, in <module>
from .arch import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/__init__.py", line 75, in <module>
from .bsd import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/bsd.py", line 12, in <module>
from .unix import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/unix.py", line 22, in <module>
from .pcapdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/pcapdnet.py", line 22, in <module>
from .cdnet import *
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scapy/arch/cdnet.py", line 17, in <module>
raise OSError("Cannot find libdnet.so")
OSError: Cannot find libdnet.so
I found out from another post that we might have to download additional modules
in order to make scapy fully work. What should be done exactly? I tried using
`port install`, which didn't work because port is not supported anymore. If
you have any idea how to make it work in Python 3, I would appreciate it. Here
is some additional information:
python 3.4.3
mac os 10.10.4
scapy-python3==0.14
EDIT: Another interesting thing is :
On all OS except Linux libpcap should be installed for sending and receiving
packets (not python modules - just C libraries). libdnet is recommended for
sending packets, without libdnet packets will be sent by libpcap, which is
limited. Also, netifaces module can be used for alternative and possibly
cleaner way to determine local addresses. Source:
<https://pypi.python.org/pypi/scapy-python3/0.11>
Dnet seems to only work with version 2.7 :
<https://pypi.python.org/pypi/dnet/1.12>
Answer: You have to install libdnet. Not the Python library (which does not
work on Python 3, as you mentioned), but the library itself. There has to be a
library file libdnet.so somewhere on your system where Python searches for
libraries. Downloading the libdnet source and compiling it should make it work:
    wget http://libdnet.googlecode.com/files/libdnet-1.12.tgz
    tar xfz libdnet-1.12.tgz
    cd libdnet-1.12
    ./configure
    make
    sudo make install
Also, it is possible to use libpcap for sending packets instead of libdnet, but
I recommend trying to make libdnet work first.
|
How to remove duplicate XML nodes in Python
Question: I have a special case: an XML file whose structure is something like:
<Root>
<parent1>
<parent2>
    <element id="Something" />
</parent2>
</parent1>
<parent1>
    <element id="Something" />
</parent1>
</Root>
My use case is to remove the duplicated element , I want to remove the
elements with same Id . I tried the following code with no positive outcome
(its not finding the duplicate node)
import xml.etree.ElementTree as ET
path = 'old.xml'
tree = ET.parse(path)
root = tree.getroot()
prev = None
def elements_equal(e1, e2):
if type(e1) != type(e2):
return False
        if e1.tag != e2.tag: return False
if e1.text != e2.text: return False
if e1.tail != e2.tail: return False
if e1.attrib != e2.attrib: return False
if len(e1) != len(e2): return False
return all([elements_equal(c1, c2) for c1, c2 in zip(e1, e2)])
for page in root: # iterate over pages
elems_to_remove = []
for elem in page:
for insideelem in page:
if elements_equal(elem, insideelem) and elem != insideelem:
print("found duplicate: %s" % insideelem.text) # equal function works well
elems_to_remove.append(insideelem)
continue
for elem_to_remove in elems_to_remove:
page.remove(elem_to_remove)
# [...]
tree.write("out.xml")
Can someone help me figure out how I can solve this? I am very new to Python,
with almost zero experience.
Answer: First of all, what you're doing is a hard problem with the library
you're using; see this question: [How to remove a node inside an iterator in python
xml.etree.ElemenTree](https://stackoverflow.com/questions/19419754/how-to-
remove-a-node-inside-an-iterator-in-python-xml-etree-elementree)
The solution to this would be to use `lxml` which "implements the same API but
with additional enhancements". Then you can do the following fix.
You seem to be only traversing the second level of nodes in your XML tree.
You're getting `root`, then walking the children its children. This would get
you `parent2` from the first page and the `element` from your second page.
Furthermore you wouldn't be comparing across pages here:
**your comparison will only find second-level duplicates within the same
page.**
Select the right set of elements using a proper traversal function such as
`iter`:
    from lxml import etree

    # Parse with lxml so that elements know their parent (needed for removal).
    tree = etree.parse('old.xml')
    root = tree.getroot()

    # Use a `set` to keep track of "visited" elements with good lookup time.
    visited = set()
    # The iter method does a recursive traversal
    for el in root.iter('element'):
        # Since the id is what defines a duplicate for you
        if 'id' in el.attrib:
            current = el.get('id')
            # In visited already means it's a duplicate, remove it
            if current in visited:
                el.getparent().remove(el)
            # Otherwise mark this ID as "visited"
            else:
                visited.add(current)

    tree.write('out.xml')
|
How to format JSON query results
Question:
#!/usr/bin/env python
import urllib2
import json
api_key = 'VtxgIC2UnhfUmXe_pBksov7-lguAQMZD'
url = 'http://www.energyhive.com/mobile_proxy/getCurrentValuesSummary?token='+api_key
response = urllib2.urlopen(url)
content = response.read()
for x in json.loads(content):
if x["cid"] == "PWER":
print (x["data"])
for y in json.loads(content):
if y["cid"] == "PWER_GAC":
print(y["data"])
for z in json.loads(content):
if z["cid"] == "PWER_IMM":
print(z["data"])
for a in json.loads(content):
if a["cid"] == "FBAK_IMM":
print(a["data"])
When I run the code I get results in this format
[{u'1439193214000': 2880}]
[{u'1439193214000': 2979}]
[{u'1439193212000': 3276}]
[{u'1439193212000': 135}]
How do I strip everything else and print only the numbers after the colon?
Answer: `json.loads` turns it into a Python structure, in this case a list of
dictionaries, so just call the `values` function:
import urllib2
import json
api_key = 'VtxgIC2UnhfUmXe_pBksov7-lguAQMZD'
url = 'http://www.energyhive.com/mobile_proxy/getCurrentValuesSummary?token='+api_key
response = urllib2.urlopen(url)
content = response.read()
for x in json.loads(content):
if x["cid"] == "PWER":
print (x["data"][0].values()[0])
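The same pattern handles all four channels from the question in a single pass
(a sketch reusing the names above):

    wanted = ('PWER', 'PWER_GAC', 'PWER_IMM', 'FBAK_IMM')
    for x in json.loads(content):
        if x["cid"] in wanted:
            print x["data"][0].values()[0]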
|
IPython Notebook two functions that depend on value widget
Question: I have an IPython notebook with **two widgets** (`carW` and `speedW`) and
**two functions** (`print_car` and `print_car_and_speed`) that depend on the
values of the widget. What I'm trying to achieve is that the output of
`print_car` changes when the value of `carW` changes and that the output of
`print_car_and_speed` changes whenever either the value of `carW` or `speedW`
change.
Here is the code that I'm using:
from IPython.html import widgets
from IPython.display import display
def print_car(car):
print "Selected car: {}".format(car)
def print_car_and_speed(car, speed):
print "Driving {} with speed: {}".format(car, speed)
carW = widgets.Dropdown(options=['Prius', 'Porsche'])
carW.value = 'Prius'
i = widgets.interactive(print_car, car=carW)
display(i)
speedW = widgets.FloatSlider()
j = widgets.interactive(print_car_and_speed, car=carW, speed=speedW)
display(j)
The problem with this code is that the **output of** `print_car` **is not
displayed** for me. If I comment the last two lines however, the output of
`print_car` is displayed as I would expect it to be.
Ideally I would like the output to follow the following format:
* `carW` widget
* output of `print_car`
* `speedW` widget (don't repeat `carW` widget)
* output of `print_car_and_speed`
It would be great if you could give me pointers on how I can achieve this.
Thank you!
Answer: I figured out how to display the output of two functions that depend on
the value of one widget, without them getting in each other's way, by using an
intermediate handler that broadcasts the value of the widget to the two output
functions:
from IPython.html import widgets
from IPython.display import display
def handler_car(car):
print_car(car)
print_car_and_speed(car, speedW.value)
def handler_speed(speed):
print_car(carW.value)
print_car_and_speed(carW.value, speed)
def print_car(car):
print "Selected car: {}".format(car)
def print_car_and_speed(car, speed):
print "Driving {} with speed: {}".format(car, speed)
carW = widgets.Dropdown(options=['Prius', 'Porsche'])
carW.value = 'Prius'
speedW = widgets.FloatSlider()
i = widgets.interactive(handler_car, car=carW)
display(i)
j = widgets.interactive(handler_speed, speed=speedW)
display(j)
The only bit that's missing for me now is to arrange this in the right order:
* `carW` widget
* output of `print_car`
* `speedW` widget (don't repeat `carW` widget)
* output of `print_car_and_speed`
|
Boto3 to download all files from a S3 Bucket
Question: I'm using boto3 to get files from an s3 bucket. I need functionality
similar to `aws s3 sync`.
My current code is
#!/usr/bin/python
import boto3
s3=boto3.client('s3')
list=s3.list_objects(Bucket='my_bucket_name')['Contents']
for key in list:
s3.download_file('my_bucket_name', key['Key'], key['Key'])
This is working fine, as long as the bucket has only files. If a folder is
present inside the bucket, it throws an error
Traceback (most recent call last):
File "./test", line 6, in <module>
s3.download_file('my_bucket_name', key['Key'], key['Key'])
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/inject.py", line 58, in download_file
extra_args=ExtraArgs, callback=Callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 651, in download_file
extra_args, callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 666, in _download_file
self._get_object(bucket, key, filename, extra_args, callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 690, in _get_object
extra_args, callback)
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 707, in _do_get_object
with self._osutil.open(filename, 'wb') as f:
File "/usr/local/lib/python2.7/dist-packages/boto3/s3/transfer.py", line 323, in open
return open(filename, mode)
IOError: [Errno 2] No such file or directory: 'my_folder/.8Df54234'
Is this the proper way to download a complete s3 bucket using boto3? And how do
I download the folders?
Answer: I had the same needs, so I created the following function, which
downloads the files recursively. Directories are created locally only if they
contain files.
import boto3
import os
def download_dir(client, resource, dist, local='/tmp', bucket='your_bucket'):
paginator = client.get_paginator('list_objects')
for result in paginator.paginate(Bucket=bucket, Delimiter='/', Prefix=dist):
if result.get('CommonPrefixes') is not None:
for subdir in result.get('CommonPrefixes'):
download_dir(client, resource, subdir.get('Prefix'), local)
if result.get('Contents') is not None:
for file in result.get('Contents'):
if not os.path.exists(os.path.dirname(local + os.sep + file.get('Key'))):
os.makedirs(os.path.dirname(local + os.sep + file.get('Key')))
resource.meta.client.download_file(bucket, file.get('Key'), local + os.sep + file.get('Key'))
The function is called like this:
def _start():
client = boto3.client('s3')
resource = boto3.resource('s3')
download_dir(client, resource, 'clientconf/', '/tmp')
|
How to solve decoding errors while using the Stanford parser for Chinese text with Python
Question: I want to use the Stanford Parser to parse Chinese text with the
Python interface. My code is below:
#!~/anaconda/bin/python
# -*- coding: utf-8 -*-
from nltk.parse import stanford
parser = stanford.StanfordParser(path_to_jar='/home/stanford-parser/stanford-parser.jar', path_to_models_jar='/home/stanford-parser/stanford-parser-3.3.0-models.jar',model_path="/home/stanford-parser/chinesePCFG.ser.gz",encoding='utf8')
sentences = parser.raw_parse_sents(("我 是 中国 人。", "他 来自 美国。"))
print sentences
However, when I try to run this code, a decoding error occurs:
Traceback (most recent call last):
File "/home/test.py", line 8, in <module>
sentences = parser.raw_parse_sents(("我 是 中国人。", "他 来自 美国。"))
File "/home/anaconda/lib/python2.7/site-packages/nltk/parse/stanford.py", line 176, in raw_parse_sents
return self._parse_trees_output(self._execute(cmd, '\n'.join(sentences), verbose))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 0: ordinal not in range(128)
I have no idea why this error occurs, since my source file is encoded in
**utf-8**, as declared on the second line. Could anyone please explain it and
help me solve it? I really need the help of the Stack Overflow community.
Answer: The parser expects a unicode object (you actually told it on creation
that you would be using data encoded in UTF-8). However, what you send to it as
parameters are just plain strings, which are basically just sequences of bytes
(in Python 2.x). You can create [unicode
(in Python 2.x). You can create [unicode
literals](https://docs.python.org/2/howto/unicode.html#unicode-literals-in-
python-source-code) by prepending a string with `u`, e.g. `u"我 是 中国 人"`
>>> word = u"我 是 中国 人"
>>> type(word)
<type 'unicode'>
>>> print word
我 是 中国 人
And to convert an existing plain string into a unicode object:
>>> word = "我 是 中国 人"
>>> type(word)
<type 'str'>
>>> unicode_word = unicode(word, encoding='utf8')
>>> type(unicode_word)
<type 'unicode'>
If these kinds of things cause you trouble, I strongly recommend reading the
[Unicode HOWTO](https://docs.python.org/2/howto/unicode.html) section of the
Python documentation; it will probably make everything much clearer.
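Applied to the original question, the minimal fix is therefore to pass unicode
literals to the parser:

    sentences = parser.raw_parse_sents((u"我 是 中国 人。", u"他 来自 美国。"))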
### Bonus
To convert a plain string representing a _Unicode escape_ sequence to a
Unicode string, use the [`'unicode_escape'`
encoding](https://docs.python.org/2/library/codecs.html#python-specific-
encodings).
>>> type('\u6211')
<type 'str'>
>>> len('\u6211')
6
>>> converted = '\u6211'.decode('unicode_escape')
>>> type(converted)
<type 'unicode'>
>>> len(converted)
1
>>> print converted
我
|
Insert tweet search result in mongodb with python
Question: I'm trying to insert tweet search results into MongoDB using the following code:
import json
import tweepy
from pymongo import MongoClient
ckey = ''
consumer_secret = ''
access_token_key = ''
access_token_secret = ''
auth = tweepy.OAuthHandler(ckey, consumer_secret)
auth.set_access_token(access_token_key, access_token_secret)
api = tweepy.API(auth)
for data in tweepy.Cursor(api.search,q='test',since='2015-08-01',until='2015-08-10').items():
client = MongoClient('localhost', 27017)
db = client['twitter_db']
collection = db['twitter_collection']
tweet_json = json.loads(data)
collection.insert(tweet_json)
But I got an error message while parsing the results as JSON:
Traceback (most recent call last):
File "twitter_past.py", line 28, in <module>
tweet_json = json.loads(data)
File "//anaconda/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "//anaconda/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
TypeError: expected string or buffer
Any idea how to insert the results properly in JSON format?
Answer: That's because your cursor doesn't return JSON. It returns instances of
the `tweepy.models.Status` model, which obviously can't be parsed as JSON.
To get the parsed JSON from the model you can use `data._json`.
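So the insertion loop from the question becomes (a sketch reusing the names
above; the client is also created once, outside the loop):

    client = MongoClient('localhost', 27017)
    collection = client['twitter_db']['twitter_collection']
    for data in tweepy.Cursor(api.search, q='test', since='2015-08-01', until='2015-08-10').items():
        collection.insert(data._json)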
|
Parsing file that has nested loop structures into list structure using python
Question: I am struggling to parse an FPGA simulation file (.vwf), specifically at the
point where the input waveforms are specified using a kind of nested loop
system. An example of the file format is:
TRANSITION_LIST("ADDR[0]")
{
NODE
{
REPEAT = 1;
LEVEL 0 FOR 100.0;
LEVEL 1 FOR 100.0;
NODE
{
REPEAT = 3;
LEVEL 0 FOR 100.0;
LEVEL 1 FOR 100.0;
NODE
{
REPEAT = 2;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
}
}
LEVEL 0 FOR 100.0;
}
}
So this means that the channel named "ADDR[0]" has its logic value switched as
follows:
LEVEL 0 FOR 100.0;
LEVEL 1 FOR 100.0;
LEVEL 0 FOR 100.0;
LEVEL 1 FOR 100.0;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
LEVEL 0 FOR 100.0;
LEVEL 1 FOR 100.0;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
LEVEL 0 FOR 100.0;
LEVEL 1 FOR 100.0;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
LEVEL 0 FOR 200.0;
LEVEL 1 FOR 200.0;
LEVEL 0 FOR 100.0;
I have set out to try and get this information into a list structure that
looks like:
    [[0, 100], [1, 100], [0, 100], [1, 100], [0, 200], [1, 200], [0, 200], [1, 200], [0, 100], [1, 100], [0, 200], [1, 200], [0, 200], [1, 200], [0, 100], [1, 100], [0, 200], [1, 200], [0, 200], [1, 200], [0, 100]]
However, I am struggling to come up with how to do this. I had attempted
something that I thought worked, but upon revisiting it I spotted my mistakes.
def get_data(LINES):
node_inst = []
total_inst = []
r = []
c = 0
rep_search = pp.Literal('REPEAT = ') + pp.Word(pp.nums)
log_search = pp.Literal('LEVEL') + pp.Word('01') + pp.Literal('FOR') + pp.Word(pp.nums + '.')
bra_search = pp.Literal('}')
for line in LINES:
print(line)
rep = rep_search.searchString(line)
log = log_search.searchString(line)
bra = bra_search.searchString(line)
if rep:
#print(line)
c += 1
if c > 1: # no logic values have been found when c == 1
for R in range(r[-1]):
for n in node_inst:
total_inst.append(n)
node_inst = []
r.append(int(rep[0][-1]))
elif log:
#print(line)
node_inst.append([int(log[0][1]),
int(round(1000 * float(log[0][-1])))])
elif bra:
#print(line)
if node_inst:
for R in range(r[-1]):
for n in node_inst:
total_inst.append(n)
node_inst = []
if r:
del r[-1]
return total_inst
where essentially `r` is a list that keeps track of the repeat values but
deletes the last value when a '}' is encountered. This produces something
close, but any values within the loop that repeats 2 times are only repeated 2
times, instead of also being part of the loop that repeats 3 times.
Any help or tips would be appreciated; I am just drawing a blank with what is
some pretty rough scripting. Anything to do with my code can be changed, but
the input file format cannot, to my knowledge.
Answer: Something like this; note that it depends heavily on the exact formatting.
import re
class Node:
def __init__(self, parent, items, repeat):
self.parent = parent
self.items = items
self.repeat = repeat
root = Node(None, [], 1)
node = root
with open('levels.txt') as f:
for line in f:
line = line.strip()
if line == 'NODE':
new_node = Node(node, [], 1)
node.items.append(new_node)
node = new_node
            if line == '}':
                # guard: the TRANSITION_LIST block contributes one extra '}'
                if node.parent is not None:
                    node = node.parent
res = re.match('REPEAT = (\d+)', line)
if res:
node.repeat=int(res.group(1))
res = re.match('LEVEL (\d+) FOR ([\d.]+)', line)
if res:
node.items.append((int(res.group(1)), float(res.group(2))))
def generate(node):
res = []
for i in xrange(node.repeat):
for item in node.items:
if isinstance(item, Node):
res.extend(generate(item))
elif isinstance(item, tuple):
res.append(item)
return res
res = generate(root)
for r in res:
print r
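For the sample file in the question, `res` comes out as the same 21 `(level,
duration)` pairs as the expected list above, printed one per line.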
|
Trying to upgrade graphite and now it's not working properly
Question: I've tried to update graphite from version '0.9.10' to '0.9.13' and I broke
our graphite installation.
The problem is that the graph images no longer render but the tree view still
works and all the old data is still there.
The traceback I get is:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 99, in get_response
request.path_info)
File "/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 271, in resolve
sub_match = pattern.resolve(new_path)
File "/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 271, in resolve
sub_match = pattern.resolve(new_path)
File "/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 159, in resolve
return ResolverMatch(self.callback, args, kwargs, self.name)
File "/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 168, in _get_callback
raise ViewDoesNotExist("Could not import %s. Error was: %s" % (mod_name, str(e)))
ViewDoesNotExist: Could not import graphite.render.views. Error was: cannot import name timezone
I also get the same error if I try `/dashboard`.
Python 2.7 is installed.
Answer: The answer is that I had django version 1.3.1 installed when I needed
1.4.1; installing that fixed my problem.
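For example (a hypothetical version pin matching the fix described above):

    sudo pip install "Django==1.4.1"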
|
Import modules from different folder (python)
Question: I have a folder, which contains two separate folders, one of which holds some
python modules, and the other one holds a python script that uses those
modules:
parentFolder/
lib/
__init__.py
readFile.py
writeFile.py
folder/
run.py
The `__init__.py` file is empty. In `run.py`, I have the following:
from ..lib import readFile
data = readFile('file.dat')
This gives me the error
Traceback (most recent call last):
File "run.py", line 1, in <module>
from ..lib import readFile
ValueError: Attempted relative import in non-package
What am I missing?
Answer: You need to add `__init__.py` files (can be empty) to each of the directories
to make them a package. See the
[documentation](https://docs.python.org/2/tutorial/modules.html#packages) for
more details.
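Note that even with the `__init__.py` files in place, relative imports only
work when the script runs as part of the package, so execute it as a module
from the directory _above_ `parentFolder`:

    python -m parentFolder.folder.run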
|
python 2.7 - setting up 2 loggers
Question: I'm trying to set up 2 loggers; unfortunately one of them doesn't
write into its file. Here is a snippet of my code:
LOG_FILENAME = 'test.log'
LOG_FILENAME2 = 'test2.log'
error_counter = 0
error_logger = CustomLogger(LOG_FILENAME2, 'w', '%(asctime)s - %(levelname)s - %(message)s',
'%d/%m/%Y %H:%M:%S')
error_logger.set_level('info')
error_logger.basic_config()
print "This is the first logger: {0}".format(error_logger.get_file_path)
error_logger.log_message("This is a test message of the first instance")
warning_logger = CustomLogger(LOG_FILENAME, 'w', '%(asctime)s - %(levelname)s - %(message)s',
'%d/%m/%Y %H:%M:%S')
warning_logger.set_level('warning')
warning_logger.basic_config()
print "This is the the second logger: {0} ".format(warning_logger.get_file_path)
warning_logger.log_message("this is a test message of the second instance")
Here is the custom class that i've created:
class CustomLogger(object):
LEVELS = {'debug': logging.DEBUG,
'info': logging.INFO,
'warning': logging.WARNING,
'error': logging.ERROR,
'critical': logging.CRITICAL}
def __init__(self, i_file_path=None, i_filemode=None, i_format=None, i_date_format=None, i_log_level=None):
self.__file_path = i_file_path
self.__filemode = i_filemode
self.__format = i_format
self.__date_format = i_date_format
self.__log_level = i_log_level
def basic_config(self):
logging.basicConfig(
filename=self.__file_path,
filemode=self.__filemode,
format=self.__format,
datefmt=self.__date_format,
level=self.__log_level
)
def log_message(self, i_message):
try:
if None in (self.__file_path, self.__log_level, self.__filemode, self.__date_format, self.__format):
raise ErrorLoggerPropertiesRequiredException()
except ErrorLoggerPropertiesRequiredException as e:
print "{0}".format(e.message)
else:
curr_logger = logging.getLogger(self.__file_path)
print "writing to log {0}".format(i_message)
curr_logger.log(self.__log_level, i_message)
It's creating and writing only to the first logger. I've tried many things I
saw in the [Python
Documentation](https://docs.python.org/2/library/logging.config.html); there is
another property called `disable_existing_loggers`, which is True by default.
I've tried accessing this property using
`logging.config.fileConfig(fname, defaults=None,
disable_existing_loggers=False)`, but it seems that there is no such method.
Answer: I believe you are running into a problem I have run into many times:
logging.basicConfig()
is calling the module level configuration, and, according to the docs:
> This function does nothing if the root logger already has handlers
> configured for it.
In other words, it will only have an effect the first time it is called. This
function is only meant as a "last resort" config, if I understand it
correctly.
You should instead configure each logger based on the "self" reference, rather
than the global basic config...
A basic pattern I use for each module level logger is (note the import
statement!):
import logging.config
try:
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
# try to set up a default logger
logging.basicConfig(level=logging.INFO,
format="%(asctime)s:%(name)s:%(lineno)d %(levelname)s : %(message)s")
main_logger = logging.getLogger(__name__)
I am sure I copied this from somewhere...
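For the two-file setup from the question, a minimal sketch (assumed helper
name, not the original `CustomLogger` API) that gives each logger its own
`FileHandler` looks like this:

    import logging

    def make_logger(name, path, level):
        # one FileHandler per named logger, instead of the one-shot basicConfig()
        logger = logging.getLogger(name)
        logger.setLevel(level)
        handler = logging.FileHandler(path, mode='w')
        handler.setFormatter(logging.Formatter(
            '%(asctime)s - %(levelname)s - %(message)s', '%d/%m/%Y %H:%M:%S'))
        logger.addHandler(handler)
        return logger

    error_logger = make_logger('errors', 'test2.log', logging.INFO)
    warning_logger = make_logger('warnings', 'test.log', logging.WARNING)
    error_logger.info("This is a test message of the first instance")
    warning_logger.warning("this is a test message of the second instance")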
|
How to get clean text from MediaWiki markup format using mwparserfromhell or a simple parser in python?
Question: I am trying to get clean sentences from the Wikipedia page of a species.
For instance _Abies durangensis_ (pid = 1268312). Using the Wikipedia API in
python to obtain the Wikipedia page:
import requests
pid = 1268312
q = {'action' : 'query',
'pageids': pid,
'prop' : 'revisions',
'rvprop' : 'content',
'format' : 'json'}
result = requests.get(eswiki_URI, params=q).json()
wikitext = result["query"]["pages"].values()[0]["revisions"][0]["*"]
gives:
{{Ficha de taxón
| name = ''Abies durangensis''
| image = Abies tamazula dgo.jpg
| status = LR/lc
| status_ref =<ref>Conifer Specialist Group 1998. [http://www.iucnredlist.org/search/details.php/42279/all ''Abies durangensis'']. [http://www.iucnredlist.org 2006 IUCN Red List of Threatened Species. ] Downloaded on 10 July 2007.</ref>
| regnum = [[Plantae]]
| divisio = [[Pinophyta]]
| classis = [[Pinopsida]]
| ordo = [[Pinales]]
| familia = [[Pinaceae]]
| genus = ''[[Abies]]''
| binomial = '''''Abies durangensis'''''
| binomial_authority = [[Maximino Martínez|Martínez]]<ref name=ipni>{{ cite web |url=http://www.ipni.org:80/ipni/idPlantNameSearch.do;jsessionid=0B15264060FDA0DCF216D997C89185EC?id=676563-1&back_page=%2Fipni%2FeditSimplePlantNameSearch.do%3Bjsessionid%3D0B15264060FDA0DCF216D997C89185EC%3Ffind_wholeName%3DAbies%2Bdurangensis%26output_format%3Dnormal |title=Plant Name Details for ''Abies durangensis'' |publisher=[[International Plant Names Index|IPNI]] |accessdate=6 de octubre de 2009}}</ref>
| synonyms =
}}
'''''Abies durangensis''''' es una [[especie]] de [[conífera]] perteneciente a la familia [[Pinaceae]]. Son [[endémica]]s de [[México]] donde se encuentran en [[Durango]], [[Chihuahua]], [[Coahuila]], [[Jalisco]] y [[Sinaloa]]. También es conocido como 'Árbol de Coahuila' y 'pino mexicano'.<ref name=cje>{{ cite web |url=http://www.conifers.org/pi/ab/durangensis.htm |title=''Abies durangaensis'' description |author=Christopher J. Earle |date=11 de junio de 2006 |accessdate=6 de octubre de 2009}}</ref>
== Descripción ==
Es un [[árbol]] que alcanza los 40 metros de altura con un [[Tronco (botánica)|tronco]] recto que tiene 150 cm de diámetro. Las [[rama]]s son horizontales y la [[corteza (árbol)|corteza]] de color gris. Las [[hoja]]s son verde brillante de 20–35 mm de longitud por 1-1.5 mm de ancho. Tiene los conos de [[semilla]]s erectos en ramas laterales sobre un corto [[pedúnculo]]. Las [[semilla]]s son [[resina|resinosas]] con una [[núcula]] amarilla con alas.
== Taxonomía ==
''Abies durangensis'' fue descrita por [[Maximino Martínez]] y publicado en ''[[Anales del instituto de Biología de la Universidad Nacional de México]]'' 13: 2. 1942.<ref name = Trop>{{cita web |url=http://www.tropicos.org/Name/24901700 |título= ''{{PAGENAME}}''|fechaacceso=21 de enero de 2013 |formato= |obra= Tropicos.org. [[Missouri Botanical Garden]]}}</ref>
;[[Etimología]]:
'''''Abies''''': nombre genérico que viene del nombre [[latin]]o de ''[[Abies alba]]''.<ref>[http://www.calflora.net/botanicalnames/pageAB-AM.html En Nombres Botánicos]</ref>
'''''durangensis''''': [[epíteto]] geográfico que alude a su localización en [[Durango]].
;Variedades:
* ''Abies durangensis var. coahuilensis'' (I. M. Johnst.) Martínez
;[[sinonimia (biología)|Sinonimia]]:
* ''Abies durangensis subsp. neodurangensis'' (Debreczy, I.Rácz & R.M.Salazar) Silba'
* ''Abies neodurangensis'' Debreczy, I.Rácz & R.M.Salazar<ref>[http://www.theplantlist.org/tpl/record/kew-2609816 ''{{PAGENAME}}'' en PlantList]</ref><ref name = Kew>{{cita web|url=http://apps.kew.org/wcsp/namedetail.do?name_id=2609816 |título=''{{PAGENAME}}'' |work= World Checklist of Selected Plant Families}}</ref>
;''var. coahuilensis'' (I.M.Johnst.) Martínez
* ''Abies coahuilensis'' I.M.Johnst.
* ''Abies durangensis subsp. coahuilensis'' (I.M.Johnst.) Silba
== Véase también ==
* [[Terminología descriptiva de las plantas]]
* [[Anexo:Cronología de la botánica]]
* [[Historia de la Botánica]]
* [[Pinaceae#Descripción|Características de las pináceas]]
== Referencias ==
{{listaref}}
== Bibliografía ==
# CONABIO. 2009. Catálogo taxonómico de especies de México. 1. In Capital Nat. México. CONABIO, Mexico City.
== Enlaces externos ==
{{commonscat}}
{{wikispecies|Abies}}
* http://web.archive.org/web/http://ww.conifers.org/pi/ab/durangensis.htm
* http://www.catalogueoflife.org/search.php
[[Categoría:Abies|durangensis]]
[[Categoría:Plantas descritas en 1942]]
[[Categoría:Plantas descritas por Martínez]]
I am interested in the (unmarked) text just after the infobox, the gloss:
> Abies durangensis es una especie de conífera perteneciente a la familia
> Pinaceae. Son endémicas de México donde se encuentran en Durango, Chihuahua,
> Coahuila, Jalisco y Sinaloa. También es conocido como 'Árbol de Coahuila' y
> 'pino mexicano'.
Until now I have consulted <https://www.mediawiki.org/wiki/Alternative_parsers>,
where I found that mwparserfromhell is the least complicated parser in Python.
However, I don't see clearly how to do what I intend. When I use the example
proposed in the documentation, I just can't see where the gloss is.
for t in templates:
print(t.name).encode('utf-8')
print(t.params)
Ficha de taxón
[u" name = ''Abies durangensis''\n", u' image = Abies tamazula dgo.jpg \n', u' status = LR/lc\n', u" status_ref =<ref>Conifer Specialist Group 1998. [http://www.iucnredlist.org/search/details.php/42279/all ''Abies durangensis'']. [http://www.iucnredlist.org 2006 IUCN Red List of Threatened Species. ] Downloaded on 10 July 2007.</ref>\n", u' regnum = [[Plantae]]\n', u' divisio = [[Pinophyta]]\n', u' classis = [[Pinopsida]]\n', u' ordo = [[Pinales]]\n', u' familia = [[Pinaceae]]\n', u" genus = ''[[Abies]]'' \n", u" binomial = '''''Abies durangensis'''''\n", u" binomial_authority = [[Maximino Mart\xednez|Mart\xednez]]<ref name=ipni>{{ cite web |url=http://www.ipni.org:80/ipni/idPlantNameSearch.do;jsessionid=0B15264060FDA0DCF216D997C89185EC?id=676563-1&back_page=%2Fipni%2FeditSimplePlantNameSearch.do%3Bjsessionid%3D0B15264060FDA0DCF216D997C89185EC%3Ffind_wholeName%3DAbies%2Bdurangensis%26output_format%3Dnormal |title=Plant Name Details for ''Abies durangensis'' |publisher=[[International Plant Names Index|IPNI]] |accessdate=6 de octubre de 2009}}</ref>\n", u' synonyms = \n']
cite web
[u'url=http://www.ipni.org:80/ipni/idPlantNameSearch.do;jsessionid=0B15264060FDA0DCF216D997C89185EC?id=676563-1&back_page=%2Fipni%2FeditSimplePlantNameSearch.do%3Bjsessionid%3D0B15264060FDA0DCF216D997C89185EC%3Ffind_wholeName%3DAbies%2Bdurangensis%26output_format%3Dnormal ', u"title=Plant Name Details for ''Abies durangensis'' ", u'publisher=[[International Plant Names Index|IPNI]] ', u'accessdate=6 de octubre de 2009']
cite web
[u'url=http://www.conifers.org/pi/ab/durangensis.htm ', u"title=''Abies durangaensis'' description ", u'author=Christopher J. Earle ', u'date=11 de junio de 2006 ', u'accessdate=6 de octubre de 2009']
cita web
[u'url=http://www.tropicos.org/Name/24901700 ', u"t\xedtulo= ''{{PAGENAME}}''", u'fechaacceso=21 de enero de 2013 ', u'formato= ', u'obra= Tropicos.org. [[Missouri Botanical Garden]]']
PAGENAME
[]
PAGENAME
[]
cita web
[u'url=http://apps.kew.org/wcsp/namedetail.do?name_id=2609816 ', u"t\xedtulo=''{{PAGENAME}}'' ", u'work= World Checklist of Selected Plant Families']
PAGENAME
[]
listaref
[]
commonscat
[]
wikispecies
[u'Abies']
Answer: Instead of torturing yourself with parsing something that isn't even
expressible in a formal grammar, use the [TextExtracts
API](https://www.mediawiki.org/wiki/Extension:TextExtracts#API):
`https://es.wikipedia.org/w/api.php?action=query&prop=extracts&explaintext=1&titles=Abies%20durangensis&format=json`
gives the following output:
> Abies durangensis es una especie de conífera perteneciente a la familia
> Pinaceae. Son endémicas de México donde se encuentran en Durango, Chihuahua,
> Coahuila, Jalisco y Sinaloa. También es conocido como 'Árbol de Coahuila' y
> 'pino mexicano'.
>
> == Descripción ==
>
> Es un árbol que alcanza los 40 metros de altura con un tronco recto que
> tiene 150 cm de diámetro. Las ramas son horizontales y la corteza de color
> gris. Las hojas son verde brillante de 20–35 mm de longitud por 1-1.5 mm de
> ancho. Tiene los conos de semillas erectos en ramas laterales sobre un corto
> pedúnculo. Las semillas son resinosas con una núcula amarilla con alas.
>
> [...]
Append `&exintro=1` to the URL if you need only the lead.
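Using the same `requests` pattern as in the question (a sketch; `explaintext`
strips the markup and `exintro` limits the extract to the lead):

    import requests

    q = {'action'     : 'query',
         'prop'       : 'extracts',
         'explaintext': 1,
         'exintro'    : 1,
         'pageids'    : 1268312,
         'format'     : 'json'}
    result = requests.get('https://es.wikipedia.org/w/api.php', params=q).json()
    gloss = result['query']['pages'].values()[0]['extract']
    print gloss.encode('utf-8')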
|
Run command and get its stdout, stderr separately in near real time like in a terminal
Question: **Two answers were provided, one of which addresses the first two criteria and
will work well where you just need both the stdout and stderr using Threads
and Queue. The other answer uses select, a non-blocking method for reading
file descriptors, and pty, a method to "trick" the spawned process into
believing it is running in a real terminal just as if it was run from Bash
directly - but may or may not have side-effects. I wish I could accept both
answers, because the "correct" method really depends on the situation and why
you are subprocessing in the first place, but alas, I could only accept one.**
I am trying to find a way in Python to run other programs in such a way that:
1. The stdout and stderr of the program being run can be logged separately.
2. The stdout and stderr of the program being run can be viewed in near-real time, such that if the child process hangs, the user can see. (i.e. we do not wait for execution to complete before printing the stdout/stderr to the user)
3. Bonus criteria: The program being run does not know it is being run via python, and thus will not do unexpected things (like chunk its output instead of printing it in real-time, or exit because it demands a terminal to view its output). This small criteria pretty much means we will need to use a pty I think.
Here is what I've got so far... Method 1:
def method1(command):
## subprocess.communicate() will give us the stdout and stderr sepurately,
## but we will have to wait until the end of command execution to print anything.
## This means if the child process hangs, we will never know....
proc=subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable='/bin/bash')
stdout, stderr = proc.communicate() # record both, but no way to print stdout/stderr in real-time
print ' ######### REAL-TIME ######### '
######## Not Possible
print ' ########## RESULTS ########## '
print 'STDOUT:'
print stdout
    print 'STDERR:'
print stderr
Method 2
def method2(command):
## Using pexpect to run our command in a pty, we can see the child's stdout in real-time,
## however we cannot see the stderr from "curl google.com", presumably because it is not connected to a pty?
## Furthermore, I do not know how to log it beyond writing out to a file (p.logfile). I need the stdout and stderr
        ## as strings, not files on disk! On the upside, pexpect would give a lot of extra functionality (if it worked!)
proc = pexpect.spawn('/bin/bash', ['-c', command])
print ' ######### REAL-TIME ######### '
proc.interact()
print ' ########## RESULTS ########## '
######## Not Possible
Method 3:
def method3(command):
## This method is very much like method1, and would work exactly as desired
## if only proc.xxx.read(1) wouldn't block waiting for something. Which it does. So this is useless.
proc=subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable='/bin/bash')
print ' ######### REAL-TIME ######### '
out,err,outbuf,errbuf = '','','',''
firstToSpeak = None
while proc.poll() == None:
stdout = proc.stdout.read(1) # blocks
stderr = proc.stderr.read(1) # also blocks
if firstToSpeak == None:
if stdout != '': firstToSpeak = 'stdout'; outbuf,errbuf = stdout,stderr
elif stderr != '': firstToSpeak = 'stderr'; outbuf,errbuf = stdout,stderr
else:
if (stdout != '') or (stderr != ''): outbuf += stdout; errbuf += stderr
else:
out += outbuf; err += errbuf;
if firstToSpeak == 'stdout': sys.stdout.write(outbuf+errbuf);sys.stdout.flush()
else: sys.stdout.write(errbuf+outbuf);sys.stdout.flush()
firstToSpeak = None
print ''
print ' ########## RESULTS ########## '
print 'STDOUT:'
print out
print 'STDERR:'
print err
To try these methods out, you will need to `import sys, subprocess, pexpect`.
pexpect is pure-python and can be had with
> sudo pip install pexpect
I think the solution will involve python's pty module - which is somewhat of a
black art that I cannot find anyone who knows how to use. Perhaps SO knows :)
As a heads-up, I recommend you use 'curl www.google.com' as a test command,
because it prints its status out on stderr for some reason :D
UPDATE: OK so the pty library is not fit for human consumption. The docs,
essentially, are the source code. Any presented solution that is blocking and
not async is not going to work here. The Threads/Queue method by Padraic
Cunningham works great, although adding pty support is not possible - and it's
'dirty' (to quote Freenode's #python). It seems like the only solution fit for
production-standard code is using the Twisted framework, which even supports
pty as a boolean switch to run processes exactly as if they were invoked from
the shell. But adding Twisted into a project requires a total rewrite of all
the code. This is a total bummer :/
Answer: > The stdout and stderr of the program being run can be logged separately.
You can't use `pexpect` because both stdout and stderr go to the same `pty`
and there is no way to separate them after that.
> The stdout and stderr of the program being run can be viewed in near-real
> time, such that if the child process hangs, the user can see. (i.e. we do
> not wait for execution to complete before printing the stdout/stderr to the
> user)
If the output of a subprocess is not a tty then [it is likely that it uses block buffering](http://pexpect.readthedocs.org/en/latest/FAQ.html#whynotpipe) and therefore if it doesn't produce much output then _it won't be "real time"_ e.g., if the buffer is 4K then your parent Python process won't see anything until the child process prints 4K chars and the buffer overflows or it is flushed explicitly (inside the subprocess). This buffer is inside the child process and there are no standard ways to manage it from outside. Here's a picture that shows stdio buffers and the pipe buffer for the `command1 | command2` shell pipeline:
[Diagram: stdio buffering in a shell pipeline](http://www.pixelbeat.org/programming/stdio_buffering/)
> The program being run does not know it is being run via python, and thus
> will not do unexpected things (like chunk its output instead of printing it
> in real-time, or exit because it demands a terminal to view its output).
It seems, you meant the opposite i.e., it is likely that your child process
chunks its output instead of flushing each output line as soon as possible if
the output is redirected to a pipe (when you use `stdout=PIPE` in Python). It
means that the default [threading](http://stackoverflow.com/a/4985080/4279) or
[asyncio solutions](http://stackoverflow.com/a/25960956/4279) won't work as is
in your case.
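
For reference, the threading/queue pattern those links describe looks roughly
like the sketch below. It logs stdout and stderr separately and doesn't
deadlock, but it still only sees output once the child flushes its buffers:

    #!/usr/bin/env python3
    from queue import Queue
    from subprocess import Popen, PIPE
    from threading import Thread

    def reader(pipe, tag, queue):
        # Forward each line of the pipe to the queue, then signal EOF.
        try:
            with pipe:
                for line in iter(pipe.readline, b''):
                    queue.put((tag, line))
        finally:
            queue.put(None)  # sentinel: this stream is done

    p = Popen(['curl', 'www.google.com'], stdout=PIPE, stderr=PIPE)
    q = Queue()
    Thread(target=reader, args=(p.stdout, 'stdout', q)).start()
    Thread(target=reader, args=(p.stderr, 'stderr', q)).start()
    for _ in range(2):  # one sentinel per stream
        for tag, line in iter(q.get, None):
            print(tag, line.decode(), end='')
    p.wait()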
There are several options to workaround it:
* the command may accept a command-line argument such as `grep --line-buffered` or `python -u`, to disable block buffering.
* [`stdbuf` works for some programs](http://stackoverflow.com/a/12471855/4279) i.e., you could run `['stdbuf', '-oL', '-eL'] + command` using the threading or asyncio solution above and you should get stdout, stderr separately and lines should appear in near-real time:
#!/usr/bin/env python3
import os
import sys
from select import select
from subprocess import Popen, PIPE
with Popen(['stdbuf', '-oL', '-e0', 'curl', 'www.google.com'],
stdout=PIPE, stderr=PIPE) as p:
readable = {
p.stdout.fileno(): sys.stdout.buffer, # log separately
p.stderr.fileno(): sys.stderr.buffer,
}
while readable:
for fd in select(readable, [], [])[0]:
data = os.read(fd, 1024) # read available
if not data: # EOF
del readable[fd]
else:
readable[fd].write(data)
readable[fd].flush()
* finally, you could try `pty` \+ `select` solution with two `pty`s:
#!/usr/bin/env python3
import errno
import os
import pty
import sys
from select import select
from subprocess import Popen
masters, slaves = zip(pty.openpty(), pty.openpty())
with Popen([sys.executable, '-c', r'''import sys, time
print('stdout', 1) # no explicit flush
time.sleep(.5)
print('stderr', 2, file=sys.stderr)
time.sleep(.5)
print('stdout', 3)
time.sleep(.5)
print('stderr', 4, file=sys.stderr)
'''],
stdin=slaves[0], stdout=slaves[0], stderr=slaves[1]):
for fd in slaves:
os.close(fd) # no input
readable = {
masters[0]: sys.stdout.buffer, # log separately
masters[1]: sys.stderr.buffer,
}
while readable:
for fd in select(readable, [], [])[0]:
try:
data = os.read(fd, 1024) # read available
except OSError as e:
if e.errno != errno.EIO:
raise #XXX cleanup
del readable[fd] # EIO means EOF on some systems
else:
if not data: # EOF
del readable[fd]
else:
readable[fd].write(data)
readable[fd].flush()
for fd in masters:
os.close(fd)
I don't know what are the side-effects of using different `pty`s for stdout,
stderr. You could try whether a single pty is enough in your case e.g., set
`stderr=PIPE` and use `p.stderr.fileno()` instead of `masters[1]`. [Comment in
`sh` source suggests that there are issues if `stderr not in {STDOUT,
pipe}`](https://github.com/amoffat/sh/blob/7a4e29655362c86bd3b71c3122a24709b9bcbb10/sh.py#L1194)
|
Get Visio Shape.BoundingBox method with Python
Question: I am using Python with the win32com.client to get the page names and shapes
description for a Microsoft Visio drawing. The Python code below works for
getting the shape index, shape name and shape text. The command to get the
shape bounding box fails with an invalid index.
import sys, win32com.client
import copy
def main ():
try:
visio = win32com.client.Dispatch("Visio.Application")
visio.Visible = 0
dwg = visio.Documents.Open("C:\Users\John\Drawing1.vsdx")
# Used by Visio Shape.BoundingBox method
intFlags = 0
visBBoxUprightWH = 0x1
try:
vsoShapes = dwg.Pages.Item(1).Shapes # Get shapes for Visio Page-1
for s in range (len (vsoShapes)):
# This line works
print "Index = %s, Shape = %s, Text = %s" % (vsoShapes[s].Index, vsoShapes[s].Name, vsoShapes[s].Text)
dblLeft =0.0
dblBottom =0.0
dblRight = 0.0
dblTop = 0.0
# ====== This line will fail with invalid syntax =======
vsoShapes.Item(s).BoundingBox intFlags + visBBoxUprightWH, dblLeft, dblBottom, dblRight, dblTop
except Exception, e:
print "Error", e
dwg.Close()
visio.Quit()
except Exception, e:
print "Error opening visio file",e
visio.Quit()
main()
How do you call this Visio command from Python
vsoShapes.Item(s).BoundingBox intFlags + visBBoxUprightWH, dblLeft, dblBottom, dblRight, dblTop
The Microsoft documentation for the Shape.BoundingBox command is located
here: <https://msdn.microsoft.com/en-us/library/office/ff766755.aspx>
Answer: These `dblLeft, dblBottom, dblRight, dblTop` are typed output parameters.
Assigning 0.0 to ordinary Python variables beforehand does not make them output
parameters; win32com returns COM output parameters as part of the method's
return value. Try this instead:
dblLeft, dblBottom, dblRight, dblTop = vsoShapes[s].BoundingBox(intFlags+visBBoxUprightWH)
Check out a similar question: [Python win32 com : how to handle 'out'
parameter?](http://stackoverflow.com/questions/1062129/python-win32-com-how-
to-handle-out-parameter)
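
For context, a minimal sketch of the corrected loop (this assumes the
`visio`/`dwg` setup from the question; `Name` is a standard Shape property):

    intFlags = 0
    visBBoxUprightWH = 0x1
    vsoShapes = dwg.Pages.Item(1).Shapes
    for s in range(len(vsoShapes)):
        # win32com hands back the four 'out' parameters as a tuple
        dblLeft, dblBottom, dblRight, dblTop = vsoShapes[s].BoundingBox(intFlags + visBBoxUprightWH)
        print "Shape %s box: (%s, %s) to (%s, %s)" % (vsoShapes[s].Name, dblLeft, dblBottom, dblRight, dblTop)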
|
Time and conditional statement is not working in python?
Question: The purpose of this script is to switch a Boolean to false when the
time (EST) is between 8 AM and 11 PM.
import time as t
from datetime import datetime
while True:
t.sleep(1)
current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
current_hour = datetime.now().strftime('%H')
print current_hour
if 23 > current_hour:
print 'Sleep mode activated (1)'
sleep_mode = True
elif current_hour < 7:
            print 'Sleep mode activated (2)'
sleep_mode = True
else:
sleep_mode = False
The problem, however is here:
if 23 > current_hour:
print 'Sleep mode activated (1)'
sleep_mode = True
It returns true even if I change '23' to an unrealistic number, or to a time
which is MUCH greater than the current time.
I've also tested to make sure that it's working in EST, which it is. I don't
know if there's something wrong with the if statement or something else.
Please help?
Answer: You should never do this:
current_hour = datetime.now().strftime('%H')
If you want the current hour, just use:
current_hour = datetime.now().hour
You are converting a `datetime` to a string, then extracting part of that
string and trying to compare it to a different data type. **Don't. Do. That.**
In Python 2, comparing an `int` to a `str` does not compare values at all; it
falls back to an arbitrary (but consistent) type-based ordering, which is why
changing `23` to any other number makes no difference.
Python's [datetime](https://docs.python.org/2/library/datetime.html) module
has wonderful properties, like
[hour](https://docs.python.org/2/library/datetime.html#datetime.datetime.hour);
just use them.
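
A corrected version of the loop might look like this (a sketch; it assumes the
intent is that `sleep_mode` is True outside 8 AM-11 PM, since the original
comparisons don't match the stated goal):

    import time as t
    from datetime import datetime

    while True:
        t.sleep(1)
        current_hour = datetime.now().hour  # an int in the range 0-23
        if current_hour >= 23 or current_hour < 8:
            print 'Sleep mode activated'
            sleep_mode = True
        else:
            sleep_mode = False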
|
Construct caffe.Net object using NetParameter
Question: From the
[documentation](http://caffe.berkeleyvision.org/doxygen/classcaffe_1_1Net.html)
I thought there was a constructor taking a NetParameter argument,
> explicit Net(const NetParameter& param);
but when I try to use it like this:
import caffe
from caffe import layers as L
from google.protobuf import text_format
def logreg(hdf5, batch_size):
# logistic regression: data, matrix multiplication, and 2-class softmax loss
n = caffe.NetSpec()
n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
n.ip1 = L.InnerProduct(n.data, num_output=2, weight_filler=dict(type='xavier'))
n.accuracy = L.Accuracy(n.ip1, n.label)
n.loss = L.SoftmaxWithLoss(n.ip1, n.label)
return n.to_proto()
logreg_str = str(logreg('examples/hdf5_classification/data/test.txt', 10))
net_param = caffe.proto.caffe_pb2.NetParameter()
_ = text_format.Merge(logreg_str, net_param)
print type(net_param);
caffe.Net(net_param, caffe.TEST)
The following error occurs in IPython:
<class 'caffe.proto.caffe_pb2.NetParameter'>
---------------------------------------------------------------------------
ArgumentError Traceback (most recent call last)
<ipython-input-20-edce76ff13a1> in <module>()
14
15 print type(net_param);
---> 16 caffe.Net(net_param, caffe.TEST)
ArgumentError: Python argument types in
Net.__init__(Net, NetParameter, int) did not match C++ signature:
__init__(boost::python::api::object, std::string, std::string, int)
__init__(boost::python::api::object, std::string, int)
So what am I doing wrong here? How do I use this constructor?
Note: I know how to use the "read file from disk" constructor already; I want
to use the NetParameter one, or understand why it doesn't work.
Edit after Shai's comment:
I acquired caffe using this command on Jul 26, 2015: git clone
<https://github.com/BVLC/caffe.git>
Here's the file on my disk:
~/caffe/src/caffe$ grep NetParameter net.cpp | head -1
Net<Dtype>::Net(const NetParameter& param) {
~/caffe/src/caffe$ ~/caffe/build/tools/caffe -version
caffe
The -version switch appears to do nothing. I grepped through the source and
was unable to find a version number.
Answer: Nothing is wrong with your code. Indeed there's an overloaded
constructor of the Net class in C++, but it's currently not exposed by the
Python interface, which is limited to the constructor that takes a file path.
I'm not sure if simply exposing it in python/caffe/_caffe.cpp is the only
thing keeping us from constructing a python Net object with NetParameter or if
more elaborate changes are needed.
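
Until it is exposed, a common workaround is to serialize the `NetParameter` to
a temporary prototxt file and load it through the file-based constructor; a
sketch (the helper name is mine):

    import tempfile
    import caffe

    def net_from_param(net_param, phase):
        # str() on a protobuf message yields its text (prototxt) form
        with tempfile.NamedTemporaryFile(mode='w', suffix='.prototxt', delete=False) as f:
            f.write(str(net_param))
            fname = f.name
        return caffe.Net(fname, phase)

    net = net_from_param(net_param, caffe.TEST)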
|
How to deal with python import with custom API
Question: I created an API for my work (Python version 3.4). My API looks like this:
* MyAPI
    * `__init__.py`
    * Communication
        * `__init__.py`
        * `SerialCom.py`
    * JsonManager
        * `__init__.py`
        * `VersionHandler.py`
    * Sessions
        * `__init__.py`
        * `SessionManager.py`
        * `Session.py`
    * TestAPI
        * `__init__.py`
        * `mainWindows.py`
        * `mainWindowsQtUi.py`
In my TestAPI package it works well; I have no problem importing my API this
way:
But when I try to import it in another project I get import errors.
I'm using Visual Studio with the Python plug-in. I added my API to the search
paths, which look like this:
My_api/MyAPI
/MyAPI_Test
so in my code I try this:
from My_api.MyAPI.Sessions.SessionManager import SessionManager
And I got an `ImportError`. Visual Studio shows me the `SessionManager` file in
the API and points at this line:
from Sessions.Session import Session
I'm confused: it works fine with my TestAPI package but fails with an external
package. I guess I missed something, but I don't know what.
Answer: First of all, ensure that your project directory is on PYTHONPATH.
This could also occur if you are already importing `SessionManager` in your
`Sessions/__init__.py`. Then when you try

    from Sessions.SessionManager import SessionManager

Python tries to import `SessionManager` from the `SessionManager` class rather
than from the module.
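
A sketch of both fixes (the path below is hypothetical; use wherever `My_api`
lives on disk):

    # In the consuming project: make the directory that *contains* My_api importable.
    import sys
    sys.path.insert(0, r'C:\path\to\parent_of_My_api')  # hypothetical path
    from My_api.MyAPI.Sessions.SessionManager import SessionManager

    # Inside MyAPI/Sessions/SessionManager.py, make the intra-package import
    # explicit so it resolves however the package itself is imported:
    # from .Session import Session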
|
Unable to import os in Brython - TypeError
Question: I am trying to import the os module in Brython, but no matter what I do, no
matter what I try, I am unable to. I get the following error (in the Firefox
console):
"TypeError: obj is undefined for module os" brython.js:6329:21
"message: undefined" brython.js:6330:1
"filename: http://localhost:8000/src/brython.js" brython.js:6331:1
"linenum: 4418" brython.js:6332:1
"Javascript error" TypeError: obj is undefined
Stack-Trace:
$B.get_class@http://localhost:8000/src/brython.js:4418:5
$test@http://localhost:8000/src/brython.js:8873:1
$SetDict.__le__@http://localhost:8000/src/brython.js:8830:50
getattr/method@http://localhost:8000/src/brython.js:5039:8
$module<@http://localhost:8000/src/brython.js line 6329 > eval:966:41
@http://localhost:8000/src/brython.js line 6329 > eval:1:14
run_py@http://localhost:8000/src/brython.js:6329:1
import_py@http://localhost:8000/src/brython.js:6310:8
import_from_stdlib_static@http://localhost:8000/src/brython.js:6378:22
$B.$import@http://localhost:8000/src/brython.js:6454:57
@http://localhost:8000/src/brython.js line 3931 > eval:11:1
brython@http://localhost:8000/src/brython.js:3931:7
onload@http://localhost:8000/boolean/boolean.html:1:1
brython.js:3940:43
"Traceback (most recent call last):
RuntimeError: TypeError: obj is undefined"
When I do this:
import os
def foo(ev):
print(os.getcwd())
doc["submit"].bind('click', foo)
This is just an example - originally the code was much longer. I am using the
latest Brython version (3.2.0). The import of os and the functions work fine
when I try them on the Brython page in the console they provide, and so far os
is the only import that I cannot get to work. Any help is appreciated, thank
you for your time
Update: It works now, but I have no idea why, and I don't understand it. If
someone could explain what I did wrong, that would be great, in case I get a
similar problem in the future. I looked around for alternatives to os (I want
to read .txt files from a directory; I know how to do this in "vanilla"
Python using os) and found a solution using glob, which I tried to import.
Brython doesn't seem to come with glob, so I put glob.py in the lib folder.
But I also saw that glob imports os, so I wondered if it would work, which it
did. Then I tried to import os again, and now it works and I don't know why.
from browser import document as doc, alert, html
from glob import glob
import os
# ... #
def foo(ev):
print(os.getcwd())
doc["submit"].bind("click", foo)
Now it works. I don't get it: why does it work if I import glob and then os?
Answer: In your code sample `doc` is not defined:
import os
def foo(ev):
print(os.getcwd())
doc["submit"].bind('click', foo)
I can `import os` with no issues.
I suppose your complete code is not like this. Please provide the complete
sample so I can check whether there are other issues, and I will edit my
answer.
P.S.: I can't comment as I don't have enough reputation, which is why I posted
an answer...
|
AttributeError when monkey patching an object in python
Question: I tried to monkey patch an object in Python.
class C:
def __init__(self):
self.__n = 0
def f(self):
self.__n += 1
def __str__(self):
return str(self.__n)
c = C()
def up(self):
self.__n += 2
import types
c.up = types.MethodType(up, c)
c.up()
But I got an AttributeError
Traceback (most recent call last):
File "untitled4.py", line 19, in <module>
c.up()
File "untitled4.py", line 15, in up
self.__n += 2
AttributeError: 'C' object has no attribute '__n'
How can I fix the error?
Answer: Since your `up` function isn't declared inside a class body, [name
mangling](https://docs.python.org/3.5/tutorial/classes.html#private-variables)
is not applied to the `__n` attribute it references, whereas it was applied
when the attribute was created inside `C` (where it became `_C__n`). Therefore
you need to apply the name mangling manually in your function:
def up(self):
self._C__n += 2
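
Putting it together (a sketch; note that the mangled name hard-codes the class
name `C`, so renaming the class breaks the patch):

    import types

    class C:
        def __init__(self):
            self.__n = 0      # stored as _C__n due to name mangling
        def f(self):
            self.__n += 1
        def __str__(self):
            return str(self.__n)

    def up(self):
        self._C__n += 2       # apply the mangling manually

    c = C()
    c.up = types.MethodType(up, c)
    c.up()
    print(c)                  # prints 2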
|
Why does HTML convert python output '<' to '&lt;' and '>' to '&gt;'?
Question: I created a script which sends mail with specific output taken from a
server. I split this output and sent each element to an HTML cell. I also
created a header for the table, which looks like this:
def get_html_table_header(*column_names):
header_string = '<tr width=79 style="background:#3366FF;height:23.25pt;font-size:8.0pt;font-family:Arial,sans-serif;color:white;font-weight:bold;" >'
for column in column_names:
if column is not None:
header_string += '<td>' + column + '</td>'
header_string += '</tr>'
return header_string
def get_concrete_html_table_header():
return get_html_table_header('Num. Row','Cell1','Cell2','Cell3','Comment (enter your feedback below)','Cell4','Cell5','Cell6','Cell7','Cell8','Cell9','Cell10')
When I print the result of this function in the Linux console, it looks like this:
<tr width=79 style="background:#3366FF;height:23.25pt;font-size:8.0pt;font-family:Arial,sans-serif;color:white;font-weight:bold;" ><td>Num. Row</td><td>Cell1</td><td>Cell2</td><td>Cell3</td><td>Comment (enter your feedback below)</td><td>Cell4</td><td>Cell5</td><td>Cell6</td><td>Cell7</td><td>Cell8</td><td>Cell9</td><td>Cell10</td></tr>
When I receive the email, the source looks like this:
    <tr width="79" style="background:#3366FF;height:23.25pt;font-size:8.0pt;font-family:Arial,sans-serif;color:white;font-weight:bold;"><td>Num. Row</td><td>Cell1</td><td>Cell2</td><td>Cell3</td><td>Comment (enter your feedback below)</td><td>Cell4</td><td>Cell5</td><td>Cell6</td><td>Cell7</td><td>Cell8</td><td>Cell9</td>&lt; td>Cell10</td></tr>
To build the email body I'm using this function:
def build_email_body(CRs_list):
global criterial_number
if 0 == len(CRs_list):
return None
email_body = ''
email_body += '<html><head><title>My Title</title></head><body>'
email_body += '<p align="center"><font color="#176b54" size="+2"><b>Some info</b></font></p>'
email_body += '<p align="center"><font color="#176b54" size="+1">Another info</font></p>'
email_body += '<table align="center" BORDER=1 CELLSPACING=2 CELLPADDING=2 COLS=3 WIDTH="100%">'
email_body += get_concrete_html_table_header()
for CR in CRs_list:
email_body += get_html_table_row()#create row for every output received(11 cells for every output, according with the header)
email_body += '</table>'
email_body += '</table><br><p align="left"><font color="#176b54" size="+1"><b>=> This is an automatic generated email via script<br>'
email_body += '<br><br>Have a nice day!</b></font></p><br></body></html>'
return email_body
To send the email I'm using this function:
def send_email(body, recipients, subject, file):
#inform just sender
if None == body:
body = "WARNING -> NO entries retrieved after 5 retries<br>CRAU output:<br>" + dct_newCRs_output + "<br>" + duration
#override recipients to not set junk info
recipients = sender
email = Email(SMTP_SERVER, SENDER, recipients, _CC, subject, body, 'html', file)
email.send()
send() comes from the Email class:
import os, smtplib
from email import encoders
from email.mime.audio import MIMEAudio
from email.mime.base import MIMEBase
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import mimetypes
class Email:
__config = {}
def __init__(self, smtp_server, sender, recipients, cc, subject, body, body_type, attachments=None):
self.__config = {'smtp_server': smtp_server,
'sender': sender,
'recipients': recipients,
'cc': cc,
'subject': subject,
'body':body,
'body_type':body_type, #plain|html
'attachments':attachments #list of files
}
def getSmtpServer(self):
return self.__config.get('smtp_server')
def getSender(self):
return self.__config.get('sender')
def getRecipients(self):
return self.__config.get('recipients')
def getCc(self):
return self.__config.get('cc')
def getSubject(self):
return self.__config.get('subject')
def getBody(self):
return self.__config.get('body')
def getBodyType(self):
return self.__config.get('body_type')
def getAttachments(self):
return self.__config.get('attachments')
def setSmtpServer(self, host):
self.__config['smtp_server'] = smtp_server
return self
def setSender(self, sender):
self.__config['sender'] = sender
return self
def setRecipients(self, recipients):
self.__config['recipients'] = recipients
return self
def setCc(self, cc):
self.__config['cc'] = cc
return self
def setSubject(self, subject):
self.__config['subject'] = subject
return self
def setBody(self, body):
self.__config['body'] = body
            return self
def setBodyType(self, body_type):
self.__config['body_type'] = body_type
return self
def setAttachments(self, attachments):
self.__config['attachments'] = attachments
return self
def attachFilesToEmail(self, attachments, msg):
if None == attachments:
tmpmsg = msg
msg = MIMEMultipart()
msg.attach(tmpmsg)
if None != attachments:
for fname in attachments:
if not os.path.exists(fname):
print "File '%s' does not exist. Not attaching to email." % fname
continue
if not os.path.isfile(fname):
print "Attachment '%s' is not a file. Not attaching to email." % fname
continue
# Guess at encoding type
ctype, encoding = mimetypes.guess_type(fname)
if ctype is None or encoding is not None:
# No guess could be made so use a binary type.
ctype = 'application/octet-stream'
maintype, subtype = ctype.split('/', 1)
if maintype == 'text':
fp = open(fname)
attach = MIMEText(fp.read(), _subtype=subtype)
fp.close()
elif maintype == 'image':
fp = open(fname, 'rb')
attach = MIMEImage(fp.read(), _subtype=subtype)
fp.close()
elif maintype == 'audio':
fp = open(fname, 'rb')
attach = MIMEAudio(fp.read(), _subtype=subtype)
fp.close()
else:
fp = open(fname, 'rb')
attach = MIMEBase(maintype, subtype)
attach.set_payload(fp.read())
fp.close()
# Encode the payload using Base64
encoders.encode_base64(attach)
# Set the filename parameter
filename = os.path.basename(fname)
attach.add_header('Content-Disposition', 'attachment', filename=filename)
msg.attach(attach)
def send(self):
# Create message container - the correct MIME type is multipart/alternative.
msg = MIMEMultipart('alternative')
msg['Subject'] = self.getSubject()
msg['From'] = self.getSender()
msg['To'] = self.getRecipients()
msg['CC'] = self.getCc()
# Record the MIME types of both parts - text/plain and text/html.
#part1 = MIMEText(text, 'plain')
#part2 = MIMEText(html, 'html')
part = MIMEText(self.getBody(), self.getBodyType())
# Attach parts into message container.
# According to RFC 2046, the last part of a multipart message, in this case
# the HTML message, is best and preferred.
msg.attach(part)
# Add attachments, if any
self.attachFilesToEmail(self.getAttachments(), msg)
# Send the message via local SMTP server.
s = smtplib.SMTP(self.getSmtpServer())
# sendmail function takes 3 arguments: sender's address, recipient's address
# and message to send - here it is sent as one string.
s.sendmail(self.getSender(), (self.getRecipients() + self.getCc()).split(","), msg.as_string())
s.quit()
I hope this is enough information. Can someone explain to me why this is
happening and how I can fix it?
Answer: Your code looks correct, the problem is elsewhere.
`&lt;` is what you get when you add `<` as text to an HTML document (since `<`
means "start new element", you need to escape this character in plain text).
The interesting part here is why it happens only once in the whole string.
If all the `<` had been replaced by `&lt;`, my guess would be that you
accidentally added the table as text to the HTML body of the mail.
Maybe the space in `&lt; td>` is a clue: mails shouldn't have more than 72
characters per line. So maybe the mail client software wraps the HTML?
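
For reference, this is the escaping being described; Python 2's `cgi.escape`
produces it (Python 3 uses `html.escape` instead):

    import cgi
    print cgi.escape('<td>Cell10</td>')  # -> &lt;td&gt;Cell10&lt;/td&gt;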
|