Python Logging module not respecting setLevel below '30' (Warning)?
Question: I'm at a bit of a loss here. I could swear that I had this working earlier
last week. Today I've returned to it and can't seem to get logging to work.
In the below sample script, I hope to show a relatively complete demonstration
of my issue.
**test.py**
import logging
logger = logging.getLogger()
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.warning("Testing Warning")
logger.setLevel(60)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.error("Testing Error")
logger.setLevel(50)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.error("Testing Error")
logger.setLevel(40)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.error("Testing Error")
logger.setLevel(30)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.info("Testing Info")
logger.setLevel(20)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.info("Testing Info")
logger.setLevel(10)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.info("Testing Info")
logger.setLevel(0)
print("Logger currently at level '{0}'".format(logger.getEffectiveLevel()))
logger.info("Testing Info")
**output**
Logger currently at level '30'
Testing Warning
Logger currently at level '60'
Logger currently at level '50'
Logger currently at level '40'
Testing Error
Logger currently at level '30'
Logger currently at level '20'
Logger currently at level '10'
Logger currently at level '0'
I have no idea why the logging module is acting differently, considering I'm using [this](https://docs.python.org/3.3/howto/logging-cookbook.html#using-logging-in-multiple-modules) official documentation to try and make it happen.
You can see near the end how it's blatantly ignoring the level for `logger`.
Am I somehow bypassing `logger` and instead logging against a default? I don't
understand.
Thanks!
Answer: You forgot to add a handler to your logger. On Python 3.2 and later, if no handler is configured, an internal ["handler of last resort"](https://docs.python.org/3.3/library/logging.html#logging.lastResort), which has a level of `WARNING`, is used. So even though your messages are sent through the logger, they are discarded by the internal handler (which is only there to catch misconfigurations). On Python 2.x, you would have got a one-off message printed to `stderr`: `No handlers could be found for logger "root"`.
If you add a line
logger.addHandler(logging.StreamHandler())
before the first print, you will get behaviour like this:
Logger currently at level '30'
Testing Warning
Logger currently at level '60'
Logger currently at level '50'
Logger currently at level '40'
Testing Error
Logger currently at level '30'
Logger currently at level '20'
Testing Info
Logger currently at level '10'
Testing Info
Logger currently at level '0'
Testing Info
**Update:** There is no need to "add a handler manually" - this was just the
easiest way to illustrate what the problem was. You can use `basicConfig()` to
configure logging simply (console or file, limited options) or
use [`dictConfig()`](https://docs.python.org/2/library/logging.config.html#logging.config.dictConfig)
to configure more flexibly. These methods work in the same way across Python
2.7 and 3.2 (and later).
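For example, a minimal `basicConfig()` setup (the format string is just an illustration) that makes the script above print its messages as expected:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(levelname)s:%(name)s:%(message)s')
logger = logging.getLogger()
logger.info("Now visible")  # passes, since the root logger level is DEBUG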
|
"pwd" giving path on windows machine when run through python subprocess
Question: I wrote the following script called `file_I_executed.py`:
import subprocess
def main():
loc = subprocess.popen("pwd")
print loc
Which gave output:
C:\Python27\python.exe C:/Users/username/PycharmProjects/projectname/file_I_executed.py
However, when I try "pwd" on the windows cmd, I get:
C:\Users\username\somedirectory>pwd
'pwd' is not recognized as an internal or external command,
operable program or batch file.
What gives? How does subprocess work, exactly? Why is "pwd" giving me the
python path as well as the path of the script, when it clearly should not give
me that when I run this from the windows command line?
I am using python 2.7.1 from pycharm, on windows 7.
**CLARIFICATION:** I am fully aware that "pwd" is not a windows command.
However, the script shown above gave me the result I indicated, and I don't
understand why.
Answer: The output you're seeing when you run the program under PyCharm is not coming
from the `subprocess.popen("pwd")` call. In fact, this call is never being
executed at all!
You have a `main` function, but you do not have any code that _calls_
`main()`.
The output is simply PyCharm's default printout when it starts your program.
You will get the same output with an empty program. If you run your program
from the command line there will be no output.
If you add a `main()` call to the bottom of the file, you will get an error
when it tries to execute `subprocess.popen("pwd")`, because there is no such
function.
If you change it to the correct `subprocess.Popen("pwd")` you will then get
the expected error that there is no `pwd` command. (Thanks
[anarchos78](http://stackoverflow.com/users/734094/anarchos78) for pointing
this out.)
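If what you actually wanted was the working directory, here is a sketch of the usual fixes (`os.getcwd()` is the portable route and avoids a subprocess entirely; the `cd` trick is Windows-specific):

import os
import subprocess

def main():
    print os.getcwd()  # portable, no subprocess needed
    # on Windows, "cd" with no arguments prints the current directory;
    # shell=True is needed because "cd" is a cmd.exe built-in, not an executable
    p = subprocess.Popen("cd", shell=True, stdout=subprocess.PIPE)
    out, _ = p.communicate()
    print out.strip()

if __name__ == '__main__':
    main()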
PyCharm has an integrated debugger which can help troubleshoot things like
this. By stepping through the code, you can easily see which parts of the code
are executed or not.
|
Unorderable types in Python: NoneType() and int(). Using an interface to make this method
Question: I am in the final stages of finishing up a project that is due on July 6, 2015, trying to get all the errors fixed. I seem to have run across an error for which I have no discernible answer:
File "Z:\Jordan White's Folder\Concepts of Programming CS 4150 Summer Semester 2015\Projects\cs4150pythonproject\BooleanExpressions.py", line 30, in evaluate
value = self.expr1.evaluate() < self.expr2.evaluate()
TypeError: unorderable types: NoneType() < int()
The line is pointing to line 30 in this code:
'''
Created on Jun 22, 2015
@author: JordanWhite
'''
from RelationalOperators import RelationalOperator
class BooleanExpression():
def __init__(self, op, expr1, expr2):
if expr1 == None or expr2 == None:
raise Exception("null expression argument")
self.op = op
self.expr1 = expr1
self.expr2 = expr2
def evaluate(self):
value = True
int(0 if value is None else value)
if self.op == RelationalOperator.EQ_OP:
value = self.expr1.evaluate() == self.expr2.evaluate()
elif self.op == RelationalOperator.NE_OP:
value = self.expr1.evaluate() != self.expr2.evaluate()
elif self.op == RelationalOperator.GE_OP:
value = self.expr1.evaluate() >= self.expr2.evaluate()
elif self.op == RelationalOperator.GT_OP:
value = self.expr1.evaluate() > self.expr2.evaluate()
elif self.op == RelationalOperator.LE_OP:
value = self.expr1.evaluate() <= self.expr2.evaluate()
elif self.op == RelationalOperator.LT_OP:
value = self.expr1.evaluate() < self.expr2.evaluate()
return value
The expr1 variable is somehow being registered as a NoneType, when it is meant
to be an int() variable. Expression is an interface, as shown here:
'''
Created on Jun 18, 2015
@author: JordanWhite
'''
class Expression():
def evaluate(self):
pass
The goal is when the boolean expression is evaluated, it prints out in the
Parser as true, and when it's true, it fetches the value of x := 12 in the
Pascal program here:
Program a ;
begin
x := 4 ;
if < x 7 then
x := 12
else
x := 14 ;
write ( x )
end .
How on earth do I fix something like this, where I can get rid of the
unorderable types error and then move on with fixing more errors?
Answer: Two things:
* `int(0 if value is None else value)` doesn't accomplish anything because you are not saving the result -- maybe you meant `value = int(...)`?
* `value = self.expr1.evaluate() < self.expr2.evaluate()` The `self.expr1.evaluate()` call is returning `None`, which is what causes your problem. So your next step is to figure out why `None` is being returned from that expression. Note that `self.expr1` is _not_ None; its `evaluate` method is _returning_ None. A minimal sketch of how that typically happens is shown below.
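For illustration, here is a hypothetical subclass (not the OP's actual code) showing the usual way this happens: an `evaluate` that falls off the end without a `return`, so Python implicitly returns `None`:

class Expression():
    def evaluate(self):
        pass

class BrokenLiteral(Expression):
    def evaluate(self):
        value = 42
        # no "return value" here, so the call implicitly returns None

print(BrokenLiteral().evaluate())  # prints None, not 42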
|
Python trying to sort a list alphabetically after it has been sorted numerically
Question: Program: supposed to read a text and find the top ten most commonly used words
and sort them by frequency, then print the list in order. (this occurs when
the "--topcount" flag is called)
I am trying to slightly modify this program so that it, after finding the top
10 most common words from the text by frequency, then sorts the list
alphabetically and prints that instead, so that it is in alphabetical order
rather than numerical order.
current code:
import sys
def word_dictionary(filename):
word_count = {} #create dict
input = open(filename, 'r')
for line in input:
words = line.split()#split lines on whitespace
for word in words:
word = word.lower() #forces all found words into lowercase
if not word in word_count:
word_count[word] = 1
else:
word_count[word] = word_count[word] + 1
input.close()
return word_count
def print_words(filename):
word_count = word_dictionary(filename)
words = sorted(word_count.keys())
for word in words:
print word, word_count[word]
def get_count(word_count_tuple):
return word_count_tuple[1]
def print_top(filename):
word_count = word_dictionary(filename)
items = sorted(word_count.items(), key=get_count, reverse=True)
for item in items[:20]:
print item[0], item[1]
def main():
if len(sys.argv) != 3:
print 'usage: ./wordcount.py {--count | --topcount} file'
sys.exit(1)
option = sys.argv[1]
filename = sys.argv[2]
if option == '--count':
print_words(filename)
elif option == '--topcount':
print_top(filename)
else:
print 'unknown option: ' + option
sys.exit(1)
if __name__ == '__main__':
main()
I have attempted to do this with:
def get_alph(word_count_tuple):
return word_count_tuple[0]
to replace the "def get_count(word_count_tuple)" function and also modifying
the "print top" function so that
items = sorted(word_count.items(), key = get_alph)
to make a list in alphabetical order, but it did not work as intended and
instead printed the first 10 words of a list of all of the words in the text
sorted alphabetically.
Are there any suggestions to help make this program work as intended?
Answer: Sort the _slice_ of the sorted words:
def print_top(filename):
word_count = word_dictionary(filename)
items = sorted(word_count.items(), key=get_count, reverse=True)
for item in sorted(items[:20]):
print item[0], item[1]
This first produces a sorted list by count in `items`, then sorts the first 20
of that sorted list again for the alphabetical order.
Since your `items` are `(word, count)` tuples, you don't need a sort key here;
the tuples are sorted in lexicographic order too; two tuples are first
compared on the first value, and only if equal, by their second value, etc.
Note that sorting the whole `word_count` item list is overkill if you only
need the top K items here. Use the [`heapq.nlargest()`
function](https://docs.python.org/2/library/heapq.html#heapq.nlargest) here
instead; it is a O(NlogK) algorithm instead of O(NlogN); for a large N (large
number of words) this can make a significant difference:
import heapq
def print_top(filename):
word_count = word_dictionary(filename)
items = heapq.nlargest(20, word_count.items(), key=get_count)
for item in sorted(items):
print item[0], item[1]
|
python delete specific line and re-assign the line number
Question: I would like delete specific line and re-assign the line number:
eg:
0,abc,def
1,ghi,jkl
2,mno,pqr
3,stu,vwx
what I want: if line 1 is the line need to be delete, then output should be:
0,abc,def
1,mno,pqr
2,stu,vwx
What I have done so far:
f=open(file,'r')
lines = f.readlines()
f.close()
f = open(file,'w')
for line in lines:
if line.rsplit(',')[0] != 'line#':
f.write(line)
f.close()
The lines above can delete a specific line number, but I don't know how to rewrite the line number before the first ','.
Answer: I noticed that your account wasn't created in the past few hours, so I figure
that there's no harm in giving you the benefit of the doubt. You will really
have more fun on StackOverflow if you spend the time to learn its culture.
I wrote a solution that fits your question's criteria on a file that's already
written (you mentioned that you're opening a text file), so I assume it's a
CSV.
I figured that I'd answer your question differently than the other solutions
that implement the [CSV reader
library](https://docs.python.org/2/library/csv.html) and use a temporary file.
import re
numline_csv = re.compile(r"\d,")
# substitute your actual file opening here
so_31195910 = """
0,abc,def
1,ghi,jkl
2,mno,pqr
3,stu,vwx
"""
so = so_31195910.splitlines()
# this could be an input or whatever you need
delete_line = 1
line_bank = []
for l in so:
if l and not l.startswith(str(delete_line)+','):
print(l)
l = re.split(numline_csv, l)
line_bank.append(l[1])
so = []
for i,l in enumerate(line_bank):
so.append("%s,%s" % (i,l))
And the output:
>>> so
['0,abc,def', '1,mno,pqr', '2,stu,vwx']
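For comparison, a shorter sketch using the `csv` module and `enumerate` (this assumes Python 2 and a file named `data.csv`; both the file name and the in-place rewrite are assumptions, not from the question):

import csv

delete_line = 1
with open('data.csv', 'rb') as f:
    rows = [row for i, row in enumerate(csv.reader(f)) if i != delete_line]

with open('data.csv', 'wb') as f:
    writer = csv.writer(f)
    for new_num, row in enumerate(rows):
        writer.writerow([new_num] + row[1:])  # renumber the first column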
|
concatenate image in python
Question: Can anybody help identify the problem here?
I have code to concatenate H and L to present an image, and whenever I run it I get:
np.concatenate((H,L))
>> ValueError: zero-dimensional arrays cannot be concatenated
but I don't know why H and L are zero-dimensional. Thanks in advance.
import cv2
import cv
import numpy as np
c1=0.5
c2=0.25
img1=cv2.imread("Penguin-cartoon.png") ## Genuine Image
img=cv2.imread("Penguin-cartoon.png",cv2.CV_LOAD_IMAGE_GRAYSCALE) #gray_scaled Image
A=img.astype(np.int16)
D=[]
C=[]
x,y=img.shape
B = np.empty((x,y), dtype = np.int16)
for j in range(1,y):
for i in range (0,x/2 -1 ):
if i==0:
P=A[j,2*i+2]*c1
B[j,2*i+1]=A[j,2*i+1]-P
elif i==x/2:
U=B[j,2*i-1]*c2
B[j,2*i]=A[j,2*i]+U
else :
P=(A[j,2*i-1]+A[j,2*i+2])*c1
B[j,2*i+1]=A[j,2*i+1]-P
U=(B[j,2*i-1]+B[j,2*i+1])*c2
B[j,2*i]=A[j,2*i]+U
for j in range(1,y):
for i in range (0,x/2 -1 ):
D=B[j,2*i-1]
C=B[j,2*i]
H=D.astype(np.uint8)
L=C.astype(np.uint8)
np.concatenate((H,L))
Answer: The objects `H` and `L` you are concatenating are scalars, not arrays, hence the error. Their assignment in the last `for` loop does not make sense:
for j in range(1,y):
for i in range (0,x/2 -1 ):
D=B[j,2*i-1]
C=B[j,2*i]
H=D.astype(np.uint8)
L=C.astype(np.uint8)
BTW, you should check some tutorials on the use of numpy. The idea is that in
most cases, you can use vectorized numpy operations instead of iterating over
the pixels of your array in Python. The former is much faster.
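As a rough sketch of what that vectorized version could look like, assuming the intent is to split the even and odd columns of `B` into the L and H halves (the array here is a stand-in, not the OP's image):

import numpy as np

B = np.arange(24, dtype=np.int16).reshape(4, 6)  # stand-in for the transformed image
L = B[:, 0::2].astype(np.uint8)  # even-indexed columns
H = B[:, 1::2].astype(np.uint8)  # odd-indexed columns
out = np.concatenate((H, L), axis=1)  # both operands are 2-D, so this works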
|
How to use <Button-4> & <Button-5> to zoom in and out the image in python
Question:
def onWheel(event):
d = event.delta
if d < 0:
amt=0.9
else:
amt=1.1
canvas.scale(ALL, 200,200 , amt, amt)
canvas.bind("<Button-4>&<Button-5>", onWheel)
canvas.focus_set()
Here is my code. I run it in a tkinter window on an Ubuntu machine, but I can only zoom in on the image; zooming out does not work. I want to zoom in and out of the image using Button-4 and Button-5.
Answer: Finally I got the answer. Here is my code; I changed the above code as follows:
from Tkinter import *
root = Tk()
Label(root, text='Zoom in & Zoom out using the Mouse Wheel', font=('Times', 12)).pack()
canvas = Canvas(root, width=400, height=400)
canvas.pack(fill=BOTH, expand=1)
canvas.create_rectangle(175,175,225,225,fill='blue')
def zoomin(event):
d = event.delta
if d < 0:
amt=0.9
else:
amt=1.1
canvas.scale(ALL, 200,200 , amt,amt)
canvas.bind("<Button-4>", zoomin)
def zoomout(event):
d = event.delta
if d >0:
amt=1.1
else:
amt=0.9
canvas.scale(ALL, 200,200 , amt, amt)
canvas.bind("<Button-5>", zoomout)
canvas.focus_set()
root.mainloop()
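For what it's worth, on X11 the scroll wheel arrives as `<Button-4>`/`<Button-5>` events with `event.delta` left at 0, which is why the original delta test only ever took one branch. A sketch of a single handler keyed on `event.num` instead:

def on_wheel(event):
    # event.num is 4 for scroll-up and 5 for scroll-down on X11
    amt = 1.1 if event.num == 4 else 0.9
    canvas.scale(ALL, 200, 200, amt, amt)

canvas.bind("<Button-4>", on_wheel)
canvas.bind("<Button-5>", on_wheel)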
|
Why is accessing a namedtuple by field name slower than accessing a class's member variable?
Question: We were doing some experiments to compare the access time in classes and named
tuples and observed something strange.
import time
from collections import namedtuple as namedtuple
class myclass(object):
def __init__(self, _name, _dob, _value):
self.name = _name
self.dob = _dob
self.value = _value
randomperson1 = myclass( 'randomperson', 10102000, 10.45 )
person = namedtuple( 'person', 'name dob value' )
randomperson2 = person( 'randomperson', 10102000, 10.45)
While using timeit of ipython, the following was observed:
%timeit randomperson1.name,randomperson1.value,randomperson1.dob
10000000 loops, best of 3: 125 ns per loop
%timeit randomperson2.name,randomperson2.value,randomperson2.dob
1000000 loops, best of 3: 320 ns per loop
%timeit randomperson2[0],randomperson2[1],randomperson2[2]
10000000 loops, best of 3: 127 ns per loop
Why's accessing a namedtuple by field name so much slower than accessing a
class's member variable? Is there any way to speed this up?
Answer: That's because in a `namedtuple`, the attributes `name`, `value` and `dob` are not simple attributes on the instance. They are actually turned into something more complicated:
**collections.py**
_field_template = '''\
{name} = _property(_itemgetter({index:d}), doc='Alias for field number {index:d}')
'''
e.g.
dob = property(itemgetter(2), doc='Alias for field number 2')
So as you can see, each attribute access goes through an extra layer (a `property` wrapping an `itemgetter`). The people who created `namedtuple` decided to keep the memory efficiency of a plain tuple at the cost of some CPU efficiency on named access.
This can be easily observed when you create your own custom class emulating
this:
from operator import itemgetter
class CustomTuple(tuple):
my_attr = property(itemgetter(0))
test_tuple = CustomTuple([1])
and now measure access to `test_tuple.my_attr`. You should get pretty much the
same results.
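A hedged way to check this outside IPython, using the stdlib `timeit` module (absolute numbers will vary by machine):

import timeit

setup = '''
from operator import itemgetter
class CustomTuple(tuple):
    my_attr = property(itemgetter(0))
t = CustomTuple([1])
'''
print(timeit.timeit('t.my_attr', setup=setup))  # goes through property + itemgetter
print(timeit.timeit('t[0]', setup=setup))       # plain tuple indexing, faster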
|
pandas add row instead of column
Question: I'm new to pandas, but trying to simply add a row
class Security:
def __init__(self):
self.structure = ['timestamp', 'open', 'high', 'low', 'close', 'vol']
self.df = pd.DataFrame(columns=self.structure) # index =
def whats_inside(self):
return self.df
"""
Some skipped code...
"""
def add_data(self, timestamp, open, high, low, close, vol):
data = [timestamp, open, high, low, close, vol]
self.df = self.df.append (data)
sec = Security()
print sec.whats_inside()
sec.add_data ('2015/06/01', '1', '2', '0.5', '1', '100')
print sec.whats_inside()
but the output is:
0 close high low open timestamp vol
0 2015/06/01 NaN NaN NaN NaN NaN NaN
1 1 NaN NaN NaN NaN NaN NaN
2 2 NaN NaN NaN NaN NaN NaN
3 0.5 NaN NaN NaN NaN NaN NaN
4 1 NaN NaN NaN NaN NaN NaN
5 100 NaN NaN NaN NaN NaN NaN
This means I'm adding a column instead of a row. Yes, I've tried to google, but I still don't see how to do this in a simple, pythonic way.
p.s. I know that's simple, but I'm just missing something important.
Answer: There are several ways to add a new row. Perhaps the easiest one is (if you
want to add the row to the end) is to use `loc`:
df.loc[len(df)] = ['val_a', 'val_b', .... ]
`loc` expects an index. `len(df)` will return the number of rows in the
dataframe so the new row will be added to the end of the dataframe.
`['val_a', 'val_b', ....]` is a list of the row's values, in the same order as the columns, so the list's length must equal the number of columns; otherwise you will get a `ValueError` exception. One exception to this: if you want all the columns to have the same value, you may pass that value as a single-element list, for example `df.loc[len(df)] = ['aa']`.
**NOTE:** it is a good idea to always use `reset_index` before using this method, because if you ever delete a row or work on a filtered dataframe, the row indexes are not guaranteed to be in sync with the number of rows.
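Applied to the `Security` class from the question, a minimal sketch of `add_data` might look like this:

def add_data(self, timestamp, open, high, low, close, vol):
    # one row; values must be in the same order as self.structure
    self.df.loc[len(self.df)] = [timestamp, open, high, low, close, vol]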
|
Import a .so file from Django
Question: I have a C++ code
#include <Python.h>
static PyObject* py_veripy(PyObject* self, PyObject* args){
/* BODY */
return Py_BuildValue("i", 1);
}
// Bind Python function names to our C functions
static PyMethodDef veripy_methods[] = {
{"veripy", py_veripy, METH_VARARGS},
{NULL, NULL}
};
// Python calls this to let us initialize our module
extern "C" void initveripy(){
(void) Py_InitModule("veripy", veripy_methods);
}
I have compiled it and created a .so file. I want to use this library in one of my Django apps, but I am confused as to where to keep it in my directory tree. I have tried keeping it with the code, obviously, but Python doesn't seem to find it. I found [this](http://www.dreamincode.net/forums/topic/252650-making-importing-and-using-a-c-library-in-python-linux-version/) tutorial, which is similar to my case, but the C++ code there doesn't have Python.h included in it. Please help.
Answer: I think you should create a setup.py file, so that it compiles your library and puts it somewhere importable.
For example, here is a setup.py file that I used for a personal project:
from setuptools import setup, Extension
setup(
name="modulename",
version="0.1",
test_suite = "nose.collector",
author='<author name>',
setup_requires=['nose>=0.11'],
description="My super module implementation",
ext_modules=[Extension("modulename", ["src/mysupermodule.c"])]
)
This will generate a 'build/' folder, but of course you can find more about
setuptools on [the official documentation
page](https://pythonhosted.org/setuptools/setuptools.html).
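A possible follow-up (standard setuptools behaviour, not specific to this project): during development you can build the extension in place, so the resulting .so lands next to your code where Django can import it:

python setup.py build_ext --inplace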
|
Using python with sqlite3
Question: I am using python 3.4.2 with sqlite3.
import pypyodbc as pyodbc
import sqlite3
print ("Connecting via ODBC")
#conn = pyodbc.connect('DSN=NZSQL')
# get a connection, if a connect cannot be made an exception will be raised here
conn = pyodbc.connect("DRIVER=NetezzaSQL;SERVER=xyz;port=123;DATABASE=qwe;UID=hjk;PWD=lkm;")
cur=conn.cursor()
print("connected")
def fetch():
sql_string="CREATE TABLE A as select distinct a.cust,a.loc, a.key,a.bus, \
a.prod,a.sty_code,a.color_code,trim(sty_code) AS style_code ,trim(color_code) AS colour_code, \
price,collect, silh, sty_grp,net_sales_amt_usd,net_sales_qty, AUR,mfsrp \
from final a where a.date between '29Jun2014' and '30May2015' \
cur.execute('%s'%(sql_string))
cur.execute("select * A;")
sales=list(cur)
for row in sales:
print(row)
output: (Decimal('1567678888'), 'QWERTY', None, datetime.date(2015, 1, 20),
Decimal('1567675888'), '52122', 'LID8T', '52122', 'LID8T', '52122_LID8T',
'Legacy CAP', 'Handle', ' CAP Pouch', Decimal('282.00000'), Decimal('2'),
Decimal('141.00000000000000000000000'), '1. $0 - $200')
Questions:
1. As this value is directly imported from a Netezza database, can we format the values into a proper format? How?
2. How do we display Decimal('1567678888') as the number 1567678888?
3. How do we display datetime.date(2015, 1, 20) in a proper date/time format?
Answer: Firstly, be more specific with your question. Now, for your question:
To get the top N records in sqlite3,
SELECT * FROM SO_table LIMIT 5;
For ORDER BY:
SELECT * FROM SO_table ORDER BY upvotes ASC;
|
Sending data as key-value pair using fetch polyfill in react-native
Question: The following code is to make HTTP POST request with fetch polyfill:
fetch(url, {
method: 'post',
body: JSON.stringify({
'token': this.state.token
})
})
.then((response) => response.json())
.then((responseData) => {
console.log(responseData)
})
.done();
This request sends the data as a stringified JSON object. Is there a way to send data as key-value pairs, similar to `requests.post(url, data=payload)` in Python?
Answer: Sounds like you want the same format as a querystring, so import/require a
package like <https://www.npmjs.com/package/query-string> which doesn't appear
to depend on any browser features and has a stringify method:
queryString.stringify({
foo: 'bar',
nested: JSON.stringify({
unicorn: 'cake'
})
});
//=> foo=bar&nested=%7B%22unicorn%22%3A%22cake%22%7D
Alternatively you could just use the relevant part of its source code, though
this would still be subject to [its
license](https://github.com/sindresorhus/query-string/blob/master/license):
function toQueryString(obj) {
return obj ? Object.keys(obj).sort().map(function (key) {
var val = obj[key];
if (Array.isArray(val)) {
return val.sort().map(function (val2) {
return encodeURIComponent(key) + '=' + encodeURIComponent(val2);
}).join('&');
}
return encodeURIComponent(key) + '=' + encodeURIComponent(val);
}).join('&') : '';
}
You can then use the return value in your `body` parameter in `fetch`:
fetch(url, {
method: 'post',
body: toQueryString({ 'token': this.state.token })
})
|
pyinstaller not reading my hook file and doesn't work with win32com.shell
Question: According to the docs of pyinstaller, if you name a file `hook-fully.qualified.import.name.py` it will read this file whenever you do an import of the matching module.
However, my script looks like this:
import pythoncom
from win32com.shell import shell
from win32com import storagecon
...
And pyinstaller refuses to recognize `win32com.shell` with the following error: `ImportError: No module named 'win32com.shell'`. So I've created `hook-win32com.shell.py` with the following code:
hiddenimports = [
'win32com.shell.shell',
]
pyinstaller never reads this file; however, it does read `hook-win32com.py`, so I've also tried just adding `'win32com.shell'` to that hook file, but that didn't do much.
1. How do I get pyinstaller to read my hook file
2. How do I get it to include `win32com.shell`? (So i get rid of "No module named" in runtime of the .exe)
Answer: This seems to be the case with:
<https://github.com/pyinstaller/pyinstaller/issues/1322>
Apparently the new python3 graph is now used everywhere in pyinstaller, so
this bug seems to apply for python2 users as well.
I suggest rewriting the win32com.shell calls with ctypes (`ctypes.windll.shell32`), or cffi.
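For illustration only, here is a minimal ctypes sketch of one common shell call (`SHGetFolderPath`); which shell function you actually need is an assumption here, not part of the original answer:

import ctypes
from ctypes import wintypes

CSIDL_APPDATA = 0x001a  # roaming AppData folder
buf = ctypes.create_unicode_buffer(wintypes.MAX_PATH)
ctypes.windll.shell32.SHGetFolderPathW(None, CSIDL_APPDATA, None, 0, buf)
print(buf.value)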
|
How to allow users to login protected flask-rest-api in Angularjs using HTTP Authentication?
Question: **_Guys If My Question is not clear please comment below._**
**Basic HTTP Authentication for REST API in flask+angularjs**
I just want to log in to the flask-rest-api from angularjs, **but I don't know how to send the login info (username and password) to the flask-rest-api.** In this app there is one table which loads its data after a successful login. Here we are **not using any database**; the username and password are **hard-coded in the rest-server code** as username="admin" and password="1234". We can modify, update and add new data. I took this from [this blog](http://blog.miguelgrinberg.com/post/designing-a-restful-api-using-flask-restful); there they are using Knockout, and I am trying to do it in Angularjs.
**Login form**
<div id="login" class="modal hide fade" tabindex="=1" role="dialog" aria-labelledby="loginLabel" aria-hidden="true">
<div class="modal-header">
<h3 id="loginLabel">Sign In</h3>
</div>
<div class="modal-body">
<form class="form-horizontal">
<div class="control-group">
<label class="control-label" for="inputUsername">Username</label>
<div class="controls">
<input ng-model="username" type="text" id="inputUsername" placeholder="Username">
</div>
</div>
<div class="control-group">
<label class="control-label" for="inputPassword">Password</label>
<div class="controls">
<input ng-model="password" type="password" id="inputPassword" placeholder="Password">
</div>
</div>
</form>
</div>
<div class="modal-footer">
<button ng-click="submitData(username, password)" class="btn btn-primary" data-dismiss="modal" aria-hidden="true">Sign In</button>
</div>
</div>
**HTML Code Which call Login Model**
<div class="navbar">
<div class="navbar-inner">
<a class="btn" data-toggle="modal" data-target="#login">Login</a>
</div>
</div>
**AngulurJS code**
<script>
var app = angular.module('myApp', []);
app.controller('tasksCtrl', function($scope, $http) {
$scope.submitData=function(username, password){
var config={
params:{
username:username, password:password
}
};
};
//$http.get("data.json")
$http.get("/todo/api/v1.0/tasks")
.success(function(response) {
console.log(response.tasks)
$scope.tasks = response.tasks;
});
$scope.editTask = function(task) {
$scope.selectedTask = task;
};
$scope.removeRow = function(task) {
$scope.tasks.splice(task, 1);
};
$scope.addNewTask = function() {
//$scope.tasks.push({title :$scope.task1,description: $scope.description1});
$scope.tasks.push({title: $scope.task1, description: $scope.description1});
$scope.task1 = '';
$scope.description1 = '';
// $scope.tasks.push('dhsh');
};
});
</script>
**REST-API-SERVER**
import six
from flask import Flask, jsonify, abort, request, make_response, url_for, render_template
from flask.ext.httpauth import HTTPBasicAuth
app = Flask(__name__, static_url_path="")
auth = HTTPBasicAuth()
@auth.get_password
def get_password(username):
if username == 'admin':
return '1234'
return None
@auth.error_handler
def unauthorized():
return make_response(jsonify({'error': 'Unauthorized access'}), 403)
@app.errorhandler(400)
def bad_request(error):
return make_response(jsonify({'error': 'Bad request'}), 400)
@app.errorhandler(404)
def not_found(error):
return make_response(jsonify({'error': 'Not found'}), 404)
tasks = [
{
'id': 1,
'title': u'Buy groceries',
'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
'done': False
},
{
'id': 2,
'title': u'Learn Python',
'description': u'Need to find a good Python tutorial on the web',
'done': False
}
]
def make_public_task(task):
new_task = {}
for field in task:
if field == 'id':
new_task['uri'] = url_for('get_task', task_id=task['id'],
_external=True)
else:
new_task[field] = task[field]
return new_task
@app.route('/')
@auth.login_required
def index():
return render_template('index.html')
@app.route('/todo/api/v1.0/tasks', methods=['GET'])
@auth.login_required
def get_tasks():
return jsonify({'tasks': [make_public_task(task) for task in tasks]})
@app.route('/todo/api/v1.0/tasks/<int:task_id>', methods=['GET'])
@auth.login_required
def get_task(task_id):
task = [task for task in tasks if task['id'] == task_id]
if len(task) == 0:
abort(404)
return jsonify({'task': make_public_task(task[0])})
@app.route('/todo/api/v1.0/tasks', methods=['POST'])
@auth.login_required
def create_task():
if not request.json or 'title' not in request.json:
abort(400)
task = {
'id': tasks[-1]['id'] + 1,
'title': request.json['title'],
'description': request.json.get('description', ""),
'done': False
}
tasks.append(task)
return jsonify({'task': make_public_task(task)}), 201
@app.route('/todo/api/v1.0/tasks/<int:task_id>', methods=['PUT'])
@auth.login_required
def update_task(task_id):
task = [task for task in tasks if task['id'] == task_id]
if len(task) == 0:
abort(404)
if not request.json:
abort(400)
if 'title' in request.json and \
not isinstance(request.json['title'], six.string_types):
abort(400)
if 'description' in request.json and \
not isinstance(request.json['description'], six.string_types):
abort(400)
if 'done' in request.json and type(request.json['done']) is not bool:
abort(400)
task[0]['title'] = request.json.get('title', task[0]['title'])
task[0]['description'] = request.json.get('description',
task[0]['description'])
task[0]['done'] = request.json.get('done', task[0]['done'])
return jsonify({'task': make_public_task(task[0])})
@app.route('/todo/api/v1.0/tasks/<int:task_id>', methods=['DELETE'])
@auth.login_required
def delete_task(task_id):
task = [task for task in tasks if task['id'] == task_id]
if len(task) == 0:
abort(404)
tasks.remove(task[0])
return jsonify({'result': True})
if __name__ == '__main__':
app.run(debug=True)
Answer: The way you do basic authentication from the client side is by supplying an `Authorization: Basic <encoded username:password>` header in the HTTP request. The encoding of username:password is done in the specific manner described below:
> 1. Username and password are combined into a string "username:password"
> 2. The resulting string is then encoded using the RFC2045-MIME variant of
> Base64, except not limited to 76 chars/line
So modify your REST calls to include the above header in your Angularjs code, or find a library to do that.
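To see what that header should contain for the hard-coded credentials in the question, here is a quick Python sketch:

import base64

token = base64.b64encode(b'admin:1234').decode('ascii')
print('Authorization: Basic ' + token)
# Authorization: Basic YWRtaW46MTIzNA==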
As @Boris mentioned in the comments above, see this link: <http://jasonwatmore.com/post/2014/05/26/AngularJS-Basic-HTTP-Authentication-Example.aspx> - it has a nice Angular service written to do just what you want.
|
render_to_string() complains of a NoneType error
Question: So I'm testing my code in `python manage.py shell`, however I'm getting the
following error: `AttributeError: 'NoneType' object has no attribute 'find'`.
Here's what I've done in the app's `models.py`:
from djmoney.models.fields import MoneyField
from django.template.loader import render_to_string
from django.db import models
from datetime import date
import mandrill
<...>
def email_invoice(self):
dated, amount, subtenant = self.dated, self.amount, self.subtenant.name
net_amount = self.amount / 1.13
hst = amount - net_amount
context = {
'date': dated,
'amount': amount,
'net_amount': net_amount,
'hst': hst,
'subtenant': subtenant
}
msg_html = render_to_string('email/invoice.html', context)
message = {
'to': [{'email': '<left out>',
'name': '<left out>',
'type': 'to'}],
'subject': 'Testing',
'html': msg_html,
'from_email': '<left out>',
'from_name': '<left out>'
}
try:
mandrill_client = mandrill.Mandrill('<left out>')
result = mandrill_client.messages.send(message=message)
except mandrill.Error as e:
print('A mandrill error occurred: %s - %s' % (e.__class__, e))
raise
The traceback seems to suggest that the issue is with `render_to_string()`,
but I don't see what I've done wrong with its implementation: I tested that
the template exists (it's an email template) in the shell. In fact,
`render_to_string('email/invoice.html')` without a context works fine. The
issue is with the context.
I tried playing around with a simple dict in the shell, like `context =
{'hst': 123}` and that works too. For some reason it doesn't like my dict
above, and complains of a NoneType. I individually checked each object field
to make sure it exists and that it returns a value in the shell.
Any idea where this NoneType is coming from?
EDIT: Here is the full traceback
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/cyehia/Documents/Apps/gcap/rental/models.py", line 41, in email_invoice
msg_html = render_to_string('email/invoice.html', context)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/loader.py", line 99, in render_to_string
return template.render(context, request)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/backends/django.py", line 74, in render
return self.template.render(context)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/base.py", line 209, in render
return self._render(context)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/template/debug.py", line 92, in render
output = force_text(output)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/utils/encoding.py", line 90, in force_text
s = six.text_type(s)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/djmoney/models/fields.py", line 133, in __str__
locale = self.__get_current_locale()
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/djmoney/models/fields.py", line 103, in __get_current_locale
locale = translation.to_locale(translation.get_language())
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/utils/translation/__init__.py", line 185, in to_locale
return _trans.to_locale(language)
File "/Users/cyehia/Documents/Apps/venv/lib/python3.4/site-packages/django/utils/translation/trans_real.py", line 75, in to_locale
p = language.find('-')
AttributeError: 'NoneType' object has no attribute 'find'
Answer: The issue seems to be an [incompatibility between Django 1.8 and django-money](http://github.com/django-money/django-money/issues/118). I'd try manually activating a language with:
from django.utils.translation import activate
def email_invoice(self):
<....>
activate('en')
msg_html = render_to_string('email/invoice.html', context)
|
How to perform local polynomial fitting in Python
Question: I have 200k data points and I'm trying to obtain the derivative of a fitted polynomial. I divided my data set into smaller ones, one every 0.5 K; the data is Voltage vs. Temperature. My code roughly looks like this:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
testset=pd.read_csv('150615H0.csv',sep='\t')
x=np.linspace(1,220,219)
ub=min(testset['T(K)'])
lb=min(testset['T(K)'])-1
q={i:testset[(testset['T(K)'] < ub+i) & (testset['T(K)'] > lb+i)] for i in x}
f={j:np.polyfit(q[j]['T(K)'],q[j]['Vol(V)'],4) for j in q}
fs={k:np.poly1d(f[k]) for k in f}
fsd={l:np.polyder(fs[l],1) for l in fs}
for kk in q:
plt.plot(q[kk]['T(K)'],fsd[kk](q[kk]['T(K)']),color='blue',linewidth=2,label='fit')
Unsurprisingly, the derivative is discontinuous, and I don't like that. Is there any other way to fit polynomials locally and get a continuous derivative at the same time?
Answer: Have a look at the [Savitzky-Golay filter](https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter) for efficient local polynomial fitting.
It is implemented, for instance, in [`scipy.signal.savgol_filter`](http://docs.scipy.org/doc/scipy-dev/reference/generated/scipy.signal.savgol_filter.html#scipy.signal.savgol_filter).
The derivative of the fitted polynomial can be obtained with the `deriv=1`
argument.
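A minimal sketch (the data here is synthetic; it assumes the voltage is sampled on a roughly uniform temperature grid with step `dT`, which Savitzky-Golay requires):

import numpy as np
from scipy.signal import savgol_filter

T = np.linspace(1, 220, 200000)  # hypothetical uniform temperature grid
V = np.sin(T / 20.0)             # stand-in for the measured voltage
dT = T[1] - T[0]
# window_length must be odd; deriv=1 returns the smoothed first derivative
dV_dT = savgol_filter(V, window_length=101, polyorder=4, deriv=1, delta=dT)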
|
How to display all the tables created in sqlite database on python3
Question:
import sqlite3
conn = sqlite3.connect('boo2.db')
c = conn.cursor()
x=c.execute("SELECT * FROM sqlite_master where type='table'")
for y in x:
print(x)
the output is
********************************
<sqlite3.Cursor object at 0x00272DE0>
Process finished with exit code 0
**************************************
But no table names are printed. How do I get the table names?
Answer: You need to fetch the results from the query:
import sqlite3
conn = sqlite3.connect('boo2.db')
c = conn.cursor()
x=c.execute("SELECT * FROM sqlite_master where type='table'")
for y in x.fetchall():
print(y)
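If you only want the names, you can also select just the `name` column (a small variation on the answer above):

for (name,) in c.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)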
|
Python - stop WSGI application after specific request
Question: I need to create an application that ends after receiving a specific request. I use `wsgiref.simple_server` and run request handling in a separate thread. Here is my code:
from wsgiref.simple_server import make_server
import re
import threading
import urllib2
def webapp(environ, start_response):
path = environ.get('PATH_INFO', '').lstrip('/')
for regex, callback in urls:
match = re.search(regex, path)
if match is not None:
environ['app.url_args'] = match.groups()
return callback(environ, start_response)
return not_found(environ, start_response)
KEEP_RUNNING = True
srv = make_server('localhost', 8081, webapp)
def t_serve():
while KEEP_RUNNING:
srv.handle_request()
def main():
t = threading.Thread(target=t_serve)
t.start()
print 'Service is running.'
t.join()
def index(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/html')])
return ['Service works']
def t_stop_service():
print 'Service shutdown'
urllib2.urlopen('http://localhost:8081/')
def stop_service(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/html')])
global KEEP_RUNNING
KEEP_RUNNING = False
threading.Thread(target=t_stop_service).start()
return ['Service is stopping']
def not_found(environ, start_response):
"""Called if no URL matches."""
start_response('404 NOT FOUND', [('Content-Type', 'text/html')])
return ['<h1>Not Found</h1>']
urls = [
(r'^$', index),
(r'^stop/?$', stop_service)
]
if __name__ == '__main__':
main()
After I make the "stop" request, request processing ends, but the program's process does not exit. How do I fix it?
Answer: It depends on the WSGI server whether that will work; with good-quality ones it usually doesn't, for good reasons. But have you tried using `sys.exit()` from the request handler?
If you must, you could also use `os._exit(1)`. This will cause Python to exit
immediately, although it will not allow any cleanup code to run.
|
generate a random string with only certain characters allowed to repeat in python
Question: Okay, I'm trying to generate a 10 character string containing specific
characters. With the conditions being, the letters can NOT be repeats (but the
numbers CAN), and the string can ONLY contain a total of TWO letters, no more,
no less. The final string output MUST be 8 numbers, and 2 letters.
numbers = ["1", "2", "3"]
letters = ["X", "Y", "Z"]
i.e. the string output should be similar to
123X21Y213
The problem is, I don't know how to do this, elegantly. I did create a
solution, but it was way too many lines, and it involved constantly checking
the previous character of the string. I'm really bad at this, and need
something simpler.
Any help would be appreciated.
Answer:
numbers = ["1", "2", "3"]
letters = ["X", "Y", "Z"]
from random import sample, shuffle
samp = sample(letters,2)+sample(numbers*3,8)
shuffle(samp)
print("".join(samp))
113332X2Z2
(`numbers*3` just repeats the digit pool so that `sample` can draw 8 digits even though each digit may repeat.) Or use `choice` and `range`:
from random import sample, shuffle,choice
samp = sample(letters,2)+[choice(numbers) for _ in range(8)]
shuffle(samp)
print("".join(samp))
1212ZX1131
|
Python audio just makes the windows ding noises
Question: So, I am using Python3 making something that plays songs. I have it working so
if I press 1, it plays the playlist, if I press 2, it plays the first song,
and if I press 3, it plays the second song. It works with Circles, but in the
playlist once it gets to Bullseye, it just dings (Like when a notification
comes up and you click somewhere else) constantly. When you press 3, it dings
once and sits there. I think it may be a connection with the song
(BullsEye.mp3) is this my code or has anyone else had this issue before?
from time import *
import winsound
from winsound import *
input = input('1 for playlist - 2 for Circles - 3 for BullsEye ')
var = int(input)
while var==1:
winsound.PlaySound("Circles.mp3", winsound.SND_ALIAS)
winsound.PlaySound("BullsEye.mp3", winsound.SND_ALIAS)
if var==2:
winsound.PlaySound("Circles.mp3", winsound.SND_ALIAS)
if var==3:
winsound.PlaySound("BullsEye.mp3", winsound.SND_ALIAS)
Answer: Use `winsound.SND_FILENAME` instead of `winsound.SND_ALIAS` if you want to pass a filename rather than a predefined alias such as `'SystemExit'`; with `SND_ALIAS`, an unrecognized alias (which an arbitrary filename likely is) just plays the default sound:
#!/usr/bin/env python3
audio_files = ["Circles.wav", "BullsEye.wav"]
def play(filename):
winsound.PlaySound(filename, winsound.SND_FILENAME)
choice = int(input('1 for playlist - 2 for Circles - 3 for BullsEye '))
if choice == 1:
for filename in audio_files:
play(filename)
elif choice in {2, 3}:
play(audio_files[choice-2])
else:
assert 0
Note: `PlaySound` plays wav files, not mp3. To play mp3 using a default
player, you could use `os.startfile()`.
|
Extending django user model and errors
Question: django 1.8.2
**this is my model:**
class AppUser(AbstractUser):
_SEX = (
('M', 'Male'),
('F', 'Female'),
)
_pregex = RegexValidator(regex=r'^\+?1?\d{9,15}$', message="Phone number must be entered in the format: '+999999999'. Up to 15 digits allowed.")
phone = models.CharField(validators=[_pregex], max_length=16, blank=True)
gender = models.CharField(max_length=1, blank=True, choices=_SEX)
birthday = models.DateField(blank=True)
vericode = models.CharField(max_length=40, blank=True) # verification code over SMS?
verified = models.DateTimeField(null=True) # datetime stored when verification happened
@property
def age(self):
today = date.today()
return today.year - self.birthday.year - ((today.month, today.day) < (self.birthday.month, self.birthday.day))
**this is my settings:**
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# register externals
EXTERNAL_LIBS_PATH = os.path.join(BASE_DIR, "_externals", "libs")
EXTERNAL_APPS_PATH = os.path.join(BASE_DIR, "_externals", "apps")
APPS_PATH = os.path.join(BASE_DIR, "apps")
sys.path = ["", EXTERNAL_APPS_PATH, EXTERNAL_LIBS_PATH, APPS_PATH] + sys.path
# TEST PATH
TEST_ASSETS = os.path.join(BASE_DIR, "_test")
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'suit',
'django.contrib.auth',
'django.contrib.admin',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.postgres',
'cacheops',
'rest_framework',
'rest_framework.authtoken',
'corsheaders',
'djoser',
'consents'
)
# Custom model for Auth
# AUTH_USER_MODEL = 'consents.AppUser'
**my folder structure is this**
app
-settings.py etc
apps
-consents
In settings.py I added apps path: APPS_PATH = os.path.join(BASE_DIR, "apps")
to sys.path,
_when I run python manage.py syncdb (or anything else) I get this:_
(cmr) F:\_Projects\cmr\containers\backend>python manage.py syncdb
F:\_Projects\cmr\.venv\cmr\lib\importlib\_bootstrap.py:321: RemovedInDjango19Warning: django.contrib.contenttypes.generic is deprecated and will be removed in Django 1.9. Its contents have been moved to the fields, forms, and admin submodules of django.contrib.contenttypes.
return f(*args, **kwds)
SystemCheckError: System check identified some issues:
ERRORS:
auth.User.groups: (fields.E304) Reverse accessor for 'User.groups' clashes with reverse accessor for 'AppUser.groups'.
HINT: Add or change a related_name argument to the definition for 'User.groups' or 'AppUser.groups'.
auth.User.user_permissions: (fields.E304) Reverse accessor for 'User.user_permissions' clashes with reverse accessor for 'AppUser.user_permissions'.
HINT: Add or change a related_name argument to the definition for 'User.user_permissions' or 'AppUser.user_permissions'.
consents.AppUser.groups: (fields.E304) Reverse accessor for 'AppUser.groups' clashes with reverse accessor for 'User.groups'.
HINT: Add or change a related_name argument to the definition for 'AppUser.groups' or 'User.groups'.
consents.AppUser.user_permissions: (fields.E304) Reverse accessor for 'AppUser.user_permissions' clashes with reverse accessor for 'User.user_permissions'.
HINT: Add or change a related_name argument to the definition for 'AppUser.user_permissions' or 'User.user_permissions'.
**If I uncomment this line in settings**
# AUTH_USER_MODEL = 'consents.AppUser'
I get another error:
ValueError: Dependency on unknown app: consents
I just need to add a few fields to the default User model (I don't want to create a totally new auth class subclassing AbstractBaseUser).
**So what am I doing wrong?**
Answer: **solution**
1. Erase and re-create the db
2. Remove *.pyc files
3. Remove the migrations folders
Then `python manage.py makemigrations` worked perfectly.
|
Bundling data with your .spec file in PyInstaller
Question: So I've read all of the questions here and cannot for the life of me see why
this doesn't work. I have a .spec file that looks like this:
# -*- mode: python -*-
block_cipher = None
a = Analysis(['newtestsphinx.py'],
pathex=['C:\\Program Files (x86)\\speechfolder'],
hiddenimports=[],
hookspath=None,
runtime_hooks=None,
excludes=None,
cipher=block_cipher)
pyz = PYZ(a.pure,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas + [('grammar2.jsgf', 'C:\\Program Files (x86)\\speechfolder\\grammar2.jsgf', 'DATA')],
name='newtestsphinx.exe',
debug=False,
strip=None,
upx=True,
console=True )
So, like all of the examples (if I understand them), I added 'grammar2.jsgf' to the bundle in the root directory. I believe the format of this is
['path_to_put_in', 'path_its_in_now', 'label']
So then I run the command to create my new file:
pyinstaller --onefile newtestsphinx.spec
and the first thing I do now in my code is this:
print os.path.isfile('grammar2.jsgf')
it returns false 100%, and my program also can't find the file to use it. Any
help would be awesome, thanks!
Answer: The issue at hand is that pyinstaller should extract a bunch of necessary
supporting files to a temporary directory when it runs. When trying to access
these supporting files, you need to pre-pend access to the files with the
correct directory name. From the
[docs](http://pythonhosted.org/PyInstaller/#adapting-to-being-frozen):
import sys
import os
if getattr(sys, 'frozen', False):
# we are running in a |PyInstaller| bundle
basedir = sys._MEIPASS
else:
# we are running in a normal Python environment
basedir = os.path.dirname(__file__)
So then when trying to access your file:
print os.path.isfile(os.path.join(basedir, 'grammar2.jsgf'))
You should see it return True. Another helpful thing to do is to print out the
basedir, and ensure that execution doesn't end, using something simple like:
raw_input('Press enter to end execution.')
This will allow you to see where the temporary directory is - then you can go
explore a little bit and get an idea of how it all works.
|
How to know which file to open (and open it) in Python based on its name matching a pattern?
Question: I have a file whose name will get updated daily, but the name will always
match some simple pattern e.g. it will always begin with 'report' e.g.
**report 1 A 2X.csv**
How do I open the file on any given day, not knowing what its name will be but
knowing the pattern and knowing there will only ever be one file in the folder
that matches?
If I define:
pattern = 'report'
then how do I combine `os.listdir('.')` and `str.startswith(pattern)` with
`open()` to achieve this?
Answer: Try this:
from glob import glob
file = open(glob('report *')[0])
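And the `os.listdir`/`startswith` combination the question asked about would look like this (assuming, as the question states, exactly one match in the folder):

import os

pattern = 'report'
matches = [f for f in os.listdir('.') if f.startswith(pattern)]
file = open(matches[0])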
|
Storing data in a data frame
Question: New to python, I'm struggling to combine operations on my raw data with a way to store the results in a data frame, and then to use it again with pandas and R. Some example of my code:
if 'Subject' in f:
for i in range (len(time)):
(...)
if Description[j] == 'response':
RT.append(time[j] - time_stim)
motor_time.append(time[j] - time_response)
break
My raw data is a .txt file example below:
Type,Description,Time,Channel
Categorie,PC,153,1,All
Stimulus,S11,1510,1,All
Stimulus,S202,3175,1,All
Stimulus,EMG_Onset,3978,1,EMGL
Stimulus,response,4226,1,All
Categorie,CC,5785,1,All
Stimulus,S11,7141,1,All
Stimulus,S202,8807,1,All
Stimulus,EMG_Onset,9549,1,EMGL
Stimulus,EMG_Onset,9965,1,EMGL
Stimulus,response,10249,1,All
In this example, I want to store RT or motor_time, which I got from this piece of code, in a yet non-existent data frame, to use first with python and then with R. This data frame would have to store all parameters for all my experimental conditions and subjects.
At the moment, all results are stored in numeric np.arrays, and I don't know how to use them with the specific R code I created before.
Thanks.
Answer: I should first say that I do not see any reason for you to mix python and R. If you already have your analysis in R, you can read your TXT file directly into an R data frame:
df = read.csv("myfile.txt")
head(df) # to display the first few rows of your data frame
Your 1st, 2nd and 5th columns will be converted to factors (you can change it
if you desired).
If you want python, you can read your file into pandas DataFrame.
import pandas as pd
df = pd.read_csv("myfile.txt")
df.head() # to display the first few rows of your data frame
If this does not answer your question, please indicate what you want beyond this.
There is a [rpy](http://rpy.sourceforge.net/) package which allows you to use
R code in python. It requires extra python programming code anyway.
As to importing a pandas data frame into R: I would save it to a CSV file or another on-disk format, and then open that file in R. But a CSV file is what you initially get, so there is no point in that round trip for you.
|
How to access/set 'select' tag in HTML with python
Question: I'm trying to extract events from a page of HTML - <http://www.staffordshire-pcc.gov.uk/space/>
I want to select different areas using python but came unstuck with the
following HTML:
<select data-ng-options="key as value.name for (key,value) in areaGroups | orderBy:'name'" data-ng-model="selectedAreaGroup" data-ng-change="updateAreaGroup()" class="ng-pristine ng-valid ng-touched">
<option value="" class="" selected="selected">Choose an area</option>
<option value="string:CannockChase" label="Cannock Chase District">Cannock Chase District</option>
<option value="string:EastStaffordshire" label="East Staffordshire">East Staffordshire</option>
<option value="string:Lichfield" label="Lichfield District">Lichfield District</option>
<option value="string:Newcastle" label="Newcastle Borough">Newcastle Borough</option>
<option value="string:SouthStaffordshire" label="South Staffordshire">South Staffordshire</option>
<option value="string:Stafford" label="Stafford Borough">Stafford Borough</option>
<option value="string:StaffordshireMoorlands" label="Staffordshire Moorlands">Staffordshire Moorlands</option>
<option value="string:SoTCentral" label="Stoke-on-Trent Central">Stoke-on-Trent Central</option>
<option value="string:SoTNorth" label="Stoke-on-Trent North">Stoke-on-Trent North</option>
<option value="string:SoTSouth" label="Stoke-on-Trent South">Stoke-on-Trent South</option>
<option value="string:Tamworth" label="Tamworth Borough">Tamworth Borough</option>
I use Mechanize to find forms on pages, but as there's no form attached to the
tag, I can't work out how I would select it, and then submit a value.
What's the best option for me to pursue?
Answer: You can select the form by the order at which it appears on the page, firstly
import & open
import mechanize
br = mechanize.Browser()
br.open('http://www.staffordshire-pcc.gov.uk/space/')
Loop through all the forms in the page
forms = list(br.forms())
Let's check whether `forms[0]` is the form with the dropdown (as in your question). Set the control variable and print out the values:
control = forms[0].controls[0]
form_values = [item.attrs['value'] for item in control.items]
print form_values
If this is the correct form you should see:
["string:CannockChase", "string:EastSta....
If not cycle through the indexes until you find the correct index (* see
below).
Finally, once you have found the correct form, you can select it by that same index, set a value and submit:
br.select_form(nr=0)  # whichever index represents the dropdown form
control.value = [form_values[0]]
r = br.submit()
# read out the HTML from the resulting page
print r.read()
|
Python - Get a function to overwrite a variable
Question: So I'm testing out some mechanics for a text-based game I was going to make.
If the player has armour it would halve the damage they take and if not they
would take full damage. The problem I'm having is that whenever I try to run
the functions twice, it resets the health because it has been hardcoded. So
I'm wondering how can I get the function to overwrite the health variable
after it runs each time?
Here is the code:
import random
inventory = ["Armour","Sword","Healing Potion"]
health=100
def randDmg():
dealtDamage = random.randint(1,10)
print("You have taken "+str(dealtDamage)+" damage.")
return dealtDamage
def dmgCheck(damage, health):
if "Armour" in inventory:
damage = damage/2
else:
damage = damage
health-=damage
return health
print("Your new health is "+str(dmgCheck(randDmg(), health)))
Answer: Declaring `health` as a global at the top of your dmgCheck function would work well; then you don't need to pass it in as an argument. While you're at it, if you call the randDmg function within dmgCheck, you won't need to pass that in either.
import random
inventory = ["Armour","Sword","Healing Potion"]
health=100
def randDmg():
dealtDamage = random.randint(1,10)
print("You have taken "+str(dealtDamage)+" damage.")
return dealtDamage
def dmgCheck():
global health
damage = randDmg()
if "Armour" in inventory:
damage = damage/2
else:
damage = damage
health-=damage
return health
print("Your new health is" + str(dmgCheck()))
print("Your new health is" + str(dmgCheck()))
That last bit could also be simplified by using Python's string formatting syntax:
print("Your new health is %s" % dmgCheck())
To do something similar using a Python Class you could use:
import random
class Game(object):
def __init__(self, inventory, health):
self.inventory = inventory
self.health = health
def randDmg(self):
dealtDamage = random.randint(1,10)
print("You have taken "+str(dealtDamage)+" damage.")
return dealtDamage
def dmgCheck(self):
damage = self.randDmg()
if "Armour" in self.inventory:
damage = damage/2
else:
damage = damage
self.health -= damage
return self.health
def play(self):
result = self.dmgCheck()
print("Your new health is %s" % result)
game = Game(["Armour","Sword","Healing Potion"], 100)
game.play()
game.play()
|
Pygame Key events only detects a limited amount of keys being held down
Question: Hi, I have used pygame (the Python game module) for a while. Now I have written
an RPG game that requires multiple keys to be held down at once. It seems that only 2
or 3 keys are detected while being held down. If anyone knows how to fix this
problem it would be great. Try out my code below for Python 2.7 and see if you
have the same problem. Thanks
import pygame
def main():
# Initialise screen
pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((150, 50))
pygame.display.set_caption('Basic Pygame program')
# Fill background
background = pygame.Surface(screen.get_size())
background = background.convert()
background.fill((250, 250, 250))
# Display some text
font = pygame.font.Font(None, 36)
text = font.render("Hello There", 1, (10, 10, 10))
textpos = text.get_rect()
textpos.centerx = background.get_rect().centerx
background.blit(text, textpos)
# Blit everything to the screen
screen.blit(background, (0, 0))
pygame.display.flip()
q=0
w=0
e=0
r=0
#Event loop
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
return
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_q :
q = 1
if event.key == pygame.K_w :
w = 1
if event.key == pygame.K_e :
e = 1
if event.key == pygame.K_r :
r = 1
if event.type == pygame.KEYUP:
if event.key == pygame.K_q :
q = 0
if event.key == pygame.K_w :
w = 0
if event.key == pygame.K_e :
e = 0
if event.key == pygame.K_r :
r = 0
count = q+w+e+r
print("Total: "+str(count)+" q: "+str(q) + " w: "+str(w)+ " e: "+str(e)+ " r: "+str(r))
clock.tick(30)
screen.blit(background, (0, 0))
pygame.display.flip()
if __name__ == '__main__': main()
Here I have tried with pygame.key.get_pressed() but it still does not seem to
work with more than 3 keys being held down. )-:
from pygame.locals import *
import pygame
def main():
# Initialise screen
pygame.init()
clock = pygame.time.Clock()
screen = pygame.display.set_mode((150, 50))
pygame.display.set_caption('Basic Pygame program')
# Fill background
background = pygame.Surface(screen.get_size())
background = background.convert()
background.fill((250, 250, 250))
# Display some text
font = pygame.font.Font(None, 36)
text = font.render("Hello There", 1, (10, 10, 10))
textpos = text.get_rect()
textpos.centerx = background.get_rect().centerx
background.blit(text, textpos)
# Blit everything to the screen
screen.blit(background, (0, 0))
pygame.display.flip()
#Event loop
while True:
for event in pygame.event.get():
if event.type == QUIT:
return
q=0
w=0
e=0
r=0
keys=pygame.key.get_pressed()
if keys[K_q] :
q = 1
if keys[K_w] :
w = 1
if keys[K_e] :
e = 1
if keys[K_r] :
r = 1
count = q+w+e+r
print("Total: "+str(count)+" q: "+str(q) + " w: "+str(w)+ " e: "+str(e)+ " r: "+str(r))
clock.tick(30)
screen.blit(background, (0, 0))
pygame.display.flip()
if __name__ == '__main__': main()
Answer: You can use `get_pressed`, which will give you a bool value for every key on
the keyboard.
<http://www.pygame.org/docs/ref/key.html#pygame.key.get_pressed>
keys = pygame.key.get_pressed()
if keys[K_LEFT]:
go_left()
|
Why do I have invalid syntax in default pygame.py?
Question: Whenever I want to use the Python framework `pygame` and run my script,
it prints this:
Traceback (most recent call last):
File "/home/hanisek/game.py", line 1, in <module>
import pygame
File "/home/hanisek/pygame.py", line 3
03f3 0d0a 5fff 9355 6300 0000 0000 0000
^
SyntaxError: invalid syntax
Do you know why?
Answer: You are not importing `pygame` the framework, you are importing `pygame.py`, a
file in the same directory:
import pygame
File "/home/hanisek/pygame.py", line 3
Remove or rename that file.
|
Python: Float infinite length (Precision float)
Question: My code:
def calc_pi(acc):
pos = False
sum = 4.0
for i in range(2, acc):
if not pos:
sum -= 4.0/(2*i-1)
pos = True
else:
sum += 4.0/(2*i-1)
pos = False
return float(sum)
print(calc_pi(5000))
And of course, I'm trying to calculate pi with more than 10 digits after the
decimal point. But Python seems to round to about 10. Is there a simple way to
prevent it from doing this? Like a million digits after the point?
Thank you!
Answer: You can use the [Decimal](https://docs.python.org/2/library/decimal.html)
class provided by the standard library.
From the docs:
> Unlike hardware based binary floating point, the decimal module has a user
> alterable precision (defaulting to 28 places) which can be as large as
> needed for a given problem:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
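Applied to your function, a minimal sketch (the precision of 50 digits is an arbitrary choice; note that the Leibniz series itself converges far too slowly to actually produce a million correct digits):

from decimal import Decimal, getcontext

def calc_pi(acc):
    getcontext().prec = 50  # pick as many significant digits as you need
    pos = False
    total = Decimal(4)
    for i in range(2, acc):
        term = Decimal(4) / (2 * i - 1)
        if not pos:
            total -= term
            pos = True
        else:
            total += term
            pos = False
    return total

print(calc_pi(5000))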
|
How can I save a plot in python using matplotlib?
Question: When using:
In [42]: tz_counts[:10].plot(kind='barh', rot=0)
it returns:
Out[42]: <matplotlib.axes._subplots.AxesSubplot at 0x7fa216b464d0>
But I can't see any file, and the plot isn't showing either.
I started learning Python today. Can someone help me?
Answer: You can use
[savefig](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig):
import matplotlib.pyplot as plt
plt.plot(range(10), range(10))
plt.savefig('a.pdf')
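Since you are plotting from pandas, the plot call returns a matplotlib Axes object; a variant that grabs its figure and saves it (the filename is just an example):

ax = tz_counts[:10].plot(kind='barh', rot=0)
ax.get_figure().savefig('top_timezones.png')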
|
Javascript 'require()' method
Question: In Python, when you import a module, the statements inside the
`if __name__ == '__main__'` block of the imported module are not executed.
Is there an equivalent approach that prevents the execution of unwanted
statements of an imported module in JavaScript?
Answer: Via [fuyushimoya](http://stackoverflow.com/users/1737627/fuyushimoya)'s
comment.
> When a file is run directly from Node, require.main is set to its module.
> That means that you can determine whether a file has been run directly by
> testing
>
>
> require.main === module
>
>
> For a file foo.js, this will be true if run via node foo.js, but false if
> run by require('./foo').
— [Node.js
documentation](https://nodejs.org/api/modules.html#modules_accessing_the_main_module)
So:
if (require.main === module) {
// Code that runs only if the module is executed directly
} else {
// Code that runs only if the code is loaded as a module
}
|
When it is necessary to close a file and when it is not in python?
Question: I was trying to write code that manages resources in a responsible way. I
understand that a common idiom to make sure a file is closed after using is
with open("filename.txt", 'r') as f:
# ...use file `f`...
However, I often see people do the following:
data = cPickle.load(open("filename.pkl", 'r'))
I am wondering whether that is the right way, and does Python always close `filename.pkl`,
even if `cPickle` throws an exception? Some explanation (or pointers to
articles that explain) of whether that is safe would be appreciated.
Answer: Python does not promise to close the file at any particular moment when you
open it inline. So when you pass it in the following code, without wrapping the call in a `with` statement:
data = cPickle.load(open("filename.pkl", 'r'))
you need to close it manually.
This is an example from the Python documentation about the `pickle` module:
import pprint, pickle
pkl_file = open('data.pkl', 'rb')
data1 = pickle.load(pkl_file)
pprint.pprint(data1)
data2 = pickle.load(pkl_file)
pprint.pprint(data2)
pkl_file.close()
As you can see, the file is closed explicitly at the end of the code.
For more information, see the following Python documentation about closing a
file:
> When you’re done with a file, call `f.close()` to close it and free up any
> system resources taken up by the open file. After calling `f.close()`,
> attempts to use the file object will automatically fail.
>>> f.close()
>>> f.read()
Traceback (most recent call last):
File "<stdin>", line 1, in ?
ValueError: I/O operation on closed file
> It is good practice to use the with keyword when dealing with file objects.
> This has the advantage that the file is properly closed after its suite
> finishes, even if an exception is raised on the way. It is also much shorter
> than writing equivalent try-finally blocks:
>>>
>>> with open('workfile', 'r') as f:
... read_data = f.read()
>>> f.closed
True
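Applied to your original snippet, the safe version would be:

import cPickle

with open("filename.pkl", 'rb') as f:
    data = cPickle.load(f)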
|
How to do feature selection and reduction on a LIBSVM file in Spark using Python?
Question: I have a couple of LIBSVM files with which I have to implement clustering in
spark using python. The file has **space** as the delimiter and the first
column represents the type [ 1 or -1] and the rest all are the features which
are in the format [1:2.566]. There are a lot of columns like this and I would
like to perform a feature selection over this [preferably implement the
ChiSquareTest model] and then use PCA or SVD to perform a feature reduction
process. But, I could not find a decent tutorial for python in spark to
implement these processes.
I found a [link](https://spark.apache.org/docs/1.2.0/mllib-statistics.html)
online that had a sample script to implement Chisqtest in python. I used the
same logic to implement the model and I could not get it done. Under the
Hypothesis testing division in that link, the code parallelizes the
RDD[LabeledPoint] before passing to the ChiSqTest model. I tried the same
logic in different manner and I got different errors.
data = MLUtils.loadLibSVMFile(sc, "PATH/FILENAME.txt")
label = data.map(lambda x: x.label)
features = data.map(lambda x: x.features)
obs = sc.parallelize(LabeledPoint(label,features))
This gave me an error stating **TypeError: float() argument must be a string
or a number**
Then, I normalized the data using Normalizer() and did the same thing and got
the same error. So, I wrote a function that returns a labeledpoint
def parsepoint(line):
values = [float(x) for x in line.split(' ')]
return sc.parallelize(LabeledPoint(values[0],values[1:]))
parsedData = data.map(lambda x: parsepoint(x))
obs = sc.parallelize(parsedData)
This gave me an error stating **Pipeline RDD is not iterable**. I tried
several other methods and everything ended up in an error. Could someone
please tell me where I am going wrong? And, for the feature reduction process
using PCA or SVD, I could not find a sample script in python. Any inputs for
that would be very helpful for me.
Stack Trace:
Py4JJavaError Traceback (most recent call last)
<ipython-input-1-8d0164c0957d> in <module>()
10 sct = SparkContext()
11 data = MLUtils.loadLibSVMFile(sct, "PATH")
---> 12 print data.take(1)
13 #label = data.map(lambda x: x.label)
14 #features = data.map(lambda x: x.features)
SPARK_HOME\rdd.pyc in take(self, num)
1263
1264 p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1265 res = self.context.runJob(self, takeUpToNumLeft, p, True)
1266
1267 items += res
SPARK_HOME\context.pyc in runJob(self, rdd, partitionFunc, partitions, allowLocal)
879 mappedRDD = rdd.mapPartitions(partitionFunc)
880 port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions,
--> 881 allowLocal)
882 return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
883
SPARK\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py in __call__(self, *args)
536 answer = self.gateway_client.send_command(command)
537 return_value = get_return_value(answer, self.gateway_client,
--> 538 self.target_id, self.name)
539
540 for temp_arg in temp_args:
SPARK\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 2, localhost): java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(Unknown Source)
at java.net.SocketOutputStream.write(Unknown Source)
at java.io.BufferedOutputStream.write(Unknown Source)
at java.io.DataOutputStream.write(Unknown Source)
at java.io.FilterOutputStream.write(Unknown Source)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$wr ite$1(PythonRDD.scala:413)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:425)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:425)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:425)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRD D.scala:248)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$ $failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler .scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAG Scheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala :1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Answer: `MLUtils.loadLibSVMFile` returns an `RDD[LabeledPoint]`, so you can pass its output
directly to `Statistics.chiSqTest`. Using example data:
from pyspark.mllib.util import MLUtils
from pyspark.mllib.stat import Statistics
data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
chiSqResults = Statistics.chiSqTest(data)
print chiSqResults[-1]
|
Import Error: No module name
Question: I am facing some issue while importing a class. The folder structure is as
below:
python_space
|_ __init__.py
|_ ds_Tut
|_ __init__.py
|_ stacks
|_ __init__.py
|_ stacks.py (contains class Stack)
|_ trees
|_ __init__.py
|_ parseTree.py (wants to import Stack class from above)
Used the following code to import:
from stacks.stacks import Stack
Getting the following error:
"ImportError: No module named stacks.stacks"
Answer: `stacks` is inside the `ds_Tut` package. Does this work?
from ds_Tut.stacks.stacks import Stack
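Alternatively, if `parseTree.py` is always imported as part of the package (not run directly as a top-level script), a relative import should also work:

# inside trees/parseTree.py
from ..stacks.stacks import Stack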
|
NameError when using input() with Python 3.4
Question: I am a new Python user and I have been working through a number of tutorials.
This has included running some of the code from the Command Prompt. This
worked fine when I first tested the code but for some reason it seems to have
stopped working and I am now getting errors when using input(). I have
included the code below and the error message I am receiving.
Code:
import sys
print (sys.version)
print("hello world")
myName = input("What is your name?")
print(myName)
if (myName == "Matt"):
print("Matt is great!")
elif (myName == "Bob"):
print("Bob is ok")
else:
print("Hello world")
input("Press enter to continue")
Error Message:
C:\Users\matt.xxxx>cd C:\Python34\Scripts\Projects
C:\Python34\Scripts\Projects>helloworld.py
2.7.7 (default, Jun 1 2014, 14:21:57) [MSC v.1500 64 bit (AMD64)]
hello world
What is your name?Matt
Traceback (most recent call last):
File "C:\Python34\Scripts\Projects\helloworld.py", line 6, in <module>
myName = input("What is your name?")
File "<string>", line 1, in <module>
NameError: name 'Matt' is not defined
C:\Python34\Scripts\Projects>
I know this can occur when using older versions of python, however I have
checked and I am fairly sure I am using version 3.4 (checked using import sys
etc.). I have recently installed PyCharm which is the only thing I can think
of that has changed. The code works in PyCharm and from IDLE but not from the
Command Prompt. Any help would be much appreciated.
Answer: From your example, I believe you are running the script as
`helloworld.py` - this causes Windows to look up the default application
associated with the extension `.py` and run it.
I am guessing that in your case when you installed PyCharm, it somehow made Python
2.7.7 the default application for `.py` files (or it was like that from the
start), so when you run `.py` files directly (even from the command prompt) they
run using Python 2.7.7.
You said in your comment that when running `python` directly from the command
prompt, you are getting Python 3.4, so the easiest way to fix your issue
would be to use that to run your script.
Run it using the command -
python helloworld.py
As a long-term solution, you may want to consider changing the default
application associated with `.py` files. You can check out [this
link](http://www.7tutorials.com/how-associate-file-type-or-protocol-program)
for a guide on how to do that.
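If the script may still end up running under Python 2 occasionally, a small workaround (a sketch, placed at the top of the script) is to alias `input`:

try:
    input = raw_input  # Python 2: make input() behave like Python 3's input()
except NameError:
    pass  # Python 3: input() already returns a string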
|
Kivy and in-game sounds: Game update loop waits for sound to finish before continuing [FPS issues using SoundLoader in Kivy]
Question: I'm learning to program Python by making a game using Kivy, but I'm having
trouble implementing sounds for different events (eg. shield_on.play() when
shield-item is picked up.) because the game update loop appears to halt for a
short while until the sound has finished playing. I've made a short version of
the relevant code here...
shield_on = SoundLoader.load('shield_on.wav')
class game(Widget):
#...loads of other stuff...
def update_loop(foo):
self.player_one.update()
self.player_two.update()
self.player_item_collision_detector()
if "game_file_says_player_one's_shields_are on":
self.player_one.drawShield()
shield_on.play()
Presently, I simply load my sounds globally. I know that's bad, but they're
also my only globals. Then there is a Widget containing the game itself which
has a lot of stuff and an update loop... it updates the player positions,
checks for collisions with items - and on collision the item, here the shield,
is registered as "on" in a game-file. Then the update-loop checks that game-
file for the status of "shields", sees they are on and should play the sound.
The sound plays just fine, however the loop appears to halt until its finished
playing the sound. Essentially, the players stop for a microsecond. How can I
make the update loop not wait for the sounds to finish...?
Answer: # Explanation and workaround with PyGame:
The cause of the problem is explained here: github.com/kivy/kivy/issues/2728
Essentially, the SoundLoader.load() function should return the class most
suited to play the soundfile you pass it. It ends up not doing exactly that,
and as I understand it the fault doesn't lie with Kivy but with GStreamer.
This causes a significant temporary framerate drop for the app - regardless of
where you call the .play() method.
Two possible solutions are offered in the GitHub thread: 1) either
ensure a suitable class is returned directly, using SoundSDL2, or 2) use PyGame
instead.
I implemented the latter, and it works fine.
# Initialize files and PyGame mixer:
import pygame
pygame.mixer.pre_init(44100, 16, 2, 4096) # Frequency, size, channels and buffer size
pygame.init() # pre_init must be called before init for these settings to take effect
shield_on = pygame.mixer.Sound("shield_on.wav")
class game(Widget):
...
def update_loop(self):
...
if "game_file_says_shield_is_on":
shield_on.play()
Hopefully this is of some help to others!
|
DJANGO_SETTINGS_MODULE How to configure
Question: I am working on a project with Django 1.8 and Python 3.4. I want to install
the mockups package to automate data creation in my application. I've
installed this package with `pip install django-mockups` and `easy_install
django-mockups`.
I added the 'mockups' entry to INSTALLED_APPS in my settings.py file:
INSTALLED_APPS = (
'suit',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'mockups',
'tracks',
'albums',
'artists',
'userprofiles',
)
I want to see, in the django-admin utility in my console, the commands
available from my installed packages, but at the end I get the note about
my `DJANGO_SETTINGS_MODULE` environment variable and I cannot see the
django-mockups package in the list:
(venv)➜ myproject django-admin help
Type 'django-admin help <subcommand>' for help on a specific subcommand.
Available subcommands:
[django]
check
compilemessages
createcachetable
dbshell
diffsettings
dumpdata
flush
inspectdb
loaddata
makemessages
makemigrations
migrate
runfcgi
runserver
shell
showmigrations
sql
sqlall
sqlclear
sqlcustom
sqldropindexes
sqlflush
sqlindexes
sqlmigrate
sqlsequencereset
squashmigrations
startapp
startproject
syncdb
test
testserver
validate
Note that only Django core commands are listed as settings are not properly configured (error: Requested setting INSTALLED_APPS, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.).
(venv)➜ myproject
I checked that the django-mockups package is installed by looking at the following
directories, which do in fact exist:
/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django_mockups-0.4.8.dist-info and
/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/mockups
And the `django-mockups` package is installed:
(venv)➜ mockups pip freeze
Django==1.8.2
django-mockups==0.4.8
django-suit==0.2.13
Pillow==2.9.0
wheel==0.24.0
(venv)➜ mockups
My `DJANGO_SETTINGS_MODULE` is set this way in the `manage.py` file:
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "sfotipy.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
And in the wsgi.py
"""
WSGI config for myproject project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_wsgi_application()
And finally, when I try to start the Django server, I get this output:
(venv)➜ myproject ./manage.py runserver
/home/bgarcial/.virtualenvs/venv/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.utils.importlib will be removed in Django 1.9.
return f(*args, **kwds)
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/core/management/__init__.py", line 312, in execute
django.setup()
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/apps/config.py", line 86, in create
module = import_module(entry)
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2231, in _gcd_import
File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1448, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/mockups/__init__.py", line 2, in <module>
from mockups.factory import Factory
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/mockups/factory.py", line 1, in <module>
from mockups import generators
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/mockups/generators.py", line 100, in <module>
class StringGenerator(Generator):
File "/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/mockups/generators.py", line 101, in StringGenerator
coerce_type = unicode
NameError: name 'unicode' is not defined
(venv)➜ myproject
How can I correctly set my DJANGO_SETTINGS_MODULE environment variable? Is
this DJANGO_SETTINGS_MODULE configuration the reason that mockups does not
work? Thanks a lot :)
Answer: I found this [issue](https://github.com/sorl/django-mockups/issues/20) on
GitHub: django-mockups does not support Python 3.
* * *
Python 3 renamed `unicode` to `str`, and the old `str` to `bytes`.
So if you run django-mockups with Python 3, a `NameError` is raised.
As the traceback shows, django-mockups is written for Python 2.
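The usual fix on the library side is a small compatibility shim like the following (a sketch, not something django-mockups actually ships):

try:
    unicode
except NameError:  # Python 3
    unicode = str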
|
Python write text to .tar.gz
Question: I am looking for a way to write a text file directly (on the fly) into a .tar.gz
file with Python. The best would be a solution like `fobj = open(arg.file,
"a")` to append the text.
I want to use this feature for long log files that are not allowed to be
split.
Thanks in advance
Answer: Yes, this is possible, but most likely not in the way you'd like to use it.
`.tar.gz` is actually two things in one:
[`gz`](https://en.wikipedia.org/wiki/Gzip) or
[`gzip`](https://en.wikipedia.org/wiki/Gzip) is being used for compression,
but this tool can only compress _single files_ , so if you want to zip
multiple files to a compressed archive, you would need to join these files
first. This is what [`tar`](https://en.wikipedia.org/wiki/Tar_\(computing\))
does, it takes multiple files and joins them to an archive.
If you have a single long logfile, just
[`gzip`ing](https://en.wikipedia.org/wiki/Gzip) it would be easier. For this,
Python has the [`gzip`](https://docs.python.org/3.4/library/gzip.html) module,
you can write directly into the compressed file:
import gzip
with gzip.open('logfile.gz', 'a') as log:
# Needs to be a bytestring in Python 3
log.write(b"I'm a log message.\n")
If you definitely need to write into a `tar` archive, you can use Python's
[`tarfile`](https://docs.python.org/2/library/tarfile.html) module. However,
this module does not support appending to a _compressed_ archive (there is no
`'a:gz'` mode; plain `'a'` works only for uncompressed tars), therefore a
tarfile might not be the best solution for logging.
|
Is it required to close a Psycopg2 connection at the end of a script?
Question: What are the consequences of not closing a `psycopg2` connection at the end of
a Python script? For example, consider the following snippet:
import psycopg2
psycopg2.connect("dbname=test")
The script opens a connection, but does not close it at the end. Is the
connection still open at the end of the execution? If so, is there an issue
with not closing the connection?
Answer: Normally when your Python program exits, all the sockets it owns are
closed and open transactions are aborted. But it's good practice to close the
connection explicitly at the very end.
Closing a connection as soon as you don't need it anymore frees
system resources, which is always good.
Keep in mind that if you do close your connection, you should first commit your
changes. As you can read in the psycopg2 API documentation:
> Close the connection now (rather than whenever del is executed). The
> connection will be unusable from this point forward; an InterfaceError will
> be raised if any operation is attempted with the connection. The same
> applies to all cursor objects trying to use the connection. Note that
> closing a connection **without committing the changes first will cause any
> pending change to be discarded as if a ROLLBACK was performed**
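A minimal sketch of the commit-then-close pattern (the query is just an example):

import psycopg2

conn = psycopg2.connect("dbname=test")
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
    conn.commit()  # commit first, or pending changes are discarded on close
finally:
    conn.close()   # free the connection's resources right away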
|
How can I press the button "enter" in python 2.7, without having the user pressing it, from code
Question: This question has probably been answered before, since I searched for it
prior to asking. The people who answered said that win32api should be
used, but I don't know where to get it so I can import it (Python can't find it
when I import it) or how to use it, since I started learning Python recently.
What I need is code that will automatically press the "enter" button. If I
need a certain library, I would like to know where I can find it. Please
inform me if my question isn't clear or if you need me to add more things. Thanks
in advance :)
Answer: It is also possible to send keypresses without installing additional modules;
see the following as a framework:
import ctypes
SendInput = ctypes.windll.user32.SendInput
PUL = ctypes.POINTER(ctypes.c_ulong)
class KeyBdInput(ctypes.Structure):
_fields_ = [("wVk", ctypes.c_ushort),
("wScan", ctypes.c_ushort),
("dwFlags", ctypes.c_ulong),
("time", ctypes.c_ulong),
("dwExtraInfo", PUL)]
class HardwareInput(ctypes.Structure):
_fields_ = [("uMsg", ctypes.c_ulong),
("wParamL", ctypes.c_short),
("wParamH", ctypes.c_ushort)]
class MouseInput(ctypes.Structure):
_fields_ = [("dx", ctypes.c_long),
("dy", ctypes.c_long),
("mouseData", ctypes.c_ulong),
("dwFlags", ctypes.c_ulong),
("time",ctypes.c_ulong),
("dwExtraInfo", PUL)]
class Input_I(ctypes.Union):
_fields_ = [("ki", KeyBdInput),
("mi", MouseInput),
("hi", HardwareInput)]
class Input(ctypes.Structure):
_fields_ = [("type", ctypes.c_ulong),
("ii", Input_I)]
def PressKey(hexKeyCode):
extra = ctypes.c_ulong(0)
ii_ = Input_I()
ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0, 0, ctypes.pointer(extra) )
x = Input( ctypes.c_ulong(1), ii_ )
ctypes.windll.user32.SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))
def ReleaseKey(hexKeyCode):
extra = ctypes.c_ulong(0)
ii_ = Input_I()
ii_.ki = KeyBdInput( hexKeyCode, 0x48, 0x0002, 0, ctypes.pointer(extra) )
x = Input( ctypes.c_ulong(1), ii_ )
ctypes.windll.user32.SendInput(1, ctypes.pointer(x), ctypes.sizeof(x))
#############################################################################################################################
lookup = {"a":(0x41),"A":(0x10,0x41),
"b":(0x42),"B":(0x10,0x42),
"c":(0x43),"C":(0x10,0x43),
"d":(0x44),"D":(0x10,0x44),
"e":(0x45),"E":(0x10,0x45),
"f":(0x46),"F":(0x10,0x46),
"g":(0x47),"G":(0x10,0x47),
"h":(0x48),"H":(0x10,0x48),
"i":(0x49),"I":(0x10,0x49),
"j":(0x4a),"J":(0x10,0x4a),
"k":(0x4b),"K":(0x10,0x4b),
"l":(0x4c),"L":(0x10,0x4c),
"m":(0x4d),"M":(0x10,0x4d),
"n":(0x4e),"N":(0x10,0x4e),
"o":(0x4f),"O":(0x10,0x4f),
"p":(0x50),"P":(0x10,0x50),
"q":(0x51),"Q":(0x10,0x51),
"r":(0x52),"R":(0x10,0x52),
"s":(0x53),"S":(0x10,0x53),
"t":(0x54),"T":(0x10,0x54),
"u":(0x55),"U":(0x10,0x55),
"v":(0x56),"V":(0x10,0x56),
"w":(0x57),"W":(0x10,0x57),
"x":(0x58),"X":(0x10,0x58),
"y":(0x59),"Y":(0x10,0x59),
"z":(0x5a),"Z":(0x10,0x5a),
"0":(0x30),
"1":(0x31),
"2":(0x32),
"3":(0x33),
"4":(0x34),
"5":(0x35),
"6":(0x36),
"7":(0x37),
"8":(0x38),
"9":(0x39),
"!":(0x10,0x31),
"?":(0x10,0xbf),
"\n":(0x0d),
" ":(0x20),
"'":(0x6c),
"*":(0x10,0x38),
"+":(0x10,0xbb),
"/":(0xbf),
"(":(0x10,0x39),
")":(0x10,0x30),
"-":(0xbd),
".":(0xbe),}
To use this, you can either add:
PressKey(lookup["\n"])
ReleaseKey(lookup["\n"])
to the bottom of the script, or save the first section of code as a script and
import it.
I built the lookup dictionary for all the key codes I required, but a full list of
key codes can be found here: [MSDN Virtual Key
Codes](https://msdn.microsoft.com/en-
us/library/windows/desktop/dd375731\(v=vs.85\).aspx)
|
Python CSV read row by row and insert new data
Question: I have a CSV file which I will read row by row; for a certain field the
data needs to be processed and the result inserted into another field in the
same row before moving on to the next row.
I tried various methods like:
w = open('test.csv', 'w+')
csv_r = csv_reader(w)
csv_w = csv.writer(w)
for row in csv_r:
row[10] = results
csv_w.writerows(row)
But I am getting a blank file. Is there another way of doing this?
Basically I need to read a specific element in a row, process the data,
and write the result into another element of the same row.
Answer: `w+` empties/truncates your file, so you have nothing to iterate over. It is
`r+` for reading and writing.
To update the file, either store all the updated rows and then reopen the file and
write them, or, a much better approach, use a
[tempfile.NamedTemporaryFile](https://docs.python.org/2/library/tempfile.html#tempfile.NamedTemporaryFile)
to write to, then replace the original with `shutil.move`.
from tempfile import NamedTemporaryFile
from shutil import move
import csv
with open("test.csv") as f, NamedTemporaryFile("w",dir=".", delete=False) as temp:
# write all updated rows to out tempfile
csv_w = csv.writer(out)
csv_r = csv.reader(f)
for row in csv_r:
row[10] = results
csv_w.writerows(row)
# replace original file with updated
move(temp.name,"test.csv")
|
Getting the value of selected item in optionMenu Tkinter
Question: I've made some OptionMenus in Tkinter in Python, and I want to get the value that
has been selected by the user. I've used var.get() in the method that gets
called when an item is clicked, but I'm not getting the correct value. I keep
getting "status", which is the value that I initially assigned to var
using var.set(). The items in my menu get initialized after a browse button is
clicked, hence I've filled the list in a method called browser. In the command
attribute of each item I've called justamethod to print the value. Here is my
code:
self.varLoc= StringVar(master)
self.varLoc.set("status")
self.varColumn= StringVar(master)
self.varColumn.set("")
self.locationColumn= Label(master,text="Select a column as a location indicator", font=("Helvetica", 12))
self.columnLabel= Label(master,text="Select a column to process", font=("Helvetica", 12))
global locationOption
global columnOption
columnOption= OptionMenu (master, self.varColumn,"",*columnList)
locationOption= OptionMenu (master, self.varLoc,"",*columnList)
self.locationColumn.grid (row=5, column=1, pady=(20,0), sticky=W, padx=(5,0))
locationOption.grid (row=5, column=3, pady=(20,0))
def browser (selft):
filename = askopenfilename()
#open_file(filename)
t=threading.Thread (target=open_file, args=(filename, ))
#t=thread.start_new_thread (open_file,(filename, )) # create a new thread to handle the process of opening the file.
# we must then send the file name to the function that reads it somehow.
t.start()
t.join() #I use join because if I didn't,next lines will execute before open_file is completed, this will make columnList empty and the code will not execute.
opt=columnOption.children ['menu']
optLoc= locationOption.children ['menu']
optLoc.entryconfig (0,label= columnList [0], command=selft.justamethod)
opt.entryconfig (0, label= columnList [0], command=selft.justamethod)
for i in range(1,len (columnList)):
opt.add_command (label=columnList[i], command=selft.justamethod)
optLoc.add_command (label=columnList[i], command=selft.justamethod)
def justamethod (self):
print("method is called")
print(self.varLoc.get())
window= Tk () #main window.
starter= Interface (window)
window.mainloop() #keep the window open until the user decides to close it.
Can anyone tell me how to get the value of the selected item?
Thanks.
Answer: The `justamethod` function won't print anything other than the initialized
value of `varLoc` because you don't _do_ anything with it.
Since menu options cannot take a `variable` argument, instead of trying to
update the value of one variable for all menu options, how about passing some
arbitrary value for each one of the menu buttons?
Example:
from Tkinter import *
root = Tk()
def callback(var):
print ("%d" %var)
menubar = Menu(root)
# create a pulldown menu, and add it to the menu bar
filemenu = Menu(menubar, tearoff=0)
filemenu.add("command", label="Open", command=lambda: callback(1))
filemenu.add("command", label="Save", command=lambda: callback(2))
filemenu.add_separator()
filemenu.add_command(label="Exit", command=root.quit)
menubar.add_cascade(label="File", menu=filemenu)
# create more pulldown menus
editmenu = Menu(menubar, tearoff=0)
editmenu.add("command", label="Cut", command=lambda: callback(3))
editmenu.add("command", label="Copy", command=lambda: callback(4))
editmenu.add("command", label="Paste", command=lambda: callback(5))
menubar.add_cascade(label="Edit", menu=editmenu)
helpmenu = Menu(menubar, tearoff=0)
helpmenu.add("command", label="About", command=lambda: callback(6))
menubar.add_cascade(label="Help", menu=helpmenu)
# display the menu
root.config(menu=menubar)
root.mainloop()
(Example taken directly from [this
tutorial](http://effbot.org/tkinterbook/menu.htm#menu.Menu.add-method).)
In the case of your code, since you make the menu buttons within a `for` loop
using the counter `i`, you could do something like `command = lambda i=i:
self.justamethod(i)` (the `i=i` default argument freezes the current loop value;
a bare `lambda: self.justamethod(i)` would see only the final value of `i`), and
then within `justamethod` print out the `i` argument that is passed to see what I mean.
I hope this helps you to solve your problem, as I cannot really modify your
provided code to offer a solution as it is unusable.
|
Accesing script scope variables from modules
Question: We use IronPython in our open source project. I have a problem accessing the
variables added to the script scope, like:
private ScriptScope CreateScope(IDictionary<string, object> globals)
{
globals.Add("starting", true);
globals.Add("stopping", false);
var scope = Engine.CreateScope(globals);
scope.ImportModule("math");
return scope;
}
<https://github.com/AndersMalmgren/FreePIE/blob/master/FreePIE.Core/ScriptEngine/Python/PythonScriptEngine.cs#L267>
I can use the globals from the main script, but any module that is loaded will
fail. How can it be fixed?
update: Given this module _mymodule.py_
if starting: #starting is defined on the scope
...
From the main script executed using this code
void RunLoop(string script, ScriptScope scope)
{
ExecuteSafe(() =>
{
var compiled = Engine.CreateScriptSourceFromString(script).Compile();
while (!stopRequested)
{
usedPlugins.ForEach(p => p.DoBeforeNextExecute());
CatchThreadAbortedException(() => compiled.Execute(scope));
scope.SetVariable("starting", false);
threadTimingFactory.Get().Wait();
}
scope.SetVariable("stopping", true);
CatchThreadAbortedException(() => compiled.Execute(scope));
});
}
<https://github.com/AndersMalmgren/FreePIE/blob/master/FreePIE.Core/ScriptEngine/Python/PythonScriptEngine.cs#L163>
from mymodule import * # this will load the module and it fails with

edit: In response to @BendEg's answer
I tried this
scope.SetVariable("__import__", new Func<CodeContext, string, PythonDictionary, PythonDictionary, PythonTuple, object>(ResolveImport));
`ImportDelegate` is not defined, so I tried using a `Func` instead; the
ResolveImport method never triggers and I get the same exception that the name
is not defined.
edit: I changed the scope creation to
var scope = Engine.GetBuiltinModule();
globals.ForEach(g => scope.SetVariable(g.Key, g.Value));
Now the import delegate triggers, but it crashes on the first line with `global
name 'mouse' is not defined`, even though mouse is not used from the module. It seems
it gets confused when I add my custom globals to the `BuiltinModule`.
Answer: As far as I know, importing a module creates a new scope. So when
an instance of `PythonModule` is created via `from ... import ...`, it has its
own scope. In this new scope, your public variables are not available. Please
correct me if I am wrong.
Workaround:
You could create a static class which holds the values. Then you can be
sure you always have them. For example:
namespace someNS
{
public static class SomeClass
{
public static bool Start { get; set; }
}
}
And then in your IronPython code:
from someNS import SomeClass
# Now you can access the member
yourVal = SomeClass.Start
Maybe this is something you can use. You don't even need to set it as
a variable in the scope.
**EDIT**
Maybe this works for you. In the code I override module importing and try
to set the global vars:
First, you need to give IronPython a delegate for module importing:
# Scope should be your default scope
scope.SetVariable("__import__", new ImportDelegate(ResolveImport));
Then override the import function:
private object ResolveImport(CodeContext context, string moduleName, PythonDictionary globals, PythonDictionary locals, PythonTuple fromlist)
{
// Do default import but set module
var builtin = IronPython.Modules.Builtin.__import__(context, moduleName, globals, locals, fromlist, 0);
context.ModuleContext.Module.__setattr__(context, "some_global", "Hello World");
return builtin;
}
**EDIT**
Definition of `ImportDelegate`
delegate object ImportDelegate(CodeContext context, string moduleName, PythonDictionary globals, PythonDictionary locals, PythonTuple fromlist);
|
pseudo increasing the 'resolution' of a value table
Question: I have a measurement array with 16,000 entries in the form of
[t] [value]
The problem is my data logger is too slow and I only have measurement points
every second. For my simulation I need the resolution pseudo-increased, so
that every time step is divided by 1000 and every measured value is copied
1000 times (see the figure for clarity). So I pseudo-increase the resolution of
my measurement file.
How do I do that efficiently (!!!) in Python using `numpy`? I don't want to
iterate when creating an array of 16,000,000 entries.
The trivial answer of just dividing my time array by 1000 is not applicable in
this case.
Edit: to make it even more complicated: unlike in my picture, the time
delta is NOT equidistant for every time step.
Answer: Though it's difficult to tell exactly what you're asking, I'm guessing that
you're just looking to interpolate between the values that you've already got.
Good thing `scipy` has a simple built-in for this, the `interp1d` class
([docs](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.interpolate.interp1d.html)):
>>> from scipy import interpolate
>>> x = np.arange(0, 10)
>>> y = np.exp(-x / 3.0)
>>> f = interpolate.interp1d(x, y)
>>> x_new = np.array([1.5, 2.5, 3.5])
>>> f(x_new)
array([ 0.61497421, 0.44064828, 0.31573829])
As far as the second part of your question is concerned, `numpy` again has a
great built-in for you! The `np.repeat` function should do exactly what you're
looking for, right up through the variable time steps. The docs can be found
[here](http://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html).
Example below:
>>> values = np.array([1, 2, 3, 4])
>>> np.repeat(values, [2, 1, 2, 1])
array([1, 1, 2, 3, 3, 4])
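Since the edit mentions non-equidistant time steps, here is a sketch that resamples onto a fine uniform grid while keeping the step-function character of the data (the arrays `t` and `values` stand in for your measurement columns):

import numpy as np

t = np.array([0.0, 1.0, 2.5, 4.0])           # non-equidistant timestamps
values = np.array([10.0, 12.0, 13.0, 14.0])

t_new = np.linspace(t[0], t[-1], 16000)      # fine, uniform grid
# index of the last measurement taken at or before each new time point
idx = np.searchsorted(t, t_new, side='right') - 1
v_new = values[idx.clip(0)]                  # zero-order hold: "copy" each value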
|
Truncated output using Python bottle 0.12.8 as a CGI application under Windows on an Apache server
Question: This is the application:
#!/home2/friendv0/Python-2.7.9/bin/python
from bottle import Bottle
app = Bottle()
@app.get('/')
def hello():
return """<!DOCTYPE html>
<html lang="en">
<head>
<title>bottle Test</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta charset="utf-8">
</head>
<body>
Hello!
</body>
</html>
"""
app.run(server='cgi')
The resulting output is:
<!DOCTYPE html>
<html lang="en">
<head>
<title>bottle Test</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta charset="utf-8">
</head>
<body>
Hello!
</body>
Note that the closing </html> tag is missing. This only occurs when the
application is run as a CGI script under Windows 7 (or on Windows 8) -- not
when it is installed as a native WSGI application. I have tried this with both
Apache 2.2 and Apache 2.4. Note that the same CGI script runs _without_
truncation when installed on a Linux system running Apache 2.2. What is
puzzling is that I have successfully run other WSGI applications as CGI
scripts under Windows without experiencing truncation using the same technique
that bottle uses, namely:
from wsgiref.handlers import CGIHandler
CGIHandler().run(application)
Has anyone else experienced the same problem? As a side note: The reason why I
am interested in running bottle as a CGI script is because my anticipated
volume is very low so performance will not be an issue. But on the Linux
server (where, fortunately, CGI is working), I do not have the ability to
restart the server and if I have to make emergency changes to the source code,
I need the changes to go into effect immediately.
Answer: Well, I have figured out the problem. The string literal is 201 characters
long (which becomes the Content-Length in the header). Each line is
terminated by a single LF (linefeed) character (even though on Windows the
actual text is terminated with CRLF). Yet, when the text is being sent out to
the browser, each line ending is now a CR-LF pair, making the actual output
longer than 201 characters, but because of the content-length being set in the
header, there is truncation. I went back to my other, working WSGi-CGI
application and now remember that because there were instances where I was
sending out images, that I had set the stdout stream to binary mode (not
necessary on Unix/Linux). This clearly had the side effect of preventing extra
carriage return characters from being inserted in text streams if I had
templates without them to begin with. So, now I have the following code:
import os
if os.name == 'nt':
import msvcrt
    msvcrt.setmode(0, os.O_BINARY) # 0 = stdin
    msvcrt.setmode(1, os.O_BINARY) # 1 = stdout
app.run(server='cgi')
|
What happens in the background on a web server?
Question: I've just started to learn back-end web development using Python and the Flask
framework.
My first application is the simplest one; it returns _"Hello World!"_ when
the user sends a request for the website's homepage.
Below, you can see the structure of my application:
myWebsiteDirectory/
app/
__init__.py
setup.py
wsgi.py
And below you see the content of the python files:
**setup.py**
from setuptools import setup
setup(name='YourAppName',
version='1.0',
description='OpenShift App',
author='Your Name',
author_email='[email protected]',
url='http://www.python.org/sigs/distutils-sig/',
install_requires=['Flask>=0.10.1'],
)
**wsgi.py**
#!/usr/bin/python
import os
#virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/'
virtenv = os.path.join(os.environ.get('OPENSHIFT_PYTHON_DIR','.'), 'virtenv')
virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
try:
execfile(virtualenv, dict(__file__=virtualenv))
except IOError:
pass
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
from app import app as application
#
# Below for testing only
#
if __name__ == '__main__':
from wsgiref.simple_server import make_server
httpd = make_server('localhost', 8051, application)
# Wait for a single request, serve it and quit.
httpd.serve_forever()
**__init__.py**
from flask import Flask
app = Flask(__name__)
app.debug = True
@app.route('/')
def not_again():
return 'Hello World!'
if __name__ == '__main__':
app.run(host='0.0.0.0', debug=True)
What is my question:
What happens when I upload these files to the server, and what happens when a
user requests my website?
In other words:
1. When does Python interpret each of the above files on the server (and how many times is each file interpreted)?
2. What happens when a user sends a request? Does the request cause a file to be re-interpreted, or is the request passed to an already-running function as an argument? If so, shouldn't there be an infinite loop on the server catching requests? And if so, where is that loop?
3. What happens when a user sends a request while the web server has not yet finished the previous request? Is the script interpreted again in a separate environment for this new user, or must the user wait for the server to finish answering the previous request?
And again, in other words:
* **How are users' requests handled on a web server?**
_Although the above question is based on Python and the Flask web framework,
if there is a general mechanism common to all languages and frameworks,
please describe that general procedure rather than this specific case._
Answer: If you have no good idea about how a web server works, since you are
interested in Python, I suggest you have a read of:
* <http://ruslanspivak.com/lsbaws-part1/>
* <http://ruslanspivak.com/lsbaws-part2/>
* <http://ruslanspivak.com/lsbaws-part3/>
If interested then in a walk through of doing something with a Python web
framework to build a site, then also consider reading:
* <http://tutorial.djangogirls.org/en/index.html>
It is a good basic introduction to get people going.
These will give you fundamentals to work with. How specific WSGI servers or
service providers work can then be a bit different, but you will be better
able to understand by working through the above first.
|
Problems in Spark dealing with a list of Python objects
Question: I am learning Spark, and I just hit a problem when I used Spark to deal with a
list of Python objects. The following is my code:
import numpy as np
from pyspark import SparkConf, SparkContext
### Definition of Class A
class A:
def __init__(self, n):
self.num = n
### Function "display"
def display(s):
print s.num
return s
def main():
### Initialize the Spark
conf = SparkConf().setAppName("ruofan").setMaster("local")
sc = SparkContext(conf = conf)
### Create a list of instances of Class A
data = []
for i in np.arange(5):
x = A(i)
data.append(x)
### Use Spark to parallelize the list of instances
lines = sc.parallelize(data)
### Spark mapping
lineLengths1 = lines.map(display)
if __name__ == "__main__":
main()
When I run my code, it does not seem to print the number of each instance (but it
should have printed 0, 1, 2, 3, 4). I tried to find the reason, but I have no
idea. I would really appreciate it if anyone could help me.
Answer: First of all, `display` is never executed. RDDs are lazily evaluated, so as long
as you don't perform [an
action](https://spark.apache.org/docs/latest/programming-guide.html#actions)
(like `collect`, `count` or `saveAsTextFile`) nothing really happens.
Another part of the problem requires an understanding of the Spark architecture.
Simplifying things a little bit, the driver program is responsible for
`SparkContext` creation and for sending tasks to the worker nodes. Everything that
happens during transformations (in your case `map`) is executed on the workers,
so the output of the print statement goes to the worker stdout. If you want to
obtain some kind of output you should consider using logs instead.
Finally, if your goal is to produce a side effect, it would be idiomatic
to use `foreach` instead of `map`.
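For example, replacing the last line of `main` with an action forces the map to run (in `local` mode the worker runs in the same process, so the prints show up in your console):

### Spark mapping plus an action that triggers evaluation
lineLengths1 = lines.map(display)
lineLengths1.count()

### or, as a pure side effect:
lines.foreach(display)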
|
Python multiprocessing: Process object not callable
Question: So, recently, I've been experimenting with the multiprocessing module. I wrote
this script to test it:
from multiprocessing import Process
from time import sleep
def a(x):
sleep(x)
print ("goo")
a = Process(target=a(3))
b = Process(target=a(5))
c = Process(target=a(8))
d = Process(target=a(10))
if __name__ == "__main__":
a.start()
b.start()
c.start()
d.start()
However, when I try to run it, it throws this error:
goo
Traceback (most recent call last):
File "C:\Users\Andrew Wong\Desktop\Python\test.py", line 9, in <module>
b = Process(target=a(5))
TypeError: 'Process' object is not callable
...And I can't tell what's going on. Does anyone know what happened, and how I
can fix it?
Answer: Passing arguments to the function run by a `Process` is done differently
- looking at the
[documentation](https://docs.python.org/2/library/multiprocessing.html) it
shows:
from multiprocessing import Process
def f(name):
print 'hello', name
if __name__ == '__main__':
p = Process(target=f, args=('bob',)) # that's how you should pass arguments
p.start()
p.join()
Or in your case:
from multiprocessing import Process
from time import sleep
def a(x):
sleep(x)
print ("goo")
e = Process(target=a, args=(3,))
b = Process(target=a, args=(5,))
c = Process(target=a, args=(8,))
d = Process(target=a, args=(10,))
if __name__ == "__main__":
e.start()
b.start()
c.start()
d.start()
**Addition:**
Good catch by Luke (in the comments below) - you're overriding the function
`a` with the variable name `a` when doing:
a = Process(target=a, args=(3,))
You should use a different name. Also note that your original `target=a(3)` calls
the function immediately in the main process (which is where the first "goo" came
from) instead of handing the function to the `Process`.
|
How to plot a histogram by different groups in matplotlib
Question: I have a table like:
value type
10 0
12 1
13 1
14 2
Generate a dummy data:
import numpy as np
value = np.random.randint(1, 20, 10)
type = np.random.choice([0, 1, 2], 10)
I want to accomplish a task in Python 3 with matplotlib (v1.4):
* plot a histogram of `value`
* group by `type`, i.e. use different colors to differentiate types
* the position of the "bars" should be "dodge", i.e. side by side
* since the range of value is small, I would use `identity` for bins, i.e. the width of a bin is 1
The **questions** are:
* how to assign colors to bars based on the values of `type` and draw colors from colormap (e.g. `Accent` or other cmap in matplotlib)? I don't want to use _named color_ (i.e. `'b', 'k', 'r'`)
* the bars in my histogram overlap each other, how to "dodge" the bars?
* * *
**Note** (lest the post be downvoted and deemed "naive")
1. I have tried on Seaborn, matplotlib and `pandas.plot` for two hours and failed to get the desired histogram.
2. I read the examples and Users' Guide of matplotlib. Surprisingly, I found no tutorial about how to assign colors from colormap.
3. I have searched on Google but failed to find a succinct example.
4. I guess one could accomplish the task with `matplotlib.pyplot`, without import a bunch of modules such as `matplotlib.cm`, `matplotlib.colors`.
Answer: For your first question, we can create a dummy column equal to 1, and then
generate counts by summing this column, grouped by value and type.
For your second question you can pass the colormap directly into `plot` using
the `colormap` parameter:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import seaborn
seaborn.set() #make the plots look pretty
df = pd.DataFrame({'value': value, 'type': type})
df['dummy'] = 1
ag = df.groupby(['value','type']).sum().unstack()
ag.columns = ag.columns.droplevel()
ag.plot(kind = 'bar', colormap = cm.Accent, width = 1)
plt.show()

|
Issue with sending POST requests using the library requests
Question:
import requests
while True:
try:
posting = requests.post(url, json=data, headers=headers, timeout=3.05)
except requests.exceptions.ConnectionError as e:
continue
# If a read_timeout error occurs, start from the beginning of the loop
except requests.exceptions.ReadTimeout as e:
continue
a link to more code : [Multiple accidental POST requests in
Python](http://stackoverflow.com/questions/31201059/multiple-accidental-post-
requests-in-python/31201394?noredirect=1#comment50407262_31201394)
This code uses the `requests` library to perform POST requests indefinitely. I noticed
that when the try fails multiple times and the while loop restarts over and over,
then when I can finally send the POST request, I find multiple entries on the server
side at the same second. I was writing to a txt file at the same time and it showed
one entry only. Each entry is 5 readings. Is this an issue with the library itself?
Is there a way to fix this?! No matter what kind of conditions I put in, it still
doesn't work :/ !
[You can notice the reading at 12:11:13 has 6 parameters per second, while at
12:14:30 (after the delay, it should be every 10 seconds) there are a few entries
at the same second: 3 entries that make up 18 readings in one second, instead of
only 6!](http://i.stack.imgur.com/KMJYO.png)
Answer: It looks like the server receives your requests and acts upon them but fails
to respond in time (3s is a pretty low timeout, a load spike/paging operation
can easily make the server miss it unless it employs special measures). I'd
suggest to
* process requests asynchronously (e.g. spawn threads; [Asynchronous Requests with Python requests](http://stackoverflow.com/questions/9110593/asynchronous-requests-with-python-requests) discusses ways to do this with `requests`) and do not use timeouts (TCP has its own timeouts, let it fail instead).
* [reuse the connection(s)](http://stackoverflow.com/questions/24873927/python-requests-module-and-connection-reuse) (TCP has quite a bit of overhead for connection establishing/breaking) or use UDP instead.
* include some "hints" (IDs, timestamps etc.) to prevent the server from adding duplicate records; a sketch of this follows below. (I'd call this one a workaround, as the real problem is that you're not making sure your request was processed.)
From the server side, you may want to:
* Respond ASAP and act upon the info later. Do not let pending action prevent answering further requests.
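If you go the "hints" route, here is a minimal sketch of an idempotency key. The `request_id` field name is hypothetical (not part of any existing API), and `url` and `data` are assumed to be the same variables as in your code:
import uuid
import requests
data['request_id'] = str(uuid.uuid4())  # generated once, reused on every retry
while True:
    try:
        requests.post(url, json=data, timeout=3.05)
        break
    except requests.exceptions.RequestException:
        continue  # the retry carries the same request_id, so the server can drop duplicates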
|
How to archive a remote git repository programmatically using Python?
Question: I am trying to archive a remote git repo using Python code. I did it
successfully using Git command line with following command.
> git archive --format=zip --remote=ssh://path/to/my/repo -o archived_file.zip
HEAD:path/to/directory filename
This command fetches the required file from the repo and stores the zip in my
current working directory. Note that there is no cloning of remote repo
happening.
Now I have to do it using Python code. I am using GitPython 1.0.1. I guess if
it is doable using command line then it should be doable using GitPython
library. According to the
[docs](http://gitpython.readthedocs.org/en/latest/tutorial.html#meet-the-repo-
type),
repo.archive(open(join(rw_dir, 'archived_file.zip'), 'wb'))
Above line of code will archive the repo. Here repo is the instance of Repo
class. It can be initialized using
repo = Repo('path to the directory which has .git folder')
If I give path to my remote repo(Ex. ssh://path/to/my/repo) in above line it
goes to find it in directory where the .py file containing this code is
residing(Like, Path\to\python\file\ssh:\path\to\my\repo), which is not what I
want. So to sum up I can archive a local repo but not a remote one using
GitPython. I may be able to archive remote repo if I am able to create a repo
instance pointing to the remote repo. I am very new to Git and Python.
Is there any way to archive a remote repo using Python code without cloning it
in local?
Answer: This may be a terrible idea, since you have already begun using gitpython
and I have never tried working with it, but I just really want to let you
know that you can do it without cloning locally and without using gitpython.
Simply run the git command in a shell, using subprocess: [running bash
commands in python](http://stackoverflow.com/questions/4256107/running-bash-
commands-in-python)
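For example, a minimal sketch that shells out the exact command from your
question (the repo URL and paths are the same placeholders you used):
import subprocess
# Same "git archive" invocation as on the command line; no local clone is made.
subprocess.check_call([
    'git', 'archive', '--format=zip',
    '--remote=ssh://path/to/my/repo',
    '-o', 'archived_file.zip',
    'HEAD:path/to/directory', 'filename',
])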
* * *
edit: added some demonstration code for reading stdout and writing stdin.
some of this is stolen from here: <http://eyalarubas.com/python-subproc-
nonblock.html>
The rest is a small demo.. first two prerequisites
**shell.py**
import sys
while True:
s = raw_input("Enter command: ")
print "You entered: {}".format(s)
sys.stdout.flush()
**nbstreamreader.py:**
from threading import Thread
from Queue import Queue, Empty
class NonBlockingStreamReader:
def __init__(self, stream):
'''
stream: the stream to read from.
Usually a process' stdout or stderr.
'''
self._s = stream
self._q = Queue()
def _populateQueue(stream, queue):
'''
Collect lines from 'stream' and put them in 'queue'.
'''
while True:
line = stream.readline()
if line:
queue.put(line)
else:
raise UnexpectedEndOfStream
self._t = Thread(target = _populateQueue,
args = (self._s, self._q))
self._t.daemon = True
self._t.start() #start collecting lines from the stream
def readline(self, timeout = None):
try:
return self._q.get(block = timeout is not None,
timeout = timeout)
except Empty:
return None
class UnexpectedEndOfStream(Exception): pass
then the actual code:
from subprocess import Popen, PIPE
from time import sleep
from nbstreamreader import NonBlockingStreamReader as NBSR
# run the shell as a subprocess:
p = Popen(['python', 'shell.py'],
stdin = PIPE, stdout = PIPE, stderr = PIPE, shell = False)
# wrap p.stdout with a NonBlockingStreamReader object:
nbsr = NBSR(p.stdout)
# issue command:
p.stdin.write('command\n')
# get the output
i = 0
while True:
output = nbsr.readline(0.1)
# 0.1 secs to let the shell output the result
if not output:
print "time out the response took to long..."
#do nothing, retry reading..
continue
if "Enter command:" in output:
p.stdin.write('try it again' + str(i) + '\n')
i += 1
print output
|
Python unique grouping people task
Question: My task is to generate all possible ways to group a given number of people
from a given total number of people. For example, if there are 4 guys in
total, for groups that contain 2 guys, I have to get an array like this:
ResultArr = {0: [1,2], 1:[1,3], 2:[1,4], 3:[2,3], 4:[2,4], 5:[3,4]}
I have this code:
elNum = 3 #Number of guys in one group
elLimit = 5 #Number of all guys
CheckArr = [1] * elNum
ResultArr = {}
ResultArr[0] = [-1] * elNum
goodCheckArr = [1] * elNum
for i in range(1, elNum+1):
ResultArr[0][i-1] = i
Checking = True
lenResultArr = 1
while Checking:
checkable = True
for i in range(0, elNum):
for ii in range(0, elNum): #
if (CheckArr[i] == CheckArr[ii]) and not i == ii:
checkable = False
if checkable:
status2 = [0]*lenResultArr
for i1 in range(0, len(ResultArr)):
print " "
print "ResultArr[",i1,"] = ", ResultArr[i1]
status1 = [0]*elNum
for i2 in range(0, elNum):
for i3 in range(0, elNum):
if ResultArr[i1][i2] == CheckArr[i3]:
status1[i2] = 1
print "status1 = ", status1, " CheckArr = ", CheckArr
if sum(status1) == elNum:
status2[i1] = 1
print "status2[",str(i1),"] = ", status2[i1]
if sum(status2) == 0:
goodCheckArr = CheckArr
ResultArr[lenResultArr] = goodCheckArr
lenResultArr = lenResultArr + 1
print "**** ResultArr = ", ResultArr
print "sum(status2) = 0 len(ResultArr) = ", str(len(ResultArr)), " lenResultArr = ", str(lenResultArr)
else:
print "sum(status2) > 0 len(ResultArr) = ", str(len(ResultArr)), " lenResultArr = ", str(lenResultArr)
print "************************************************************"
print "***",ResultArr
CheckArr[elNum-1] = CheckArr[elNum-1] + 1
for i in range(elNum-1, -1, -1):
if CheckArr[i] > elLimit:
CheckArr[i-1] = CheckArr[i-1] + 1
CheckArr[i] = 1
Checking = False
else:
Checking = True
print "************************************************************"
print "***",ResultArr
I think my problem is in the last piece of code:
print "************************************************************"
print "***",ResultArr
CheckArr[elNum-1] = CheckArr[elNum-1] + 1
for i in range(elNum-1, -1, -1):
if CheckArr[i] > elLimit:
CheckArr[i-1] = CheckArr[i-1] + 1
CheckArr[i] = 1
Checking = False
else:
Checking = True
print "************************************************************"
print "***",ResultArr
The output looks like this:
************************************************************
*** {0: [1, 2, 3], 1: [1, 2, 4]}
************************************************************
*** {0: [1, 2, 3], 1: [1, 2, 5]}
**I have no idea why new elements of the changeable checking array appear in
the results array.**
Answer: You may simply use the following code to get all the combinations of length 2.
import itertools
persons = [1, 2, 3, 4]
comb = itertools.combinations(persons, 2)
for i in comb:
print i
(1, 2)
(1, 3)
(1, 4)
(2, 3)
(2, 4)
(3, 4)
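As for the bolded question: `goodCheckArr = CheckArr` followed by
`ResultArr[lenResultArr] = goodCheckArr` stores a _reference_ to the very list
you keep mutating, not a snapshot of it, so every later in-place change to
`CheckArr` also shows up inside `ResultArr`. If you want to keep your
hand-rolled version, store a copy instead:
ResultArr[lenResultArr] = list(CheckArr)  # copy the current state instead of aliasing it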
|
ImportError: cannot import name GoogleCredentials
Question: I'm trying to use GoogleCredentials.get_application_default() in python in an
AppEngine project:
from oauth2client.client import GoogleCredentials
from ferris.core.google_api_helper import build
...
gcs = build("storage", "v1", GoogleCredentials.get_application_default())
response = gcs.objectAccessControls.insert(bucket=bucket,
object=filename,
body={"entity": "user-<email>",
"role": "READER", }).execute()
logging.info(response)
And am receiving the following error:
File "/base/data/home/apps/xxx", line 2, in <module>
from oauth2client.client import GoogleCredentials
ImportError: cannot import name GoogleCredentials
This happens both in the dev environment, and in production. Anyone have any
ideas what I'm doing wrong?
Thanks indeed,
Kiril.
Answer: Below is my code to insert credentials:
import json
from oauth2client import appengine
from apiclient import discovery
import httplib2
import logging
SCOPE_FULL_CONTROL = 'https://www.googleapis.com/auth/devstorage.full_control'
http = httplib2.Http()
credentials = appengine.AppAssertionCredentials(scope=SCOPE_FULL_CONTROL)
http = credentials.authorize(http)
client = discovery.build('storage', 'v1', http=http)
def api_insert_gcs_user_acl(bucket, bucket_object, e_mail):
# Cloud Storage API : https://developers.google.com/resources/api-libraries/documentation/storage/v1/python/latest/
req = client.objectAccessControls().insert(
bucket=bucket,
object=bucket_object,
body=dict(entity='user-' + e_mail, role='READER')
)
resp = req.execute()
logging.info(json.dumps(resp, indent=2))
The appengine Google Cloud Storage Client Library does not support setting or
deleting ACL entries, but the SDK can use the REST API and a service account
to access the hosted Cloud Storage. An appengine service account makes it
very easy to use OAuth2 and the Python APIs. To make this work in the SDK, you
have to use two options in the development server:
--appidentity_email_address=<developer service account e-mail address>
--appidentity_private_key_path=<d:/.../gcs-blobstore.pem key>
|
An error in signature when pushing using pygit2
Question: I'm facing a problem when pushing using pygit2 `v0.21.3`. Here is my code:
import pygit2 as git
repo = git.Repository("path/to/my/repo.git") # just for testing, it will not be local
for rem in repo.remotes:
rem.push_url = rem.url
rem.credentials = git.UserPass("user","password")
sig = git.Signature("user","[email protected]")
rem.push('refs/heads/master',signature=sig)
# in v0.22.0 it will be like below
# rem.push(['refs/heads/master'],signature=sig)
But,I always received this message :
Traceback (most recent call last):
File "C:\test.py", line 9, in <module>
rem.push('refs/heads/master',signature=sig)
File "C:\Python34\lib\site-packages\pygit2-0.21.3-py3.4-win32.egg\pygit2\remote.py",line 353, in push
err = C.git_push_update_tips(push, ptr, to_bytes(message))
TypeError: initializer for ctype 'git_signature *' must be a cdata pointer, not bytes
When I tried it with version `0.22.0` it didn't raise an error, but the push
operation also didn't work.
**Note**: I think the problem is with the signature parameter, because when I
pass `None` it works fine with the default signature.
Thanks.
Answer: I updated pygit2 to `v0.22.1` and libgit2 to `v0.22.3`, which fixed the
problem.
|
Python 3.4 : How to do xml validation
Question: I'm trying to do XML validation against some XSD in Python. I was successful
using the lxml package, but the problem started when I tried to port my code
to Python 3.4. I tried to install lxml for version 3.4; it looks like my
enterprise Linux doesn't play very well with lxml.
**pip installation:**
pip install lxml
Collecting lxml
Downloading lxml-3.4.4.tar.gz (3.5MB)
100% |################################| 3.5MB 92kB/s
Installing collected packages: lxml
Running setup.py install for lxml
Successfully installed lxml-3.4.4
**After pip Installation :**
> python
Python 3.4.1 (default, Nov 12 2014, 13:34:29)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml import etree
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /ws/satjonna-sjc/pyats/lib/python3.4/site-packages/lxml/etree.cpython-34m.so: undefined symbol: xmlMemDisplayLast
>>>
**git installation:**
git clone git://github.com/lxml/lxml.git lxml
Cloning into 'lxml'...
remote: Counting objects: 25078, done.
remote: Total 25078 (delta 0), reused 0 (delta 0), pack-reused 25078
Receiving objects: 100% (25078/25078), 21.38 MiB | 2.66 MiB/s, done.
Resolving deltas: 100% (9854/9854), done.
Checking connectivity... done.
**After git Installation :**
> python
Python 3.4.1 (default, Nov 12 2014, 13:34:29)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml import etree
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'etree'
I found the lxml equivalent,
[xml.etree.ElementTree](https://docs.python.org/3.4/library/xml.etree.elementtree.html).
But the main problem is that, apart from rewriting the entire code, I need to
find an xml.etree alternative for the lxml validation method
(**_etree.fromstring(xmlstring, xmlparser)_**). Any suggestions to make this
work would be really helpful.
Answer: Just for the sake of troubleshooting, did you run `pip install lxml` as
administrator? And are the libxml2 and libxslt dependencies installed?
This is from the installation page: <http://lxml.de/installation.html>
On Linux (and most other well-behaved operating systems), pip will manage to
build the source distribution as long as libxml2 and libxslt are properly
installed, including development packages, i.e. header files, etc. See the
requirements section above and use your system package management tool to look
for packages like libxml2-dev or libxslt-devel. If the build fails, make sure
they are installed. Alternatively, setting STATIC_DEPS=true will download and
build both libraries automatically in their latest version, e.g.
STATIC_DEPS=true pip install lxml.
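Once the import succeeds, your original lxml validation approach should work
unchanged on Python 3.4. A minimal sketch, assuming `schema.xsd` and the XML
string are your own inputs:
from lxml import etree
xmlstring = '<root/>'  # your XML document as a string
schema = etree.XMLSchema(etree.parse('schema.xsd'))
parser = etree.XMLParser(schema=schema)
try:
    root = etree.fromstring(xmlstring, parser)  # raises if the document fails validation
except etree.XMLSyntaxError as err:
    print('Validation failed:', err)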
|
Do not require authentication for OPTIONS requests
Question: My settings.py
REST_FRAMEWORK = {
'UNICODE_JSON': True,
'NON_FIELD_ERRORS_KEY': '__all__',
'DEFAULT_AUTHENTICATION_CLASSES': (
# TODO(dmu) HIGH: Support OAuth or alike authentication
'rest_framework.authentication.TokenAuthentication',
),
'DEFAULT_PERMISSION_CLASSES': (
'rest_framework.permissions.IsAuthenticated',
),
'DEFAULT_RENDERER_CLASSES': (
'rest_framework.renderers.JSONRenderer',
),
'ALLOWED_VERSIONS': ['v1'],
'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.NamespaceVersioning',
'TEST_REQUEST_DEFAULT_FORMAT': 'json',
'TEST_REQUEST_RENDERER_CLASSES': (
'rest_framework.renderers.JSONRenderer',
)
}
When I do this I get authentication error:
curl -X OPTIONS http://127.0.0.1:8000/api/passenger/v1/order/ | python -m json.tool
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 58 0 58 0 0 469 0 --:--:-- --:--:-- --:--:-- 475
{
"detail": "Authentication credentials were not provided."
}
I'd like my server to respond with the "schema" description without requiring
authentication. At the same time I want it to require authentication for GET,
POST, PUT, PATCH and DELETE requests as usual.
How can I achieve that?
**MY SOLUTION**
Thank you, Alasdair, for the idea. I personally used this solution:
from rest_framework.permissions import DjangoObjectPermissions
OPTIONS_METHOD = 'OPTIONS'
class DjangoObjectPermissionsOrOptions(DjangoObjectPermissions):
def has_permission(self, request, view):
if request.method == OPTIONS_METHOD:
return True
else:
return super(DjangoObjectPermissionsOrOptions, self).has_permission(request, view)
Answer: Django rest framework comes with a permissions class
[`IsAuthenticatedOrReadOnly`](http://www.django-rest-framework.org/api-
guide/permissions/#isauthenticatedorreadonly), which allows authenticated
users to perform any request, and unauthorised users to make GET, HEAD or
OPTIONS requests.
Your use case is pretty similar, so you could try the following (untested):
class IsAuthenticatedOrOptions(BasePermission):
"""
The request is authenticated as a user, or an OPTIONS request.
"""
def has_permission(self, request, view):
return (
request.method == 'OPTIONS' or
request.user and
request.user.is_authenticated()
)
'DEFAULT_PERMISSION_CLASSES': (
'path.to.IsAuthenticatedOrOptions',
),
|
Compiling Cython with C header files error
Question: So I'm trying to wrap some C code with Cython. I read and applied Cython's
tutorials on doing this
([1](http://docs.cython.org/src/tutorial/clibraries.html),
[2](http://docs.cython.org/src/userguide/external_C_code.html)), but these
tutorials do not say much on how to compile the code once you have wrapped it
with Cython, and so I have an error saying it can't find my C code.
First, my cython script ("calcRMSD.pyx"):
import numpy as np
cimport numpy as np
cdef extern from "rsmd.h":
double rmsd(int n, double* x, double* y)
#rest of the code ommited
The C code I am trying to wrap ("rmsd.h"):
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
extern "C" {
// svd from lapack
void dgesvd_(char*,char*,int*,int*,double*,int*,double*,double*,int*,double*,
int*,double*,int*,int*);
}
double rmsd(int n, double* x, double* y)
{
//code omitted
}
Setup.py
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
from Cython.Build import cythonize
import numpy as np
setup(
ext_modules = cythonize([Extension("calcRMSD",
sources = ["calcRMSD.pyx"],
include_dirs = [np.get_include()],
libraries = ["dgesvd"]
#extra_compile_args = ["-I."],
#extra_link_args = ["-L./usr/lib/liblapack.dylib"]
)])
)
My error:
calcRMSD.c:269:10: fatal error: 'rsmd.h' file not found
#include "rsmd.h"
I read this stack overflow thread [Using Cython To Link Python To A Shared
Library](http://stackoverflow.com/questions/16993927/using-cython-to-link-
python-to-a-shared-library)
but following it gives me different errors. If I try to put rmsd.h in sources,
it says it doesn't recognize the file type.
[How to link custom C (which itself needs special linking options to compile)
with Cython?](http://stackoverflow.com/questions/14951798/how-to-link-custom-
c-which-itself-needs-special-linking-options-to-compile-wit?rq=1)
This looks somewhat promising but im not sure how to use it.
Please help!
Answer: First of all it has to find the include file, `rsmd.h`. You need to add the
path where this header can be found to the `include_dirs` parameter. The error
about the missing file should disappear.
Then you will additionally need to include the library you get from compiling
that C code. If that's `librsmd.a` you would add `'rsmd'` to the `libraries`
parameter. Additionally you might need a `library_dirs` parameter that
contains the path where that library can be found.
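Putting both pieces into your `setup.py`, the extension might look something
like this. The header and library paths are placeholders to point at your own
build, and the library names assume you compiled the C code into `librsmd`
and link LAPACK for `dgesvd_`:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy as np
ext = Extension(
    "calcRMSD",
    sources=["calcRMSD.pyx"],
    include_dirs=[np.get_include(), "/path/to/headers"],  # directory containing the .h file
    libraries=["rsmd", "lapack"],                         # assumed library names
    library_dirs=["/path/to/libs"],                       # directory containing librsmd
)
setup(ext_modules=cythonize([ext]))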
|
Get the title of a window of another program using the process name
Question: This question is probably quite basic but I'm having difficulty cracking it. I
assume that I will have to use something in `ctypes.windll.user32`. Bear in
mind that I have little to no experience using these libraries or even
`ctypes` as a whole.
I have used this code to list all the window titles, but I have no idea how I
am supposed to change this code to get a window title with a process name:
import ctypes
EnumWindows = ctypes.windll.user32.EnumWindows
EnumWindowsProc = ctypes.WINFUNCTYPE(ctypes.c_bool, ctypes.POINTER(ctypes.c_int), ctypes.POINTER(ctypes.c_int))
GetWindowText = ctypes.windll.user32.GetWindowTextW
GetWindowTextLength = ctypes.windll.user32.GetWindowTextLengthW
IsWindowVisible = ctypes.windll.user32.IsWindowVisible
titles = []
def foreach_window(hwnd, lParam):
if IsWindowVisible(hwnd):
length = GetWindowTextLength(hwnd)
buff = ctypes.create_unicode_buffer(length + 1)
GetWindowText(hwnd, buff, length + 1)
titles.append(buff.value)
return True
EnumWindows(EnumWindowsProc(foreach_window), 0)
print(titles)
This code is from <https://sjohannes.wordpress.com/2012/03/23/win32-python-
getting-all-window-titles/>
If my question is unclear, I would like to achieve something like this (just
an example - I'm not asking specifically about Spotify):
getTitleOfWindowbyProcessName("spotify.exe") // returns "Avicii - Waiting For Love" (or whatever the title is)
A complication that may arise, if there are multiple windows running with the
same process name (e.g. multiple chrome windows)
Thank you.
* * *
**EDIT** : To clarify, I want some code that takes a process name and returns
a (possibly empty) list of window titles owned by that process as strings.
Answer: Here's what I meant in the comment:
import win32gui
def enumWindowsProc(hwnd, lParam):
print win32gui.GetWindowText(hwnd)
win32gui.EnumWindows(enumWindowsProc, 0)
Below, I pasted the whole thing...it doesn't work on the PC that I am at right
now, since I messed up the security settings (it's an XP!!!) and I get a bunch
of _Access denied_ (error code: **5**) errors, but here it is:
import sys
import os
import traceback
import ctypes
from ctypes import wintypes
import win32con
import win32api
import win32gui
import win32process
def enumWindowsProc(hwnd, lParam):
if (lParam is None) or ((lParam is not None) and (win32process.GetWindowThreadProcessId(hwnd)[1] == lParam)):
text = win32gui.GetWindowText(hwnd)
if text:
wStyle = win32api.GetWindowLong(hwnd, win32con.GWL_STYLE)
if wStyle & win32con.WS_VISIBLE:
print("%08X - %s" % (hwnd, text))
def enumProcWnds(pid=None):
win32gui.EnumWindows(enumWindowsProc, pid)
def enumProcs(procName=None):
pids = win32process.EnumProcesses()
if procName is not None:
bufLen = 0x100
bytes = wintypes.DWORD(bufLen)
_OpenProcess = ctypes.cdll.kernel32.OpenProcess
_GetProcessImageFileName = ctypes.cdll.psapi.GetProcessImageFileNameA
_CloseHandle = ctypes.cdll.kernel32.CloseHandle
filteredPids = ()
for pid in pids:
try:
hProc = _OpenProcess(wintypes.DWORD(win32con.PROCESS_ALL_ACCESS), ctypes.c_int(0), wintypes.DWORD(pid))
except:
print("Process [%d] couldn't be opened: %s" % (pid, traceback.format_exc()))
continue
try:
buf = ctypes.create_string_buffer(bufLen)
_GetProcessImageFileName(hProc, ctypes.pointer(buf), ctypes.pointer(bytes))
if buf.value:
name = buf.value.decode().split(os.path.sep)[-1]
#print name
else:
_CloseHandle(hProc)
continue
except:
print("Error getting process name: %s" % traceback.format_exc())
_CloseHandle(hProc)
continue
if name.lower() == procName.lower():
filteredPids += (pid,)
return filteredPids
else:
return pids
def main(args):
if args:
procName = args[0]
else:
procName = None
pids = enumProcs(procName)
print(pids)
for pid in pids:
enumProcWnds(pid)
if __name__ == "__main__":
main(sys.argv[1:])
Needless to say that:
* In order for this code to work you need to run it as a privileged user (Administrator); at least [SeDebugPrivilege](https://msdn.microsoft.com/en-us/library/windows/desktop/bb530716\(v=vs.85\).aspx) is required.
* There might be surprises when the processes are running in 32/64 bit modes (the `python` process that you execute this code from, and the target processes enumerated by the code)
|
TypeError: Type str doesn't support the buffer API in assertTrue in testcase
Question: I am using Python 3.4 and Django 1.8.2.
I am writing some test cases for the Artist object using some asserts:
in my test of `/artist/<id>`, I want the page (response stored in the `res`
variable) to return the 200 status code, which is the OK of the HTTP protocol.
Also, I want to check that the content contains the name of the artist.
from django.test import TestCase
from .models import Artists
class TestArtist(TestCase):
def setUp(self):
self.artist = Artists.objects.create(first_name = 'Ricky',
last_name ='Martin')
def test_existe_vista(self):
#print (self.client.get('/artists/%d' % self.artist.id))
res = self.client.get('/artists/%d' % self.artist.id)
self.assertEqual(res.status_code, 200)
self.assertTrue('Ricky' in res.content)
The output is:
(venv)➜ myproject ./manage.py test artists
Creating test database for alias 'default'...
/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py:57: RuntimeWarning: SQLite received a naive datetime (2015-07-08 05:09:23.051431) while time zone support is active.
RuntimeWarning)
E
======================================================================
ERROR: test_existe_vista (artists.tests.TestArtist)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/bgarcial/workspace/Project/myprojec/artists/tests.py", line 43, in test_existe_vista
self.assertTrue('Ricky' in res.content)
TypeError: Type str doesn't support the buffer API
----------------------------------------------------------------------
Ran 1 test in 0.286s
FAILED (errors=1)
Destroying test database for alias 'default'...
/home/bgarcial/.virtualenvs/venv/lib/python3.4/site-packages/django/db/backends/sqlite3/base.py:57: RuntimeWarning: SQLite received a naive datetime (2015-07-08 05:09:23.236391) while time zone support is active.
RuntimeWarning)
(venv)➜ project
What does `TypeError: Type str doesn't support the buffer API` mean in my
assertTrue assertion?
On the same subject, I also want to mention that when I execute the first
commented line (before the asserts), in which I print the artist object, the
object is not printed in a human-readable way.
How can I print the object so it is readable for humans?
The `__str__` method does not work for me here inside the test case like it
does in my admin.py files.
With respect to the SQLite warning, I guess that is something I should explore :) Thanks
Answer: Going through the same thing here. In Python 3, `res.content` is a `bytes`
object, so testing whether a `str` is in it raises this error. Try doing
`self.assertTrue('Ricky' in str(res.content))`
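For what it's worth, Django's test client also offers `assertContains`, which
decodes the response and checks the status code for you:
def test_existe_vista(self):
    res = self.client.get('/artists/%d' % self.artist.id)
    self.assertContains(res, 'Ricky')  # asserts status_code == 200 and that the text appears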
|
How to convert a set of osm files to shape files using ogr2ogr in python
Question: I strongly believe that this question has already been asked, but I can't
find the answer, so I am placing it before you. I am having a problem while
running a script to convert osm files to shp files. The script reads all the
osm files but, at the end, creates only one shp file, for the first osm file,
instead of converting all of them. I am providing the code I used below. So
please kindly help me resolve this.
* * *
from xml.dom import minidom
import os, sys
import xml.etree.ElementTree as ET
### ruta a gdal-data C:\Program Files (x86)\PostgreSQL\9.4\gdal-data
path = r"C:\Users\Administrator\Desktop\CHECKING\T2"
systemOutput = 'Shp'
print ("\n#### Execute python NY_osm2shapes")
print ("#### MONITORING CITIES")
print ("#### Conversor osm to shapes")
print ("#### OSM Path: " + path)
print "#### "
"""
Modify
Win: C:/Program Files/GDAL/gdal-data/osmconfig.ini
Linux: /usr/share/gdal/1.11/osmconfig.ini
report_all_ways=yes #activate lines without tag
attributes=landuse, plots #inside [lines]
attributes=landuse, plots #inside [multipolygons]
"""
### Check if path from argv
try:
if len(sys.argv) >= 2:
print("#### Path from argv: ", sys.argv[1])
path = sys.argv[1]
else:
print "#### Path set to", path
sys.exit()
except:
pass
#### Ogr config
print "\n#### Process: osm to shapes"
ogrOutputType = '' #-f "Esri Shapefile"'
ogrProjection = '' # -t_srs EPSG:4326' #+ epsg
ogrProjectionA = '' #-a_srs EPSG:3827'
ogrProjectionIn = '' #-s_srs EPSG:3827' #-t_srs EPSG:4326
ogrConfigType = ' --config OSM_USE_CUSTOM_INDEXING NO'
ogr2ogr = 'ogr2ogr %s %s %s %s %s %s -overwrite %s %s %s %s layer %s'
### Process
for l in os.walk(path):
archivos = l[2]
ruta = l[0]
for a in archivos:
if a.endswith(".osm"):
osmFile = os.path.join(ruta, a)
folder = os.path.join(ruta, systemOutput)
shapeFile = a[:-4]
ogrFileOutput = " -nln " + shapeFile
print "Archivo Shape: ", shapeFile,
layerType = shapeFile[-1]
if layerType=="0":
print "\t TIPO 0: Circles"
ogrSelectLayer = "lines"
ogrLcoType = ' -lco SHPT=ARC'
ogrSelect = ' -select ID_string'
elif layerType == "1":
print "\t TIPO 1: Blocks"
ogrSelectLayer = "lines"
ogrLcoType = ' -lco SHPT=ARC'
ogrSelect = ' -select Land_use'
elif layerType == "2":
print "\t TIPO 2: Plots"
ogrSelectLayer = "lines"
ogrLcoType = ' -lco SHPT=ARC'
ogrSelect = ' -select Plot'
elif layerType == "3":
print "\t TIPO 3: Medians"
ogrSelectLayer = "lines"
ogrLcoType = ' -lco SHPT=ARC'
ogrSelect = ' -select ID_string'
else:
print "ELSE ERROR*"
systemOutput = ogr2ogr % (ogrOutputType, folder, osmFile, ogrProjectionA, ogrProjectionIn, ogrProjection, ogrFileOutput, ogrLcoType, ogrConfigType, ogrSelect, ogrSelectLayer)
#print ("Fichero: ", osmFile, shapeFile, layerType, ogrSelectLayer)
os.system(systemOutput)
print "End process"
Answer: The way you used `os.walk`, `archivos` ends up holding the `osm` files of the
last `ruta` traversed in the tree structure. That is possibly (at least part
of) your problem, or it may become so in the future.
You have to use `os.walk` differently:
import os, re
ext_regx = '\.osm$'
archivos = []
for ruta, dirs, archs in os.walk( path ) :
for arch in archs :
if re.search( ext_regx, arch ) :
archivos.append( os.path.join( ruta, arch ) )
for osmFile in archivos :
print( osmFile )
...
Now if the code inside the `for` loop does not do what you mean to, that is
another issue. I suggest you:
1. Add `print( systemOutput )` to check that each command executed is what you intend it to be.
2. Check that the files and dirs refered to in that command are correct.
PS: each item in `archivos` will already contain the dir part, so you have to
`split` the folder part, instead of `join`ing.
PS2: you might need to use double backslashes for dirs. Also, bear in mind
[`os.sep`](http://docs.python.org/2/library/os.html#os.sep).
|
How to handle lists as single values in csv with Python
Question: I am handling a csv import and ran into trouble with a value that should be
in list form but is read as a string.
One of the csv rows looks like the following:
['name1', "['name2', 'name3']"]
As you can see the value in the second column is a list but is read as a
string. My problem is that I need to iterate through that list and the length
of that list can vary from row to row.
I am wondering where the problem lies. Can csv read not handle a list? Is
there a way to turn that string in the second column into a list rather than
using regex? Here is the code that I am running:
import csv
import os
content = []
file_path = os.path.abspath(file)
if os.path.exists(file_path):
with open(file_path, 'rb') as csvfile:
csvreader = csv.reader(csvfile, delimiter = ',')
for row in csvreader:
content.append(row)
for row in content[1:5]:
print row
print row[0], row[1]
for name in row[1]:
print name
The output row looks as above, but when iterating through row[1] it does not
iterate through the list of names but through every single character. Anyone
got an idea? Thanks in advance for any help!
Answer: An easy way to convert string to list is using `ast.literal_eval` function.
Example -
>>> import ast
>>> s = "['name2', 'name3']"
>>> s
"['name2', 'name3']"
>>> l = ast.literal_eval(s)
>>> l
['name2', 'name3']
>>> type(l)
<class 'list'>
From ast
[documentation](https://docs.python.org/2/library/ast.html#ast.literal_eval) -
> **ast.literal_eval(node_or_string)**
>
> Safely evaluate an expression node or a Unicode or Latin-1 encoded string
> containing a Python literal or container display. The string or node
> provided may only consist of the following Python literal structures:
> strings, numbers, tuples, lists, dicts, booleans, and None.
But if your complete csv looks like that, you should consider using
[`json`](https://docs.python.org/2/library/json.html) to parse the csv, rather
than `csv` module.
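Applied to your reading loop, that might look like this (only the second
column is converted):
for row in csvreader:
    row[1] = ast.literal_eval(row[1])  # "['name2', 'name3']" -> ['name2', 'name3']
    content.append(row)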
|
Is there a significantly better way to find the most common word in a list (Python only)
Question: Considering a trivial implementation of the problem, I am looking for a
significantly faster way to find the most common word in a Python list. As
part of Python interview I received feedback that this implementation is so
inefficient, that it is basically failure. Later, I tried many algorithms I
found, and only some heapsearch based solutions are a bit faster, but not
overwhelmingly (when scaled to tens of millions of items, heapsearch is about
30% faster; on trivial lengths like thousand, it is almost the same; using
timeit).
def stupid(words):
freqs = {}
for w in words:
freqs[w] = freqs.get(w, 0) + 1
return max(freqs, key=freqs.get)
As this is a simple problem and I have some experience (although I am nowhere
near an algorithms guru or competitive coder), I was surprised.
Of course, I would like to improve my skills and learn that so much better way
of solving the problem, so your input will be appreciated.
Clarification for duplicate status: My point is to find out if there is
actually much (asymptotically) better solution and other similar questions
have picked an answer that is not much better. If this is not enough to make
the question unique, close this question, of course.
**Update**
Thank you all for the input. Regarding the interview situation, I remain with
the impression that hand written search algorithm was expected (that may be
somewhat more efficient) and/or the reviewer was assessing code from the point
of view of another language, with different constant factors. Of course,
everyone can have own standards.
What was important for me was to validate if I am totally clueless (I had the
impression that I am not) or just usually write not the best possible code. It
is still possible that even better algorithm exists, but if it remained hidden
for the community here for a few days, I am fine with that.
I am picking the most upvoted answer - it seems fair to do so, even though
more than one person provided useful feedback.
**Minor update**
It seems that using defaultdict has a noticeable advantage over using the
'get' method, even if it is statically aliased.
Answer: That sounds like a bad interview question, probably a case of the interviewer
expecting a certain answer. It definitely sounds like s/he didn't clearly
explain what s/he was asking.
Your solution is `O(n)` (where `n = len(words)`), and using a heap doesn't
change that.
There are faster _approximate_ solutions...
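For reference, the idiomatic spelling of the same single-pass `O(n)` approach
uses `collections.Counter`; `most_common(1)` uses a heap selection rather than
a full sort:
from collections import Counter
def most_common_word(words):
    return Counter(words).most_common(1)[0][0]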
|
Memory leak in reading files from Google Cloud Storage at Google App Engine (python)
Question: Below is part of the python code running at Google App Engine. It fetches a
file from Google Cloud Storage by using cloudstorage client.
The problem is that each time the code reads a big file(about 10M), the memory
used in the instance will increase linearly. Soon, the process is terminated
due to "Exceeded soft private memory limit of 128 MB with 134 MB after
servicing 40 requests total".
class ReadGSFile(webapp2.RequestHandler):
def get(self):
import cloudstorage as gcs
self.response.headers['Content-Type'] = "file type"
read_path = "path/to/file"
with gcs.open(read_path, 'r') as fp:
buf = fp.read(1000000)
while buf:
self.response.out.write(buf)
buf = fp.read(1000000)
fp.close()
If I comment out the following line, then memory usage in instance does
change. So it should be the problem of webapp2.
self.response.out.write(buf)
It is supposed that webapp2 will release memory space after finishing the
response. But in my code, it does not.
Answer: As suggested in user voscausa's comment above, I changed the file-download
scheme to serve downloads through the Blobstore. Now the memory leak problem
is solved.
Reference:
<https://cloud.google.com/appengine/docs/python/blobstore/#Python_Using_the_Blobstore_API_with_Google_Cloud_Storage>
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
class GCSServingHandler(blobstore_handlers.BlobstoreDownloadHandler):
def get(self):
read_path = "/path/to/gcs file/" # The leading chars should not be "/gs/"
blob_key = blobstore.create_gs_key("/gs/" + read_path)
f_name = "file name"
f_type = "file type" # Such as 'text/plain'
self.response.headers['Content-Type'] = f_type
self.response.headers['Content-Disposition'] = "attachment; filename=\"%s\";"%f_name
self.response.headers['Content-Disposition'] += " filename*=utf-8''" + urllib2.quote(f_name.encode("utf8"))
self.send_blob(blob_key)
|
Python key pressed without Tk
Question: I use a Raspberry Pi via SSH from my Windows 7 machine and I am building a
robot: if you press an arrow key, it will move. I detect the key with the
TkInter module, but that needs a graphical environment, so it can't run when
I am only in an SSH terminal. Is there some module which can detect keys and
doesn't need a window?
Answer: I have not tried it, but a quick search has turned up this:
[example](http://jeffhoogland.blogspot.co.uk/2014/10/pyhook-for-linux-with-
pyxhook.html)
[github source](https://github.com/JeffHoogland/pyxhook)
which is essentially a linux implementation of pyhook (windows only)
so to use it:
import pyxhook
import time
#This function is called every time a key is presssed
def kbevent( event ):
#print key info
print event
#If the ascii value matches spacebar, terminate the while loop
if event.Ascii == 32:
global running
running = False
#Create hookmanager
hookman = pyxhook.HookManager()
#Define our callback to fire when a key is pressed down
hookman.KeyDown = kbevent
#Hook the keyboard
hookman.HookKeyboard()
#Start our listener
hookman.start()
#Create a loop to keep the application running
running = True
while running:
time.sleep(0.1)
#Close the listener when we are done
hookman.cancel()
|
Create a list of lists using Python
Question: I have a list of year-and-day values running from December through February,
from 2003 to 2005. I want to divide this list into a list of lists, each
holding the year-days from December to February:
a = ['2003337', '2003345', '2003353', '2003361', '2004001', '2004009', '2004017', '2004025', '2004033', '2004041', '2004049', '2004057', '2004337', '2004345', '2004353', '2004361', '2005001', '2005009', '2005017', '2005025', '2005033', '2005041', '2005049', '2005057']
Output should be like:
b = [['2003337', '2003345', '2003353', '2003361', '2004001', '2004009', '2004017', '2004025', '2004033', '2004041', '2004049', '2004057'] ['2004337', '2004345', '2004353', '2004361', '2005001', '2005009', '2005017', '2005025', '2005033', '2005041', '2005049', '2005057']]
and then loop over each inner list. I could use [even
splitting](http://stackoverflow.com/questions/312443/how-do-you-split-a-list-
into-evenly-sized-chunks-in-python), but there is a chance of missing
year-days, so it would be better not to split evenly. Any suggestions?
Answer: Convert to datetime, then group by the year whose end is nearest.
import datetime
import itertools
#convert from a "year-day" string to a datetime object
def datetime_from_year_day(s):
year = int(s[:4])
days = int(s[4:])
return datetime.datetime(year=year, month=1, day=1) + datetime.timedelta(days=days-1)
#returns the year whose end is closest to the date, whether in the past or future
def nearest_year_end(d):
if d.month <= 6:
return d.year-1
else:
return d.year
a = ['2003337', '2003345', '2003353', '2003361', '2004001', '2004009', '2004017', '2004025', '2004033', '2004041', '2004049', '2004057', '2004337', '2004345', '2004353', '2004361', '2005001', '2005009', '2005017', '2005025', '2005033', '2005041', '2005049', '2005057']
result = [list(v) for k,v in itertools.groupby(a, lambda s: nearest_year_end(datetime_from_year_day(s)))]
print result
Result:
[['2003337', '2003345', '2003353', '2003361', '2004001', '2004009', '2004017', '2004025', '2004033', '2004041', '2004049', '2004057'], ['2004337', '2004345', '2004353', '2004361', '2005001', '2005009', '2005017', '2005025', '2005033', '2005041', '2005049', '2005057']]
|
How to replace None only with empty string using pandas?
Question: the code below generates a _df_ :
import pandas as pd
from datetime import datetime as dt
import numpy as np
dates = [dt(2014, 1, 2, 2), dt(2014, 1, 2, 3), dt(2014, 1, 2, 4), None]
strings1 = ['A', 'B',None, 'C']
strings2 = [None, 'B','C', 'C']
strings3 = ['A', 'B','C', None]
vals = [1.,2.,np.nan, 4.]
df = pd.DataFrame(dict(zip(['A','B','C','D','E'],
[strings1, dates, strings2, strings3, vals])))
+---+------+---------------------+------+------+-----+
| | A | B | C | D | E |
+---+------+---------------------+------+------+-----+
| 0 | A | 2014-01-02 02:00:00 | None | A | 1 |
| 1 | B | 2014-01-02 03:00:00 | B | B | 2 |
| 2 | None | 2014-01-02 04:00:00 | C | C | NaN |
| 3 | C | NaT | C | None | 4 |
+---+------+---------------------+------+------+-----+
I would like to replace all `None` (real `None` in python, not str) inside
with `''`(empty string).
The **expected** _df_ is
+---+---+---------------------+---+---+-----+
| | A | B | C | D | E |
+---+---+---------------------+---+---+-----+
| 0 | A | 2014-01-02 02:00:00 | | A | 1 |
| 1 | B | 2014-01-02 03:00:00 | B | B | 2 |
| 2 | | 2014-01-02 04:00:00 | C | C | NaN |
| 3 | C | NaT | C | | 4 |
+---+---+---------------------+---+---+-----+
* * *
what I did is
`df = df.replace([None], [''], regex=True)`
But I got
+---+---+---------------------+---+------+---+
| | A | B | C | D | E |
+---+---+---------------------+---+------+---+
| 0 | A | 1388628000000000000 | | A | 1 |
| 1 | B | 1388631600000000000 | B | B | 2 |
| 2 | | 1388635200000000000 | C | C | |
| 3 | C | | C | | 4 |
+---+---+---------------------+---+------+---+
* * *
1. all the dates becomes big numbers
2. Even `NaT` and `NaN` are replaced, which I don't want.
How can I achieve that correctly and efficently?
Answer: It looks like `None` is being promoted to `NaN` and so you cannot use
`replace` like usual, the following works:
In [126]:
mask = df.applymap(lambda x: x is None)
cols = df.columns[(mask).any()]
for col in df[cols]:
df.loc[mask[col], col] = ''
df
Out[126]:
A B C D E
0 A 2014-01-02 02:00:00 A 1
1 B 2014-01-02 03:00:00 B B 2
2 2014-01-02 04:00:00 C C NaN
3 C NaT C 4
So we generate a mask of the `None` values using `applymap`, we then use this
mask to iterate over each column of interest and using the boolean mask set
the values.
|
Python - Efficiently find the set of all characters in a pandas DataFrame?
Question: I want to find the set of all unique characters contained within a pandas
DataFrame. One solution that works is given below:
from operator import add
set(reduce(add, map(unicode, df.values.flatten())))
However, the solution above takes a long time with large DataFrames. What are
more efficient ways of doing this?
I am trying to find all unique characters in a pandas DataFrame so I can
choose an appropriate delimiter when writing the DataFrame to disk as a csv.
Answer: Learned this from Jeff
[here](http://stackoverflow.com/questions/20084382/unique-values-in-a-pandas-
dataframe)
This should be doable using Pandas built-ins:
a = pd.DataFrame( data=np.random.randint(0,100000,(1000000,20)))
# now pull out unique values (less than a second for 2E7 data points)
b = pd.unique( a.values.ravel() )
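Since the goal is the set of unique _characters_ (to pick a safe delimiter),
you can then make a second, much smaller pass over just the unique values:
chars = set()
for v in pd.unique(a.values.ravel()):
    chars.update(unicode(v))  # use str(v) on Python 3
print(chars)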
|
Python - Pandas - Dataframe: Row Specific Conditional Column Offset
Question: I am trying to do a dataframe transformation that I cannot solve. I have tried
multiple approaches from stackoverflow and the pandas documentation: apply,
apply(lambda: ...), pivots, and joins. Too many attempts to list here, but not
sure which approach is the best or if maybe I tried the right approach with
the wrong syntax.
Basically, I have a dataframe, and I need to 1) offset the columns, 2) vary
the number of columns to offset by, depending on a variable in the dataframe,
3) create columns at the end of the dataframe where needed to accommodate the
offset, and 4) place zeros in the newly created intervals.
df1 = pd.DataFrame({'first' : ['John', 'Mary', 'Larry', 'jerry'], '1' : [5.5, 6.0,10,20], '2' : [100, 200, 300, 400], '3' : [150, 100, 240, 110], 'offset' : ([1,0,2,1])})
goal_df = pd.DataFrame({'first' : ['John', 'Mary', 'Larry', 'jerry'], '1' : [0.0, 6.0, 0.0, 0], '2' : [5.5, 200, 0.0, 20], '3' : [100, 100, 10, 400], '4' : [150, 0.0, 300, 110], '5' : [0.0, 0.0, 240, 0.0]})
df1
1 2 3 first offset
5.5 100 150 John 1
6.0 200 100 Mary 0
10.0 300 240 Larry 2
20.0 400 110 jerry 1
goal_df
1 2 3 4 5 first
0 5.5 100 150 0 John
6 200.0 100 0 0 Mary
0 0.0 10 300 240 Larry
0 20.0 400 110 0 jerry
This data set will have c. 500 rows and c. 120 columns. The amount of the
offset will vary between 0-12. I thought about doing this with base Python
functions, but I also found that difficult, and the time consumed by the
program would defeat the ultimate purpose, which is to take over some tasks
currently being done in Microsoft Excel.
I complain a lot about how Excel is inferior for big tasks like this, but so
far the spreadsheet offset() function in Excel does do this in a very
easy-to-use way, although with thousands of formulas it is very slow. I have
sold my workplace on the benefits of Python over Excel, and this is my first
real trial, so speed is very important to me: I'm trying to convince my
colleagues that Python can gobble up this spreadsheet much quicker than the
current Excel file, weighing in at 96Mb.
I came pretty close with the melt() function, taking the former column
numbers and adding the offset to them. However, I've had a lot of problems
trying to re-form the dataframe using pivot. No luck with apply or
apply(lambda)!
Answer: This is not especially elegant or concise but ought to do the trick. I find it
a little easier to shuffle columns around in numpy (also should be a bit
faster) so I first convert from a dataframe to array.
arr = df1.values[:,:-2] # just the numbers
offset = df1.values[:,-1] # just the offsets
column_pad = 2
arr2 = np.zeros( (arr.shape[0],arr.shape[1]+column_pad) )
Here's the key code, which simply shifts each row by the amount of the offset.
for i, j in enumerate(offset):
arr2[i,j:3+j] = arr[i]
array([[ 0. , 5.5, 100. , 150. , 0. ],
[ 6. , 200. , 100. , 0. , 0. ],
[ 0. , 0. , 10. , 300. , 240. ],
[ 0. , 20. , 400. , 110. , 0. ]])
Beyond that it's just a little bit of manual labor to add space for the
columns and put them in the right order.
df2 = df1.copy()
last_column = 6
for i in map(str,range(3,last_column)):
df2[i] = 0
df2 = df2[ map(str,range(1,last_column))+['first','offset']]
Then load arr2 into df2.
df2.loc[:,'1':'5'] = arr2
1 2 3 4 5 first offset
0 0 5.5 100 150 0 John 1
1 6 200.0 100 0 0 Mary 0
2 0 0.0 10 300 240 Larry 2
3 0 20.0 400 110 0 jerry 1
|
os.system(<command>) execution through Python :: Limitations?
Question: I'm writing a python (ver 2.7) script to automate the set of commands in this
Getting Started [example](http://inotool.org/quickstart) for
[INOTOOL](http://inotool.org/).
Problem: When I run this entire script, I repeatedly encounter these errors:
Current Directory is not empty
No project is found in this directory
No project is found in this directory
But when I run the script only up till the marked code line and manually type
in the next three lines, or when I run those last three lines (starting from
the "ino init -t blink" line) after manually entering the beep folder, then I
am able to execute the same code successfully.
Is there a limitation of os.system() that I'm encountering?
My code:
import os,sys
def upload():
os.system("cd /home/pi/Downloads")
os.system("mkdir beep")
os.system("cd beep") #will refer to this code junction in question description
os.system("ino init -t blink")
os.system("ino build")
os.system("ino upload")
sys.exit(0)
Answer: Yes, when `os.system()` commands are run for `cd` , it does not actually
change the current directory for the python process' context. From
[documentation](https://docs.python.org/2/library/os.html#os.system) \-
> **os.system(command)**
>
> Execute the command (a string) in a subshell. This is implemented by calling
> the Standard C function system(), and has the same limitations. Changes to
> sys.stdin, etc. are not reflected in the environment of the executed
> command.
So even though you are changing directory in os.system() call, the next
os.system call still occurs in same directory. Which could be causing your
issue.
You shoud try using
[`os.chdir()`](https://docs.python.org/2/library/os.html#os.chdir) to change
the directory instead of `os.system()` calls.
The Best would be to use `subprocess` module as @PadraicCunningham explains in
his answer.
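For completeness, a minimal sketch of the `subprocess` route, which sidesteps
the directory problem entirely by passing `cwd` (paths taken from your
question):
import os
import subprocess
def upload():
    base = '/home/pi/Downloads/beep'
    if not os.path.isdir(base):
        os.makedirs(base)  # replaces "mkdir beep"
    for cmd in (['ino', 'init', '-t', 'blink'], ['ino', 'build'], ['ino', 'upload']):
        subprocess.check_call(cmd, cwd=base)  # each command runs inside beep/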
|
Python: print the time zone from strftime
Question: I want to print the time zone. I used `%Z` but it doesn't print:
import datetime
now = datetime.datetime.now()
print now.strftime("%d-%m-%Y")
print now.strftime("%d-%b-%Y")
print now.strftime("%a,%d-%b-%Y %I:%M:%S %Z") # %Z doesn't work
Do I perhaps need to import `pytz`?
Answer: It is a documented behavior: `datetime.now()` returns a naive datetime object
and [`%Z` returns an empty string in such
cases](https://docs.python.org/2.7/library/datetime.html#strftime-and-
strptime-behavior). You need an aware datetime object instead.
To print a local timezone abbreviation, you could use [`tzlocal`
module](https://pypi.python.org/pypi/tzlocal) that can return your local
timezone as a [`pytz`](http://pytz.sourceforge.net) tzinfo object that may
contain a historical timezone info e.g., from [the tz
database](https://www.iana.org/time-zones/repository/tz-link.html):
#!/usr/bin/env python
from datetime import datetime
import tzlocal # $ pip install tzlocal
now = datetime.now(tzlocal.get_localzone())
print(now.strftime('%Z'))
# -> MSK
print(now.tzname())
# -> MSK
This code works for timezones with/without daylight saving time. It works
around and during DST transitions. It works if the local timezone had
different utc offset in the past even if the C library used by python has no
access to a historical timezone database on the given platform.
* * *
In Python 3.3+, when platform supports it, you could use [`.tm_zone`
attribute](https://docs.python.org/3/library/time.html#time.struct_time), to
get the tzname:
>>> import time
>>> time.localtime().tm_zone
'MSK'
Or using `datetime` module:
>>> from datetime import datetime, timezone
>>> datetime.now(timezone.utc).astimezone().tzname()
'MSK'
The code is portable but the result may be incorrect on some platforms
(without `.tm_zone` (`datetime` has to use `time.tzname` in this case) and
with ["interesting" timezones](http://bugs.python.org/issue1647654)).
On older Python versions, on a system with an "uninteresting" timezone, you
could use `time.tzname`:
>>> import time
>>> is_dst = time.daylight and time.localtime().tm_isdst > 0
>>> time.tzname[is_dst]
'MSK'
An example of an "interesting" timezone is Europe/Moscow timezone in 2010-2015
period.
Similar issues are discussed in [Getting computer's UTC offset in
Python](http://stackoverflow.com/a/3168394/4279).
|
Python: putting lists from a file into a list
Question: I'm a real beginner in Python.
I have a file with lists of coordinates. It looks like this:
[-122.661927,45.551161], [-98.51377733,29.655474], [-84.38042879, 33.83919567].
I'm trying to put this into a list with:
with open('file.txt', 'r') as f:
for line in f:
list.append(line)
The result I got is
['[-122.661927,45.551161], [-98.51377733,29.655474], [-84.38042879, 33.83919567]']
Could somebody help me with how I can get rid of the "'" marks at the
beginning and the end of the list?
Answer: Try using `ast.literal_eval`.
Example -
import ast
lst = []
with open('file.txt', 'r') as f:
for line in f:
lst.extend(ast.literal_eval(line))
From documentation -
> **ast.literal_eval(node_or_string)**
>
> Safely evaluate an expression node or a Unicode or Latin-1 encoded string
> containing a Python literal or container display. The string or node
> provided may only consist of the following Python literal structures:
> strings, numbers, tuples, lists, dicts, booleans, and None.
Also, please note it's bad to use `list` as a variable name, as it shadows
the built-in `list` type.
|
How to pass weights to a Seaborn FacetGrid
Question: I have a set of data that I'm trying to plot using a FacetGrid in seaborn.
Each data point has a weight associated with it, and I want to plot a weighted
histogram in each of the facets of the grid.
For example, say I had the following (randomly created) data set:
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
d = pd.DataFrame(np.array([np.random.randint(0, 6, 5000),
np.random.normal(0, 1., 5000),
np.random.uniform(0, 1, 5000)]).T,
columns=('cat', 'val', 'weight'))
This data is structured like this:
cat val weight
0 0 -0.844542 0.668081
1 0 -0.521177 0.521396
2 1 -1.160358 0.788465
3 0 -0.394765 0.115242
4 5 0.735328 0.003495
Normally, if I didn't have weights, I would plot it like this:
fg = sns.FacetGrid(d, col='cat', col_wrap=3)
fg.map(plt.hist, 'val')
This makes a grid of histograms where each histogram shows the distribution of
the variable `val` for one value of the category `cat`.
What I would like to do is to weight each of the histograms. If I were making
a single histogram with Matplotlib, I would do this:
plt.hist(d.val, weights=d.weight)
I tried passing the weights argument to `FacetGrid.map`, but it raises an
error due to the way seaborn slices the data internally to make the grid:
fg.map(plt.hist, 'val', weights=d.weight)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-33-1403d26cff86> in <module>()
9
10 fg = sns.FacetGrid(d, col='cat', col_wrap=3)
---> 11 fg.map(plt.hist, 'val', weights=d.weight)
/opt/conda/lib/python3.4/site-packages/seaborn/axisgrid.py in map(self, func, *args, **kwargs)
443
444 # Draw the plot
--> 445 self._facet_plot(func, ax, plot_args, kwargs)
446
447 # Finalize the annotations and layout
/opt/conda/lib/python3.4/site-packages/seaborn/axisgrid.py in _facet_plot(self, func, ax, plot_args, plot_kwargs)
527
528 # Draw the plot
--> 529 func(*plot_args, **plot_kwargs)
530
531 # Sort out the supporting information
/opt/conda/lib/python3.4/site-packages/matplotlib/pyplot.py in hist(x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, hold, **kwargs)
2894 histtype=histtype, align=align, orientation=orientation,
2895 rwidth=rwidth, log=log, color=color, label=label,
-> 2896 stacked=stacked, **kwargs)
2897 draw_if_interactive()
2898 finally:
/opt/conda/lib/python3.4/site-packages/matplotlib/axes/_axes.py in hist(self, x, bins, range, normed, weights, cumulative, bottom, histtype, align, orientation, rwidth, log, color, label, stacked, **kwargs)
5647 if len(w[i]) != len(x[i]):
5648 raise ValueError(
-> 5649 'weights should have the same shape as x')
5650 else:
5651 w = [None]*nx
ValueError: weights should have the same shape as x
So, is there any way to make a plot like this?
Answer: You'll need to write a little wrapper function around `plt.hist` that accepts
a vector of weights as a positional argument. Something like
def weighted_hist(x, weights, **kwargs):
plt.hist(x, weights=weights, **kwargs)
g = sns.FacetGrid(df, ...)
g.map(weighted_hist, "x_var", "weight_var")
g.set_axis_labels("x_var", "count")
|
Do not require authentication for GET requests from browser
Question: This question is closely related to [Do not require authentication for OPTIONS
requests](http://stackoverflow.com/questions/31274810/do-not-require-
authentication-for-options-requests)
My settings.py
REST_FRAMEWORK = {
'UNICODE_JSON': True,
'NON_FIELD_ERRORS_KEY': '__all__',
'DEFAULT_AUTHENTICATION_CLASSES': (
'rest_framework.authentication.TokenAuthentication',
),
'DEFAULT_PERMISSION_CLASSES': (
'platformt_core.something.permissions.DjangoObjectPermissionsOrOptions',
),
'DEFAULT_RENDERER_CLASSES': (
'rest_framework.renderers.JSONRenderer',
),
'ALLOWED_VERSIONS': ['v1'],
'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.NamespaceVersioning',
'TEST_REQUEST_DEFAULT_FORMAT': 'json',
'TEST_REQUEST_RENDERER_CLASSES': (
'rest_framework.renderers.JSONRenderer',
)
}
platformt_core/something/permissions.py
from rest_framework.permissions import DjangoObjectPermissions
OPTIONS_METHOD = 'OPTIONS'
class DjangoObjectPermissionsOrOptions(DjangoObjectPermissions):
def has_permission(self, request, view):
if request.method == OPTIONS_METHOD:
return True
else:
return super(DjangoObjectPermissionsOrOptions, self).has_permission(request, view)
When I do this request from browser:
GET /api/passenger/v1/order/ HTTP/1.1
Host: 127.0.0.1:8000
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Cookie: csrftoken=3XytVl8Oh2PJxcSs1ufI9TIZMOTC5Pix
I expect something neat like "Self describing APIs" described at
<http://www.django-rest-framework.org/topics/documenting-your-api/>
But instead I get:
HTTP/1.0 401 UNAUTHORIZED
Date: Wed, 08 Jul 2015 20:45:23 GMT
Server: WSGIServer/0.1 Python/2.7.6
Content-Type: application/json;q=0.8; charset=utf-8
WWW-Authenticate: Token
Allow: POST, OPTIONS
{"detail":"Authentication credentials were not provided."}
Is there a neat way to achieve that? I mean, I would like to have a browsable
API, but API requests should still be secured with authentication.
Answer: Can't you just use this?
'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.IsAuthenticatedOrReadOnly',)
This will allow read-only access to unauthenticated users.
|
Entry widget doesn't work
Question: I'm fairly new to Python and I need help with this problem. How do I get a
user to input something on a canvas? I've tried taking out the `x=` and `y=`,
but that doesn't work... after I run the module it says "Non-Keyword arg after
Keyword arg". Please help.
from tkinter import *
def main():
global window
global tkinter
global canvas
window = Tk()
canvas = Canvas(window, width=400, height=300, bg='blue')
e1 = Entry(canvas)
canvas.create_window(window=e1, x=100,y=100)
e1.pack()
canvas.pack()
window.mainloop()
main()
Answer: The position of the created window has to be passed as the first two
arguments, not as keyword arguments.
canvas.create_window(100, 100, window=e1)
Also, the `Entry` widget doesn't need to be packed when used in this manner,
so the `e1.pack()` line should be removed entirely.
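Putting both fixes together, the relevant part of `main()` becomes:

    canvas = Canvas(window, width=400, height=300, bg='blue')
    e1 = Entry(canvas)
    canvas.create_window(100, 100, window=e1)  # position first, widget as keyword
    canvas.pack()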
|
New to Python (3.4.3), trying to pip install basic libraries and receiving this message
Question: `Command "C:\Python34\python.exe -c "import setuptools,
tokenize;__file__='C:\\Users\\Jamey\\AppData\\Local\\Temp\\pip-
build-4xxi4hts\\numpy\\setup.py';exec(compile(getattr(tokenize, 'open',
open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install
--record C:\Users\Jamey\AppData\Local\Temp\pip-l8qukpar-record\install-
record.txt --single-version-externally-managed --compile" failed with error
code 1 in C:\Users\Jamey\AppData\Local\Temp\pip-build-4xxi4hts\numpy`
It has a lot of errors; do I need to add some basic libraries before installing
more advanced ones? It also tells me I need Microsoft Visual C++ 10.0.
Answer: For numpy and other such libraries, these are difficult to build on Windows
because they need to be compiled during installation. Setting up a proper
build environment is difficult on Windows.
You have a few choices:
1. Download a build of Python that includes these libraries. A popular package of Python that includes numpy (and other scientific libraries) is [Anaconda](https://store.continuum.io/cshop/anaconda/). The [scipy download page](http://www.scipy.org/install.html) lists some more options for you.
2. Download pre-compiled binaries for Windows. These are available as standard Windows installers. For example [here is a link](http://sourceforge.net/projects/numpy/files/NumPy/1.9.2/) to the downloads for numpy 1.9 (you can download the Windows installer from there).
3. A list of pre-compiled Python modules for Windows is [available here](http://www.lfd.uci.edu/~gohlke/pythonlibs/). I suggest you bookmark that page and when you have trouble installing something on Windows, chances are it will be listed there.
If you are downloading packages, make sure you download for the right version
of Python. If there are two versions (a 64 bit and a 32 bit) available, **this
refers to the version of _Python_ not Windows** that you have installed.
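If you grab a `.whl` file from that last page, it can be installed directly with pip; the filename below is illustrative, so use the one matching your Python version and bitness:

    pip install numpy-1.9.2+mkl-cp34-none-win_amd64.whl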
|
Parsing/Printing depending on char length in Python; only 98% on CodeEval
Question: So I'm working through a CodeEval problem right now, and, for some reason, I
can't get past a 98/100 score.
Here's a link to the challenge on CodeEval:
<https://www.codeeval.com/open_challenges/167/>
Here's my code:
# -*- coding: utf-8 -*-
import sys
zeefile = open(sys.argv[1])
for line in zeefile:
if len(line) <= 55:
sys.stdout.write(line)
elif len(line) > 55:
line = line[:40].rsplit(" ", 1)[0]
sys.stdout.write(line)
sys.stdout.write('... <Read More> \n')
I've beaten my head against this wall for several hours, even with a few
devs far more talented than I'll ever be.
We're perplexed, to say the least. Ultimately, it's not a big deal, but I'd
like to know if there's something being missed here so I can learn from it.
I've checked the code over and over, I've checked the input, I've checked the
output...I can't find any inconsistency, or anything that suggests I'm missing
that last 2% of a successful solution.
Any idea what we're missing as to why this isn't coming back as a 100% legit
solution to the problem? I'm hoping some fresh eyes and sharp minds can help
me out on this one! Thank you very much!
Answer: Try the following code (100% on Code Eval):
import sys
with open(sys.argv[1], 'r') as in_f:
for line in in_f:
line = line.strip()
if len(line) > 55:
line = "{0}... <Read More>".format(line[:40].rsplit(" ", 1)[0].rstrip())
sys.stdout.write("{0}\n".format(line))
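The likely off-by-one in the original: lines read from the file still carry their trailing `\n`, so `len(line)` is one more than the visible text (and the final line may lack the newline entirely); stripping before measuring, as above, removes that inconsistency.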
I used this file:
Tom exhibited.
Amy Lawrence was proud and glad, and she tried to make Tom see it in her face - but he wouldn't look.
Tom was tugging at a button-hole and looking sheepish.
Two thousand verses is a great many - very, very great many.
Tom's mouth watered for the apple, but he stuck to his work.
and got the following output:
Tom exhibited.
Amy Lawrence was proud and glad, and... <Read More>
Tom was tugging at a button-hole and looking sheepish.
Two thousand verses is a great many -... <Read More>
Tom's mouth watered for the apple, but... <Read More>

|
Plotting a bar chart from data in a python dictionary
Question: I have a dictionary in my python script that contains data I want to create a
bar chart with.
I used matplotlib and was able to generate the bar chart image and save it.
However that was not good enough because I want to send that bar chart out as
an email and I cannot embed the bar chart image in my email's html body. I
know I can use the `<img>` tag but the issue is that the email body is
populated in Jenkins as part of a pre-send script and externally generated
images cannot be sent out that way.
I am thinking of using D3.js to generate visualizations with the dictionary
data. I need help with where and how to start and glue python and js together.
FYI, currently I am using the HTML package in Python to generate HTML from Python
code. This allows me to generate HTML tables very easily, but HTML has no tag for
a bar chart etc., hence the issue.
Answer: As a minimal example for an html format bar chart generated from a python
`matplotlib` and `mpld3`, you have,
import matplotlib.pyplot as plt
import numpy as np
import mpld3
fig, ax = plt.subplots(1,1)
N = 10
bar = ax.bar(range(N),np.random.normal(size=N))
mpld3.save_html(fig,'./out.html')
and then use `out.html` or call `mpld3.fig_to_html(fig)` to generate the code
or `mpld3.fig_to_dict` to output a json-serializable dictionary representation
of the figure.
|
How to enable WASAPI exclusive mode in pyaudio
Question: I'm using [these](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio)
precompiled binaries of pyaudio with WASAPI support. I want to play a wav file
via WASAPI. I found index of default output device for this api:
import pyaudio
p = pyaudio.PyAudio()
print p.get_host_api_info_by_index(3)
>>{'index': 3, 'name': u'Windows WASAPI', 'defaultOutputDevice': 11L, 'type': 13L, 'deviceCount': 3L, 'defaultInputDevice': 12L, 'structVersion': 1L}
Then I play a wav file via this device:
import pyaudio
import wave
CHUNK = 1024
wf = wave.open('test.wav', 'rb')
# instantiate PyAudio (1)
p = pyaudio.PyAudio()
# open stream (2)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
channels=wf.getnchannels(),
rate=wf.getframerate(),
output_device_index=11,
output=True)
# read data
data = wf.readframes(CHUNK)
# play stream (3)
while data != '':
stream.write(data)
data = wf.readframes(CHUNK)
# stop stream (4)
stream.stop_stream()
stream.close()
# close PyAudio (5)
p.terminate()
When file is playing I'm still able to hear another sounds in the system, but
in exclusive WASAPI mode all other sounds must be blocked. So how to enable
WASAPI exclusive mode in pyaudio?
Answer: We need to change the pyaudio sources; specifically, we need to modify
_portaudiomodule.c.
Include pa_win_wasapi.h:
#include "pa_win_wasapi.h"
Change this line:
outputParameters->hostApiSpecificStreamInfo = NULL;
to this:
struct PaWasapiStreamInfo wasapiInfo;
wasapiInfo.size = sizeof(PaWasapiStreamInfo);
wasapiInfo.hostApiType = paWASAPI;
wasapiInfo.version = 1;
wasapiInfo.flags = (paWinWasapiExclusive|paWinWasapiThreadPriority);
wasapiInfo.threadPriority = eThreadPriorityProAudio;
outputParameters->hostApiSpecificStreamInfo = (&wasapiInfo);
Now we need to compile pyaudio.
1. Place portaudio dir in pyaudio with name portaudio-v19, name is important
2. Install MinGW/MSYS: gcc, make and MSYS console we need
3. In MSYS console cd to portaudio-v19
4. `./configure --with-winapi=wasapi --enable-shared=no`
5. `make`
6. `cd ..`
7. change these lines:
`external_libraries += ['winmm']`
`extra_link_args += ['-lwinmm']`
in setup.py on these:
`external_libraries += ["winmm","ole32","uuid"]`
`extra_link_args += ["-lwinmm","-lole32","-luuid"]`
8. `python setup.py build --static-link -cmingw32`
9. `python setup.py install --skip-build`
That's all. Now pyaudio is able to play sound in WASAPI exclusive mode.
|
sklearn classifier get ValueError: bad input shape
Question: I have a CSV whose structure is `CAT1,CAT2,TITLE,URL,CONTENT`; CAT1, CAT2,
TITLE and CONTENT are in Chinese.
I want to train `LinearSVC` or `MultinomialNB` with X (TITLE) and
feature (CAT1, CAT2); both get this error. Below is my code:
PS: I write below code through this example [scikit-learn
text_analytics](https://github.com/scikit-learn/scikit-
learn/blob/master/doc/tutorial/text_analytics/solutions/exercise_02_sentiment.py)
import numpy as np
import csv
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
label_list = []
def label_map_target(label):
''' map chinese feature name to integer '''
try:
idx = label_list.index(label)
except ValueError:
idx = len(label_list)
label_list.append(label)
return idx
c1_list = []
c2_list = []
title_list = []
with open(csv_file, 'r') as f:
# row_from_csv is for shorting this example
for row in row_from_csv(f):
        c1_list.append(label_map_target(row[0]))
        c2_list.append(label_map_target(row[1]))
title_list.append(row[2])
data = np.array(title_list)
target = np.array([c1_list, c2_list])
print target.shape
# (2, 4405)
target = target.reshape(4405,2)
print target.shape
# (4405, 2)
docs_train, docs_test, y_train, y_test = train_test_split(
data, target, test_size=0.25, random_state=None)
# vect = TfidfVectorizer(tokenizer=jieba_tokenizer, min_df=3, max_df=0.95)
# use custom chinese tokenizer get same error
vect = TfidfVectorizer(min_df=3, max_df=0.95)
docs_train= vect.fit_transform(docs_train)
clf = LinearSVC()
clf.fit(docs_train, y_train)
error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-24-904eb9af02cd> in <module>()
1 clf = LinearSVC()
----> 2 clf.fit(docs_train, y_train)
C:\Python27\lib\site-packages\sklearn\svm\classes.pyc in fit(self, X, y)
198
199 X, y = check_X_y(X, y, accept_sparse='csr',
--> 200 dtype=np.float64, order="C")
201 self.classes_ = np.unique(y)
202
C:\Python27\lib\site-packages\sklearn\utils\validation.pyc in check_X_y(X, y, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric)
447 dtype=None)
448 else:
--> 449 y = column_or_1d(y, warn=True)
450 _assert_all_finite(y)
451 if y_numeric and y.dtype.kind == 'O':
C:\Python27\lib\site-packages\sklearn\utils\validation.pyc in column_or_1d(y, warn)
483 return np.ravel(y)
484
--> 485 raise ValueError("bad input shape {0}".format(shape))
486
487
ValueError: bad input shape (3303, 2)
Answer: Thanks to @meelo, I solved this problem. As he says, in my code `data` is the
feature vector and `target` is the target value; I mixed up the two things.
I learned that TfidfVectorizer transforms data into [data, feature], and each data
point should map to just one target.
If I want to predict two types of target, I need two distinct targets:
1. `target_C1` with all C1 value
2. `target_C2` with all C2 value.
Then use the two targets and the original data to train one classifier per
target, as sketched below.
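A minimal sketch of that, reusing `data`, `c1_list` and `c2_list` from the question (untested; assumes the same vectorizer settings):

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    target_C1 = np.array(c1_list)  # C1 labels
    target_C2 = np.array(c2_list)  # C2 labels

    def make_clf():
        # one text-classification pipeline per target
        return Pipeline([('vect', TfidfVectorizer(min_df=3, max_df=0.95)),
                         ('svc', LinearSVC())])

    clf_C1 = make_clf().fit(data, target_C1)
    clf_C2 = make_clf().fit(data, target_C2)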
|
python average of random sample many times
Question: I am working with pandas and I wish to sample 2 stocks from **each trade
date** and store as part of the dataset the average "Stock_Change" and the
average "Vol_Change" for the given day in question based on the sample taken
(in this case, 2 stocks per day). The actual data is much larger spanning
years and hundreds of names. My sample will be of 100 names, I just use 2 for
the purposes of this question.
Sample data set:
In [3]:
df
Out[3]:
Date Symbol Stock_Change Vol_Change
0 1/1/2008 A -0.05 0.07
1 1/1/2008 B -0.06 0.17
2 1/1/2008 C -0.05 0.07
3 1/1/2008 D 0.05 0.13
4 1/1/2008 E -0.03 -0.10
5 1/2/2008 A 0.03 -0.17
6 1/2/2008 B 0.08 0.34
7 1/2/2008 C 0.03 0.17
8 1/2/2008 D 0.06 0.24
9 1/2/2008 E 0.02 0.16
10 1/3/2008 A 0.02 0.05
11 1/3/2008 B 0.01 0.39
12 1/3/2008 C 0.05 -0.17
13 1/3/2008 D -0.01 0.37
14 1/3/2008 E -0.06 0.23
15 1/4/2008 A 0.03 0.31
16 1/4/2008 B -0.07 0.16
17 1/4/2008 C -0.06 0.29
18 1/4/2008 D 0.00 0.09
19 1/4/2008 E 0.00 -0.02
20 1/5/2008 A 0.04 -0.04
21 1/5/2008 B -0.06 0.16
22 1/5/2008 C -0.08 0.07
23 1/5/2008 D 0.09 0.16
24 1/5/2008 E 0.06 0.18
25 1/6/2008 A 0.00 0.22
26 1/6/2008 B 0.08 -0.13
27 1/6/2008 C 0.07 0.18
28 1/6/2008 D 0.03 0.32
29 1/6/2008 E 0.01 0.29
30 1/7/2008 A -0.08 -0.10
31 1/7/2008 B -0.09 0.23
32 1/7/2008 C -0.09 0.26
33 1/7/2008 D 0.02 -0.01
34 1/7/2008 E -0.05 0.11
35 1/8/2008 A -0.02 0.36
36 1/8/2008 B 0.03 0.17
37 1/8/2008 C 0.00 -0.05
38 1/8/2008 D 0.08 -0.13
39 1/8/2008 E 0.07 0.18
One other point: the samples cannot contain the same security more than once
(sample without replacement). My guess is that this is a good R question, but I
don't know the first thing about R.
I have no idea even how to start this question.
Thanks in advance for any help.
## Edit by OP
I tried this but don't seem to be able to get it to work on a the group-by
dataframe (grouped by Symbol and Date):
In [35]:
import numpy as np
import pandas as pd
from random import sample
# create random index
rindex = np.array(sample(range(len(df)), 10))
# get 10 random rows from df
dfr = df.ix[rindex]
In [36]:
dfr
Out[36]:
Date Symbol Stock_Change Vol_Change
6 1/2/2008 B 8% 34%
1 1/2/2008 B -6% 17%
37 1/3/2008 C 0% -5%
25 1/1/2008 A 0% 22%
3 1/4/2008 D 5% 13%
12 1/3/2008 C 5% -17%
10 1/1/2008 A 2% 5%
2 1/3/2008 C -5% 7%
26 1/2/2008 B 8% -13%
17 1/3/2008 C -6% 29%
## OP Edit #2
As I read the question I realize that I may not have been very clear. What I
want to do is sample the data many times (call it X) for each day and in
essence end up with X times "# of dates" as my new dataset. This may not look
like it makes sense with the data I am showing, but my actual data has 500 names
and 2 years (2 x 365 = 730) of dates, and I wish to sample 50 random names
for each day for a total of 50 x 730 = 36,500 data points.
first attempt gave this:
In [10]:
# do sampling: get a random subsample with size 3 out of 5 symbols for each date
# ==============================
def get_subsample(group, sample_size=3):
symbols = group.Symbol.values
symbols_selected = np.random.choice(symbols, size=sample_size, replace=False)
return group.loc[group.Symbol.isin(symbols_selected)]
df.groupby(['Date']).apply(get_subsample).reset_index(drop=True)
Out[10]:
Date Symbol Stock_Change Vol_Change
0 1/1/2008 A -5% 7%
1 1/1/2008 A 3% -17%
2 1/1/2008 A 2% 5%
3 1/1/2008 A 3% 31%
4 1/1/2008 A 4% -4%
5 1/1/2008 A 0% 22%
6 1/1/2008 A -8% -10%
7 1/1/2008 A -2% 36%
8 1/2/2008 B -6% 17%
9 1/2/2008 B 8% 34%
10 1/2/2008 B 1% 39%
11 1/2/2008 B -7% 16%
12 1/2/2008 B -6% 16%
13 1/2/2008 B 8% -13%
14 1/2/2008 B -9% 23%
15 1/2/2008 B 3% 17%
16 1/3/2008 C -5% 7%
17 1/3/2008 C 3% 17%
18 1/3/2008 C 5% -17%
19 1/3/2008 C -6% 29%
20 1/3/2008 C -8% 7%
21 1/3/2008 C 7% 18%
22 1/3/2008 C -9% 26%
23 1/3/2008 C 0% -5%
24 1/4/2008 D 5% 13%
25 1/4/2008 D 6% 24%
26 1/4/2008 D -1% 37%
27 1/4/2008 D 0% 9%
28 1/4/2008 D 9% 16%
29 1/4/2008 D 3% 32%
30 1/4/2008 D 2% -1%
31 1/4/2008 D 8% -13%
32 1/5/2008 E -3% -10%
33 1/5/2008 E 2% 16%
34 1/5/2008 E -6% 23%
35 1/5/2008 E 0% -2%
36 1/5/2008 E 6% 18%
37 1/5/2008 E 1% 29%
38 1/5/2008 E -5% 11%
39 1/5/2008 E 7% 18%
Answer:
import pandas as pd
import numpy as np
# replicate your data structure
# ==============================
np.random.seed(0)
dates = pd.date_range('2008-01-01', periods=100, freq='B')
symbols = 'A B C D E'.split()
multi_index = pd.MultiIndex.from_product([dates, symbols], names=['Date', 'Symbol'])
stock_change = np.random.randn(500)
vol_change = np.random.randn(500)
df = pd.DataFrame({'Stock_Change': stock_change, 'Vol_Change': vol_change}, index=multi_index).reset_index()
# do sampling: get a random subsample with size 3 out of 5 symbols for each date
# ==============================
def get_subsample(group, X=100, sample_size=3):
frame = pd.DataFrame(columns=['sample_{}'.format(x) for x in range(1,X+1)])
for col in frame.columns.values:
frame[col] = group.loc[group.Symbol.isin(np.random.choice(symbols, size=sample_size, replace=False)), ['Stock_Change', 'Vol_Change']].mean()
return frame.mean(axis=1)
result = df.groupby(['Date']).apply(get_subsample)
Out[169]:
Stock_Change Vol_Change
Date
2008-01-01 1.3937 0.2005
2008-01-02 0.0406 -0.7280
2008-01-03 0.6073 -0.2699
2008-01-04 0.2310 0.7415
2008-01-07 0.0718 -0.7269
2008-01-08 0.3808 -0.0584
2008-01-09 -0.5595 -0.2968
2008-01-10 0.3919 -0.2741
2008-01-11 -0.4856 0.0386
2008-01-14 -0.4700 -0.4090
... ... ...
2008-05-06 0.1510 0.1628
2008-05-07 -0.1452 0.2824
2008-05-08 -0.4626 0.2173
2008-05-09 -0.2984 0.6324
2008-05-12 -0.3817 0.7698
2008-05-13 0.5796 -0.4318
2008-05-14 0.2875 0.0067
2008-05-15 0.0269 0.3559
2008-05-16 0.7374 0.1065
2008-05-19 -0.4428 -0.2014
[100 rows x 2 columns]
|
Why is my program faster than the one using a python built in function?
Question: Ok so, I was doing a puzzle on coderbyte, and here is what the puzzle stated:
Have the function SimpleMode(arr) take the array of numbers stored in arr and
return the number that appears most frequently (the mode). For example: if arr
contains [10, 4, 5, 2, 4] the output should be 4. If there is more than one
mode return the one that appeared in the array first (ie. [5, 10, 10, 6, 5]
should return 5 because it appeared first). If there is no mode return -1. The
array will not be empty.
So here is my program:
import time
from random import randrange
def SimpleMode(arr):
bestMode=0
numTimes=0
for x in range(len(arr)):
if len(arr)>0:
currentNum=arr[0]
currentMode=0
while currentNum in arr:
currentMode+=1
arr.remove(currentNum)
if currentMode>numTimes:
numTimes=currentMode
bestMode=currentNum
else: break
if numTimes==1: bestMode=-1
return bestMode
start_time = time.time()
numbers = [randrange(1,10) for x in range(0, 1000)]
print(SimpleMode(numbers))
print("--- %s seconds ---" % (time.time() - start_time))
And here is a much simpler program which someone else wrote:
import time
from random import randrange
def SimpleMode(arr):
best = -1
best_count = 1
for c in arr:
if arr.count(c) > best_count:
best = c
best_count = arr.count(c)
return best
start_time = time.time()
numbers = [randrange(1,10) for x in range(0, 1000)]
print(SimpleMode(numbers))
print("--- %s seconds ---" % (time.time() - start_time))
Now I know that using my method of timing this it depends on what my CPU is
doing and whatnot so this is not the most accurate way, but leaving that aside
what I found was that my computer took 0.012000 seconds to run my program, yet
it took 0.025001 seconds to run the second program.
Now here is where I am puzzled. My program which I have written myself takes
less than half the time the other program takes which uses a built-in python
function and has only one for-loop whereas my program has a while loop inside
a for-loop.
Can anyone provide any insight into this?
Answer: The second program calls `count` twice each iteration, and since `count` is
O(n) (that is, it has to walk through the entire array, just like a for loop
would), the time quickly adds up.
That said, your program can be reduced even further:
import collections

def SimpleMode(arr):
    if not arr:
        return -1
    counts = collections.Counter(arr)
    # break ties by earliest occurrence; per the spec, return -1 when nothing repeats
    best = max(counts, key=lambda k: (counts[k], -arr.index(k)))
    return best if counts[best] > 1 else -1
In addition, note that your initial program mutates its input (it effectively
destroys the list you pass it because of the `.remove` calls, which will suck
if you wanted to do anything with `arr` after calling `SimpleMode`).
And finally, in Python the `[1, 2, 3, 4]` construct is called a list, not an
array. There exists something called an array in Python, and it's _not_ this
(most of the time it's a NumPy array, but it can also be an array from the
`array` module in the stdlib).
|
Django-storages-redux: cannot import name 'setting'
Question: I'm trying to deploy a Django website to Amazon Web Services using python 3.
Now, django-storages is not compatible with python3, so I installed django-
storages-redux, which is compatible. But, when I'm trying to:
python3 manage.py runserver
I'm getting this:
Traceback (most recent call last):
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/core/handlers/base.py", line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/jokes/views.py", line 6, in home_page
response = render(request, 'home.html')
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/shortcuts.py", line 67, in render
template_name, context, request=request, using=using)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/loader.py", line 99, in render_to_string
return template.render(context, request)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/backends/django.py", line 74, in render
return self.template.render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/base.py", line 209, in render
return self._render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/base.py", line 201, in _render
return self.nodelist.render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/templatetags/static.py", line 105, in render
url = self.url(context)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py", line 16, in url
return static(path)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py", line 9, in static
return staticfiles_storage.url(path)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/utils/functional.py", line 226, in inner
self._setup()
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 394, in _setup
self._wrapped = get_storage_class(settings.STATICFILES_STORAGE)()
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/core/files/storage.py", line 329, in get_storage_class
return import_string(import_path or settings.DEFAULT_FILE_STORAGE)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/utils/module_loading.py", line 26, in import_string
module = import_module(module_path)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2231, in _gcd_import
File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1448, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/storages/backends/s3boto.py", line 24, in <module>
from storages.utils import setting
ImportError: cannot import name 'setting'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.4/wsgiref/handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/contrib/staticfiles/handlers.py", line 63, in __call__
return self.application(environ, start_response)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/core/handlers/wsgi.py", line 189, in __call__
response = self.get_response(request)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/core/handlers/base.py", line 218, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/core/handlers/base.py", line 261, in handle_uncaught_exception
return debug.technical_500_response(request, *exc_info)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/views/debug.py", line 97, in technical_500_response
html = reporter.get_traceback_html()
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/views/debug.py", line 383, in get_traceback_html
c = Context(self.get_traceback_data(), use_l10n=False)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/views/debug.py", line 328, in get_traceback_data
frames = self.get_traceback_frames()
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/views/debug.py", line 501, in get_traceback_frames
'vars': self.filter.get_traceback_frame_variables(self.request, tb.tb_frame),
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/views/debug.py", line 234, in get_traceback_frame_variables
cleansed[name] = self.cleanse_special_types(request, value)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/views/debug.py", line 189, in cleanse_special_types
if isinstance(value, HttpRequest):
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/utils/functional.py", line 226, in inner
self._setup()
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/contrib/staticfiles/storage.py", line 394, in _setup
self._wrapped = get_storage_class(settings.STATICFILES_STORAGE)()
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/core/files/storage.py", line 329, in get_storage_class
return import_string(import_path or settings.DEFAULT_FILE_STORAGE)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/django/utils/module_loading.py", line 26, in import_string
module = import_module(module_path)
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2231, in _gcd_import
File "<frozen importlib._bootstrap>", line 2214, in _find_and_load
File "<frozen importlib._bootstrap>", line 2203, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1448, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/home/antoni4040/Documents/django-trunk/django/bin/thermostatis/env/lib/python3.4/site-packages/storages/backends/s3boto.py", line 24, in <module>
from storages.utils import setting
ImportError: cannot import name 'setting'
Strange, what is that setting file that it tries to import? Has anyone
succeeded deploying like this?
Answer: I ran into this issue as well. I was able to resolve by **starting a fresh
environment and reinstalling the package**. I believe I had two packages
fighting over the 'storages' name.
This was with Python 3.4, Django 1.7.1, and django-storages-redux 1.2.3.
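For reference, "starting a fresh environment" means something along these lines (paths and package list illustrative):

    virtualenv -p python3.4 fresh_env
    source fresh_env/bin/activate
    pip install django django-storages-redux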
|
Python - Readline skipping characters
Question: I ran into a curious problem while parsing json objects in large text files,
and the solution I found doesn't really make much sense. I was working with
the following script. It copies bz2 files, unzips them, then parses each line
as a json object.
import os, sys, json
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# USER INPUT
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
args = sys.argv
extractDir = outputDir = ""
if (len(args) >= 2):
extractDir = args[1]
else:
extractDir = raw_input('Directory to extract from: ')
if (len(args) >= 3):
outputDir = args[2]
else:
outputDir = raw_input('Directory to output to: ')
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# RETRIEVE FILE
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
tweetModel = [u'id', u'text', u'lang', u'created_at', u'retweeted', u'retweet_count', u'in_reply_to_user_id', u'coordinates', u'place', u'hashtags', u'in_reply_to_status_id']
filenames = next(os.walk(extractDir))[2]
for file in filenames:
if file[-4:] != ".bz2":
continue
os.system("cp " + extractDir + '/' + file + ' ' + outputDir)
os.system("bunzip2 " + outputDir + '/' + file)
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# PARSE DATA
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
input = open (outputDir + '/' + file[:-4], 'r')
output = open (outputDir + '/p_' + file[:-4], 'w+')
for line in input.readlines():
try:
tweet = json.loads(line)
for field in enumerate(tweetModel):
if tweet.has_key(field[1]) and tweet[field[1]] != None:
if field[0] != 0:
output.write('\t')
fieldData = tweet[field[1]]
if not isinstance(fieldData, unicode):
fieldData = unicode(str(fieldData), "utf-8")
output.write(fieldData.encode('utf8'))
else:
output.write('\t')
except ValueError as e:
print ("Parse Error: " + str(e))
print line
line = input.readline()
quit()
continue
print "Success! " + str(len(line))
input.flush()
output.write('\n')
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
# REMOVE OLD FILE
# =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
os.system("rm " + outputDir + '/' + file[:-4])
While reading in certain lines in the `for line in input.readlines():` loop,
the lines would occasionally be truncated at inconsistent locations. Since the
newline character was truncated as well, it would keep reading until it found
the newline character at the end of the next json object. The result was an
incomplete json object followed by a complete json object, all considered one
line by the parser. I could not find the reason for this issue, but I did find
that changing the loop to
filedata = input.read()
for line in filedata.splitlines():
worked. Does anyone know what is going on here?
Answer: After looking at the source code for file.readlines and string.splitlines, I
think I see what's up. **Note: this is Python 2.7 source code, so if you're
using another version the answer may or may not apply.**
[readlines](https://hg.python.org/cpython/file/cfb84be6c7fc/Objects/fileobject.c#l1662)
uses the function
[Py_UniversalNewlineFread](https://hg.python.org/cpython/file/cfb84be6c7fc/Objects/fileobject.c#l2816)
to test for a newline
[splitlines](https://hg.python.org/cpython/file/cfb84be6c7fc/Objects/stringlib/split.h#l339)
uses a constant
[STRINGLIB_ISLINEBREAK](https://hg.python.org/cpython/file/cfb84be6c7fc/Objects/stringlib/stringdefs.h#l15)
that just tests for \n or \r. I suspect Py_UniversalNewlineFread is
picking up some character in the file stream as a line break when it's not really
intended as one (possibly from the encoding, I don't know). But when you
dump all that same data into a string, splitlines checks it against only
\r and \n; there's no match, so splitlines moves on until the real line
break is encountered and you get your intended line.
|
python equivalent of qx in perl-user/password prompt included
Question: In Perl, if I have execute a script and pass a password to it programatically,
I would do this:
my $result = qx { "Calling some script which prompts for a user and password" <<EOF
administrator
password
EOF
};
It executes the following while capturing its output:
/bin/sh -c ' "Calling some script which prompts for a user and password" <<EOF
administrator
password
EOF
'
May I know the equivalent of this in Python?
Answer: If I understand your question correctly, you're trying to start an external
script in its own process, and send that script some data (a password)
via its standard input.
In Python, this is done using the
[`subprocess`](https://docs.python.org/2/library/subprocess.html#module-
subprocess) module. This module runs external scripts and has `stdin`,
`stdout` and `stderr` parameters.
For example, suppose that the script is `md5sum`. You would like to send it
the password `secret`:
>>> import subprocess as sp
>>> p = sp.Popen('md5sum', stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
>>> p.communicate(input="secret") # <-- Your password
('5ebe2294ecd0e0f08eab7690d2a6ee69 -\n', '') # <-- Process output
`p.communicate()` returns an (`stdout`, `stderr`) tuple, which is useful for
processing the script output.
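To mirror the Perl heredoc from the question, where the script name and credentials are placeholders:

    >>> p = sp.Popen(['some_script'], stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
    >>> out, err = p.communicate(input="administrator\npassword\n")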
You may also find [this answer](http://stackoverflow.com/a/8475367/51197)
useful.
|
How do I get the return value from a process run by subprocess.Popen?
Question: I am trying to assign the return value of `python --version` to a variable
using the code below.:
#! /usr/bin/env python
import os
import sys
import subprocess
os.system('pwd')
print ("check the version of python")
output = subprocess.Popen(["python --version"], stdout=subprocess.PIPE, shell=True)
(ver, error)= output.communicate()
print "python is:",ver
The output I'm getting is :
/home/mobaxterm/Desktop/mypy
check the version of python
Python 2.7.10
python is:
Please let me know what I am missing.
Thanks in advance.
Answer: Python writes its version information to `stderr` (variable `error`, in your
code) - like most other products.
Nothing was written to `stdout`. You will notice that the variable `ver`
**was** printed, it was an empty string.
You also failed to direct `stderr`:
output = subprocess.Popen(["python --version"], stdout=subprocess.PIPE,
stderr=subprocess.PIPE, shell=True)
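(As an aside: from Python 3.4 onward, `python --version` writes to stdout instead of stderr, so if the interpreter version can vary, check both `ver` and `error`.)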
|
scrapy: spider destructor (__del__) never executed
Question: I have created a scrapy spider that works well (it does what it is supposed to
do), but when it finishes working it doesn't execute the destructor code (`__del__`).
Versions are:
- python 2.7.3
- scrapy 0.24.6
- Fedora 18
class MySpider(scrapy.Spider):
stuff
def __del__(self):
stuff_1
How could I execute my "stuff-1" code when MySpider is done?
Answer: Use signals. In particular,
[`spider_closed`](http://doc.scrapy.org/en/latest/topics/signals.html#scrapy.signals.spider_closed)
signal:
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher
class MySpider(scrapy.Spider):
def __init__(self):
dispatcher.connect(self.spider_closed, signals.spider_closed)
def spider_closed(self, spider):
stuff_1()
|
ValueError: endog must be in the unit interval
Question: While using statsmodels, I am getting this weird error: `ValueError: endog
must be in the unit interval.` Can someone give me more information on this
error? Google is not helping.
Code that produced the error:
"""
Multiple regression with dummy variables.
"""
import pandas as pd
import statsmodels.api as sm
import pylab as pl
import numpy as np
df = pd.read_csv('cost_data.csv')
df.columns = ['Cost', 'R(t)', 'Day of Week']
dummy_ranks = pd.get_dummies(df['Day of Week'], prefix='days')
cols_to_keep = ['Cost', 'R(t)']
data = df[cols_to_keep].join(dummy_ranks.ix[:,'days_2':])
data['intercept'] = 1.0
print(data)
train_cols = data.columns[1:]
logit = sm.Logit(data['Cost'], data[train_cols])
result = logit.fit()
print(result.summary())
And the traceback:
Traceback (most recent call last):
File "multiple_regression_dummy.py", line 20, in <module>
logit = sm.Logit(data['Cost'], data[train_cols])
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/statsmodels/discrete/discrete_model.py", line 404, in __init__
raise ValueError("endog must be in the unit interval.")
ValueError: endog must be in the unit interval.
Answer: I got this error when my target column had values larger than 1. Make sure
your target column is between 0 and 1 (as is required for a Logistic
Regression) and try again. For example, if you have a target column with values
1-5, make 4 and 5 the positive class and 1, 2, 3 the negative class, as sketched
below. Hope this helps.
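A minimal sketch of that remapping, assuming the 1-5 values live in `data['Cost']`:

    # 4 and 5 -> positive class (1); 1, 2, 3 -> negative class (0)
    data['Cost'] = (data['Cost'] >= 4).astype(int)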
|
How to convert rows in DataFrame in Python to dictionaries
Question: For example, I have a DataFrame that looks like this:
id score1 score2 score3 score4 score5
1 0.000000 0.108659 0.000000 0.078597 1
2 0.053238 0.308253 0.286353 0.446433 1
3 0.000000 0.083979 0.808983 0.233052 1
I want to convert it to:
id scoreDict
1 {'1': 0, '2': 0.1086, ...}
2 {...}
3 {...}
Anyway to do that?
Thanks in advance!
Answer:
import pandas as pd
# your df
# =========================
print(df)
id score1 score2 score3 score4 score5
0 1 0.0000 0.1087 0.0000 0.0786 1
1 2 0.0532 0.3083 0.2864 0.4464 1
2 3 0.0000 0.0840 0.8090 0.2331 1
# to_dict
# =========================
df.to_dict(orient='records')
Out[318]:
[{'id': 1.0,
'score1': 0.0,
'score2': 0.10865899999999999,
'score3': 0.0,
'score4': 0.078597,
'score5': 1.0},
{'id': 2.0,
'score1': 0.053238000000000001,
'score2': 0.308253,
'score3': 0.28635300000000002,
'score4': 0.44643299999999997,
'score5': 1.0},
{'id': 3.0,
'score1': 0.0,
'score2': 0.083978999999999998,
'score3': 0.80898300000000001,
'score4': 0.23305200000000001,
'score5': 1.0}]
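If you want exactly the `id` / `scoreDict` shape from the question, an untested sketch building on the above:

    score_cols = [c for c in df.columns if c != 'id']
    out = pd.DataFrame({'id': df['id'],
                        'scoreDict': df[score_cols].to_dict(orient='records')})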
|
installing PyGObject via pip in virtualenv
Question: I'm actually upgrading an old django app from python2.7 to python3.4. While
installing pygobject via pip, I got this error:
Collecting pygobject
Using cached pygobject-2.28.3.tar.bz2
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/tmp/pip-build-9dp0wn96/pygobject/setup.py", line 272
raise SystemExit, 'ERROR: Nothing to do, gio could not be found and is essential.'
^
SyntaxError: invalid syntax
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-9dp0wn96/pygobject
I am trying to install it in a virtualenv. System-wide installation doesn't
work either... I am working on arch linux with python3.4
I have installed the Arch package named pygobject-devel 3.16.2-1 but I still
can't import the gobject Python module.
What is this damned missing gio?
Any help is welcome... Thanks in advance!
Answer: OK, I just managed it!
To install PyGObject in a virtualenv, give up on pip.
1. Install PyGObject system-wide (with your package manager or compile it manually). For example, in my case :
sudo pacman -Suy python-gobject2
2. Link it in your virtualenv :
ln -s /usr/lib/python3.4/site-packages/gobject* /WHEREVER/IS/YOUR/VIRTUALENV/venv/lib/python3.4/site-packages/
3. You might need to link some other modules (in my case glib) :
ln -s /usr/lib/python3.4/site-packages/glib* /WHEREVER/IS/YOUR/VIRTUALENV/venv/lib/python3.4/site-packages/
You might find some helpful infos about system-wide and virtualenv
installations and interactions between modules here :
[virtualenv: Specifing which packages to use system-wide vs
local](http://stackoverflow.com/questions/14571454/virtualenv-specifing-which-
packages-to-use-system-wide-vs-local)
|
Dynamic Datasets and SQLAlchemy
Question: I am refactoring some old SQLite3 SQL statements in Python into SQLAlchemy. In
our framework, we have the following SQL statements that takes in a dict with
certain known keys and potentially any number of unexpected keys and values
(depending what information was provided).
import sqlite3
import sys
def dict_factory(cursor, row):
d = {}
for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx]
return d
def Create_DB(db):
# Delete the database
from os import remove
remove(db)
# Recreate it and format it as needed
with sqlite3.connect(db) as conn:
conn.row_factory = dict_factory
conn.text_factory = str
cursor = conn.cursor()
cursor.execute("CREATE TABLE [Listings] ([ID] INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL UNIQUE, [timestamp] REAL NOT NULL DEFAULT(( datetime ( 'now' , 'localtime' ) )), [make] VARCHAR, [model] VARCHAR, [year] INTEGER);")
def Add_Record(db, data):
with sqlite3.connect(db) as conn:
conn.row_factory = dict_factory
conn.text_factory = str
cursor = conn.cursor()
#get column names already in table
cursor.execute("SELECT * FROM 'Listings'")
col_names = list(map(lambda x: x[0], cursor.description))
#check if column doesn't exist in table, then add it
for i in data.keys():
if i not in col_names:
cursor.execute("ALTER TABLE 'Listings' ADD COLUMN '{col}' {type}".format(col=i, type='INT' if type(data[i]) is int else 'VARCHAR'))
#Insert record into table
cursor.execute("INSERT INTO Listings({cols}) VALUES({vals});".format(cols = str(data.keys()).strip('[]'),
vals=str([data[i] for i in data]).strip('[]')
))
#Database filename
db = 'test.db'
Create_DB(db)
data = {'make': 'Chevy',
'model' : 'Corvette',
'year' : 1964,
'price' : 50000,
'color' : 'blue',
'doors' : 2}
Add_Record(db, data)
data = {'make': 'Chevy',
'model' : 'Camaro',
'year' : 1967,
'price' : 62500,
'condition' : 'excellent'}
Add_Record(db, data)
This level of dynamicism is necessary because there's no way we can know what
additional information will be provided, but, regardless, it's important that
we store all information provided to us. This has never been a problem because
in our framework, as we've never expected an unwieldy number of columns in our
tables.
While the above code works, it's obvious that it's not a clean implementation
and thus why I'm trying to refactor it into SQLAlchemy's cleaner, more robust
ORM paradigm. I started going through SQLAlchemy's official tutorials and
various examples and have arrived at the following code:
from sqlalchemy import Column, String, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
class Listing(Base):
__tablename__ = 'Listings'
id = Column(Integer, primary_key=True)
make = Column(String)
model = Column(String)
year = Column(Integer)
engine = create_engine('sqlite:///')
session = sessionmaker()
session.configure(bind=engine)
Base.metadata.create_all(engine)
data = {'make':'Chevy',
'model' : 'Corvette',
'year' : 1964}
record = Listing(**data)
s = session()
s.add(record)
s.commit()
s.close()
and it works beautifully with that data dict. Now, when I add a new keyword,
such as
data = {'make':'Chevy',
'model' : 'Corvette',
'year' : 1964,
'price' : 50000}
I get a `TypeError: 'price' is an invalid keyword argument for Listing` error.
To try and solve the issue, I modified the class to be dynamic, too:
class Listing(Base):
__tablename__ = 'Listings'
id = Column(Integer, primary_key=True)
make = Column(String)
model = Column(String)
year = Column(Integer)
def __checker__(self, data):
for i in data.keys():
if i not in [a for a in dir(self) if not a.startswith('__')]:
if type(i) is int:
setattr(self, i, Column(Integer))
else:
setattr(self, i, Column(String))
else:
self[i] = data[i]
But I quickly realized this would not work at all for several reasons, e.g.
the class was already initialized, the data dict cannot be fed into the class
without reinitializing it, it's a hack more than anything, et al.). The more I
think about it, the less obvious the solution using SQLAlchemy seems to me.
So, my main question is, **how do I implement this level of dynamicism using
SQLAlchemy?**
I've researched a bit to see if anyone has a similar issue. The closest I've
found was [Dynamic Class Creation in
SQLAlchemy](http://stackoverflow.com/questions/2768607/dynamic-class-creation-
in-sqlalchemy) but it only talks about the constant attributes ("**tablename**
" et al.). I believe the unanswered [SQLalchemy dynamic attribute
change](http://stackoverflow.com/questions/29105206/sqlalchemy-dynamic-
attribute-change) may be asking the same question. While Python is not my
forte, I consider myself a highly skilled programmer (C++ and JavaScript are
my strongest languages) in the context scientific/engineering applications, so
I may not hitting the correct Python-specific keywords in my searches.
I welcome any and all help.
Answer:
class Listing(Base):
__tablename__ = 'Listings'
id = Column(Integer, primary_key=True)
make = Column(String)
model = Column(String)
year = Column(Integer)
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            if hasattr(self, k):
                setattr(self, k, v)
            else:
                engine.execute("ALTER TABLE %s ADD COLUMN %s" % (self.__tablename__, k))
                setattr(self.__class__, k, Column(k, String))
                setattr(self, k, v)
might work... maybe... I am not entirely sure; I did not test it.
A better solution would be to use a relational table:
class Attribs(Base):
    __tablename__ = 'Attribs'
    id = Column(Integer, primary_key=True)
    # needs: from sqlalchemy import ForeignKey / from sqlalchemy.orm import relationship
    listing_id = Column(Integer, ForeignKey("Listings.id"))
    name = Column(String)
    val = Column(String)

class Listing(Base):
    __tablename__ = 'Listings'
    id = Column(Integer, primary_key=True)
    attributes = relationship("Attribs", backref="listing")

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            self.attributes.append(Attribs(name=k, val=v))

    def __str__(self):
        return "\n".join(["A LISTING"] + ["%s:%s" % (a.name, a.val) for a in self.attributes])
another solution would be to store JSON:

    import json

    class Listing(Base):
        __tablename__ = 'Listings'
        id = Column(Integer, primary_key=True)
        data = Column(String)

        def __init__(self, **kwargs):
            self.data = json.dumps(kwargs)
            self.data_dict = kwargs
The best solution would be to use a NoSQL key/value store (maybe even just a
simple JSON file, or perhaps shelve, or even pickle, I guess).
|
How can I use regex to search for repeating word in a string in Python?
Question: Is it possible to search for a repeating word in a string using regex in
**Python**?
For instance:
string = ("Hello World hello mister rain")
re.search(r'[\w ]+[\w ]+[\w ]+[\w ]+[\w ]', string)
Can I do it so I won't have to repeat `[\w ]+[\w ]`? Can't I just specify `[\w
]*5` instead?
Answer: I think this would be easier using plain Python:
from collections import Counter
string = "Hello World hello mister rain" # note: no () needed
words = string.split()
for word, count in Counter(map(str.lower, words)).iteritems():
if count > 1:
print "The word '{}' is repeated {} times.".format(word, count)
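To answer the literal question: regex repetition uses braces, so `(?:pattern){5}` repeats `pattern` exactly five times; `[\w ]*5` would instead match a literal `5`. And if you do want regex to find a repeated word, a backreference works; a case-insensitive sketch:

    import re

    string = "Hello World hello mister rain"
    # (\w+) captures a word; the lookahead requires the same word later on
    m = re.search(r'\b(\w+)\b(?=.*\b\1\b)', string, re.IGNORECASE)
    if m:
        print "The word '{}' is repeated.".format(m.group(1).lower())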
|
Scipy.linalg.eig() giving different eigenvectors from GNU Octave's eig()
Question: I want to compute the eigenvalues for a generalized eigenvalue problem with
lambda * M * v = K * v, where lambda is the eigenvalue, v is an eigenvector,
and M and K are matrices. Let's say we have
K =
1.8000 + 0.0000i -1.0970 + 0.9550i
-1.0970 - 0.9550i 1.8000 + 0.0000i
M =
209 0
0 209
In Octave, if I do `[V,D]=eig(K,M)`, I get:
V =
0.53332 - 0.46429i -0.53332 + 0.46429i
0.70711 + 0.00000i 0.70711 + 0.00000i
D =
Diagonal Matrix
0.34555 0
0 3.25445
However, if I do scipy.linalg.eig(K, b=M) using Scipy in python, I get a
different result:
>>> import numpy as np
>>> import scipy as sp
>>> import scipy.linalg
>>> K = np.mat([[1.8, -1.097+0.995j], [-1.097-0.955j, 1.8]])
>>> M = np.mat([[209., 0.], [0., 209.]])
>>> M
matrix([[ 209., 0.],
[ 0., 209.]])
>>> K
matrix([[ 1.800+0.j , -1.097+0.955j],
[-1.097-0.955j, 1.800+0.j ]])
>>> D, V = sp.linalg.eig(K, b=M)
>>> D
array([ 0.00165333 -1.99202696e-19j, 0.01557155 +0.00000000e+00j])
>>> V
array([[ 0.70710678 +0.00000000e+00j, -0.53332494 +4.64289256e-01j],
[ 0.53332494 +4.64289256e-01j, 0.70710678 -8.38231384e-18j]])
The eigenvalues should be the ones in the D array.
Why are the eigenvalues different in these two examples? Am I misunderstanding
something?
edit: corrected typo and redid calculation.
edit: I used Octave 3.8.2 on Mac OS X 10.10.3.
Answer: I think I understand what's going on. Scipy is probably providing the correct
eigenvalues. Octave [accepts a second matrix in its eig()
function](http://octave.sourceforge.net/octave/function/eig.html) but doesn't
specify what it does. [Matlab's
documentation](http://www.mathworks.com/help/matlab/ref/eig.html#bti99bb-1)
does say it's for a generalized eigenvalue problem, but in Octave adding the
second matrix appears to have no effect on the eigenvalues. This looks like a
bug in Octave.
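The numbers above bear this out: scipy's eigenvalues are exactly Octave's divided by 209, the diagonal of M (0.34555/209 ≈ 0.00165 and 3.25445/209 ≈ 0.01557), which is what you would expect if scipy solved the generalized problem lambda * M * v = K * v while Octave returned plain eig(K).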
|
<br> Tag parsing using python and beautifulsoup
Question: So I am trying to extract golf course data from a given website, creating a CSV
that contains the name and address. For the address, though, the website I am
taking the data from has a `<br>` tag breaking it apart. Is it possible to parse
out the two address parts, which are broken apart by a `<br>`, into two separate
columns?
So it looks like this in the HTML:
<div class="location">10799 E 550 S<br>Zionsville, Indiana, United States</div>
I want it to be broken into:
Column1:10799 E 550 S
Column2:Zionsville, Indiana, United States
Here is my code:
import csv
import requests
from bs4 import BeautifulSoup
courses_list = []
with open('Garmin_GC.csv', 'w') as file:
writer = csv.writer(file)
for i in range(3): #893
url = "http://sites.garmin.com/clsearch/courses/search?course=&location=&country=US&state=&holes=&radius=&lang=en&search_submitted=1&per_page={}".format(
i * 20)
r = requests.get(url)
soup = BeautifulSoup(r.text)
g_data2 = soup.find_all("div", {"class": "result"})
for item in g_data2:
try:
name = item.find_all("div", {"class": "name"})[0].text
except IndexError:
name = ''
print "No Name found!"
try:
address = item.find_all("div", {"class": "location"})[0].get_text(separator=' ')
print address
except IndexError:
address = ''
print "No Address found!"
writer.writerow([name.encode("utf-8"), address.encode("utf-8")])
Answer: Use the [`.stripped_strings`
generator](http://www.crummy.com/software/BeautifulSoup/bs4/doc/):
address = list(item.find('div', class_='location').stripped_strings)
This'll produce a list of two strings:
>>> from bs4 import BeautifulSoup
>>> markup = '''<div class="location">10799 E 550 S<br>Zionsville, Indiana, United States</div>'''
>>> soup = BeautifulSoup(markup)
>>> list(soup.find('div', class_='location').stripped_strings)
[u'10799 E 550 S', u'Zionsville, Indiana, United States']
Putting that in the context of your code:
try:
name = item.find('div', class_='name').text
except AttributeError:
name = u''
try:
address = list(item.find('div', class_='location').stripped_strings)
except AttributeError:
address = [u'', u'']
writer.writerow([v.encode("utf-8") for v in [name] + address])
where the two address values are written to two separate columns.
|
Python Logging Module logging timestamp to include microsecond
Question: I am using Python's logging module for logs, but need the timestamp to
include microseconds. It seems the timestamp can only get as precise as
milliseconds. Here's my test code:
import logging
logging.basicConfig(format='%(asctime)s %(levelname)s {%(module)s} [%(funcName)s] %(message)s',
datefmt='%Y-%m-%d,%H:%M:%S:%f', level=logging.INFO)
class log2_test():
def test_class(self):
logging.warning("Warning2 inside the class")
def get_started2():
logging.info("Logged2 Here")
if __name__ == '__main__':
get_started2()
Here's the output I get --
2015-07-09,16:36:37:f INFO {logger} [get_started2] Logged2 Here
Somehow, %f is not recognized. Python version is 2.7.6.
How do I get the timestamp to include microseconds? Thanks in advance.
Answer: `time.strftime()`, which the logging module uses to format `asctime`, does not
support `%f`. The logger does, though, provide milliseconds as a separate msecs
[attribute](https://docs.python.org/2/library/logging.html?highlight=msec#logrecord-
attributes), so you could just add it yourself after the existing timestamp as
follows:
logging.basicConfig(format='%(asctime)s.%(msecs)03d %(levelname)s {%(module)s} [%(funcName)s] %(message)s', datefmt='%Y-%m-%d,%H:%M:%S', level=logging.INFO)
This gave me the following output using your script:
2015-07-10,09:21:16.841 INFO {test script} [get_started2] Logged2 Here
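If you genuinely need microseconds rather than milliseconds, a sketch of a custom formatter that builds the timestamp with `datetime` (whose `strftime`, unlike `time.strftime`, understands `%f`):

    import logging
    import datetime as dt

    class MicrosecondFormatter(logging.Formatter):
        def formatTime(self, record, datefmt=None):
            # datetime.strftime supports %f, so %f in datefmt now works
            ct = dt.datetime.fromtimestamp(record.created)
            return ct.strftime(datefmt) if datefmt else ct.isoformat()

    handler = logging.StreamHandler()
    handler.setFormatter(MicrosecondFormatter(
        '%(asctime)s %(levelname)s {%(module)s} [%(funcName)s] %(message)s',
        datefmt='%Y-%m-%d,%H:%M:%S:%f'))
    logging.getLogger().addHandler(handler)
    logging.getLogger().setLevel(logging.INFO)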
|
Finding discrete logic levels in a waveform
Question: I have some waveforms that I'm trying to process with Python. I would like to
find discrete logic levels that these signals are displaying. I have a 1D
array of each waveform's x and y values.
The data looks something like this example staircase:

How can I find the different levels? Here they might be around (1.8, 3.3),
(2.1, 3.0), (2.7, 2.6) and so on.
I have tried following the approach discussed in [this similar
question](http://stats.stackexchange.com/questions/36309/how-do-i-find-peaks-
in-a-dataset) relating to R. I implemented a LOWESS smoother (which is the
curve you see) with a tight fit to eliminate noise, since the real waveforms
have a non-trivial noise component, and then tried doing a rolling max with a
window over the data, but I can't get anything solid.
Answer: For your simple example, you can just define a skip size and plot a step
function,
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import argrelmin, argrelmax
#Generate some example signal
res = 1000
x = np.linspace(0,2*np.pi,res)
y = np.sin(x)
plt.plot(x,y)
#Take a number of discrete steps
Nsteps = 10
skip = x.shape[0]/Nsteps
xstep = x[::skip]; ystep = y[::skip]
plt.plot(xstep,ystep,'o')
#Plot the step graph
plt.step(xstep, ystep)
#Get max and min (only 2 in this simple case)
plt.plot(x[argrelmax(y)[0]],y[argrelmax(y)[0]],'ro')
plt.plot(x[argrelmin(y)[0]],y[argrelmin(y)[0]],'ro')
plt.show()
which gives

For the more complicated solution in the link, I think this will involve
building up the ystep by looking for local min/max in a given range, something
like:
ystep =[]
for rec in range(0,x.shape[0],skip):
ystep.append(y[argrelmax(y[rec:rec+skip])[0]])
EDIT: complete example giving local min/max for a range
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import argrelmin, argrelmax
#Generate some example signal
res = 1000
x = np.linspace(0,2*np.pi,res)
y = np.sin(x)
plt.plot(x,y)
#Take a number of discrete steps
Nsteps = 10
skip = x.shape[0]/Nsteps
#Get local minimum and maximum in blocks for whole set of data
xstep = []; ystep =[]
for rec in range(0,x.shape[0],skip):
srt = rec
end = rec+skip-1
if (y[srt] > y[end]):
indx = argrelmax(y[srt:end],mode='wrap')[0]
elif (y[srt] < y[end]):
indx = argrelmin(y[srt:end],mode='wrap')[0]
xstep.append(x[srt+indx])
ystep.append(y[srt+indx])
#Plot location of min/max
plt.plot(xstep,ystep,'o')
#Plot the step graph
plt.step(xstep, ystep)
plt.show()
|
How to resolve pythonodbc issue with Teradata in Ubuntu
Question: I am getting a non-text error with pyodbc and Teradata on Ubuntu:
`saranya@saranya-XPS-8500:~/Desktop$ python test.py`
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    conn=pyodbc.connect('DRIVER={Teradata};DBCNAME=**._**._.**;UID=*****;PWD=*****;', ANSI=True, autocommit=True)
**pyodbc.Error: ('632', '[632] 523 630 (0) (SQLDriverConnect)')**
The solution provided in post [Pyodbc Issue with
Teradata](http://stackoverflow.com/questions/26621179/pyodbc-issue-with-
teradata) is not helping.
Also, exporting ODBCINI, NLSPATH, LD_LIBRARY_HOME,ODBC_HOME values are not
helping either.
Any help will be appreciated
Answer: I saw a similarly obscure response, which seems like it could be related to
any number of problems while setting up the connection. Working on a linux
box, I was able to get it to work by setting up a DSN. To do this, create a
file in your home directory `~/.odbc.ini` similar to the following:
[ODBC]
InstallDir=/opt/teradata/client/15.10
Trace=0
TraceDll=/opt/teradata/client/15.10/lib64/odbctrac.so
TraceFile={trace location, ie. /home/yourusername/odbctrace/trace.log}
TraceAutoStop=0
[ODBC Data Sources]
testdsn=tdata.so
[testdsn]
Driver=/opt/teradata/client/15.10/lib64/tdata.so
Description=Teradata database
DBCName={ip address of your database}
LastUser=
Username={your database's username}
Password={your database's password}
Database={database to use}
DefaultDatabase={default database to use}
Note: you have to replace the `{xxx}` placeholders above with your own values. I've used the default installation paths for the Teradata ODBC drivers on Linux.
Now, with this DSN file set up, set the environment variable
export ODBCINI=/home/yourusername/.odbc.ini
Then you should be able to run the script
import pyodbc
pyodbc.pooling = False
conn = pyodbc.connect('DSN=testdsn')
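If the connection succeeds, a quick sanity check might look like this (`SELECT SESSION` simply asks Teradata for the current session number, so it works on any database):
cursor = conn.cursor()
cursor.execute("SELECT SESSION")
print(cursor.fetchone())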
Even better, if you are connecting to Teradata, install the python teradata
module with:
sudo pip install teradata
Once this is installed, you can create connections with the following script
import teradata
from datetime import datetime

udaExec = teradata.UdaExec(appName="Generic Name" + datetime.now().strftime("%Y_%m_%d:%H:%M:%S"),
                           version="1.0", configureLogging=True, logConsole=True)
session = udaExec.connect(method="odbc", DSN="testdsn")
By setting the `configureLogging=True` and `logConsole=True` options, you can
output additional debugging information.
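Once connected, executing SQL follows the usual cursor/iteration style; a minimal sketch (the query against the standard `DBC.DBCInfo` system view is just an example, any query of yours would do):
for row in session.execute("SELECT InfoKey, InfoData FROM DBC.DBCInfo"):
    print(row)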
This is at least what worked for me!
### Erroneous Errors
At the time of writing, the Teradata ODBC drivers in Python output erroneous
error messages. If you drop a table that exists, it says:
Error: (3807, u"[42S02] [Teradata][ODBC Teradata Driver][Teradata Database] Object 'database.table_name' does not exist. ")
And if you create a table that does not previously exist, it says:
Error: (3803, u"[42S01] [Teradata][ODBC Teradata Driver][Teradata Database] Table 'database.table_name' already exists. ")
|
IndentationError: expected an indented block when use unicode
Question: I'm getting this error:
> IndentationError: expected an indented block
when I add:
def __unicode__(self):
return self.BankName
but without it, everything works correctly.
My models.py:
from django.db import models

class Bank(models.Model):
    BankName=models.CharField(max_length=20)
    CardNum=models.CharField(max_length=20)
    def __unicode__(self):
    return self.BankName
When I type `python manage.py runserver`:
Validating models...
Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x310f450>>
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 91, in inner_run
self.validate(display_num_errors=True)
File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 266, in validate
num_errors = get_validation_errors(s, app)
File "/usr/lib/python2.7/dist-packages/django/core/management/validation.py", line 30, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 158, in get_app_errors
self._populate()
File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 64, in _populate
self.load_app(app_name, True)
File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 88, in load_app
models = import_module('.models', app_name)
File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
__import__(name)
File "/home/zahra/Web/rezvan/agancy/models.py", line 7
return self.BankName
^
IndentationError: expected an indented block
Please help me.
Answer:
from django.db import models

class Bank(models.Model):
    BankName = models.CharField(max_length=20)
    CardNum = models.CharField(max_length=20)

    def __unicode__(self):
        return self.BankName
The body of a `def` must be indented one level deeper than the `def` line itself; in your models.py, `return self.BankName` sits at the same level as `def __unicode__(self):`, which is exactly what the IndentationError points at. Please [read the Python tutorial sections on control structures, which describe multi-level indentation](https://docs.python.org/2/tutorial/controlflow.html).
|
python TUI popup
Question: I need some hints to find a simple solution for inserting a popup window
inside a python console app.
This app runs normally unattended, because it's done to be launched from
crontab.
It uses logging everywhere to display messages and save them to logfiles.
However, in some cases, the app needs user intervention to choose some options
when it is not able to find a suitable one.
That's why I added an --interactive option via argparse: when the app needs user intervention, a popup window should appear in the console, allowing the user to choose between some items in a list.
Here's an extract of the output to give you an example:
INFO : Try to fuzzy-match 'Orange Itbn'
INFO : Fuzzy-matched alternative entries : ['Orange Is The New Black']
INFO : Fuzzy matched 'Orange Itbn' as seriesname 'Orange Is The New Black'
INFO : MOVE /Users/spadazz/testing/orange itbn.s03e10.hdtv.720p.mkv TO:
/Volumes/NAS/TV Shows/Orange Is The New Black/S03/Orange Is The New Black.S03E10.hdtv.720p.mkv
INFO : Try to fuzzy-match 'Sur'
INFO : Fuzzy-matched alternative entries : ['Survivors 2008', 'Survivors']
WARNING :
Series 'Sur' not uniquely matched in titles
Choose between these titles :
['Survivors 2008', 'Survivors']
WARNING :
******************************************
**** INSERT HERE THE CALL TO THE POPUP ***
******************************************
Now, I've read some documentation about tkinter, curses and npyscreen but I
wasn't able to come up with something simple for this purpose.
I don't want to mess with the app structure or put the log messages in a main window.
I just want a popup that allows me to choose between some options, even with a simple keypress like '1' or '2'.
This should be a pure Python solution, ideally without calling external commands from the OS.
Any ideas ??
Thanks
Answer: With a little help from Nicholas Cole, who wrote npyscreen, I was able to fix this:
import npyscreen as np

class myPop(np.NPSApp):
    def setopt(self, title, oList, multi):
        self.title = title
        self.options = oList
        self.multi = multi
        self.height = len(self.options) + 1

    def main(self):
        F = np.Popup(name="Choose an option")
        if self.multi:
            opt = F.add(np.TitleMultiSelect, name=self.title, max_height=self.height,
                        values=self.options, scroll_exit=True)
        else:
            opt = F.add(np.TitleSelectOne, name=self.title, max_height=self.height,
                        values=self.options, scroll_exit=True)
        F.edit()
        self._values = opt.get_selected_objects()
        self.result = (self._values if self.multi and len(self._values) > 1 else self._values[0])

def ChooseOption(title, oList, multi=False):
    pop = myPop()
    pop.setopt(title, oList, multi)
    pop.run()
    return pop.result

# Show a popup with radio buttons to select one item from a list
print ChooseOption('choose a single element', ['a', 'b', 'c', 'd'])

# Show a popup with checkboxes to multi-select items from a list
print ChooseOption('choose multi-elements', ['a', 'b', 'c', 'd'], True)
Hope this helps. Enrico
|
python django user not authenticated with user object
Question: OK, I am facing this problem with authentication:
user_profile = UserProfile.objects.get(email="[email protected]")
new_password = form.cleaned_data['new_password']
user_profile.user.set_password(new_password)
user_profile.user.save()
user = authenticate(username=user_profile.user.username, password=new_password)
print(user)
When I print user it gives `None`
Why is the user not being authenticated?
Answer: The `authenticate` function from `django.contrib.auth` returns an instance of the
`django.contrib.auth.models.User` class if the credentials match, or `None` if they don't.
My guess is that no user matches the username and password you provided; that is why
you are getting `None`.
I suggest going to the shell and running:
from django.contrib.auth.models import User

user = User.objects.get(username="username")
print(user.check_password("your_password"))
and seeing what that prints. (Note: filtering on a freshly computed `make_password` hash
will not work, because each call generates a new salt; `check_password` is the reliable
way to verify a stored password.)
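As a fuller check, you can repeat the whole round trip in the shell (a sketch reusing the email and password names from your snippet):
from django.contrib.auth import authenticate
from django.contrib.auth.models import User

u = User.objects.get(email="[email protected]")
u.set_password("new_password")
u.save()
# Should print the user, not None, if the backend is configured normally
print(authenticate(username=u.username, password="new_password"))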
|
Can I link a Java library using Jython with my code in Python
Question: I need to use the Jena library, which is written in Java, from my code written in
Python. I want to know whether Jython can bridge between the two. According to [this
thread](http://stackoverflow.com/questions/9727398/invoking-jython-from-
python-or-vice-versa), Jython can be invoked from Python code, but I need to
access the functions in Jena and get the answers back into my Python code. I am
speculating that the code should look like the snippet below; the main parts are
importing a Java library, running the desired function, and getting the results back.
import execnet
gw = execnet.makegateway("popen//python=jython", var1, var2)
channel = gw.remote_exec("""
import Jena #it is a java library.
reason(var1, var2)
...
""")
fun(channel)
Answer: Yes, in principle you can take your existing Python code, import some Java
libs into it, and then run it under Jython, and it will work. That said, not
all Python packages will work under Jython, often because of C
extensions or other differences between the Jython
interpreter and regular CPython.
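For illustration, a minimal sketch of what direct Java access looks like under Jython (this assumes the Jena 2.x jars are on the classpath; the package path and URL are illustrative, check them against your Jena version):
# run with: jython -J-cp /path/to/jena.jar this_script.py
from com.hp.hpl.jena.rdf.model import ModelFactory

model = ModelFactory.createDefaultModel()
model.read("http://example.org/data.rdf")  # placeholder URL

it = model.listStatements()
while it.hasNext():
    print it.next()  # Jython is Python 2.x, hence the print statement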
|