Alternatives to python griddata
Question: I am using griddata to resample a numpy 2 dimensional array on a grid.
z.shape = (1000, 1000)
x, y = np.arange(-5, 5, 0.01), np.arange(-5, 5, 0.01)
newx, newy = np.arange(-2, 2, 0.1), np.arange(-2, 2, 0.1)
griddata((x, y), z, (newx[None, :], newy[:, None]))
The code should:
* resample z (which represents an image) to a new _coarser_ or _finer_ grid
* the new grid does not necessarily cover all of the original one.
However griddata cannot manage a regular input grid. Does anyone know an easy
alternative?
Answer: Use any of the methods suitable for data on a grid listed in the
documentation:
<https://docs.scipy.org/doc/scipy/reference/interpolate.html#multivariate-
interpolation>
That is:
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html>
or
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html>
or
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.interpolation.map_coordinates.html>
Note also that you are using `griddata` incorrectly. Your code corresponds to
interpolating from a line defined by your 1000 (x, y) coordinates, where each
point has 1000 values associated with it. However, interpolation to 2D from a
1D line is badly defined, and the failure results from trying to triangulate a
set of points that are along a line.
You should do
import numpy as np
from scipy.interpolate import griddata
z = np.random.rand(100, 100)
z.shape = (100, 100)
x, y = np.arange(-5, 5, 0.1), np.arange(-5, 5, 0.1)
xx, yy = np.meshgrid(x, y, indexing='ij')
newx, newy = np.arange(-2, 2, 0.1), np.arange(-2, 2, 0.1)
griddata((xx.ravel(), yy.ravel()), z.ravel(), (newx[None, :], newy[:, None]))
This will work correctly --- however, 1000x1000 = 1000000 points in 2D is
simply way too much data for triangulation-based unstructured interpolation
(needs large amounts of memory for the triangulation + it's slow), so you
should use the gridded data algorithms.
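For example, here is a rough sketch of the `RegularGridInterpolator` route (the
first link above), using the same array shapes as the corrected example; treat
the variable names and keyword choices as illustrative rather than a drop-in
for your exact data:
import numpy as np
from scipy.interpolate import RegularGridInterpolator
z = np.random.rand(100, 100)
x, y = np.arange(-5, 5, 0.1), np.arange(-5, 5, 0.1)
# the interpolator is built from the grid axes plus the gridded values,
# so there is no triangulation step (and no meshgrid/ravel needed)
interp = RegularGridInterpolator((x, y), z)
newx, newy = np.arange(-2, 2, 0.1), np.arange(-2, 2, 0.1)
newxx, newyy = np.meshgrid(newx, newy, indexing='ij')
points = np.column_stack([newxx.ravel(), newyy.ravel()])
znew = interp(points).reshape(newxx.shape)  # resampled image on the new grid
Because no triangulation is involved, this stays fast and memory-friendly even
for a 1000x1000 input.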
|
PyRO 4 - lookup fails when I try to find a registered object
Question: I've been fighting this problem for about a week, and I don't know where else
to look for a solution.
As the title says, as soon as I successfully register a pyro object, I try to
find it on the NS, in order to operate with it, but the lookup fails.
I post a simplified version of my code, to make the situation more clear:
Server is the class in which the PyRO NS starts:
import threading, socket, sys
import Pyro4
class Server():
def __init__(self):
self.start_ns_loop()
def start_ns(self):
print("Starting the Name Server...")
try:
Pyro4.naming.startNSloop()
except socket.error:
print("Name Server already running.")
sys.exit(0)
def start_ns_loop(self):
ns_thread = threading.Thread(target=self.start_ns, args=[])
ns_thread.daemon = True
ns_thread.start()
TextAnalyzer class is the class that I use to do some statistics about a file:
import nltk, argparse, Pyro4, socket
class TextAnalyzer():
def __init__(self):
#init function in which I do all my things...
'''after the init, I've some methods. I don't list them
because they are not important in this discussion'''
def get_ip_addr(self):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
ns_ip = str(s.getsockname()[0])
s.close()
return ns_ip
def main():
global nsip, PYRO_OBJ_NAME
text_analyzer_name = "Text_Analyzer_"
# Parser configuration for the input values
parser = argparse.ArgumentParser(description="Values: ")
parser.add_argument("-id", help="Sets the text analyzer ip.")
parser.add_argument("-nsip", help="Sets the Name Server ip.")
args = parser.parse_args()
if args.id is not None:
identifier = str(args.id)
else:
identifier = ""
if args.nsip is not None:
name_server_ip = str(args.nsip)
else:
name_server_ip = ""
a = TextAnalyzer()
try:
if name_server_ip != "":
nsip = Pyro4.naming.locateNS(name_server_ip)
else:
nsip = Pyro4.naming.locateNS()
PYRO_OBJ_NAME = text_analyzer_name + str(identifier)
print("PyRO Object name: " + PYRO_OBJ_NAME)
daemon = Pyro4.Daemon(a.get_ip_addr())
uri_text_analyzer = daemon.register(a)
nsip.register(PYRO_OBJ_NAME, uri_text_analyzer, safe=True)
print("URI " + PYRO_OBJ_NAME + ": " + str(uri_text_analyzer))
daemon.requestLoop()
except Pyro4.naming.NamingError as e:
print(str(e))
if __name__ == "__main__":
main()
Connection class provides all methods useful to find the object on the NS,
sftp and ssh connection, finds the PID of the objects running, etc...
import Pyro4, paramiko, socket, time
from PyQt4 import QtCore, QtGui
class Connection(QtGui.QMainWindow):
def __init__(self):
super(Connection, self).__init__()
self.text_analyzer_name = "Text_Analyzer_"
self.identifier = None
self.address = None
self.password = None
self.object_pid = None
self.authentication_ok = False
def get_ip_addr(self):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
ns_ip = str(s.getsockname()[0])
s.close()
print(ns_ip)
return ns_ip
# This method finds the object on the NS
def find_obj(self, identifier, a, p):
self.identifier = identifier
self.address = a
self.password = p
self.open_server_connection()
time.sleep(5)
if self.authentication_ok is True:
try:
ns = Pyro4.naming.locateNS()
print("Return del locateNS(): " + str(ns))
uri_text_analyzer = ns.lookup(self.text_analyzer_name + str(self.identifier))
self.text_analyzer = Pyro4.Proxy(uri_text_analyzer)
return True
except Pyro4.errors.NamingError as e:
print(str(e))
self.ssh_connection_close_and_cleanup()
return False
def open_server_connection(self):
print("Object ID: " + str(self.identifier))
ssh_connection = paramiko.SSHClient()
ssh_connection.load_system_host_keys()
ssh_connection.set_missing_host_key_policy(paramiko.AutoAddPolicy)
try:
if str(self.address).__contains__('@'):
(username, hostname) = self.address.split('@')
print("User: " + username + ", Host: " + hostname)
print("Tento la connessione.")
ssh_connection.connect(str(hostname), username=username, password=str(self.password), timeout=5, allow_agent=False)
else:
ssh_connection.connect(str(self.address), password=str(self.password), timeout=5, allow_agent=False)
self.authentication_ok = True
ns_ip = self.get_ip_addr()
sftp_connection = ssh_connection.open_sftp()
print("Sftp connection open.")
print("Transferring " + self.text_analyzer_name + str(self.identifier) + "...")
sftp_connection.put("text_analyzer.py", "./text_analyzer.py")
print("Transferring Pyro4...")
sftp_connection.put("Pyro4.zip", "./Pyro4.zip")
print("Unpacking...")
stdin, stdout, stderr = ssh_connection.exec_command("tar -xzvf Pyro4.zip")
time.sleep(3)
print("Executing " + self.text_analyzer_name + str(self.identifier) + ":")
stdin, stdout, stderr = ssh_connection.exec_command("echo $$; exec python3 text_analyzer.py -id {} -nsip {}".format(self.identifier, ns_ip))
# Object PID saving
self.object_pid = int(stdout.readline())
print("PID del " + self.text_analyzer_name + str(self.identifier) + ": " + str(self.object_pid))
# Connections close
ssh_connection.close()
sftp_connection.close()
except (paramiko.AuthenticationException, socket.error) as e:
self.authentication_ok = False
ssh_connection.close()
print("Connection failed, error: " + str(e))
def ssh_connection_close_and_cleanup(self):
ssh_connection = paramiko.SSHClient()
ssh_connection.load_system_host_keys()
ssh_connection.set_missing_host_key_policy(paramiko.AutoAddPolicy)
try:
if str(self.address).__contains__('@'):
(username, hostname) = self.address.split('@')
ssh_connection.connect(str(hostname), username=username, password=str(self.password), timeout=5, allow_agent=False)
else:
ssh_connection.connect(str(self.address), password=str(self.password), timeout=5, allow_agent=False)
self.host = hostname
print("Killing PID: " + str(self.object_pid))
ssh_connection.exec_command("/bin/kill -KILL {}".format(self.object_pid))
ssh_connection.exec_command("rm -r Pyro4")
ssh_connection.exec_command("rm -r Pyro4.zip")
ssh_connection.exec_command("rm text_analyzer.py")
time.sleep(5)
ssh_connection.close()
except(paramiko.AuthenticationException, socket.error) as e:
ssh_connection.close()
print("Connection failed")
print(str(e))
So, basically, this is what I'm doing.
The problem is that the lookup fails while trying to find the remote object on
the NS, inside the find_obj() method (contained within Connection class), but
I know for sure that the remote object was registered successfully.
The error given is "unknown name" from Pyro4.errors.NamingError.
I have really no clue why it isn't working...
Further specs: I'm running it on Mac OS X Mavericks, with PyRO 4 and Python
3.4.
Thanks in advance for your replies.
Answer: You can use the nsc tool to look in the name server's database. Make sure you
connect to the correct name server. You will see that the object you request
is really not in there. (Otherwise Pyro will hand you the uri).
Another tip might be to check that the name you're requesting is exactly the
same as what you registered with. Also check whether you perhaps have multiple
name servers running and whether your code is getting them mixed up.
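As a small sketch of that first suggestion, you can also inspect the name
server's database directly from Python (the same listing the nsc tool gives
you); the commented-out host argument is a placeholder for your actual name
server address:
import Pyro4
ns = Pyro4.locateNS()  # or Pyro4.locateNS(host="<name server ip>")
for name, uri in ns.list().items():
    print(name, uri)
If `Text_Analyzer_<id>` does not show up in that listing, the registration went
to a different name server than the one your lookup is finding.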
|
Insert figure into iPython markdown cell
Question: In iPython I create an image in one cell using the Pandas plot function. In a
different markdown cell in the same notebook I would like to add this figure
inline.
Is it possible to reference notebook internal figures in a markdown cell
without saving them to disk?
Answer: You can do it easily if you keep the
[Axes](http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes) instance
when creating the figure:
from numpy import arange, sin
import matplotlib.pyplot as plt
t = arange(0, 6, 0.01)
x = sin(t)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(t, x)
If you're using pandas plotting functions, first create the axes as before and
then pass the instance to the pandas plotting function as argument:
`pandas.DataFrame.plot(ax=ax)`. An example:
from numpy.random import randn
from pandas import Series, date_range
ts = Series(randn(1000), index=date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
fig = plt.figure()
ax = fig.add_subplot(111)
ts.plot(ax=ax)
Then you can reuse the same figure in another cell:
display(ax.get_figure())
|
Pool.map - Why doesn't the worker process crash earlier?
Question: Say I do this:
import multiprocessing as mp
def f(x):
raise OverflowError # raised BEFORE the print
print x
if __name__ == '__main__':
pool = mp.Pool(processes=1)
for _ in pool.imap_unordered(f, range(10)):
pass
pool.close()
pool.join()
Out:
Traceback (most recent call last):
File "test0.py", line 9, in <module>
for _ in pool.imap_unordered(f, range(10)):
File "/Users/usualme/anaconda/lib/python2.7/multiprocessing/pool.py", line 659, in next
raise value
OverflowError
Ok the output makes sense. The exception is raised before the `print`
statement, so there is no output. Now almost the same code, but I switched 2
lines:
import multiprocessing as mp
def f(x):
print x
raise OverflowError # raised AFTER the print
if __name__ == '__main__':
pool = mp.Pool(processes=1)
for _ in pool.imap_unordered(f, range(10)):
pass
pool.close()
pool.join()
Out:
0
1
2
3
4
5
6
7
8
9
Traceback (most recent call last):
File "test0.py", line 9, in <module>
for _ in pool.imap_unordered(f, range(10)):
File "/Users/usualme/anaconda/lib/python2.7/multiprocessing/pool.py", line 659, in next
raise value
OverflowError
I don't understand the output. I was expecting either the number 0 followed by
the stack trace, or all 10 numbers and 10 stack traces. Why is it printing all
numbers and only one stack trace? Why does the worker process wait until the
very end to crash?
Answer: It's just a timing thing - The `worker` process doesn't care that an exception
is raised in the function it's running, it just returns the exception to the
parent and keeps chugging along with the next task. Here's the loop it's
running (slightly simplified):
while maxtasks is None or (maxtasks and completed < maxtasks):
try:
task = get() # Get task from parent
except (EOFError, OSError):
util.debug('worker got EOFError or OSError -- exiting')
break
if task is None:
util.debug('worker got sentinel -- exiting')
break
job, i, func, args, kwds = task
try:
result = (True, func(*args, **kwds)) # Call the function passed from the parent
except Exception as e: # We end up in here if the worker raises an exception
if wrap_exception:
e = ExceptionWithTraceback(e, e.__traceback__)
result = (False, e) # The exception object is stored as the result
put((job, i, result)) # Send result to parent process
So, even though the very first task raises an exception, it takes a little bit
of time for the result to travel between the two processes, and for the parent
process to actually pull the result out of the `Queue` and raise the
`Exception`. In that window of time, the worker is able to execute all the
remaining tasks. If you make the worker function slower, you'll see it
executes fewer tasks:
import multiprocessing as mp
import time
def f(x):
print x
time.sleep(2)
raise OverflowError
if __name__ == '__main__':
pool = mp.Pool(processes=1)
for _ in pool.imap_unordered(f, range(10)):
pass
pool.close()
pool.join()
Output:
0
1
Traceback (most recent call last):
File "p.py", line 11, in <module>
for _ in pool.imap_unordered(f, range(10)):
File "/usr/lib/python2.7/multiprocessing/pool.py", line 626, in next
raise value
OverflowError
You'd also only see some percentage of the results printed if you passed a
larger iterable, since the worker wouldn't have enough time to get through all
of them before the parent died.
You only see one exception actually get raised because from the parent's
perspective, as soon as one task has failed, the entire `imap` call should be
aborted. The parent pulls the results from all its child processes from a
single `Queue` sequentially, so as soon as it sees the first exception, the
`imap` call ends, so the results from the rest of the tasks get thrown away.
|
Python sees uninstalled module
Question: I have a really weird problem. I'm developing a Pyramid project and it seems
like **non-existing** module is found when I run `pserve`.
`__init__.py` of my main module:
...
# This works !!! db/models directory does not even exists
from db.models import Base
# Also works - db/model exists
from db.model import Base
...
I even tried to recreate my virtual environment and it still finds it.
Any ideas?
Answer: From the comments it appears this was solved and it was leftover `*.pyc`
files. This problem can come up a lot if you're moving between branches or if
you find yourself frequently renaming/deleting files.
`$ find mydir -name "*.pyc" -exec rm {} \;` will recursively find all `*.pyc`
files in the "mydir" directory (replace mydir with your directory name of
course) and delete them. `$ find . -name "*.pyc" -exec rm {} \;` for your
current working directory.
If you're using git for your project, add this script to your post-checkout
hook to prevent differences between branches from getting in your way.
`$ echo "find src -name "*.pyc" -exec rm {} \;" >> .git/hooks/post-checkout`
|
create sublists within sublists in python
Question: I'd like to create a series of sublists within a sublist:
original = range(1,13)
I read another post that provided a solution to create sublists within a list:
>>> [original[i:i+2] for i in range(0,len(original),2)]
>>> [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12]]
But how can I achieve this result?
[ [[1,2],[3,4]], [[5,6],[7,8]], [[9,10],[11,12]] ]
Answer: You can define your sublisting action as a function and apply it twice. This
is probably inefficient since it will construct intermediate lists before
constructing ones with the finest level of sublisting. But it is easier to
read since you're already familiar with the first step given that you used it
when asking this question.
def nest_in_pairs(original):
return [original[i:i+2] for i in range(0,len(original),2)]
print nest_in_pairs(nest_in_pairs(original))
A more efficient way to do it would be to create a generator that yields a
list of up to two items from the front of the list. Then chain them.
from types import GeneratorType
def yield_next_two(seq):
if not isinstance(seq, GeneratorType):
for i in range(0, len(seq), 2):
yield seq[i:i+2]
else:
while True:
item1 = next(seq)
try:
item2 = next(seq)
yield [item1, item2]
except StopIteration:
yield [item1]
pair_generator = yield_next_two(original)
quad_generator = yield_next_two(yield_next_two(original))
next(pair_generator)
next(quad_generator)
and you can call `list` on `pair_generator` or `quad_generator` to get the
whole set of contents.
Here's an example of playing with this after pasting the above code into an
IPython session:
In [40]: quad_generator = yield_next_two(yield_next_two(original))
In [41]: list(quad_generator)
Out[41]: [[[1, 2], [3, 4]], [[5, 6], [7, 8]], [[9, 10], [11, 12]]]
In [42]: nearly_eights_generator = yield_next_two(yield_next_two(yield_next_two(original)))
In [43]: list(nearly_eights_generator)
Out[43]: [[[[1, 2], [3, 4]], [[5, 6], [7, 8]]], [[[9, 10], [11, 12]]]]
|
Python, Tkinter, Subprocess- getting stdout and inserting it to Text
Question: Have mercy, I'm a beginner.
I'm trying to write a very basic application to run 'chkdsk c:' and print out
the output to a text box line by line. for each line I want a ttk.Progressbar
to move a bit. I've seen similar questions answered here, but couldn't get
them to work. I got this far:
from Tkinter import *
import ttk
from subprocess import Popen, PIPE
gui = Tk()
text = Text(gui)
lb = ttk.Label(text="Chkdsk")
prog = ttk.Progressbar(mode='indeterminate')
def chkdsk():
proc = Popen(['chkdsk','c:'],stdout=PIPE)
while True:
line = proc.stdout.readline()
if line != '':
text.insert(INSERT, "\n")
text.insert(END, line.rstrip())
prog.step(1)
gui.update()
else:
break
bt = ttk.Button(text="Chkdsk", command=chkdsk).grid(row=4, column=5)
text.grid(row=6, column= 5)
lb.grid(row=3, column=5)
prog.grid(row=7,column=5)
mainloop()
Ok, so when I run this .pyw script from an **elevated command prompt (this is
good for my purposes)** with "python chkdsk.pyw" and initiate the chkdsk, it
starts working, and then shortly thereafter it becomes unresponsive.
I believe the problem has something to do with buffering?
Answer: `readline` is blocking. Use non-blocking `read` and the `after`-method.
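A minimal sketch of one way to apply that advice, assuming a reader thread that
feeds a queue (a common stand-in for a truly non-blocking read) while `after()`
does the periodic GUI updates; helper names such as `poll_output` are made up
for illustration:
from Tkinter import *
import ttk
import threading
import Queue
from subprocess import Popen, PIPE
gui = Tk()
text = Text(gui)
prog = ttk.Progressbar(mode='indeterminate')
lines = Queue.Queue()
def reader(proc):
    # runs in a background thread, so the blocking readline() is harmless here
    for line in iter(proc.stdout.readline, ''):
        lines.put(line)
def chkdsk():
    proc = Popen(['chkdsk', 'c:'], stdout=PIPE)
    t = threading.Thread(target=reader, args=(proc,))
    t.daemon = True
    t.start()
    poll_output()
def poll_output():
    # drain whatever output has arrived, then re-schedule instead of blocking
    try:
        while True:
            line = lines.get_nowait()
            text.insert(END, line.rstrip() + "\n")
            prog.step(1)
    except Queue.Empty:
        pass
    gui.after(100, poll_output)
ttk.Button(text="Chkdsk", command=chkdsk).grid(row=4, column=5)
text.grid(row=6, column=5)
prog.grid(row=7, column=5)
mainloop()
The key point is that nothing on the Tk thread ever blocks: the GUI keeps
processing events and just picks up new output lines every 100 ms.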
|
Minimize objective function using limfit.minimize in Python
Question: I am having a problem with package `lmfit.minimize` minimization procedure.
Actually, I could not create a correct objective function for my problem.
**Problem definition**
* My function: `ym = a_m1*x1**2 + a_m2*x2**2 + ... + a_mN*xN**2`, where `xn` are the unknowns and `a_mn` the coefficients, with n = 1..N and m = 1..M.
* In my case, `N=5` for `x1,..,x5` and `M=3` for `y1, y2, y3`.
I need to find the optimum `x1, x2, ..., x5` so that they satisfy the `y` values.
**My question:**
* Error: `ValueError: operands could not be broadcast together with shapes (3,) (3,5)`.
* Did I create the objective function of my problem properly in Python?
**My code:**
import numpy as np
from lmfit import Parameters, minimize
def func(x,a):
return np.dot(a, x**2)
def residual(pars, a, y):
vals = pars.valuesdict()
x = vals['x']
model = func(x,a)
return y - model
def main():
# simple one: a(M,N) = a(3,5)
a = np.array([ [ 0, 0, 1, 1, 1 ],
[ 1, 0, 1, 0, 1 ],
[ 0, 1, 0, 1, 0 ] ])
# true values of x
x_true = np.array([10, 13, 5, 8, 40])
# data without noise
y = func(x_true,a)
#************************************
# Apriori x0
x0 = np.array([2, 3, 1, 4, 20])
fit_params = Parameters()
fit_params.add('x', value=x0)
out = minimize(residual, fit_params, args=(a, y))
print out
if __name__ == '__main__':
main()
Answer: Directly using `scipy.optimize.minimize()` the code below solves this problem.
Note that with more points `yn` you will tend to get the same result as
`x_true`, otherwise more than one solution exists. You can minimize the effect
of the ill-constrained optimization by adding boundaries (see the `bounds`
parameter used below).
import numpy as np
from scipy.optimize import minimize
def residual(x, a, y):
s = ((y - a.dot(x**2))**2).sum()
return s
def main():
M = 3
N = 5
a = np.random.random((M, N))
x_true = np.array([10, 13, 5, 8, 40])
y = a.dot(x_true**2)
x0 = np.array([2, 3, 1, 4, 20])
bounds = [[0, None] for x in x0]
out = minimize(residual, x0=x0, args=(a, y), method='L-BFGS-B', bounds=bounds)
print(out.x)
If `M>=N` you could also use `scipy.optimize.leastsq` for this task:
import numpy as np
from scipy.optimize import leastsq
def residual(x, a, y):
return y - a.dot(x**2)
def main():
M = 5
N = 5
a = np.random.random((M, N))
x_true = np.array([10, 13, 5, 8, 40])
y = a.dot(x_true**2)
x0 = np.array([2, 3, 1, 4, 20])
out = leastsq(residual, x0=x0, args=(a, y))
print(out[0])
|
How run cherrypy app without screen logging?
Question: Hi, I'm looking for some configuration or flag that allows me to silence the
logging of requested pages.
When I run `python cherrypy_app.py` and visit `127.0.0.1:8080`, the console
where I started the CherryPy app shows me
`127.0.0.1 - - [09/Oct/2014:19:10:35] "GET / HTTP/1.1" 200 1512 ""
"Mozilla/5.0 ..." 127.0.0.1 - - [09/Oct/2014:19:10:35] "GET
/static/css/style.css HTTP/1.1" 200 88 "http://127.0.0.1:8080/" "Mozilla/5.0
..." 127.0.0.1 - - [09/Oct/2014:19:10:36] "GET /favicon.ico HTTP/1.1" 200 1406
"" "Mozilla/5.0 ..."`
I do not want to show this info. Is that possible?
Answer: As far as I remember, I had the same desire in my first attempts with CherryPy.
So here's a little more to say besides turning off the stdout logging per se.
CherryPy has some predefined
[environments](http://cherrypy.readthedocs.org/en/latest/config.html?#environments):
staging, production, embedded, test_suite that are defined
[here](https://bitbucket.org/cherrypy/cherrypy/src/4939ea9fbe6b0def376fbb349c039c38b0a8994b/cherrypy/_cpconfig.py?at=default#cl-187).
Each environment has its own set of configuration. While developing, stdout
logging is in fact quite helpful, whereas in a production environment it makes
no sense. Setting the environment according to the deployment is the correct
way to deal with configuration in CherryPy.
In your particular case the stdout logging is controlled by `log.screen`. It
is already disabled in production environment.
Here's the example, but note that setting the environment inside your
application isn't the best idea. You're better off using
[`cherryd`](http://cherrypy.readthedocs.org/en/latest/install.html#cherryd)'s
`--environment` option for it instead.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import cherrypy
config = {
'global' : {
'server.socket_host' : '127.0.0.1',
'server.socket_port' : 8080,
'server.thread_pool' : 8,
# Doing it explicitly isn't the recommended way
# 'log.screen' : False
}
}
class App:
@cherrypy.expose
def index(self):
return 'Logging example'
if __name__ == '__main__':
# Better use cherryd (http://cherrypy.readthedocs.org/en/latest/install.html#cherryd)
# for setting the environment outside the app
cherrypy.config.update({'environment' : 'production'})
cherrypy.quickstart(App(), '/', config)
|
Writing a Array of Dictionaries to CSV
Question: I'm trying to get the dictionary (which the first part of the program
generates) to write to a csv so that I can perform further operations on the
data in excel. I realize the code isn't efficient but at this point I'd just
like it to work. I can deal with speeding it up later.
import csv
import pprint
raw_data = csv.DictReader(open("/Users/David/Desktop/crimestats/crimeincidentdata.csv", "r"))
neighborhood = []
place_count = {}
stats = []
for row in raw_data:
neighborhood.append(row["Neighborhood"])
for place in set(neighborhood):
place_count.update({place:0})
for key,value in place_count.items():
for place in neighborhood:
if key == place:
place_count[key] = place_count[key]+1
for key in place_count:
stats.append([{"Location":str(key)},{"Volume":str(place_count[key])}])
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(stats)
The program is still running fine here as is evident by the pprint output
[ [{'Location': 'LINNTON'}, {'Volume': '109'}],
[{'Location': 'SUNDERLAND'}, {'Volume': '118'}],
[{'Location': 'KENTON'}, {'Volume': '715'}]
This is where the error is definitely happening. The program writes the
headers to the csv just fine then throws the ValueError.
fieldnames = ['Location', 'Volume']
with open('/Users/David/Desktop/crimestats/localdata.csv', 'w', newline='') as output_file:
csvwriter = csv.DictWriter(output_file, delimiter=',', fieldnames=fieldnames, dialect='excel')
csvwriter.writeheader()
for row in stats:
csvwriter.writerow(row)
output_file.close()
I've spent quite a bit of time searching for this problem but none of the
suggestions I have attempted to use have worked. I figure I must be missing
something so I'd really appreciate any and all help.
Traceback (most recent call last):
File "/Users/David/Desktop/crimestats/statsreader.py", line 34, in <module>
csvwriter.writerow(row)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/csv.py", line 153, in writerow
return self.writer.writerow(self._dict_to_list(rowdict))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/csv.py", line 149, in _dict_to_list
+ ", ".join([repr(x) for x in wrong_fields]))
ValueError: dict contains fields not in fieldnames: {'Location': 'SABIN'}, {'Volume': '247'}
Answer: I believe your problem is here:
for key in place_count:
stats.append([{"Location":str(key)},{"Volume":str(place_count[key])}])
This is creating a list of _two_ dictionaries. The first has only a "Location"
key, and the second has only a "Volume" key. However, the `csv.DictWriter`
objects are expecting a single dictionary per row, with all the keys in the
dictionary. Change that code snippet to the following and it should work:
for key in place_count:
stats.append({"Location": str(key), "Volume": str(place_count[key])})
That should take care of the errors you're seeing.
Now, as for why the error message is complaining about fields not in
fieldnames, which completely misled you away from the real problem you're
having: the `writerow()` function expects to get a dictionary as its row
parameter, but you're passing it a list. The result is confusion: it iterates
over the dict in a `for` loop expecting to get the dict's keys (because that's
what you get when you iterate over a dict in Python), and it compares those
keys to the values in the `fieldnames` list. What it's expecting to see is:
"Location"
"Volume"
in either order (because a Python dict makes no guarantees about which order
it will return its keys). The reason why they want you to pass in a
`fieldnames` list is so that the fields can be written to the CSV in the
correct order. However, because you're passing in a list of two dictionaries,
when it iterates over the `row` parameter, it gets the following:
{'Location': 'SABIN'}
{'Volume': '247'}
Now, the dictionary `{'Location': 'SABIN'}` does not equal the string
`"Location"`, and the dictionary `{'Volume': '247'}` does not equal the string
`"Volume"`, so the `writerow()` function thinks it's found dict keys that
aren't in the `fieldnames` list you supplied, and it throws that exception.
What was _really_ happening was "you passed me a list of two dicts-of-one-key,
when I expected a single dict-with-two-keys", but the function wasn't written
to check for that particular mistake.
* * *
Now I'll mention a couple things you could do to speed up your code. One thing
that will help quite a bit is to reduce those three `for` loops at the start
of your code down to just one. What you're trying to do is to go through the
raw data, and count the number of times each neighborhood shows up. First I'll
show you a better way to do that, then I'll show you an _even better_ way that
improves on my first solution.
The better way to do that is to make use of the wonderful `defaultdict` class
that Python provides in the `collections` module. `defaultdict` is a subclass
of Python's dictionary type, which will automatically create dict entries when
they're accessed for the first time. Its constructor takes a single parameter,
a function which will be called with no parameters and should return the
desired default value for any new item. If you had used `defaultdict` for your
`place_count` dict, this code:
place_count = {}
for place in set(neighborhood):
place_count.update({place:0})
could simply become:
place_count = defaultdict(int)
What's going on here? Well, the `int` function (which really isn't a function,
it's the constructor for the `int` class, but that's a bit beyond the scope of
this explanation) just happens to return 0 if it's called with no parameters.
So instead of writing your own function `def returnzero(): return 0`, you can
just use the existing `int` function (okay, constructor). Now every time you
do `place_count["NEW PLACE"]`, the key `NEW PLACE` will automatically appear
in your `place_count` dictionary, with the value 0.
Now, your counting loop needs to be modified too: it used to go over the keys
of `place_count`, but now that `place_count` automatically creates its keys
the first time they're accessed, you need a different source. But you still
have that source in the raw data: the `row["Neighborhood"]` value for each
row. So your `for key,value in place_count.items():` loop could become:
for row in raw_data:
place = row["Neighborhood"]
place_count[place] = place_count[place] + 1
And now that you're using a `defaultdict`, you don't even need that first loop
(the one that created the `neighborhood` list) at all! So we've just turned
three loops into one. The final version of what I'm suggesting looks like
this:
from collections import defaultdict
place_count = defaultdict(int)
for row in raw_data:
place = row["Neighborhood"]
place_count[place] = place_count[place] + 1
# Or: place_count[place] += 1
However, there's a way to improve that even more. The `Counter` object from
the `collections` module is designed for just this case, and has some handy
extra functionality, like the ability to retrieve the N most common items. So
the **final** final version :-) of what I'm suggesting is:
from collections import Counter
place_count = Counter()
for row in raw_data:
place = row["Neighborhood"]
place_count[place] = place_count[place] + 1
# Or: place_count[place] += 1
That way if you need to retrieve the 5 most crime-ridden neighborhoods, you
can just call `place_count.most_common(5)`.
You can read more about `Counter` and `defaultdict` in the [documentation for
the `collections`
module](https://docs.python.org/3.4/library/collections.html).
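Putting those suggestions together, a minimal end-to-end sketch (reusing the
file paths from your question and passing one flat dict per row to
`DictWriter`) could look like this:
import csv
from collections import Counter
with open("/Users/David/Desktop/crimestats/crimeincidentdata.csv", "r") as f:
    place_count = Counter(row["Neighborhood"] for row in csv.DictReader(f))
with open('/Users/David/Desktop/crimestats/localdata.csv', 'w', newline='') as output_file:
    csvwriter = csv.DictWriter(output_file, fieldnames=['Location', 'Volume'], dialect='excel')
    csvwriter.writeheader()
    for place, count in place_count.most_common():
        # one dict per row, containing both fieldnames
        csvwriter.writerow({"Location": place, "Volume": str(count)})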
|
Euler-Cromer ODE Python Routine
Question: I am using an Euler-Cromer scheme to calculate the position and velocity of
Halley's comet. The script tries several values for a time-step (tau) for each
value of initial velocity in a range. For each tau value, it runs through the
Euler-Cromer routine and compares the total mechanical energy from the
beginning of the orbit to the end of the first orbit's cycle. If the percent
difference between the two energies is less than 1%, the current (optimal)
value of tau is added to a list. After all the iterations, the values of tau
and initial velocities are plotted using pyplot on a semi-log graph so that
the true initial velocity of Halley's comet can be interpreted. However, each
element of my optimal taus list is the first element of my tau range
(currently 0.1). The script seems more complex than necessary and perhaps a
bit convoluted. Here it is:
import matplotlib.pyplot as plt
import numpy as np
## Set physical parameters and initial position and velocity of the comet
GM = 4 * np.pi ** 2 # Grav const. times mass of sun (AU^3/yr^2)
mass = 1. # Mass of comet
r0 = 35 # Initial aphelion position
v0 = np.arange(0.5, 1.1, 0.1) # Create a range of initial velocities in AU/year
## Function that takes position and velocity vectors and the initial total energy
## and outputs the value of tau that provides a less than 1% error in the total energy
def tau_test(vel):
tau = np.arange(.1, .009, -.001)
r = [r0, 0]
v = vel
optimal_tau = 0
for t in tau:
i = 0
i_max = 5 * 76 / t
r_mag_initial = np.sqrt(r[0] ** 2 + r[1] ** 2) # initial magnitude of the position vector
v_mag_initial = np.sqrt(v[0] ** 2 + v[1] ** 2) # initial magnitude of the velocity vector
KE_initial = 0.5 * mass * v_mag_initial ** 2 # initial kinetic energy
PE_initial = -GM * mass / r_mag_initial # initial potential energy
ME_initial = KE_initial + PE_initial # initial total energy
## Main looping function (using Euler-Cromer method for now)
while not i == i_max:
r_mag = np.sqrt(r[0] ** 2 + r[1] ** 2) # magnitude of the position vector
v_mag_a = np.sqrt(v[0] ** 2 + v[1] ** 2) # current velocity magnitude
a = [-GM * r[0] / r_mag ** 3, # acceleration vector
-GM * r[1] / r_mag ** 3] # based on the current position
v = [v[0] + t * a[0], v[1] + t * a[1]] # new velocity vector using Euler-Cromer method
r = [r[0] + t * v[0], r[1] + t * v[1]] # new position vector using Euler-Cromer method
v_mag_b = np.sqrt(v[0] ** 2 + v[1] ** 2) # new velocity magnitude
if i > .75 * 76 / t: # Ensure that the comet is at least past the sun
if v_mag_b > v_mag_a: # Break out of the while loop if the comet is starting
break # to move back toward the sun
i += 1
v_mag = np.sqrt(v[0] ** 2 + v[1] ** 2) # final magnitude of the velocity vector
r_mag = np.sqrt(r[0] ** 2 + r[1] ** 2) # final magnitude of the position vector
KE = 0.5 * mass * v_mag ** 2 # final kinetic energy
PE = -GM * mass / r_mag # final potential energy
ME = KE + PE # final total energy
#print ME
if abs(((ME - ME_initial) / ME_initial) * 100) <= 1: # If the percent error between the initial and final
optimal_tau = t # total energies is less than 1%, set t as the optimal
break # tau and break out of the for loop
return optimal_tau
## Loop through each initial velocity and test it against several values of tau
taus = []
for u in v0:
v = [0, u]
#print ME_initial
taus.append(tau_test(v))
#print taus
## Plot the values of tau and initial velocity on a semi-log graph
## so that the true velocity of Halley's comet can be interpreted
plt.semilogy(v0, taus)
plt.grid(True)
plt.title('Tau versus Initial Velocities')
plt.show()
Any help is appreciated.
Answer: Instead of banging my head against the wall, I decided to go back to my
original script that plots the trajectory of the comet and its energy over
time. I edited it to find the percent difference between the initial total
energy and the total energy of the comet after each orbit. I plotted the total
energy percent difference trend and simply reran the program several times,
using different time step values for each initial velocity between 0.3 and 1.
While this was essentially a brute-force solution, it worked.
|
Writing to a uWSGI unix socket
Question: I have a Python wsgi app that is served by uWSGI behind NGinx. NGinx listens
on the network and forwards requests to the uWSGI unix socket located in
`/tmp/uwsgi.socket`.
Now, I'm trying to emulate what I'm speculating NGinx does when talking to
this socket. I've tried the following using Python:
import socket
uwsgi_socket_address = "/tmp/uwsgi.sock"
socket_client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
socket_client.connect(uwsgi_socket_address)
msg = "GET /index HTTP/1.1\r\n"
socket_client.sendall(msg)
I get this error in return
/usr/lib/python2.7/socket.pyc in meth(name, self, *args)
222
223 def meth(name,self,*args):
--> 224 return getattr(self._sock,name)(*args)
225
226 for _m in _socketmethods:
error: [Errno 32] Broken pipe
1) Is what I'm trying even possible with that socket and uWSGI? If so, what is
missing to get this to work?
2) Is there a Python utility that can help me craft raw HTTP requests as text,
rather than just querying the server on my behalf? Like so:
>>> import somehttplib
>>> http = somehttplib()
>>> request = http.get('/index')
>>> request.text=='GET /index HTTP/1.1\r\n'
True
Answer: You cannot talk HTTP to a server speaking the 'uwsgi' protocol. The uwsgi
protocol is a simple binary serialization format for arrays and dictionaries.
If you want to speak HTTP with the uWSGI server you have to configure it to
speak HTTP with --http-socket instead of --socket.
You can have both in the same instance with:
` uwsgi --http-socket <address1> --socket <address2> ... `
You can also implement a uwsgi serializer yourself in a few lines.
This is an example of a parser (you need the opposite obviously):
<https://github.com/unbit/blastbeat#uwsgi>
Here you find the specs:
<http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html>
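If you do want to go the serializer route, here is a rough sketch based on the
protocol page linked above: a 4-byte header (modifier1, a little-endian 16-bit
data size, modifier2) followed by length-prefixed key/value pairs. The environ
keys shown are a minimal assumption, not necessarily everything your particular
app expects:
import socket
import struct
def uwsgi_pack(env):
    body = b""
    for key, value in env.items():
        key, value = key.encode('latin1'), value.encode('latin1')
        body += struct.pack('<H', len(key)) + key
        body += struct.pack('<H', len(value)) + value
    # modifier1=0 marks a standard WSGI request, modifier2 is unused here
    return struct.pack('<BHB', 0, len(body), 0) + body
env = {
    'REQUEST_METHOD': 'GET',
    'PATH_INFO': '/index',
    'QUERY_STRING': '',
    'SERVER_NAME': 'localhost',
    'SERVER_PORT': '80',
}
s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/tmp/uwsgi.sock")
s.sendall(uwsgi_pack(env))
print(s.recv(4096))  # the raw HTTP response produced by the WSGI app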
|
how to cast a variable in xpath python
Question:
from lxml import html
import requests
pagina = 'http://www.beleggen.nl/amx'
page = requests.get(pagina)
tree = html.fromstring(page.text)
aandeel = tree.xpath('//a[@title="Imtech"]/text()')
print aandeel
This part works, but I want to read multiple lines with different titles, is
it possible to change the "Imtech" part to a variable?
Something like this; it obviously doesn't work, but where did I go wrong? Or is
it not quite this easy?
FondsName = "Imtech"
aandeel = tree.xpath('//a[@title="%s"]/text()')%(FondsName)
print aandeel
Answer: You were almost right:
variabelen = [var1,var2,var3]
for var in variabelen:
aandeel = tree.xpath('//a[@title="%s"]/text()' % var)
The key difference is that the `%` substitution is applied to the string before
it is passed to `xpath()`, not to the list that `xpath()` returns.
|
How to deal with 401 (unauthorised) in python requests
Question: What I want to do is GET from a site and if that request returns a 401, then
redo my authentication wiggle (which may be out of date) and try again. But I
don't want to try a third time, since that would mean my authentication wiggle
has the wrong credentials. Does anyone have a nice way of doing this that
doesn't involve properly ugly code? Ideally using the Python requests library,
but I don't mind changing.
Answer: It doesn't get any less ugly than this, I think:
import requests
from requests.auth import HTTPBasicAuth
response = requests.get('http://your_url')
if response.status_code == 401:
response = requests.get('http://your_url', auth=HTTPBasicAuth('user', 'pass'))
if response.status_code != 200:
# Definitely something's wrong
|
Python pandas Reading specific values from HDF5 files using read_hdf and HDFStore.select
Question: So I created an hdf5 file with a simple dataset that looks like this
>>> pd.read_hdf('STORAGE2.h5', 'table')
A B
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
Using this script
import pandas as pd
import scipy as sp
from pandas.io.pytables import Term
store = pd.HDFStore('STORAGE2.h5')
df_tl = pd.DataFrame(dict(A=list(range(5)), B=list(range(5))))
df_tl.to_hdf('STORAGE2.h5','table',append=True)
I know I can select columns using
x = pd.read_hdf('STORAGE2.h5', 'table', columns=['A'])
or
x = store.select('table', where = 'columns=A')
How would I select all values in column 'A' that equal 3, or specific
indices with strings in column 'A' like 'foo'? In pandas dataframes I would
use `df[df["A"]==3]` or `df[df["A"]=='foo']`
Also does it make a difference in efficiency if I use `read_hdf()` or
`store.select()`?
Answer: You need to specify `data_columns=` (you can use `True` as well to make all
columns searchable)
(FYI, the `mode='w'` will start the file over, and is just for my example)
In [50]: df_tl.to_hdf('STORAGE2.h5','table',append=True,mode='w',data_columns=['A'])
In [51]: pd.read_hdf('STORAGE2.h5','table',where='A>2')
Out[51]:
A B
3 3 3
4 4 4
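For the equality queries from your question, the same `where=` syntax should
work once the column is a data column:
In [52]: pd.read_hdf('STORAGE2.h5','table',where='A == 3')
Out[52]:
A B
3 3 3
For a string-valued data column you would quote the value inside the
expression, e.g. `where='A == "foo"'` (an assumption about your real data; in
the toy file above 'A' holds integers).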
|
Python filename change
Question: I have a number of videos in a directory on my Mac that all have a specific
string in the file name that I want to remove, but I want to keep the rest of
the file name as it is. I'm running this python script from terminal.
I have this syntax but it doesn't seem to work. Is it practical to use the
following? It seems too simple to be the best way to do this sort of thing
which is why I don't think it works.
from os import rename, listdir
text = "Some text I want to remove from file name"
files = listdir("/Users/Admin/Desktop/Dir_of_videos/")
for x in files:
if text in files:
os.rename(files, files.replace(text, ""))
Answer: The problem is that you get incomplete paths when you are using `listdir`:
it returns only the file names in the directory, without the directory path
prepended.
this should do the job:
import os
in_dir = './test'
remove = 'hello'
paths = [os.path.join(in_dir,file) for file in os.listdir(in_dir) if remove in file]
for file in paths:
os.rename(file, file.replace(remove, ""))
|
How to write the following in list comprehension in python
Question: Can I write the following as a list comprehension in python?
for row in candidates:
sum=0
for i in range(1,len(candidates)):
if(row[i]!='NA'):
sum+=int(row[i])
row.append(sum)
Here `candidates` is a list of lists, where each inner list contains a
candidate's marks. What I am trying to do is
"add a member to each inner list which will keep the total marks
of each candidate", like `candidate=[[143050023,5,7,6,8],[14305678,3,4,5,6]]`
(index 0 in the inner list holds the candidate's roll number).
I want answer as
[[143050023,5,7,6,8,26],[14305678,3,4,5,6,18]]
I am able to do the same without using a list comprehension but am facing
difficulty while trying it with a list comprehension. The difficulty that I am
facing is how to sum the values and append the result to the row after each
inner loop.
When is using a list comprehension better as compared to the normal for loop
structure?
Answer: Let's start with your final question:
> When is using a list comprehension better as compared to the normal for loop
> structure?
Two things have to be true:
* The purpose of your loop is to build a list.
* It's more readable as a comprehension than as an explicit loop.
* * *
Notice that you have two nested loops. That _might_ be transformable into a
comprehension with two `for` clauses, or into a nested comprehension, but it's
not going to transform into a single flat comprehension.
Your inner loop has a problem: a comprehension shouldn't try to mutate
anything, and _can't_ mutate anything with an assignment or other statement.
So you need to find a way to rewrite it so that it does its work immutably.
In this case, the obvious answer is to create the values to be summed, and
then pass the resulting iterable to a function that sums them, like the
builtin `sum` function:
sum(int(row[i]) for i in range(1, len(row)) if row[i] != 'NA')
I used a generator expression instead of a list comprehension because we don't
actually need the list for anything except to loop over it.
Notice that you can simplify this further. The only thing you're ever using
`i` for is in the (repeated) expression `row[i]`. So why iterate over the
range of indexes, when you can iterate over the row directly?
sum(int(value) for value in row if value != 'NA')
* * *
Your outer loop has a similar problem: you're trying to mutate each `row`
inside the loop, not build up a new structure. And it's hard to come up with a
good alternative that accomplishes the same thing by building a new structure
instead.
Sure, you could always do things in two passes:
sums = [sum(int(row[i]) for i in range(1, len(candidates)) if row[i] != 'NA')
for row in candidates]
candidates[:] = [row + [sum] for row, sum in zip(candidates, sums)]
And you can collapse those two passes into one by again changing `sums` into a
generator expression instead of a list comprehension, and you could even turn
it into a one-liner by doing it in-place instead of using a named temporary
variable:
candidates[:] = [row + [total] for row, total in zip(candidates, (sum(
int(row[i]) for i in range(1, len(candidates)) if row[i] != 'NA')
for row in candidates))]
But it's hard to argue that's more readable, or even remotely close to as
readable, as your original version.
Also, notice that I called the variable that holds each sum `total`, not
`sum`. The `sum` function is a pretty important builtin in Python (and one
we're actually using in this answer), so hiding it by creating a variable of
the same name is a bad idea. Even in this case, where it's only alive within
the scope of a list comprehension that doesn't use the builtin, so it's not
ambiguous to the interpreter, it's still ambiguous and confusing to a human
reader. (Thanks to [Padraic
Cunningham](http://stackoverflow.com/users/2141635/padraic-cunningham) for
pointing this out.)
|
plotting datetime object in matplotlib
Question: I have an array of datetime objects that is the following
dates = [datetime.datetime(1900, 1, 1, 10, 8, 14, 565000), datetime.datetime(1900, 1, 1, 10, 8, 35, 330000), datetime.datetime(1900, 1, 1, 10, 8, 43, 358000), datetime.datetime(1900, 1, 1, 10, 8, 52, 808000)]
I then tried converting the array to matplotlib suitable objects using `dates
= plt.dates.date2num(dates)`
Then I tried to plot it against some values using `ax1.plot_date(dates,
datac)`
but received errors as follows:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python34\lib\tkinter\__init__.py", line 1487, in __call__
return self.func(*args)
File "C:\Python34\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 278, in resize
self.show()
File "C:\Python34\lib\site-packages\matplotlib\backends\backend_tkagg.py", line 349, in draw
FigureCanvasAgg.draw(self)
File "C:\Python34\lib\site-packages\matplotlib\backends\backend_agg.py", line 461, in draw
self.figure.draw(self.renderer)
File "C:\Python34\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Python34\lib\site-packages\matplotlib\figure.py", line 1079, in draw
func(*args)
File "C:\Python34\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Python34\lib\site-packages\matplotlib\axes\_base.py", line 2092, in draw
a.draw(renderer)
File "C:\Python34\lib\site-packages\matplotlib\artist.py", line 59, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "C:\Python34\lib\site-packages\matplotlib\axis.py", line 1103, in draw
ticks_to_draw = self._update_ticks(renderer)
File "C:\Python34\lib\site-packages\matplotlib\axis.py", line 957, in _update_ticks
tick_tups = [t for t in self.iter_ticks()]
File "C:\Python34\lib\site-packages\matplotlib\axis.py", line 957, in <listcomp>
tick_tups = [t for t in self.iter_ticks()]
File "C:\Python34\lib\site-packages\matplotlib\axis.py", line 905, in iter_ticks
for i, val in enumerate(majorLocs)]
File "C:\Python34\lib\site-packages\matplotlib\axis.py", line 905, in <listcomp>
for i, val in enumerate(majorLocs)]
File "C:\Python34\lib\site-packages\matplotlib\dates.py", line 580, in __call__
result = self._formatter(x, pos)
File "C:\Python34\lib\site-packages\matplotlib\dates.py", line 412, in __call__
return self.strftime(dt, self.fmt)
File "C:\Python34\lib\site-packages\matplotlib\dates.py", line 450, in strftime
s1 = time.strftime(fmt, (year,) + timetuple[1:])
ValueError: Invalid format string
Does anyone have any advice on how to fix this? Thanks in advance!
Answer:
dates = plt.dates.date2num(dates)
is not appropriate, because you presumably imported matplotlib with
import matplotlib.pyplot as plt
and `plt` does not contain a `date2num` function. You would have to use
from matplotlib.dates import date2num
and then `dates = date2num(dates)`. I think this will work fine.
|
Heroku push rejected - failed to compile - Unicode error
Question: I'm new to Flask and Heroku, so to try it out, I wrote a little app that works
fine when I run it locally using `foreman start`. However, when I try to `git
push heroku master`, I get the following error:
----------------------------------------
Cleaning up...
Command /app/.heroku/python/bin/python -c "import setuptools,
tokenize;__file__='/tmp/pip_build_u44996/scipy/setup.py';exec(compile
(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record /tmp/pip-ETnOiF-record/install-record.txt
--single-version-externally-managed --compile failed with error code 1 in
/tmp/pip_build_u44996/scipy
Traceback (most recent call last):
File "/app/.heroku/python/bin/pip", line 9, in <module>
load_entry_point('pip==1.5.6', 'console_scripts', 'pip')()
File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/
__init__.py", line 185, in main
return command.main(cmd_args)
File "/app/.heroku/python/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/
basecommand.py", line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 70:
ordinal not in range(128)
! Push rejected, failed to compile Python app
I don't really understand the error traceback here in terms of what exactly is
going wrong. Is it something with scipy, or with the html templates I've made,
or one of the modules? I've tried out the proposed solutions to similar
questions that others have asked on stackoverflow, but I still get the same
error. Any help would be greatly appreciated. Thanks in advance!
Answer: Try replacing the line `distribute==x.x.x` with `setuptools>=0.7` in your
requirements.txt file.
Works for me.
|
Remove duplicate url's python
Question: I want to remove the duplicate URLs from a file containing a list of URLs. My
bugun_url_given.txt has "<http://www.bugun.com.tr/ara/Ak%20Parti/1>" and the script
fetches all the URLs, but they are repeating. It should save all the unique URLs
in "bugun_url_collection.txt". Here is my code:
from cookielib import CookieJar
import urllib2
import json
from bs4 import BeautifulSoup
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
try:
text_file = open('bugun_url_given.txt', 'r')
for line in text_file:
print line
soup = BeautifulSoup(opener.open(line))
links = soup.select('div.nwslist a')
for link in links:
print link
#unique_url = set(map(lambda url : url.strip("/ "), links))
with open('bugun_url_collection.txt', 'a') as f:
for link in links:
f.write(link.get('href') + '\n')
except ValueError:
pass
Answer:
for link in links:
f.write(link.get('href') + '\n')
can become
for link in set(link.get('href') for link in links):
f.write(link + '\n')
In response to the comment (which is correct), let's rewrite this properly:
from cookielib import CookieJar
import urllib2
import json
from bs4 import BeautifulSoup
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
def write_links_to_file(links):
    with open('bugun_url_collection.txt', 'a') as f:
        for link in links:
            f.write(link + '\n')
def get_links_from_file(text_file):
for line in text_file:
print line
soup = BeautifulSoup(opener.open(line))
links = soup.select('div.nwslist a')
for link in links:
yield link.get('href')
with open('bugun_url_given.txt', 'r') as text_file:
links = get_links_from_file(text_file)
unique_links = set(link for link in links)
write_links_to_file(unique_links)
|
How do I make a histogram from a csv file which contains a single column of numbers in python?
Question: I have a csv file (excel spreadsheet) of a column of roughly a million
numbers. I want to make a histogram of this data with the frequency of the
numbers on the y-axis and the number quantities on the x-axis. I know
matplotlib can plot a histogram, but my main problem is converting the csv
file from string to float since a string can't be graphed. This is what I
have:
import matplotlib.pyplot as plt
import csv
with open('D1.csv', 'rb') as data:
rows = csv.reader(data, quoting = csv.QUOTE_NONNUMERIC)
floats = [[item for number, item in enumerate(row) if item and (1 <= number <= 12)] for row in rows]
plt.hist(floats, bins=50)
plt.title("histogram")
plt.xlabel("value")
plt.ylabel("frequency")
plt.show()
Answer: You can do it in one line with [pandas](http://pandas.pydata.org):
import pandas as pd
pd.read_csv('D1.csv', quoting=2)['column_you_want'].hist(bins=50)
|
AttributeError: 'str' object has no attribute 'policy'
Question: I am new to Python. I am trying to make an email script that can send an
email. First, I made a Python script without any classes, just functions, to
make sure that the script runs as expected. After I got the expected result, I
tried to rewrite the script using classes, so as to learn. But I am getting an
error which I don't understand, and I can't see where the problem actually
lies.
Below is the code as well as the screenshot of the error
import smtplib
import os
import sys
import mimetypes #for guess mime types of attachment
from email import encoders
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.audio import MIMEAudio
from email.mime.base import MIMEBase
from email.mime.image import MIMEImage
class Email(object):
message = None
subject = None
from_address = None
to_address = None
body = None
email_server = None
attachment = None
def __init__(self,from_address,to_address,subject,body,attachment,email_server):
self.message = MIMEMultipart()
self.message['subject'] = subject
self.message['From'] = from_address
self.message['TO'] = to_address
self.body = MIMEText(body, 'plain')
self.message.attach(body)
self.email_server = email_server
if attachment is not None:
self.attachment = attachment
self.attach_attachment()
def get_message(self):
return self.message
def send_message(self,auth):
username, password = auth.get_user_auth_details()
server = smtplib.SMTP(self.email_server)
server.starttls() #For Encryption
server.login(username, password)
server.send_message(self.message)
server.quit()
def attach_attachment(self):
self.messaege = self.attachment.set_attachment_type(self.message)
class Security(object):
username = ""
password = ""
def __init__(self,username, password):
self.username = username
self.password = password
def get_user_auth_details(self):
return self.username, self.password
class Attachment(object):
attachment_path = ''
def __init__(self,attachment_path):
self.attachment_path = attachment_path
def is_directory(self):
return os.path.isdir(self.attachment_path)
def is_file(self):
return os.path.isfile(self.attachment_path)
def guess_and_get_attachment_type(self, filenamepath):
ctype, encoding = mimetypes.guess_type(filenamepath)
if ctype is None or encoding is not None:
# No guess could be made, or the file is encoded (compressed), so
# use a generic bag-of-bits type.
ctype = "application/octet-stream"
maintype , subtype = ctype.split('/' , 1)
if maintype == 'text':
fp = open(filenamepath)
attachment = MIMEText(fp.read() , subtype)
fp.close()
elif maintype == 'image':
fp = open(filenamepath , 'rb')
attachment = MIMEImage(fp.read() , subtype)
fp.close()
elif maintype == 'audio':
fp = open(filenamepath , 'rb')
attachment = MIMEAudio(fp.read() , subtype)
fp.close()
else:
fp = open(filenamepath , 'rb')
attachment = MIMEBase(maintype , subtype)
attachment.set_payload(fp.read()) #Actual message
fp.close()
encoders.encode_base64(attachment) # Encode the payload using Base64
return attachment
def set_attachment_type(self,message):
if(self.is_directory()):
for filename in os.listdir(self.attachment_path):
filenamepath = os.path.join(self.attachment_path , filename)
attachment = self.guess_and_get_attachment_type(filenamepath)
# Set the filename parameter
attachment.add_header('Content-Disposition', 'attachment', filename = filenamepath)
message.attach(attachment)
elif(self.is_file()):
attachment = self.guess_and_get_attachment_type(self.attachment_path)
# Set the filename parameter
attachment.add_header('Content-Disposition', 'attachment', filename = self.attachment_path)
message.attach(attachment)
else:
print("Unable to open file or directory")
return message
def main():
#Constants
GMAIL_SERVER = "smtp.gmail.com:587"
FROM_ADDRESS = "[email protected]"
TO_ADDRESS = "[email protected]"
auth = Security("[email protected]" , "MySuperSecretPassword")
attachment = Attachment("/path/to/attachment/file/or/directory")
email = Email(FROM_ADDRESS ,TO_ADDRESS, "Hi from class Python" , "OOPs Python at Work!!" ,attachment,GMAIL_SERVER )
email.send_message(auth)
if __name__ == '__main__':
main()
(screenshot of the error traceback omitted)
Answer: I changed
self.message.attach(body) #In the class email constructor
to
self.message.attach(self.body) #In the class email constructor
and it worked.
I was attaching string type to message instead of MIMEText
|
modify qscintilla python lexar
Question: Similar to this question: [Creating and colorizing new constructs on a
existing Scintilla
lexer](http://stackoverflow.com/questions/22021294/creating-and-colorizing-
new-constructs-on-a-existing-scintilla-lexer) but instead of adding, I would
like to modify text colors of the lexer in pyqt4. The closest I found is
[QScintilla: how to create a new lexer or modify an existing
one?](http://stackoverflow.com/questions/4097773/qscintilla-how-to-create-a-
new-lexer-or-modify-an-existing-one) in which the user just gave up.
Basically I would like to switch to a darker text editor theme, such as MAYA
(Not the same keyword/syntax highlighting, just the overall color theme):
(screenshot of a dark editor color theme omitted)
I have been able to modify some open code online to set my background and
default text:
lexer = getattr(Qsci, 'QsciLexer' + 'Python')()
lexer.setDefaultFont(font)
lexer.setDefaultPaper(QColor("#3c3c3c"))
lexer.setDefaultColor(QColor("#f9f9f9"))
self.setLexer(lexer)
self.SendScintilla(QsciScintilla.SCI_STYLESETFONT, 1, 'Helvetica')
I can't find a way to set colors for the Python lexer styles like comments,
imports, exceptions, etc.
Answer: To set the foreground colour for e.g. comments:
lexer.setColor(QColor('lightblue'), QsciLexerPython.Comment)
To set the background colour:
lexer.setPaper(QColor('darkblue'), QsciLexerPython.Comment)
To set the font:
lexer.setFont(QFont('DejaVu Sans'), QsciLexerPython.Comment)
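Putting those together, a minimal dark-theme sketch could look like this (the colour values and the chosen styles are illustrative assumptions, not a recommended palette):

    from PyQt4.QtGui import QColor, QFont
    from PyQt4.Qsci import QsciLexerPython

    lexer = QsciLexerPython()
    lexer.setDefaultPaper(QColor('#3c3c3c'))     # dark background for all styles
    lexer.setDefaultColor(QColor('#f9f9f9'))     # light default text
    lexer.setDefaultFont(QFont('DejaVu Sans Mono'))
    # per-style overrides
    lexer.setColor(QColor('#6a9955'), QsciLexerPython.Comment)
    lexer.setColor(QColor('#c586c0'), QsciLexerPython.Keyword)
    lexer.setColor(QColor('#ce9178'), QsciLexerPython.DoubleQuotedString)
    lexer.setPaper(QColor('#3c3c3c'), QsciLexerPython.Comment)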
For other possibilities, consult the [QScintilla
docs](http://pyqt.sourceforge.net/Docs/QScintilla2/index.html).
|
What exactly does Spyder do to Unicode strings?
Question: Running Python in a standard GNU terminal emulator on Ubuntu 14.04, I get the
expected behavior when typing interactively:
>>> len('tiθ')
4
>>> len(u'tiθ')
3
The same thing happens when running an explicitly utf8-encoded script in
Spyder:
# -*- coding: utf-8 -*-
print(len('tiθ'))
print(len(u'tiθ'))
...gives the following output, regardless of whether I run it in a new
dedicated interpreter, or run in a Spyder-default interpreter (shown here):
>>> runfile('/home/dan/Desktop/mwe.py', wdir=r'/home/dan/Desktop')
4
3
But when typing interactively in a Python console within Spyder:
>>> len('tiθ')
4
>>> len(u'tiθ')
4
This issue has been brought up
[elsewhere](http://stackoverflow.com/q/5695421/1664024), but that question
regards differences between Windows and Linux. Here, I'm getting different
results in different consoles _on the same system_ , and the Python startup
message in the terminal emulator and in the console within Spyder are
identical:
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
What is going on here, and how can I get Python-within-Spyder to behave like
Python-in-the-shell with regard to unicode strings? @martijn-pieters makes the
comment on [this question](http://stackoverflow.com/q/24034716/1664024) that
> Spyder does all sorts of things to break a normal Python environment.
But I'm hoping there's a way to un-break this particular feature, since it
makes it really hard to debug scripts in the IDE when I can't rely on my
interactive typed commands to yield the same results as scripts run as a whole
with their `coding: utf-8` declaration.
### UPDATES
In the GNU terminal:
>>> repr(u'tiθ')
"u'ti\\u03b8'"
>>> import sys
>>> sys.stdin.encoding
'UTF-8'
>>> sys.getdefaultencoding()
'ascii'
In Spyder console:
>>> repr(u'tiθ')
"u'ti\\xce\\xb8'"
>>> import sys
>>> sys.stdin.encoding # returns None
>>> sys.getdefaultencoding()
'UTF-8'
So knowing that, can I convince Spyder to behave like the GNU terminal?
Answer: After a bit of research, it seems that the strange behavior is at least in
part built into Python (see [this discussion
thread](https://mail.python.org/pipermail/python-
list/2010-August/584517.html); briefly, Python sets `sys.stdin.encoding` to
`None` as default, and only changes it if it detects that the host is `tty`
and it can detect the `tty`'s encoding).
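Given that, one partial workaround inside such a console is to decode typed byte strings yourself; a minimal sketch (not part of the original answer), assuming the console actually delivers UTF-8 bytes even though it reports no encoding:

    import sys
    encoding = sys.stdin.encoding or 'utf-8'   # fall back to UTF-8 when the console reports None
    s = 'tiθ'                                  # byte string as typed at the Spyder prompt
    print(len(s.decode(encoding)))             # 3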
That said, a hackish workaround was just to tell Spyder to use
`/usr/bin/python3` as its executable instead of the default (which was Python
2.7.6). When running a Python 3 console within Spyder or within a GNU terminal
emulator, I get results different (better!) than before, but importantly the
results are consistent, regardless of whether running a script or typing
interactively, and regardless of using GNU terminal or Spyder console:
>>> len('tiθ')
3
>>> len(u'tiθ')
3
>>> import sys
>>> sys.stdin.encoding
'UTF-8'
>>> sys.getdefaultencoding()
'utf-8'
This leads to other problems within Spyder, however: its `sitecustomize.py`
script is not Python 3 friendly, for example, so every new interpreter starts
with
Error in sitecustomize; set PYTHONVERBOSE for traceback:
SyntaxError: invalid syntax (sitecustomize.py, line 432)
The interpreter seems to work okay anyway, but it makes me nervous enough that
I'm not considering this an "acceptable" answer, so if anyone else has a
better idea...
|
Find if 24 hrs have passed between datetimes - Python
Question: I have the following method:
# last_updated is a datetime() object, representing the last time this program ran
def time_diff(last_updated):
day_period = last_updated.replace(day=last_updated.day+1, hour=1,
minute=0, second=0,
microsecond=0)
delta_time = day_period - last_updated
hours = delta_time.seconds // 3600
# make sure a period of 24hrs have passed before shuffling
if hours >= 24:
print "hello"
else:
print "do nothing"
I want to find out if 24 hrs have passed since `last_updated`, how can I do
that in `Python`?
Answer: If `last_updated` is a naive datetime object representing the time in UTC:
from datetime import datetime, timedelta
if (datetime.utcnow() - last_updated) > timedelta(1):
# more than 24 hours passed
If `last_updated` is the local time (naive (timezone-unaware) datetime
object):
import time
DAY = 86400
now = time.time()
then = time.mktime(last_updated.timetuple())
if (now - then) > DAY:
# more than 24 hours passed
If `last_updated` is an ambiguous time e.g., the time during an end-of-DST
transition (once a year in many timezones) then there is a fifty-fifty chance
that `mktime()` returns a wrong result (e.g., off by an hour).
`time.mktime()` may also fail if C `time` library doesn't use a historical
timezone database on a given platform _and_ the UTC offset for the local
timezone was different at `last_updated` time compared to now. It may apply to
more than a third of all timezones in the last year. Linux, OS X, the recent
versions of Windows have the tz database (I don't know whether old Windows
versions would work for such past dates).
Beware: it might be tempting to write `datetime.now() - last_updated` (similar
to the UTC case) but it is guaranteed to fail on all platforms if the UTC
offset was different at `last_updated` time (it is possible in many
timezones). `mktime()`-based solution can utilize the tz database at least on
some platforms and therefore it can handle the changes in the UTC offset for
whatever reason there.
For portability, you could install the tz database. It is provided by `pytz`
module in Python. `tzlocal` can return `pytz` timezone corresponding to the
local timezone:
from datetime import datetime, timedelta
from tzlocal import get_localzone # $ pip install tzlocal
tz = get_localzone() # local timezone
then = tz.normalize(tz.localize(last_updated)) # make it timezone-aware
now = datetime.now(tz) # timezone-aware current time in the local timezone
if (now - then) > timedelta(1):
# more than 24 hours passed
It works even if the UTC offset was different in the past. But it can't (as
well as `time.mktime()`) fix ambiguous times (`tz.localize()` picks
`is_dst=False` time by default). `tz.normalize()` is called to adjust non-
existing times e.g., those that correspond to a start-of-DST transition (it
should not affect the result).
The above code assumes that `last_updated` is a naive datetime object (no
associated timezone info). If `last_updated` is an aware datetime object then
it is easy to convert it to UTC:
from datetime import datetime, timedelta
then_in_utc = last_updated.replace(tzinfo=None) - last_updated.utcoffset()
if (datetime.utcnow() - then_in_utc) > timedelta(1):
# more than 24 hours passed
General note: you should understand now why people recommend to work with UTC
time and to use local time only for display.
|
Python: To get proper attribute / function name for dynamically added functions
Question: Below is the sample code.
import inspect
from types import MethodType
class Settings(object):
APPS = ['s1', 's2', 's3']
def __init__(self):
Settings._setup_apps(self)
@classmethod
def _setup_apps(cls, instance):
def _setup(self, appname = None):
print "appname : ", appname
print "inspect : ", inspect.currentframe().f_code.co_name
for app in Settings.APPS:
new_attr_name = _setup.__name__[1:] + '_' + app
setattr(Settings, new_attr_name, _setup)
# setattr(instance, new_attr_name, MethodType(_setup, instance, type(instance)))
Execution
s = Settings()
s.setup_s1()
Result
appname : None
inspect : _setup
How can I get the app name 's1' to be available in the function 'setup_s1'
(similarly 's2' & 's3' in 'setup_s2' & 'setup_s3' respectively), without
passing the 'appname' explicitly when calling, i.e.
s.setup_s1('s1') # Do not prefer this
But updating the default value of 'appname' from 'None' to 's1' / 's2' / 's3'
during the creation of attribute 'setup_s1' / 'setup_s2' / 'setup_s3' is good.
(I don't know if this is possible).
Answer: You can use
[`functools.partial`](https://docs.python.org/2/library/functools.html#functools.partial)
to bind parameters to a function:
import inspect
import functools
class Settings(object):
APPS = ['s1', 's2', 's3']
def __init__(self):
Settings._setup_apps(self)
@classmethod
def _setup_apps(cls, instance):
def _setup(self, appname = None):
print "appname : ", appname
print "inspect : ", inspect.currentframe().f_code.co_name
for app in Settings.APPS:
new_attr_name = _setup.__name__[1:] + '_' + app
setattr(Settings, new_attr_name, functools.partial(_setup, instance, app))
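With that change, each generated method already carries its app name, so calling it with no arguments prints something like this (output illustrative):

    s = Settings()
    s.setup_s1()
    # appname :  s1
    # inspect :  _setup
    s.setup_s2()
    # appname :  s2
    # inspect :  _setup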
|
Filtering null values from keys of dictionary- Python
Question: I have a pandas data frame and created a dictionary based on columns of the
data frame. The dictionary is almost generated correctly, but the only problem
is that when I try to filter out the NaN values my code doesn't work, so there
are NaN keys in the dictionary. My code is the following:
for key,row in mr.iterrows():
# With this line I try to filter out the NaN values but it doesn't work
if pd.notnull(row['Company nameC']) and pd.notnull(row['Company nameA']) and pd.notnull(row['NEW ID']) :
newppmr[row['NEW ID']]=row['Company nameC']
The output is:
defaultdict(<type 'list'>, {nan: '1347 PROPERTY INS HLDGS INC', 1.0: 'AFLAC INC', 2.0: 'AGCO CORP', 3.0: 'AGL RESOURCES INC', 4.0: 'INVESCO LTD', 5.0: 'AK STEEL HOLDING CORP', 6.0: 'AMN HEALTHCARE SERVICES INC', nan: 'FOREVERGREEN WORLDWIDE CORP'
So, I don't know how to filter out the nan values or what's wrong with my
code.
**EDIT:**
An example of my pandas data frames is:
            CUSIP  Company nameA  AÑO  NEW ID  Company nameC
42020 98912M201 NaN NaN NaN ZAP
42021 989063102 NaN NaN NaN ZAP.COM CORP
42022 98919T100 NaN NaN NaN ZAZA ENERGY CORP
42023 98876R303 NaN NaN NaN ZBB ENERGY CORP
Answer: Pasting an example of how to remove "nan" keys from your dictionary.
Let's create a dict with 'nan' keys (NaN in numeric arrays):
>>> a = float("nan")
>>> b = float("nan")
>>> d = {a: 1, b: 2, 'c': 3}
>>> d
{nan: 1, nan: 2, 'c': 3}
Now, lets remove all 'nan' keys
>>> from math import isnan
>>> c = dict((k, v) for k, v in d.items() if not (type(k) == float and isnan(k)))
>>> c
    {'c': 3}
Another scenario that works fine. Maybe I'm missing something?
In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: df = pd.DataFrame({'a':[1,2,3,4,np.nan],'b':[np.nan,np.nan,np.nan,5,np.nan]})
In [4]: df
Out[4]:
a b
0 1 NaN
1 2 NaN
2 3 NaN
3 4 5
4 NaN NaN
In [5]: for key, row in df.iterrows(): print pd.notnull(row['a'])
True
True
True
True
False
In [6]: for key, row in df.iterrows(): print pd.notnull(row['b'])
False
False
False
True
False
In [7]: x = {}
In [8]: for key, row in df.iterrows():
....: if pd.notnull(row['b']) and pd.notnull(row['a']):
....: x[row['b']]=row['a']
....:
In [9]: x
Out[9]: {5.0: 4.0}
|
Compiling *.py files
Question: I'm trying to compile python source files without success. According to
[documentation](https://docs.python.org/3.2/library/compileall.html),
**compileall.compile_dir** function has "**ddir** " parameter, which (I guess)
specifies the destination folder for .pyc files. I try to compile it with this
script:
import compileall
compileall.compile_dir(".", force=1, ddir=".")
but it doesn't work. I see output in the terminal (compiling, listing etc.) but
pyc files are not generated. Can anyone help me understand where the pyc files
are stored and how to change that default behavior?
Thanks
Answer: Check `__pycache__` directories. Since Python 3.2, the compiled files are
collected in `__pycache__` directories.
See [PEP 3147: PYC Repository Directories - What’s New In Python
3.2](https://docs.python.org/3/whatsnew/3.2.html?highlight=__pycache__#pep-3147-pyc-
repository-directories).
BTW, `ddir` is not destination directory. According to [the
documentation](https://docs.python.org/3/library/compileall.html#compileall.compile_dir):
> If _ddir_ is given, it is prepended to the path to each file being compiled
> for use in compilation time tracebacks, and is also compiled in to the byte-
> code file, where it will be used in tracebacks and other messages in cases
> where the source file does not exist at the time the byte-code file is
> executed.
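If you specifically want the `.pyc` files written next to the sources (the pre-3.2 layout) rather than into `__pycache__`, `compile_dir` also accepts a `legacy` flag in Python 3.2+; a small sketch:

    import compileall
    compileall.compile_dir(".", force=True, legacy=True)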
|
Buildout installs django but can't import
Question: Here's my buildout.cfg:
[buildout]
parts =
django
[versions]
djangorecipe = 1.5
django = 1.7
[django]
recipe = djangorecipe
project = timetable
eggs =
Here's my routine for setting up project in a new environment:
virtualenv .
source bin/activate
easy_install -U setuptools
python bootstrap.py
bin/buildout -v
python manage.py migrate
When I run bin/buildout, it says django is installed, and django binary is in
the bin folder. But when I run manage.py, it can't import django:
(timetable)mick88@s59gr2dmmd:~/timetable$ python manage.py migrate
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
But it works when I install django using pip. Why doesn't buildout install
django in my virtualenv? How can I fix this?
Answer: Buildout doesn't install anything in a virtualenv. Buildout collects python
packages and adds programs to the `bin/` directory that have the correct
python packages added to their `sys.path`.
So:
* virtualenv/pip installs _everything_ into the virtualenv. You have to activate the virtualenv so that it can modify your `PYTHONPATH` environment variable (and the `PATH` variable). This way the python from your virtualenv's `bin/` directory is used and the python packages from the `lib/` dir.
* Buildout adds the necessary "pythonpath" changes to scripts in `bin/`, modifying the `sys.path` setting directly instead of through the environment variable.
The **one thing** you need to know is that you should run `bin/django` instead
of `python manage.py`. The effect is the same, only `bin/django` already has
the right `sys.path` setting.
As an example, just look at the contents of the `bin/django` script. It should
look something like this:
#!/usr/bin/python
import sys
sys.path[0:0] = [
'/vagrant',
'/vagrant/eggs/djangorecipe-1.10-py2.7.egg',
'/vagrant/eggs/Django-1.6.6-py2.7.egg',
'/vagrant/eggs/zc.recipe.egg-2.0.1-py2.7.egg',
'/vagrant/eggs/zc.buildout-2.2.1-py2.7.egg',
'/vagrant/eggs/South-1.0-py2.7.egg',
...
]
import djangorecipe.manage
if __name__ == '__main__':
sys.exit(djangorecipe.manage.main('yoursite.settings'))
|
How to use Hacker News API in Python?
Question: Hacker News has released an API, how do I use it in Python?
I want to get all the top posts. I tried using `urllib`, but I don't think I am
doing it right.
here's my code:
import urllib2
response = urllib2.urlopen('https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty')
html = response.read()
print response.read()
It just prints empty
''
I missed a line; I have updated my code.
Answer: As @jonrsharpe explained, `read()` is a one-time operation. So if you print
`html`, you will get the list of all ids. To go through that list, you have to
make a separate request for each id to get its story.
First you have to convert the received data to a Python list and loop over it:
    import json  # the ids arrive as a JSON list
    base_url = 'https://hacker-news.firebaseio.com/v0/item/{}.json?print=pretty'
    top_story_ids = json.loads(html)
    for story in top_story_ids:
        response = urllib2.urlopen(base_url.format(story))
        print response.read()
Instead of all this, you could use [haxor](https://github.com/avinassh/haxor),
it's a Python wrapper for Hacker News API. Following code will fetch you all
the ids of top stories :
from hackernews import HackerNews
hn = HackerNews()
top_story_ids = hn.top_stories()
# >>> top_story_ids
# [8432709, 8432616, 8433237, ...]
Then you can go through that loop and print all them, for example:
for story in top_story_ids:
print hn.get_item(story)
**Disclaimer** : I wrote `haxor`.
|
importer which imports .py files only
Question: I need a Python <http://legacy.python.org/dev/peps/pep-0302/> finder and
importer class which works on a specific directory, but it can load only `.py`
files (i.e. no `.so`, no `.dll`, no `.pyc`).
The specified directory contains several packages (with `__path__` specified
and overridden from the default `__path__` added for `__init__.py`).
Also I need a loader which doesn't create `.pyc` files, and doesn't use any of
the Python 2.6-specific solutions (e.g. `sys.dont_write_bytecode = True`,
`python -B` or `PYTHONDONTWRITEBYTECODE`).
Answer: Since I couldn't find an existing implementation, I wrote one:
<https://github.com/pts/import_only_py/blob/master/import_only_py.py>
|
How do I flush a graphics figure from matplotlib.pylab when inside a file input loop?
Question: I am using Python 2.7 and importing libraries numpy and matplotlib. I want to
read multiple file names of tab-delimited text files (time, voltage and
pressure measurements) and after each one display the corresponding graph with
%pylab.
My code can display the graph I want, but only after I enter the specific
string ('exit') to get out of the while loop. I want to see each graph
displayed immediately after the file name has been entered and have multiple
figures on the screen at once. What's wrong with my code below?
import numpy as np
import matplotlib.pylab as plt
filename = ''
filepath = 'C:/Users/David/My Documents/Cardiorespiratory Experiment/'
FileNotFoundError = 2
while filename != 'exit':
is_valid = False
while not is_valid :
try :
filename = raw_input('File name:')
if filename == 'exit':
break
fullfilename = filepath+filename+str('.txt')
data = np.loadtxt(fullfilename, skiprows=7, usecols=(0,1,2))
is_valid = True
except (FileNotFoundError, IOError):
print ("File not found")
t = [row[0] for row in data]
v = [row[1] for row in data]
p = [row[2] for row in data]
#c = [row[3] for row in data]
plt.figure()
plt.subplot(2, 1, 1)
plt.title('Graph Title ('+filename+')')
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.plot(t, v, color='red')
plt.subplot(2, 1, 2)
plt.xlabel('time (s)')
plt.ylabel('pressure (kPa)')
plt.plot(t, p, color='blue')
plt.show()
I have tried Padraic Cunningham's suggestion to use only a single while loop
to get the file name and that's an improvement. But when I put the graphing
commands inside the loop, the figure comes up as an empty window with the
message "Not Responding". The graph appears in the figure only after exiting
the while loop. I want the figures to appear immediately upon getting the file
name. Here's my current code:
import numpy as np
import matplotlib.pylab as plt
filename = ''
filepath = 'C:/Users/David/My Documents/Cardiorespiratory Experiment/'
FileNotFoundError = 2
Count = 0
while Count <= 4:
try :
filename = raw_input('File name:')
fullfilename = "{}{}.txt".format(filepath, filename)
data = np.loadtxt(fullfilename, skiprows=7, usecols=(0,1,2))
is_valid = True
except (FileNotFoundError, IOError):
print ("File not found")
Count += 1
t = [row[0] for row in data]
v = [row[1] for row in data]
p = [row[2] for row in data]
plt.figure()
plt.subplot(2, 1, 1)
plt.title('Graph Title ('+filename+')')
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.plot(t, v, color='red')
plt.subplot(2, 1, 2)
plt.xlabel('time (s)')
plt.ylabel('pressure (kPa)')
plt.plot(t, p, color='blue')
plt.show()
Answer: Just use one loop to get the filename:
while True:
try :
filename = raw_input('File name:')
full_filename = "{}{}.txt".format(filepath, filename)
data = np.loadtxt(full_filename, skiprows=7, usecols=(0,1,2))
break # if no error break out of loop
except (FileNotFoundError, IOError): # else catch error and ask again
print ("File not found")
`".txt"` is already a string so no need to cast
|
python script and libGLEW related error (menpo.io API)
Question: I am writing a Python script in Ubuntu 14.04 that imports menpo.io (an API
for deformable models), which results in the following error:
Traceback (most recent call last):
File "/home/xsmailis/Dropbox/pyFaceDepression/AAM_Menpo_final.py", line 1, in <module>
import menpo.io as mio
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/__init__.py", line 7, in <module>
import io
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/io/__init__.py", line 1, in <module>
from base import (import_auto, import_image, import_images,
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/io/base.py", line 815, in <module>
from menpo.io.extensions import (mesh_types, all_image_types,
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/io/extensions.py", line 2, in <module>
from menpo.io.landmark import (LM3Importer, LANImporter, LM2Importer,
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/io/landmark.py", line 9, in <module>
from menpo.shape import PointCloud
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/shape/__init__.py", line 2, in <module>
from menpo.shape.mesh import TriMesh, ColouredTriMesh, TexturedTriMesh
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/shape/mesh/__init__.py", line 2, in <module>
from .coloured import ColouredTriMesh
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/shape/mesh/coloured.py", line 3, in <module>
from menpo.rasterize import Rasterizable
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/rasterize/__init__.py", line 2, in <module>
from menpo.rasterize.opengl import GLRasterizer
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/menpo/rasterize/opengl.py", line 2, in <module>
from cyrasterize.base import CyRasterizerBase
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/cyrasterize/__init__.py", line 1, in <module>
from cyrasterize.base import CyRasterizer
File "/home/xsmailis/miniconda/envs/menpo/lib/python2.7/site-packages/cyrasterize/base.py", line 3, in <module>
from cyrasterize.glrasterizer import GLRasterizer
ImportError: libGLEW.so.@glew_version@: cannot open shared object file: No such file or directory
The library I import is called menpo.io.
Any ideas how to fix this?
Answer: I have to apologise, this is actually totally my fault as I was trying to
update some of our build scripts and accidentally released a broken build of
GLEW. To fix this, you can run:
`conda install -c menpo --force glew=1.10.0`
Which should download and reinstall GLEW and hopefully fix this problem.
Please reply back if you have any problems at all!
|
Python3 "magic functions" stack trace
Question: I find myself in a situation where I am redefining a lot of the so called
"magic" attributes or functions of my class in Python3 (`__add__`, `__sub__`,
etc.)
For all of these, I implement the same two lines of code:
arg1 = self.decimal if isinstance(self, Roman) else self
arg2 = other.decimal if isinstance(other, Roman) else other
The details of what these lines do aren't important; however, the redundancy in
my code is distracting. Is there another "magic" function that is a middle
ground between this and it being called in the REPL?
For example:
>> Class(9) + Class(3)
... (somewhere in python module)
... def __magicFunction__(self, rhs):
... arg1 = self.decimal if isinstance(self, Roman) else self
... arg2 = other.decimal if isinstance(other, Roman) else other
...
... THEN
...
... def __add__(self, rhs):
... return arg1 + arg2
...
12
with a stacktrace of something like this:
Traceback (most recent call last):
File "< stdin>", line 1, in < module>
File "/home/module.py", line 105, in ```__magicFunction__```
File "/home/module.py", line 110, in ```__gt__```
I hope this makes sense...
Answer: I don't know about another magic function, but it would probably be just as
productive to make arg1 and arg2 permanent variables of the class you're in.
Then make a method for the class that you call from within every other magic
function.
EDIT:
Actually, why don't you just use `getattr`? Each magic function would then look
something like this:
    def __add__(self, rhs):
        return getattr(self, 'decimal', self) + getattr(rhs, 'decimal', rhs)
|
Browser Crashes in Page Retrieving Loop, Python Selenium Script
Question: I wrote this simple script - it simply fetches an image from blogposts and
posts it to Pinterest. It works great, except that after about 43 pages, the
browser hangs/freezes.
I'm wondering if there is some sort of "leak" causing things to get out of
control? Perhaps there is something I can alter in the script to make it run
without crashing?
Here is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import pickle
import os.path
import time
driver = webdriver.Firefox()
#START THE RACE
driver.get("https://www.pinterest.com/login/")
assert "Pinterest" in driver.title
#name
elem = driver.find_element_by_name("username_or_email")
elem.send_keys("[email protected]")
#pass
elem = driver.find_element_by_name("password")
elem.send_keys("12345")
elem.send_keys(Keys.RETURN)
time.sleep(5)
new_url = 'something.com/something_else'
driver.get(new_url)
i=0
while(1):
i=i+1
print i
time.sleep(5)
driver.find_element_by_css_selector(".pin-it-btn-wrapper a").click();
time.sleep(3)
try:
driver.find_element_by_css_selector('[data-pin-index="0"]').click();
except:
driver.find_element_by_css_selector("#prev_post a").click();
time.sleep(3)
handles = driver.window_handles
if(handles):
for handle in handles:
driver.switch_to_window(handle)
try:
driver.find_element_by_css_selector('button.pinIt').click();
time.sleep(2)
except:
continue
time.sleep(3)
if(handles):
for handle in handles:
driver.switch_to_window(handle)
try:
time.sleep(1)
driver.find_element_by_css_selector("#prev_post a").click();
time.sleep(3)
break
except:
continue
#assert "No results found." not in driver.page_source
#driver.close()
The script itself logs into Pinterest, goes to the given site, and starts crawling
the "previous post" links of the blog posts - it pushes the "Pin it" button,
selects the image, confirms (in a separate window), then jumps back to the main
window and starts another page.
As mentioned, I think it's possible there is a "leak" somewhere causing it to
grind to a halt after a while.
Answer: Per commenter's advice, simply changing every instance of:
time.sleep(3)
to
driver.implicitly_wait(3)
seems to have helped things. I also updated to the most recent version.
|
Python Lex-Yacc (PLY) Error recovery at the end of input
Question: ## Problem
I am trying to implement an error tolerant parser using Python Lex-Yacc (PLY),
but I have trouble using error recovery rules at the end of my input string.
How can I recover from an unexpected end of input?
## Example
This example grammar produces strings of the form `A END A END A END A END`
`...`
Statement : Expressions
Expressions : Expression Expressions
|
Expression : A END
I want to perform an error recovery if the END token was omitted, so strings
like `A A A END` or `A A A` will be recognized by the parser.
## My approach
I added an error recovery rule, which allows me to accept input like `A A A
END`
Expression : A END
| A error
Which allows me to accept the following input: `A A A END`
But if the last END token is omitted (`A A A`), I still get a syntax error and
cannot recover.
* * *
# Sample PLY code
from __future__ import print_function
# Tokens
tokens = ('A', 'END')
t_A = r'A'
t_END = r'END'
t_ignore = " "
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
# Build the lexer
import ply.lex as lex
lex.lex()
# Rules
def p_statement_expr(p):
'''statement : expressions'''
print("parsed:", p[1])
def p_expressions(p):
'''expressions : expression expressions'''
p[0] = [p[1]] + p[2]
def p_expressions_empty(p):
'''expressions : '''
p[0] = list()
def p_expression_pharse(p):
'''expression : A END
| A error'''
p[0] = 'A'
def p_error(p):
if p:
print("Syntax error at '%s'" % p.value)
else:
print("Syntax error at EOI")
import ply.yacc as yacc
yacc.yacc()
while 1:
try:
s = raw_input('query > ') # use input() on Python 3
except EOFError:
break
yacc.parse(s)
Answer: I add it as a new answer (and I do know it is too late for the bounty :-( )
because it is a very different approach. If we used `flex`, it would be much
easier, since it has the notion of the `<<EOF>>` token that matches only at
end of file. After thinking about that, I realized that it was very simple to
add that functionality to PLY _without any change to the original module_ by
using a _proxy_ around the lexer. And Python allows easy implementation of
proxies thanks to the `__getattr__` special method.
I just add
* a new token `EOF` that will be send at end of file
* a proxy around the `token` method of the lexer that on end of file returns the special `EOF` token on first pass and then the normal `None`
* the `EOF` token at the end of the `statement` rule
And I still reverse the rule to `expressions : expressions expression` instead of
`expressions : expression expressions` to allow an immediate reduce
The code becomes :
from __future__ import print_function
# Tokens
tokens = ('A', 'END', 'EOF')
t_A = r'A'
t_END = r'END'
t_ignore = " "
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
# Build the lexer
import ply.lex as lex
orig_lexer = lex.lex()
class ProxyLexer(object):
def __init__(self, lexer, eoftoken):
self.end = False
self.lexer = lexer
self.eof = eoftoken
def token(self):
tok = self.lexer.token()
if tok is None:
if self.end :
self.end = False
else:
self.end = True
tok = lex.LexToken()
tok.type = self.eof
tok.value = None
tok.lexpos = self.lexer.lexpos
tok.lineno = self.lexer.lineno
# print ('custom', tok)
return tok
def __getattr__(self, name):
return getattr(self.lexer, name)
lexer = ProxyLexer(orig_lexer, 'EOF')
# Rules
def p_statement_expr(p):
'''statement : expressions EOF'''
print("parsed:", p[1])
def p_expressions(p):
'''expressions : expressions expression'''
p[0] = p[1] + [p[2]]
def p_expressions_empty(p):
'''expressions : '''
p[0] = list()
def p_expression_pharse(p):
'''expression : A END
| A error'''
p[0] = 'A'
def p_error(p):
if p:
print("Syntax error at '%s'" % p.value)
else:
print("Syntax error at EOI")
import ply.yacc as yacc
parser = yacc.yacc()
while 1:
try:
s = raw_input('query > ') # use input() on Python 3
except EOFError:
break
parser.parse(s, lexer = lexer)
That way :
* the original grammar is unchanged
* the error recovery method remains stupidly simple and has no dependence on the rest of the grammar
* it can be easily extended to complex parsers
|
PyCharm: Python built-in exceptions unresolved
Question: I have a working Django PyCharm 3.4.1 project which I have been working on for
months without problems!
But now PyCharm _for some reason marks all Python built-in exceptions as
unresolved_. Other features like code completion and debugging continue to work
fine.
### How it happened
The issue started when I created a package named "exceptions" and tried to move
a few Exception-derived classes into that package via "refactor" - the
operation completed without displaying any error, but the involved source files
were not modified, as if the operation had not happened.
After realizing the possible name conflict with the built-in exceptions I
deleted the folder - effectively putting the source files back into their
initial state.
### Additional Description
* At that point exceptions like IOError, Exception, KeyError were no longer resolved correctly.
* As a quickfix the IDE suggests creating the respective class, renaming the reference or ignoring the problem.
* The editor shows unresolved references, but in the project explorer the concerned files are not underlined in red.
### Attempts to fix the issue
Unfortunate the issue remained even after i:
* closed and reopened the project
* invalidated all Caches and restarted the IDE multiple times
* switched the python interpreter
Do you have ideas or suggestions on how to make PyCharm get these names right
again and to resolve the issue?
Answer: I found a way to resolve the issue:
1. exported "File types" IDE-settings
2. exchanged the filetypes.xml which looked like this
<?xml version="1.0" encoding="UTF-8"?>
<application>
<component name="FileTypeManager" version="11">
<ignoreFiles list="CVS;SCCS;RCS;rcs;.DS_Store;.svn;.pyc;.pyo;*.pyc;*.pyo;.git;*.hprof;_svn;.hg;*.lib;*~;__pycache__;.bundle;*.rbc;*$py.class;" />
<extensionMap>
<mapping pattern="exceptions.py" type="PLAIN_TEXT" />
<mapping ext="spec" type="PLAIN_TEXT" />
</extensionMap>
</component>
</application>
with
<?xml version="1.0" encoding="UTF-8"?>
<application>
<component name="FileTypeManager" version="11">
<ignoreFiles list="CVS;SCCS;RCS;rcs;.DS_Store;.svn;.pyc;.pyo;*.pyc;*.pyo;.git;*.hprof;_svn;.hg;*.lib;*~;__pycache__;.bundle;*.rbc;*$py.class;" />
<extensionMap>
<mapping ext="spec" type="PLAIN_TEXT" />
</extensionMap>
</component>
</application>
3. imported the modified settings.jar back into the IDE.
After searching for about 7 hours for what caused the bug, by exporting
settings, resetting all IDE settings and importing the settings back module by
module, I found this solution.
After identifying the file types module of the IDE as the problem, I tried to
modify the problematic module via the settings dialog but could not find a
way.
I wonder what caused the IDE to create the
<mapping pattern="exceptions.py" type="PLAIN_TEXT" />
entry; when trying to reproduce the issue it would no longer occur, and even my
refactoring operation succeeded.
|
Logging at Gevent application
Question: I'm trying to use the standard Python logging module together with gevent. I have
monkey patched threading and I expect logging to work with my app:
import gevent.monkey
gevent.monkey.patch_all()
import logging
logger = logging.getLogger()
fh = logging.FileHandler('E:\\spam.log')
fh.setLevel(logging.DEBUG)
logger.addHandler(fh)
def foo():
logger.debug('Running in foo')
gevent.sleep(0)
logger.debug('Explicit context switch to foo again')
def bar():
logger.debug('Explicit context to bar')
gevent.sleep(0)
logger.debug('Implicit context switch back to bar')
gevent.joinall([
gevent.spawn(foo),
gevent.spawn(bar),
])
Unfortunately E:\spam.log is empty and I can see no output to the console. It
seems that either I haven't configured logging properly or gevent does not
support it at all (which I don't believe, because gevent documentation says
that it [does](http://learn-gevent-
socketio.readthedocs.org/en/latest/gevent.html#examples)). So, how can I log
in a gevent app?
Answer: You haven't configured it correctly. You need to set the `DEBUG` level on the
logger rather than the handler, otherwise the logger's default level
(`WARNING`) causes your debug messages to be dropped. Try doing a
logger.setLevel(logging.DEBUG)
and it should work.
|
Recording the time into a text file
Question: I am trying to test how long my battery runs on my Raspberry Pi. All I need to
do is run a while loop until the battery dies. However, I need to record the
time that it starts and ends. I would like to save the data into a txt file.
My current code is as follows:
import time
file = open('time','w')
x=1
while x==1:
from datetime import datetime
now = datetime.now()
file.write(now)
file.close()
If I just print the results in Python I will get a result, and the current
code makes a file, but there is nothing saved in the txt file.
Any and all help would be appreciated.
Thank you for your time
Answer: You have an infinite loop which just gets the current time over and over again
... This loop won't break (even if your battery dies ...).
At some point, you need to `break` the loop or the condition in the `while`
needs to become `False`. e.g.
from datetime import datetime
while x == 1:
now = datetime.now()
break
or
from datetime import datetime
while x == 1:
now = datetime.now()
x += 1
Generally speaking, you'll want to look in your system logs for when the
computer decided to start up and when it decided to shut down due to lack of
battery power ...
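If you do want a timestamp file rather than the system logs, a minimal sketch is to keep appending (and flushing) the current time so that the last line on disk is close to the moment power was lost - the file name and interval here are arbitrary:

    import time
    from datetime import datetime

    log = open('battery_time.txt', 'w')
    log.write('started: %s\n' % datetime.now())
    log.flush()
    while True:
        log.write('still alive: %s\n' % datetime.now())
        log.flush()            # push the line to disk before the battery dies
        time.sleep(60)         # one entry per minute is plenty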
|
Read/Write Sequence Files Containing Thrift Records Using Hadoop Streaming with Python
Question: I would like to Read/Write sequence files containing Thrift records using
Hadoop Streaming with Python. I have looked at the following and its seems
this is possible after HADOOP-1722 but if someone has done this already and
can give an example, that would be great.
<http://mojodna.net/2013/12/27/binary-streaming-with-hadoop-and-nodejs.html>
[How to use "typedbytes" or "rawbytes" in Hadoop
Streaming?](http://stackoverflow.com/questions/15171514/how-to-use-typedbytes-
or-rawbytes-in-hadoop-streaming)
<http://static.last.fm/johan/huguk-20090414/klaas-hadoop-1722.pdf>
<https://issues.apache.org/jira/browse/HADOOP-1722>
The key is to be able to read thrift objects from stdin in Python.
Answer: I finally got this done with [Hadoopy](http://www.hadoopy.com/en/latest/).
This is my simple Thrift object.
struct Test {
1: required string foo;
2: required string bar;
}
I generated the python definitions using the command line tool, and zipped the
directory. Then I generated some data and pushed it into HDFS using loadtb.
Here is the code to de-serialize the data and simply write it out as a string.
import hadoopy
from thrift.protocol import TBinaryProtocol
from thrift.TSerialization import deserialize
import sys
class Mapper(object):
def __init__(self):
sys.path.append('lib/test.zip/gen-py')
from Test.ttypes import Test
self.protocol_factory = TBinaryProtocol.TBinaryProtocolFactory
self.test = Test()
def map(self, key, value):
deserialize(self.test, key, protocol_factory=self.protocol_factory())
yield 1, self.test.foo + " " + self.test.bar
if __name__ == '__main__':
hadoopy.run(Mapper)
This is the driver script.
import hadoopy
hadoopy.launch('/data/fuscala', '/data/fuscala-output', 'fuscala1.py', remove_output = True, files = ["test.zip"])
data = hadoopy.readtb('/data/fuscala-output')
for i, j in data:
print i,j
|
How to convert list of dictionary values in Python to a string and write on separate cells of a csv file?
Question: I want to write a dictionary to a csv file in Python. I want the output to be
like this:
key1 key2
4 3
2 1
3
but the output is like this:
key1 key2
['4'] ['3']
['2'] ['1']
['3']
I'm using this code to write the dictionary to the csv file:
my_file_path = os.path.join(filePath, 'MyDictionary.csv')
with open(my_file_path,'w', newline='') as csv_file:
csv_writer = csv.DictWriter(csv_file, my_data.keys())
csv_writer.writeheader()
csv_writer.writerow(my_data)
I couldn't find my answer in the archive. How can I convert the lists of values
so that they are written as plain values in separate cells of the csv file?
Answer: Try this:
>>> import csv,os
>>> my_data = {'key1': [4,2,3], 'key2': [3,1]}
>>> filePath='.'
>>> my_file_path = os.path.join(filePath, 'MyDictionary.csv')
    >>> with open(my_file_path, 'w', newline='') as csv_file:
... keys = sorted(my_data.keys()) #ensure key1 is before key2
... csv_writer = csv.writer(csv_file, delimiter=',')
... csv_writer.writerow(keys)
    ...     num_loops = max([len(v) for k, v in my_data.items()])
... for idx in range(0,num_loops):
... row = []
... for k in keys:
... to_append = my_data[k][idx] if len(my_data[k])>idx else ''
... row.append(to_append)
... csv_writer.writerow(row)
Output file:
key1 key2
4 3
2 1
3
|
Checking file header, signature and type
Question: I am making a file type checking program (with file signature checking), but
there are so many file signatures that I can't compare the target file against
every signature.
Is there any library for checking file signatures, or how can I easily write this
script? How can I make a single function that checks a file?
* <http://forensic-proof.com/archives/300>
* [Python: Check if uploaded file is jpg](http://stackoverflow.com/questions/266648/python-check-if-uploaded-file-is-jpg)
Code:
__author__ = 'jun'
# -*- coding: utf-8 -*-
import sys
if len(sys.argv) is 1:
        print(sys.stderr, 'Please enter a file name to read')
exit(1)
try:
IN = open(sys.argv[1], 'rb')
except IOError:
        print(sys.stderr, 'No such file, or it could not be opened.')
exit(1)
offset = 0
def is_jpg(filename):
data = IN.read(11)
if data[:4] != '\xff\xd8\xff\xe0': return False
if data[6:] != 'JFIF\0': return False
return True
if is_jpg(IN) is True:
print ('jpg file')
Answer: What you're trying to do is called detecting the mime type.
[This library](https://github.com/ahupp/python-magic) can do what you want.
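A minimal sketch of using it (the file path is just a placeholder):

    import magic  # pip install python-magic

    print(magic.from_file('example.jpg', mime=True))     # e.g. 'image/jpeg'
    with open('example.jpg', 'rb') as f:
        print(magic.from_buffer(f.read(1024), mime=True))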
|
Add enviroment variable for python with batch
Question: ## **Short question:**
How would I append to a Python environment variable with a batch script? I want
to do the equivalent of `import sys; sys.path.append('E:\whatever\locallyScriptFolder')`,
but with a batch file. I'm a batch noob.
## **Longer pipeline question:**
I need to set up a Maya Python script pipeline. All the scripts are in our
Perforce folder, which also contains the .bat file that copies userSetup.py to
the user's local drive. This userSetup.py executes when Maya starts, and from
there I want to run scripts from the Perforce folder, but this folder has a
different path for every user (some have it on the E drive and so on). What
would be the best way to reach those scripts? Is it to append an
environment variable?
**Example:**
C:\Users\nameOnUser\Documents\maya\2014-x64\scripts\ **userSetup.py** \-
starts up with Maya.
X:\RandomPath\Scripts\ **scriptWantToCallFromUserSetup1.py**
X:\RandomPath\Scripts\ **scriptWantToCallFromUserSetup2.py**
Answer:
set PYTHONPATH=X:\RandomPath\Scripts
Whatever is in `PYTHONPATH` will be added to `sys.path` when Python starts.
|
Can't pip install anything requiring C compilation on OSX 10.10 with homebrew python
Question: When I try to `pip install` things that involve C compilation (`Pillow`,
specifically) I get an odd error:
clang: error: no such file or directory: 'Python.framework/Versions/2.7/Python'
error: command 'clang' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/local/opt/python/bin/python2.7 -c "import setuptools, tokenize;__file__='/private/var/folders/xt/f2d8vp4j65j2rt71pns6trjw0000gn/T/pip_build_jakob/Pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/xt/f2d8vp4j65j2rt71pns6trjw0000gn/T/pip-LnL0hW-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/var/folders/xt/f2d8vp4j65j2rt71pns6trjw0000gn/T/pip_build_jakob/Pillow
[I posted my `pip.log` for this
installation.](https://gist.github.com/c0fd762f3a690a715f85)
I think this problem is due to a combination of OSX 10.10 Yosemite and
Homebrew python. Have you experienced this before? ideas on how to fix it?
Answer: You need to install Xcode in order to be able to compile on OS X.
|
Python How to copy value from an Entry field to a Text field in Tkinter
Question: I'm trying to make a simple GUI with Tkinter in which pressing a button
adds the value in the Entry field as the next row in the Text field.
from tkinter import *
#def onclick():
# pass
class MyApp:
def __init__(self):
window=Tk()
window.title('Multi Line Text Form')
frame = Frame(window)
frame.pack()
#self.fields ={}
l = Label(frame, text="Text to add below: ")
l.grid(row=0, column = 0)
#self.fields['name'] = Entry(frame)
name = Entry(frame)
#self.fields['name'].grid(row=0,column=1)
name.grid(row=0,column=1)
self.text = Text(frame, width=40, height=10)
self.text.grid(row=3,column=1)
self.text.insert(INSERT, "Hello.....")
self.text.insert(END, "Bye Bye.....")
msg = name.get()
updatebtn = Button(frame, text="Update Text Box", command=self.writeToLog(name))
updatebtn.grid(row=0, column=2)
window.mainloop()
def writeToLog(self, msg):
print(msg)
numlines = self.text.index('end - 1 line').split('.')[0]
#text['state'] = 'normal'
if numlines==4:
self.text.delete(1.0, 2.0)
if self.text.index('end-1c')!='1.0':
self.text.insert('end', '\n')
self.text.insert('end', msg)
#text['state'] = 'disabled'
if __name__=="__main__":
MyApp()
Answer: Your button needs to be configured with the _name_ of a function to call. This
function can then get the value from the entry widget and put it in the text
widget:
def __init__(self):
...
updatebtn = Button(command=self.update_text)
...
self.name_entry = Entry(frame)
def update_text(self):
data = self.name_entry.get()
self.writeToLog(data)
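Pulled together into a minimal runnable sketch (widget names are illustrative and the layout is simplified):

    from tkinter import *   # on Python 2 the module is named Tkinter

    root = Tk()
    entry = Entry(root)
    entry.pack()
    text = Text(root, width=40, height=10)
    text.pack()

    def update_text():
        text.insert(END, entry.get() + '\n')   # append the Entry contents as a new row

    Button(root, text="Update Text Box", command=update_text).pack()
    root.mainloop()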
|
Getting the name of file download using selenium in python
Question: So I'm downloading a file using Selenium and it works fine but I need to get
the name of the file.
My variable `path` should get the name of what was downloaded but all it
prints out is "none".
driver = webdriver.Firefox(firefox_profile=profile, firefox_binary=Binary)
driver.get("stuff")
time.sleep(2)
path = driver.find_element_by_xpath("//a[contains(text(), 'tgz')]").click()
print path
Answer: This is an alternate solution to
[@TehTris](http://stackoverflow.com/u/2193134)'s
[solution](http://stackoverflow.com/a/26371425/2932244).
It is not necessary that all the links will contain the file name (they can
also be redirects at times), so in that case you can try checking the
difference in the directory listing before and after downloading the file by
using [`os.listdir()`](https://docs.python.org/2/library/os.html#os.listdir).
import os
before = os.listdir('/home/jason/Downloads')
# Download the file using Selenium here
after = os.listdir('/home/jason/Downloads')
change = set(after) - set(before)
if len(change) == 1:
file_name = change.pop()
else:
print "More than one file or no file downloaded"
|
Capture group via regex; then run a different regex on captured group
Question: I have a log file that's filled with run time information for a program. If it
fails it generates a typical python stacktrace.
I'd like to extract the whole stack trace, triggered by 'Traceback (most recent
call last):' into a regex group, and then run another separate regex to grab
the last line, usually something like KeyError: Something blah blah.
I have two questions that I have not been able to find answers to: is it
possible to use regex to find the Traceback line and, if it exists, grab
everything from that line to the end of my file?
And if so, can I run a different regex on a captured regex group?
Answer: This should be the most obvious approach. Always try to give (code) examples of
your circumstances, in this case your concrete log structure. Check out
the [regex](https://docs.python.org/2/library/re.html) documentation.
import re
test = """
some stuff
Traceback (most recent call last):
File "x.py", line x, in <module>
raise KeyError("foo")
KeyError: foo
some stuff
Traceback (most recent call last):
File "y.py", line y, in <module>
raise KeyError("bar")
KeyError: bar
some other stuff
"""
match = re.findall(ur"^Traceback \(most recent call last\):\n(.*?^KeyError: (.+?)$)$", test, re.DOTALL|re.MULTILINE)
for traceback, error in match:
print "\n".join(a.strip() for a in traceback.split("\n"))
print error.strip()
Hint: `.*?` is a non-greedy match, meaning it tries to match as little as
possible. Otherwise it would match the stuff between two tracebacks.
Results in:
File "x.py", line x, in <module>
raise KeyError("foo")
KeyError: foo
foo
File "y.py", line y, in <module>
raise KeyError("bar")
KeyError: bar
bar
|
Parsing Serialized Java objects in Python
Question: The string at the bottom of this post is the serialization of a
`java.util.GregorianCalendar` object in Java. I am hoping to parse it in
Python.
I figured I could approach this problem with a combination of regexps and
`key=val` splitting, i.e. something along the lines of:
`text_inside_brackets = re.search(r"\[(.*)\]", text).group(1)`
and
import parse
for x in [parse('{key} = {value}', x) for x in text_inside_brackets.split('=')]:
my_dict[x['key']] = x['value']
My question is: What would be a more **principled** / **robust** approach to
do this? Are there any Python parsers for serialized Java objects that I could
use for this problem? (do such things exist?). What other alternatives do I
have?
My hope is to ultimately parse this in JSON or nested Python dictionaries, so
that I can manipulate it it any way I want.
**Note:** I would prefer to avoid a solution that relies on
[Py4J](http://py4j.sourceforge.net/) mostly because it requires setting up a
server and a client, and I am hoping to do this within a single Python script.
>
> `java.util.GregorianCalendar[time=1413172803113,areFieldsSet=true,areAllFieldsSet=true,lenient=true,zone=sun.util.calendar.ZoneInfo[id="America/New_York",offset=-18000000,dstSavings=3600000,useDaylight=true,transitions=235,lastRule=java.util.SimpleTimeZone[id=America/New_York,offset=-18000000,dstSavings=3600000,useDaylight=true,startYear=0,startMode=3,startMonth=2,startDay=8,startDayOfWeek=1,startTime=7200000,startTimeMode=0,endMode=3,endMonth=10,endDay=1,endDayOfWeek=1,endTime=7200000,endTimeMode=0]],firstDayOfWeek=1,minimalDaysInFirstWeek=1,ERA=1,YEAR=2014,MONTH=9,WEEK_OF_YEAR=42,WEEK_OF_MONTH=3,DAY_OF_MONTH=13,DAY_OF_YEAR=286,DAY_OF_WEEK=2,DAY_OF_WEEK_IN_MONTH=2,AM_PM=0,HOUR=0,HOUR_OF_DAY=0,MINUTE=0,SECOND=3,MILLISECOND=113,ZONE_OFFSET=-18000000,DST_OFFSET=3600000]`
Answer: The serialized form of a GregorianCalendar object contains quite a lot of
redundancy. In fact, there are only two fields that matter, if you want to
reconstitute it:
1. the time
2. the timezone
There is code for extracting this in [How to Convert Gregorian String to
Gregorian Calendar](http://stackoverflow.com/questions/25807422/how-to-
convert-gregorian-string-to-gregorian-calendar/25809374#25809374)
If you want a more principled and robust approach, I echo mbatchkarov's
suggestion to use JSON.
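If those two fields are all you need, a minimal Python sketch (assuming the string has exactly the shape shown in the question) can pull them out with regular expressions rather than a full parser:

    import re
    from datetime import datetime

    s = 'java.util.GregorianCalendar[time=1413172803113,areFieldsSet=true,zone=sun.util.calendar.ZoneInfo[id="America/New_York",offset=-18000000]]'
    millis = int(re.search(r'time=(\d+)', s).group(1))
    zone_id = re.search(r'id="([^"]+)"', s).group(1)
    print(datetime.utcfromtimestamp(millis / 1000.0))   # 2014-10-13 04:00:03.113000 (naive UTC)
    print(zone_id)                                       # America/New_York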
|
Can't install lxml in python2.7
Question: I'm trying to install lxml within a virtualenv with `sudo pip install lxml` and
also `sudo pip install --upgrade lxml` but getting the following in both
cases:
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,
relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes
-D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -lxslt
-lexslt -lxml2 -lz -lm -o build/lib.linux-x86_64-2.7/lxml/etree.so
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools,
tokenize;__file__='/tmp/pip_build_root/lxml/setup.py';
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'),
__file__, 'exec'))" install --record /tmp/pip-nmFOYf-record/install-record.txt
--single-version-externally-managed --compile failed with error code 1 in
/tmp/pip_build_root/lxml
Storing debug log for failure in /root/.pip/pip.log
I have tried all the posted solutions
[here](http://stackoverflow.com/questions/4598229/installing-lxml-module-in-
python), which implies that I have `libxml2-dev`, `libxslt-dev` and `python-
dev` installed and I also installed `build-essential`
I'm currently running Linux Mint 17 Debian Based which uses `apt-get` as
package manager.
`python-lxml` was already preinstalled.
Answer: `lxml` depends on various C libraries, and you have to have those C libraries
installed—including their development files (headers, `.so` or `.a` libraries,
etc.)—to build `lxml`. The [installation](http://lxml.de/installation.html)
docs explain what prerequisites you need to build on your particular platform.
* * *
This error:
/usr/bin/ld: cannot find -lz
… means that the prerequisite you're missing is `libz`, aka `zlib`.
Since you're not on Windows, it's incredibly unlikely that you actually don't
have `zlib` at all… but pretty plausible that you don't have the development
files for `zlib`. On some platforms, most notably many linux distros, packages
are typically split into separate pieces. For example, the parts of `zlib`
needed at runtime may be in a package named `zlib`, while the parts needed for
building other programs that need `zlib` in a package named `zlib-dev` or
`zlib-devel`. The exact details depend on your exact platform.
That being said, if you don't have the `zlib` development files, you probably
don't have the `libxml2` or `libxslt` development files either, because I
don't know of any platform where installing those wouldn't have pulled in the
`zlib` files as well.
At any rate, since you haven't told us what platform you're on (and distro,
if linux), I don't know what package manager to use, what the packages are
named, etc., but do whatever's appropriate for your platform.
* * *
Also:
> I already have `python-lxml` installed
You really shouldn't install the same package both with your distro's package
manager and with `pip`; that's a great way to confuse yourself.
But at any rate, most likely you installed `python-lxml` from a binary
package, not from source, which means you didn't need the build requirements.
Now you're trying to build it from source, which means you do.
|
MySQL Connector in Python
Question: Trying to connect to my MySQL DB on my VPS. I'm getting the following error
when using the below code.
(I've stripped out my credentials; however, I know they work, as I use them
for my PHP dev as well.)
**CODE**
import mysql.connector as mysql
from mysql.connector import errorcode
try:
conn = mysql.connect( user= %USER%
,password=%PW%
,host = %HOST%
,database = %DB%
)
except mysql.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print("Something is wrong with your user name or password")
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print("Database does not exists")
else:
print(err)
else:
conn.close()
**ERROR MESSAGE**
2003: Can't connect to MySQL server on '%HOST%:3306' (10060 A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond)
**Connector That I'm using**
<http://cdn.mysql.com/Downloads/Connector-Python/mysql-connector-
python-2.0.1-py3.4.msi>
Any idea what I'm doing wrong?
Answer: Check the [GRANT](http://dev.mysql.com/doc/refman/5.5/en/grant.html) on your
MySQL database. It might be that you haven't been GRANTed permission to
connect from that host with the given username and password.
The MySQL error number is 2003. I usually find it helpful to do a Google
search to see if others have had the problem:
<http://dev.mysql.com/doc/refman/5.0/en/access-denied.html>
Is there a firewall between the client machine and the database server? I
would hope that port 3306 would be blocked by default. If that's true, you'll
need to set up a firewall rule to allow access from the client IP to the
database IP via port 3306.
|
What is the best way to convert raw binary data to a custom base in python?
Question: I need to convert some data to base 29 before processing and I'm using this:
import string
def datatobase(data, base):
digs = string.digits + string.lowercase + string.uppercase
if base > len(digs):
return None
digits = []
x = int(data.encode("hex"), 16)
while x:
digits.append(digs[x % base])
x /= base
digits.reverse()
return ''.join(digits)
The problem is that this small piece of code slows my program down too much, so what
would you do to replace it?
A custom answer for base 29 only would be great too!
Answer: A base-29-only solution for an `int` argument.
Recursive:
s = '0123456789ABCDEFGHIJKLMNOPQRS'
def foo(n, s=s):
if n < 29:
return s[n]
a, b = divmod(n, 29)
return foo(a) + s[b]
_Regular_ :
def foo(n, s=s):
x = ''
while n >= 29:
n, b = divmod(n, 29)
x += s[b]
x += s[n]
return x[::-1]
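For example (values just for illustration):

    >>> foo(0)
    '0'
    >>> foo(29)
    '10'
    >>> foo(12345)
    'EJK'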
|
python does not find wxPython
Question: In order to start building a GUI I've decided to install wxPython; however, I
can't get it working. I run Python 2.7.6 (in IDLE it shows: Python 2.7.6
(default, Nov 10 2013, 19:24:24) [MSC v.1500 64 bit (AMD64)] on win32) and it
works fine.
However, when I try to install a 32 bit or 64 bit version of wxPython from
<http://www.wxpython.org/download.php> it doesn't work, meaning that when I
run:
import wx
I get the following message:
Traceback (most recent call last):
File "C:\Users\****\Desktop\Python GUI test\test1.py", line 2, in <module>
import wx
ImportError: No module named wx
I think it might go wrong with the place where it installs wxPython. Normally
however it uses the right directory automatically. It tries to install in the
following dir:
C:\Users\****\AppData\Local\Enthought\Canopy32\User\Lib\site-packages
Which is in a program called Canopy which I once installed but don't know how
to get rid off. I've installed it in this dir and I've also installed it
within the dir where Python is installed:
C:\Python27\Lib\site-packages
Both of these locations don't work.
Anyone has an idea where things go wrong?
Answer: I would suggest cleaning up your installations. Canopy is part of Enthought,
so if you don't need it anymore, uninstall it; you can do that from:
Control Panel\All Control Panel Items\Programs and Features
Then re/install the Python version you want to use (I would suggest 2.7.8 32 bit)
and then install wxPython 3.0.1 32 bit for Python 2.7.
To test it do not use "Idle" as it is written in Tkinter, which will cause
conflicts with wxPython. Just test it with
c:\Python27\python.exe
import wx
|
Want to pull a journal title from an RCSB Page using python & BeautifulSoup
Question: I am trying to get specific information about the original citing paper in the
Protein Data Bank given only the 4 letter PDBID of the protein.
To do this I am using the python libraries requests and BeautifulSoup. To try
and build the code, I went to the page for a particular protein, in this case
1K48, and also save the HTML for the page (by hitting command+s and saving the
HTML to my desktop).
First things to note:
1) The url for this page is:
<http://www.rcsb.org/pdb/explore.do?structureId=1K48>
2) You can get to the page for any protein by replacing the last four
characters with the appropriate PDBID.
3) I am going to want to perform this procedure on many PDBIDs, in order to
sort a large list by the Journal they originally appeared in.
4) Searching through the HTML, one finds the journal title located inside a
form here:
<form action="http://www.rcsb.org/pdb/search/smartSubquery.do" method="post" name="queryForm">
<p><span id="se_abstractTitle"><a onclick="c(0);">Refined</a> <a onclick="c(1);">structure</a> <a onclick="c(2);">and</a> <a onclick="c(3);">metal</a> <a onclick="c(4);">binding</a> <a onclick="c(5);">site</a> of the <a onclick="c(8);">kalata</a> <a onclick="c(9);">B1</a> <a onclick="c(10);">peptide.</a></span></p>
<p><a class="sePrimarycitations se_searchLink" onclick="searchCitationAuthor('Skjeldal, L.');">Skjeldal, L.</a>, <a class="sePrimarycitations se_searchLink" onclick="searchCitationAuthor('Gran, L.');">Gran, L.</a>, <a class="sePrimarycitations se_searchLink" onclick="searchCitationAuthor('Sletten, K.');">Sletten, K.</a>, <a class="sePrimarycitations se_searchLink" onclick="searchCitationAuthor('Volkman, B.F.');">Volkman, B.F.</a></p>
<p>
<b>Journal:</b>
(2002)
<span class="se_journal">Arch.Biochem.Biophys.</span>
<span class="se_journal"><b>399: </b>142-148</span>
</p>
A lot more is in the form but it is not relevant. What I do know is that my
journal title, "Arch.Biochem.Biophys", is located within a span tag with class
"se_journal".
And so I wrote the following code:
def JournalLookup():
    PDBID = '1K48'
    import requests
    from bs4 import BeautifulSoup
    session = requests.session()
    req = session.get('http://www.rcsb.org/pdb/explore.do?structureId=%s' % PDBID)
    doc = BeautifulSoup(req.content)
    Journal = doc.findAll('span', class_="se_journal")
Ideally I'd be able to use find instead of findAll as these are the only two
in the document, but I used findAll to at least verify I'm getting an empty
list. I assumed that it would return a list containing the two span tags with
class "se_journal", but it instead returns an empty list.
After spending several hours going through possible solutions, including a
piece of code that printed every span in doc, I have concluded that the
requests doc does not include the lines I want at all.
Does anybody know why this is the case, and what I could possibly do to fix
it?
Thanks.
Answer: The content you are interested in is provided by the javascript. It's easy to
find out, visit the same URL on browser with javascript disabled and you will
not see that specific info. It also displays a friendly message:
> "This browser is either not Javascript enabled or has it turned off. This
> site will not function correctly without Javascript."
For javascript driven pages, you cannot use Python Requests. There are some
alternatives, one being [dryscrape](https://github.com/niklasb/dryscrape).
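An untested sketch of that route (dryscrape drives a headless WebKit session, so the page's javascript actually runs before you parse the HTML):
    import dryscrape
    from bs4 import BeautifulSoup

    session = dryscrape.Session()
    session.visit('http://www.rcsb.org/pdb/explore.do?structureId=1K48')
    soup = BeautifulSoup(session.body())          # body() returns the rendered HTML
    journal_spans = soup.find_all('span', class_='se_journal')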
PS: Do not import libraries/modules within a function. Python does not
recommend it and [PEP08](http://www.python.org/dev/peps/pep-0008/) says that:
> Imports are always put at the top of the file, just after any module
> comments and docstrings, and before module globals and constants.
[This SO question](http://stackoverflow.com/questions/128478/should-python-
import-statements-always-be-at-the-top-of-a-module) explains why it's not
the recommended way to do it.
|
Fast 3D matrix re-slicing
Question: I have a 3D matrix of size _(X, Y, Z)_ which is stored in a data structure as
_Z_ matrices, each _X x Y_ in size. I would like to re-slice these matrices to
obtain _X_ slices, each _Y x Z_ in size. In other words, I want to reslice a
3D matrix stored as XY slices in the YZ plane. The use case is to reslice
axial CT images into sagittal images. I am working inside a **browser
environment.**
Here's an example of what I am trying to achieve:
![series](http://ieeexplore.ieee.org/ieee_pilot/articles/06/ttg2009061407/assets/img/article_1/fig_2/large.jpg)
![enter image description here](http://i.stack.imgur.com/prSnW.jpg)
I have implemented the naive (iterative) solution in Python, which takes O(Y *
Z) per slice. I haven't even bothered writing out the corresponding JavaScript
implementation, because this approach is too slow by several orders of
magnitude.
import glob
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread

height, width, depth = 512, 512, 100
volume = np.zeros((height, width, depth))

s = 0
for filename in glob.iglob('./*.jpg'):
    volume[:,:,s] = imread(filename)[...,0]/255.0
    s += 1

reslice = np.zeros((depth, height, width))
for s in xrange(0, width):
    current = np.zeros((depth, height))
    for i in xrange(0, height):
        for j in xrange(0, depth):
            current[j,i] = volume[i,s,j]
    reslice[:,:,s] = current
This algorithm seems to be amenable to parallelization. For example, in CUDA,
one could load the 3D data into global memory, create one thread per pixel,
then iterate for every slice in the new direction, and on each iteration ask
the right pixels to fire in order to fill out the current slice. This would be
a trivial kernel to write, and would be approximately O(1) per slice. However,
I don't have access to CUDA in the browser.
Mapping from CUDA to WebCL is relatively straightforward, but WebCL is out of
question given inexistent vendor support ATM. Therefore, I'm thinking WebGL is
the ideal solution.
I'm not too sure how this would be done in the "WebGL" paradigm, but I'm sure
it can be done, and I suspect it is fairly trivial as well. I can't seem to
find where to start, however, and resources on doing general-purpose
computations with OpenGL are extremely scarce. How would I go about using
OpenGL to speed up reslicing of a 3D matrix inside the browser?
Answer: You don't necessarily have to use webGL to be fast enough.
If you use a 3D array, JavaScript might be too slow but by using a **_flat_**
array the time to reslice is in fact similar to the time it takes to create
the array!
Another trick is to use a **_typed array_** to reduce memory usage and improve
performances (Uint8Array).
I created a small class to handle such a flat array and to slice it.
I think the most relevant thing you want is in fact to get a **_view_** ,
either on (x, y) axes or (y, z) axes.
Since Array creation is very slow, you want to build the view on place within
a fixed buffer. And since you want also a sliced view, you have to create a
buffer and method also for the sliced view. It's fast: creating a view for
your 512x512x100 example takes **_less than 5 ms!_** (So in fact, the
putImageData you'll have to do afterwards will take more time!)
Fiddle is here: <http://jsfiddle.net/n38mwh95/1/>
Here's the class handling the data, you'll have to change the constructor so
it accepts the real raw data:
function Array3D(xSize, ySize, zSize) {
    this.xSize = xSize;
    this.ySize = ySize;
    this.zSize = zSize;
    var xyMultiplier = xSize * ySize;
    this.array = new Uint8Array(xSize * ySize * zSize);
    this.view = new Uint8Array(xSize * ySize);
    this.slicedView = new Uint8Array(ySize * zSize);

    this.valueAt = function (x, y, z) {
        return this.array[x + xSize * (y + z * ySize)];
    };

    this.setValueAt = function (x, y, z, val) {
        return this.array[x + xSize * (y + z * ySize)] = val;
    };

    this.buildView = function (z) {
        var src = this.array;
        var view = this.view;
        for (var x = 0; x < xSize; x++) {
            for (var y = 0; y < ySize; y++) {
                view[x + xSize * y] = src[x + xSize * (y + z * ySize)];
            }
        }
        return view;
    };

    this.buildSlicedView = function (x) {
        var src = this.array;
        var sView = this.slicedView;
        for (var y = 0; y < ySize; y++) {
            for (var z = 0; z < zSize; z++) {
                sView[y + ySize * z] = src[x + xSize * (y + z * ySize)];
            }
        }
        return sView;
    };
}
In use:
var xSize = 512;
var ySize = 512;
var zSize = 100;
var t1, t2;
t1 = performance.now();
var testArray = new Array3D(xSize, ySize, zSize);
t2 = performance.now();
console.log('created in :' + (t2 - t1));
t1 = performance.now();
var resliced = testArray.buildView(10);
t2 = performance.now();
console.log('building view in :' + (t2 - t1));
var x = 80;
t1 = performance.now();
var resliced = testArray.buildSlicedView(x);
t2 = performance.now();
console.log('building sliced view in :' + (t2 - t1));
Results:
created in: 33.92199998779688
building view in: 2.7559999871300533
building sliced view in: 5.726000003051013
At the end of the code I also added some code to render the view.
Don't forget to cache the canvas imageData: create it only once then re-use it
for best performance.
You could easily have a real-time rendering in fact.
|
Create Job using Rundeckrun?
Question: Want to create a job using rundeckrun python module in Rundeck, I searched in
their documentation, but couldn't find it.
Is there any other option to create a job using rundeckrun in Rundeck?
Thanks for your attention.
Answer: This was recently posted in the rundeckrun repo:
[#17](https://github.com/marklap/rundeckrun/issues/17) (probably by you:
@Naren). As I mentioned in the comments of that Github Issue, the Rundeck API
doesn't provide a high level method of creating jobs so rundeckrun doesn't
either... yet. :) However, the Rundeck API does provide a method of importing
job definitions and rundeckrun provides a light wrapper around that endpoint.
You can manipulate/create a job definition and import it. In fact, that's
exactly what's done in the
[`tests/__init__.py`](https://github.com/marklap/rundeckrun/blob/master/tests/__init__.py#L44)
setup logic:
test_job_id = uuid.uuid4()
test_job_name = 'TestJobTest'
test_job_proj = 'TestProjectTest'
test_job_def_tmpl = """<joblist>
  <job>
    <id>{0}</id>
    <loglevel>INFO</loglevel>
    <sequence keepgoing='false' strategy='node-first'>
      <command>
        <node-step-plugin type='localexec'>
          <configuration>
            <entry key='command' value='echo "Hello World! (from:${{option.from}})"' />
          </configuration>
        </node-step-plugin>
      </command>
      <command>
        <node-step-plugin type='localexec'>
          <configuration>
            <entry key='command' value='sleep ${{option.sleep}}' />
          </configuration>
        </node-step-plugin>
      </command>
    </sequence>
    <description></description>
    <name>{1}</name>
    <context>
      <project>{2}</project>
      <options>
        <option name='from' value='Tester' />
        <option name='sleep' value='0' />
      </options>
    </context>
    <uuid>{0}</uuid>
  </job>
</joblist>"""
test_job_def = test_job_def_tmpl.format(test_job_id, test_job_name, test_job_proj)
And then [the creation of the
job](https://github.com/marklap/rundeckrun/blob/master/tests/__init__.py#L107)...
def setup():
    rundeck_api.jobs_import(test_job_def, uuidOption='preserve')
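Putting that together outside of the test suite looks roughly like this, a hedged sketch where the hostname and token are placeholders and `jobs_import` is the same light wrapper used above:
    from rundeck.client import Rundeck

    rd = Rundeck('rundeck.example.com', api_token='your-api-token')
    rd.jobs_import(test_job_def, uuidOption='preserve')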
|
Executing a Postgresql query in Python that creates a table
Question: I'm creating a Flask webapp that displays the results of various Postgresql
queries. As part of the initialization, I want to run a query that creates a
table in Postgresql containing all of the data that I will need for subsequent
queries. My problem is that, although the webapp appears to initialize
correctly (in that it doesn't throw up any exceptions), I have no idea where
the new table is being stored (I believe I have namespaced it correctly) and I
can't access it from any other queries.
Code below:
import psycopg2 as p2
import pandas

conn = p2.connect("dbname='x' user='y' host='z' password='password' port='1234'")
cur = conn.cursor()

def exec_postgres_file(cursor, postgres_file):
    statement = open(postgres_file).read()
    try:
        cursor.execute(statement)
    except (p2.OperationalError, p2.ProgrammingError) as e:
        print "\n[WARN] Error during execute statement \n\tArgs: '%s'" % (str(e.args))

exec_postgres_file(cur, '/Filepath/initialization_query.sql')
cur.execute("""SELECT * FROM schema.new_table""")
rows = cur.fetchall()
new_table_df = pandas.DataFrame(rows)
print new_table_df

exec_postgres_file(cur, '/Filepath/query1.sql')
rows2 = cur.fetchall()
query1_df = pandas.DataFrame(rows2)
print query1_df
Where new_table is the table created during initialization_query. query1 is
trying to access new_table but an exception is thrown up after the second
exec_postgres_file statement:
> relation "new_table" does not exist LINE 10
Assume initialization_query is:
select *
into schema.new_table
from schema.old_table
where id in ('A','B','C','D','E','F')
;
and query1 is:
select date, sum(revenue) as revenue
from schema.new_table
group by date
;
These both work when I run the query in a database management tool like
Navicat.
Answer: This is what Flask-Migrate has been made for: <http://flask-
migrate.readthedocs.org/en/latest/>
|
Apache mod_wsgi and Qt
Question: I'm getting an error in Apache error_log with WSGI and PyQt4 :
: cannot connect to X server
My Python code looks like :
import PyQt4.QtGui as qtgui
__qt_app = qtgui.QApplication([])
I had a minimal CentOS installation and I had to install lightweight X server
(group "X Window System" and some other rpms).
Previous code is working in a console after the X server installation (before
was not).
Environment : CentOS 6.5, Apache 2.2.15, mod_wsgi 4.3.0.
Any clue about what could happening ?
Answer: I found a "solution", i re-installed minimal CentOS (i don't need X server)
and i added the `xorg-x11-server-Xvfb` package.
Start Xvfb on display 99 :
/usr/bin/Xvfb :99 -screen 0 640x480x24
Edit python code :
import os
import PyQt4.QtGui as qtgui
os.environ['DISPLAY'] = ':99'
__qt_app = qtgui.QApplication([])
And everything is working.
|
skimage's rgb2gray in python: AttributeError: Nonetype object has no attribute ndim
Question: I was using this code (with skimage version 0.10.0) as far as I can remember
without issues:
import cv2
from scipy import misc
import scipy.io as sio
from skimage.color import rgb2gray

img = cv2.imread(myfile)
img = rgb2gray(img)
But now I'm getting this error:
Traceback (most recent call last):
File "C:\work_asaaki\code\generateProposals.py", line 48, in <module>
img = rgb2gray(img)
File "C:\Anaconda\lib\site-packages\skimage\color\colorconv.py", line 635, in rgb2gray
if rgb.ndim == 2:
AttributeError: 'NoneType' object has no attribute 'ndim'
What could the problem possibly be? How can I fix it to be able to convert the
image to grayscale?
Answer: Given the error message, your problem is that the `imread` call fails, which
means `img` is `None`.
The usual reason for `imread` failing like this is that the path to the file is
wrong.
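A quick guard makes the failure show up at the `imread` call instead of later inside skimage (reusing `myfile` from the question):
    import cv2
    from skimage.color import rgb2gray

    img = cv2.imread(myfile)
    if img is None:  # cv2.imread returns None instead of raising when the path is bad
        raise IOError("could not read image: %r" % myfile)
    img = rgb2gray(img)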
|
how to use python to parse ossec rules xml
Question: I have a ossec rules XML file, with content like this:
<var name="SENSITIVE_DIRECTORY">^/root|^/proc|^/etc|^/$</var>
<var name="BAD_WORDS_OPS">failure|error|bad |fatal|failed|illegal |denied|refused|unauthorized</var>
<group name="local,ops,syslog,sudo,pam,">
<rule id="101000" level="4">
<if_sid>5715</if_sid>
<srcip>!10.83.60.54</srcip>
<srcip>!10.83.60.55</srcip>
<description>Except IPs approved.</description>
</rule>
</group>
I am trying to use python to parse this xml, but I get this error:
xml.etree.ElementTree.ParseError: junk after document element: line 10, column 0
Here is the code I am using:
from xml.etree import ElementTree

def read_xml(text):
    root = ElementTree.fromstring(text)
    lst_node = root.getiterator("person")
    print lst_node

if __name__ == '__main__':
    read_xml(open("test.xml").read())
Answer: Your file has several top-level elements (the `<var>` elements and the `<group>`), so it is not a well-formed XML document; that is what the "junk after document element" error means. Wrap the content in a single root tag before parsing, like this:
import xml.etree.ElementTree as ET

def read_xml(text):
    root = ET.fromstring('<root>' + text + '</root>')  # just wrap it with a root tag
    for el in root.iter('srcip'):  # I changed the tag to srcip since your sample hasn't got "person"
        print el.text

if __name__ == '__main__':
    read_xml(open("yourfile.xml", "r").read())
!10.83.60.54
!10.83.60.55
|
How to define General deterministic function in PyMC
Question: In my model, I need to obtain the value of my deterministic variable from a
set of parent variables using a complicated python function.
Is it possible to do that?
Following is a pyMC3 code which shows what I am trying to do in a simplified
case.
import numpy as np
import pymc as pm

# Predefine values on two parameter Grid (x,w) for a set of i values (1,2,3)
idata = np.array([1,2,3])
size = 20
gridlength = size*size
Grid = np.empty((gridlength, 2+len(idata)))
for x in range(size):
    for w in range(size):
        # A silly version of my real model evaluated on grid.
        Grid[x*size+w, :] = np.array([x, w] + [(x**i + w**i) for i in idata])

# A function to find the nearest value in Grid and return its product with third variable z
def FindFromGrid(x, w, z):
    return Grid[int(x)*size+int(w), 2:] * z

# Generate fake Y data with error
yerror = np.random.normal(loc=0.0, scale=9.0, size=len(idata))
ydata = Grid[16*size+12, 2:]*3.6 + yerror  # ie. True x=16, w=12 and z=3.6

with pm.Model() as model:
    # Priors
    x = pm.Uniform('x', lower=0, upper=size)
    w = pm.Uniform('w', lower=0, upper=size)
    z = pm.Uniform('z', lower=-5, upper=10)

    # Expected value
    y_hat = pm.Deterministic('y_hat', FindFromGrid(x, w, z))

    # Data likelihood
    ysigmas = np.ones(len(idata))*9.0
    y_like = pm.Normal('y_like', mu=y_hat, sd=ysigmas, observed=ydata)

    # Inference...
    start = pm.find_MAP()        # Find starting value by optimization
    step = pm.NUTS(state=start)  # Instantiate MCMC sampling algorithm
    trace = pm.sample(1000, step, start=start, progressbar=False)  # draw 1000 posterior samples using NUTS sampling

print('The trace plot')
fig = pm.traceplot(trace, lines={'x': 16, 'w': 12, 'z': 3.6})
fig.show()
When I run this code, I get error at the y_hat stage, because the `int()`
function inside the `FindFromGrid(x,w,z)` function needs integer not FreeRV.
Finding `y_hat` from a pre calculated grid is important because my real model
for y_hat does not have an analytical form to express.
I have earlier tried to use OpenBUGS, but I found out
[here](http://stats.stackexchange.com/questions/119883/how-to-define-function-
in-openbugs) it is not possible to do this in OpenBUGS. Is it possible in PyMC
?
# Update
Based on an example in pyMC github page, I found I need to add the following
decorator to my `FindFromGrid(x,w,z)` function.
@pm.theano.compile.ops.as_op(itypes=[t.dscalar, t.dscalar, t.dscalar],otypes=[t.dvector])
This seems to solve the above mentioned issue. But I cannot use NUTS sampler
anymore since it needs gradient.
Metropolis seems to be not converging.
Which step method should I use in a scenario like this?
Answer: You found the correct solution with `as_op`.
Regarding the convergence: Are you using `pm.Metropolis()` instead of
`pm.NUTS()` by any chance? One reason this might not converge is that
`Metropolis()` by default samples in the joint space while often Gibbs within
Metropolis is more effective (and this was the default in pymc2). Having said
that, I just merged this: <https://github.com/pymc-devs/pymc/pull/587> which
changes the default behavior of the `Metropolis` and `Slice` sampler to be
non-blocked by default (so within Gibbs). Other samplers like `NUTS` that are
primarily designed to sample the joint space still default to blocked. You can
always explicitly set this with the kwarg `blocked=True`.
Anyway, update pymc with the most recent master and see if convergence
improves. If not, try the `Slice` sampler.
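A hedged sketch of those suggestions, reusing the model from the question (exact keyword names may differ slightly between pymc3 versions):
    with model:
        step = pm.Slice([x, w, z])   # gradient-free, so it works with the as_op deterministic
        # or, on a recent master: step = pm.Metropolis(blocked=False)  # Gibbs-style, one variable at a time
        trace = pm.sample(2000, step, start=start)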
|
Sympy issue with solving equation from stated conditions
Question: Hello I'm quite new to python and I'm trying to solve a set of equations with
an unknown variable, the equations are in the code below
from __future__ import division
import sympy as sy
import math
#Global Variables indepenedant of calculations
Tc = 9.25
Ic = 340*10-6
Tb = 6.2
I = 2 *Ic
alpha = 2*10**-2
thickness = 100*10**-10
L0 = 2.44*10**-8
electrical_resist = 0.5*10**-2
sigma = 1 / electrical_resist
k = sigma*L0*Tc
A = 1
B = 1
#set of problems to solve
r0 = sy.symbols('r0')
LHS=(I/(alpha*thickness))**2 * electrical_resist
RHS = -k*((r0**2)*((A*math.e**Tc)+(B*math.e**0)))+(alpha/thickness) * (r0**2) * (Tc - Tb)
print sy.nsolve(LHS==RHS, 0.002)
But I keep getting an error
2444 if isinstance(f, Equality):
2445 f = f.lhs - f.rhs
-> 2446 f = f.evalf()
2447 syms = f.free_symbols
2448 if fargs is None:
AttributeError: 'bool' object has no attribute 'evalf'
Any help is greatly appreciated.
Answer: `LHS==RHS` creates a boolean, True if LHS is exactly equal to RHS and False
otherwise. `nsolve` and other solve functions in SymPy assume expressions are
equal to zero, so use `nsolve(LHS - RHS, 0.002)`. See also
<http://docs.sympy.org/latest/tutorial/gotchas.html#equals-signs>.
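A tiny illustration of the difference (the actual equation doesn't matter here):
    from sympy import symbols, nsolve, Eq

    x = symbols('x')
    print nsolve(x**2 - 2, 1.0)     # an expression is assumed to equal zero
    print nsolve(Eq(x**2, 2), 1.0)  # or a symbolic Eq, but never the Python == operator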
|
ffmpeg in the use of libardrone by python
Question: Recently I am trying to do a vision-based control using AR.drone 2.0. I meet a
problem in the first step that is to import video seen from drone to my PC. I
searched online and there is a library called
[libardrone](https://github.com/braincorp/robustus-test-repo). I tried to used
it but when I do the first step that is to intialize. I wrote
drone = libardrone.libardrone.ARDrone(True)
The problem rises in the installation of ffmpeg. I actually installed and set
ffmpeg\bin in my path, but I don't know why it keeps jumping out this error
The error turns out to be
Traceback (most recent call last):
File "C:\Python27\dronetest.py", line 7, in <module>
drone=libardrone.ARDrone(is_ar_drone_2=True,hd=True)
File "C:\Python27\lib\site-packages\libardrone\libardrone.py", line 126, in __init__
self.network_process = arnetwork.ARDroneNetworkProcess(com_pipe_other, is_ar_drone_2, self)
File "C:\Python27\lib\site-packages\libardrone\arnetwork.py", line 45, in __init__
self.ar2video = ar2video.ARVideo2(self._drone, libardrone.DEBUG)
File "C:\Python27\lib\site-packages\libardrone\ar2video.py", line 37, in __init__
self.h264 = h264decoder.H264Decoder(self, drone.image_shape)
File "C:\Python27\lib\site-packages\libardrone\h264decoder.py", line 82, in __init__
raise Exception("You need to install ffmpeg to be able to run ardrone")
Exception: You need to install ffmpeg to be able to run ardrone
The related code in h264decoder.py is
if (H264Decoder.which('ffmpeg') is None):
    raise Exception("You need to install ffmpeg to be able to run ardrone")

....

def which(program):
    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            path = path.strip('"')
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None
Answer: The easiest way to debug this is to create a script file with that code (that
you've found in the "which" function) and run it against the path to your
ffmpeg executable. You may have some user rights issues or maybe the path is
not correctly formatted...
Clearly the executable file cannot be found or accessed.
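A standalone sketch of that idea, exercising the same lookup logic outside the library (the Windows path below is hypothetical):
    import os

    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    print is_exe(r"C:\ffmpeg\bin\ffmpeg.exe")   # does the binary itself check out?

    for path in os.environ["PATH"].split(os.pathsep):
        candidate = os.path.join(path.strip('"'), "ffmpeg")
        # note: which('ffmpeg') looks for a file named exactly 'ffmpeg', not 'ffmpeg.exe'
        print candidate, os.path.isfile(candidate)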
|
python filter rewrite filter
Question: Working code
def not_double_cap_word(word):
    cap_count = 0
    for ch in word:
        if str.isupper(ch):
            cap_count += 1
    not_double_cap = (cap_count < 2)
    return not_double_cap

...

words_no_double_caps = list(filter(not_double_cap_word, words_alnum))
What would be a different solution, say maybe using lambdas or other patterns
in Python? The above creates a new list with every word containing two or more
capital letters removed, e.g. `one two TWo` => `one, two`.
Answer: You can definitely simplify that `not_double_cap_word` function, but it's
still going to be the same basic solution.
First, you can use `ch.isupper()` instead of `str.isupper(ch)`. Calling a
method the normal way is always easier than calling it as an unbound method
and passing the `self` explicitly.
Next, we can replace the explicit for loop with `sum` over a generator
expression:
cap_count = sum(ch.isupper() for ch in word)
And we don't really need to define `not_double_cap`, `cap_count < 2` seems
simple enough to return directly. So:
def not_double_cap_word(word):
cap_count = sum(ch.isupper() for ch in word)
return cap_count < 2
But really, this whole thing is probably simple enough to inline directly into
the main expression. While you _could_ do that by defining a function with
`lambda`, there's no reason to. In general, `map` and `filter` are good when
what you want to do to each thing is call a function that you already have
lying around; comprehensions are better when what you want to do is an
expression that you'd have to wrap in a function (`lambda` or otherwise) to
pass to `map` or `filter`. Compare:
words_no_double_caps = [word for word in words_alnum
                        if sum(ch.isupper() for ch in word) < 2]

words_no_double_caps = list(filter((lambda word: sum(map(
    lambda ch: ch.isupper(), word)) < 2), words_alnum))
(I think I got the parens on the second version right. If not… well, if I
wanted to program in Lisp, I would.:)
Either way, it's performing pretty much the exact same steps as your original
code, but it's more concise. Is it more readable? That's for you to decide.
But that's the most important reason to choose one or the other, or something
intermediate between the two.
Well, that, and whether or not you need to reuse this logic; if you do, it
should _definitely_ be defined with a `def` statement and given a nice name.
|
python 3 iterator not executing next
Question: Why does an iterator in python 3 support `__next__` and not `next`? Is it not
supposed to be called directly but only while doing :
for i in iterator:
dosomething(i)
I have a use case where I would like to call `next`. For example using
`itertools.count` for a stream of increasing integers. I would like to do :
from itertools import count
cnt = count(0)
one(cnt.next())
two(cnt.next())
Right now I can do the above by using `__next__` which suggests to me that it
is supposed to not be called externally?
Answer: In Python 3, use the global function
[`next()`](https://docs.python.org/2/library/functions.html#next):
one(next(cnt))
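A minimal example of the pattern the answer suggests:
    from itertools import count

    cnt = count(0)
    print(next(cnt))  # 0
    print(next(cnt))  # 1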
|
Python program, user friendly query
Question: I saw your bit explaining how to import these things and generate a random
number using them but can you solve this problem. This is (the starting stages
of) my program:
import random
from operator import add, sub, mul

for x in range(10):
    ops = (add, sub, mul)
    op = random.choice(ops)
    num1, num2 = random.randint(1,10), random.randint(1,10)
    int(input("What is %s %s %s?\n" % (num1, op, num2)))
    ans = op(num1, num2)
However when I execute this code this is printed: What is 8 1? and I was
wondering how I would efficiently print this in a user friendly way such as:
"What is 8 add 1?"
Thank you if you solve this!
Answer: Maybe use a dictionary instead of a tuple.
import random
from operator import add, sub, mul

for x in range(10):
    ops = {'+': add, '-': sub, '*': mul}
    op = random.choice(list(ops.keys()))  # list() so this also works on Python 3
    num1, num2 = random.randint(1,10), random.randint(1,10)
    int(input("What is %s %s %s?\n" % (num1, op, num2)))
    ans = ops[op](num1, num2)
|
Running PHP Selenium Webdriver tests programmatically, without phpunit command
Question: My question is quite simple. I'm coming from Python world, where it's very
simple to execute Selenium testing code within a program, just writing
something like:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.python.org")
driver.close()
When using PHP, things are getting harder: I wrote something like that
require 'vendor/autoload.php';

class MyTest extends PHPUnit_Extensions_Selenium2TestCase {

    public function setUp() {
        $this->setBrowser('Firefox');
        $this->setBrowserUrl('http://www.python.org');
    }

    public function testToto() {
        $this->url('/');
    }
}
...which kinda works when I execute `phpunit MyTest.php`.
But what I would like to do is to **instanciate my test class in PHP code, and
execute my Selenium commands "programmatically"** , like:
$myTest = new MyTest();
$myTest->testToto();
And here it sucks :(
> PHP Fatal error: Uncaught exception
> 'PHPUnit_Extensions_Selenium2TestCase_Exception' with message 'There is
> currently no active session to execute the 'url' command.
So **is there a way to execute Selenium code directly from PHP script without
executing command line things with phpunit**?
**Edit** : What am I trying to achieve? My project is to build a testing
application which must be able to launch tests within a UI built by an end user
with a user-friendly drag-and-drop builder (the user chooses which test
he wants to execute first, then another, and so on). So I would like to avoid
executing phpunit commands with an ugly PHP exec: to me, the best option is to
launch test case methods programmatically!
Answer: Well, a very nice question first of all. The short answer is yes you can, but
it's too much pain. PHPUnit is just a modestly complicated, huge, scary and
amazing library with a gazillion extensions. In a nutshell it reads the
configuration, finds the tests, and runs them.
You can put a break point inside your test and trace to the top what it does,
what parameters it accepts and literally simulate the whole thing. That would
be the "proper" and crazy way, and the most complex too.
The simpler way would be by finding out what the test case class needs in
order to run (break point & trace are always your best friends), in this
particular case it turned out to be just this:
$myTest = new MyTest();
$myTest->setUp(); // Your setup will always be called prior the test.
$myTest->prepareSession(); // Specific to Selenium test case, called from `runTest` method.
$myTest->testToto();
But even in `PHPUnit_Extensions_Selenium2TestCase` there is a lot of stuff
that is not publicly accessible, so getting it to work feels like a stroke of luck. But you
get the idea. Besides, simply calling a method of a test case class will
result in two things: nothing happens, or you get an exception. All the fancy
result tracing happens higher in the hierarchy.
I can only guess what you are trying to achieve, but probably if you ask the
question about the actual problem we'd be able to help more.
**Edit**
`exec` might seem ugly indeed, but it's there for a very good reason: process
isolation. There are situations where one piece of the code being tested
changes the environment and conflicts with another piece of code (session
state, already-sent headers, and so on). When you come across one of those,
you will be glad `exec` exists.
In your case, the easiest would be to launch the PHPUnit from the command
line, but you might need to write a custom formatter to get the data in the
necessary format out of it, unless you are happy with the existing ones.
Another option would be to use the the existing client for the WebDriver /
Selenium and simply send commands directly to the Selenium server, I assume
that's what you really need? You can find out the piece of code responsible
for that in the PHPUnit extension or there's another cool project called Behat
(and Mink). I believe their client is in the
[Behat/MinkSelenium2Driver](https://github.com/Behat/MinkSelenium2Driver)
repository. And if you don't like those, I'm sure there are other php wrappers
you can find on the github, or can create your own using the existing ones as
an example.
PS: Share a link to the project when it's up and running if its public.
|
re: matching 'a href' tag
Question: I have this simple program that takes a file from stdin and should output only the
host of each link (for example, just google.com). But when I run
`cat sample.html | python program.py`, it currently outputs `href="google.com`.
I want it to drop the `href="` part and have it only output google.com, but
when I tried removing it from my re, it became even worse. Thoughts?
import re
import sys

s = sys.stdin.read()
lines = s.split('\n')
match = re.search(r'href=[\'"]?([^\'" >]+)', s)  # here
if match:
    print match.group(0)
Thank you.
Answer: That is because you reference group(0) when it should be group(1) which holds
the actual match result.
if match:
    print match.group(1)
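To make the difference concrete (same regex as above, on a tiny sample string):
    import re

    s = '<a href="google.com">link</a>'
    match = re.search(r'href=[\'"]?([^\'" >]+)', s)
    print match.group(0)  # href="google.com   (the whole match, including the attribute name)
    print match.group(1)  # google.com         (just the captured group)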
|
How to exit from Python using a Tkinter Button?
Question: To start off, let me show you my code:
import Tkinter
import tkMessageBox
import time
import sys

def endProgam():
    raise SystemExit
    sys.exit()

top = Tkinter.Tk()
B = Tkinter.Button(top, text="Hello", command=endProgam)
B.pack()
top.mainloop()
As you can see under `endProgram()` I have tried 2 types of exit commands,
both do not work. I never used them together, I was just trying to show what
methods I have used so far. These methods were methods I found here and on
other websites, but if I try either, I get this error:
Traceback (most recent call last):
File "C:\Users\Sa'id\Documents\Learning Programming\Python\Tkinter Tuts.py", line 22, in <module>
top.mainloop()
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1070, in mainloop
self.tk.mainloop(n)
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1488, in __call__
raise SystemExit, msg
SystemExit
I can't seem to find a fix to this and I was hoping maybe someone here could
help me. If you need any more details I will gladly provide what you need.
Answer: There are two functions you should use to _quit_ a window:
* `destroy()`
* `quit()`
Here you have the code using one of the two:
import Tkinter
import tkMessageBox
import time
import sys

def endProgam():
    # top.quit()
    top.destroy()

top = Tkinter.Tk()
B = Tkinter.Button(top, text="Hello", command=endProgam)
B.pack()
top.mainloop()
|
Python - "random" error
Question: I am writing code in Python 3.3.3 that builds a list of 32 teams from, say, 12
entered teams, making sure that the team repeated the most appears only once more
than the team repeated the least. I have done this:
import random

teams = []
randoms = []
team = 0
amount = 1

while team != "done":
    team = input("Please enter team name " + str(amount) + " or enter 'done' if you have finished.\n")
    if team != "done":
        teams.append(team)
        randoms.append(team)
        amount = amount + 1

length = len(teams)
times = 0
while len(teams) != 32:
    while len(teams) <= 32-length:
        for x in range(0, length+1):
            teamname = teams[x]
            teams.append(teamname)
    else:
        choice = random.choice(randoms)
        teams.append(choice)
        randoms.remove(choice)

teams.sort()
for x in range(0, len(teams)):
    print(teams[x])
I run the program and enter 12 teams then done. It comes up with the following
message:
Traceback (most recent call last):
File "C:\Python33\lib\random.py", line 248, in choice
i = self._randbelow(len(seq))
File "C:\Python33\lib\random.py", line 224, in _randbelow
r = getrandbits(k) # 0 <= r < 2**k
ValueError: number of bits must be greater than zero
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "[File location]", line 30, in <module>
choice = random.choice(randoms)
File "C:\Python33\lib\random.", line 250, in choice
raise IndexError('Cannot choose from an empty sequence')
IndexError: Cannot choose from an empty sequence
What does this mean and how do I fix this?
Answer: That error occurs when the array in the random.choice() function is empty. I
suggest you use a Python Debugger to figure out when the array is completely
empty. It may help you to add a line of code to print len(teams) every loop to
see where your coding is messing up. It may be as simple as > versus >=.
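For example, a guard like this inside the loop (reusing the names from the question) makes the empty-sequence case visible instead of raising:
    if not randoms:
        print("randoms is empty; teams currently has", len(teams), "entries")
    else:
        choice = random.choice(randoms)
        teams.append(choice)
        randoms.remove(choice)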
Hope this helps!
|
Creating and maintaining MongoDB replica sets with pymongo
Question: I am trying to replicate (for a teaching activity) the [Docker and MongoDB
Sharded Cluster](https://sebastianvoss.com/docker-mongodb-sharded-
cluster.html) recipe in an IPython notebook using _pymongo_ to set up several
mongo replica sets.
The recipe suggests creating a replica set from the mongo prompt by connecting
to one member of the proposed replica set as follows:
mongo --port <port>
rs.initiate()
rs.add("<IP_of_rs1_srv2>:27017")
rs.add("<IP_of_rs1_srv3>:27017")
rs.status()
The [_pymongo_
documentation](http://api.mongodb.org/python/current/examples/high_availability.html)
suggests the following initialisation route:
$ hostname
morton.local
$ mongod --replSet foo/morton.local:27018,morton.local:27019 --rest
$ mongod --port 27018 --dbpath /data/db1 --replSet foo/morton.local:27017 --rest
$ mongod --port 27019 --dbpath /data/db2 --replSet foo/morton.local:27017 --rest
from pymongo import MongoClient, ReadPreference
c = MongoClient("morton.local:27017", read_preference=ReadPreference.SECONDARY)
c.admin.command("replSetInitiate")
This requires knowing the port ids and using them in the initialisation of the
database servers via the `--replSet` flag, rather than just declaring a simple
single label for the replica set and then applying it to each member. How can
I script the initialisation in _pymongo_ to follow the original recipe?
The original also modifies the configuration by changing the hostname to
include the container IP address:
cfg = rs.conf()
cfg.members[0].host = "<IP_of_rs1_srv1>:27017"
rs.reconfig(cfg)
rs.status()
Again, _pymongo_ doesn't support the `rs` helpers, so how can I update the
replica set configuration?
Answer: One solution to this is to create a configuration for the replica set based on
the attributes of the mongodb containers and then use that to initiate the
cluster set.
So for example, a configuration might take the form:
rsc = {'_id': 'rs4',
       'members': [{'_id': 0, 'host': '172.17.0.2:27017'},
                   {'_id': 1, 'host': '172.17.0.3:27017'},
                   {'_id': 2, 'host': '172.17.0.4:27017'}]}
and then be called with the `replSetInitiate` admin command.
import docker

# Connect to docker
c = docker.Client(base_url='unix://var/run/docker.sock',
                  version='1.10',
                  timeout=10)
Create a set of database nodes to run as the replica set:
def createReplicaSetNode(c, stub, num=0):
    ''' Create and run a specified number of mongo database servers as a replica set '''
    name = '{stub}_srv{num}'.format(stub=stub, num=num)
    command = '--replSet {stub} --noprealloc --smallfiles'.format(stub=stub)
    c.create_container('dev24/mongodb', name=name, command=command)
    c.start(name, publish_all_ports=True)
    return name

def createReplicaSetNodes(c, stub, numNodes):
    ''' Create and run a specified number of mongo database servers as a replica set '''
    names = []
    for i in range(0, numNodes):
        name = createReplicaSetNode(c, stub, i)
        names.append(name)
    return names

def getContainIPaddress(c, container):
    ''' Get the IP address of the container '''
    cConfig = c.inspect_container(container)
    return cConfig['NetworkSettings']['IPAddress']

def rs_config(c, rsid, num=3):
    ''' Create a replica set of nodes and then define a configuration file for that replica set '''
    createReplicaSetNodes(c, rsid, num)
    _rs_config = {"_id": rsid, 'members': []}
    # This is scrappy - should really return something better from the creation
    for i in range(0, num):
        name = '{stub}_srv{num}'.format(stub=rsid, num=i)
        # c.inspect_container(name)
        # get IP and port
        _rs_config['members'].append({"_id": i, "host": '{0}:{1}'.format(getContainIPaddress(c, name), 27017)})
    return _rs_config
We can then start the nodes and create a config file for the replica set:
rsc = rs_config(c, 'rs4')
rsc
'''
{'_id': 'rs4',
 'members': [{'_id': 0, 'host': '172.17.0.2:27017'},
             {'_id': 1, 'host': '172.17.0.3:27017'},
             {'_id': 2, 'host': '172.17.0.4:27017'}]}
'''
This configuration data is used to initialise the replica set:
from pymongo import MongoClient

# Find the local port bound for 27017/tcp for each server in the replica set
def get27017tcp_port(c, container):
    cConfig = c.inspect_container(container)
    return int(cConfig['NetworkSettings']['Ports']['27017/tcp'][0]['HostPort'])

# We'll use the 0th server in the set as the node
mc = MongoClient('localhost', get27017tcp_port(c, 'rs4_srv0'))

# In the mongo console, we'd typically use rs.initiate() to initialise the replica set.
# Here, use the replSetInitiate admin command, applying it with the desired configuration.
mc.admin.command("replSetInitiate", rsc)
|
Python: Join lists of lists by the first element
Question: I'm trying to combine a list of lists/tuples by the first element in the list
- something like this:
Input:
[(1, [32, 432, 54]), (1, [43, 54, 65]), (2, [2, 43, 54]), (2, [1, 5, 6])]
Output:
[(1, [32, 432, 54], [43, 54, 65]), (2, [2, 43, 54], [1, 5, 6])]
The lists are actually ordered by the first element like in my example input,
and it doesn't matter if at the end the tuples are lists.
Is there an efficient/pythonic way to do this?
Answer: Using
[`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby)
and [list
comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-
comprehensions):
>>> lst = [(1, [32, 432, 54]), (1, [43, 54, 65]), (2, [2, 43, 54]), (2, [1, 5, 6])]
>>> import itertools
>>> [(key,) + tuple(v for k, v in grp)
... for key, grp in itertools.groupby(lst, key=lambda x: x[0])]
[(1, [32, 432, 54], [43, 54, 65]), (2, [2, 43, 54], [1, 5, 6])]
|
New Django App MEDIA_URL pathing incorrect
Question: So, I've created a new app in Django via `python manage.py startapp foo`
My new app will not load any files in the `/site_media/` directory, via the
`{{ MEDIA_URL }}`. They are attempting to path from the App's directory, not
the `/site_media/` directory.
* * *
**Example:** Instead of loading from
`http://sitename/site_media/bootstrap/bootstrap.min.css`
it tries to load from `http://sitename/foo/bootstrap/bootstrap.min.css`
* * *
Here is a snippet from the `settings.py` which defines the `MEDIA_URL`
MEDIA_URL = '/site_media/'
I can force the files to load correctly in the app by replacing `{{ MEDIA_URL
}}` with `/site_media/` in the `base.html` and my `show_foo.html`, but this
then breaks the pathing on the rest of the site.
I'm not sure what else anyone would like to see to try and diagnose the issue,
but I'm stumped!
* * *
Just in case: from `urls.py` in my app directory
#!/usr/bin/python
# -*- coding: UTF8 -*-
from django.conf.urls.defaults import *

urlpatterns = patterns('foo_web.foo_track.views',
    url('^$', 'view_foo_track', name='foo_home'),
    url('^newentry/(?P<entry_id>\d+)$', 'write_form_data', name='foo_track_new'),
)
* * *
`settings.py` edit*removed comments for readability
import os
current_dir = os.path.abspath(os.path.dirname(__file__))
from os import sys, path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
import run_server
cherry_config,django_config = run_server.get_web_server_parameters()
DEBUG = django_config['DEBUG']
TEMPLATE_DEBUG = django_config['TEMPLATE_DEBUG']
CACHE_MODE = django_config['CACHE_MODE']
DB = django_config['DB']
HOST = django_config['HOST']
LOGIN_URL = '/'
LOGOUT_URL = '/'
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '%s' % DB,
        'USER': 'postgres',
        'PASSWORD': '*****',
        'HOST': '%s' % HOST,
        'PORT': '5432',
    }
}
CACHES = {
    'default': {
        'BACKEND': CACHE_MODE,
        'LOCATION': 'my_cache_table',
        'TIMEOUT': 1800,
        'OPTIONS': {
            'MAX_ENTRIES': 10000
        }
    }
}
ADMINS = (
    ('**', '**'),
)
MANAGERS = ADMINS
#~ EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
DEFAULT_FROM_EMAIL = '**'
SERVER_EMAIL = '**'
EMAIL_USE_TLS = True
EMAIL_HOST = "**"
EMAIL_HOST_USER = "**"
EMAIL_HOST_PASSWORD = "**"
TIME_ZONE = 'America/Chicago'
LANGUAGE_CODE = 'en-us'
#~ gettext = lambda s: s
#~ LANGUAGES = (
#~ ('de', gettext('German')),
#~ ('en', gettext('English')),
#~ )
SITE_ID = 1
USE_I18N = True
USE_L10N = True
USE_TZ = True
MEDIA_ROOT = current_dir + '/media/'
MEDIA_URL = '/site_media/'
STATIC_ROOT = current_dir + '/media/static/'
STATIC_URL = '/site_static/'
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
)
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
    # 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
SECRET_KEY = '***'
if DEBUG:
    TEMPLATE_LOADERS = (
        'django.template.loaders.filesystem.Loader',
        'django.template.loaders.app_directories.Loader',
        # 'django.template.loaders.eggs.Loader',
    )
else:
    TEMPLATE_LOADERS = (
        ('django.template.loaders.cached.Loader', (
            'django.template.loaders.filesystem.Loader',
            'django.template.loaders.app_directories.Loader',
        )),
        # 'django.template.loaders.eggs.Loader',
    )
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'sensei_web.middleware.FilterPersistMiddleware',
    # Uncomment the next line for simple clickjacking protection:
    # 'django.middleware.clickjacking.XFrameOptionsMiddleware',
    #~ 'sensei_web.middleware.ProfileMiddleware'
    #~ 'django.middleware.cache.UpdateCacheMiddleware',
    #~ 'django.middleware.common.CommonMiddleware',
    #~ 'django.middleware.cache.FetchFromCacheMiddleware',
)
ROOT_URLCONF = 'sensei_web.urls'
WSGI_APPLICATION = 'sensei_web.wsgi.application'
TEMPLATE_DIRS = (
    current_dir + '/templates',
)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # Uncomment the next line to enable the admin:
    'django.contrib.admin',
    # Uncomment the next line to enable admin documentation:
    # 'django.contrib.admindocs',
    'foo.foo',
    'foo.foo',
    #~ 'foo.foo',
    'foo.foo',
    'foo.foo',
    'foo',
    'foo.foo',
    'foo.foo_track',
)
ACCOUNT_ACTIVATION_DAYS = 7
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_false': {
            '()': 'django.utils.log.RequireDebugFalse'
        }
    },
    'handlers': {
        'mail_admins': {
            'level': 'ERROR',
            'filters': ['require_debug_false'],
            'class': 'django.utils.log.AdminEmailHandler'
        }
    },
    'loggers': {
        'django.request': {
            'handlers': ['mail_admins'],
            'level': 'ERROR',
            'propagate': True,
        },
    }
}
Answer: Thanks everyone for the help! It turns out this is just a mistake I made :(
Very interesting consequences though.
I noticed something funny in the logs about `A {% csrf_token %} was used in a
template, but the context did not provide the value. This is usually caused by
not using RequestContext.`
Lo and behold, in my `views.py` this line was incorrect: `return
render_to_response('foo_track/foo_track_show.html',{'access':access})`
it should have had the `RequestContext(request)` as well like this: `return
render_to_response('foo_track/foo_track_show.html',{'access':access},RequestContext(request))`
And now everything works. Sheesh!
|
Why does padding an FFT in NumPy make it run much slower?
Question: I had written a script using NumPy's `fft` function, where I was padding my
input array to the nearest power of 2 to get a faster FFT.
After profiling the code, I found that the FFT call was taking the longest
time, so I fiddled around with the parameters and found that if I _didn't_ pad
the input array, the FFT ran several times faster.
Here's a minimal example to illustrate what I'm talking about (I ran this in
IPython and used the `%timeit` magic to time the execution).
x = np.arange(-4.*np.pi, 4.*np.pi, 1000)
dat1 = np.sin(x)
The timing results:
%timeit np.fft.fft(dat1)
100000 loops, best of 3: 12.3 µs per loop
%timeit np.fft.fft(dat1, n=1024)
10000 loops, best of 3: 61.5 µs per loop
Padding the array to a power of 2 leads to a very drastic slowdown.
Even if I create an array with a prime number of elements (hence the
theoretically slowest FFT)
x2 = np.arange(-4.*np.pi, 4.*np.pi, 1009)
dat2 = np.sin(x2)
The time it takes to run still doesn't change so drastically!
%timeit np.fft.fft(dat2)
100000 loops, best of 3: 12.2 µs per loop
I would have thought that padding the array will be a one time operation, and
then calculating the FFT should be quicker. Am I missing anything?
**EDIT:** I was supposed to use `np.linspace` rather than `np.arange`. Below
are the timing results using `linspace`
In [2]: import numpy as np
In [3]: x = np.linspace(-4*np.pi, 4*np.pi, 1000)
In [4]: x2 = np.linspace(-4*np.pi, 4*np.pi, 1024)
In [5]: dat1 = np.sin(x)
In [6]: dat2 = np.sin(x2)
In [7]: %timeit np.fft.fft(dat1)
10000 loops, best of 3: 55.1 µs per loop
In [8]: %timeit np.fft.fft(dat2)
10000 loops, best of 3: 49.4 µs per loop
In [9]: %timeit np.fft.fft(dat1, n=1024)
10000 loops, best of 3: 64.9 µs per loop
Padding still causes a slowdown. Could this be a local issue? i.e., due to
some quirk in my NumPy setup it's acting this way?
Answer: FFT algorithms like NumPy's are fast for array sizes that factorize into a
product of small primes, not just powers of two. If you increase the array
size by padding the computational work increases. The speed of FFT algorithms
is also critically dependent on the cache use. If you pad to an array size
that creates less efficient cache use the efficiency slows down. The really
fast FFT algorithms, like FFTW and Intel MKL, will actually generate plans for
the array size factorization to get the most efficient computation. This
includes both heuristics and actual measurements. So no, padding to the
nearest power of two is only beneficial in introductory textbooks and not
necessarily in practice. As a rule of thumb you usually benefit from padding
if the array size factorizes to one or more very large prime.
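To see the factorization effect in isolation, compare two sizes of the same magnitude instead of an unpadded 1000 against a padded 1024 (illustrative only; the gap widens as the array size grows):
    import numpy as np

    a = np.random.rand(1000)   # 1000 = 2**3 * 5**3, all small prime factors
    b = np.random.rand(1009)   # 1009 is prime
    # In IPython:
    # %timeit np.fft.fft(a)
    # %timeit np.fft.fft(b)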
|
Plot linear model in 3d with Matplotlib
Question: I'm trying to create a 3d plot of a linear model fit for a data set. I was
able to do this relatively easily in R, but I'm really struggling to do the
same in Python. Here is what I've done in R:

Here's what I've done in Python:
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.formula.api as sm
csv = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
model = sm.ols(formula='Sales ~ TV + Radio', data = csv)
fit = model.fit()
fit.summary()
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(csv['TV'], csv['Radio'], csv['Sales'], c='r', marker='o')
xx, yy = np.meshgrid(csv['TV'], csv['Radio'])
# Not what I expected :(
# ax.plot_surface(xx, yy, fit.fittedvalues)
ax.set_xlabel('TV')
ax.set_ylabel('Radio')
ax.set_zlabel('Sales')
plt.show()
What am I doing wrong and what should I do instead?
Thank you.
Answer: You were correct in assuming that plot_surface wants a meshgrid of coordinates
to work with, but predict wants a data structure like the one you fitted with
(the "exog").
exog = pd.core.frame.DataFrame({'TV':xx.ravel(),'Radio':yy.ravel()})
out = fit.predict(exog=exog)
ax.plot_surface(xx, yy, out.reshape(xx.shape), color='None')
|
Mocking Directory Structure in Python
Question: I have some code below that I'm using to take an input of files, open and
process, and then output some data. I've gotten the functionality working and
I'm unit testing it now, below is an example of the code.
def foo(dir):
    path_to_search = join(dir, "/baz/foo")
    if isdir(path_to_search):
        # path exists so do stuff...
        for fname in listdir(path_to_search):
            do_stuff()
    else:
        print "path doesn't exist"
I've been able to create a test where the path doesn't exist easily enough,
but as you can see above I assert that the "/baz/foo" portion of the directory
structure exists (in production the directory structure must have this file,
in some cases it won't and we won't need to process it.)
I've tried to create a temporary directory structure using TempDir and join,
but the code always kicks out saying the path doesn't exists.
Is it possible to mock the output of os.listdir such that I won't need to
create a temporary directory structure that follows the needed /baz/foo
convention?
Answer: You don't need to create a fake directory structure, all you need to do is
mock the `isdir()` and `listdir()` functions.
Using the [`unittest.mock`
library](https://docs.python.org/3/library/unittest.mock.html) (or the
external [`mock` library](http://www.voidspace.org.uk/python/mock/), which is
the exact same thing for Python versions < 3.3):
try:
    # Python >= 3.3
    from unittest import mock
except ImportError:
    # Python < 3.3
    import mock

with mock.patch('yourmodule.isdir') as mocked_isdir, \
        mock.patch('yourmodule.listdir') as mocked_listdir:
    mocked_isdir.return_value = True
    mocked_listdir.return_value = ['filename1', 'filename2']

    yourmodule.foo('/spam/eggs')

    mocked_isdir.assert_called_with('/spam/eggs/baz/foo')
    mocked_listdir.assert_called_with('/spam/eggs/baz/foo')
|
why does pymongo's find_one fail silently? (nginx/uwsgi/flask/gevent/pymongo)
Question: **Summary:** Pymongo appears to fail silently for no reason in my
flask+gevent+uwsgi+nginx app. I would love some pointers on where I should
look
I'm a newcomer to web application programming (and to python), please bear
with me. I'm porting an app from Heroku to an OpenStack provider, and am
finding that code that worked fine on the former fails intermittently and
silently on the latter. I wonder if anyone can shed some light on this.
This is the function in question:
`emergencies` is a pymongo Collection. This is correctly instantiated.
`user_id` is the User id I'm looking for. It's correct.
22 def get_emergency_by_user(user_id):
23     print "going to find emergency by user:"+user_id
24     print emergencies
25     print EmergencyDict.user_id
26     try:
27         emergency = emergencies.find_one({EmergencyDict.user_id: user_id})
28     except:
29         print 'mongo failed'
30     print 'this should appear'
31     print 'emergency - %s' % emergency
32     return emergency
Here is the output from the function (line numbers added for easy reference):
**Failure Case**
23 going to find emergency by user:UnitTestScript
24 Collection(Database(Connection('[redacted]', [redacted]), u'[redacted]'), u'emergencies')
25 userid
So I can see that line 23 through 25 work fine, and I assume that line 27 is
called. But I get **nothing** below that. Neither line 29 (the `except:` case)
nor line 30 ever run.
The strangest thing is that there are times during the day when this isn't a
problem at all, and it works perfectly. In these cases, it looks more like
this (line numbers added for easy reference):
**Success Case**
23 going to find emergency by user:UnitTestScript
24 Collection(Database(Connection('[redacted]', [redacted]), u'[redacted]'), u'emergencies')
25 userid
30 this should appear
31 {'_obj'...[a bunch of json representing the correct document]...'}
I haven't been able to isolate anything makes it work though. It's maddening,
and I don't know where to look next.
**some things I have tried**
I have read some articles that suggest that I need to have the `from gevent
import monkey; monkey.patch_all()` line in my imports; I have done this.
I have also read that you can't use uwsgi+gevent with multiple threads, so my
uwsgi is configured with 1 thread.
**tl;dr** Pymongo appears to fail silently for no reason in my
flask+gevent+uwsgi+nginx app. I would love some pointers on where I should
look
Answer: I just remembered how we solved this. The version of pymongo in the
requirements.txt was old. I updated it to the newest version, and it was fine
after that.
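If the symptom ever comes back after a deploy, it is worth confirming which driver version the running app process actually imports, for example:
    import pymongo
    print pymongo.version  # make sure the environment really picked up the newer driver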
|
Python Lirc blocks code even when blocking is off
Question: I'm trying to set up a scrolling weather feed using the **OWN (Open Weather
Network**)on my **Raspberry Pi B+** running the latest **Rasbian Wheezy**
distro and I'm having trouble adding IR support using **Python LIRC (Linux
Infrared Remote Control)**.
**_What I'm trying to do:_** There are four weather variables: **condition,
temperature, humidity, and wind speed**. They will appear on my 16x2 LCD
screen, centered with their title on the top line and value on the second.
They will stay on the screen for five seconds before being replaced with the
next. Once it reaches the end it will loop through again. After it has looped
through 180 times (roughly one hour) it will update the weather. I want to use
my IR remote's buttons 1-4 to jump to a specific tile then continue back with
it's loop.
**_What it's doing:_** When no button has been pressed, instead of skipping
the empty queue like it should with **LIRC** blocking off, it hangs on
**lirc.nextcode()** waiting for a button press, until I quit with
**KeyboardInterrupt**.
Everything worked great, until I added IR. Now it displays the first weather
variable, then when it tries to pull the next one, instead of skipping and
going to the next tile if there's no IR code in queue, **lirc.nextcode()**
halts the code until it receives an IR code, which shouldn't happen with
**LIRC** blocking turned off.
I have the latest versions of everything (**Python LIRC 1.2.1**), I know a
previous version of **Python LIRC** had a bug with the blocking parameter.
I've spent two days researching and trying every possible thing. Here is one
possible workaround that I found, but it's affected by the same problem this
one is: _"[Python LIRC blocking Signal workaround not
working](http://stackoverflow.com/questions/26435239/python-lirc-blocking-
signal-workaround-not-working)"_
I know lots of the code is improper, _i.e. global variables, stuff that needs
to be in functions, OWM updates every three hours and I'm updating every hour_, but
this is temporary to get it working. I'll be tidying it up and making it
object-oriented later. Sorry ahead of time if this makes it more difficult for
some to read.
import pyowm
from sys import exit
import time
import RPi.GPIO as GPIO
from RPLCD import CharLCD, cleared, cursor
import lirc
# initialize lirc and turn off blocking
sockid = lirc.init("weather", blocking=False)
lirc.set_blocking(False, sockid)
# initialize weather network
owm = pyowm.OWM('API #')
# initialize LCD
lcd = CharLCD(pin_rs=26, pin_rw=None, pin_e=24, pins_data=[22, 18, 16, 12],
cols=16, rows=2)
# weather data
w = None # weather object
wind = None # wind speed (m/s)
windkm = None # wind speed (km/h)
humidity = None
temper = None
COUNTER = 0 #number of cycles before update
NEXT = 1
# switches to next tile
def next_tile():
global NEXT
This is where the problem lies. **Lirc.nextcode()** should pull the next IR
code from the **LIRC** queue and add it to **codeIR** as a list, but if no
button has been pressed, and blocking is off, it should just skip over the
code. Instead it acts as though blocking is on, and hangs until a button is
pressed. and then it still won't continue my main loop. It just prints
**NEXT** and hangs until I **KeyboardInterrupt** out.
codeIR = lirc.nextcode() # pulls IR code from LIRC queue.
# checks if there's a code in codeIR and goes to that tile. If not, it
# goes to the next tile instead.
if not codeIR:
        if NEXT != 4: # if it's not the last tile, go to the next
NEXT += 1
print NEXT
return NEXT
        else: # if it's the last tile, cycle back to the first
NEXT -= 3
print NEXT
return NEXT
else:
NEXT = codeIR[0]
print NEXT
return NEXT
I've added the rest of my code, it all works fine, but I'm sure it'll help you
understand what I'm trying to accomplish.
while True:
try:
if COUNTER == 0:
COUNTER = 180
# Search for current weather in London (UK)
observation = owm.weather_at_place('City, State')
w = observation.get_weather()
# Weather details
wind = w.get_wind() # {'speed': 4.6, 'deg': 330}
            windkm = (wind['speed'] * 3600) / 1000 # convert to km/h
humidity = w.get_humidity()
# {'temp_max': 10.5, 'temp': 9.7, 'temp_min': 9.0}
temper = w.get_temperature('celsius')
else:
while NEXT == 1:
# prints condition to lcd
lcd.cursor_pos = (0, 4) #adjust cursor position
lcd.write_string('Weather') # write to lcd
lcd.cursor_pos = (1, 5) # adjust cursor position
lcd.write_string(w.get_status()) # write to lcd
time.sleep(5) # leave on lcd for 5 seconds
lcd.clear() # clear lcd
next_tile() # switches to next tile
while NEXT == 2:
# prints temp to lcd
lcd.cursor_pos = (0, 2)
lcd.write_string('Temperature')
lcd.cursor_pos = (1, 6)
lcd.write_string(str(temper['temp']))
lcd.write_string(' C')
time.sleep(5)
lcd.clear()
next_tile()
while NEXT == 3:
# prints temp to lcd
lcd.cursor_pos = (0, 4)
lcd.write_string('Humidity')
lcd.cursor_pos = (1, 6)
lcd.write_string(str(humidity))
lcd.write_string(' %')
time.sleep(5)
lcd.clear()
next_tile()
while NEXT == 4:
# prints wind speed to lcd
lcd.cursor_pos = (0, 3)
lcd.write_string('Wind Speed')
lcd.cursor_pos = (1, 6)
lcd.write_string(str(int(windkm)))
lcd.write_string('km')
time.sleep(5)
lcd.clear()
COUNTER -= 1
codeIR = lirc.nextcode()
next_tile()
# quit with ctrl+C
except(KeyboardInterrupt, SystemExit):
print 'quitting'
lcd.close(clear=True)
lirc.deinit()
exit()
When I **KeyboardInterrupt** out, **Traceback** always leads to
**lirc.nextcode()** , I'd post the error, but I changed the code a bit and now
it only traces up to the function that contains **lirc.nextcode()**.
I've spent two days trying to work this out and I'm nearly pulling out my
hair, so I'll take any solution or workaround you guys can give me. Thanks
beforehand, I really appreciate any help I can find. I found a workaround
using a **Signal Module AlarmException** but the moment I switch from
raw_input() to lirc.nextcode() it also hangs the same way (even though it puts
a timer on raw_input() no problem) and blocks the alarm from working right.
Here's the link again: _"[Python LIRC blocking Signal workaround not
working](http://stackoverflow.com/questions/26435239/python-lirc-blocking-
signal-workaround-not-working)"_
Answer: Turns out the bug is still in **1.2.1** I suppose. I switched to **Pylirc2**
and it turned off blocking no problem with **pylirc.blocking(0)**. I also had
to remove the `return`'s from my **next_tile() function**.
Here's the finished code I ended up using if anyone is interested, it sure
would've saved me a load of time:
import pyowm
from sys import exit
import time
import RPi.GPIO as GPIO, feedparser, time
from RPLCD import CharLCD, cleared, cursor
import pylirc
sockid = pylirc.init('weather')
allow = pylirc.blocking(0)
owm = pyowm.OWM('API Key')
lcd = CharLCD(pin_rs=26, pin_rw=None, pin_e=24, pins_data=[22, 18, 16, 12],
cols=16, rows=2)
class mail(object):
def __init__(self):
self.username = "email address"
self.password = "password"
self.newmail_offset = 0
self.current = 0
GPIO.setmode(GPIO.BOARD)
GPIO.setup(15, GPIO.OUT)
GPIO.setup(13, GPIO.OUT)
GPIO.setup(11, GPIO.OUT)
def buzz(self):
self.period = 1.0 / 250
self.delay = self.period / 2
self.cycles = 250
for i in range(self.cycles):
GPIO.output(11, True)
time.sleep(self.delay)
GPIO.output(11, False)
time.sleep(self.delay)
def check(self):
self.newmails = int(feedparser.parse("https://" + self.username + ":" +
self.password +"@mail.google.com/gmail/feed/atom")
["feed"]["fullcount"])
if self.newmails > self.newmail_offset:
GPIO.output(15, True)
GPIO.output(13, False)
if self.newmails > self.current:
self.buzz()
self.current += 1
else:
GPIO.output(15, False)
GPIO.output(13, True)
self.current = 0
### will be a class
class weather(object):
def __init__(self):
self.w = None
self.wind = None
self.windkm = None
self.humidity = None
self.temper = None
self.counter = 0
self.next = 1
def update(self):
if self.counter == 0:
self.counter = 180
self.observation = owm.weather_at_place('City, Country')
self.w = self.observation.get_weather()
self.wind = self.w.get_wind()
self.windkm = (self.wind['speed'] * 3600) / 1000
self.humidity = self.w.get_humidity()
self.temper = self.w.get_temperature('celsius')
else:
pass
def display_weather(self):
lcd.cursor_pos = (0, 4)
lcd.write_string('Weather')
lcd.cursor_pos = (1, 5)
lcd.write_string(self.w.get_status())
time.sleep(3)
lcd.clear()
def display_temp(self):
lcd.cursor_pos = (0, 2)
lcd.write_string('Temperature')
lcd.cursor_pos = (1, 6)
lcd.write_string(str(self.temper['temp']))
lcd.write_string(' C')
time.sleep(3)
lcd.clear()
def display_hum(self):
lcd.cursor_pos = (0, 4)
lcd.write_string('Humidity')
lcd.cursor_pos = (1, 6)
lcd.write_string(str(self.humidity))
lcd.write_string(' %')
time.sleep(3)
lcd.clear()
def display_wind(self):
lcd.cursor_pos = (0, 3)
lcd.write_string('Wind Speed')
lcd.cursor_pos = (1, 4)
lcd.write_string(str(int(self.windkm)))
lcd.write_string('km/h')
time.sleep(3)
lcd.clear()
def next_tile(self):
self.counter -= 1
self.codeIR = pylirc.nextcode()
if not self.codeIR or self.codeIR[0] == self.next:
if self.next != 4:
self.next += 1
else:
self.next -= 3
else:
self.next = int(self.codeIR[0])
email = mail()
weather = weather()
weather.update()
def up_next():
weather.update()
weather.next_tile()
while True:
try:
while weather.next == 1:
weather.display_weather()
up_next()
while weather.next == 2:
weather.display_temp()
up_next()
while weather.next == 3:
weather.display_hum()
up_next()
while weather.next == 4:
weather.display_wind()
email.check()
up_next()
except(KeyboardInterrupt, SystemExit):
print 'quitting'
lcd.close(clear=True)
exit()
|
Using zxJDBC with jython not working
Question: Since I wanted to transform data storage for my recent Minecraft Python/Jython
Bukkit plugins from flat files to a MySQL database, I started googling. I tried
sqlite3 and MySQLdb for Python but without success, so after a few hours of
searching StackOverflow I came across this question and answer, which SHOULD
solve my problem, since it's the same thing. I tried to follow the steps given in
[this answer](http://stackoverflow.com/a/10340498/2874447), but without any
success, due to this error:
[15:31:45 WARN]: org.bukkit.plugin.InvalidPluginException: Traceback (most recen
t call last):
File "<iostream>", line 10, in <module>
zxJDBC.DatabaseError: unable to instantiate datasource
[15:31:45 WARN]: at net.lahwran.bukkit.jython.PythonPluginLoader.loadPlug
in(PythonPluginLoader.java:296)
[15:31:45 WARN]: at net.lahwran.bukkit.jython.PythonPluginLoader.loadPlug
in(PythonPluginLoader.java:113)
[15:31:45 WARN]: at net.lahwran.bukkit.jython.PythonPluginLoader.loadPlug
in(PythonPluginLoader.java:83)
[15:31:45 WARN]: at org.bukkit.plugin.SimplePluginManager.loadPlugin(Simp
lePluginManager.java:305)
[15:31:45 WARN]: at com.master.bukkit.python.PythonLoader.onLoad(PythonLo
ader.java:113)
[15:31:45 WARN]: at org.bukkit.craftbukkit.v1_7_R1.CraftServer.loadPlugin
s(CraftServer.java:260)
[15:31:45 WARN]: at org.bukkit.craftbukkit.v1_7_R1.CraftServer.reload(Cra
ftServer.java:628)
[15:31:45 WARN]: at org.bukkit.Bukkit.reload(Bukkit.java:279)
[15:31:45 WARN]: at org.bukkit.command.defaults.ReloadCommand.execute(Rel
oadCommand.java:23)
[15:31:45 WARN]: at org.bukkit.command.SimpleCommandMap.dispatch(SimpleCo
mmandMap.java:192)
[15:31:45 WARN]: at org.bukkit.craftbukkit.v1_7_R1.CraftServer.dispatchCo
mmand(CraftServer.java:542)
[15:31:45 WARN]: at org.bukkit.craftbukkit.v1_7_R1.CraftServer.dispatchSe
rverCommand(CraftServer.java:529)
[15:31:45 WARN]: at net.minecraft.server.v1_7_R1.DedicatedServer.aw(Dedic
atedServer.java:286)
[15:31:45 WARN]: at net.minecraft.server.v1_7_R1.DedicatedServer.u(Dedica
tedServer.java:251)
[15:31:45 WARN]: at net.minecraft.server.v1_7_R1.MinecraftServer.t(Minecr
aftServer.java:541)
[15:31:45 WARN]: at net.minecraft.server.v1_7_R1.MinecraftServer.run(Mine
craftServer.java:453)
[15:31:45 WARN]: at net.minecraft.server.v1_7_R1.ThreadServerApplication.
run(SourceFile:617)
Code I used that caused error above:
from com.ziclix.python.sql import zxJDBC
params = {}
params['serverName'] = 'host'
params['databaseName'] = 'dbname'
params['user'] = "username"
params['password'] = "pw"
params['port'] = 3306
db = apply(zxJDBC.connectx, ("org.gjt.mm.mysql.MysqlDataSource",), params)
Also, I tried this code:
from com.ziclix.python.sql import zxJDBC
d, u, p, v = "jdbc:mysql://host", "root", "pw", "org.gjt.mm.mysql.Driver"
db = zxJDBC.connect(d, u, p, v)
but it caused this error:
[15:37:20 WARN]: Caused by: Traceback (most recent call last):
File "<iostream>", line 13, in <module>
zxJDBC.DatabaseError: driver [org.gjt.mm.mysql.Driver] not found
[15:37:20 WARN]: at org.python.core.PyException.doRaise(PyException.java:
200)
[15:37:20 WARN]: at org.python.core.Py.makeException(Py.java:1239)
[15:37:20 WARN]: at org.python.core.Py.makeException(Py.java:1243)
[15:37:20 WARN]: at com.ziclix.python.sql.zxJDBC.makeException(zxJDBC.jav
a:328)
[15:37:20 WARN]: at com.ziclix.python.sql.connect.Connect.__call__(Connec
t.java:78)
[15:37:20 WARN]: at org.python.core.PyObject.__call__(PyObject.java:441)
[15:37:20 WARN]: at org.python.core.PyObject.__call__(PyObject.java:447)
[15:37:20 WARN]: at org.python.pycode._pyx5.f$0(<iostream>:15)
[15:37:20 WARN]: at org.python.pycode._pyx5.call_function(<iostream>)
[15:37:20 WARN]: at org.python.core.PyTableCode.call(PyTableCode.java:165
)
[15:37:20 WARN]: at org.python.core.PyCode.call(PyCode.java:18)
[15:37:20 WARN]: at org.python.core.Py.runCode(Py.java:1275)
[15:37:20 WARN]: at org.python.util.PythonInterpreter.execfile(PythonInte
rpreter.java:235)
[15:37:20 WARN]: at org.python.util.PythonInterpreter.execfile(PythonInte
rpreter.java:230)
[15:37:20 WARN]: at net.lahwran.bukkit.jython.PythonPluginLoader.loadPlug
in(PythonPluginLoader.java:244)
[15:37:20 WARN]: ... 16 more
**What I actually did (step-by-step):**
I downloaded zipped mysql connector/J from [this
link](http://dev.mysql.com/downloads/connector/j/3.1.html) (as given in the
answer from the already-linked S.O. question), unzipped it, copied "mysql-connector-
java-3.1.14-bin.jar" from it, pasted it to this path
"C:\Users\my_name\Documents\1.7.2twistedjobs\plugins\MySQL_jython". After, I
opened Control Panel, clicked System, went to Advanced system settings,
clicked Environment variables button, added new named CLASSPATH (since there
was no variable by that name), and set this path as value
"C:\Users\my_name\Documents\1.7.2twistedjobs\plugins\MySQL_jython\mysql-
connector-java-3.1.14-bin.jar", clicked OK.
NOTE: There were no import errors for zxJDBC, which is strange, since it was
obviously imported successfully, but it couldn't find the driver...
Thanks in advance!
Answer: Your second snippet of code should work. It works on my machine. What you
appear to have is a CLASSPATH issue. You might try putting your MySQL driver
jar file in the extensions directory, which on Windows you would find here:
%SystemRoot%\Sun\Java\lib\ext
The driver would then be available to all of your Java applications (if that's
what you want). Details are in Oracle's documentation:
<http://docs.oracle.com/javase/tutorial/ext/basics/install.html>
I want to make a couple of other points. First, even though the driver will
work with the name you gave it, you may eventually want to change it from
`org.gjt.mm.mysql.Driver` to `com.mysql.jdbc.Driver`, which is the modern,
renamed version. (If you're interested as to why, the details are here:
<http://stackoverflow.com/a/5808184/155167>.)
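For reference, a minimal sketch of the connect call using the renamed driver class (the JDBC URL, user, and password below are placeholders):
from com.ziclix.python.sql import zxJDBC
# assumes the MySQL connector jar is already on the CLASSPATH
db = zxJDBC.connect("jdbc:mysql://localhost/mydb", "user", "pw", "com.mysql.jdbc.Driver")
cursor = db.cursor()
cursor.execute("SELECT 1")
print cursor.fetchone()
db.close()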
Your first snippet of code will not work, as there is no `MysqlDataSource`
class in the `org.gjt.mm.mysql` package. That class you reference is actually
here: `com.mysql.jdbc.jdbc2.optional.MysqlDataSource`, but I don't know if
_that_ snippet will work.
Like I said, put the MySQL driver jar in your CLASSPATH correctly, and it
should work. The Oracle documentation on CLASSPATH setup is here:
<http://docs.oracle.com/javase/tutorial/essential/environment/paths.html>
<http://docs.oracle.com/javase/8/docs/technotes/tools/windows/classpath.html>
Good luck!
|
Predict interesting articles with scikit-learn
Question: I'm trying to build an algorithm capable of predicting if I will like an
article, based on the previous articles I liked.
Example:
* I read 50 articles, I liked 10. I tell my program I liked them.
* Then 20 new articles are coming. My program has to give me a "percentage of like" for each new articles, based on the 10 I previously liked.
I found a lead here: [Python: tf-idf-cosine: to find document
similarity](http://stackoverflow.com/questions/12118720/python-tf-idf-cosine-
to-find-document-similarity)
>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from sklearn.datasets import fetch_20newsgroups
>>> twenty = fetch_20newsgroups()
>>> tfidf = TfidfVectorizer().fit_transform(twenty.data)
And then, to compare the first document of the dataset to the others documents
in the dataset:
>>> from sklearn.metrics.pairwise import linear_kernel
>>> cosine_similarities = linear_kernel(tfidf[0:1], tfidf).flatten()
>>> cosine_similarities
array([ 1. , 0.04405952, 0.11016969, ..., 0.04433602,
0.04457106, 0.03293218])
For my case, what I think I will do is to concatenate the text of my 10
articles, run the TfidfVectorizer, and then compare the new big vector to each
new incoming article.
But I wonder how the comparison will be done:
* the big vector (10 articles) compared to the little one OR
* the little one compared to the big one
I don't know if you get my point, but in the first case 90% of the words in
the big vector won't be in the little one.
So my question is: how is the cosine similarity calculated? Do you see a
better approach for my project?
Answer: A [naive bayes](http://en.wikipedia.org/wiki/Naive_Bayes_classifier)
classifier should perform better. Your problem is similar to the classic spam-
classification problem. In your case, you are not identifying **Spam** (what
you don't like) but identifying **Ham** (what article you like).
From the first 50 labeled articles, it's easy to calculate the below stats:
p(word1|like) -- among all the articles I like, the probability of word1 appears
p(word2|like) -- among all the articles I like, the probability of word2 appears
...
p(wordn|like) -- among all the articles I like, the probability of wordn appears
p(word1|unlike) -- among all the articles I do not like, the prob of word1 appears
...
p(like) -- the portion of articles I like (should be 0.2 in your example)
p(unlike) -- the portion of articles I do not like. (0.8)
Then, given a 51st new example, you should find all the seen words in it; for
example, say it contains only word2 and word5. **One of the nice things about
naive bayes is that it only cares about the in-vocabulary words**. Even if more
than 90% of the words in the big vector are not in the new one, it is not a
problem, since **all irrelevant features cancel each other out without
affecting the results.**
The [likelihood
ratio](http://en.wikipedia.org/wiki/Likelihood_function#Continuous_probability_distribution)
will be
prob(like|51th article) p(like) x p(word2|like) x p(word5|like)
---------------------------- = -----------------------------------------
prob(unlike|51th article) p(unlike)xp(word2|unlike)xp(word5|unlike)
As long as the ratio is > 1, you can predict the article as "like". Further,
if you want to increase the precision of identifying "liked" articles, you can
play with the precision-recall balance by increasing the threshold ratio value
from 1.0 to something bigger. In the other direction, if you want to increase
the recall, you can lower the threshold, etc.
For further reading for naive bayes classification in text domain, see
[here](https://web.stanford.edu/class/cs124/lec/naivebayes.pdf).
**This algorithm can easily be modified to do online learning**, i.e.,
updating the learned model as soon as a new example is "liked" or "disliked"
by the user, since everything in the above stats table is basically a
normalized count. As long as you keep each per-word count and the total
counts saved, you are able to update the model on a per-instance basis.
To **use tf-idf weight** of a word for naive bayes, we treat the weight as the
count of the word. I.e., without tf-idf, each word in each document is counted
as 1; with tf-idf, each word is counted as its TF-IDF weight.
Then you get your probabilities for Naive Bayes using the same formula. This
idea can be found in this
[paper](http://machinelearning.wustl.edu/mlpapers/paper_files/icml2003_RennieSTK03.pdf).
I think the [multinomial naive bayes classifier in scikit-
learn](http://scikit-
learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html)
should accept tf-idf weights as input data.
See the comment for the MultinomialNB:
> The multinomial Naive Bayes classifier is suitable for classification with
> discrete features (e.g., word counts for text classification). The
> multinomial distribution normally requires integer feature counts. However,
> **in practice, fractional counts such as tf-idf may also work.**
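To make this concrete, here is a minimal scikit-learn sketch (the article texts and labels are placeholders; 1 = like, 0 = dislike):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
train_texts = ["text of an article I liked ...", "text of an article I did not like ..."]
train_labels = [1, 0]
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
clf = MultinomialNB()
clf.fit(X_train, train_labels)
# "percentage of like" for each new article; column order follows clf.classes_
X_new = vectorizer.transform(["text of a new incoming article ..."])
print clf.predict_proba(X_new)
For the online-learning idea above, MultinomialNB also provides a `partial_fit` method, so the model can be updated as each new article gets labeled.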
|
Python, Django with PyCharm. Message error: "No module named M2Crypto" How resolve?
Question: I received this message: "No module named M2Crypto". I have already installed
M2Crypto with the command "pip install M2Crypto", and when I re-run it, I get
the message: "Requirement already satisfied".
What's the problem with M2Crypto?
Thanks
ps: I use Linux: 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:12:00 UTC 2013
i686 i686 i686 GNU/Linux, Pycharm and Python2.7 (/usr/bin/python2.7)
Maybe some interpreter option in PyCharm configuration for running the
project?
Answer: First of all, verify that the **version of pip** is in line with your
interpreter. So for python2.7,
pip --version
should print something like
pip 6.0.8 from /usr/local/lib/python2.7/dist-packages (python 2.7)
depending on how you've installed it. The important part is in the end, where
your interpreter ("python 2.7") should be shown.
Once you're sure to have the right pip-version, ensure your **package is
correctly installed**. It should usually be installed in the directory printed
out previously by pip (e.g. /usr/local/lib/python2.7/dist-packages/).
Assuming you've already done this, **what else** can go wrong to make your
interpreter not find the 'M2Crypto' package?
Python uses the `PYTHONPATH` environment variable for module lookups. So
there's the possibility that your `PYTHONPATH` variable has been changed. Try
running your program by adding the above-mentioned path to `PYTHONPATH` and
either exporting it before running your webserver:
export PYTHONPATH=/usr/local/lib/python2.7/dist-packages/:$PYTHONPATH
# run your server here
or by prepending the same variable to your command:
PYTHONPATH=/usr/local/lib/python2.7/dist-packages/:$PYTHONPATH python <run-stuff-here>
This should make your program find the M2Crypto module.
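You can also check from inside the interpreter which directories Python actually searches (a quick sanity check, independent of M2Crypto):
import sys
import pprint
pprint.pprint(sys.path)  # the dist-packages directory that holds M2Crypto must appear here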
|
Hierarchical Clustering Dendrogram using python
Question: Graph theory and Data mining are two fields of computer science I'm still new
at, so excuse my basic understanding.
I have been asked to plot a Dendrogram of a hierarchically clustered graph.
The input I have been given is the following : a list of all the edges of this
graph.
So far I have been able to draw the graph from the input.
The next step would be clustering the graph, then plotting the Dendrogram from
that Clustered graph.
My question is: can someone give me a step-by-step guide to follow? What
input/output is required/returned during both steps of the process
(clustering, getting the dendrogram)?
Note :
So far I have been using graph-tool to draw the graphs, I also ran a test code
I found on the internet from the Scipy.clustering.hierarchy package, and it
seems to have all the needed functions.
Answer: You are correct: the `scipy.cluster.hierarchy` package is the right tool. Here
is some Python pseudo-code to show you the general idea.
From your statement 'So far I have been able to draw the graph from the
input.' I am assuming that you already have a good start on getting the input
data into Python, etc.
start python Clustering pseudo code
I needed at least these python packages
import scipy.cluster.hierarchy as sch
import numpy as np
import sys
You need a 'distance metric'; if your input data is strings, then you could
use something like this:
`from Levenshtein import jaro`
get the Matrix Dimension from the labels for the distance matrix
`distanceMatrixDimension= len(p_names)`
get coordinates of the upper triangle
`upper_triangle=np.triu_indices(distanceMatrixDimension,1)`
get the distances, where `dis` is your distance function: `distances=np.apply_along_axis(dis,0,upper_triangle)`
start the clustering algorithm from the 'Scipy.clustering.hierarchy' package
get the linkage matrix Z here 'average' is the method of the linkage
`Z=sch.linkage(distances,'average')`
get a bound for the metrics dimension values generated from the data
`max_dist=distances.max()`
0.2*max_dist acts like a threshold value; experiment with different values
`fclster=sch.fcluster(Z,0.2*max_dist,'distance')`
end python Clustering pseudo code
Z is a linkage matrix encoding a hierarchical agglomerative clustering of your
data. Another way to say this is that it is a (hierarchical) 'tree' with a root
node that branches down to your leaf nodes, which are generally a record or row
in the input data you want to cluster.
The dendrogram is just a visualization of this tree. There are a number of ways
to do this; you can get plot variables from the dimensions of Z. The best way
to do this is with matlab or octave. Generally you use dendrogram from scipy
to plot the 'dendrogram':
`import matplotlib.pyplot as plt`
then
`dendrogram(Z, color_threshold=1, labels=[....],show_leaf_counts=True)`
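Putting the pieces above together, a minimal end-to-end sketch (the labels and the `jaro` metric are placeholders; substitute your own distance function):
import numpy as np
import scipy.cluster.hierarchy as sch
import matplotlib.pyplot as plt
from Levenshtein import jaro
p_names = ['alpha', 'alphas', 'beta', 'betas', 'gamma']  # hypothetical labels
n = len(p_names)
upper_triangle = np.triu_indices(n, 1)
# condensed distance vector: one (1 - similarity) entry per pair in the upper triangle
distances = np.array([1.0 - jaro(p_names[i], p_names[j]) for i, j in zip(*upper_triangle)])
Z = sch.linkage(distances, 'average')
sch.dendrogram(Z, labels=p_names, show_leaf_counts=True)
plt.show()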
|
Python: Running multiple timers simultaneously
Question: I want to create multiple timers in a loop. When the loop terminates, there
should be multiple timers running. If any of the timers times out, it should
call another function. How do I implement this in Python? Any help will be
appreciated.
eg.
for i in (0,6):
do something
start timer_i
for i in (0,6):
if timer_i times out:
call another function
Answer: Look into `Timer`, which is located in the `threading` module of the Python
standard library. The documentation gives the following example:
from threading import Timer
def hello():
print("hello, world")
t = Timer(30.0, hello)
t.start() # after 30 seconds, "hello, world" will be printed
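Applied to the loop in the question, a minimal sketch could look like this (the timeouts and the callback are placeholders):
from threading import Timer
def another_function(i):
    print("timer %d timed out" % i)
# start six timers; each one calls another_function when it times out
timers = [Timer(5.0 + i, another_function, args=(i,)) for i in range(6)]
for t in timers:
    t.start()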
|
Resume an Iterator
Question: Is there a way to resume an iterator after a keyboard interrupt signal or
other SIGINT signal in python?
Specifically for an itertools iterator:
import itertools
for word in itertools.product("abcd", repeat=3):
    print(word)
I want to resume printing from where it left off.
Answer: You can catch keyboard interrupts in a try/except block
import itertools
import time
for word in itertools.product("abcd",repeat=3):
try:
print(word)
time.sleep(1)
    except KeyboardInterrupt:
        pass
Hitting Ctrl+C during execution does not stop the iteration.
|
Trying to multiprocess a function requiring a list argument in python
Question: My problem is that I'm trying to pass a `list` as a variable to a function,
and I'd like to multi-thread the function processing. I can't seem to use
`pool.map` because it only accepts iterables. I can't seem to use `pool.apply`
because it seems to block the pool while it works, so I don't really
understand how it allows multi-threading at all (admittedly, I don't seem to
understand anything about multi-threading). I tried `pool.apply_async`, but
the program finishes in seconds, and only appears to process about 20000 total
computations. Here's some pseudo-code for it.
import MySQLdb
from multiprocessing import Pool
def some_math(x, y):
f(x[1], x[2], y[1], y[2])
return f
def distance(x):
x_distances = []
for y in all_y:
distance = some_math(x, y)
if distance > 1000000:
continue
else:
x_distances.append(x[0], y[0],distance)
mysql.executemany(sql_update, x_distances)
mydb.commit()
all_x = []
all_y = []
sql_x = 'SELECT id, lat, lng FROM table'
sql_y = 'SELECT id, lat, lng FROM table'
sql_update = 'INSERT INTO distances (id_x, id_y, distance) VALUES (%s, %s, %S)'
cursor.execute(sql_x)
all_x = cursor.fetchall()
cursor.execute(sql_y)
all_y = cursor.fetchall()
p = Pool(4)
for x in all_x:
p.apply_async(distance, x)
OR, if using map:
p = Pool(4)
for x in all_x:
p.map(distance, x)
The error returns: Processing A for distances...
Traceback (most recent call last):
File "./distance-house.py", line 94, in <module>
p.map(range, row)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get
raise self._value
TypeError: 'float' object has no attribute '__getitem__'
I am trying to multi-thread a long computation - calculating the distance
between something like 10,000 points on a many-to-many basis. Currently, the
process is taking several days, and I figure that multiprocessing the results
could really improve the efficiency. I'm all ears for suggestions.
Answer: You can use `pool.map`:
p = Pool(4)
p.map(distance, all_x)
as per the first example in the
[doc](https://docs.python.org/2/library/multiprocessing.html). It will do the
iteration for you!
|
PolynomialFeatures fit_transform is giving Value error
Question: I am getting a ValueError while trying to run the Polynomial Regression
example:
from sklearn.preprocessing import PolynomialFeatures
import numpy as np
poly = PolynomialFeatures(degree=2)
poly.fit_transform(X) ==> ERROR
The error is:
File "/root/.local/lib/python2.7/site-packages/sklearn/base.py", line 426, in fit_transform
return self.fit(X, **fit_params).transform(X)
File "/root/.local/lib/python2.7/site-packages/sklearn/preprocessing/data.py", line 473, in fit
self.include_bias)
File "/root/.local/lib/python2.7/site-packages/sklearn/preprocessing/data.py", line 463, in _power_matrix
powers = np.vstack(np.bincount(c, minlength=n_features) for c in combn)
File "/usr/lib/python2.7/dist-packages/numpy/core/shape_base.py", line 226, in vstack
return _nx.concatenate(map(atleast_2d,tup),0)
File "/root/.local/lib/python2.7/site-packages/sklearn/preprocessing/data.py", line 463, in <genexpr>
powers = np.vstack(np.bincount(c, minlength=n_features) for c in combn)
ValueError: The first argument cannot be empty.
My scikit-learn version is 0.15.2
This is example is taken from: <http://scikit-
learn.org/stable/modules/linear_model.html#polynomial-regression-extending-
linear-models-with-basis-functions>
Answer: You should try setting include_bias to False when creating the
PolynomialFeatures object, like this:
poly = PolynomialFeatures(degree=2, include_bias=False)
Note that the final matrix in the example does not have the first column now.
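For example, with a small made-up `X` the full call looks like this:
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
X = np.arange(6).reshape(3, 2)  # hypothetical 3x2 feature matrix
poly = PolynomialFeatures(degree=2, include_bias=False)
print(poly.fit_transform(X))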
|
Python memory management with a median image stacker
Question: I've been playing with time-lapse photography lately, and using median image
stacking on a group of images, or extracted video frames. I've created a
little script that works well with a relatively small number of images:
from PIL import Image
import os
import numpy as np
#Create a list of the images' data
imglist = []
for fname in os.listdir("input\\"):
imglist.append(np.array(Image.open("input\\"+fname)))
#Find the median of all image data in the stack, save it
median = np.uint8(np.median(imglist, axis=0))
Image.fromarray(median).save("median.png","PNG")
The obvious problem here is that if too many images are loaded into memory at
once, they fill up my meager 8GB of RAM. I have tried approaches which include
splitting the image data into chunks and calculating the median one chunk at a
time, and splitting the image data by color channel, but when there are that
many images to process, the amount of file calls it takes to save the data to
the disk image by image causes a massive slowdown.
I am wondering if there's any way I can use some weighting scheme to calculate
the median for a few images at a time, then repeat with the results, or use a
gimmick like virtual memory, memory-mapped files, or something else to
eliminate this excessive memory usage. Any ideas?
Answer: How about using [h5py](http://www.h5py.org/), an HDF5
interface for Python? HDF5 is a data format designed for handling massive
gridded datasets, and with h5py you can manipulate that data with NumPy.
NetCDF is good too, and integrates with HDF5.
What kind of resolution are the images, and how many of them are there?
Here is a solution to your problem which uses netCDF4 to store a collection of
images in a netCDF file and take the median at the end:
import glob
from netCDF4 import Dataset
import numpy as np
from PIL import Image
WIDTH = 800
HEIGHT = 450
root = Dataset('test.nc', 'w')
root.createDimension('x', WIDTH)
root.createDimension('y', HEIGHT)
root.createDimension('channels', 3)
t = root.createDimension('t', None)
img = root.createVariable('image', 'u1', ('t','y','x','channels'))
images = glob.glob('images/*')
for i,fname in enumerate(images):
im = Image.open(fname)
im_array = np.asarray(im)
img[i] = im_array
median = np.median(img, axis=0)
im = Image.fromarray(np.uint8(median))
im.save('out.png')
|
"no control matching name" in mechanize for python
Question: I am using mechanize for python and I am trying to search for an item in
kijiji. Eventually my goal is for my program to search for an item, and using
beautifulsoup, check whether or not someone has posted a new ad for my search
term by scraping through the html source that comes from inputting a search
term and e-mailing me if any new ads show up so that I can be the first one to
respond. Below is my code, but I get the error "no control matching name
'keywords'". I am not sure what I am doing wrong.
import mechanize
br = mechanize.Browser() # allow everything to be written to
br.set_handle_robots(False) # ignore robots
br.open("http://www.kijiji.ca/h-kitchener-waterloo")
br.select_form(nr=0)
br["keywords"] = "Nvidia"
Answer: Your code is OK, but there is no control named 'keywords' in the first form on
that page. You can look at the page source to verify that.
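A quick way to see what the page actually exposes is to list the forms and their controls before selecting one (a debugging sketch, not a fix by itself):
import mechanize
br = mechanize.Browser()
br.set_handle_robots(False)
br.open("http://www.kijiji.ca/h-kitchener-waterloo")
# print every form name together with the types and names of its controls
for form in br.forms():
    print form.name
    for control in form.controls:
        print "  ", control.type, control.name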
|
My first unit test, what am I doing wrong?
Question: This is my first time trying to write a test and I'm guessing I made some
obvious screw up with writing the test itself.
Here is my test:
from django.test import TestCase
from accounts.forms import UserReview
class MyTests(TestCase):
def test_forms(self):
form_data = {'headline': 'test', 'body_text': 'description of item Im selling', 'author: ben'}
form = SellForm(data=form_data)
self.assertEqual(form.is_valid(), True)
I am getting the following error:
ImportError: Failed to import test module: accounts.tests
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py", line 254, in _find_tests
module = self._get_module_from_name(name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py", line 232, in _get_module_from_name
__import__(name)
File "/Users/benjamino/Desktop/myproject/myproject/accounts/tests.py", line 8
form_data = {'headline': 'test', 'body_text': 'description of item Im selling', 'author: ben'}
^
SyntaxError: invalid syntax
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
Destroying test database for alias 'default'...
Why is accounts.tests failing to import? The code above is located in my
accounts/tests.py.
Answer: This is a simple syntax error, as the message tells you. Your quoting is wrong
in the last element of your dict: `'author: ben'` should be `'author': 'ben'`
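The corrected line would read:
form_data = {'headline': 'test', 'body_text': 'description of item Im selling', 'author': 'ben'}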
|
py2exe cannot import from six.py
Question: I'm trying use py2exe on a program that imports `urlparse` from
`six.moves.urllib_parse`. Here is the program:
# hello.py
from six.moves.urllib_parse import urlparse
print('hello world')
And here is my setup.py:
from distutils.core import setup
import py2exe
setup(console=['hello.py'])
Running hello.py works fine. When I compile hello.py into an exe using `python
setup.py py2exe`, a `hello.exe` file is produced. However, when I run
`hello.exe` I get an error saying:
`ImportError: No module named urlparse`
I'm using Python 2.7.
With Python 3.4, I get an error saying `KeyError: 'six.moves'` when running
`python setup.py py2exe`.
How can I stop these errors from occurring?
Answer: py2exe released a new version recently that fixes this problem:
Changes in version 0.9.2.2:
- Added support for six, cffi, pycparser, openssl.
Using this version I was able to create an .exe and run it successfully.
|
how to avoid IOError while looping through a long list that opens many files in threads?
Question: I'm downloading access logs from Amazon S3. These are A LOT of small files. To
reduce the time of download, I've decided to read each file in a thread.
This is my main method that first connects to S3, then iterates over each
document, and reads each documents' content inside a separate thread.
def download_logs(self):
"""
Downloads logs from S3 using Boto.
"""
if self.aws_keys:
conn = S3Connection(*self.aws_keys)
else:
conn = S3Connection()
files = []
mybucket = conn.get_bucket(self.input_bucket)
with tempdir.TempDir() as directory:
for item in mybucket.list(prefix=self.input_prefix):
local_file = os.path.join(directory, item.key.split("/")[-1])
logger.debug("Downloading %s to %s" % (item.key, local_file))
thread = threading.Thread(target=item.get_contents_to_filename, args=(local_file,))
thread.start()
files.append((thread,local_file))
elms = range(len(files))
elemslen = len(elms)
while elemslen:
curr = random.choice(elms)
thread, file = files[curr]
if not thread.is_alive():
yield file
elms.remove(curr)
elemslen -= 1
As you can see this is a generator as it yields. The generator is processed by
simply reading each file's content to concatenate them
logs = self.download_logs()
for downloaded in logs:
self.concat_files(tempLog, downloaded)
The above code fails with the following Warning raised in the threads:
[2014-10-20 15:15:21,427: WARNING/Worker-2] Exception in thread Thread-710:
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/Users/viktornagy/.virtualenvs/vidzor/lib/python2.7/site-packages/boto/s3/key.py", line 1561, in get_contents_to_filename
fp = open(filename, 'wb')
IOError: [Errno 24] Too many open files: u'/var/folders/7h/9tt8cknn1qx40bs_s467hc3r0000gn/T/tmpZS9fdn/access_log-2014-10-20-11-36-20-9D6F43B122C83BD6'
Of course, I could raise the number of open files, but I would rather limit
the number of threads to something meaningful.
Now my question is how to achieve that? I have a loop that generates a list of
threads. Once this loop is finished, then I digest the list, and check for
closed threads that can yield.
If I limit the number of threads in the first loop, then I'll never have the
list ready to start its digestion.
Answer: You can use
[`multiprocessing.dummy`](https://docs.python.org/2.7/library/multiprocessing.html#module-
multiprocessing.dummy) to create a pool of `threading.Thread` objects, and
distribute the work to the threads in the `Pool`:
from multiprocessing.dummy import Pool
def download_logs(self):
    """
    Downloads logs from S3 using Boto.
    """
    if self.aws_keys:
        conn = S3Connection(*self.aws_keys)
    else:
        conn = S3Connection()
    mybucket = conn.get_bucket(self.input_bucket)
    pool = Pool(20) # 20 threads in the pool. Tweak this as you see fit.
    with tempdir.TempDir() as directory:
        def fetch(item):
            # download one key to a local file and return its path
            local_file = os.path.join(directory, item.key.split("/")[-1])
            item.get_contents_to_filename(local_file)
            return local_file
        results = pool.imap_unordered(fetch,
                                      mybucket.list(prefix=self.input_prefix))
        for result in results:
            yield result
I'm using
[`imap_unordered`](https://docs.python.org/2.7/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.imap_unordered)
so that you can start yielding results as soon as they arrive, rather than
needing to wait for all the tasks to complete.
|
Python Function to test ping
Question: I'm trying to create a function that I can call on a timed basis to check for
good ping and return the result so I can update the on-screen display. I am
new to python so I don't fully understand how to return a value or set a
variable in a function.
Here is my code that works:
import os
hostname = "google.com"
response = os.system("ping -c 1 " + hostname)
if response == 0:
pingstatus = "Network Active"
else:
pingstatus = "Network Error"
Here is my attempt at creating a function:
def check_ping():
hostname = "google.com"
response = os.system("ping -c 1 " + hostname)
# and then check the response...
if response == 0:
pingstatus = "Network Active"
else:
pingstatus = "Network Error"
And here is how I display `pingstatus`:
label = font_status.render("%s" % pingstatus, 1, (0,0,0))
So what I am looking for is how to return pingstatus from the function. Any
help would be greatly appreciated.
Answer: It looks like you want the `return` keyword
def check_ping():
hostname = "taylor"
response = os.system("ping -c 1 " + hostname)
# and then check the response...
if response == 0:
pingstatus = "Network Active"
else:
pingstatus = "Network Error"
return pingstatus
You need to capture/'receive' the return value of the function(pingstatus) in
a variable with something like:
pingstatus = check_ping()
Some info on python functions:
<http://www.tutorialspoint.com/python/python_functions.htm>
<http://www.learnpython.org/en/Functions>
It's probably worth going through a good introductory tutorial to Python,
which will cover all the fundamentals. I recommend investigating
[Udacity.com](https://www.udacity.com/courses#!/all) and
[codeacademy.com](http://www.codecademy.com/tracks/python)
|
How to output a file where named entities are replaced by tags using Stanford CoreNLP in Python?
Question: I'm working with Stanford NLP using Python. So, I have a function that inputs
some text files and converts them to xml files (generated by Stanford CoreNLP).
Now, I want to write another function that inputs these xml files and outputs
the corresponding file that contains the same text, but with named entities
replaced by their tag, the ends of sentences marked with the word "STOP", and
the punctuation removed. The beginning of the file also has the word "STOP".
import subprocess
def generate_xml(input,output):
p = subprocess.Popen('java -cp stanford-corenlp-2012-07-09.jar:stanford-corenlp-2012-07-06-models.jar:xom.jar:joda-time.jar -Xmx3g edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner -filelist /Users/akritibahal/Downloads/stanford-corenlp-2012-07-09/myfile_list.txt -outputDirectory /Users/akritibahal/Downloads/stanford-corenlp-2012-07-09', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
print line
retval = p.wait()
The function that will give out an output file with named-entity tags is:
def process_file(input_xml,output_file):
Could anybody help me how to get such output file with named entity tags?
Answer: I've been parsing the output of the CoreNLP using minidom. Here is some
starter code you may want to use but you may want to check
<https://github.com/dasmith/stanford-corenlp-python>
Note that you need to get the tokenization used by the Stanford CoreNLP since
the data returned is based on offsets of sentences and tokens.
from xml.dom import minidom
xmldoc = minidom.parseString(raw_xml_data)
dependencies = []
for sentence_xml in xmldoc.getElementsByTagName('sentences')[0].getElementsByTagName('sentence'):
    # 'parser' is assumed to be whatever tree reader you use for the bracketed parse string
    parse = parser.parse(sentence_xml.getElementsByTagName('parse')[0].firstChild.nodeValue)
    tokens = [(i,j) for i,j in zip(sentence_xml.getElementsByTagName('tokens')[0].getElementsByTagName('token'),parse.get_leaves())]
    # example for processing dependencies
    elements = sentence_xml.getElementsByTagName('dependencies')
    for element in elements:
        if element.getAttribute('type')=="collapsed-ccprocessed-dependencies":
            dependencies += [i for i in element.getElementsByTagName('dep')]
|
Seasonal Decomposition of Time Series by Loess with Python
Question: I'm trying to do with Python what the STL function does in R.
The R commands are
fit <- stl(elecequip, s.window=5)
plot(fit)
How do I do this in Python? I saw that statsmodels.tsa has some time
series analysis functions, but I couldn't specifically find "Seasonal
Decomposition of Time Series by Loess" in the documentation. Similarly, on
Python.org there is a library called timeseries 0.5.0, but this doesn't have
documentation and its home site looks down. I know that there is an option
with rpy2 using a wrapper, but I don't know how to use the wrapper.
Thanks.
Answer: I've been having a similar issue and am trying to find the best path forward.
[Here is a github repo for an STL decomposition based on the Loess
procedure](https://github.com/andreas-h/pyloess/blob/master/mpyloess.py). It
is based on the original fortran code that was available with [this
paper](http://www.wessa.net/download/stl.pdf). It is really just a python
wrapper around the original Fortran code, so you know its likely works well
and isn't buggy.
If you want something more Python centric and are willing to go with a
slightly simpler decomposition routine, StatsModels has one:
Try moving your data into a [Pandas](http://pandas.pydata.org/) DataFrame and
then call [StatsModels](http://statsmodels.sourceforge.net/)
`tsa.seasonal_decompose`. See the [following
example](http://statsmodels.sourceforge.net/stable/release/version0.6.html?highlight=seasonal#seasonal-
decomposition):
import statsmodels.api as sm
dta = sm.datasets.co2.load_pandas().data
# deal with missing values. see issue
dta.co2.interpolate(inplace=True)
res = sm.tsa.seasonal_decompose(dta.co2)
resplot = res.plot()
You can then recover the individual components of the decomposition from:
res.resid
res.seasonal
res.trend
I hope this helps!
|
Obtain text from lxml Comment
Question: I am trying to get the content of the `_Comment`. I've researched quite a bit
on how to do it, but I don't know how to access the function from the `td`
element in order to grab the text. I'm using xpaths with the python Scrapy
module if that helps.
td = None [_Element]
<built-in function Comment> = None [_Comment]
a = None [_Element]
The HTML for the `td` element is:
<table class="crIFrameReviewList">
<tr>
<td>
<!-- BOUNDARY -->
<a name="R2L4AFEICL8GG6"></a><br />
<div style="margin-left:0.5em;">
<div style="margin-bottom:0.5em;">
304 of 309 people found the following review helpful
</div>
<div style="margin-bottom:0.5em;">
<span style='margin-left: -5px;'><img src="http://g-ecx.images-amazon.com/images/G/01/x-locale/common/customer-reviews/stars-5-0._V192240867_.gif" width="64" alt="5.0 out of 5 stars" title="5.0 out of 5 stars" height="12" border="0" /> </span>
<b>Great Travel Zoom</b>, <nobr>April 9, 2014</nobr>
</div>
<div style="margin-bottom:0.5em;">
<div class="tiny" style="margin-bottom:0.5em;">
<span class="crVerifiedStripe"><b class="h3color tiny" style="margin-right: 0.5em;">Verified Purchase</b><span class="tiny verifyWhatsThis">(<a href="http://www.amazon.com/gp/community-help/amazon-verified-purchase" target="AmazonHelp" onclick="amz_js_PopWin('http://www.amazon.com/gp/community-help/amazon-verified-purchase', 'AmazonHelp', 'width=400,height=500,resizable=1,scrollbars=1,toolbar=0,status=1');return false; ">What's this?</a>)</span></span>
</div>
<div class="tiny" style="margin-bottom:0.5em;">
<b><span class="h3color tiny">This review is from: </span>Canon PowerShot SX700 HS Digital Camera (Black) (Electronics)</b>
</div>
For the recent few years Canon has made great efforts to improve their travel-zoom compact cameras, and the new SX700 is their next remarkable achievement on that way. It's a little bit bigger than its predecessor (SX280) but it is very well built and has an attractive look and feel (I like the black one). It also got a new front grip which makes one-hand shooting more convenient, even when shooting video, since the Video button was moved from the back to the top and you can now use your thumb solely for holding the camera.<br /><br />Here is a brief list of the new camera pros & cons:<br /><br />PROS:<br />* A very good design and build quality with the attractive finish.<br />* A new powerful 30x optical zoom lens in just a pocket-size body.<br />* Incredible range from 25mm wide to 750mm telephoto for stills and video.<br />* Zoom Framing Assist - very useful new feature to compose your pictures at long telephoto.<br />* Very effective optical Intelligent Image Stabilization for...
<a href="http://rads.stackoverflow.com/amzn/click/B00I58M26Y" target="_top">Read more</a>
<div style="padding-top: 10px; clear: both; width: 100%;">
Answer: Find the `div` with `class="reviewText"` using `.//div[@class="reviewText"]`
xpath expression, and dump the element to a string using `tostring()` with the
`text` method:
import lxml.html
data = """
your html here
"""
td = lxml.html.fromstring(data)
review = td.find('.//div[@class="reviewText"]')
print lxml.html.tostring(review, method="text")
Prints:
54,000 RPM - It has a spinning disk drive that is way beyond our time...I bought 10 of these just for the hard drive, they blow SSD's out of the water.
Seriously though... how does a well known computer company mistype an important spec?
|
Can not load jQuery DataTables plugin in IPython Notebook
Question: I'm attempting to use the jQuery DataTables plugin within an IPython Notebook.
For some reason, the plugin doesn't seem to be applied to the jQuery instance.
The code below demonstrates the problem. When I execute this, I get an error
of "[Error] TypeError: 'undefined' is not a function (evaluating
'$('mytable').dataTable')" in the web console as if the plugin hasn't been
loaded. Should it be possible to load plugins this way?
from IPython.display import Javascript
Javascript('''
var dataSet = [
['Trident','Internet Explorer 4.0','Win 95+','4','X'],
['Trident','Internet Explorer 5.0','Win 95+','5','C'],
['Trident','Internet Explorer 5.5','Win 95+','5.5','A'],
['Trident','Internet Explorer 6','Win 98+','6','A'],
['Trident','Internet Explorer 7','Win XP SP2+','7','A'],
['Trident','AOL browser (AOL desktop)','Win XP','6','A'],
['Gecko','Firefox 1.0','Win 98+ / OSX.2+','1.7','A'],
['Gecko','Firefox 1.5','Win 98+ / OSX.2+','1.8','A'],
['Gecko','Firefox 2.0','Win 98+ / OSX.2+','1.8','A'],
['Gecko','Firefox 3.0','Win 2k+ / OSX.3+','1.9','A'],
['Gecko','Camino 1.0','OSX.2+','1.8','A']
];
$(element).html('<table id="mytable"></table>')
$('#mytable').dataTable( {
"data": dataSet,
"columns": [
{ "title": "Engine" },
{ "title": "Browser" },
{ "title": "Platform" },
{ "title": "Version", "class": "center" },
{ "title": "Grade", "class": "center" }
]
} );
''', lib=['/static/DataTables/media/js/jquery.dataTables.js'],
css=['/static/DataTables/media/css/jquery.dataTables.css'])
Answer: May be you should use `require.config`.
Like this.
%%javascript
require.config({
paths: {
dataTables: '//cdn.datatables.net/1.10.12/js/jquery.dataTables.min'
}
});
and
from IPython.display import HTML
HTML('''
<table id="example" class="display" cellspacing="0" width="100%">
<thead>
<tr>
<th>Name</th>
<th>Position</th>
<th>Office</th>
<th>Age</th>
<th>Start date</th>
<th>Salary</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tiger Nixon</td>
<td>System Architect</td>
<td>Edinburgh</td>
<td>61</td>
<td>2011/04/25</td>
<td>$320,800</td>
</tr>
</tbody>
<script>
require(['dataTables'], function(){
$('#example').DataTable();
});
</script>
''')
|
Python: How to download a webfile into the memory?
Question: [In order to open the example urls you need to login to Shazam]
So I'm writing a script that downloads my Shazam history so I can then
manipulate it to write playlists to other services. Anyways, I can't directly
parse the history from <http://www.shazam.com/myshazam> because there's a lot
of JavaScript reloading going on there and I guess it would be harder to solve
that problem. So that's why I want to manipulate the file you can download
which you can find here <http://www.shazam.com/myshazam/download-history>
I'm trying to find a way to do this, but I'm running into some problems here.
1st I was planning to use urlretrieve
import urllib
urllib.urlretrieve("http://www.shazam.com/myshazam/download-history", "myshazam-history.html")
but I'm not even sure that's going to work at all because when I try to
download that file there's not an actual URL path like
<http://www.shazam.com/myshazam/download-history/myshazam-history.html> (that
gives you a 404 error). Instead when you hit that URL it immediately redirects
to <http://www.shazam.com> and it prompts the download window of the browser.
The 2nd problem is that I still need to hold the cookies of the sessions and I
don't know how to pass that to urlretrieve to test if it works. Below is some
test code I wrote that logs in, holds the session, and then parses a
webpage.
def LoginFB(username,password):
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookielib.CookieJar()))
url = "https://www.facebook.com/login.php?skip_api_lo....allthe_loginshazam_stuff)"
data = "&email="+username+"&pass="+password
socket = opener.open(url)
return socket, opener
def shazamParse(opener):
url = "http://www.shazam.com/myshazam/"
content = opener.open(url).read()
soup = BeautifulSoup(content)
finalParse = soup.prettify()
return finalParse.encode("utf-8")
(socket, opener) = LoginFB("email","password")
shazamParse(opener)
What I want to do is hit the download url as a logged-in user (holding the session
cookies), download the file into the memory, put the contents of the file into
a string and then parse it with BeautifulSoup. Exactly the same approach as my
shazamParse function only that I'm reading from a string with the contents of
the **myshazam-history.html** file.
Any ideas or hints on how can I do this?
Answer: While I'll provide a direct answer here, there are several other libraries
that will do this type of thing in a much cleaner, more maintainable manner
for you. They are:
1. [Scrapy](http://doc.scrapy.org/en/latest/) \- A web spider which handles auth. It's a large tool, but will work well if you do a lot of scraping.
2. [requests](http://docs.python-requests.org/en/latest/) library - This is what urllib2 should have been. _Highly recommended for this job!_ See the short sketch right after this list.
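As a rough sketch of the `requests` option (the login URL and form fields below are placeholders; Shazam's real login goes through Facebook and may differ):
import requests
session = requests.Session()  # cookies persist across requests automatically
session.post("https://www.example.com/login", data={"email": "...", "pass": "..."})
resp = session.get("http://www.shazam.com/myshazam/download-history")
page_data = resp.text  # feed this string to BeautifulSoup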
To do this with `urllib2`, you'll need to use the `CookieJar` library, so that
urllib2 has a chance to hold onto your session and cookie variables set in the
initial auth request.
import urllib2
from cookielib import CookieJar
cj = CookieJar()
# Creates a custom page opener, which is cookie aware
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
# Make the actual url request from the server
data = None # optional POST data; use urllib.urlencode({...}) if you need to send form fields
response = opener.open("http://www.example.com/page/page2", data)
page_data = response.read()
# Look at the HTML from the response
print page_data[0:200]
Once you setup a `urllib2 opener` with a CookieJar, all future requests from
this opener will be able to access cookies setup in previous requests.
There is another issue you might run into about using facebook auth to access
their site, but that should be posted as a new question should you run into
it!
Hope that helps!
|
Create new line based on each regex match in python
Question: I have an input file that contains data formatted as follows:
a; b, c| derp derp "X1234567, Y1234567, Z1234567" derp derp a; b, c|
I would like to use Python to parse that into multiple lines for each item
that occurs between double quotes.
The output for the above example would be:
a; b, c| derp derp X1234567 derp derp a; b, c|
a; b, c| derp derp Y1234567 derp derp a; b, c|
a; b, c| derp derp Z1234567 derp derp a; b, c|
So far I have something like this:
import re
prefix = re.compile ('^(.*?)"')
pattern = re.compile('\"(.*?)([A-Z]{1}[0-9]{7})(.*?)\"')
suffix = re.compile ('"(.*?)$')
for i, line in enumerate(open('myFile.txt')):
for match in re.finditer(pattern, line):
print prefix, match.group(), suffix
But it only seems to return the first match of each of the contents.
Answer: In this situation it's actually a lot more work (in my opinion) to use regex
rather than simple string and list manipulations. I would just do it as such:
#!/usr/bin/env python
with open('myFile.txt','r') as f:
    lines = f.readlines()
for line in lines:
line = line.strip()
start = line.find('"')
end = line.find('"',start+1)
data = line[start+1:end].split(',')
data = [x.strip() for x in data]
for x in data:
print line[:start],x,line[end+1:]
Here's what I found after taking a look at the code you posted:
* You're actually printing `SRE_Pattern` objects for `prefix` and `suffix` in the print line. You should record matches for prefix and suffix on every iteration of the outer for loop.
* Calling `match.group()` will return the entire match, not just what's in parentheses. I think you want `match.group(1)` in most cases.
* Having `pattern` defined like that only matches one string because it searches sequentially through the lines for a starting quotation mark followed by the rest of the pattern. Therefore it gets to the index for the first quotation mark, checks once for a pattern, finds `X1234567` then moves on.
* I'm not sure why you have backslashes before the quotation marks in `pattern`, as I don't think those are special characters.
* In your `suffix`, it will match the first quotation mark not the second, and thus `suffix` will include the stuff between the quotation marks.
* The print statement will insert spaces between items if you use commas, so you should concatenate them using `+` instead.
And finally here is what I ended up with for regex:
#!/usr/bin/env python
import re
prefix = re.compile('^(.*?)"')
quotes = re.compile('".*?(.*).*?"')
pattern = re.compile('[A-Z]{1}[0-9]{7}')
suffix = re.compile('".*"(.*?)$')
for (i, line) in enumerate(open('myfile.txt')):
    pre = prefix.search(line).group(1)
    data = quotes.search(line).group(1)
    suf = suffix.search(line).group(1)
    for match in re.finditer(pattern, data):
        print pre + match.group(0) + suf
Hope this helps; if you have any questions, please ask. Regex is a tricky beast
at the best of times.
|
python search a string in a text file and copy a block
Question: I have this text file names.txt
Daniel
Sam
Sameer
Code
Print
Alpha
Bravo
Charlie
and I want to search it for the string "Alpha", copy that line and the
previous 100 lines, then write (append) them to the file result.txt
with open(names.txt) as g:
    lines = (g.readlines())
    for line in lines:
        if "{0}".format("Alpha") in line:
            ????????????
I wrote this code and stopped here. Could anyone help?
Answer: Probably the easiest way to do this is to maintain a list of the last 100
lines you've read, then output them to your `result.txt` file if the current
line is `'Alpha'`:
limit = 100
prev_items = []

# Open file and iterate over lines.
with open('names.txt') as f:
    for line in f:
        # Add the current line (without its trailing newline) to the list.
        prev_items.append(line.rstrip('\n'))
        # Reduce the list to its newest elements.
        prev_items = prev_items[-limit:]
        # If the current line is 'Alpha', we don't need to read any more.
        if line.strip() == 'Alpha':
            break

# Append prev_items to the results file.
with open('results.txt', 'a') as f:
    f.write('\n'.join(prev_items) + '\n')
Or, if you're happy to use a collection other than `list`, use a
[`deque`](https://docs.python.org/2/library/collections.html#collections.deque):
from collections import deque

limit = 100
prev_items = deque(maxlen=limit)

# Open file and iterate over lines.
with open('names.txt') as f:
    for line in f:
        # Add the line to the deque; older entries fall off automatically.
        prev_items.append(line.rstrip('\n'))
        # If the current line is 'Alpha', we don't need to read any more.
        if line.strip() == 'Alpha':
            break

# Append prev_items to the results file.
with open('results.txt', 'a') as f:
    f.write('\n'.join(prev_items) + '\n')
|
Python gspread login error 10060
Question: I am attempting to log in to my Google account with gspread. However, it just
times out with a `Socket Errno 10060`. I have already activated POP and IMAP
access on my email.
import gspread
print 1
gc = gspread.Client(auth=('***@gmail.com', '*****'))
print 2
gc.login()
print 2
sht = gc.open_by_url('https://docs.google.com/spreadsheets/d/1XEThXRqWc_Vs4j_6oIuSPXoybj7aUp4h3g1bqPnBzRM/edit#gid=0')
print 3
val = sht.acell('B1').value
My error <http://tinypic.com/r/ws0ndh/8>
Thanks guys!
Answer: `Socket Errno 10060` means the host isn't responding to you at all. Are you
able to access the spreadsheet with your browser? This may be a network
settings issue, or you may be blocked (I've seen some reports of this online;
they may block you if you've been making lots of requests). It's hard to
tell, but it's not a fault in your code.
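As a quick way to separate a network problem from a gspread problem (assuming the `requests` package is available), you could check whether Google's servers respond at all:

import requests

# If this also times out, the issue is network/firewall related,
# not anything in the gspread code above.
resp = requests.get("https://www.google.com", timeout=10)
print resp.status_code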
|
Converting LinearSVC's decision function to probabilities (Scikit learn python )
Question: I use a linear SVM from scikit-learn (LinearSVC) for a binary classification
problem. I understand that LinearSVC can give me the predicted labels and the
decision scores, but I wanted probability estimates (confidence in the label).
I want to continue using LinearSVC because of speed (as compared to
sklearn.svm.SVC with a linear kernel). Is it reasonable to use a logistic
function to convert the decision scores to probabilities?
import sklearn.svm as suppmach
# Fit model:
svmmodel=suppmach.LinearSVC(penalty='l1',C=1)
predicted_test= svmmodel.predict(x_test)
predicted_test_scores= svmmodel.decision_function(x_test)
I want to check if it makes sense to obtain probability estimates simply as
[1 / (1 + exp(-x))], where x is the decision score.
Alternately, are there other options wrt classifiers that I can use to do this
efficiently?
Thanks.
Answer: If you want speed, then just _replace_ the SVM with
`sklearn.linear_model.LogisticRegression`. That uses the exact same training
algorithm as `LinearSVC`, but with log-loss instead of hinge loss.
Using [1 / (1 + exp(-x))] will produce probabilities, in a formal sense
(numbers between zero and one), but they won't adhere to any justifiable
probability model.
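A minimal sketch of that swap, assuming `x_train` and `y_train` hold your training data (the parameters below simply mirror the ones in the question and are not tuned):

import sklearn.linear_model as linmod

# Same linear model family as LinearSVC, but trained with log-loss,
# so predict_proba returns genuine probability estimates.
logregmodel = linmod.LogisticRegression(penalty='l1', C=1)
logregmodel.fit(x_train, y_train)

predicted_test = logregmodel.predict(x_test)
predicted_test_probs = logregmodel.predict_proba(x_test)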
|