Python (NumPy) arrays with varying lengths that cannot be predefined
Question: I'm currently working on a numerical Python code (from scratch) to solve the
following thought problem:
Imagine I have a 2D problem with a rigid ground and a flexible layer on top
of it, connected to the ground by equidistantly spaced springs. On top of
the flexible layer there is a cylinder, which is (if you like) glued to the
edge of the flexible layer. Now the cylinder is rolled horizontally in
numerous increments, causing the springs to open one after another, starting
from the edge.
Now my question concerns the following: as an effect of the balance between
the combined spring forces and the moment applied to the cylinder, a certain
number of springs will be open at equilibrium. This number of opened springs
is not known in advance due to the nonlinearity of the problem. Furthermore, I
do not wish to model the whole geometry, but rather only take into account the
relevant opened springs. During this process I would like, for example, to
store the resultant forces in every opened spring corresponding to a certain
moment increment.
* * *
Thus, in my opinion, the programming problem here is to:
* Create a Force `array` which contains for every increment a list of all the forces in the individual springs. The complexity emerges from the fact that this list does not have a predefined length nor is this length the same for every numerical increment. Note that in this case the number of increments is predefined through for example a `linspace`
* * *
I have been looking for a suitable Pythonic solution to this problem, but have
not found one yet. I think the NumPy library should offer a convenient
solution, but I lack the Python experience and did not find a nice example. I
am also open to alternative approaches, if in your opinion they are more
convenient for the setup described above.
Currently, I have the following code example to showcase my approach:
from numpy import array, zeros, linspace

area = 2.0
Forces = []  # Initialize complete Force array
for inc in range(1, 4):
    stresses = linspace(0., 10., inc)
    n = len(stresses)
    Forcej = zeros(n)  # Initialize Forces in current increment
    for i, stress in enumerate(stresses):
        Forcej[i] = area * stress
    print Forcej
    Forces.append(Forcej)
print Forces
print Forces
Which outputs:
[ 0.]
[ 0. 20.]
[ 0. 10. 20.]
[array([ 0.]), array([ 0., 20.]), array([ 0., 10., 20.])]
I suppose this is the desired outcome, but I'm thinking this is far from the
most elegant solution. Could anyone assist or comment on this example? Thanks
already for considering my question.
Answer: To be honest, I would recommend using a full array, so one column for every
spring. In combination with for example the numexpr
(<https://code.google.com/p/numexpr>) module, it is very easy and fast to
extract all "active" springs at each timestep.
The reason I believe a full array is the way to go is that it maps really
nicely onto how the underlying system handles memory access. So unless the
number of springs is huge (say >100000), I believe that dragging around a lot
of zeros will not degrade performance. Another option might be to look for
efficient sparse matrices. I know SciPy has support for them. I am not sure if
the matrix "profile" that you will get from your problem fits a sparse matrix
very well.
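For illustration, a minimal sketch of that full-array idea applied to the example from the question; the number of opened springs per increment is made up for the demo, and the boolean mask shows how to pull out only the "active" springs:

import numpy as np

n_incr, n_springs = 3, 5
area = 2.0
forces = np.zeros((n_incr, n_springs))        # one row per increment, one column per spring
active = np.zeros((n_incr, n_springs), bool)  # which springs are opened at each increment

for inc in range(n_incr):
    n_open = inc + 1                          # placeholder for the converged number of open springs
    stresses = np.linspace(0., 10., n_open)
    forces[inc, :n_open] = area * stresses
    active[inc, :n_open] = True

print(forces[2, active[2]])                   # forces in the opened springs of increment 2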
|
Use three different quotation marks in a bash command in Python scripts
Question: I want to have three different quotation marks in a Python script,
because I want to execute a Python command consisting of two statements on a
different computer. For example:
import commands
command = "ssh someothercomputer 'python -c `import psutil; print psutil.cpu_percent()`'"
output = commands.getstatusoutput(command)[1]
However, the backticks are not recognized as quotation marks. The error is the
following:
"Badly placed ()'s.\nArgument expected for the -c option\nusage: python [option] ... [-c cmd | -m mod | file | -] [arg] ...\nTry `python -h' for more information."
How can I get this to work?
Answer: Instead of using backticks, you can escape the inner quotation marks, to have
them in `command`. Like this:
command = "ssh someothercomputer 'python -c \"import psutil; print psutil.cpu_percent()\"'"
Executing this command with `commands` yields the wanted output (the current
CPU percentage on the other computer).
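As a general pattern, one can also let Python build the nested quoting instead of hand-escaping it; a sketch, assuming Python 2's `pipes` module:

import pipes

inner = 'import psutil; print psutil.cpu_percent()'
remote = 'python -c %s' % pipes.quote(inner)            # quote the -c payload
command = 'ssh someothercomputer %s' % pipes.quote(remote)  # quote the remote command
print(command)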
|
How Python threading Timer work internally?
Question: I want to know how Python's `threading.Timer` works.
In more detail: when I run a couple of `threading.Timer` instances, does each
one run a separate thread for counting the time and running the handler?
Or does one thread manage and count several timers together?
I am asking because my application needs to schedule many events, but
if `threading.Timer` runs a separate thread for each timer and I run
many timers, it may affect performance badly.
So I am wondering whether I have to implement a scheduler that runs in only
one thread, if separate timer threads have a big effect on performance.
Answer: The `threading.Timer` class is a subclass of `threading.Thread`, and basically it
just runs a separate thread in which it sleeps for the specified amount of
time and then runs the corresponding function.
It is definitely **not** an efficient way to schedule events. A better way is to
do the scheduling in a single thread by using a `Queue.PriorityQueue`, in which
you would put your events, where "priority" actually means "next fire date".
Similar to how cron works.
Or even better: use something that already exists, do not reinvent the wheel:
Cron, Celery, whatever...
A very simplified example of making a scheduler via `Queue.PriorityQueue`:
import time
from Queue import PriorityQueue

class Task(object):
    def __init__(self, fn, crontab):
        # TODO: it should be possible to pass args, kwargs
        # so that fn can be called with fn(*args, **kwargs)
        self.fn = fn
        self.crontab = crontab

    def get_next_fire_date(self):
        # TODO: evaluate next fire date based on self.crontab
        pass

class Scheduler(object):
    def __init__(self):
        self.event_queue = PriorityQueue()
        self.new_task = False

    def schedule_task(self, fn, crontab):
        # TODO: add scheduling language, crontab or something
        task = Task(fn, crontab)
        next_fire = task.get_next_fire_date()
        if next_fire:
            self.new_task = True
            self.event_queue.put((next_fire, task))

    def run(self):
        self.new_task = False
        # TODO: do we really want an infinite loop?
        while True:
            # TODO: actually we want .get() with timeout and to handle
            # the case when the queue is empty
            next_fire, task = self.event_queue.get()
            # incremental sleep so that we can check
            # if new tasks arrived in the meantime
            sleep_for = int(next_fire - time.time())
            for _ in xrange(sleep_for):
                time.sleep(1)
                if self.new_task:
                    self.new_task = False
                    self.event_queue.put((next_fire, task))
                    continue
            # TODO: run in separate thread?
            task.fn()
            time.sleep(1)
            next_fire = task.get_next_fire_date()
            if next_fire:
                self.event_queue.put((next_fire, task))

def test():
    return 'hello world'

sch = Scheduler()
sch.schedule_task(test, '5 * * * *')
sch.schedule_task(test, '0 22 * * 1-5')
sch.schedule_task(test, '1 1 * * *')
sch.run()
It's just an idea. You would have to properly implement both the `Task` and
`Scheduler` classes, i.e. the `get_next_fire_date` method plus some kind of
scheduling language (crontab?) and error handling. I still strongly suggest
using one of the existing libraries.
|
How To Compare Items In Two Lists Python 3.3
Question: I tried using `cmp(list1, list2)`, only to learn that it's no longer
supported in Python 3.3. I've tried many other complex approaches, but none have worked.
I have two lists of which both contain just words and I want it to check to
see how many words feature in both and return the number for how many.
Answer: You can find the length of the set intersection using `&` like this:
len(set(list1) & set(list2))
**Example:**
>>> len(set(['cat','dog','pup']) & set(['rat','cat','wolf']))
1
>>> set(['cat','dog','pup']) & set(['rat','cat','wolf'])
{'cat'}
Alternatively, if you don't want to use sets for some reason, you can always
use
[`collections.Counter`](https://docs.python.org/dev/library/collections.html#collections.Counter),
which supports most multiset operations:
>>> from collections import Counter
>>> print(list((Counter(['cat','dog','wolf']) & Counter(['pig','fish','cat'])).elements()))
['cat']
|
"maximum recursion depth exceeded" while compiling py file with py2exe
Question: py2exe worked quite well on the same py file several months ago. But when
I tried it again today it failed, reporting a "RuntimeError: maximum
recursion depth exceeded". I got an empty "dist" folder as a result. The py
file runs normally; it just can't be compiled. I guess there is something
wrong with the imported modules in the py file, but I cannot figure it out
exactly. Does anyone know a solution?
The beginning part of my py file is:
import xlrd
import wx
import wx.lib.filebrowsebutton as filebrowse
from scipy.optimize import fsolve
import math
import threading
from sympy import Symbol
from sympy import solve
And the last several lines of the cmd window are:
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 332, in _safe_import_hook
self.import_hook(name, caller, level=level)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 719, in import_hook
return Base.import_hook(self,name,caller,fromlist,level)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 137, in import_hook
m = self.load_tail(q, tail)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 214, in load_tail
m = self.import_module(head, mname, m)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 724, in import_module
r = Base.import_module(self,partnam,fqname,parent)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 284, in import_module
m = self.load_module(fqname, fp, pathname, stuff)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 730, in load_module
r = Base.load_module(self, fqname, fp, pathname, (suffix, mode, typ))
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 314, in load_module
self.scan_code(co, m)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 423, in scan_code
self._safe_import_hook(name, m, fromlist, level=level)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 332, in _safe_import_hook
self.import_hook(name, caller, level=level)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 719, in import_hook
return Base.import_hook(self,name,caller,fromlist,level)
File "D:\Python27\lib\site-packages\py2exe\mf.py", line 134, in import_hook
self.msg(3, "import_hook", name, caller, fromlist, level)
RuntimeError: maximum recursion depth exceeded
D:\Python27\py2exetemp>pause
Answer: I'd try to increase the recursion depth limit. Insert at the beginning of your
file:
import sys
sys.setrecursionlimit(5000)
|
How to solve: ImportError: No module named 'graphlab'?
Question: With "source activate graphlab" in the terminal I can start up graphlab.
I've created it like this: "conda create -n graphlab python=2.7 anaconda",
because using virtualenv with Anaconda is untested and not recommended
(according to the warning in the terminal, I don't know whether this really is
the case.).
After starting up graphlab the terminal shows:
discarding /Users/username/anaconda/bin from PATH
prepending /Users/username/anaconda/envs/graphlab/bin to PATH
But when I want to import graphlab in the Spyder IDE, it shows the following
error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/username/anaconda/lib/python3.4/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 580, in runfile
    execfile(filename, namespace)
  File "/Users/username/anaconda/lib/python3.4/site-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 48, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
  File "/Users/username/Documents/projectname/pythonfile.py", line 3, in <module>
    import graphlab as gl
ImportError: No module named 'graphlab'
How can I solve this? I am totally new to installing these things, so
hopefully someone can help me with an extensive step by step explanation.
Answer: The Spyder IDE seems to be configured to use Python 3.4 by default; that has to
be changed to 2.7. This can be seen from the interpreter paths in the error message.
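To confirm which interpreter Spyder is actually using, one quick check (a sketch) is to run this in Spyder's console:

import sys
print(sys.version)     # should report 2.7.x, the version of the graphlab env
print(sys.executable)  # should point into .../anaconda/envs/graphlab/bin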
|
Installed Spyder using pip under Canopy (OSx), how do i start Spyder?
Question: I am on a machine running Yosemite. I manage my Python environment with
Enthought Canopy. I'm trying to install Spyder, so I don't have to use the editor
that comes with Canopy.
I opened a Canopy terminal and used `pip install spyder` which went fine.
How do I actually start Spyder? When I use `pip list` I can see the Spyder
package.
spyder
spyder.py
python spyder.py
etc. do not work.
Thanks in advance.
Answer: `python -c "from spyderlib import start_app; start_app.main()"`
|
How to get raw bytes from GLib.GString in Python?
Question: I have an application written in Python using GTK3 through the GObject
introspection (Python 2.7 and PyGObject 3.14). I am trying to load a web page
using WebKit and access the contents of all the resources it loads. I'm able
to accomplish this by connecting to the [resource-load-finished](http://webkitgtk.org/reference/webkitgtk/stable/webkitgtk-webkitwebview.html#WebKitWebView-resource-load-finished) signal of the
WebKitWebView object I am using to load the page.
Within my signal handler I use the
[WebKitWebResource](http://webkitgtk.org/reference/webkitgtk/stable/WebKitWebResource.html)
object in the web_resource parameter to access the loaded data. Everything
works fine with the GLib.GString returned from get_data() when it does not
contain a NULL byte, I can access what I need using data.str. However when the
data does contain a NULL byte, which is often the case when the MIME type of
the loaded resource is an image, data.len is correct but data.str only
contains the data up to the first NULL byte. I can access the raw bytes by
calling data.free_to_bytes() which returns a GLib.GBytes instance, however
when the signal handler returns the application segfaults. I'm trying to
access all the data within the loaded resource.
I hope the following code helps demonstrate the issue.
from gi.repository import Gtk
from gi.repository import WebKit

def signal_resource_load_finished(webview, frame, resource):
    gstring = resource.get_data()
    print(resource.get_mime_type())
    desired_len = gstring.len
    # gstring.str is missing data because it returns the data up to the first NULL byte
    assert(len(gstring.str) == desired_len)  # this assertion fails
    # calling this causes a segfault after the handler returns, but the data is accessible from gbytes.get_data()
    #gbytes = gstring.free_to_bytes()
    #assert(len(gbytes.get_data()) == desired_len)  # this assertion succeeds before the segfault
    return

webview = WebKit.WebView()
webview.connect('resource-load-finished', signal_resource_load_finished)
webview.connect('load-finished', Gtk.main_quit)
# lol cat for demo purposes of a resource containing NULL bytes (mime type: image/png)
webview.load_uri('http://images5.fanpop.com/image/photos/30600000/-Magical-Kitty-lol-cats-30656645-1280-800.png')
Gtk.main()
Answer: You don't want to use `free_to_bytes` as this will not only give you the bytes
you want, but also release the string from memory without Python knowing about
it - which, as you discovered, crashes your program. Unfortunately there isn't
a corresponding `get_bytes` method as `GLib.String` wasn't really designed to
hold binary data.
In fact I'd consider it a mistake in the WebKit API that the resource payload
is only available as a `GLib.String`. They seem to have corrected this mistake
in WebKit2:
<http://webkitgtk.org/reference/webkit2gtk/stable/WebKitWebResource.html>
Consider switching to WebKit2 if you can (`from gi.repository import
WebKit2`).
|
Python string recursion
Question: I have to recursively replace characters in a string with new characters. I'm
a bit stuck right now, however. I cannot import anything, so here is my code:
EDIT: I forgot to mention that I'm not allowed to use any built in functions
besides len() and index/splice operators. This means I can't use 'in'. This is
a homework question, but I don't expect you guys to solve it for me, just to
point out why my code isn't working. Thanks!
def replace_char(astr, old_char, new_char):
    newstr = ""
    if astr == "":
        return newstr
    elif astr[0] == old_char:
        newstr += new_char
        return replace_char(astr[1:], old_char, new_char)
    else:
        newstr += astr[0]
        return replace_char(astr[1:], old_char, new_char)
Any ideas on why it isn't working? Thanks!
Answer: Here's what you're looking for:
def replace_char(astr, old_char, new_char):
    if astr == "":
        return astr
    elif astr[0] == old_char:
        return new_char + replace_char(astr[1:], old_char, new_char)
    else:
        return astr[0] + replace_char(astr[1:], old_char, new_char)
The only string you ever returned in your old code was an empty one:
newstr = ""
if astr == "":
return newstr
You need to be adding characters to your return value as you recurse through
the string.
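For example, a quick check of the corrected version:

print(replace_char("banana", "a", "o"))  # prints: bonono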
|
pyserial 2.7 and USB Relay module
Question: I've purchased two lctech-inc.com 011801 USB Relay Modules. I'm trying to
control them with python and pyserial. The module does show up as USB-SERIAL
CH340 (COM5). The support information says:
Communication baud rate: 9600 bps; protocol: start: 0xA0,
switch address: 0x01, operation data: 0x00 (off), 0x01 (on);
check code: on: A0 01 01 A2, off: A0 01 00 A1
I'm using the following python code to turn the relay on but it doesn't work:
import sys
import serial
portName = "COM5"
relayNum = "1"
relayCmd = "on"
#Open port for communication
serPort = serial.Serial(portName, 9600, timeout=1)
#Send the command
serPort.write("relay "+ str(relayCmd) +" "+ str(relayNum) + "\n\r")
print "Command sent..."
#Close the port
serPort.close()
As long as I use the correct COM port, COM5, I do not get any errors.
Any suggestions? Any help would be greatly appreciated. TIA
Answer: It looks like you need to send the byte 0x01 to turn the relay on and 0x00 to
turn it off, not the strings "on" and "off".
Try `serPort.write("\x01")` to turn the relay on.
Edit: It also looks like you might need to send the start byte 0xA0 first.
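Putting the question's protocol description and the edit together, a minimal sketch of sending the full four-byte frames (untested against the actual hardware; assumes pyserial on Python 2, where `write()` takes a byte string):

import serial

serPort = serial.Serial("COM5", 9600, timeout=1)
# frame layout from the vendor docs: start 0xA0, switch address 0x01,
# operation data 0x01 (on) / 0x00 (off), then the checksum byte
serPort.write("\xA0\x01\x01\xA2")   # relay on
#serPort.write("\xA0\x01\x00\xA1")  # relay off
serPort.close()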
|
Converting a Python List into a Numpy Structured array?
Question: I need to convert a Python list with mixed type (integers and floats) to a
Numpy structured array with certain column names.
I have tried the following code, but for some reason I can't see, it doesn't
work.
import numpy as np

lmtype = [('el','intp',1), ('n1','intp',1), ('n2','intp',1), ('n3','float64',1),
          ('n4','float64',1), ('n5','float64',1), ('n6','float64',1), ('n7','float64',1),
          ('n8','float64',1), ('n9','float64',1), ('n10','float64',1), ('n11','float64',1)]
LAMI = np.zeros(5, dtype=lmtype)
linea = ['1', '2', '3', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0', '0.0']
for idx, la in enumerate(LAMI):
    lineanum = ([int(j) for j in linea[0:3]] + [float(i) for i in linea[3:12]])
    print lineanum
    LAMI[idx] = np.array(lineanum)
The code runs, but look what LAMI has inside:
>>> LAMI[0]
(0, 1072693248, 0, 5.304989477e-315, 5.307579804e-315, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
Answer: Try:
LAMI[idx] = tuple( lineanum )
Tuples are the normal way to assign to a `record` (row) of a structured array.
They can hold mixed types, just like the structured element.
`np.array(lineanum)` is all `float`. `LAMI[idx] = np.array( lineanum )` just
copies the buffer full of floats to a segment of the `LAMI` buffer. I'm a
little surprised that it permits this copy; it must be doing some sort of
'copy only as much as fits'. `LAMI.itemsize` is `84`, while the total length
of `np.array(lineanum)` is 12*8=96.

LAMI[0] = np.array(lineanum, dtype=int)  # or better
LAMI[0] = np.array(lineanum[:3], dtype=int)

would also work, since the only nonzero values are those first 3, which are
supposed to be ints.
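Applied to the loop from the question, the change is just one line (a sketch reusing the question's variables):

for idx in range(len(LAMI)):
    lineanum = [int(j) for j in linea[0:3]] + [float(i) for i in linea[3:12]]
    LAMI[idx] = tuple(lineanum)  # tuple assignment fills each field with its own type
print LAMI[0]
# (1, 2, 3, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)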
|
Interact with a webpage containing dynamic list using Python
Question: I am trying to write a Python program to automatically add tasks to todoist. I
am currently trying to use mechanize but not sure if this is what I need
because it is set up to interact with forms and I'm not sure if todoist is
using HTML forms for input. This is my code so far:
import mechanize;
import re;
br = mechanize.Browser();
br.open("http://todoist.com/");
assert br.viewing_html();
print br.title();
print br.forms();
Which prints
To-do list and task manager. Free, easy, online and mobile: Todoist
<generator object __call__ at 0x02D69288>
So it is successfully connecting to todoist but I am not sure how to access
the input forms to actually have it put tasks in? Also not sure why br.forms()
prints the reference rather than the value (which is an empty list)?
Answer: There are a few modules already available that allow me to interface with the
todoist API. This one seems to be working well in case anyone comes searching
with the same question: <https://github.com/Garee/pytodoist>
|
paramiko.ssh_exception.SSHException: Expecting packet from (31,), got 94
Question: When I use pysftp-0.2.8 to send a large file to my SFTP service, it always
fails with the same error: paramiko.ssh_exception.SSHException: Expecting packet from
(31,), got 94.
The file is about 1.5G; it breaks when the transfer reaches about 500M. Code here:
import pysftp
upftp=FTP(host=ftp_ip, user=ftp_name, passwd=ftp_passwd, acct=ftp_port, timeout=None)
...
try:
    upftp.storbinary('STOR %s' % obj[2], fp, 8192, self.callpecent)
except Exception as error:
    fp.close()
    self.endit(upftp, 1, '%s,%s' % (obj[2], error), '%s,%s' % (obj[2], error))
The error message is here:
2015-03-24 09:43:05 DEBUG - Rekeying (hit 32729 packets, 536900100 bytes sent)
2015-03-24 09:43:05 DEBUG - Ciphers agreed: local=aes128-ctr, remote=aes128-ctr
2015-03-24 09:43:05 DEBUG - using kex diffie-hellman-group1-sha1; server key type ssh-rsa; cipher: local aes128-ctr, remote aes128-ctr; mac: local hmac-sha1, remote hmac-sha1; compression: local none, remote none
2015-03-24 09:43:05 ERROR - Exception: Expecting packet from (31,), got 94
2015-03-24 09:43:05 ERROR - Traceback (most recent call last):
2015-03-24 09:43:05 ERROR - File "D:\Python34\lib\site-packages\paramiko-1.14.0-py3.4.egg\paramiko\transport.py", line 1435, in run
2015-03-24 09:43:05 ERROR - raise SSHException('Expecting packet from %r, got %d' % (self._expected_packet, ptype))
2015-03-24 09:43:05 ERROR - paramiko.ssh_exception.SSHException: Expecting packet from (31,), got 94
2015-03-24 09:43:05 ERROR -
2015-03-24 09:43:05 DEBUG - Dropping user packet because connection is dead.
2015-03-24 09:43:05 DEBUG - [chan 1] close(b'd40b000000000000')
2015-03-24 09:43:05 INFO - [chan 1] sftp session closed.
Thank you for your answer !
Answer: I found something that may be useful :)
In paramiko/packet.py:
REKEY_PACKETS = pow(2, 29)
REKEY_BYTES = pow(2, 29)
REKEY_PACKETS_OVERFLOW_MAX = pow(2, 29) # Allow receiving this many packets after a re-key request before terminating
REKEY_BYTES_OVERFLOW_MAX = pow(2, 29) # Allow receiving this many bytes after a re-key request before terminating
and
def _trigger_rekey(self):
    # outside code should check for this flag
    self.__need_rekey = True

When the re-key kicks in, it goes wrong.
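A commonly suggested workaround (not verified here) is to raise those thresholds on the live transport before the transfer, so a 1.5G upload never triggers a re-key; a sketch using plain paramiko, with the hostname and credentials as placeholders:

import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="user", password="secret")
# bump the re-key thresholds (class attributes shadowed on this instance)
transport.packetizer.REKEY_BYTES = pow(2, 40)    # ~1 TB instead of 512 MB
transport.packetizer.REKEY_PACKETS = pow(2, 40)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.put("bigfile.bin", "/upload/bigfile.bin")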
|
IronPython attribute is not callable
Question: I have a problem with the following IronPython attribute (`RouteAttribute`):
import clr
import System
from System.Web.Http import HttpGetAttribute, ApiController, RouteAttribute
class DemoController(ApiController):
@RouteAttribute("~/api/test")
def test(self):
return "Hallo Welt!"
class DemoControllerClr:
def getType(self):
return clr.GetClrType(DemoController)
If I create an instance of the DemoController class, I always get the error
message `RouteAttribute is not callable`. I do not create an instance of
`DemoController` on my own; the ASP.NET Web API 2 tries to do this.
Maybe someone has some experience with attributes in IronPython.
**EDIT**
My aim is to mix C# and IronPython API controllers. I register the controllers
like this (the C# ones work):
controller = new Dictionary<string, System.Web.Http.Controllers.HttpControllerDescriptor>();
controller.Add("General", new System.Web.Http.Controllers.HttpControllerDescriptor(config, "General", typeof(Controller.GeneralController)));
foreach (DBScript script in dbContext.LoadData<DBScript>(optWherePart: "ScriptContentType = 1"))
{
    System.Diagnostics.Debugger.Launch();
    var sc = ScriptManager.Singleton.CreateScript(script.ScriptName, ScriptLanguage.IronPython);
    var clrInst = sc.GetClassInstance(script.ScriptName + "Clr");
    controller.Add(script.ScriptName.Replace("Controller", ""), new System.Web.Http.Controllers.HttpControllerDescriptor(config,
        script.ScriptName.Replace("Controller", ""), clrInst.getType()));
}
**EDIT2**
I'm using the newest release of IronPython (2.7.5).
Thank you!
Answer: This did not work, because the dependency injection mechanism does not work
together with IronPython, due to the DLR. So it just doesn't work.
|
Append object to end of list
Question: I'm using Python with junit_xml to parse a logfile and produce XML output. My
logfile looks like this:
/path/to/app1,app1,success,[email protected],app1_log.log
/path/to/app2,app2,fail,[email protected],app1_log.log
I am able to append multiple TestCase objects to test_cases with the
following code:

test_cases = [TestCase('app1'), TestCase('app2')]

What I need is to go through the logfile line by line and add testresult[0] to
the test_cases list.
from junit_xml import TestSuite, TestCase

test_cases = []
lines = open('testresults.log').readlines()
for line in lines:
    testresult = string.split(string.strip(line), ',')
    test_cases.append(TestCase(testresult[0]))
ts = TestSuite("my test suite", test_cases)
The line-parsing part works fine, but I can't seem to add multiple TestCase
objects to the test_cases list.
Answer: Changing your code to the following seems to work:

from junit_xml import TestSuite, TestCase

test_cases = []
lines = open('testresults.log').readlines()
for line in lines:
    testresult = line.split(",")
    test_cases.append(testresult[0])
print test_cases

$ python script.py
['/path/to/app1', '/path/to/app2']
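If the goal is still a list of `TestCase` objects, as in the question, the same loop works with the constructor kept in place (a sketch):

from junit_xml import TestSuite, TestCase

test_cases = []
for line in open('testresults.log'):
    fields = line.strip().split(',')
    test_cases.append(TestCase(fields[0]))  # one TestCase per log line
ts = TestSuite("my test suite", test_cases)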
|
Solving a Tricky Integral on Python
Question: I'm currently working on solving a solid angle calculation for my physics
project. My Python skills are basic and I need some hints/tips on how to go
about solving this equation, which I obtained from a research paper, equation
9b in <http://ac.els-cdn.com/S0969804306002090/1-s2.0-S0969804306002090-main.pdf?_tid=7b61d61e-d162-11e4-9032-00000aab0f6c&acdnat=1427118343_91eaa37dbd9242927e86c41952074750>.
The integral is:

arccos[ (p^2 - r^2 + h^2 * tan^2(x)) / (2 * p * h * tan(x)) ] * sin(x) dx

with h = 68, r = 24.8 and p = 40.2,
from x = 12.76 degrees to x = 43.71 degrees.
Any help would be appreciated, thanks.
Answer: There is a built-in module `math`, which should be imported like this:

>>> import math

`math.acos()` is arccos and `**` is exponentiation. There are also `math.tan()`
and `math.sin()`. You can use `dir(math)` to find more functions in the math
module, and the `help()` function to see their uses. Try to carry on with this.
It would be fun.
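If SciPy is available, `scipy.integrate.quad` does the numerical integration directly; a sketch using the grouping reconstructed above (note the given limits are exactly where the arccos argument reaches 1, so the integrand vanishes at both endpoints; the clamp guards against floating-point overshoot there):

import math
from scipy.integrate import quad

h, r, p = 68.0, 24.8, 40.2

def integrand(x):
    arg = (p**2 - r**2 + h**2 * math.tan(x)**2) / (2.0 * p * h * math.tan(x))
    arg = min(1.0, max(-1.0, arg))  # keep acos in its domain near the limits
    return math.acos(arg) * math.sin(x)

result, error = quad(integrand, math.radians(12.76), math.radians(43.71))
print(result)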
|
getElementsByTagName() in Python's xml.dom.minidom is not working
Question: I am parsing an output XML file generated by gtest. I want to find the
result of each test case. A test case fails only when the "testcase" has a
"failure" element; otherwise the test case passes. But I could not access the
element.
My XML file:
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="11" failures="0" disabled="0" errors="0" timestamp="2015-03-23T17:29:43" time="1.309" name="AllTests">
  <testsuite name="AAA" tests="4" failures="0" disabled="0" errors="0" time="0.008">
    <testcase name="BBBB" status="run" time="0.002" classname="AAA" />
    <failure message="Value of: add(1, 1)
  Actual: 3
Expected: 2" type="" />
    <testcase name="CCC" status="run" time="0.002" classname="AAA" />
    <testcase name="DDD" status="run" time="0.002" classname="AAA" />
    <testcase name="FFF" status="run" time="0.002" classname="AAA" />
  </testsuite>
</testsuites>
My Python file is:
from xlrd import open_workbook
from xml.dom.minidom import parse
import xml.dom.minidom

# Open XML document using minidom parser
DOMTree = xml.dom.minidom.parse("output.xml")
testsuites = DOMTree.documentElement
testCaseCollection = testsuites.getElementsByTagName("testcase")
testCasefailure = testsuites.getElementsByTagName("failure")
OutputXLS = open_workbook('output.xls')
for testCase in testCaseCollection:
    #print testCase.firstChild
    if testsuites.getElementsByTagName("failure"):
        print testCase.getAttribute("name"), " --> ", "FAIL"
    else:
        print testCase.getAttribute("name"), " --> ", "PASS"
And the output is:
BBB --> PASS
CCC --> PASS
DDD --> PASS
FFF --> PASS
Though test case "BBB" failed, as there is a "failure" element in the XML, the
result shows PASS. Kindly help me out with this.
Answer:

from xlrd import open_workbook
from xml.dom.minidom import parse

# Open XML document using minidom parser
DOMTree = parse("output.xml")
testsuites = DOMTree.documentElement
testCaseCollection = testsuites.getElementsByTagName("testcase")
OutputXLS = open_workbook('output.xls')
for testCase in testCaseCollection:
    # nextSibling is the whitespace text node; calling it twice reaches the next element
    sibNode = testCase.nextSibling.nextSibling
    if sibNode and sibNode.nodeName == 'failure':
        print testCase.getAttribute("name"), " --> ", "FAIL"
    else:
        print testCase.getAttribute("name"), " --> ", "PASS"
|
Length of a line in a file in Python is always the number of characters + 1
Question: I am using python 2.7.x
I have a file page06.txt
1
22
333
4444
55555
666666
7777777
88888888
I am reading this file through a program:
from __future__ import print_function

fileobj = open("page06.txt")
i = 0
while True:
    F = fileobj.readline()
    print("::::", len(F))
    if len(F) <= 0:
        break
    print(F, end="")
fileobj.close()
The first line contains 1, so as per my understanding len(F) should
return 1, but it returns 2. So why is the length of every line in the file
always the number of characters + 1?
Answer: You also counted the newline character. Use `F = fileobj.readline().strip()` to
get the line without the trailing newline.
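For example:

>>> line = "1\n"
>>> len(line)
2
>>> len(line.strip())
1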
|
Converting a datetime object to an integer python
Question: I would like to convert a datetime object to an int in python:
import datetime
time_entered = datetime.datetime.strptime(raw_input("Time1: "), "%H%M")
time_left = datetime.datetime.strptime(raw_input("Time2"), "%H%M")
time_taken = time_left - time_entered
int(time_taken)
When I run that code I get the following error:
> TypeError: int() argument must be a string or a number, not
> 'datetime.timedelta'
Answer: You can convert a datetime object to a timetuple, and then use the time.mktime
function:

import time
from datetime import datetime

timestamp = int(time.mktime(datetime.now().timetuple()))
Then you can convert it back:
now = datetime.fromtimestamp(timestamp)
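For the timedelta from the question itself, `timedelta.total_seconds()` (available since Python 2.7) is the direct route; a sketch with example inputs:

from datetime import datetime

time_entered = datetime.strptime("0930", "%H%M")
time_left = datetime.strptime("1445", "%H%M")
time_taken = time_left - time_entered
print(int(time_taken.total_seconds()))        # 18900 seconds
print(int(time_taken.total_seconds() // 60))  # 315 minutes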
|
Naming a folder with the name of the first file in the same folder python
Question: I have code that reads a CSV file, creates multiple folders, and puts a
determined number of files with a certain name in each folder. The folders
need to be named after the first file that is placed inside each folder.
import os
import csv
from itertools import islice

with open("file.dat", 'rb') as f:
    reader = csv.reader(f)
    n = 1
    for row in islice(reader, 4, None):  # Skip the first 4 rows.
        if row[8] != "0":
            A = 1  # A = 1 => row[8] != 0, A = 0 => row[8] == 0.
            row[0] = row[0].replace('-', '')
            row[0] = row[0].replace(' ', '-')
            row[0] = row[0].replace(':', '')  # Convert 2015-02-17 01:20:00 to 20150217-012000.
            Time = row[0]
            #path = Time + "." + "%d" % n
            path = "Event" + "%d" % n
            if not os.path.exists(path):
                os.makedirs(path, 0777)
            Events = Time + ".txt"  # Puts 20150217-012000.dat or .txt.
            with open(os.path.join(path, Events), 'wb') as outfile:
                writer = csv.writer(outfile, delimiter=" ")
                for i in range(35):
                    writer.writerow(row[(34*i)+11:(34*i)+45])
        elif row[8] == "0":
            A = 0
            n += 1
            continue
This code creates 4 folders:
> Event2, Event8, Event9 and Event18
The number of folders is what I want, but I need them to have another name. I
want the name to be the value of row[0], for example:
> 20150217-002000, 20150217-008000, 20150217-009000, 20150217-018000
I tried to change the name of path to:
> Time + "." + "%d" % n
But that creates a folder for each file, i.e. more than 4 folders.
What can I do? Please help.
These are a few of the many lines in the file; occasionally there is an integer
between all those zeros:
"2015-02-17 08:55:00",7617,"0",1,0,19691,61,0.0447,0.0007,2.763,0.647,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:00:00",7585,"0",1,0,19691,61,0.0447,0.0007,2.763,0.647,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:05:00",7586,"0",1,0,20000,23,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:10:00",7596,"0",1,0,20000,61,0.0326,0.0005,2.956,0.665,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:15:00",7597,"0",1,0,20000,23,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:20:00",7598,"0",1,0,20000,61,0.0346,0.0006,3.465,0.734,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:25:00",7596,"0",1,0,20000,61,0.0326,0.0005,2.956,0.665,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:30:00",7597,"0",1,0,20000,23,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
"2015-02-17 09:35:00",7598,"0",1,0,20000,61,0.0346,0.0006,3.465,0.734,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
Answer: Sorry, this still isn't an answer, merely a way to get clarification about
what you want to accomplish. Below is the output from two versions of your
code using the data you added: one defining the path as shown in your
question, and another showing what it does with the modification you tried but
didn't like.
Please describe -- in your question -- exactly, what you would like to happen
processing exactly the input data shown.
Output:
======================================================
Current code
path = "Event" + "%d" % n
creating directory: Event2
creating file: Event2\20150217-092000.txt
creating file: Event2\20150217-092500.txt
creating directory: Event3
creating file: Event3\20150217-093500.txt
======================================================
Your attempt
path = Time + "." + "%d" % n
creating directory: 20150217-092000.2
creating file: 20150217-092000.2\20150217-092000.txt
creating directory: 20150217-092500.2
creating file: 20150217-092500.2\20150217-092500.txt
creating directory: 20150217-093500.3
creating file: 20150217-093500.3\20150217-093500.txt
|
pyspark getattr() behavior
Question: Noticed some strange behavior with PySpark, would appreciate any insights.
Suppose I have an RDD composed of simple elements
from collections import namedtuple
Animal = namedtuple('Animal', ('name','age'))
a = Animal('jeff',3)
b = Animal('mike',5)
c = Animal('cathy',5)
rdd=sc.parallelize([a,b,c])
Now I'm interested in capturing, in a simple class, the different attributes
of that RDD, using for example `rdd.map(lambda s: getattr(s,'name'))` to
extract the `name` attribute from each element.
So objects of this class
class simple():
    def __init__(self, name):
        self.name = name

    def get_value(self):
        self.value = rdd.map(lambda s: getattr(s, self.name)).collect()
will set their `name` and fetch the corresponding `values` from the RDD.
theAges = simple('age')
theAges.get_value()
However, this encounters an error that I think centers on the `self.name` in
the `lambda` expression. This second class works fine
class simple2():
    def __init__(self, name):
        self.name = name

    def get_value(self):
        n = self.name
        self.value = rdd.map(lambda s: getattr(s, n)).collect()
where all I have added is a preceding call `n=self.name` and passed `n` into
the `lambda` instead of `self.name`.
So is the problem that we are unable to evaluate `self.name` within the
`lambda`? I've created a similar situation (with `self.name` in a `lambda`) in
pure python and there's no errors, so I think this is Spark specific. Thanks
for your thoughts.
Answer: This is due to pyspark being unable to create a closure over the class
instance. Assigning `n` in the `get_value` scope allows Spark to ship off the
pickled function including what amounts to an alias to the object attribute.
So far, it seems the solution is to just assign class attributes in the
function scope (but don't count on them changing!)
|
string to date in python with the greatest possible year when dow is given
Question: Given:
from datetime import datetime
date = datetime.strptime('Wed 12 Nov', '%a %d %b')
It returns `datetime.datetime(1900, 11, 12, 0, 0)`. What might be the best way
to get the greatest past year? In this case it should be 2014 instead of 1900.
Any help will be appreciated.
Answer: You can get the year from `datetime.now()` and subtract 1:
from datetime import datetime
date = datetime.strptime('Wed 12 Nov {}'.format(datetime.now().year-1), '%a %d %b %Y')
This will not work for Feb 29.
There was actually a bug in the first implementation: it was starting on a
Monday, because 12 November 1900 was a Monday:
from datetime import datetime, timedelta

dte = 'Wed 12 Nov'
start = datetime.strptime(dte, "%a %d %b")
greatest = None
while start <= datetime.now():
start += timedelta(days=1)
if start.strftime("%a %d %b") == dte:
greatest = start
print(greatest)
2014-11-12 00:00:00
There is also a [monthdelta](https://pythonhosted.org/MonthDelta/) package
that you can use to increment by month:
from datetime import datetime
from monthdelta import monthdelta
dte = 'Wed 12 Nov'
start = datetime.strptime('Wed 12 Nov', "%a %d %b")
greatest = None
while start <= datetime.now():
start += monthdelta(1)
if start.strftime("%a %d %b") == dte:
greatest = start
print(greatest)
You can see incrementing by months is much more efficient:
In [1]: from datetime import datetime, timedelta
In [2]: %%timeit
...: dte = 'Wed 12 Nov'
...: start = datetime.strptime('Wed 12 Nov', "%a %d %b")
...: greatest = None
...: while start <= datetime.now():
...: start += timedelta(days=1)
...: if start.strftime("%a %d %b") == dte:
...: greatest = start
...:
1 loops, best of 3: 382 ms per loop
In [3]: from datetime import datetime
In [4]: from monthdelta import monthdelta
In [5]: %%timeit
...: dte = 'Wed 12 Nov'
...: start = datetime.strptime('Wed 12 Nov', "%a %d %b")
...: greatest = None
...: while start <= datetime.now():
...: start += monthdelta(1)
...: if start.strftime("%a %d %b") == dte:
...: greatest = start
...:
100 loops, best of 3: 18.7 ms per loop
Both return pretty quick but if you had many calls to the method then the
monthly increase is a better option. We could also add 30 days and then set
the day to 12, there may be bugs as I have not overly tested it:
def match_date(abb_wk_dy, day_date, abb_mon):
dte = "{} {} {}".format(abb_wk_dy.capitalize(), day_date, abb_mon.capitalize())
start = datetime.strptime(dte, "%a %d %b")
greatest = None
while start <= datetime.now():
start += timedelta(days=30)
start = start.strptime("{} {} {}".format(start.year, start.month, day_date), "%Y %m %d")
if start.strftime("%a %d %b") == dte:
greatest = start
return greatest
The last code runs pretty efficiently:
In [12]: timeit match_date("wed","12","nov")
10 loops, best of 3: 34.7 ms per loop
If you only want the year then return `greatest.year`.
On testing the above code fails for leap years so we need to catch that, we
can also just increase the year by 1 each time:
def match_date(abb_wk_dy, day_date, abb_mon):
wkd, dd, ab = abb_wk_dy.capitalize(), day_date, abb_mon.capitalize()
match = "{} {} {}".format(wkd, dd, ab)
try:
dte = "{} {} {} {}".format(1900, wkd, dd, ab)
start = datetime.strptime(dte, "%Y %a %d %b")
except ValueError:
# first leap year since 1900
dte = "{} {} {} {}".format(1904, wkd, dd, ab)
start = datetime.strptime(dte, "%Y %a %d %b")
day, mon = start.day, start.month
greatest = None
while start <= datetime.now():
try:
start = start.strptime("{} {} {}".format(start.year + 1, mon, day), "%Y %m %d")
except ValueError:
start = start.strptime("{} {} {}".format(start.year + 1, 1, 1), "%Y %m %d")
continue
if start.strftime("%a %d %b") == match:
greatest = start
return greatest.year if greatest else "No match"
Which runs in:
In [27]: timeit match_date("Wed","12","Nov")
100 loops, best of 3: 2.63 ms per loop
You would also need to validate that no day > 31 is ever entered and that the
month and day combinations are valid, which could be achieved using
`calendar.monthrange` or a dict mapping each month name to its maximum day.
|
python - create json file from input matrix by grouping rows
Question: Given the matrix below, I'd like to create a JSON output that iterates
through the matrix rows and appends items (col 1) when the group (col 2) repeats:
matrix = [
['JAN', '1', 'abc'],
['FEB', '7', 'xyz'],
['FEB', '4', 'abc'],
['MAR', '3', 'xyz'],
['DEC','12', 'xyz']
]
How can the following output/JSON file be created?
{
"month": ["JAN","FEB","FEB","MAR","DEC"],
"items": [
{ "group":"abc",
"data":[1,4]
},
{ "group":"xyz",
"data":[7,3,12]
}
]
}
I've tried transposing the matrix/columns and constructing a dictionary where I
append items in a for loop, but I am missing something and it's getting messy.
There should be a simpler way.
Answer: This program:
from itertools import groupby
import json
matrix = [
['JAN', '1', 'abc'],
['FEB', '7', 'xyz'],
['FEB', '4', 'abc'],
['MAR', '3', 'xyz'],
['DEC', '12', 'xyz']
]
months = [x[0] for x in matrix]
items = [[x[2], int(x[1])] for x in matrix]
items = sorted(items, key=lambda x: x[0])
items = groupby(items, lambda x: x[0])
items = [{"group": k, "data": [x[1] for x in g]} for k, g in items]
result = {
"month": months,
"items": items
}
print (json.dumps(result, sort_keys=True, indent=4, separators=(',', ': ')))
produces this output:
{
"items": [
{
"data": [
1,
4
],
"group": "abc"
},
{
"data": [
7,
3,
12
],
"group": "xyz"
}
],
"month": [
"JAN",
"FEB",
"FEB",
"MAR",
"DEC"
]
}
|
How to access class variables outside a class in python
Question: I am new to python and this is my first program in python. I want to
know how to access class variables outside a class. I have code which throws an error:
from xxxxxxx import Products
class AccessKey(object):
def key(self):
self.products = Products(
api_key = "xxxxxxxxxxxxxxxxxxxxxx",
api_secret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
)
class Data(AccessKey):
def Print(self):
products.products_field( "search", "Samsung Galaxy" )
results = products.get_products()
print "Results of query:\n", results
data = Data()
data.Print()
The above program throws following error
Traceback (most recent call last):
File "framework.py", line 10, in <module>
class Data(AccessKey):
File "framework.py", line 13, in Data
results = products.get_products()
NameError: name 'products' is not defined
Answer: First of all, you need to refer to the products field as **self.products** (etc.).
It looks like "products" is not necessarily instantiated before you call it.
If you want to make sure it's instantiated, then you need to have products be
set in the constructor of the parent class (AccessKey)
A simplified example would be:
class A (object):
def __init__ (self):
self.x = 1
class B (A):
def get (self):
return self.x
b = B ()
print (b.get ())
Basically, you would have to add the following constructor to your first class
class AccessKey(object):
def __init__(self):
self.products = Products (X, Y) # or whatever you want to initialize it to
# the rest of your code below
Or, better yet, create a set_product function:
# inside of the parent class
def set_product (self, X, Y):
try:
self.products.product_field (X, Y)
except NameError:
self.products = Products(X, Y)
|
Python- Importing a dat file into an array using numpy
Question: Hey guys, I'm trying to import a text file into an array using numpy, but it
looks like it was imported with the rows as columns and vice versa. Am I
formatting the array wrong, or is that what happened?
I added a picture below:

Answer: It's because you set `unpack=True`; unpack transposes your array.
From the numpy documentation:
unpack : bool, optional
If True, the returned array is transposed, so that arguments may be
unpacked using x, y, z = loadtxt(...). When used with a record data-type,
arrays are returned for each field. Default is False.`
If you set it to False, it won't transpose the array.
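For example (a quick sketch, assuming a whitespace-delimited file `data.txt` with two numeric columns):

    import numpy as np

    arr = np.loadtxt("data.txt")                # shape (n_rows, n_cols)
    x, y = np.loadtxt("data.txt", unpack=True)  # transposed: one array per column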
|
How to test cmd-based programs?
Question: Hello, I wrote a program that uses the
[cmd](https://docs.python.org/2/library/cmd.html) module. (The Cmd class
provides a simple framework for writing line-oriented command interpreters.)
My program is a CLI to an API.
My problem is that I can't test it.
I can run:
# ./cli
CLI Interface
------------
Help may be requested at any point in a command by entering
a question mark '?'. If nothing matches, the help list will
be empty and you must backup until entering a '?' shows the
available options.
Two styles of help are provided:
1. Full help is available when you are ready to enter a
command argument (e.g. 'show ?') and describes each possible
argument.
2. Partial help is provided when an abbreviated argument is entered
and you want to know what arguments match the input
(e.g. 'show pr?'.)
#role
Current Roles
rde:
- base functionality
test:
- Test
#quit
Exiting...
If I wrote test like:
from cli import Cli
class TestCliClass(unittest.TestCase):
def setUp(self):
self.cmdLine = Cli()
def test_role(self):
self.assertEqual("",self.cmdLine.do_role())
#a=self.cmdLine.do_role()
#print a
if __name__ == '__main__':
unittest.main()
It's output will be:
test_cli.py
-----------------------------
current logs in /var/log/test_rdepyui.log
-----------------------------
Current Roles
rde:
- base functionality
test:
- Test
F
======================================================================
FAIL: test_role (__main__.TestCliClass)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tests/test_cli.py", line 23, in test_role
self.assertEqual("",self.cmdLine.do_role())
AssertionError: '' != None
----------------------------------------------------------------------
Ran 1 test in 0.040s
FAILED (failures=1)
import sys
import os
from cmd import Cmd
USING_READLINE = True
class Cli(Cmd,object):
def __init__(self):
Cmd.__init__(self)
if not USING_READLINE:
self.completekey = None
self._hist=[]
self.prompt = "#"
self.intro = """ CLI Interface """
def default(self, line):
cmd, arg, line = self.parseline(line)
cmds = self.completenames(cmd)
num_cmds = len(cmds)
if num_cmds == 1:
getattr(self, 'do_'+cmds[0])(arg)
elif num_cmds > 1:
sys.stdout.write('%% Ambiguous command:\t"%s"\n' % cmd)
else:
sys.stdout.write('% Unrecognized command\n')
def emptyline(self):
pass
def do_role(self,args=None):
try:
if args is None or args == 'show' or args.startswith('show') or args =='':
roles = self.rderole.getRole()
print "Current Roles"
output=""
#max_l=0
for role in roles:
role_str="%s:" % (role)
output +=role_str+"\n"
#if len(role_str)>max_l:
# max_l=len(role_str)
description=""
for subroles in roles[role]:
print subroles
if self.rderole.PLAYBOOK_DESCRIPTION in subroles:
description=subroles[self.rderole.PLAYBOOK_DESCRIPTION]
subrole_str=" - %s" % description
#if len(subrole_str)>max_l:
# max_l=len(subrole_str)
output +=subrole_str+"\n"
#print subrole_str
#subrole_str.ljust(len(role_str))
#print subrole_str
print output.strip()
elif args == 'help' :
Cmd.do_help(self, "role")
except Exception as e:
print "<ERROR>Can't show role: %s" % e
if __name__ == '__main__':
cmdLine = Cli()
cmdLine.cmdloop()
Answer: You can use the Python [unittest](https://docs.python.org/3.4/library/unittest.html)
package:
import unittest
from mymodule import Cli
class MyTest(unittest.TestCase):
def test_cli(self):
cli = Cli()
cli.do_foofail() # foofail must return something sensible which you can check with self.assertXXX() methods
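Since `do_role` prints its result rather than returning it, asserting on the return value will always compare against `None`. One rough alternative (a sketch, assuming Python 2 as in the question and a fully set-up `Cli` instance -- the snippet references `self.rderole`, whose initialization isn't shown) is to capture stdout and assert on the printed text:

    import sys
    import unittest
    from StringIO import StringIO

    from cli import Cli

    class TestCliOutput(unittest.TestCase):
        def test_role_prints_roles(self):
            cli = Cli()
            captured, old_stdout = StringIO(), sys.stdout
            sys.stdout = captured          # redirect print statements
            try:
                cli.do_role()
            finally:
                sys.stdout = old_stdout    # always restore stdout
            self.assertIn("Current Roles", captured.getvalue())

    if __name__ == '__main__':
        unittest.main()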
|
Sublime Plugin: How can I import wx?
Question: I am having trouble making Sublime Text 3 plugins. I just installed wxPython
with Python 2.7 on my Macintosh.
In the terminal, my Mac can find wxPython (`import wx` works). But the Sublime
Text plugin source cannot import wxPython.
You can check out the capture below.

How can I fix this problem?
Thank you for reading my unpolished English.
Answer: Plugins are executed using Sublime's internal Python interpreter, not any
version of Python installed on your computer. Nearly all of the standard
library is included, but a few packages (including Tkinter, among others) are
not. To my knowledge it is not possible to use `pip`, for example, to install
3rd-party modules into Sublime Text.
However, if you would like to include some 3rd-party code, just put it in your
plugin's directory. For example, if you store your plugin code in
`Packages/MyPlugin` (where `Packages` is the directory opened by selecting
**`Preferences -> Browse Packages...`**), and you want to include the 3rd-
party library `foobar`, just copy the `foobar` directory into
`Packages/MyPlugin`. Then, in your plugin code, use the following template,
assuming you're trying to code for both ST3 (Python 3.3) and ST2 (Python 2.6):
try: #ST3
from .foobar import mymodule
except ImportError: #ST2
from foobar import mymodule
Obviously, if you're just planning on supporting ST3 (there are enough
differences in the API to make programming for both versions annoying), you
won't need the `try`/`except` clause. Also, if you are going to be
distributing your plugin via Package Control or some other method, make sure
you **can** redistribute the 3rd-party code, and that your license is
compatible with its license.
|
How to run pyleus on Storm
Question: I'm learning Storm, and I have installed ZooKeeper, Storm, Python and pyleus.
As a first step, I copied the Python scripts from the pyleus site as a sample
(<https://github.com/Yelp/pyleus/tree/aaa423864f953332202832b8fd8404e03d3d74e3>)
and tried to run them on the Storm server. The sample includes the 3 files
below: pyleus_topology.yaml, dummy_spout.py and dummy_bolt.py. The two .py
files have been put into a folder named "my_first_topology", but when I run the
pyleus build command on my VMware server (CentOS 64-bit), the command runs for
about 20 seconds and then I get the error below:
> _[root@localhost bin]# pyleus build /root/Desktop/CRM_ETL-Project-
> Storm/my_first_topology/pyleus_topology.yaml pyleus build: error:
> [VirtualenvError] Failed to execute Python module:
> my_first_topology.dummy_spout. Error:
> /tmp/tmpZMIXa3/resources/pyleus_venv/bin/python: No module named
> my_first_topology_
what I can do for it? any steps I missed?
the script for reference 1> pyleus_topology.yaml
name: my_first_topology
topology:
- spout:
name: my-first-spout
module: my_first_topology.dummy_spout
- bolt:
name: my-first-bolt
module: my_first_topology.dummy_bolt
groupings:
- shuffle_grouping: my-first-spout
2> dummy_spout.py
from pyleus.storm import Spout
class DummySpout(Spout):
OUTPUT_FIELDS = ['sentence', 'name']
def next_tuple(self):
self.emit(("This is a sentence.", "spout",))
    if __name__ == '__main__':
        DummySpout().run()
3> dummy_bolt.py
from pyleus.storm import SimpleBolt
class DummyBolt(SimpleBolt):
OUTPUT_FIELDS = ['sentence']
def process_tuple(self, tup):
sentence, name = tup.values
new_sentence = "{0} says, \"{1}\"".format(name, sentence)
self.emit((new_sentence,), anchors=[tup])
    if __name__ == '__main__':
        DummyBolt().run()
Answer: I think your problem is that you are missing an empty file named `__init__.py`
in your inner my_first_topology folder. That file makes it a python module.
Just create it and you should be set.
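For example, from the directory that contains the topology folder:

    touch my_first_topology/__init__.py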
|
how does IPython magics work
Question: ipython-sql is an extension of IPython; I first installed it with `pip install
ipython-sql`.
the project is here: <https://github.com/catherinedevlin/ipython-sql>
and my problem is:
when I enter %load_ext sql and press SHIFT+ENTER, what is the detailed
procedure IPython follows to execute this magic statement? Thanks ...

Answer: When you run any code in the notebook, an `execute_request` is sent via the
notebook server, to a 'kernel', a process which executes your code.
When the kernel receives your code, it runs it through a sequence of input
transformers. One of these detects that this line is a magic command, and
rewrites it to:
get_ipython().magic('load_ext sql')
You can see these translated commands using `%hist -t`.
The `.magic()` method takes the first word of its argument, `load_ext`, and
looks it up in a dictionary. You can see that dictionary by running:
get_ipython().magics_manager.magics['line']
(this may be a bit different depending on your version of IPython)
That gives it a reference to the method
`IPython.core.magics.extension.ExtensionMagics.load_ext`, which you can see
[here](https://github.com/ipython/ipython/blob/rel-3.0.0/IPython/core/magics/extension.py#L59).
It calls that method with the remainder of the string.
That method imports the package `sql`, and calls
`sql.load_ipython_extension(ip)` to set it up. It's up to the extension what
it does then - in this case, it registers some new magic functions.
|
How do I reverse the order of PriorityQueue in python?
Question: I have created a simple priority queue in python that orders items by their
value:
import Queue
q = Queue.PriorityQueue()
for it in items:
q.put((it.value, it))
but when i print the queue using:
while not q.empty():
print q.get()
it will always print the lowest value first. is there a way of getting the
last item in a queue without changing the last two lines in the top bit of
code to:
for it in items:
q.put((-1*it.value, it))
because that seems a bit messy and creates problems if i want to use that
information for something else (i would have to multiply it by -1 again)
Answer: You _could_ just make your own class that inherits from PriorityQueue and does
the messy -1 multiplication under the hood for you:
from Queue import PriorityQueue

class ReversePriorityQueue(PriorityQueue):
def put(self, tup):
newtup = tup[0] * -1, tup[1]
PriorityQueue.put(self, newtup)
def get(self):
tup = PriorityQueue.get(self)
newtup = tup[0] * -1, tup[1]
return newtup
This appears to work with tuples, at least:
Q = ReversePriorityQueue()
In [94]: Q.put((1,1))
In [95]: Q.get()
Out[95]: (1, 1)
In [96]: Q.put((1,1))
In [97]: Q.put((5,5))
In [98]: Q.put((9,9))
In [99]: Q.get()
Out[99]: (9, 9)
In [100]: Q.get()
Out[100]: (5, 5)
In [101]: Q.get()
Out[101]: (1, 1)
I'm sure you could generalize the code to work with more than just tuples from
here.
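For instance, one possible generalization (a sketch only -- it ignores `Queue` extras such as `block` and `timeout`) keeps the payload separate from the priority, so only the priority is negated:

    from Queue import PriorityQueue

    class MaxPriorityQueue(PriorityQueue):
        def put(self, priority, item):
            # negate internally so callers never see the -1 trick
            PriorityQueue.put(self, (-priority, item))

        def get(self):
            priority, item = PriorityQueue.get(self)
            return -priority, item

    q = MaxPriorityQueue()
    q.put(1, 'low')
    q.put(9, 'high')
    print q.get()  # (9, 'high')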
|
Numpy - Sorting two ndarrays by single axis of first array
Question: I'm trying to sort two large four dimensional arrays in numpy.
I want to sort based on the values axis 2 of the first array, and sort the
second array by the same indices. All other axes should remain in the same
order for both arrays.
The following code does what I want, but relies on looping in python, so it's
slow. The arrays are quite large, so I'd really like to get this working using
compiled numpy operations for performance reasons. Or some other means of
getting this block of code to be compiled (Cython?).
import numpy as np
data = np.random.rand(10,6,4,1)
data2 = np.random.rand(10,6,4,3)
print data[0,0,:,:]
print data2[0,0,:,:]
for n in range(data.shape[0]):
for m in range(data.shape[1]):
sort_ids = np.argsort(data[n,m,:,0])
data[n,m,:,:] = data[n,m,sort_ids,:]
data2[n,m,:,:] = data2[n,m,sort_ids,:]
print data[0,0,:,:]
print data2[0,0,:,:]
Answer: Maybe there is a better solution but this should work:
sort_ids = np.argsort(data,axis=2)
s1 = data.shape
s2 = data2.shape
d1 = data[np.arange(s1[0])[:,None,None,None],np.arange(s1[1])[None,:,None,None],sort_ids,np.arange(s1[3])[None,None,None,:]]
d2 = data2[np.arange(s2[0])[:,None,None,None],np.arange(s2[1])[None,:,None,None],sort_ids,np.arange(s2[3])[None,None,None,:]]
At least the output is identical to your code.
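A slightly more readable way to build those broadcastable index arrays (a sketch that should produce the same result) is `np.ogrid`:

    i1, j1, _, l1 = np.ogrid[:s1[0], :s1[1], :s1[2], :s1[3]]
    d1 = data[i1, j1, sort_ids, l1]

    i2, j2, _, l2 = np.ogrid[:s2[0], :s2[1], :s2[2], :s2[3]]
    d2 = data2[i2, j2, sort_ids, l2]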
|
ImportError: No module named 'numpy.ma'
Question: The full error:
import matplotlib
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import matplotlib
File "R:\Python34\lib\site-packages\matplotlib\__init__.py", line 180, in <module>
from matplotlib.cbook import is_string_like
File "R:\Python34\lib\site-packages\matplotlib\cbook.py", line 34, in <module>
import numpy.ma as ma
ImportError: No module named 'numpy.ma'
numpy is imported normally. How do I install `numpy.ma`?
Answer: Re-install the correct version of `numpy`.
download correct `.whl` file from
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
pip install C:\Path\To\Wheel\Filename.whl # for example: numpy-1.9.2+mkl-cp34-none-win_amd64.whl
|
workbook.save() of python's openpyxl module is not working
Question: I was trying to open an XLSX file using the openpyxl module. Following is my
script:-
from openpyxl import load_workbook
print "Going to execute the script"
workbook = load_workbook("FileName.xlsx")
worksheet = workbook.get_sheet_by_name("Sheet01")
worksheet['B8'] = "Customer07"
workbook.save("FileName.xlsx")
print "End of script execution"
When I execute this script I get the following error:-
/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/xml/__init__.py:15: UserWarning: The installed version of lxml is too old to be used with openpyxl
warnings.warn("The installed version of lxml is too old to be used with openpyxl")
Going to execute the script
/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/workbook/names/named_range.py:121: UserWarning: Discarded range with reserved name
warnings.warn("Discarded range with reserved name")
Traceback (most recent call last):
File "check.py", line 11, in <module>
workbook.save("FileName.xlsx")
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/workbook/workbook.py", line 296, in save
save_workbook(self, filename)
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/excel.py", line 191, in save_workbook
writer.save(filename, as_template=as_template)
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/excel.py", line 174, in save
self.write_data(archive, as_template=as_template)
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/excel.py", line 85, in write_data
self._write_worksheets(archive)
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/excel.py", line 111, in _write_worksheets
write_worksheet(sheet, self.workbook.shared_strings,
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/worksheet.py", line 299, in write_worksheet
xf.write(comments)
File "/usr/lib/python2.6/contextlib.py", line 34, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/xml/xmlfile.py", line 42, in element
yield
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/worksheet.py", line 276, in write_worksheet
hf = write_header_footer(worksheet)
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/writer/worksheet.py", line 172, in write_header_footer
header = worksheet.header_footer.getHeader()
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/worksheet/header_footer.py", line 150, in getHeader
t.append(self.left_header.get())
File "/usr/lib/python2.6/site-packages/openpyxl-2.2.0-py2.6.egg/openpyxl/worksheet/header_footer.py", line 90, in get
t.append('&%d' % self.font_size)
TypeError: %d format: a number is required, not str
And after this, when I open "FileName.xlsx", a pop-up box appears with the message
**"Excel found unreadable comment in 'FileName.xlsx'. Do you want to recover the
contents of this workbook? If you trust the source of this workbook, click
yes"**. Regardless of which button I click, the workbook is empty.
If I comment the following line in script,
workbook.save("FileName.xlsx")
the script runs without any error but modifications are not saved in workbook.
Please help me out with this.
Answer: Try putting `worksheet['B8'].value = "Customer07"` instead of `worksheet['B8'] =
"Customer07"`.
|
Python: Parsing a complex string into usable data for analysis
Question: Is it possible to parse the below string, using Python, or convert it into
another data structure so that each element can be accessed for analysis?
This is an example line from a large text file where each line has the same
format.
string = ["('a', '1')", "('b', '2')"]
Answer: If you simply want to convert the tuple-strings to tuples, you can use
[`ast.literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval):
>>> import ast
>>> [ast.literal_eval(x) for x in string]
[('a', '1'), ('b', '2')]
Use of `ast.literal_eval` rather than `eval` is encouraged as it is considered
safer: it does not execute _all_ strings of Python code, only literal
expressions (no variables, no function calls).
You can then access the elements of the tuples using Python's slice/index
notation, or convert to an alternative data structure, e.g. a dictionary:
>>> dict([ast.literal_eval(x) for x in string])
{'a': '1', 'b': '2'}
|
Regex match following substring in string python
Question: I've come up with a regex expression that works well enough for my purposes
for finding phone numbers.
I would like to take it a step further and use it in large text blocks to
identify matching strings that follow the words 'cell' or 'mobile' by at most
10 characters. I would like it to return the number in `Cell Phone: (954)
555-4444` as well as `Mobile 555-777-9999` but not `Fax: (555) 444-6666`
something like (in pseudocode)
regex = re.compile(r'(\+?[2-9]\d{2}\)?[ -]?\d{3}[ -]?\d{4})')
bigstring = # Some giant string added together from many globbed files
matches = regex.search(bigstring)
for match in matches:
if match follows 'cell' or match follows 'mobile':
print match.group(0)
Answer: You can do:
txt='''\
Call me on my mobile anytime: 555-666-1212
The office is best at 555-222-3333
Dont ever call me at 555-666-2345 '''
import re
print re.findall(r'(?:(mobile|office).{0,15}(\+?[2-9]\d{2}\)?[ -]?\d{3}[ -]?\d{4}))', txt)
Prints:
[('mobile', '555-666-1212'), ('office', '555-222-3333')]
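Adapted to the question's keywords and its 10-character window, the same idea might look like this (a sketch, assuming `bigstring` from the question and not tested against every phone format):

    regex = re.compile(
        r'(?:(cell|mobile).{0,10}(\+?[2-9]\d{2}\)?[ -]?\d{3}[ -]?\d{4}))',
        re.IGNORECASE)

    for keyword, number in regex.findall(bigstring):
        print number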
|
Flask middleware on abort 401 causing a 500
Question: I have a middleware in my Flask app that is used to authenticate a JSON Web
Token coming in the header of the request and is checking to verify it, below
is my middleware class:
class AuthMiddleware(object):
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
path = environ.get('PATH_INFO')
if path != '/authenticate' or path != '/token':
token = environ.get('HTTP_X_ACCESS_TOKEN')
verfied_token = verify_token(token)
if verfied_token is False:
abort(401)
elif verfied_token is True:
# payload = get_token_payload(token)
# check_permissions(payload)
pass
return self.app(environ, start_response)
`verify_token()` is a function that will return True or False, and if it
returns False, I want it to abort with an error 401. However, it aborts with
an error 500:
127.0.0.1 - - [25/Mar/2015 11:37:25] "POST /role HTTP/1.1" 500 -
Error on request:
Traceback (most recent call last):
File "/ENV/lib/python2.7/site-packages/werkzeug/serving.py", line 180, in run_wsgi
execute(self.server.app)
File "/ENV/lib/python2.7/site-packages/werkzeug/serving.py", line 168, in execute
application_iter = app(environ, start_response)
File "/ENV/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
return self.wsgi_app(environ, start_response)
File "/middleware.py", line 24, in __call__
return self.app(environ, start_response)
File "/ENV/lib/python2.7/site-packages/werkzeug/exceptions.py", line 605, in __call__
raise self.mapping[code](*args, **kwargs)
BadRequest: 400: Bad Request
In my views, aborting with a 401 works as it should, but here it seems to be a
problem. What should I do?
Answer: The middleware you've shown runs some other code and _then_ calls the wrapped
Flask application. However, `abort` raises exceptions that Flask handles, but
aren't handled by WSGI directly. Since you're not in the Flask application
yet, it can't handle the exception.
A _much easier way_ would be to do this check inside the Flask app. Create a
`before_request` handler that does basically the same thing as the middleware,
except you can use `flask.request` rather than needing to parse the path and
headers yourself.
from flask import request, abort
@app.before_request
def check_auth_token():
if request.path in ('/authenticate', '/token'):
return
token = request.headers.get('X-ACCESS-TOKEN')
if not verify_token(token):
abort(401)
check_permissions(get_token_payload(token))
* * *
If you _do_ want to use WSGI middleware for this, you need to create the
response yourself. Conveniently, Werkzeug's exceptions behave like WSGI
applications, so it's straightforward to use them.
from werkzeug.exceptions import Unauthorized
# in place of abort(401) in the middleware
return Unauthorized()(environ, start_response)
You can also use `abort` still by catching the exceptions it raises (which,
again, are WSGI applications).
from werkzeug.exceptions import abort, HTTPException
# werkzeug.exceptions.abort is the same as flask.abort
try:
abort(401)
except HTTPException as e:
return e(environ, start_response)
|
Creating Word Cloud in Python --- Making Words Different Sizes?
Question: I am trying to create a word cloud in python using
[pytagcloud](https://pypi.python.org/pypi/pytagcloud/). With my current cloud,
I can generate a cloud, but the words all are the same size. How can I alter
the code so that my words' sizes appear in relation to their frequency?
My text file already has the words with their respective frequency counts
already in it, the format is like "George, 44" newline "Harold, 77", newline,
"Andrew, 22", newline, etc. However, when it displays the word, it also
displays the frequency with it.
with open ("MyText.txt", "r") as file:
Data =file.read().replace('\n', '')
tags = make_tags(get_tag_counts(Data), maxsize=150)
create_tag_image(tags, 'Sample.png', size=(1200, 1200),background=(0, 0, 0, 255), fontname='Lobstero', rectangular=True)
import webbrowser
webbrowser.open('Sample.png')
Answer: You need to pass the counts in as a list of `(word, count)` tuples. Using your
question as input text, we get the expected result:
from pytagcloud import create_tag_image, make_tags
from pytagcloud.lang.counter import get_tag_counts
TEXT = '''I am trying to create a word cloud in python. With my current cloud, I can generate a cloud, but the words all are the same size. How can I alter the code so that my words' sizes appear in relation to their frequency?'''
counts = get_tag_counts(TEXT)
tags = make_tags(counts, maxsize=120)
create_tag_image(tags, 'cloud_large.png', size=(900, 600), fontname='Lobster')

It is worth looking at the variable `counts`:
[('cloud', 3),
('words', 2),
('code', 1),
('word', 1),
('appear', 1), ...
which is simply a list of tuples. Since your input text file contains a list
of tuples, you simply need to pass that information into `make_tags`.
**Edit:** You can read a file like this
counts = []
with open("tag_file.txt") as FIN:
for line in FIN:
# Assume lines look like: word, number
word,n = line.strip().split()
word = word.replace(',','')
counts.append([word,int(n)])
|
mpiexec and python mpi4py gives rank 0 and size 1
Question: I have a problem with running a python Hello World mpi4py code on a virtual
machine.
The hello.py code is:
#!/usr/bin/python
#hello.py
from mpi4py import MPI
comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
print "hello world from process ", rank,"of", size
I've tried to run it using mpiexec and mpirun, but it is not running well. The
output:
$ mpirun -c 4 python hello.py
hello world from process 0 of 1
hello world from process 0 of 1
hello world from process 0 of 1
hello world from process 0 of 1
And from mpiexec:
$ mpiexec -n 4 python hello.py
hello world from process 0 of 1
hello world from process 0 of 1
hello world from process 0 of 1
hello world from process 0 of 1
They do not seem to be getting the rank and size of the communicator. What can
cause this? How do I solve it?
mpiexec --version
mpiexec (OpenRTE) 1.6.5
mpirun --version
mpirun (Open MPI) 1.6.5
The system is Ubuntu 14.04 on the Virtal Machine.
Any ideas why? Thanks!
Answer: I had the same issue when running the python module `emcee`. It would give me
an error:
"ValueError: Tried to create an MPI pool, but there was only one MPI process available.
Need at least two."
The solution I found for my particular cluster was to use a different MPI. My
code worked with `intel-mpi` and `mpich2` but not `openmpi`. For this system,
all I had to do was switch the MPI. In my `PBS` script I used `module load
mpich2` instead of `module load openmpi`. In this case `mpiexec` and `mpirun`
worked correctly.
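A quick way to check which MPI your mpi4py bindings were compiled against, so you can match it to the `mpiexec`/`mpirun` you launch with (the exact paths shown will differ per system):

    import mpi4py
    print mpi4py.get_config()  # e.g. {'mpicc': '/usr/bin/mpicc.mpich2'}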
|
In python replacing characters from multiple char maps
Question: I haven't been able to find a solution to this problem, and it's for a
workaround in some bad platform code that I can't do anything about. I want to
render UTF-8 strings, but the platform crashes if it receives a character
outside its supported character maps. In this case, I have a German navigation unit
in Russia - Latin 2 (iso-8859-2) and Cyrillic (iso-8859-5) are supported, but
the platform crashes on an Arabic character. So I want to filter out anything
that is not German or Russian.
This code:
import codecs
import string
if __name__ == '__main__':
s = u'Ivan Krsti\u0107\u0416'
print s
print s.encode ('iso-8859-1', 'replace')
print s.encode ('iso-8859-5', 'replace').decode('iso-8859-5')
print s.encode ('iso-8859-2', 'replace').decode('iso-8859-2')
Produces
Ivan KrstićЖ
Ivan Krsti??
Ivan Krsti?Ж
Ivan Krstić?
My question is how to I combine the character maps for 'iso-8859-2' and
'iso-8859-5' so I get the first result after filtering? (Assume that I've
already encoded UTF-8 to unicode.)
Answer: You can produce all codepoints that are valid for either codec using sets:
iso_8859_2 = {chr(i).decode('iso-8859-2') for i in xrange(0x100)}
iso_8859_5 = {chr(i).decode('iso-8859-5') for i in xrange(0x100)}
combined = iso_8859_2 | iso_8859_5
and then make that into a regular expression:
import re
# escape meta characters
invalid = u''.join(combined).replace('-', r'\-').replace(']', r'\]')
invalid = re.compile(u'([^{}])'.format(invalid))
and apply that to Unicode text to filter out all codepoints that fall outside
those codepoints:
text_using_only_iso_8859_2_or_5 = invalid.sub('', unicodetext)
This then removes any codepoints that are _not_ in either of the given
character sets.
You could also work with `unicode.translate()`, which takes a mapping of
codepoints (integers) to new codepoints, or `None`to remove characters:
all_of_unicode = set(range(0x110000))
iso_8859_2 = {ord(chr(i).decode('iso-8859-2')) for i in xrange(0x100)}
iso_8859_5 = {ord(chr(i).decode('iso-8859-5')) for i in xrange(0x100)}
# map the difference to None values
to_remove = dict.fromkeys(all_of_unicode - iso_8859_2 - iso_8859_5)
text_using_only_iso_8859_2_or_5 = unicodetext.translate(to_remove)
|
Simple Filter Python script for Text
Question: I am trying to create what must be a simple filter function which runs a regex
against a text file and returns all words matching that particular regex.
So, for example, if I wanted to find all words that contained "abc" and I had
the list `abcde`, `bce`, `xyz` and `zyxabc`, the script would return `abcde`
and `zyxabc`.
I have a script below, however I am not sure if it is just the regex I am
failing at or not. It just returns abc twice rather than the full word.
Thanks.
import re
text = open("test.txt", "r")
regex = re.compile(r'(abc)')
for line in text:
target = regex.findall(line)
for word in target:
print word
Answer: I think you don't need a regex for such a task; you can simply `split` your lines to
create a list of words, then loop over the word list and use the `in` operator:
with open("test.txt") as f :
for line in f:
for w in line.split():
if 'abc' in w :
print w
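If you'd still like a regex version, the fix is to make the pattern cover the whole word rather than just the substring, for example:

    import re

    regex = re.compile(r'\S*abc\S*')  # surrounding \S* captures the rest of the word

    with open("test.txt") as f:
        for line in f:
            for word in regex.findall(line):
                print word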
|
How can mongomock can be use with motor?
Question: I have a server implemented with Tornado, and Motor, and I've come across this
mock of pymongo: <https://github.com/vmalloc/mongomock>
I really like the idea of running the unit tests of my code with no real calls to
the DB, for the sake of running them very fast.
I've tried patching Motor to pass calls through to mongomock, like this:
from mock import MagicMock
import mongomock
p = mock.patch('motor.MotorClient.__delegate_class__', new=mongomock.MongoClient)
p1 = mock.patch('motor.MotorDatabase.__delegate_class__', new=MagicMock())
p.start()
p1.start()
def fin():
p.stop()
p1.stop()
request.addfinalizer(fin)
It was failing like this:
Traceback (most recent call last):
File "C:\Users\ifruchte\venv\lib\site-packages\pytest_tornado\plugin.py", line 136, in http_server
http_app = request.getfuncargvalue(request.config.option.app_fixture)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1337, in getfuncargvalue
return self._get_active_fixturedef(argname).cached_result[0]
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1351, in _get_active_fixturedef
result = self._getfuncargvalue(fixturedef)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1403, in _getfuncargvalue
val = fixturedef.execute(request=subrequest)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1858, in execute
self.yieldctx)
File "C:\Users\ifruchte\venv\lib\site-packages\_pytest\python.py", line 1784, in call_fixture_func
res = fixturefunc(**kwargs)
File "C:\Users\ifruchte\PycharmProjects\pyrecman\tests\__init__.py", line 65, in app
return get_app(db=motor_db(io_loop))
File "C:\Users\ifruchte\PycharmProjects\pyrecman\tests\__init__.py", line 27, in motor_db
return motor.MotorClient(options.mongo_url, io_loop=io_loop)[options.db_name]
File "C:\Users\ifruchte\venv\lib\site-packages\motor\__init__.py", line 1003, in __getattr__
return MotorDatabase(self, name)
File "C:\Users\ifruchte\venv\lib\site-packages\motor\__init__.py", line 1254, in __init__
delegate = Database(connection.delegate, name)
File "C:\Users\ifruchte\venv\lib\site-packages\pymongo\database.py", line 61, in __init__
**connection.write_concern)
TypeError: attribute of type 'Collection' is not callable
Does anyone know how this can be done, or am I wasting my time here?
Answer: No need to instantiate the magicmock.
`p1 = mock.patch('motor.MotorDatabase.__delegate_class__', new=MagicMock)`
|
creating vector using netcdf into array
Question: I'm fairly new to python and have found stack overflow one of the best
resources out there, now I'm hoping someone can help me with what I believe is
a fairly basic question.
I'm looking to create a land mask from a list of lats and lons and rainfall
data extracted from a netCDF file. I need to get the data from the netcdf file
to line up so I can remove rows which have a rainfall value of '-9999.'
(indicating no data because it's over the ocean). I can access the file, I can
create a mesh grid, but when it comes to inserting the rainfall data for the
final check I'm getting odd shapes and no luck with the logical test. Can
someone have a look at this code and let me know what you think?
from netCDF4 import Dataset
import numpy as np
f=Dataset('/Testing/Ensemble_grid/1970_2012_eMAST_ANUClimate_mon_evap_v1m0_197001.nc')
lat = f.variables['latitude'][:]
lon = f.variables['longitude'][:]
rainfall = np.array(f.variables['lwe_thickness_of_precipitation_amount'])
lons, lats = np.meshgrid(lon,lat)
full_ary = np.array((lats,lons))
full_lats_lons = np.swapaxes(full_ary,0,2)
rain_data = np.squeeze(rainfall,axis=(0,))
grid = np.array((full_lats_lons,rain_data))
full_grid = np.expand_dims(grid,axis=1)
full_grid_col = np.swapaxes(full_grid,0,1)
land_grid = np.logical_not(full_grid_col[:,1]==-9999.)
Answer: Ian, you need to put a repeatable example up here...
I suspect what you need is something like this:

    import numpy as np

    x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
    x.flat
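Applied to the arrays in the question, the idea would be a boolean mask over the rainfall grid (a sketch, assuming `rain_data`, `lats` and `lons` all share the shape `(n_lat, n_lon)`):

    mask = rain_data != -9999.   # True over land, False over ocean
    land_rain = rain_data[mask]  # 1-D array of land rainfall values
    land_lats = lats[mask]       # matching latitudes
    land_lons = lons[mask]       # matching longitudes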
|
Passing a numpy function in python to C++ using SWIG directors
Question: My code is mainly in C++, but I want to use the SWIG director option to
redefine a C++ virtual function in python, and then pass that function back to
C++. The following is a simplified example.
Here is my base class in C++ with a virtual function `fun`. `fun` takes in
`(double* x, int n)` which allows me to pass a numpy array into it.
class base{
public:
virtual double fun(double* x, int n){return 0.0;}
};
I have another class which takes in this base class:
class main{
public:
base* b_;
main(base& b){
b_ = &b;
}
};
This class has a constructor which takes in a base class and stores a pointer
to it (`b_`).
I can successfully compile the swig interface into a module named `mymodule`
making sure to enable directors (see SWIG interface file below), and, as per
the SWIG director documentation, I inherit the base class and redefine the
virtual function `fun` as follows:
import mymodule
class base(mymodule.base):
def __init__(self):
super(base,self).__init__()
def fun(self,x):
return x[0]
Then I pass it into the `main` class:
b = base()
m = mymodule.main(b)
However, when I try `m.b_.fun(array([1.,2.]))` I get a `TypeError`:
TypeError: 'SwigPyObject' object has no attribute '__getitem__'
Any ideas on how to get this to work? I think when I pass `b` into
mymodule.main, C++ is seeing `x` as a double * pointer and doesn't know what
to do with it.
By the way, here is my swig interface file:
%module(directors="1") mymodule
%{
#define SWIG_FILE_WITH_INIT
/* Includes the header in the wrapper code */
#include "myclasses.h"
%}
%feature("director") base;
%init %{
import_array();
%}
%apply (double* IN_ARRAY1, int DIM1) {(double* x, int n)};
/* Parse the header file to generate wrappers */
%include "myclasses.h"
Answer: It looks like an ownership issue from Python.
Provide a hint to SWIG:
%new double *fun(double* IN_ARRAY1, int DIM1) {(double* x, int n)};
Notice the pointer to the function itself as well. The Python module will
assign proper ownership of the object(s) when %new is used. That's why you can
use values but not variables in the call.
|
using python to randomize csv file
Question: I'm new to python and would love to get some help.
I have a csv file with 2 columns, think:
1 A
2 B
3 C
4 D
My aim is to use python to "open" the file, read it, randomize the order of
the two lists (i.e. have 1 be with the same line as C, 2 with D etc.), and
then save the new randomized lists in a different csv file.
Thanks,
Edit:
import csv
import random
f=open('my_file.csv')
csv_f=csv.reader(f)
I read some more about the csv writer, but I'm unsure how to use these functions
yet.
Edit 2: someone just posted a great comment, but for some reason I can't see
it now. The only problem is that I need to keep the column headers intact;
they can't be randomized. The code was as follows:
import csv
import random

with open("my_file.csv") as f:
    l = list(csv.reader(f))
random.shuffle(l)
with open("random.csv", "w") as f:
    csv.writer(f).writerows(l)
Answer: You can read the rows as list, extract the two columns, then shuffle each one,
then zip the columns together and finally write the result to a new csv file:
import csv
import random
with open("input.csv") as f:
r = csv.reader(f)
header, l = next(r), list(r)
a = [x[0] for x in l]
random.shuffle(a)
b = [x[1] for x in l]
random.shuffle(b)
with open("random.csv", "wb") as f:
csv.writer(f).writerows([header] + zip(a, b))
|
Quickly parse Python datetime in a non-local timezone, adjusting for daylight savings
Question: I need to quickly turn an ISO 8601 datetime string--with no timezone in the
string, but known to be in the US/Pacific timezone--into a numpy `datetime64`
object.
If my machine were in US/Pacific time, I could simply run
`numpy.datetime64(s)`. However, this assumes that strings without timezones
are in the local timezone. Furthermore, I can't easily specify the US/Pacific
timezone in ISO 8601 format, because it is sometimes `-0800` and sometimes
`-0700` depending on daylight savings time.
So far, the fastest solution I have is
`numpy.datetime64(pandas.Timestamp(s).tz_localize(tz='US/Pacific',
ambiguous=True))`. This takes 70µs on my machine. It would be good if I could
get this at least an order of magnitude faster (`numpy.datetime64(s)` in local
time takes 4 µs but is incorrect as described above). Is this possible?
Answer: First note that without the offset some localtimes and therefore their
datetime strings are ambiguous. For example, the ISO 8601 datetime strings
2000-10-29T01:00:00-07:00
2000-10-29T01:00:00-08:00
both map to the same string `2000-10-29T01:00:00` when the offset is removed.
So it may not always be possible to reconstitute a unique timezone-aware
datetime from a datetime string without offset.
However, we could make a choice in these ambigous situations and accept that
not all ambiguous dates will be correctly converted.
* * *
If you are using Unix, you can use
[time.tzset](https://docs.python.org/2/library/time.html#time.tzset) to change
the process's local timezone:
import os
import time
os.environ['TZ'] = tz
time.tzset()
You could then convert the datetime strings to NumPy datetime64's using
def using_tzset(date_strings, tz):
os.environ['TZ'] = tz
time.tzset()
return np.array(date_strings, dtype='datetime64[ns]')
* * *
Note however that `using_tzset` does not always produce the same value as the
method you proposed:
import os
import time
import numpy as np
import pandas as pd
tz = 'US/Pacific'
N = 10**5
dates = pd.date_range('2000-1-1', periods=N, freq='H', tz=tz)
date_strings_tz = dates.format(formatter=lambda x: x.isoformat())
date_strings = [d.rsplit('-', 1)[0] for d in date_strings_tz]
def orig(date_strings, tz):
return [np.datetime64(pd.Timestamp(s, tz=tz)) for s in date_strings]
def using_tzset(date_strings, tz):
os.environ['TZ'] = tz
time.tzset()
return np.array(date_strings, dtype='datetime64[ns]')
npdates = dates.asi8.view('datetime64[ns]')
x = np.array(orig(date_strings, tz))
y = using_tzset(date_strings, tz)
df = pd.DataFrame({'dates': npdates, 'str': date_strings_tz, 'orig': x, 'using_tzset': y})
This indicates that the original method, `orig`, fails to recover the original
date 172 times:
print((df['dates'] != df['orig']).sum())
172
while `using_tzset` fails 11 times:
print((df['dates'] != df['using_tzset']).sum())
11
Note however, that the 11 times that `using_tzset` fails are due to the
ambiguity in local datetimes due to DST.
This shows some of the discrepancies:
mask = df['dates'] != df['using_tzset']
idx = np.where(mask.shift(1) | mask)[0]
print(df[['dates', 'str', 'using_tzset']].iloc[idx].head(6))
# dates str using_tzset
# 7248 2000-10-29 08:00:00 2000-10-29T01:00:00-07:00 2000-10-29 08:00:00
# 7249 2000-10-29 09:00:00 2000-10-29T01:00:00-08:00 2000-10-29 08:00:00
# 15984 2001-10-28 08:00:00 2001-10-28T01:00:00-07:00 2001-10-28 08:00:00
# 15985 2001-10-28 09:00:00 2001-10-28T01:00:00-08:00 2001-10-28 08:00:00
# 24720 2002-10-27 08:00:00 2002-10-27T01:00:00-07:00 2002-10-27 08:00:00
# 24721 2002-10-27 09:00:00 2002-10-27T01:00:00-08:00 2002-10-27 08:00:00
As you can see the discrepancies occur when the date strings in the `str`
column become ambiguous when the offset is removed.
So `using_tzset` appears to produce the correct result up to ambiguous
datetimes.
* * *
Here is a timeit benchmark comparing `orig` and `using_tzset`:
In [95]: %timeit orig(date_strings, tz)
1 loops, best of 3: 5.43 s per loop
In [96]: %timeit using_tzset(date_strings, tz)
10 loops, best of 3: 41.7 ms per loop
So `using_tzset` is over 100x faster than `orig` when N = 10**5.
|
How to properly create and run concurrent tasks using python's asyncio module?
Question: I am trying to properly understand and implement two concurrently running
[`Task`](https://docs.python.org/3/library/asyncio-task.html#task) objects
using Python 3's relatively new
[`asyncio`](https://docs.python.org/3/library/asyncio.html) module.
In a nutshell, asyncio seems designed to handle asynchronous processes and
concurrent `Task` execution over an event loop. It promotes the use of `await`
(applied in async functions) as a callback-free way to wait for and use a
result, without blocking the event loop. (Futures and callbacks are still a
viable alternative.)
It also provides the `asyncio.Task()` class, a specialized subclass of
`Future` designed to wrap coroutines. Preferably invoked by using the
`asyncio.ensure_future()` method. The intended use of asyncio tasks is to
allow independently running tasks to run 'concurrently' with other tasks
within the same event loop. My understanding is that `Tasks` are connected to
the event loop which then automatically keeps driving the coroutine between
`await` statements.
I like the idea of being able to use concurrent Tasks without needing to use
one of the
[`Executor`](https://docs.python.org/dev/library/concurrent.futures.html)
classes, but I haven't found much elaboration on implementation.
This is how I'm currently doing it:
import asyncio
print('running async test')
async def say_boo():
i = 0
while True:
await asyncio.sleep(0)
print('...boo {0}'.format(i))
i += 1
async def say_baa():
i = 0
while True:
await asyncio.sleep(0)
print('...baa {0}'.format(i))
i += 1
# OPTION 1: wrap in Task object
# -> automatically attaches to event loop and executes
boo = asyncio.ensure_future(say_boo())
baa = asyncio.ensure_future(say_baa())
loop = asyncio.get_event_loop()
loop.run_forever()
In the case of trying to concurrently run two looping Tasks, I've noticed that
unless the Task has an internal `await` expression, it will get stuck in the
`while` loop, effectively blocking other tasks from running (much like a
normal `while` loop). However, as soon the Tasks have to wait--even for just a
fraction of a second--they seem to run concurrently without an issue.
Thus, the `await` statements seem to provide the event loop with a foothold
for switching back and forth between the tasks, giving the effect of
concurrency.
Example output with internal `await`:
running async test
...boo 0
...baa 0
...boo 1
...baa 1
...boo 2
...baa 2
Example output _without_ internal `await`:
...boo 0
...boo 1
...boo 2
...boo 3
...boo 4
# Questions
Does this implementation pass for a 'proper' example of concurrent looping
Tasks in `asyncio`?
Is it correct that the only way this works is for a `Task` to provide a
blocking point (`await` expression) in order for the event loop to juggle
multiple tasks?
Answer: Yes, any coroutine that's running inside your event loop will block other
coroutines and tasks from running, unless it
1. Calls another coroutine using `yield from` or `await` (if using Python 3.5+).
2. Returns.
This is because `asyncio` is single-threaded; the only way for the event loop
to run is for no other coroutine to be actively executing. Using `yield
from`/`await` suspends the coroutine temporarily, giving the event loop a
chance to work.
Your example code is fine, but in many cases, you probably wouldn't want long-
running code that isn't doing asynchronous I/O running inside the event loop
to begin with. In those cases, it often makes more sense to use
[`BaseEventLoop.run_in_executor`](https://docs.python.org/3/library/asyncio-
eventloop.html#asyncio.BaseEventLoop.run_in_executor) to run the code in a
background thread or process. `ProcessPoolExecutor` would be the better choice
if your task is CPU-bound; `ThreadPoolExecutor` would be used if you need to
do some I/O that isn't `asyncio`-friendly.
Your two loops, for example, are completely CPU-bound and don't share any
state, so the best performance would come from using `ProcessPoolExecutor` to
run each loop in parallel across CPUs:
import asyncio
from concurrent.futures import ProcessPoolExecutor
print('running async test')
def say_boo():
i = 0
while True:
print('...boo {0}'.format(i))
i += 1
def say_baa():
i = 0
while True:
print('...baa {0}'.format(i))
i += 1
if __name__ == "__main__":
executor = ProcessPoolExecutor(2)
loop = asyncio.get_event_loop()
boo = asyncio.ensure_future(loop.run_in_executor(executor, say_boo))
baa = asyncio.ensure_future(loop.run_in_executor(executor, say_baa))
loop.run_forever()
|
Finding modes for multiple dictionary keys
Question: I currently have a Python dictionary with keys assigned to multiple values
(which have come from a CSV), in a format similar to:
{
'hours': ['4', '2.4', '5.8', '2.4', '7'],
'name': ['Adam', 'Bob', 'Adam', 'John', 'Harry'],
'salary': ['55000', '30000', '55000', '30000', '80000']
}
(The actual dictionary is significantly larger in both keys and values.)
I am looking to find the mode* for each set of values, with the stipulation
that sets where all values occur only once do not need a mode. However, I'm
not sure how to go about this (and I can't find any other examples similar to
this). I am also concerned about the different (implied) data types for each
set of values (e.g. 'hours' values are floats, 'name' values are strings,
'salary' values are integers), though I have a rudimentary conversion function
included but not used yet.
import csv
f = 'blah.csv'
# Conducts type conversion
def conversion(value):
try:
value = float(value)
except ValueError:
pass
return value
reader = csv.DictReader(open(f))
# Places csv into a dictionary
csv_dict = {}
for row in reader:
for column, value in row.iteritems():
csv_dict.setdefault(column, []).append(value.strip())
*I'm wanting to attempt other types of calculations as well, such as averages and quartiles- which is why I'm concerned about data types- but I'd mostly like assistance with modes for now.
EDIT: the input CSV file can change; I'm unsure if this has any effect on
potential solutions.
Answer: I'm not sure I understand the question, but you could manually create a
dictionary matching each desired mode to its key, or you could inspect the
values with `type`: if the type is a string, ask further questions/parameters,
like the length of the item.
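For instance, a rough sketch of per-column modes using `collections.Counter` (assuming the `csv_dict` structure from the question, and skipping columns where every value occurs only once, per the stipulation):

    from collections import Counter

    def column_modes(csv_dict):
        result = {}
        for column, values in csv_dict.iteritems():
            counts = Counter(values)
            top = counts.most_common(1)[0][1]  # highest frequency in the column
            if top > 1:  # every value unique -> no mode needed
                result[column] = [v for v, c in counts.items() if c == top]
        return result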
|
What is %timeit in python?
Question: I always see code that measures time written this way:
%timeit function()
Can you explain what "%" means here?
I thought "%" was always used to substitute something into a string, like %s
substituting a string and %d a number, but I have no idea about this case.
Answer: `%timeit` is an [ipython magic](https://ipython.org/ipython-
doc/dev/interactive/magics.html#magic-timeit) function, which can be used to
time a particular piece of code (A single execution statement, or a single
method)
From the docs:
> %timeit
>
>
> Time execution of a Python statement or expression
>
> Usage, in line mode:
> %timeit [-n<N> -r<R> [-t|-c] -q -p<P> -o] statement
>
To use it, for example if we want to find out whether using `xrange` is any
faster than using `range`, you can simply do:
In [1]: %timeit for _ in range(1000): True
10000 loops, best of 3: 37.8 µs per loop
In [2]: %timeit for _ in xrange(1000): True
10000 loops, best of 3: 29.6 µs per loop
And you will get the timings for them.
The major advantage is that you don't have to import `timer.timeit`, and run
the code multiple times to figure out which is the better approach; %timeit
will automatically calculate number of runs required for your code based on a
total of 2 seconds execution window.
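For comparison, a plain-`timeit`-module equivalent outside IPython would be something like:

    import timeit

    # total seconds for 10000 runs; divide by the count for the per-loop time
    print timeit.timeit('for _ in range(1000): True', number=10000)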
|
pandas dataframe from a nested dictionary (elasticsearch result)
Question: I am having a hard time translating results from Elasticsearch aggregations to
pandas. I am trying to write an abstract function which takes a nested
dictionary (with an arbitrary number of levels) and flattens it into a pandas
dataframe.
Here is how a typical result look like
-- edit: I added the parent key as well
x1 = {u'xColor': {u'buckets': [{u'doc_count': 4,
u'key': u'red',
u'xMake': {u'buckets': [{u'doc_count': 3,
u'key': u'honda',
u'xCity': {u'buckets': [{u'doc_count': 2, u'key': u'ROME'},
{u'doc_count': 1, u'key': u'Paris'}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}},
{u'doc_count': 1,
u'key': u'bmw',
u'xCity': {u'buckets': [{u'doc_count': 1, u'key': u'Paris'}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}},
{u'doc_count': 2,
u'key': u'blue',
u'xMake': {u'buckets': [{u'doc_count': 1,
u'key': u'ford',
u'xCity': {u'buckets': [{u'doc_count': 1, u'key': u'Paris'}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}},
{u'doc_count': 1,
u'key': u'toyota',
u'xCity': {u'buckets': [{u'doc_count': 1, u'key': u'Berlin'}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}},
{u'doc_count': 2,
u'key': u'green',
u'xMake': {u'buckets': [{u'doc_count': 1,
u'key': u'ford',
u'xCity': {u'buckets': [{u'doc_count': 1, u'key': u'Berlin'}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}},
{u'doc_count': 1,
u'key': u'toyota',
u'xCity': {u'buckets': [{u'doc_count': 1, u'key': u'Berlin'}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}}],
u'doc_count_error_upper_bound': 0,
u'sum_other_doc_count': 0}}
What I would like to have is a dataframe with the doc_count of the lowest
level;
for the first record
red-honda-rome-2
red-honda-paris-1
red-bmw-paris-1
I came across json_normalize in pandas
[here](http://stackoverflow.com/questions/16444797), but I do not understand how
to set up the arguments, and I have seen different suggestions for flattening
a nested dictionary but can't really understand how they work. Any help to get
me started would be appreciated: [Elasticsearch result to
table](http://stackoverflow.com/questions/25165472/)
**UPDATE**
I tried to use [dpath](https://github.com/akesterson/dpath-python), which is a
great library, but I do not see how to abstract this (in the form of a function
which takes just the bucket names as arguments), as dpath cannot handle a
structure in which values are lists (and not other dictionaries).
import dpath
import pandas as pd

xListData = []
for q1 in dpath.util.get(x1, 'xColor/buckets'):
    xColor = q1['key']
    for q2 in dpath.util.get(q1, 'xMake/buckets'):
        #print '--', q2['key']
        xMake = q2['key']
        for q3 in dpath.util.get(q2, 'xCity/buckets'):
            #xDict = []
            xCity = q3['key']
            doc_count = q3['doc_count']
            xDict = {'color': xColor, 'make': xMake, 'city': xCity, 'doc_count': doc_count}
            #print '------', q3['key'], q3['doc_count']
            xListData.append(xDict)

pd.DataFrame(xListData)
This gives:
city color doc_count make
0 ROME red 2 honda
1 Paris red 1 honda
2 Paris red 1 bmw
3 Paris blue 1 ford
4 Berlin blue 1 toyota
5 Berlin green 1 ford
6 Berlin green 1 toyota
Answer: Try with a recursive function:
import pandas as pd
def elasticToDataframe(elasticResult, aggStructure, record=None, fulllist=None):
    # avoid mutable default arguments: they would persist between calls
    if record is None:
        record = {}
    if fulllist is None:
        fulllist = []
    for agg in aggStructure:
        buckets = elasticResult[agg['key']]['buckets']
        for bucket in buckets:
            record = record.copy()
            record[agg['key']] = bucket['key']
            if 'aggs' in agg:
                elasticToDataframe(bucket, agg['aggs'], record, fulllist)
            else:
                for var in agg['variables']:
                    record[var['dfName']] = bucket[var['elasticName']]
                fulllist.append(record)
    df = pd.DataFrame(fulllist)
    return df
Then call the function with your data (x1) and a properly configured
'aggStructure' dict. The nested nature of the data must be reflected in this
dict.
aggStructure=[{'key':'xColor','aggs':[{'key':'xMake','aggs':[{'key':'xCity','variables':[{'elasticName':'doc_count','dfName':'count'}]}]}]}]
elasticToDataframe(x1,aggStructure)

Cheers
|
Numpy individual element access slower than for lists
Question: I just started using Numpy and noticed that iterating through each element in
a Numpy array is ~4x slower than doing the same but with a list of lists. I
know now that this defeats the purpose of Numpy and I should vectorize the
function if possible. My question, though, is why it is 4x slower. That seems
like quite a large difference.
I ran the tests below using `%timeit`
import numpy as np
b = np.eye(1000)
a = b.tolist()
%timeit b[100][100] #1000000 loops, best of 3: 692 ns per loop
%timeit a[100][100] #10000000 loops, best of 3: 70.7 ns per loop
%timeit b[100,100] #1000000 loops, best of 3: 343 ns per loop
%timeit b.item(100,100) #1000000 loops, best of 3: 297 ns per loop
I tried to use `dis.dis` to see what was going on under the hood but got:
TypeError: don't know how to disassemble method-wrapper objects
Then I tried to look at the Numpy source code but couldn't figure out which
file corresponded to array element access. I'm curious what accounts for the
extra overhead, and more importantly how to figure this out for myself in the
future. It seems like python can't be easily compiled to C code so that I can
see the difference. But is there a way to see what byte code is generated for
each line, to get a sense of the differences?
Answer: To recap, the NumPy operations you have listed do the following:
1. `b[100][100]` returns row 100 of `b`, and then gets the value at index 100 of this row.
2. `b[100,100]` returns the value at row 100 and column 100 _directly_ (no row is returned first).
3. `b.item(100,100)` does the same as above `b[100,100]` except that the NumPy type is _copied_ to a Python type and returned.
Now, (1) is slowest because it requires two sequential NumPy indexing
operations (I'll explain why this is slower than list indexing below). (2) is
quickest because only a single indexing operation is performed and the NumPy
type does not need to be copied to a Python type (unlike (3)).
Why is list access still faster than `b[100,100]`?
Two of the reasons why `a[100][100]` (getting from the list) is quicker than
`b[100,100]` (getting from the array) are that:
* The bytecode opcode `BINARY_SUBSCR` is executed when indexing both lists and arrays, but it is optimised for the case of Python lists.
* The internal C function handling integer indexing for Python lists is very short and simple. On the other hand, NumPy indexing is much more complicated and a significant amount of code is executed to determine the type of indexing being used so that the correct value can be returned.
Below, the steps for accessing elements in a list and array with `a[100][100]`
and `b[100,100]` are described in more detail.
* * *
### Bytecode
The same four bytecode opcodes are triggered for both lists and arrays:
0 LOAD_NAME 0 (a) # the list or array
3 LOAD_CONST 0 (100) # index number (tuple for b[100,100])
6 BINARY_SUBSCR # find correct "getitem" function
7 RETURN_VALUE # value returned from list or array
Note: if you start chain indexing for multi-dimensional lists, e.g.
`a[100][100][100]`, you start to repeat these bytecode instructions. This does
not happen for NumPy arrays using the tuple indexing: `b[100,100,100]` uses
just the four instructions. This is why the gap in the timings begins to close
as the number of dimensions increases.
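As an aside, since the question asked how to inspect this yourself: `dis.dis` fails on method-wrapper objects, but you can disassemble a compiled expression instead. A small sketch (names `a` and `b` as in the question):
    import dis

    # dis.dis(b.__getitem__) raises TypeError (method-wrapper),
    # but a compiled expression disassembles fine:
    dis.dis(compile('a[100][100]', '<demo>', 'eval'))  # two BINARY_SUBSCR opcodes
    dis.dis(compile('b[100,100]', '<demo>', 'eval'))   # one BINARY_SUBSCR opcode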
### Finding the correct "getitem" function
The functions for accessing lists and arrays are different and the correct one
needs to be found in each case. This task is handled by the
[`BINARY_SUBSCR`](https://hg.python.org/cpython/file/45b1ae1ef318/Python/ceval.c#l1374)
opcode:
w = POP(); // our index
v = TOP(); // our list or NumPy array
if (PyList_CheckExact(v) && PyInt_CheckExact(w)) { // do we have a list and an int?
/* INLINE: list[int] */
Py_ssize_t i = PyInt_AsSsize_t(w);
if (i < 0)
i += PyList_GET_SIZE(v);
if (i >= 0 && i < PyList_GET_SIZE(v)) {
x = PyList_GET_ITEM(v, i); // call "getitem" for lists
Py_INCREF(x);
}
else
goto slow_get;
}
else
slow_get:
x = PyObject_GetItem(v, w); // else, call another function
// to work out what is needed
Py_DECREF(v);
Py_DECREF(w);
SET_TOP(x);
if (x != NULL) continue;
break;
This code is optimised for Python lists. If the function sees a list, it will
quickly call the function `PyList_GET_ITEM`. This list can now be accessed at
the required index (see next section below).
However, if it doesn't see a list (e.g. we have a NumPy array), it takes the
"slow_get" path. This in turn calls another function
[`PyObject_GetItem`](https://hg.python.org/cpython/file/ea33b61cac74/Objects/abstract.c#l136)
to check which "getitem" function the object is mapped to:
PyObject_GetItem(PyObject *o, PyObject *key)
{
PyMappingMethods *m;
if (o == NULL || key == NULL)
return null_error();
m = o->ob_type->tp_as_mapping;
if (m && m->mp_subscript)
return m->mp_subscript(o, key);
...
In the case of NumPy arrays, the correct function is located in
[`mp_subscript`](https://docs.python.org/3/c-api/typeobj.html#c.PyMappingMethods.mp_subscript)
in the `PyMappingMethods` structure.
Notice the additional function calls before this correct "get" function can be
called. These calls add to the overhead for `b[100]`, although how much will
depend on how Python/NumPy was compiled, the system architecture, and so on.
### Getting from a Python list
Above it was seen that the function
[`PyList_GET_ITEM`](https://hg.python.org/cpython/file/ea33b61cac74/Objects/listobject.c#l195)
is called. This is a short function that essentially looks like this*:
PyList_GetItem(PyObject *op, Py_ssize_t i)
{
if (!PyList_Check(op)) { // check if list
PyErr_BadInternalCall();
return NULL;
}
if (i < 0 || i >= Py_SIZE(op)) { // check i is in range
if (indexerr == NULL) {
indexerr = PyUnicode_FromString(
"list index out of range");
if (indexerr == NULL)
return NULL;
}
PyErr_SetObject(PyExc_IndexError, indexerr);
return NULL;
}
return ((PyListObject *)op) -> ob_item[i]; // return reference to object
}
* `PyList_GET_ITEM` _is actually the macro form of this function which does the same thing, minus error checking._
This means that getting the item at index `i` of a Python list is relatively
simple. Internally, Python checks whether the object being indexed is a
list, whether `i` is in the correct range for the list, and then returns the
reference to the object in the list.
### Getting from a NumPy array
In contrast, NumPy has to do much more work before the value at the requested
index can be returned.
Arrays can be indexed in a variety of different ways and NumPy has to decide
which index routine is needed. The various indexing routines are handled
largely by code in
[`mapping.c`](https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/mapping.c#l141).
Anything used to index NumPy arrays passes through the function
[`prepare_index`](https://github.com/numpy/numpy/blob/master/numpy/core/src/multiarray/mapping.c#l164)
which begins the parsing of the index and stores the information about
broadcasting, number of dimensions, and so on. Here is the call signature for
the function:
NPY_NO_EXPORT int
prepare_index(PyArrayObject *self, PyObject *index,
npy_index_info *indices,
int *num, int *ndim, int *out_fancy_ndim, int allow_boolean)
/* @param the array being indexed
* @param the index object
* @param index info struct being filled (size of NPY_MAXDIMS * 2 + 1)
* @param number of indices found
* @param dimension of the indexing result
* @param dimension of the fancy/advanced indices part
* @param whether to allow the boolean special case
*/
The function has to do a lot of checks. Even for a relatively simple index
such as `b[100,100]`, a lot of information has to be inferred so that NumPy
can return a reference (view) to the correct value.
In conclusion, it takes longer for the "getitem" function for NumPy to be
found and the functions handling the indexing of arrays are necessarily more
complex than the single function for Python lists.
|
Python sqlite3: run different sqlite3 version
Question: I'm running a Django project using sqlite as one of the databases. I would
like to run the most recent SQLite version when running the project. By that I
mean the most recent sqlite binary, not the sqlite python library.
I have a local sqlite3 binary that is not the system default and I can't
change the default sqlite3 version.
I'm not using Django's ORM but have replaced it with a standalone SQLAlchemy
version.
I've found one related link but that had to do with running the most recent
python sqlite library version.
[How to upgrade sqlite3 in python 2.7.3 inside a
virtualenv?](http://stackoverflow.com/questions/14541869/how-to-upgrade-
sqlite3-in-python-2-7-3-inside-a-virtualenv)
Answer: Python can't use the `sqlite3` binary directly. It always uses a module which
is linked against the `sqlite3` shared library. That means you have to follow
the instructions in "[How to upgrade sqlite3 in python 2.7.3 inside a
virtualenv?](http://stackoverflow.com/questions/14541869/how-to-upgrade-
sqlite3-in-python-2-7-3-inside-a-virtualenv)" to create a version of the
`pysqlite` module in your virtualenv.
You can then use this import
from pysqlite2 import dbapi2 as sqlite
to shadow the system's default `sqlite` module with the new one.
Another option would be to get Python's source code, compile everything and
copy the file `sqlite.so` into your virtualenv. The drawback of this approach
is that it's brittle and hard to repeat by other people.
|
Python timer on randint
Question: If I have this code:
r = randint(0, 255)
g = randint(0, 255)
b = randint(0, 255)
screen.fill((r, g, b))
How do I set it so that it chooses a random number every, say, 10 seconds?
Answer: There are **many different** approaches to that problem. :)
A very simple, quick-and-dirty method is to **pause the game** (i.e. set a
delay) in the main game loop for an amount of time. You could use PyGame's
[`pygame.time.delay()`](http://www.pygame.org/docs/ref/time.html#pygame.time.delay)
function:
#import pygame and the randint function from the random module
import pygame
from random import randint

pygame.init()
screen = pygame.display.set_mode((800,800)) #create screen

while True:
    #get events from the event queue
    for ev in pygame.event.get():
        if ev.type == pygame.KEYDOWN and ev.key == pygame.K_ESCAPE:
            pygame.quit()
            exit()

    #create random variables r, g and b
    r = randint(0, 255)
    g = randint(0, 255)
    b = randint(0, 255)

    #fill screen
    screen.fill((r, g, b))

    #update whole screen
    pygame.display.flip()

    #set timeout of 500ms (i.e. pause the game for 500ms)
    pygame.time.delay(500)
The problem with this approach is that you **can't handle any events** or do
other work **for the amount of time (e.g. 500ms) the game pauses**.
To avoid this problem you could **create** your very **own event which appears
on the event queue every given number of milliseconds**. You could use
[`pygame.time.set_timer()`](http://www.pygame.org/docs/ref/time.html#pygame.time.set_timer)
to create such an event:
import pygame
from random import randint

pygame.init()

#create a new event id called event_500ms
event_500ms = pygame.USEREVENT + 1

#set the timer event to appear on the event queue every 500 milliseconds
pygame.time.set_timer(event_500ms, 500)

screen = pygame.display.set_mode((800,800))

while True:
    #get events from the event queue
    for ev in pygame.event.get():
        if ev.type == pygame.KEYDOWN and ev.key == pygame.K_ESCAPE:
            pygame.quit()
            exit()

        #check for the event_500ms event
        if ev.type == event_500ms:
            r = randint(0, 255)
            g = randint(0, 255)
            b = randint(0, 255)
            screen.fill((r, g, b))
            pygame.display.flip()
I hope this helps :)
|
Python: ctypes and Pointer to Structure
Question: I am trying to make a pointer to a struct and then de-reference it. But it's
crashing. I have mimicked the behavior here with this simple code.
from ctypes import *
import ctypes
class File(Structure):
_fields_ = [("fileSize", c_uint),
("fileName", c_byte * 32)]
f = File()
f.fileSize = 2
print(f.fileSize)
P = ctypes.POINTER(File)
p = P.from_address(addressof(f))
print(p.contents.fileSize)
Can someone point out what's the issue with this code?
Thanks in advance.
Answer: this works (I just tried):
p = pointer(f)
no need to instantiate P at all. To be clearer, given that p and P look quite
similar on screen:
from ctypes import *
class File(Structure):
_fields_ = [("fileSize", c_uint),
("fileName", c_byte * 32)]
f = File()
f.fileSize = 2
print(f.fileSize)
p = pointer(f)
print(p.contents.fileSize)
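If you specifically want to go through an address or reference, `ctypes.cast` also works; a self-contained sketch:
    import ctypes

    class File(ctypes.Structure):
        _fields_ = [("fileSize", ctypes.c_uint),
                    ("fileName", ctypes.c_byte * 32)]

    f = File()
    f.fileSize = 2

    # cast a reference to f into a typed pointer; equivalent to pointer(f)
    p = ctypes.cast(ctypes.byref(f), ctypes.POINTER(File))
    print(p.contents.fileSize)  # 2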
|
How to read multiple NetCDF files from a folder in python
Question: I am attempting to make a plot of mean temperatures for multiple years
(1979-2014); the only problem I am having is reading multiple
NetCDF (.nc) files from a folder. At the moment my program will plot a single
file, but I do not understand how to make it read all the files in a folder (one
for each year). I want to find the mean for all of the years. I left out the
plotting code because that part is fine; the only help I need is with looping
through all the files in a single folder.
import numpy as np
from mpl_toolkits.basemap import Basemap
from netCDF4 import Dataset
import matplotlib.pyplot as plt
q=Dataset('/Users/marjoryrogers/Desktop/July_Temp/MERRA300.prod.assim.tavgM_2d_rad_Nx.201407.SUB-4.nc','r',format='NETCDF4')
q.variables
#jan_temp = q.variables['ts'] # units here, degrees
july_temp = q.variables['ts']
lats = q.variables['latitude']
lons = q.variables['longitude']
#jan_temp.shape
july_temp.shape
lats[:], lons[:]
#q=Dataset('Users/marjoryrogers/Desktop/Spring 2015/Hydroclimatology/MERRA301.prod.assim.tavgM_2d_rad_Nx.200805.SUB.nc', 'r', format='NETCDF4')
july_temp = q.variables['ts']
#jan_july = np.concatenate((may_temp, jun_temp), axis=0)
#jan_july.shape
#aver_temp = np.mean(jan_temp, axis=0)# average temperature
aver_temp2 = np.mean(july_temp, axis=0)
Answer: The [netcdf4-python library has a
class](http://unidata.github.io/netcdf4-python/netCDF4.MFDataset-class.html)
that can read multiple netcdf files, making variables that have a record
dimension appear as a single big variable.
import netCDF4 as nc
# read multiple files (wildcard)
vn = nc.MFDataset('data_y*.nc')
# read multiple files (file list)
vn = nc.MFDataset(['data_y1997','data_y1998','data_y1999'])
# Variable from multiple files.
airv = vn.variables['ts']
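From there, the multi-year mean the question asks about can be taken along the record (time) axis; a sketch, with the wildcard pattern and variable name assumed from the question:
    import numpy as np
    import netCDF4 as nc

    # aggregate all yearly files (pattern assumed) into one logical dataset
    vn = nc.MFDataset('/Users/marjoryrogers/Desktop/July_Temp/*.nc')
    july_temp = vn.variables['ts'][:]        # record dimension spans all files
    aver_temp = np.mean(july_temp, axis=0)   # mean over all years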
|
Drawing 3D points on a 2D plot reading values from a file
Question: I have a file containing sets of points with 3 coordinates, separated by tabs.
Like this (spaces added for readability, not present in the original file):
x0 \t y0 \t z0
x0 \t y1 \t z1
x1 \t y0 \t z0
x1 \t y1 \t z1
x1 \t y2 \t z2
x2 \t y0 \t z0
...
I'd like to plot them as separate lines on a single 2D plot, like:
line for all points with x=x0, label=x0
line for all points with x=x1, label=x1
line for all points with x=x2, label=x2
Painting those lines with a different color.
I know numpy has a cool function for reading columns, like this:
my_data = np.genfromtxt(input_file, delimiter='\t', skiprows=0)
Y = my_data[:, 1]
Z = my_data[:, 2]
Is there a similarly quick and clean way of picking column values based on the
value of another column?
If there's no quick function to do that (compose a column based on the x
value next to it), I can parse the file and build the data structure step by
step.
Then I would do something like this, using Matplotlib:
ax = plt.axes()
ax.set_xlabel('Y')
ax.set_ylabel('Z')
# for each value of X
# pick Y and Z values
plt.plot(Y, Z, linestyle='--', marker='o', color='b', label='x_val')
But I'm sure there's a more Pythonic way of doing that. Maybe some trick with
list comprehension?
EDIT: here's the full working code (thanks to the people who answered). I just
need a way to make it show without cutting the legend
import os
import numpy as np
import matplotlib.pyplot as plt
input_file = os.path.normpath('C:/Users/sturaroa/Documents/my_file.tsv')
# read values from file, by column
my_data = np.genfromtxt(input_file, delimiter='\t', skiprows=0)
X = my_data[:, 0] # 1st column
Y = my_data[:, 1] # 2nd column
Z = my_data[:, 2] # 3rd column
# read the unique values in X and use them as keys in a dictionary of line properties
d = {val: {'label': 'x {}'.format(val), 'linestyle': '--', 'marker': 'o'} for val in set(X)}
# draw a different line for each of the unique values in X
for val, kwargs in d.items():
    mask = X == val
    y, z = Y[mask], Z[mask]
    plt.plot(y, z, **kwargs)
# label the axes of the plot
ax = plt.axes()
ax.set_xlabel('Y')
ax.set_ylabel('Z')
# get the labels of all the lines in the graph
handles, labels = ax.get_legend_handles_labels()
# create a legend growing it from the middle and put it on the right side of the graph
lgd = ax.legend(handles, labels, loc='center left', bbox_to_anchor=(1.0, 0.5))
# save the figure so that the legend fits inside it
plt.savefig('my_file.pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')
plt.show()
Answer: Assume you have a dictionary of `value:kwargs` pairs, where `value` is the
value that `X` must take to be in that curve and `kwargs` is a dict that holds
arguments to be passed to the plotting function.
The code below will use the `value` to construct a mask which can be used to
index and choose the appropriate points.
import numpy as np
import matplotlib.pyplot as plt

my_data = np.genfromtxt('data.txt', delimiter=' ', skiprows=0)
X = my_data[:, 0]
Y = my_data[:, 1]
Z = my_data[:, 2]

d = {
    0: {'label': 'x0'},
    1: {'label': 'x1'}
}

for val, kwargs in d.items():
    mask = X == val
    y, z = Y[mask], Z[mask]
    plt.plot(y, z, **kwargs)

plt.legend()
plt.show()

|
Python Countdown Timer includes Extra Zero
Question: I'm attempting to make a timer that, once any number of minutes is input, counts
down the time in a mm:ss format in the window. However, when inputting a time
over 10 mins, an extra zero appears at the end of the timer once the minutes
get below 10. I cannot seem to figure out how to get rid of it and was hoping
for another set of eyes to see if I've made any silly mistakes.
import time, sys, os
os.system("cls")
def countdown_screen(endtime):
sectot=int(endtime)*60
print("*******************************************************************************")
for i in range(0,sectot):
mins=(int(sectot)/60)
secs=sectot-(mins*60)
if secs<10:
secs="0"+str(secs)
if mins==10 and secs==0:
secs=secs-"0"
sys.stdout.write("\rSystem standby for: %s:%s" %(mins,secs))
sys.stdout.flush()
time.sleep(1)
sectot=sectot-1
sys.stdout.write("\rSystem standby complete.")
print""
endtime=raw_input("mins:")
countdown_screen(endtime)
Answer: You are overloading `secs` to be an int and then a string, that's sure to be a
problem:
if secs<10:
secs="0"+str(secs)
if mins==10 and secs==0:
secs=secs-"0"
But you should simplify your whole approach and let string formatting do the
work for you:
import time, sys, os

os.system("cls")

def countdown_screen(endtime):
    sectot = int(endtime) * 60
    print("*******************************************************************************")
    for i in range(0, sectot):
        sys.stdout.write("\rSystem standby for: %02d:%02d" % (sectot / 60, sectot % 60))
        sys.stdout.flush()
        time.sleep(1)
        sectot = sectot - 1
    sys.stdout.write("\rSystem standby complete.")
    print ""

endtime = raw_input("mins:")
countdown_screen(endtime)
|
Issue With Relative Imports In Python
Question: I'm running into an issue with how to properly declare imports for some
modules that I've written.
Suppose the follow directory structure:
main_dir/
__init__.py
module_A
sub_dir/
__init__.py
module_B
module_C
So that modules B and C are both in the same subdirectory relative to module
A.
Module B imports C. Module A sometimes imports B.
So, in Module B, using `import module_C` works fines.
And in Module A, using `import sub_dir.module_C` works fine.
However, in Module A, using `import sub_dir.module_B` causes an ImportError
`no module named 'module_C'` because B imports C.
I'm assuming that I could change B to `import sub_dir.module_C`, but I don't
want to do that because then it will break when I run B directly rather
than importing B from A.
What's the correct way(s) to handle this sort of issue?
Answer: This should be your app structure of files.
app/
├── __init__.py
├── module_a.py
└── subdir
├── __init__.py
├── module_b.py
└── module_c.py
**module_a.py**
from subdir import module_b, module_c
Then, you will have access to all modules from **module_a**.
If **module_b** imports **module_c** while **module_c** also imports **module_b**, you
will have a [cyclic
import](http://stackoverflow.com/questions/744373/circular-or-cyclic-imports-
in-python) issue. This is a design question. You need to review your code and
rethink how to link the modules.
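To address the worry about module_b breaking when started directly: a hedged option is to give module_b an explicit relative import and then run it with the package context intact, e.g.:
    # subdir/module_b.py (one possible sketch, not the only fix)
    from . import module_c  # resolves correctly when imported as part of the package

    # started directly, this needs the package context, e.g. from the app dir:
    #   python -m subdir.module_b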
|
yum install python-setuptools to install easy_install and ansible - errors: AttributeError: other Python Errors
Question: Goal: Install ansible on a RedHat Linux machine.
**Little overview on how it all started:** When my Linux machine was RedHat
5.9 (Tikanga), the default python installed version was 2.4. I tried my best,
but couldn't get anything to work as Ansible requires python >= 2.6. I tried
installing 2.7.9 on Linux 5.9 version but then things started to act up really
fast.
I did try 2.7.9 python on Linux 5.9 as "make altinstall" instead of install
but still there were lots of errors while running yum / etc system level
commands.
A few of the errors that came up (with or without running sudo):
**# sudo pip install ansible**
Traceback (most recent call last):
File "/usr/bin/pip", line 7, in ?
sys.exit(
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 236, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2097, in load_entry_point
return ep.load()
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1830, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/usr/lib/python2.4/site-packages/pip-6.0.8-py2.4.egg/pip/__init__.py", line 211
except PipError as exc:
^
SyntaxError: invalid syntax
or
**# sudo easy_install pip**
Searching for pip
Best match: pip 6.0.8
Processing pip-6.0.8-py2.4.egg
pip 6.0.8 is already the active version in easy-install.pth
Installing pip script to /usr/bin
Installing pip2 script to /usr/bin
Installing pip2.4 script to /usr/bin
Using /usr/lib/python2.4/site-packages/pip-6.0.8-py2.4.egg
Processing dependencies for pip
or
**# sudo pip install ansible**
Traceback (most recent call last):
File "/usr/bin/pip", line 7, in ?
sys.exit(
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 236, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2097, in load_entry_point
return ep.load()
File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1830, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/usr/lib/python2.4/site-packages/pip-6.0.8-py2.4.egg/pip/__init__.py", line 211
except PipError as exc:
^
SyntaxError: invalid syntax
or
**# sudo easy_install ansible**
'import site' failed; use -v for traceback
Traceback (most recent call last):
File "/usr/bin/easy_install", line 5, in ?
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
etc....
Finally, to my luck, I thought: let's try installing Python again from scratch
(so I ran **yum erase python**, !!! beware !!!!) and, thanks to a little oversight,
it was to my knowledge the "best" command I ever ran. End result: I ended up
creating a new product, here:
<http://www.keepcalmandcarryon.com/creator/?shortcode=qCsMlpyc>
**Anyways**... Now, I got the server revived with a newer version of RedHat
(version 6.6 Santiago) and this time the default Python on it was 2.6.6.
**Current situation: THIS is what I'm now facing on RH Linux 6.6 with
Python 2.6.6 installed.**
I'm running: **sudo easy_install pip** but I got an error:
sudo: easy_install: command not found
To resolve the above, I'm now running: `sudo yum install python-setuptools`. It
found the package... but shows me an error.
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
http://74.125.194.100/yum/x86_64/6Server/%24YUM0/Server/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
http://74.125.194.100/yum/x86_64/supplemental/%24YUM0/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Resolving Dependencies
--> Running transaction check
---> Package python-setuptools.noarch 0:0.6.10-3.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================================================
Installing:
python-setuptools noarch 0.6.10-3.el6 release.update 336 k
Transaction Summary
==============================================================================================================================================================================================================================
Install 1 Package(s)
Total download size: 336 k
Installed size: 1.5 M
Is this ok [y/N]: y
Downloading Packages:
http://74.125.194.100/yum/x86_64/6Server/%24YUM0/Server/../Packages/python-setuptools-0.6.10-3.el6.noarch.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Error Downloading Packages:
python-setuptools-0.6.10-3.el6.noarch: failure: ../Packages/python-setuptools-0.6.10-3.el6.noarch.rpm from release.update: [Errno 256] No more mirrors to try.
-bash-4.1$
Any idea, how can i get easy_install, pip or ansible on my Linux machine 6.6
(now).
Thanks.
Answer: Ran the following commands to fix everything and now ansible is finally
installed:
wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo rpm -ivh epel-release-6-8.noarch.rpm
But at this stage, I got an error for Jinja2 ... as below: **ImportError: No
module named setuptools**
-bash-4.1$ sudo python setup.py install
Traceback (most recent call last):
File "setup.py", line 40, in <module>
from setuptools import setup, Extension, Feature
ImportError: No module named setuptools
-bash-4.1$
To fix the above and proceed, I ran the following commands to get
python-setuptools (which installs easy_install):
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python
Great, now I have easy_install installed on my machine.
-bash-4.1$ which easy_install
/usr/bin/easy_install
-bash-4.1$
At this point, I wanted to install ANSIBLE, so I ran the following:
sudo easy_install pip
sudo pip install ansible
The last line in the above ansible install command's output was:
Successfully installed ansible-1.9.0.1 jinja2-2.7.3 markupsafe-0.23 pycrypto-2.6.1
Now, I can see ansible on my machine.
-bash-4.1$ which ansible
/usr/bin/ansible
-bash-4.1$
Oh, oh... something is still not good:
**-bash-4.1$ ansible --help**
Traceback (most recent call last):
File "/usr/bin/ansible", line 36, in <module>
from ansible.runner import Runner
File "/usr/lib/python2.6/site-packages/ansible/runner/__init__.py", line 62, in <module>
from Crypto.Random import atfork
File "/usr/lib64/python2.6/site-packages/Crypto/Random/__init__.py", line 29, in <module>
from Crypto.Random import _UserFriendlyRNG
File "/usr/lib64/python2.6/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 38, in <module>
from Crypto.Random.Fortuna import FortunaAccumulator
File "/usr/lib64/python2.6/site-packages/Crypto/Random/Fortuna/FortunaAccumulator.py", line 39, in <module>
import FortunaGenerator
File "/usr/lib64/python2.6/site-packages/Crypto/Random/Fortuna/FortunaGenerator.py", line 34, in <module>
from Crypto.Util.number import ceil_shift, exact_log2, exact_div
File "/usr/lib64/python2.6/site-packages/Crypto/Util/number.py", line 56, in <module>
if _fastmath is not None and not _fastmath.HAVE_DECL_MPZ_POWM_SEC:
AttributeError: 'module' object has no attribute 'HAVE_DECL_MPZ_POWM_SEC'
**-bash-4.1$**
I checked, my LD_LIBRARY_PATH variable was already set/exported as:
LD_LIBRARY_PATH=/usr/lib64/:/usr/local/lib64
Then what else... OK.
Ran the following steps/commands to resolve the above error and get ansible to
actually WORK!!!. For prompts, I said Yes "y".
* * *
echo "y" | sudo pip uninstall pycrypto
echo "y" | sudo yum erase python-crypto
echo "y" | sudo pip uninstall ansible
sudo pip install ansible
## Time to get gmplib 6.0.0(a) etc
wget https://gmplib.org/download/gmp/gmp-6.0.0a.tar.bz2
tar -xvjpf gmp-6.0.0a.tar.bz2
## Change to the above extracted folder and run some commands, seems like they'll run forever.
cd gmp-6.0.0
./configure
sudo make || echo "these few steps will take good amt of time to complete"
sudo make check || echo "hang on and see if all tests passes in the check process"
sudo make install || echo "final shenzi will be done shortly"
sudo make check || echo "really!!!! ????"
which ansible
ansible || ansible --help
**!!! FINALLY !!!**
**-bash-4.1$ which ansible**
/usr/bin/ansible
**-bash-4.1$ ansible**
Usage: ansible <host-pattern> [options]
Options:
-a MODULE_ARGS, --args=MODULE_ARGS
module arguments
--ask-become-pass ask for privilege escalation password
-k, --ask-pass ask for SSH password
--ask-su-pass ask for su password (deprecated, use become)
-K, --ask-sudo-pass ask for sudo password (deprecated, use become)
--ask-vault-pass ask for vault password
-B SECONDS, --background=SECONDS
run asynchronously, failing after X seconds
(default=N/A)
-b, --become run operations with become (nopasswd implied)
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo),
valid choices: [ sudo | su | pbrun | pfexec | runas ]
--become-user=BECOME_USER
run operations as this user (default=root)
-C, --check don't make any changes; instead, try to predict some
of the changes that may occur
-c CONNECTION, --connection=CONNECTION
connection type to use (default=smart)
-e EXTRA_VARS, --extra-vars=EXTRA_VARS
set additional variables as key=value or YAML/JSON
-f FORKS, --forks=FORKS
specify number of parallel processes to use
(default=5)
-h, --help show this help message and exit
-i INVENTORY, --inventory-file=INVENTORY
specify inventory host file
(default=/etc/ansible/hosts)
-l SUBSET, --limit=SUBSET
further limit selected hosts to an additional pattern
--list-hosts outputs a list of matching hosts; does not execute
anything else
-m MODULE_NAME, --module-name=MODULE_NAME
module name to execute (default=command)
-M MODULE_PATH, --module-path=MODULE_PATH
specify path(s) to module library (default=None)
-o, --one-line condense output
-P POLL_INTERVAL, --poll=POLL_INTERVAL
set the poll interval if using -B (default=15)
--private-key=PRIVATE_KEY_FILE
use this file to authenticate the connection
-S, --su run operations with su (deprecated, use become)
-R SU_USER, --su-user=SU_USER
run operations with su as this user (default=root)
(deprecated, use become)
-s, --sudo run operations with sudo (nopasswd) (deprecated, use
become)
-U SUDO_USER, --sudo-user=SUDO_USER
desired sudo user (default=root) (deprecated, use
become)
-T TIMEOUT, --timeout=TIMEOUT
override the SSH timeout in seconds (default=10)
-t TREE, --tree=TREE log output to this directory
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=c400093)
--vault-password-file=VAULT_PASSWORD_FILE
vault password file
-v, --verbose verbose mode (-vvv for more, -vvvv to enable
connection debugging)
--version show program's version number and exit
**-bash-4.1$**
PS: Stay away if you can from "_yum erase python_ " and entering "y" for it,
good luck.
|
Threading in python - processing multiple large files concurrently
Question: I'm new to python and I'm having trouble understanding how threading works. By
skimming through the documentation, my understanding is that calling `join()`
on a thread is the recommended way of blocking until it completes.
To give a bit of background, I have 48 large csv files (multiple GB) which I
am trying to parse in order to find inconsistencies. The threads share no
state. This can be done single-threadedly in a reasonable amount of time for
a one-off, but I am trying to do it concurrently as an exercise.
Here's a skeleton of the file processing:
def process_file(data_file):
    with open(data_file) as f:
        print "Start processing {0}".format(data_file)
        line = f.readline()
        while line:
            # logic omitted for brevity; can post if required
            # pretty certain it works as expected, single 'thread' works fine
            line = f.readline()
    print "Finished processing file {0} with {1} errors".format(data_file, error_count)

def process_file_callable(data_file):
    try:
        process_file(data_file)
    except:
        print >> sys.stderr, "Error processing file {0}".format(data_file)
And the concurrent bit:
def partition_list(l, n):
    """ Yield successive n-sized partitions from a list.
    """
    for i in xrange(0, len(l), n):
        yield l[i:i+n]

partitions = list(partition_list(data_files, 4))
for partition in partitions:
    threads = []
    for data_file in partition:
        print "Processing file {0}".format(data_file)
        t = Thread(name=data_file, target=process_file_callable, args=(data_file,))
        threads.append(t)
        t.start()
    for t in threads:
        print "Joining {0}".format(t.getName())
        t.join(5)
    print "Joined the first chunk of {0}".format(map(lambda t: t.getName(), threads))
I run this as:
python -u datautils/cleaner.py > cleaner.out 2> cleaner.err
My understanding is that join() should block the calling thread waiting for
the thread it's called on to finish, however the behaviour I'm observing is
inconsistent with my expectation.
I never see errors in the error file, but I also never see the expected log
messages on stdout.
The parent process does not terminate unless I explicitly kill it from the
shell. If I check how many prints I have for `Finished ...` it's never the
expected 48, but somewhere between 12 and 15. However, having run this
single-threadedly, I can confirm that the multithreaded run is actually processing
everything and doing all the expected validation, only it does not seem to
terminate cleanly.
I know I must be doing something wrong, but I would really appreciate if you
can point me in the right direction.
Answer: I can't see where the mistake in your code is. But I can recommend that you
refactor it a little bit. First of all, threading in Python is not concurrent
at all. It's just an illusion, because there is a [Global Interpreter
Lock](https://wiki.python.org/moin/GlobalInterpreterLock), so only one thread
can be executed at a time. That's why I recommend you use the
[multiprocessing
module](https://docs.python.org/2/library/multiprocessing.html):

from multiprocessing import Pool, cpu_count

pool = Pool(cpu_count())  # note: cpu_count is a function and must be called
for partition in partition_list(data_files, 4):
    res = pool.map(process_file_callable, partition)
    print res
Second, you are reading the file in a non-Pythonic way:

with open(...) as f:
    line = f.readline()
    while line:
        ... # do(line)
        line = f.readline()

Here is the Pythonic way:

with open(...) as f:
    for line in f:
        ... # do(line)
> This is memory efficient, fast, and leads to simple code. (c) PyDoc
By the way, I have one hypothesis about what may happen to your program in the
multithreaded version: the app becomes slower, because unordered access to the
hard disk drive is significantly slower than ordered access. You can try to
check this hypothesis using `iostat` or `htop`, if you are using Linux.
If your app does not finish its work, and it isn't doing anything in a process
monitor (CPU and disk are not active), it means you have some kind of deadlock
or blocked access to the same resource.
|
Python regex £ sign
Question: I'm reading from a text file which contains pound signs (£):
f = open(file, 'r')
string = f.read()
f.close()
Along with some other regex operations, I want to remove these pound signs,
and write the string to a new file. The closest I've got to making this work
is the following code:
n = re.compile(unichr(163))
string = n.sub('', string)
This seems to find the pound signs correctly, but instead of replacing them
with nothing, the `£` characters are converted to this symbol: `Â`
Anyone have any idea what's going on?
Answer: # Summary:
> In utf8, `£` maps to raw bytes `\xc2\xa3`. The `re` module allows for string
> substitution to occur between unicode and byte encoded strings, which is an
> error.
It's my opinion that J.F. Sebastian's
[answer](http://stackoverflow.com/a/29317102/1213041) is more succint, but
here is a walkthrough.
# Details:
Calls to `read()` return a bytestring.
To illustrate, lets create the following file `durp`:
echo -n "£" > durp
The next command gets the contents of the file in hex:
$ cat durp | xxd | cut -d " " -f 2
c2a3
**Note:** Visiting this [url](http://cdosborn.github.io/unicode?cp=163/) will
display `£` in multiple encodings.
These are the raw bytes which constitute `£`. What does python do with the
file when its read?
$ python
> f = open("durp")
> f.read()
'\xc2\xa3'
It doesn't know what the encoding is so it represents the bytes in their
escaped hex form.
Let's import your code:
> import re
> r = re.compile(u'£')
> u'£'
u'\xa3'
That last line is just to see what we're making a pattern on. This is the
source of the error.
Now we perform the substitution on the contents of the file:
> r.sub('', '\xc2\xa3')
'\xc2'
Which is conceivable but wrong. We substituted `''` for `'\xa3'` in
`'\xc2\xa3'` and got `'\xc2'`. This is an error in `re`, because unicode
strings are being mixed with bytestrings. It doesn't make sense to perform
substitutions of characters that have different encodings. This is essentially
substituting bytes rather than characters.
J.F. Sebastian's [answer](http://stackoverflow.com/a/29317102/1213041)
explains how your terminal would interpret `'\xc2'` as `Â`.
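A minimal sketch of the fix, then: decode the file to unicode first, so the substitution operates on characters rather than raw bytes (the filenames here are made up):
    # -*- coding: utf-8 -*-
    import codecs
    import re

    with codecs.open('input.txt', encoding='utf-8') as f:  # hypothetical filename
        text = f.read()          # a unicode string, so u'\xa3' is one character

    text = re.sub(u'\xa3', u'', text)  # pound signs removed cleanly

    with codecs.open('output.txt', 'w', encoding='utf-8') as f:
        f.write(text)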
|
Logic for if statement that executes code based on exception in Python
Question: I would like to run a script every five minutes in windows task scheduler. The
script reads a JSON web-service of service requests and writes specified
fields to an esri geodatabase.
For this process, there is a time stamp, with the information to query the
time per service request.
My question is how do I create the logic in this script to say **"Hey, I ran
successfully for 12:00PM-12:05:59PM, something happened for 12:06-12:11:59,
but I will run for 12:11-12:16:59, and since I am so nice I will grab data for
12:06-12:11."**
the logic that I have built into my code so far is;
import datetime
DateofDataCreation = 2015-02-17 16:53:25
i = 5
Start = datetime.datetime.now()
now_minus_5 = Start - datetime.timedelta(minutes =i)
if DateofDataCreation >= now_minus_5:
WriteToDatabase
else:
print "No Current Data"
Answer: I am not sure if I understand your question exactly. Based on what I think you
are asking, try this:
import datetime

# If you change the time to be after now() then it will print "WriteToDatabase".
# If the time is before, then it will print "No Current Data".
DateofDataCreation = datetime.datetime(2015, 2, 17, 16, 53, 25)

i = 5
Start = datetime.datetime.now()
now_minus_5 = Start - datetime.timedelta(minutes=i)

if DateofDataCreation >= now_minus_5:
    print("WriteToDatabase")
else:
    print("No Current Data")
|
python scikit error - no module named sklearn
Question: When I follow the website
(<https://www.kaggle.com/wiki/GettingStartedWithPythonForDataScience>) and
type `python makeSubmission.py`, I get the following error message:
ImportError: No module named sklearn
I think I have already successfully installed the following: Python 3.4 for
Windows, SciPy, NumPy, matplotlib, setuptools, scikit-learn, and PyCharm.
I then opened the "Python 3.4 command line" and typed `import sys;
print(sys.__path__)`, but I got the message
Traceback (most recent call last): File "<stdin>", line 1, in <module>
AttributeError: module object has no attribute '__path__'
Can anyone help?
Answer: Looks like you haven't installed scikit-learn properly. `pip install -U
scikit-learn` should do the job. Also, I would suggest downloading the
Anaconda distribution of python if you're planning to use python for kaggle
contests. It takes care of all the necessary dependencies and contains all the
commonly needed python packages for the contest. I found that easier than the
tedious download of the dependencies. Here's the
[Link](https://store.continuum.io/cshop/anaconda/ "Link")
|
Python: How can i find a directory that matches the first 3 characters from a string?
Question: I have a directory with 50+ directories inside which are named `"XXX -
something"`
If I have `X = '123'` How can I find the directory that starts with `'123'`?
Answer: You can try this using
[`os.walk`](https://docs.python.org/2/library/os.html#os.walk)
import os
[i[0] for i in os.walk('/path/to/directory/') if i[0].split("/")[-1].startswith(X)]
It will recursively return a `list` of the folders under `/path/to/directory/`
whose names start with `X` (your variable).
**OR**
[i for i,j,k in os.walk('/path/to/directory/') if i.split("/")[-1].startswith(X)]
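If you only need the immediate children of the folder (not a recursive walk), the `glob` module is a shorter alternative; a small sketch with an assumed path:
    import glob
    import os.path

    X = '123'
    # direct entries starting with X; keep only the directories
    matches = [p for p in glob.glob('/path/to/directory/' + X + '*')
               if os.path.isdir(p)]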
|
IronPython: How to call a function that expects an array of value-types?
Question: I have come across a problem with IronPython that I can't solve. I need to
call a function that takes a parameter of type array of value-types. The
function-signature (in C++/CLI notation) is this:
static int PLGServiceInterface::PLGService::MapLowSpeedRealtimeNames(cli::array<System::String ^> ^ SignalNames, int timeout_ms, cli::array<PLGServiceInterface::LVCluster_1> ^% Channels, System::String ^% ReportText)
When I call the function from IronPython
import clr
clr.AddReferenceToFileAndPath('PLGServiceAPI.dll')
from System import String, Array
from PLGServiceInterface import PLGService, LVCluster_1
outtext = clr.Reference[String]()
outdata = clr.Reference[Array[LVCluster_1]]()
PLGService.MapLowSpeedRealtimeNames(('hello', 'world'), 300, outdata, outtext)
I get the following error
Traceback (most recent call last):
File "test2.py", line 9, in <module>
TypeError: expected StrongBox[Array[LVCluster_1]], got StrongBox[Array[LVCluster_1]]
The error message is not very helpful but I assume the problem is that
"outdata" is an array containing value types instead of reference types.
Apparently IronPython doesn't know how to do the boxing in that case.
With C++/CLI I can use the function just fine:
using namespace System;
using PLGServiceInterface::LVCluster_1;
using PLGServiceInterface::PLGService;
int main(array<System::String ^> ^args)
{
array<LVCluster_1> ^outdata;
array<String^> ^names = gcnew array<String^>{"one", "two"};
String ^o;
PLGService::MapLowSpeedRealtimeNames(names, 300, outdata, o);
Console::WriteLine(o);
Console::Read();
return 0;
}
I assume if the function would instead expect an array of references
array<LVCluster_1 ^> ^outdata
I could call it with IronPython.
Is there any way to make this work with IronPython? By the way, the assembly
was created with LabView and LVCluster_1 is a LabView-Cluster (structure).
Edit: The Intermediate Language signature of MapLowSpeedRealtimeNames is this:
.method public hidebysig static int32 MapLowSpeedRealtimeNames(string[...] SignalNames,
int32 timeout_ms,
[out] valuetype PLGServiceInterface.LVCluster_1[...]& Channels,
[out] string& ReportText) cil managed
Does anybody know what the meaning of the 3 dots in the array brackets is?
When I compile a function that takes an array in C++/CLI, I only get the
opening and closing brackets without the dots in between.
These dots seem suspicious to me because I also get the TypeError when calling
a method that takes an out-parameter of type array of double(float64):
.method public hidebysig static int32 ReadVariables(string[...] SignalNames,
int32 timeout_ms,
[out] string& ReportText,
[out] float64[...]& Data) cil managed
Generates the error
TypeError: expected StrongBox[Array[float]], got StrongBox[Array[float]]
Answer: Have you tried using Array.CreateInstance(LVCluster_1, length) to create the
array? See example below:
outdata = Array.CreateInstance(LVCluster_1, 2)
The code above works for me, but my C++/CLI function signature doesn't use the
tracking reference (%). So if I were to re-write your function signature it
would look like:
static int PLGServiceInterface::PLGService::MapLowSpeedRealtimeNames(cli::array<System::String ^> ^ SignalNames, int timeout_ms, cli::array<PLGServiceInterface::LVCluster_1> ^ Channels, System::String ^ ReportText)
|
Why does OLS raise LinAlgError: SVD did not converge?
Question: I have an array:
Num Col2 Col3 Col4
1 6 1 1
2 60 0 2
3 60 0 1
4 6 0 1
5 60 1 1
And the code:
y = df.loc[:,'Col3'] # response
X = df.loc[:,['Col2','Col4']] # predictor
X = sm.add_constant(X) #add constant
est = sm.OLS(y, X) #build regression
est = est.fit() #full model
And when it reaches .fit() it raises an error which is:
Traceback (most recent call last):
File "D:\Users\Anna\workspace\mob1\mobols.py", line 36, in <module>
est = est.fit() #full model
File "C:\Python27\lib\site-packages\statsmodels\regression\linear_model.py", line 174, in fit
self.pinv_wexog, singular_values = pinv_extended(self.wexog)
File "C:\Python27\lib\site-packages\statsmodels\tools\tools.py", line 392, in pinv_extended
u, s, vt = np.linalg.svd(X, 0)
File "C:\Python27\lib\site-packages\numpy\linalg\linalg.py", line 1327, in svd
u, s, vt = gufunc(a, signature=signature, extobj=extobj)
File "C:\Python27\lib\site-packages\numpy\linalg\linalg.py", line 99, in _raise_linalgerror_svd_nonconvergence
raise LinAlgError("SVD did not converge")
numpy.linalg.linalg.LinAlgError: SVD did not converge
What's the problem? And how can I solve it?
Thank you
Answer: It looks like you're using Pandas and statsmodels. I ran your snippet and did
not get the 'raise LinAlgError("SVD did not converge")' exception. Here's what
I ran:
import numpy as np
import pandas
import statsmodels.api as sm
d = {'col2': [6, 60, 60, 6, 60], 'col3': [1, 0, 0, 0, 1], 'col4': [1, 2, 1, 1, 1]}
df = pandas.DataFrame(data=d, index=np.arange(1, 6))
print df
Prints:
col2 col3 col4
1 6 1 1
2 60 0 2
3 60 0 1
4 6 0 1
5 60 1 1
y = df.loc[:, 'col3']
X = df.loc[:, ['col2', 'col4']]
X = sm.add_constant(X)
est = sm.OLS(y, X)
est = est.fit()
print est.summary()
This prints:
OLS Regression Results
==============================================================================
Dep. Variable: col3 R-squared: 0.167
Model: OLS Adj. R-squared: -0.667
Method: Least Squares F-statistic: 0.2000
Date: Sat, 28 Mar 2015 Prob (F-statistic): 0.833
Time: 16:43:02 Log-Likelihood: -3.0711
No. Observations: 5 AIC: 12.14
Df Residuals: 2 BIC: 10.97
Df Model: 2
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 1.0000 1.003 0.997 0.424 -3.316 5.316
col2 -8.674e-18 0.013 -6.62e-16 1.000 -0.056 0.056
col4 -0.5000 0.866 -0.577 0.622 -4.226 3.226
==============================================================================
Omnibus: nan Durbin-Watson: 1.500
Prob(Omnibus): nan Jarque-Bera (JB): 0.638
Skew: -0.000 Prob(JB): 0.727
Kurtosis: 1.250 Cond. No. 187.
==============================================================================
So this seems to work, so no problem, at least with this code. Could it be
that you're calling the wrong matrix as df?
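One thing worth ruling out, since the snippet above runs fine on clean data: this error is frequently triggered by NaN or inf values in the inputs. A quick check (assuming `X` and `y` are the pandas objects from the question):
    import numpy as np

    # if either of these prints False, clean the data before calling fit()
    print(np.isfinite(X.values).all())
    print(np.isfinite(y.values).all())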
|
python3: doctest helper/internal functions?
Question: How do I make the following work so that helper's test is run? It doesn't.
def B():
    def helper():
        """
        >>> some doctest
        result
        """
    ...

if __name__ == "__main__":
    import doctest
    doctest.testmod()
Answer: Nested functions cannot be found, because the function object doesn't _exist_
until the `B()` function is run. You'd have to _return_ it as the result of
calling the `B()` function, then assign it to the `__test__` dictionary:
def B():
    def helper():
        """
        >>> some doctest
        result
        """
    return helper

# ...

if __name__ == "__main__":
    import doctest
    __test__ = {'helper': B()}
    doctest.testmod()
`doctest.testmod()` looks for the `__test__` global dictionary and looks for
docstrings on any classes, methods, functions and modules in the values; any
string values are directly executed as docstring tests.
If `B()` does _other_ things besides, then you probably should make `helper()`
a simple global function instead:

def B():
    pass  # uses helper

def helper():
    """
    >>> some doctest
    result
    """

# ...

if __name__ == "__main__":
    import doctest
    doctest.testmod()
|
How to make my Piecewise function zero outside the provided interval in python
Question: Here is my code:
In [61]: import sympy as sp
In [62]: x = sp.Symbol('x')
In [63]: phi_1 = sp.Piecewise( ( (1.3-x)/0.3, 1<=x <=1.3 ))
In [64]: phi_1.subs(x,1.2)
Out[64]: 0.333333333333334
In [65]: phi_1.subs(x,1.4)
Out[65]: Piecewise()
More specifically, I want to get zero as an answer to the input no. 65, since
1.4 is outside the interval [1, 1.3].
Answer: You need to tell `Piecewise` that you want the function to evaluate to zero
when outside the bounds, for example:
import sympy as sp
x = sp.Symbol('x')
phi_1 = sp.Piecewise(
    (0, x < 1),
    (0, x > 1.3),
    ((1.3 - x) / 0.3, True)
)
print(phi_1.subs(x,1.2)) # 0.333333333333334
print(phi_1.subs(x,1.4)) # 0
Note that this syntax works in 0.7.1 and 0.7.6 -- your code raises a
`TypeError` in 0.7.6 with the "compound conditional" `1 <= x <= 1.3`.
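Alternatively, assuming a SymPy version that accepts compound conditions (newer releases do), the interval can be written directly with a default branch:
    import sympy as sp

    x = sp.Symbol('x')
    # the same function, written with a compound condition plus a catch-all zero
    phi_1 = sp.Piecewise(
        ((1.3 - x) / 0.3, sp.And(x >= 1, x <= 1.3)),  # inside [1, 1.3]
        (0, True),                                    # zero everywhere else
    )
    print(phi_1.subs(x, 1.2))  # 0.333...
    print(phi_1.subs(x, 1.4))  # 0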
|
python how to uppercase some characters in string
Question: Here is what I want to do, but it doesn't work:
mystring = "hello world"
toUpper = ['a', 'e', 'i', 'o', 'u', 'y']
array = list(mystring)
for c in array:
    if c in toUpper:
        c = c.upper()
print(array)
`"e"` and `"o"` are not uppercase in my array.
Answer: You can use the [`str.translate()`
method](https://docs.python.org/2/library/stdtypes.html#str.translate) to have
Python replace characters by other characters in one step.
Use the [`string.maketrans()`
function](https://docs.python.org/2/library/string.html#string.maketrans) to
map lowercase characters to their uppercase targets:
try:
    # Python 2
    from string import maketrans
except ImportError:
    # Python 3 made maketrans a static method
    maketrans = str.maketrans

vowels = 'aeiouy'
upper_map = maketrans(vowels, vowels.upper())
mystring.translate(upper_map)
This is the faster and more 'correct' way to replace certain characters in a
string; you can always turn the result of `mystring.translate()` into a list
but I strongly suspect you wanted to end up with a string in the first place.
Demo:
>>> try:
... # Python 2
... from string import maketrans
... except ImportError:
... # Python 3 made maketrans a static method
... maketrans = str.maketrans
...
>>> vowels = 'aeiouy'
>>> upper_map = maketrans(vowels, vowels.upper())
>>> mystring = "hello world"
>>> mystring.translate(upper_map)
'hEllO wOrld'
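For completeness, here is why the original loop failed and a direct fix: rebinding the loop variable `c` never changes the list, so you have to build a new sequence instead; a sketch:
    mystring = "hello world"
    to_upper = set('aeiouy')

    # rebuild the string; assigning to the loop variable would not modify it
    result = ''.join(ch.upper() if ch in to_upper else ch for ch in mystring)
    print(result)  # hEllO wOrld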
|
How can I ask for entry data in tkinter in python 3.4?
Question: How can I ask for entry data in tkinter? I tried using a standard if statement,
but it seems like I'm doing something wrong.
from tkinter import *
class Search(Tk):
def __init__(self):
Tk.__init__(self)
self.entry = Entry(self)
self.search = Button(self, text="Search", command=self.search_button)
self.search.pack(side=LEFT)
self.entry.pack(side=LEFT)
def search_button(self):
(self.entry.get())
if Entry=="example1":
print ("example1")
app = Search()
app.mainloop()
Answer: I think you have an indentation problem. Try this:
def search_button(self):
    if self.entry.get() == "example1":
        print("example1")
I've indented this code block an extra level to indicate that it should be a
`Search` method rather than a global function.
|
Twist failure when using scrapy to crawl a bbs
Question: I am a newbie to Python and Scrapy, and wrote a simple script to crawl posts from
my school's bbs. However, when my spider runs, it gets error messages like
this:
> 2015-03-28 11:16:52+0800 [nju_spider] DEBUG: Retrying
> http://bbs.nju.edu.cn/bbstcon?board=WarAndPeace&file=M.1427299332.A> (failed
> 2 times): [>] 2015-03-28 11:16:52+0800 [nju_spider] DEBUG: Gave up retrying
> http://bbs.nju.edu.cn/bbstcon?board=WarAndPeace&file=M.1427281812.A> (failed
> 3 times): [>] 2015-03-28 11:16:52+0800 [nju_spider] ERROR: Error downloading
> http://bbs.nju.edu.cn/bbstcon?board=WarAndPeace&file=M.1427281812.A>: [>]
>
> 2015-03-28 11:16:56+0800 [nju_spider] INFO: Dumping Scrapy stats:
> {'downloader/exception_count': 99,
> 'downloader/exception_type_count/twisted.web._newclient.ResponseFailed': 99,
> 'downloader/request_bytes': 36236, 'downloader/request_count': 113,
> 'downloader/request_method_count/GET': 113, 'downloader/response_bytes':
> 31135, 'downloader/response_count': 14,
> 'downloader/response_status_count/200': 14, 'dupefilter/filtered': 25,
> 'finish_reason': 'finished', 'finish_time': datetime.datetime(2015, 3, 28,
> 3, 16, 56, 677065), 'item_scraped_count': 11, 'log_count/DEBUG': 127,
> 'log_count/ERROR': 32, 'log_count/INFO': 8, 'request_depth_max': 3,
> 'response_received_count': 14, 'scheduler/dequeued': 113,
> 'scheduler/dequeued/memory': 113, 'scheduler/enqueued': 113,
> 'scheduler/enqueued/memory': 113, 'start_time': datetime.datetime(2015, 3,
> 28, 3, 16, 41, 874807)} 2015-03-28 11:16:56+0800 [nju_spider] INFO: Spider
> closed (finished)
It seems that the spider tries the url but fails, yet this url really does
exist. And there are thousands of posts on the bbs, but every time I run my
spider, it only gets a random few of them. My code is as follows, and I would
really appreciate your help:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from ScrapyTest.items import NjuPostItem

class NjuSpider(CrawlSpider):
    name = 'nju_spider'
    allowed_domains = ['bbs.nju.edu.cn']
    start_urls = ['http://bbs.nju.edu.cn/bbstdoc?board=WarAndPeace']
    rules = [Rule(LinkExtractor(allow=['bbstcon\?board=WarAndPeace&file=M\.\d+\.A']),
                  callback='parse_post'),
             Rule(LinkExtractor(allow=['bbstdoc\?board=WarAndPeace&start=\d+']),
                  follow=True)]

    def parse_post(self, response):
        # self.log('A response from %s just arrived!' % response.url)
        post = NjuPostItem()
        post['url'] = response.url
        post['title'] = 'to_do'
        post['content'] = 'to_do'
        return post
Answer: First, make sure you are not violating the web-site's Terms of Use by taking
the web-scraping approach. [Be a good web-scraping
citizen](http://programmers.stackexchange.com/questions/91760/how-to-be-a-
good-citizen-when-crawling-web-sites).
Next, you can set the `User-Agent` header to pretend to be a browser. Either
provide a `User-Agent` in the `DEFAULT_REQUEST_HEADERS` setting:
DEFAULT_REQUEST_HEADERS = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.104 Safari/537.36'
}
or you can rotate User Agents with a middleware. Here is one I've implemented
based on the `fake-useragent` package:
* [scrapy-fake-useragent](https://github.com/alecxe/scrapy-fake-useragent)
* * *
Another possible problem could be that you are hitting the web-site too often;
consider tweaking the [`DOWNLOAD_DELAY`
setting](http://doc.scrapy.org/en/latest/topics/settings.html#download-delay):
> The amount of time (in secs) that the downloader should wait before
> downloading consecutive pages from the same website. This can be used to
> throttle the crawling speed to avoid hitting servers too hard.
There is another relevant setting that can have a positive impact:
[`CONCURRENT_REQUESTS`](http://doc.scrapy.org/en/latest/topics/settings.html#concurrent-
requests):
> The maximum number of concurrent (ie. simultaneous) requests that will be
> performed by the Scrapy downloader.
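As a sketch, both can be set in your project's `settings.py`; the values
below are only illustrative starting points, not recommendations from the
Scrapy docs:

    DOWNLOAD_DELAY = 2.0     # seconds to wait between requests to the same site
    CONCURRENT_REQUESTS = 4  # down from the default of 16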
|
Python - count words not in a distance of three (3) words from specific words
Question: I am using the following Python code to count words in text (.txt) files,
checking whether any of the words in the text file belong to either of the two
lists of words that I am considering (the word lists are .csv files; imagine
these as "dictionaries"):
import re
import collections
from collections import Counter
import csv
import sys
find_words = re.compile(r'(?<!\S)[A-Za-z]+(?!\S)').findall
wanted1 = set(find_words(open('word_list1.csv').read().lower()))
wanted2 = set(find_words(open('word_list2.csv').read().lower()))
for f in sys.argv[1:]:
cnt1 = cnt2 = cntWords = 0
WANTED = 20
with open(f) as inputfile:
for line in inputfile:
for word in find_words(line.lower()):
myfile.write(word+ "\n")
cntWords += 1
if word in wanted1:
file1.write(word+ "\n")
cnt1 += 1
if word in wanted2:
file2.write(word+ "\n")
cnt2 += 1
At the moment, I am counting every word in the .txt file that happens to
belong in the word lists wanted1 and wanted2.

What I want is to count these words only when there is no negator within a
distance of three words from them.

A negator is any one of the following three words: no, not, never.

In this case, if a negator is within [-3, +3] words of the word I am
examining, the word should not be counted, even if it belongs to one of the
word lists I am examining.
Any idea how to implement this in my code? Thanks.
Example1:
`Word-2 Word-1 Word0 Word1 Word2 not Word3 Word4 Word5 Word6 Word7` -> none of
Word0 up to Word5 should be counted; Word-2, Word-1, Word6, Word7 should be
counted (if they belong in the csv word lists). Instead of "not" it could be
"never" or "no".
Example2: `never Word-2 Word-1 Word0 Word1 Word2` -> Word-2 Word-1 Word0
should not be counted, Word1 Word2 should be counted (if they belong in the
csv word lists). Instead of "never" it could be "not" or "no".
Answer: I put together a little script to do something similar to what you're asking.
I implemented the contents of the _.txt file_ as a multi-line string and
hardcoded the word lists just to simplify things for this example. You can
replace those bits with your file open / reading code. This is likely to be a
very inefficient solution, but it was the clearest way to organise it in my
head. Feel free to optimise to your liking.
# -*- coding: utf-8 -*-
import pprint
from collections import defaultdict
from string import punctuation
# Get a word count for each word in a pair of wordlists that appear in a block of text.
# Exclude the appearance of a word from the count if any of the 3 words before or after the word
# in question are a member of the negator set (no, not, never).
def main():
wordlist1 = ['mickey', 'pluto', 'goofy', 'minnie', 'donald']
wordlist2 = ['bugs', 'daffy', 'elmer', 'foghorn', 'porky']
# Whether to ensure the words in the wordlists are lowercase depends on your use-case
wordlist1 = [element.lower() for element in wordlist1]
wordlist2 = [element.lower() for element in wordlist2]
mergedwordset = set(wordlist1 + wordlist2)
negatorset = set(['no', 'not', 'never'])
# Using collections.defaultdict here so that we can add a key with the value of 1
# if it doesn't already exist and increment the value of the key if it does exist.
countincludingneg = defaultdict(int)
countexcludingneg = defaultdict(int)
# Using a multi-line string here just to simplify this example.
# This will be parsed for the word count.
# Adapt it to your own uses.
# Text excerpts from wikipedia:
# http://en.wikipedia.org/wiki/Pluto_(Disney)
textblock = '''
Pluto, also called Pluto the Pup, is a cartoon character created in 1930 by Walt Disney Productions. He is a red-colored, medium-sized, short-haired dog with black ears. Unlike most Disney characters, Pluto is not anthropomorphic beyond some characteristics such as facial expression, though he did speak for a short portion of his history. He is Mickey Mouse's pet. Officially a mixed-breed dog, he made his debut as a bloodhound in the Mickey Mouse cartoon The Chain Gang. Together with Mickey Mouse, Minnie Mouse, Donald Duck, Daisy Duck, and Goofy, Pluto is one of the "Sensational Six"—the biggest stars in the Disney universe. Though all six are non-human animals, Pluto alone is not dressed as a human.
Pluto debuted in animated cartoons and appeared in 24 Mickey Mouse films before receiving his own series in 1937. All together Pluto appeared in 89 short films between 1930 and 1953. Several of these were nominated for an Academy Award, including The Pointer (1939), Squatter's Rights (1946), Pluto's Blue Note (1947), and Mickey and the Seal (1948). One of his films, Lend a Paw (1941), won the award in 1942. Because Pluto does not speak, his films generally rely on physical humor. This made Pluto a pioneering figure in character animation, which is expressing personality through animation rather than dialogue.
Like all of Pluto's co-stars, the dog has appeared extensively in comics over the years, first making an appearance in 1931. He returned to theatrical animation in 1990 with The Prince and the Pauper and has also appeared in several direct-to-video films. Pluto also appears in the television series Mickey Mouse Works (1999–2000), House of Mouse (2001–2003), and Mickey Mouse Clubhouse (2006–2013).
In 1998, Disney's copyright on Pluto, set to expire in several years, was extended by the passage of the Sonny Bono Copyright Term Extension Act. Disney, along with other studios, lobbied for passage of the act to preserve their copyrights on characters such as Pluto for 20 additional years.
Pluto first and most often appears in the Mickey Mouse series of cartoons. On rare occasions he is paired with Donald Duck ("Donald and Pluto", "Beach Picnic", "Window Cleaners", "The Eyes Have It", "Donald's Dog Laundry", & "Put Put Troubles").
The first cartoons to feature Pluto as a solo star were two Silly Symphonies, Just Dogs (1932) and Mother Pluto (1936). In 1937, Pluto appeared in Pluto's Quin-Puplets which was the first instalment of his own film series, then headlined Pluto the Pup. However, they were not produced on a regular basis until 1940, by which time the name of the series was shortened to Pluto.
His first comics appearance was in the Mickey Mouse daily strips in 1931 two months after the release of The Moose Hunt. Pluto Saves the Ship, a comic book published in 1942, was one of the first Disney comics prepared for publication outside newspaper strips. However, not counting a few cereal give-away mini-comics in 1947 and 1951, he did not have his own comics title until 1952.
In 1936 Pluto got an early title feature in a picture book under title "Mickey Mouse and Pluto the Pup" by Whitman Publishing.
Pluto runs his own neighborhood in Disney's Toontown Online. It's called the Brrrgh and it's always snowing there except during Halloween. During April Toons Week, a weekly event that is very silly, Pluto switches playgrounds with Minnie (all other characters do this as well). Pluto actually talks in Minnie's Melodyland.
Pluto has also appeared in the television series Mickey Mouse Works (1999–2000), Disney's House of Mouse (2001–2003) and Mickey Mouse Clubhouse (2006–present). Curiously enough, however, Pluto was the only standard Disney character not included when the whole gang was reunited for the 1983 featurette Mickey's Christmas Carol, although he did return in The Prince and the Pauper (1990) and Runaway Brain (1995). He also had a cameo in Who Framed Roger Rabbit (1988). In 1996, he made a cameo in the Quack Pack episode "The Really Mighty Ducks".
'''
# Removing leading and trailing whitespace.
# Removing new-lines so we can extend the look-aheads / look-behinds across lines.
# Removing punctuation.
# Setting all text to lowercase
# Adjust to your use-cases
textblock = textblock.strip().replace('\n', ' ').translate(None, punctuation).lower()
textblockwords = textblock.split()
# Construct a list of 7-gram (or less) word windows.
# The window will center on each individual word of the textblock
# and include the 3 words before and after its appearance.
windows = n_gram_word_windows(textblockwords, 3)
# Un-comment the following line if you'd like to see a representation of the n-gram word windows
#pprint.pprint(windows)
for windowdict in windows:
for key, ngramlist in windowdict.iteritems():
# Is the word a member of the wordlists?
if key in mergedwordset:
countincludingneg[key] += 1
                    # Do the words preceding or following appear in the set of negators?
if len(negatorset.intersection(set(ngramlist))) == 0:
countexcludingneg[key] += 1
print "Count including negators"
pprint.pprint(countincludingneg)
print "Count excluding negators"
pprint.pprint(countexcludingneg)
# The idea here is to examine each word in the textblock and
# create a list containing the 3 words before the word, the word itself, and the 3 words following the word.
# This method will return a list of dictionaries.
# The dictionary will be comprised of the examined word as the key, and its n-gram word window as the value.
def n_gram_word_windows(textlist, lookaheadbehind=3):
wordwindows = []
for index, item in enumerate(textlist):
intermediatelist = []
if index < lookaheadbehind:
for preceedingword in textlist[:index]:
intermediatelist.append(preceedingword)
else:
for preceedingword in textlist[index-lookaheadbehind:index]:
intermediatelist.append(preceedingword)
if index < len(textlist):
for lookaheadword in textlist[index:index+lookaheadbehind+1]:
intermediatelist.append(lookaheadword)
wordwindows.append({item: intermediatelist})
return wordwindows
if __name__ == '__main__':
main()
The results look like the following:
macbook:stackoverflow joeyoung$ python negatorparser.py
Count including negators
defaultdict(<type 'int'>, {'mickey': 12, 'donald': 3, 'goofy': 1, 'minnie': 2, 'pluto': 27})
Count excluding negators
defaultdict(<type 'int'>, {'mickey': 12, 'donald': 3, 'goofy': 1, 'minnie': 2, 'pluto': 24})
|
Face Detection using Haar - OpenCV python
Question: I am trying this python code snippet:
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
img = cv2.imread('img.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 3)
for (x,y,w,h) in faces:
img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
but I get this error:
Traceback (most recent call last):
File "test.py", line 14, in <module>
roi_color = img[y:y+h, x:x+w]
TypeError: 'NoneType' object has no attribute '__getitem__'
I found this program [here](http://opencv-python-
tutroals.readthedocs.org/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html#face-
detection).
Answer: I had the same problem, and it comes from the reassignment of the `img`
variable inside the first `for` loop: in OpenCV 2.x, `cv2.rectangle` draws on
the image in place and returns `None`, so `img = cv2.rectangle(...)` leaves
`img` set to `None` and the slice two lines later fails with the error you
see. Just make sure you also have the path to your image set correctly.
I changed the code to the following and it worked for me.
import numpy as np
import cv2
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
img = cv2.imread('/path/to/your/image/img.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 3)
for (x,y,w,h) in faces:
img2 = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
|
PySerial doesn't work in script
Question: I have problem with this script:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import serial
import time
ser = serial.Serial("COM3", 9600, timeout=1)
ser.write("test")
print "Test started\n"
time.sleep(1)
ans = ser.read(4)
print ans
On the Arduino there is an echo program (everything that is sent to the
Arduino is sent back to the computer). With the serial monitor everything
works fine.

When I run my Python script it stops on ser.read() (it is waiting for
incoming data). But when I type the commands from my script directly into the
Python console, everything works without problems.

Why doesn't my code work when started from a file?

My OS: Windows 8.1 (64-bit), Python version: 2.7.9 (64-bit)
Answer: Try this way
import serial
import time
ser = serial.Serial("COM3", 9600, timeout=1)
ser.write("test")
print "Test started - data sent \n"
while True:
ans = ser.read(4)
        if ans:
            print ans
time.sleep(1)
|
Forcing a thread to block all other threads from executing
Question: UPDATE:
[This answer](http://stackoverflow.com/a/16262657/336527 "This answer") states
that what I'm trying to do is impossible as of April 2013. This, however,
seems to contradict what Alex Martelli says in [Python
Cookbook](http://shop.oreilly.com/product/0636920027072.do) (p. 624, 3rd ed.):
> Upon return, PyGILState_Ensure() always guarantees that the calling thread
> has exclusive access to the Python interpreter. This is true even if the
> calling C code is running a different thread that is unknown to the
> interpreter.
[The docs](https://docs.python.org/3.4/c-api/init.html#c.PyGILState_Ensure)
also seem to suggest GIL can be acquired, which would give me hope (except I
don't think I can call `PyGILState_Ensure()` from pure python code, and if I
create a C extension to call it, I'm not sure how to embed my
`memory_daemon()` in that).
Perhaps I'm misreading either the answer or Python Cookbook and the docs.
ORIGINAL QUESTION:
I want a given thread (from `threading` module) to prevent any other thread
from running while a certain segment of its code is executing. What's the
easiest way to achieve it?
Obviously, it would be great to minimize code changes in the other threads, to
avoid using C and direct OS calls, and to make it cross-platform for windows
and linux. But realistically, I'll be happy to just have any solution
whatsoever for my actual environment (see below).
Environment:
* CPython
* python 3.4 (but can upgrade to 3.5 if it helps)
* Ubuntu 14.04
Use case:
For debugging purposes, I calculate memory used by all the objects (as
reported by `gc.get_objects()`), and print some summary report to
`sys.stderr`. I do this in a separate thread, because I want this summary
delivered asynchronously from other threads; I put `time.sleep(10)` at the end
of the `while True` loop that does the actual memory usage calculation.
However, the memory reporting thread takes a while to complete each report,
and I don't want all the other threads to move ahead before the memory
calculation is finished (otherwise, the memory snapshot will be really hard to
interpret).
Example (to clarify the question):
import threading as th
import time
def report_memory_consumption():
# go through `gc.get_objects()`, check their size and print a summary
# takes ~5 min to run
def memory_daemon():
while True:
# all other threads should not do anything until this call is complete
report_memory_consumption()
# sleep for 10 sec, then update memory summary
# this sleep is the only time when other threads should be executed
time.sleep(10)
def f1():
# do something, including calling many other functions
# takes ~3 min to run
def f2():
# do something, including calling many other functions
# takes ~3 min to run
def main():
t_mem = th.Thread(target = memory_daemon)
t1 = th.Thread(target = f1)
t2 = th.Thread(target = f2)
t_mem.start()
t1.start()
t2.start()
# requirement: no other thread is running while t_mem is not sleeping
Answer: You should use threading locks to execute code synchronously between threads.
The answer given is somewhat correct, but I would use reentrant locks to check
again whether you indeed hold the lock.

Do not use plain variables, as described in another answer, to check for lock
possession; such variables can get corrupted between multiple threads.
Reentrant locks were meant to solve this problem.

What is also incorrect in that code is that the lock is released on the
assumption that the code in between doesn't raise an exception, so always
acquire the lock in a `with` context or a `try`/`finally`.
Here is an excellent [article](http://effbot.org/zone/thread-
synchronization.htm) explaining synchronization in Python and threading
[docs](https://docs.python.org/3.4/library/threading.html).
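For example, a minimal sketch of that locking pattern applied to the code in
the question (the lock name is illustrative):

    import threading

    report_lock = threading.RLock()  # reentrant, as recommended above

    def report_memory_consumption():
        with report_lock:  # released automatically, even if an exception is raised
            ...  # walk gc.get_objects() and print the summary

    def f1():
        with report_lock:  # every thread wraps its critical work in the same lock
            ...  # work that must not interleave with the report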
**Edit: Answering OP's update on embedding Python in C**
You misunderstood what he said in the cookbook. `PyGILState_Ensure` acquires
the GIL for the calling thread, even for a C thread that is unknown to the
interpreter, but only within the **current Python interpreter**. It does not
let you prevent the other threads from ever running.

You can't force the GIL away from other threads in the current interpreter.
Imagine if you could: you would effectively starve all the other threads.
|
Stripping timestamp from Json int
Question: A third-party script is meant to spit out Json for me to decode, but instead
it prepends a timestamp:

    2015-03-28T16:32:41.875199+00:00 {"1": {"Power (kW)": "0.301", "Energy Imported (kWh)": "62.281"...}}

All of this is in one big integer. I've tried to split this based on the
space/whitespace contained before the first curly bracket but simply cannot do
it. I'd really appreciate some pointers. I'm aware my questions here are very
basic, and apparently I'm going to be blocked soon unless I improve, so please
don't shoot me for asking another simple one!
Updated 31/3/2015 for @Alex.
I realise it may seem impossible, but I assure you that it returns a datatype of
int. You may not believe it, but you'll have to, and [check the guy's code
here](https://github.com/edrabbit/neurio). This is what I've used to call the
script:
get_power=os.system("python /fetch_neurio.py --ip 172.16.0.8 --format json --type sensor")
thedata = get_power
print type(thedata)
If you find that too unbelievable then look at the code of fetch_neurio
yourself. If you're struggling to help a cretin like me, then no worries. I
just came here to ask for help.
Thanks
Answer: No idea what you mean by "one big `int`" in the text **and** subject since
pretty clearly the data is a **string** , not an integer. I'm going to answer
on the assumption that your repeated use of `int` is just a weird repeated
typo and you meant "string" instead.
Given
data = '2015-03-28T16:32:41.875199+00:00 {"1": {"Power (kW)": "0.301", "Energy Imported (kWh)": "62.281"}}'
(removing the `...` to make the JSON part syntactically valid),
import json
timestamp, jsondata = data.split(None, 1)
pythondata = json.loads(jsondata)
...yep, it **is** as simple as this!-)
"Splitting on `None`" means to split on sequences of 1+ whitespace characters
(in case that blank-looking thing might be a tab or whatever, we don't
care:-). The second optional argument, here `1`, to the split method, means to
do just one split (on the very first sequence of whitespace) -- thus producing
two pieces that we assign correspondingly to two variables.
There are of course other possibilities, for example using regular
expressions, if the anomalies in `data` are worse than you have communicated
so far (i.e, more than just a pre-pended, whitespace separated timestamp), but
based on what you **have** communicated, this simplistic approach should
suffice.
|
Exception gevent.hub.LoopExit: LoopExit('This operation would block forever',)
Question: I am always getting this error when running my Flask app with WebSockets. I
have tried to follow this guide: <http://blog.miguelgrinberg.com/post/easy-
websockets-with-flask-and-gevent>

I have a Flask app that provides the GUI for my network sniffer. The sniffer
runs inside a thread, as shown below (`l` is the sniffer thread; `isRunning`
is a boolean to check whether the thread is already running):
try:
if l.isRunning == False: # if the thread has been shut down
l.isRunning = True # change it to true, so it could loop again
running = True
l.start() # starts the forever loop / declared from the top to be a global variable
print str(running)
else:
running = True
print str(running)
l.start()
except Exception, e:
raise e
return flask.render_template('test.html', running=running) #goes to the test.html page
The sniffer runs fine without SocketIO and I am able to sniff the network
while traversing my GUI. However, when I included SocketIO in the code, I
first see SocketIO working on my index page and I am able to receive the
messages from the server on the page. I can also navigate fine to the other
static pages in my GUI; however, activating my threaded network sniffer leaves
my browser hanging. I always get the Exception gevent.hub.LoopExit:
LoopExit('This operation would block forever',) error, and when I rerun my
program, the console says that the address is already in use. It seems to me
that I may not be closing my sockets correctly. I also think that some
operation is blocking, based on the error. The code in my Python Flask app is
shown below:
def background_thread():
"""Example of how to send server generated events to clients."""
count = 0
while True:
time.sleep(10)
count += 1
socketio.emit('my response',{'data': 'Server generated event', 'count': count},namespace='/test')
if socketflag is None:
thread = Thread(target=background_thread)
thread.start()
@socketio.on('my event', namespace='/test')
def test_message(message):
emit('my response', {'data': message['data']})
@socketio.on('my broadcast event', namespace='/test')
def test_message(message):
emit('my response', {'data': message['data']}, broadcast=True)
@socketio.on('connect', namespace='/test')
def test_connect():
emit('my response', {'data': 'Connected'})
@socketio.on('disconnect', namespace='/test')
def test_disconnect():
print('Client disconnected')
Here is the code present in my index page.
<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
namespace = '/test'; // change to an empty string to use the global namespace
// the socket.io documentation recommends sending an explicit package upon connection
// this is specially important when using the global namespace
var socket = io.connect('http://' + document.domain + ':' + location.port + namespace);
socket.on('connect', function () {
socket.emit('my event', {data: 'I\'m connected!'});
});
// event handler for server sent data
// the data is displayed in the "Received" section of the page
socket.on('my response', function (msg) {
$('#log').append('<br>Received #' + msg.count + ': ' + msg.data);
});
});
</script>
Answer: You should avoid using `time.sleep()` with gevent; use `gevent.sleep()`
instead. `time.sleep()` will block the gevent loop.
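For example, a minimal rewrite of the `background_thread` from the question,
unchanged except for the sleep call and an `import gevent` at the top:

    import gevent

    def background_thread():
        """Example of how to send server generated events to clients."""
        count = 0
        while True:
            gevent.sleep(10)  # yields to the gevent loop instead of blocking it
            count += 1
            socketio.emit('my response',
                          {'data': 'Server generated event', 'count': count},
                          namespace='/test')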
|
The view didn't return an HttpResponse object. It returned None instead. Request Method: POST
Question: The problem appears when I submit the form. A few days ago it worked, and I
changed nothing in this code, but now it fails. Is it possible that a
different module makes this part of the code crash? Traceback:
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/users/new/
Django Version: 1.7.3
Python Version: 3.4.0
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'mojblog',
'users')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/home/marta/django/djangovenv/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response
130. % (callback.__module__, view_name))
Exception Type: ValueError at /users/new/
Exception Value: The view users.views.create didn't return an HttpResponse object. It returned None instead.
Code:
from django.shortcuts import render, redirect
from django.http import HttpResponse
from users.forms import UserCreationForm
from django.contrib.auth.models import User
from users.models import Profile
def test(request):
return HttpResponse('users!')
def create(request):
if request.method =="POST":
new_user=User()
form = UserCreationForm(request.POST, instance=new_user)
if form.is_valid():
new_user=form.save(commit=False)
new_user.is_active=False
new_user.save()
return redirect("users:new")
else:
form = UserCreationForm()
return render(request, 'users/create.html', {'form1': form})
# else:
# return HttpResponse("something")
def activate(request, key):
profile=Profile.objects.get(secret=key)
user=profile.user
user.is_active=True
user.save()
return render(request, 'users/activate.html', {})
Answer: In the `create` view, if the request is a POST but the form is not valid, the
code does not return anything.
Rather than have a separate GET block, you should use the standard pattern:
if request.method =="POST":
form = ...
if form.is_valid():
...
return redirect(...)
else:
form = ...
return render(request, 'users/create.html', {'form1': form})
Here the final return catches the invalid-form case and re-renders the form so
that it displays the errors.
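Applied to the `create` view from the question, the pattern looks like this:

    def create(request):
        if request.method == "POST":
            form = UserCreationForm(request.POST, instance=User())
            if form.is_valid():
                new_user = form.save(commit=False)
                new_user.is_active = False
                new_user.save()
                return redirect("users:new")
        else:
            form = UserCreationForm()
        # reached on GET, and on POST when the form is invalid
        return render(request, 'users/create.html', {'form1': form})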
|
How to convert '*' to *
Question: I am making a math quiz game in python, where the computer selects 2 numbers
and 1 symbol from a list and prints them, then the user answers the question.
In my list of symbols, I have the math symbols as strings, but when I want the
computer to compute the answer, it can't, because the symbols are strings. How
do I convert the '*' string to the * symbol used in math? Any help is
appreciated; I will post what I have of the game so far as well.
import random
import time
import math
symbols = ('*', '+', '-', '/')
count = 0
def intro():
print("Hi")
print("Welcome to the math quiz game, where you will be tested on addition")
print("Subtraction, multiplication, and division, and other math skills")
time.sleep(3)
print("Lets begin")
def main(count):
number_1 = random.randrange(10,20+1)
number_2 = random.randrange(1,10+1)
symbol = (random.choice(symbols))
print("Your question is: What is %d %s %d") %(number_1, symbol, number_2)
main(count)
Answer: You have to give a meaning to those symbols. One way would be by realizing
that each of these symbols is a function that takes two arguments. Thus you
can do:
symbols = {
'*': lambda x, y: x*y,
'+': lambda x, y: x+y,
'-': lambda x, y: x-y,
'/': lambda x, y: x/y,
}
and then you can do
number_1 = random.randrange(10,20+1)
number_2 = random.randrange(1,10+1)
    symbol = random.choice(list(symbols.keys()))  # wrap in list() so this also works on Python 3
operator = symbols[symbol]
result = operator(number_1, number_2)
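Alternatively, the standard `operator` module already provides these
functions, so the lambdas can be skipped; a sketch (note that
`operator.truediv` gives float division):

    import operator

    symbols = {
        '*': operator.mul,
        '+': operator.add,
        '-': operator.sub,
        '/': operator.truediv,
    }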
|
Send data to Arduino on keypress Raspberry Pi
Question: I'm trying to build a car (Wild Thumper) that I can drive from my Raspberry
Pi, which I currently use over SSH. The Pi should send data to my Arduino so
it knows when to go forward or when to turn.

I've tried making scripts that get called by jQuery (Apache on the Pi) and
send an integer over the serial port, but this requires a delay, which is not
ideal (example forwardStart.py):
import serial
ser = serial.Serial('/dev/ttyACM0', 9600)
ser.open()
# here a delay is needed
ser.write('4') # go forward
ser.close()
To solve this I tried looking for a single Python script that reads my
keyboard and sends the correct integer. However, all the key listeners I found
require a display and can't be used over SSH.

Can anybody help me with the Python script, or suggest another idea that would
work? Thanks!
Answer: You should start reading from
[here](http://stackoverflow.com/questions/983354/how-do-i-make-python-to-wait-
for-a-pressed-key). The idea would be something like
    import sys
    import serial
    ser = serial.Serial('/dev/ttyACM0', 9600)
    ser.open()
    # here a delay is needed
    try:
        while 1:
            try:
                key = sys.stdin.read(1)      # wait for user input
                actionKey = key2action(key)  # translate key to an action code (placeholder)
                ser.write(actionKey)         # send it to the Arduino
            except IOError: pass
finally:
ser.close()
**Note:** this code will fail as-is; it's more like pseudo-code to illustrate
the idea, and `key2action` is a placeholder you would implement to map keys to
the command bytes your Arduino expects.
|
Python queue linking object running asyncio coroutines with main thread input
Question: I have a script running where the main thread takes input from stdin and then
passes it to a child thread using a queue. In the child thread I'm using
asyncio coroutines to spin up a listener on a socket and wait for connections.
Once a connection is made I can now send data through the listener from the
main thread.
It all seems to work well enough, but since asyncio.BaseEventLoop is not
thread safe am I going to run into problems?
This is my attempt to solve the problem of using a blocking library like
python's cmd module with asyncio.
My code is below.
import sys
import asyncio
from time import sleep
from threading import Thread
from queue import Queue
stdin_q = Queue()
clients = {} # task -> (reader, writer)
def client_connected_handler(client_reader, client_writer):
# Start a new asyncio.Task to handle this specific client connection
task = asyncio.Task(handle_client(client_reader, client_writer))
clients[task] = (client_reader, client_writer)
def client_done(task):
# When the tasks that handles the specific client connection is done
del clients[task]
# Add the client_done callback to be run when the future becomes done
task.add_done_callback(client_done)
@asyncio.coroutine
def handle_client(client_reader, client_writer):
# Handle the requests for a specific client with a line oriented protocol
while True:
cmd = yield from get_input()
client_writer.write(cmd.encode())
data = yield from client_reader.read(1024)
print(data.decode(),end="",flush=True)
@asyncio.coroutine
def get_input():
while True:
try:
return stdin_q.get()
except:
pass
class Control:
def start(self):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
self.loop = asyncio.get_event_loop()
server = self.loop.run_until_complete(asyncio.start_server(client_connected_handler, '0.0.0.0', 2222))
self.loop.run_forever()
self.stop()
def stop(self):
self.loop.stop()
self.loop.close()
def fire_control():
con = Control()
con.start()
if __name__ == "__main__":
stdin_q.put("\n")
t = Thread(target=fire_control)
t.start()
sleep(2)
_cmd = ""
while _cmd.lower() != "exit":
_cmd = input("")
if _cmd == "":
_cmd = "\r\n"
stdin_q.put(_cmd)
Answer: This isn't going to work quite right, because the call to `stdin_q.get()` is
going to block your event loop. This means that if your server has multiple
clients, all of them will be completely blocked by whichever one happens to
get to `stdin_q.get()` first, until you send data into the queue. The simplest
way to get around this is to use
[`BaseEventLoop.run_in_executor`](https://docs.python.org/3/library/asyncio-
eventloop.html#asyncio.BaseEventLoop.run_in_executor) to run the `stdin_q.get`
call in a background `ThreadPoolExecutor`, which allows you to wait for it without
blocking the event loop:
@asyncio.coroutine
def get_input():
loop = asyncio.get_event_loop()
return (yield from loop.run_in_executor(None, stdin_q.get)) # None == use default executor.
**Edit (1/27/16):**
There is a library called [`janus`](https://github.com/aio-libs/janus), which
provides an asyncio-friendly, thread-safe queue implementation.
Using that library, your code would look like this (I left out unchanged
parts):
...
import janus
loop = asyncio.new_event_loop()
stdin_q = janus.Queue(loop=loop)
...
@asyncio.coroutine
def get_input():
loop = asyncio.get_event_loop()
return (yield from stdin_q.async_q.get())
class Control:
def start(self):
asyncio.set_event_loop(loop)
self.loop = asyncio.get_event_loop()
server = self.loop.run_until_complete(asyncio.start_server(client_connected_handler, '0.0.0.0', 2222))
self.loop.run_forever()
self.stop()
def stop(self):
self.loop.stop()
self.loop.close()
...
if __name__ == "__main__":
stdin_q.sync_q.put("\n")
        t = Thread(target=fire_control)
t.start()
sleep(2)
_cmd = ""
while _cmd.lower() != "exit":
_cmd = input("")
if _cmd == "":
_cmd = "\r\n"
stdin_q.sync_q.put(_cmd)
|
Why is Python sort faster than that of C++
Question: Sorting a list of ints in Python 3 seems to be faster than sorting an array
of ints in C++. Below is the code for one Python program and two C++ programs
that I used for the test. Any reason why the C++ programs are slower? It
doesn't make sense to me.

----- Program 1 - Python 3.4 -----
from time import time
x = 10000
y = 1000
start = time()
for _ in range(y):
a = list(range(x))
a.reverse()
a.sort()
print(round(time() - start, 2), 'seconds')
----- Program 2 - C++ using sort from algorithm -----
using namespace std;
#include <iostream>
#include <algorithm>
int main(){
int x = 10000;
int y = 1000;
int b[10000];
cout << "start" << endl;
for (int j = 0; j < y; j++){
for (int i = 0; i < x; i++){
b[i] = x - i;
} // still slower than python with this clause taken out
sort(b, b + x); // regular sort
}
cout << "done";
system("pause");
}
----- Program 3 - C++ using hand-written merge sort -----
using namespace std;
#include <iostream>
void merge(int * arr, int *temp, int first_start, int second_start, int second_finish){
int a1 = first_start, b1 = second_start, r = 0;
while (a1 < second_start && b1 < second_finish){
if (arr[a1] < arr[b1]){
temp[r] = arr[a1];
a1++; r++;
}
else {
temp[r] = arr[b1];
b1++; r++;
}
}
if (a1 < second_start){
while (a1 < second_start){
temp[r] = arr[a1];
a1++; r++;
}
}
else {
while (b1 < second_finish){
temp[r] = arr[b1];
b1++; r++;
}
}
for (int i = first_start; i < second_finish; i++){
arr[i] = temp[i - first_start];
}
}
void merge_sort(int *a, int a_len, int *temp){
int c = 1, start = 0;
while (c < a_len){
while (start + c * 2 < a_len){
merge(a, temp, start, start + c, start + c * 2);
start += c * 2;
}
if (start + c <= a_len){
merge(a, temp, start, start + c, a_len);
}
c *= 2; start = 0;
}
}
int main(){
int x = 10000; // size of array to be sorted
int y = 1000; // number of times to sort it
int b[10000], temp[10000];
cout << "start" << endl;
for (int j = 0; j < y; j++){
for (int i = 0; i < x; i++){
b[i] = x - i; // reverse sorted array (even with this assignment taken out still runs slower than python)
}
merge_sort(b, x, temp);
}
cout << "done";
system("pause");
}
Answer: The core reason is no doubt **timsort** \--
<http://en.wikipedia.org/wiki/Timsort> \-- first conceived by Tim Peters for
Python though now also in some Java VMs (for non-primitives only).
It's a truly amazing algorithm and you can find a C++ implementation at
<https://github.com/swenson/sort> for example.
Lesson to retain: the proper _architecture_ and _algorithms_ can let you run
circles around supposedly-faster _languages_ if the latter are using less-
perfect A & As!-) So, if you have really big problems to solve, deal with
determining perfect architecture and algorithms first -- the language and
optimizations within it are inevitably lower-priority issues.
|
Read specific column from .dat-file in Python
Question: I have a results.dat file with some data like this:
7522126 0 0 0 0 0 0 -419.795 -186.24 1852.86 0.134695 -0.995462 -2.53153
7825452 0 0 0 0 0 0 -419.795 -186.24 1852.86 0.134695 -0.995462 -2.53153
8073799 0 0 0 0 0 0 -345.551 -140.711 1819.04 -0.0220266 -0.85992 -2.29598
The values are each separated by a tab.
I want to extract the value in, e.g., the 8th column of every single line, and
save it to an array. So the output should be this:
-419.795
-419.795
-345.551
What's the easiest way to accomplish this?
Answer: You could use [csv module](https://docs.python.org/3/library/csv.html).
import csv
with open('file') as f:
reader = csv.reader(f, delimiter="\t")
for line in reader:
print(line[7])
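If you want the values collected into an array, as the question asks, a small
variation of the same idea:

    import csv

    with open('results.dat') as f:
        reader = csv.reader(f, delimiter="\t")
        column8 = [float(line[7]) for line in reader]
    print(column8)  # [-419.795, -419.795, -345.551]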
|
Why are my dictionary values changing after the loop?
Question: The following is my Python code:
from collections import defaultdict
data = {"Manoj":{"India":{"Airtel":2000,"Vodafone":5000},"Pakistan":{"Airtel":3000}},
"Shafiq":{"Pakistan":{"Airtel":5000,"BP":10000}},
"Shams":{"India":{"BP":400}},
"Govind":{"India":{"Airtel":3000,"Vodafone":2000}},
"Zakir":{"SriLanka":{"Etisalat":7000}}
}
print("Printing data:")
print(data)
sumOperator={}
countryCustomers = defaultdict(list)
sum=0
for name,value in data.items():
for country,value2 in value.items():
countryCustomers[country].append(name)
if country not in sumOperator:
sumOperator[country]=value2
else:
for operator,value3 in value2.items():
if operator not in sumOperator[country]:
sumOperator[country][operator] = value3
else:
sum = sumOperator[country][operator] + value3
sumOperator[country][operator] = sum
print("")
print("Printing sumOperator:")
print(sumOperator)
print("")
print("Priniting data:")
print(data)
Output:
Printing data:
{'Zakir': {'SriLanka': {'Etisalat': 7000}}, 'Manoj': {'Pakistan': {'Airtel':3000}, 'India': {'Vodafone': 5000, 'Airtel': 2000}}, 'Shams': {'India': {'BP': 400}}, 'Shafiq': {'Pakistan': {'BP': 10000, 'Airtel': 5000}}, 'Govind': {'India': {'Vodafone': 2000, 'Airtel': 3000}}}
Printing sumOperator:
{'Pakistan': {'BP': 10000, 'Airtel': 8000}, 'SriLanka': {'Etisalat': 7000}, 'India': {'Vodafone': 7000, 'BP': 400, 'Airtel': 5000}}
    Printing data:
{'Zakir': {'SriLanka': {'Etisalat': 7000}}, 'Manoj': {'Pakistan': {'BP': 10000, 'Airtel': 8000}, 'India': {'Vodafone': 7000, 'BP': 400, 'Airtel': 5000}}, 'Shams': {'India': {'BP': 400}}, 'Shafiq': {'Pakistan': {'BP': 10000, 'Airtel': 5000}}, 'Govind': {'India': {'Vodafone': 2000, 'Airtel': 3000}}}
Why is my dictionary "data" changing values after I build the other
dictionary? The last print statement shows a different data dictionary; it is
not the same as the one initialized at the beginning.
Answer: The problem happens when your program adds values to the _sumOperator_
dictionary, mainly in the final `else` branch of your loop. You are building
this dictionary from the values of another dictionary, so you have
`sumOperator[key] = value`, where `value` refers to an item in the other
dictionary. This simply points one dictionary at the values of another; the
underlying value isn't copied. Thus, when you update the values of
_sumOperator_, the main _data_ dictionary also gets updated. To fix this,
simply copy the values as you add them to the new dictionary.
The following code works, we simply add `copy()` whenever we update
_sumOperator_ :
from collections import defaultdict
from copy import copy
data = {"Manoj":{"India":{"Airtel":2000,"Vodafone":5000},"Pakistan":{"Airtel":3000}},
"Shafiq":{"Pakistan":{"Airtel":5000,"BP":10000}},
"Shams":{"India":{"BP":400}},
"Govind":{"India":{"Airtel":3000,"Vodafone":2000}},
"Zakir":{"SriLanka":{"Etisalat":7000}}
}
print("Printing data:")
print(data)
sumOperator={}
countryCustomers = defaultdict(list)
sum=0
for name,value in data.items():
for country,value2 in value.items():
countryCustomers[country].append(name)
if country not in sumOperator:
sumOperator[country] = copy(value2)
else:
for operator,value3 in value2.items():
if operator not in sumOperator[country]:
sumOperator[country][operator] = copy(value3)
else:
sum = sumOperator[country][operator] + copy(value3)
sumOperator[country][operator] = sum
print("Printing sumOperator:")
print(sumOperator)
print("Priniting data:")
print(data)
As others have pointed out, you may wish to view the Python doc pages on
[assignment statements and
copying](https://docs.python.org/3.4/library/copy.html). It is important to
keep in mind of when you need to create an explicit copy if you do not wish to
modify the original.
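As a minimal illustration of the aliasing behaviour that caused the problem:

    from copy import copy

    a = {'x': 1}
    b = a          # b is just another name for the same dict
    b['x'] = 99
    print(a)       # {'x': 99} -- a changed too

    c = copy(a)    # shallow copy: c is a new dict
    c['x'] = 0
    print(a)       # still {'x': 99} -- a is unaffected this time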
|
How to open a simple image using streams in Pillow-Python
Question:
from PIL import Image
image = Image.open("image.jpg")
file_path = io.BytesIO();
image.save(file_path,'JPEG');
image2 = Image.open(file_path.getvalue());
I get the error `TypeError: embedded NUL character` on the last statement,
`Image.open`, when running the program.

What is the correct way to open an image from a stream?
Answer: <http://effbot.org/imagingbook/introduction.htm#more-on-reading-images>
from PIL import Image
import StringIO
buffer = StringIO.StringIO()
buffer.write(open('image.jpeg', 'rb').read())
buffer.seek(0)
image = Image.open(buffer)
print image
# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7FE2EEE2B098>
# if we try open again
image = Image.open(buffer)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 2028, in open
raise IOError("cannot identify image file")
IOError: cannot identify image file
Make sure you call `buffer.seek(0)` before reading from any StringIO object.
Otherwise you'll be reading from the end of the buffer, which will look like
an empty file and is likely what causes the error you're seeing.
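Applied to the `io.BytesIO` code from the question, the same idea looks like
this:

    import io
    from PIL import Image

    image = Image.open("image.jpg")
    file_path = io.BytesIO()
    image.save(file_path, 'JPEG')
    file_path.seek(0)                # rewind before reading
    image2 = Image.open(file_path)   # pass the buffer itself, not .getvalue()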
|
'connection refused' with Celery
Question: I have a Django project on an Ubuntu EC2 node, which I have been using to set
up an asynchronous using `Celery`.
I am following [How to list of the queued items in
celery?](http://stackoverflow.com/questions/29319889/how-to-list-of-the-
queued-items-in-celery) along with the docs, to experiment with celery at the
command line.
I've been able to get a basic task working at the command line, using:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery --app=myproject.celery:app worker --loglevel=INFO
However, if I run other celery commands like below I'm getting the following:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ celery inspect ping
Traceback (most recent call last):
File "/home/ubuntu/.virtualenvs/env1/bin/celery", line 11, in <module>
sys.exit(main())
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/__main__.py", line 30, in main
main()
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 769, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 307, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 761, in handle_argv
return self.execute(command, argv)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 693, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 311, in run_from_argv
sys.argv if argv is None else argv, command)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 373, in handle_argv
return self(*args, **options)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/base.py", line 270, in __call__
ret = self.run(*args, **kwargs)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 324, in run
return self.do_call_method(args, **kwargs)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 346, in do_call_method
callback=self.say_remote_command_reply)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/bin/celery.py", line 385, in call
return getattr(i, method)(*args)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/app/control.py", line 100, in ping
return self._request('ping')
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/app/control.py", line 71, in _request
timeout=self.timeout, reply=True,
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/celery/app/control.py", line 307, in broadcast
limit, callback, channel=channel,
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/pidbox.py", line 283, in _broadcast
chan = channel or self.connection.default_channel
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/kombu/transport/pyamqp.py", line 112, in establish_connection
conn = self.Connection(**opts)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/home/ubuntu/.virtualenvs/env1/lib/python3.4/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
OSError: [Errno 111] Connection refused
The installed python packages:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ pip freeze
amqp==1.4.6
anyjson==0.3.3
billiard==3.3.0.19
celery==3.1.17
Django==1.7.7
django-redis-cache==0.13.0
kombu==3.0.24
pytz==2015.2
redis==2.10.3
requests==2.6.0
uWSGI==2.0.10
/projects/tp/tp/celery.py
from __future__ import absolute_import
import os
import django
from celery import Celery
from django.conf import settings
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tp.settings')
django.setup()
app = Celery('hello_django')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
also , in redis.conf:
# Specify the path for the unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
unixsocket /var/run/redis/redis.sock
unixsocketperm 777
tp.settings.py:
# CELERY SETTINGS
BROKER_URL = 'redis://localhost:6379/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CACHES = {
'default': {
'BACKEND': 'redis_cache.RedisCache',
'LOCATION': '/var/run/redis/redis.sock',
},
}
edit 2:
ubuntu@ip-172-31-22-65:~$ redis-cli ping
PONG
ubuntu@ip-172-31-22-65:~$ service redis-server status
redis-server is not running
edit 3:
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ redis-cli ping
PONG
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ sudo service redis-server start
Starting redis-server: failed
(env1)ubuntu@ip-172-31-22-65:~/projects/tp$ service redis-server status
redis-server is not running
What am I doing wrong?
Answer: I think you are using `rabbitmq` as the broker. So check
sudo service rabbitmq-server status
and if it is stopped,
sudo service rabbitmq-server start
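Also note that the traceback shows kombu attempting an `amqp` connection,
which is Celery's default broker; a bare `celery inspect ping` does not load
your app's Redis `BROKER_URL`. Point it at your app the same way you started
the worker:

    celery --app=myproject.celery:app inspect ping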
|
Python: Print data while waiting for input
Question: I am writing a chat program and am looking for the ability to print messages
as they are received, while the user can still type a message to the other
person. Currently the program sends a message to the other computer, then
receives the other computer's message, and then repeats.
import sys
import socket as so
import platform
from socket import *
import string
import base64
import os
import random
dmi = 0
lmi = 0
host = ""
checkcode = ''.join(random.choice('0123456789QWERTYUIOPLKJHGFDSAZXCVBNMasdfgjhklpoiuytrewqmnzbxvc') for i in range(16))
BLOCK_SIZE = 32
hostl = raw_input("> Input IP of computer message will be sent to: ")
port = 13000
buf = 1024
addr = (host, port)
addrl = (hostl, port)
UDPSock = socket(AF_INET, SOCK_DGRAM)
UDPSock.bind(addr)
addr = (host, port)
addrl = (hostl, port)
while dmi == 0:
datal = raw_input("Enter message to send or type 'exit': ")
UDPSock.sendto(datal, addrl)
if datal == "exit":
UDPSock.close()
dmi +=1
(data, addr) = UDPSock.recvfrom(buf)
print "Received message: "+ data
print "Done!"
Don't ask about the unneeded libraries that I have imported; this is a test
script.
Answer: You need to learn how to use multiple threads
(<http://www.tutorialspoint.com/python/python_multithreading.htm>). One thread
waits for user input, while the other receives the messages and prints them.
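For example, a minimal sketch for the code in the question: move the receive
loop into a background thread, and keep the send loop in the main thread:

    from threading import Thread

    def receive_loop():
        # runs in the background and prints messages as soon as they arrive
        while True:
            (data, addr) = UDPSock.recvfrom(buf)
            print "Received message: " + data

    recv_thread = Thread(target=receive_loop)
    recv_thread.daemon = True  # don't keep the program alive after the main loop exits
    recv_thread.start()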
|
Troubles with MySQL, SQLAlchemy, Flask, and flask.ext.script -- create_all() doesn't create tables
Question: I am not able to create tables when I do the following in my virtualenv:
(plasma) $ python checkup_web.py shell
> from app import db
> db.create_all()
There is no error message thrown at all. Here's the structure of my project:
root
---app
------__init__.py
------models.py
------[various blueprints]
---checkup_web.py
---config.py
I can include more code if necessary, but I believe that these are the
important files for consideration:
1) checkup_web.py
#!/usr/bin/env python
import os
from app import create_app, db
from flask.ext.script import Manager
app = create_app(os.getenv('FLASK_CONFIG') or 'default')
manager = Manager(app)
if __name__ == '__main__':
manager.run()
2) config.py
import os
class Config:
SECRET_KEY = os.environ.get('SECRET_KEY')
class DevelopmentConfig(Config):
DEBUG = True
SECRET_KEY = os.environ.get('SECRET_KEY') or 't0p s3cr3t'
SQLALCHEMY_DATABASE_URI = os.environ.get('DEV_DATABASE_URL') or \
'mysql+pymysql://root:root_password@localhost/checkup_db'
class TestingConfig(Config):
TESTING = True
SECRET_KEY = 'secret'
class ProductionConfig(Config):
pass
config = {
'development': DevelopmentConfig,
'testing': TestingConfig,
'production': ProductionConfig,
'default': DevelopmentConfig
}
3) app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
from config import config
db = SQLAlchemy()
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(config[config_name])
db.init_app(app)
from .families import families as families_blueprint
app.register_blueprint(families_blueprint)
from .patients import patients as patients_blueprint
app.register_blueprint(patients_blueprint)
from .doctors import doctors as doctors_blueprint
app.register_blueprint(doctors_blueprint)
from .conversations import conversations as conversations_blueprint
app.register_blueprint(conversations_blueprint)
from .appointments import appointments as appointments_blueprint
app.register_blueprint(appointments_blueprint)
return app
4) app/models.py (not sure if I'm doing anything wrong here - first time using
SQLAlchemy)
from datetime import datetime
from . import db
family_2_doc = db.Table('family_2_doc',
db.Column('family_id', db.Integer, db.ForeignKey('families.id')),
db.Column('doctor_id', db.Integer, db.ForeignKey('users.id'))
)
patient_2_doc = db.Table('patient_2_doc',
db.Column('patient_id', db.Integer, db.ForeignKey('users.id')),
db.Column('doctor_id', db.Integer, db.ForeignKey('users.id'))
)
class Family(db.Model):
__tablename__ = 'families'
id = db.Column(db.Integer, primary_key=True)
street_address = db.Column(db.String(64))
city = db.Column(db.String(64))
zipcode = db.Column(db.String(64))
country = db.Column(db.String(64))
phone_number = db.Column(db.String(64))
stripe_id = db.Column(db.String(64))
billing_valid = db.Column(db.Boolean)
num_relationships = db.Column(db.Integer)
doctors = db.relationship('Doctor', secondary=family_2_doc,
backref=db.backref('families', lazy='dynamic'), lazy='dynamic')
patients = db.relationship('Patient', backref='family', lazy='dynamic')
class Clinic(db.Model):
__tablename__ = 'clinics'
id = db.Column(db.Integer, primary_key=True)
street_address = db.Column(db.String(64))
city = db.Column(db.String(64))
zipcode = db.Column(db.String(64))
country = db.Column(db.String(64))
phone_number = db.Column(db.String(64))
doctors = db.relationship('Doctor', backref='clinic', lazy='dynamic')
admins = db.relationship('Admin', backref='clinic', lazy='dynamic')
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(64), unique=True, index=True)
first_name = db.Column(db.String(64))
last_name = db.Column(db.String(64))
password_hash = db.Column(db.String(128))
date_joined = db.Column(db.DateTime())
        discriminator = db.Column('type', db.String(50))
__mapper_args__ = {'polymorphic_on': discriminator}
class Patient(User):
__mapper_args__ = {'polymorphic_identity': 'patient'}
dob = db.Column(db.String(64))
relationship_to_creator = db.Column(db.Boolean) # is equal to 'creator' if patient is creator
is_primary = db.Column(db.Boolean)
street_address = db.Column(db.String(64))
city = db.Column(db.String(64))
zipcode = db.Column(db.String(64))
country = db.Column(db.String(64))
phone_number = db.Column(db.String(64))
profile_pic = db.Column(db.String(64))
doctors = db.relationship('Doctor', secondary=patient_2_doc,
backref=db.backref('patients', lazy='dynamic'), lazy='dynamic')
appointments = db.relationship('Appointment', backref='patient', lazy='dynamic')
class Doctor(User):
__mapper_args__ = {'polymorphic_identity': 'doctor'}
practice_street_address = db.Column(db.String(64))
practice_city = db.Column(db.String(64))
practice_zipcode = db.Column(db.String(64))
practice_country = db.Column(db.String(64))
practice_phone_number = db.Column(db.String(64))
personal_street_address = db.Column(db.String(64))
personal_city = db.Column(db.String(64))
personal_zipcode = db.Column(db.String(64))
personal_country = db.Column(db.String(64))
personal_phone_number = db.Column(db.String(64))
schedule_start = db.Column(db.DateTime())
schedule_end = db.Column(db.DateTime())
appointments = db.relationship('Appointment', backref='doctor', lazy='dynamic')
class Admin(User):
__mapper_args__ = {'polymorphic_identity': 'admin'}
class Appointment(db.Model):
__tablename__ = 'appointments'
is_subscriber = db.Column(db.Boolean)
start = db.Column(db.DateTime())
end = db.Column(db.DateTime())
notes = db.Column(db.Text())
class Message(db.Model):
__tablename__ = 'messages'
body = db.Column(db.Text())
timestamp = db.Column(db.DateTime())
user1_id = db.Column(db.Integer) # smaller id
user2_id = db.Column(db.Integer) # larger id
user_sender_id = db.Column(db.Integer)
message_number = db.Column(db.Integer)
class Conversation(db.Model):
__tablename__ = "conversations"
user1_id = db.Column(db.Integer) # smaller id
user2_id = db.Column(db.Integer) # larger id
total_messages = db.Column(db.Integer)
Unfortunately despite not throwing any errors, the call to create_all() does
not produce any tables in my mysql database:
$ mysql -u root -p
Password: *********
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| channels |
| checkup_db |
| comments |
| media |
| mysql |
| notifications |
| performance_schema |
| reactions |
| sessions |
| shares |
| sparks |
| test |
| users |
+--------------------+
14 rows in set (0.01 sec)
mysql> use checkup_db
Database changed
mysql> show tables;
Empty set (0.00 sec)
I'd very much appreciate any thoughts/advice!
Answer: I think the problem is that the models are not imported. When you do:
from app import db
you trigger the creation of the SQLAlchemy object, but the models are not
registered. So then `db.create_all()` thinks there are no tables to create.
If you import the models, then they are going to register themselves with the
database object. This may seem odd, because you don't really need to use the
models directly, but it is the only way the database object learns about them.
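So, assuming the models live in `app/models.py` as shown, importing them in
the shell before calling `create_all()` should be enough:

    (plasma) $ python checkup_web.py shell
    > from app import db
    > from app import models  # registers the model classes with db
    > db.create_all()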
|
Starting CherryPy application with HTTP method dispatcher
Question: I am new to cherrypy, and I am trying to start a simple application using the
method dispatcher. I have been trying to learn about cherrypy configuration
using this site:
<https://cherrypy.readthedocs.org/en/3.2.6/concepts/config.html>, but I am
still not understanding what I am doing wrong. When I start the application,
and go to 127.0.0.1:8080, I get the error message: the path '/' was not found.
Here is my python file that I am using to start the application:
import cherrypy
import re
import json
import requests
class root(object):
def GET(self):
return "<html> <p> Hello </p> </html>"
if __name__ == '__main__':
conf = {'server.socket_host': '127.0.0.1',
'server.socket_port': 8080}
cherrypy.config.update(conf)
cherrypy.tree.mount(root(), '/', {
'/': {
'request.dispatch': cherrypy.dispatch.MethodDispatcher(),
'tools.trailing_slash.on': False,
}
})
cherrypy.engine.start()
cherrypy.engine.block()
I am trying to set up this root application so that I can use the _cp_dispatch
function to dispatch the application based on the path given. Is this the best
way of doing it?
Answer: You have to expose the handler by defining an `exposed` attribute on the class:
import cherrypy
import re
import json
import requests
class root(object):
exposed = True
def GET(self):
return "<html> <p> Hello </p> </html>"
if __name__ == '__main__':
conf = {'server.socket_host': '127.0.0.1',
'server.socket_port': 8080}
cherrypy.config.update(conf)
cherrypy.tree.mount(root(), '/', {
'/': {
'request.dispatch': cherrypy.dispatch.MethodDispatcher(),
'tools.trailing_slash.on': False,
}
})
cherrypy.engine.start()
cherrypy.engine.block()
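For reference, the `MethodDispatcher` routes requests by HTTP verb, so handlers for other verbs are added the same way; a sketch (the `POST` body is illustrative only):
    class root(object):
        exposed = True
        def GET(self):
            return "<html> <p> Hello </p> </html>"
        def POST(self, **kwargs):
            # the MethodDispatcher calls this for POST requests to /
            return "received: %r" % (kwargs,)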
|
Indexing pandas dataframes inside pandas dataframes with python
Question: I have a series of dataframes inside a dataframe.
The top level dataframe is structured like this :
24hr 48hr 72hr
D1 x x x
D2 x x x
D3 x x x
In each case x is a dataframe created with `pandas.read_excel()`
One of the columns in each x dataframe has the title 'Average Vessels Length'
and there are three entries (i.e. rows, indices) in that column.
What I want to return is the mean value for the column 'Average Vessels
Length'. I'm also interested in how to return a particular cell in that
column. I know there's a .mean method for pandas dataframes, but I can't
figure out the indexing syntax to use it.
Below is an example
import pandas as pd
a = {'Image name' : ['Image 1', 'Image 2', 'Image 3'], 'threshold' : [20, 25, 30], 'Average Vessels Length' : [14.2, 22.6, 15.7] }
b = pd.DataFrame(a, columns=['Image name', 'threshold', 'Average Vessels Length'])
c = pd.DataFrame(index=['D1','D2','D3'], columns=['24hr','48hr','72hr'])
c['24hr']['D1'] = a
c['48hr']['D1'] = a
c['72hr']['D1'] = a
c['24hr']['D2'] = a
c['48hr']['D2'] = a
c['72hr']['D2'] = a
c['24hr']['D3'] = a
c['48hr']['D3'] = a
c['72hr']['D3'] = a
This returns the mean of the values in the column 'Average Vessels Length' :
print b['Average Vessels Length'].mean()
This returns all the values in 24hr, D1, 'Average Vessels Length'
print c['24hr']['D1']['Average Vessels Length']
This doesn't work :
print c['24hr']['D1']['Average Vessels Length'].mean()
And I can't figure out how to access any particular value in
c['24hr']['D1']['Average Vessels Length']
Ultimately I want to take Dx['Average Vessels Length'].mean() for each column
and divide it by the corresponding D1['Average Vessels Length'].mean().
Any help would be greatly appreciated.
Answer: I'm assuming that since you said each element of your big dataframe was a
dataframe, your example data should have been:
import pandas as pd
a = {'Image name' : ['Image 1', 'Image 2', 'Image 3'], 'threshold' : [20, 25, 30], 'Average Vessels Length' : [14.2, 22.6, 15.7] }
b = pd.DataFrame(a, columns=['Image name', 'threshold', 'Average Vessels Length'])
c = pd.DataFrame(index=['D1','D2','D3'], columns=['24hr','48hr','72hr'])
c['24hr']['D1'] = b
c['48hr']['D1'] = b
c['72hr']['D1'] = b
c['24hr']['D2'] = b
c['48hr']['D2'] = b
c['72hr']['D2'] = b
c['24hr']['D3'] = b
c['48hr']['D3'] = b
c['72hr']['D3'] = b
To get the mean of each individual cell you can use `applymap`, which maps a
function to each cell of the DataFrame:
cell_means = c.applymap(lambda e: e['Average Vessels Length'].mean())
cell_means
Out[14]:
24hr 48hr 72hr
D1 17.5 17.5 17.5
D2 17.5 17.5 17.5
D3 17.5 17.5 17.5
And once you have those you can get the column means etc. and go on to
normalize by the mean:
col_means = cell_means.mean(axis=0)
col_means
Out[11]:
24hr 17.5
48hr 17.5
72hr 17.5
dtype: float64
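From there, dividing each row of `cell_means` by the D1 row gives the normalization you describe (with this toy data every cell is 17.5, so the result is all ones):
    normalized = cell_means.div(cell_means.loc['D1'])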
|
BeautifulSoup/Python/HTML - Return the div class right after a specific div class
Question: Example:
<div class="label">Employee Count</div>
<div class="field">331,000</div>
How can I use beautiful soup (or a different python lib) to search an HTML
file for "Employee Count" then return the value (331,000) that comes right
after it?
Using
result = soup.body.find(text='Employee Count')
I can find Employee Count, but how can I return the field right after it?
Answer: Find the `div` element with the `Employee Count` text and get the [next sibling](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#find-next-siblings-and-find-next-sibling):
soup.find('div', text='Employee Count').find_next_sibling().text
Demo:
>>> from bs4 import BeautifulSoup
>>> data = """
... <body>
... <div class="label">Employee Count</div>
... <div class="field">331,000</div>
... </body>
... """
>>>
>>> soup = BeautifulSoup(data)
>>> soup.find('div', text='Employee Count').find_next_sibling().text
331,000
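If the page contains other sibling elements, constraining both lookups by class makes the search more robust; a sketch reusing the classes from the question:
    soup.find('div', class_='label', text='Employee Count').find_next_sibling('div', class_='field').text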
|
Python character and word counts
Question: I'm a beginner with Python and would like to know how to count the characters
across two txt files, as well as find the 10 most common characters. I also
want to convert all characters in the files to lower case and eliminate all
characters other than a-z.
Here's what I've tried, with no luck:
from string import ascii_lowercase
from collections import Counter
with open ('document1.txt' , 'document2.txt') as f:
print Counter(letter for line in f
for letter in line.lower()
if letter in ascii_lowercase)
Answer: Here is a simple example; you can adapt this code to fit your needs.
from string import ascii_lowercase
from collections import Counter
    with open('file1.txt', 'r') as file1data:  # open and read file one
        file1 = file1data.read().lower()  # convert the entire contents to lower case
    with open('file2.txt', 'r') as file2data:  # open and read file two
        file2 = file2data.read().lower()
    # The contents of files 1 and 2 are now stored in the file1 and file2 variables.
    # Example of how to work with one file; repeat for the second file.
    file1_list = []
    for ch in file1:
        if ch in ascii_lowercase:  # append lowercase letters only
            file1_list.append(ch)
        elif ch in [" ", ".", ",", "'"]:  # remove this elif block if you just want the letters
            file1_list.append(ch)  # keeps basic punctuation
    print "".join(file1_list)  # not needed; just shows what the text looks like now
    print Counter(file1_list).most_common(10)  # prints the top ten
    print Counter(file1_list)  # prints each character and how many times it repeats
Now that you've reviewed that mess above and have an idea of what each line is
doing, here is a cleaner version that does what you were looking for.
from string import ascii_lowercase
from collections import Counter
with open('file1.txt', 'r') as file1data:
file1 = file1data.read().lower()
with open('file2.txt', 'r') as file2data:
file2 = file2data.read().lower()
file1_list = []
for ch in file1:
if ch in ascii_lowercase:
file1_list.append(ch)
file2_list = []
for ch in file2:
if ch in ascii_lowercase:
file2_list.append(ch)
all_counter = Counter(file1_list + file2_list)
top_ten_counter = Counter(file1_list + file2_list).most_common(10)
print sorted(all_counter.items())
print sorted(top_ten_counter)
|
Calling `sed` from inside a python program
Question: I have the following bash snippet, which I use for processing a JSON
file. I want to implement exactly the same functionality in Python. Can I call `sed`
from Python, or how else do I achieve the same thing?
for file in /home/Work/ts_parts/*
do
event=$(cat $file | jq '.Event')
name="$(basename "$file")"
if [[ "$event" =~ "aaa.bbb.ccc.dddd" ]] || [[ "$event" =~ "eeee.ffff.gggg.hhhh" ]]; then
echo "Coming Here"
jq ".url = \"$(jq '.Msg' $file | sed 's/.*proxy=\([^&]*\).*/\1/')\"" $file > ts_parts/out_$name
else
jq ".url = null" $file > ts_parts/out_$name
fi
rm -f $file
done
Answer: You could use regular expressions:
import re
s = "abc proxy=123&456" # Test string
mat = re.search(r'proxy=([^&]*)', s)
print(mat.group(1)) # 123
This code only does what your sed replacement (extraction) does. Anything else
from the script you'd also have to implement, or call programs directly.
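If you do want to invoke `sed` itself rather than reimplement it, here is a sketch using `subprocess` (assumes `sed` is available on the PATH):
    import subprocess
    s = "abc proxy=123&456"  # same test string as above
    p = subprocess.Popen(['sed', r's/.*proxy=\([^&]*\).*/\1/'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                         universal_newlines=True)
    out, _ = p.communicate(s)
    print(out.strip())  # 123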
|
Attribute error in python unittest
Question: On executing this unit test, I get:
AttributeError: 'BaseUnit' object has no attribute 'driver'
import unittest
import redis
from selenium import webdriver
redis = redis.Redis(host='abc', port='123')
keys = redis.keys('*')
raw_baseunit = redis.get('test:baseunit')
class BaseUnit(unittest.TestCase):
def setup(self):
self.driver = webdriver.PhantomJS()
def test(self):
self.driver.get("myurl")
self.driver.find_element_by_id('username').send_keys("ngeo_pur1")
self.driver.find_element_by_id('password').send_keys("anything")
self.driver.find_element_by_xpath('html/body/div[1]/div[3]/div/section/div/form/ul/li[5]/div[2]/div/input').click()
self.driver.get("url")
self.driver.find_element_by_partial_link_text("18757424").click()
self.driver.find_element_by_xpath(".//*[@id='tabs']/nav/ul/li[2]/a/i").click()
Actual = self.driver.find_element_by_xpath(".//*[@id='subcat_baseModelSection.baseModelChoice']/div/div[1]").text
keys = redis.keys('*')
raw_baseunit = redis.get('test:baseUnit')
print "Actual Base Unit=",Actual
print "Expected Base Unit=",raw_baseunit
self.assetEquals(raw_baseunit,Actual)
def teardown(self):
self.driver.quit()
if __name__ == '__main__':
unittest.main()
Tried changing the class name 'BaseUnit' as well
Answer: The methods you want to override in
[`unittest.TestCase`](https://docs.python.org/2/library/unittest.html#unittest.TestCase)
are `setUp` and `tearDown` (note the capital "U" and "D"), not `setup` and
`teardown`.
Your all-lowercase `setup` method doesn't get called before the test methods run
(if it gets called at all), so the `self.driver` attribute doesn't exist when
the `test` method tries to use it. Unrelated to this error, note that
`self.assetEquals` is also a typo; the assertion method is `assertEqual`.
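The corrected method names, as a minimal sketch:
    class BaseUnit(unittest.TestCase):
        def setUp(self):  # capital "U": runs before each test method
            self.driver = webdriver.PhantomJS()
        def tearDown(self):  # capital "D": runs after each test method
            self.driver.quit()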
|
Django Tutorial Error: Setting up the test client
Question: I'm on part 5 of the Django tutorial and I've hit a snag. I'm trying to set up
the test client with the following commands in the Python shell:
from django.test.utils import setup_test_environment
setup_test_environment()
I get the following error
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mohitgupta/.virtualenvs/Django/lib/python2.7/site-packages/django/test/utils.py", line 104, in setup_test_environment
mail._original_email_backend = settings.EMAIL_BACKEND
File "/Users/mohitgupta/.virtualenvs/Django/lib/python2.7/site-packages/django/conf/__init__.py", line 46, in __getattr__
self._setup(name)
File "/Users/mohitgupta/.virtualenvs/Django/lib/python2.7/site-packages/django/conf/__init__.py", line 40, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting EMAIL_BACKEND, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
I've looked into defining the DJANGO_SETTINGS_MODULE with no real progress.
Just a note that I'm using PyCharm as an IDE but I don't use the internal
terminal they provide.
Any suggestions?
Answer: As mentioned in previous sections of the tutorial, to start a shell in Django
you should always do `./manage.py shell`, rather than starting Python
directly. That sets up the environment for you.
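So the sequence becomes:
    $ python manage.py shell
    >>> from django.test.utils import setup_test_environment
    >>> setup_test_environment()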
|
Yet another python scoping issue
Question: I've read a bunch of scoping posts but haven't found an answer to mine. I use
Python 2.7.6. Here's the code:
def play(A, B):
state = START_STATE
#player = None
while state[1] < goal and state[2] < goal:
if state[0]:
player = B
else:
player = A
state = resolve_state(state, player(state))
return player
This raises UnboundLocalError. Uncommenting line 3 results in the function always
returning None, yet I am sure that the player variable is always either A or B.
Making player a global variable solves the problem. Can anyone explain this
behaviour? From what I've read while and if statements don't create their
scopes so the function should be the scope for variables declared in while/if
block.
Error says: "UnboundLocalError: local variable 'player' referenced before
assignment"
I am sure that the loop executes because START_STATE = (0, 0, 0, 0); I double-checked
it with printing, and making player global solves the problem without
affecting the loop entrance conditions.
@jonathan -> it stayed from an older version
Answer: Your code is _not_ going through the loop; here is simplified code that
demonstrates it:
# noloop.py
def loop(A, B, x):
#player = None
while x:
if True:
player = B
else:
player = A
x = False
return player
and the calls and results:
>>> import noloop
>>> noloop.loop("A", "B", True)
'B'
>>> noloop.loop("A", "B", False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "noloop.py", line 12, in loop
return player
UnboundLocalError: local variable 'player' referenced before assignment
>>>
So your assertion is wrong: the loop body is never entered. Note that your code relies on two
global variables, `START_STATE` and `goal`, which makes debugging harder.
First rewrite your function to get rid of all globals (hint : pass
`START_STATE` and `goal` as arguments), then add some debugging code (like a
few print statements before, within and after the loop), and you'll probably
find out by yourself what went wrong.
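A sketch of that rewrite (passing the former globals as arguments and initializing player so the no-iteration case is explicit; resolve_state is the function from the question):
    def play(A, B, state, goal):
        player = None  # defined even if the loop body never runs
        while state[1] < goal and state[2] < goal:
            player = B if state[0] else A
            state = resolve_state(state, player(state))
        return player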
|
Networkx node deletion / removal callback
Question: Given a directed networkx graph. I would like to have a function, say
"`abc()`", called for every node that is deleted from the graph. I went
through the networkx documentation but did not find any such callback feature.
What I considered:
1. Adding a call to `abc()` to the `__del__()` (deconstructor) method of the object associated with a node. Yet, besides the pitfalls of using `__del__()` (see e.g. [here](http://stackoverflow.com/questions/865115/how-do-i-correctly-clean-up-a-python-object)) this does not work if any links to the node object exist somewhere in memory.
2. Subclassing the `networkx.DiGraph()` class and overriding the remove_node() method. Drawback: This would require overriding all methods that remove a node, e.g. `remove_nodes_from` (are there any more?)
3. As the networkx graph implementation is based on dictionaries, it could be a solution to somehow 'hook' the `del` function of this dictionary. Interfering this deep into networkx seems inappropriate though.
What is the easiest way to implement a callback function which is called
every time a networkx node is deleted?
Answer: In this case you only need to override two methods. Here is an example:
networkx import DiGraph
class RemoveNodeDiGraph(DiGraph):
def remove_node(self, n):
DiGraph.remove_node(self, n)
print("Remove node: %s"%n)
def remove_nodes_from(self, nodes):
for n in nodes:
self.remove_node(n)
if __name__=='__main__':
G = RemoveNodeDiGraph()
G.add_node('foo')
G.add_nodes_from('bar',weight=8)
G.remove_node('b')
G.remove_nodes_from('ar')
This won't be quite as fast as the original methods in the DiGraph class
(especially remove_nodes_from()) but unless you have very large graphs it
won't likely be significant. If you need better performance you could copy the
code directly from those methods instead of calling the superclass.
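A variant that wires in an arbitrary callback such as the `abc()` from the question, as a sketch (the single-argument callback signature is an assumption):
    from networkx import DiGraph
    class CallbackDiGraph(DiGraph):
        def __init__(self, on_remove=None, **kwargs):
            DiGraph.__init__(self, **kwargs)
            self.on_remove = on_remove  # called with each node that gets removed
        def remove_node(self, n):
            DiGraph.remove_node(self, n)
            if self.on_remove is not None:
                self.on_remove(n)
        def remove_nodes_from(self, nodes):
            for n in nodes:
                self.remove_node(n)
    def abc(n):
        print("abc() called for node %s" % n)
    G = CallbackDiGraph(on_remove=abc)
    G.add_node('foo')
    G.remove_node('foo')  # prints: abc() called for node foo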
|
Need to create a user application in the Python-Django framework that inserts or updates the database based on input data
Question: I am a newbie in Python-Django and I have a requirement: I need to create
a template with a text input (id) and a drop-down of operations (view, insert,
edit); the user supplies the id, selects an operation, and the app acts on the
database accordingly.
For view: it will display all the records in the table.
For insert: it will insert records into the table based on the input
parameters passed in the form.
For update: it will show all the fields, and the user can edit and save them,
which will update the database on the backend.
I want to know how to create a model that links the existing database table to
Django, so that records can be inserted and updated through the template.
Please help me on this.
Answer: Check out the official [example (models)](https://docs.djangoproject.com/en/1.8/intro/tutorial01/#creating-models).
Models in the Django ORM look like this:
from django.db import models
class MyModel(models.Model):
foo = models.IntegerField() # a field in the model
You will need to read the official documentation or check out other django
applications.
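Since the question is about linking an existing database table, note that Django can also introspect a legacy database and generate model stubs for you with `manage.py inspectdb`. A sketch of what such a model looks like (the table and column names here are assumptions for illustration):
    from django.db import models
    class ProjectDetail(models.Model):  # hypothetical model over an existing table
        name = models.CharField(max_length=100)  # assumed column
        status = models.CharField(max_length=20)  # assumed column
        class Meta:
            managed = False  # Django will not create or drop this table
            db_table = 'project_details'  # assumed name of the existing table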
|
Dictionaries x.update(y) for pandas DataFrames?
Question: I am sitting in front of a probably very simple problem. I have two pandas
DataFrames with some common Indices, like so:
import pandas as pd
x = pd.DataFrame(index=[1, 2, 3, 4],
data={'d': [5, 5, 5, 5]})
y = pd.DataFrame(index=[3, 4, 5, 6],
data={'d': [6, 6, 6, 6]})
What i now want to do is to update `x` by `y`. This means to me three things:
1. The indices `1, 2` are only in `x` and not in `y`. Keep the values from `x`.
2. The indices `3, 4` are common indices in `x` and `y`. Update the values with the new info from `y`.
3. The indices `5, 6` are only in `y`. Add them with their respective values to `x`.
In total, the result should look like this:
x = pd.DataFrame(index=[1, 2, 3, 4, 5, 6],
data={'d': [5, 5, 6, 6, 6, 6]})
Thinking in terms of Python dictionaries, I tried `x.update(y)`, which does
steps 1 and 2 but not step 3. I am confident that this is a one-liner, but I
just cannot find it.
## Addendum
I mentioned dictionaries (with the index as key), the approach there would
look like this:
a = {1: 5,
2: 5,
3: 5,
4: 5}
b = {3: 6,
4: 6,
5: 6,
7: 6}
a.update(b)
It returns:
{1: 5, 2: 5, 3: 6, 4: 6, 5: 6, 7: 6}
Answer: You can call `combine_first` with `y` as the caller: `y`'s values take
precedence, and only the entries missing from `y` are filled in from `x`:
In [75]:
y.combine_first(x)
Out[75]:
d
1 5
2 5
3 6
4 6
5 6
6 6
you can't use [`update`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html#pandas.DataFrame.update) to
achieve what you want, as this only updates existing values:
In [79]:
x.update(y)
x
Out[79]:
d
1 5
2 5
3 6
4 6
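Note that, unlike `dict.update`, `combine_first` returns a new DataFrame rather than modifying the caller in place, so reassign the result to keep it:
    x = y.combine_first(x)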
|
selenium - python - identify suspected phishing sites
Question: I'm trying to run code (in Python) that can identify suspected phishing
sites. I'm using Selenium's chromedriver. This is my code:
import os, os.path, sys
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_experimental_option( "prefs", {'safebrowsing.enabled':1})
chromedriver = "my chromedriver path"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver, chrome_options=chrome_options)
driver.get('site url I want to check')
My code ticks the "enable phishing and malware protection" checkbox in the privacy
settings, but for some reason the site I check is flagged as suspected phishing
in regular Chrome (not the window opened by Python), while the Chrome window
opened by my Python code shows nothing related to phishing.
Any ideas?
Answer: Instead of using `selenium`, use the [Google Safe Browsing API](https://developers.google.com/safe-browsing/) directly ([python wrapper](https://github.com/juliensobrier/google-safe-browsing-lookup-python)):
>>> key = 'your own key'
>>> from safebrowsinglookup import SafebrowsinglookupClient
>>> client = SafebrowsinglookupClient(key)
>>> client.lookup('http://addonrock.ru/Debugger.js')
{'http://addonrock.ru/Debugger.js': 'malware'}
>>> client.lookup('http://google.com')
{'http://google.com': 'ok'}
|
Extracting data from `:` till <br> using python
Question:
Page load for http://xxxx?roxy=www.yahoo.com&eventto=https://mywebsite?event took 4001 ms (Ne: 167 ms, Se: 2509 ms, Xe: 1325 ms)<br><br><br>Topic: Yahoo!! My website is a good website | Mywebsite<br>
I want to use regular expressions in Python to extract the `Topic` from the above message. I want the extracted text to be `Yahoo!! My website is a good website | Mywebsite`.
Answer: You can try the RegEx `r'Topic: (.*)\<br\>'`
>>> s = 'Page load for http://xxxx?roxy=www.yahoo.com&eventto=https://mywebsite?event took 4001 ms (Ne: 167 ms, Se: 2509 ms, Xe: 1325 ms)<br><br><br>Topic: Yahoo!! My website is a good website | Mywebsite<br>'
>>> import re
>>> re.search(r'Topic: (.*)\<br\>',s).group(1)
'Yahoo!! My website is a good website | Mywebsite'
Note: this can be done faster with plain string processing than with a regex; a sketch follows.
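A sketch of that string-processing alternative, reusing `s` from the demo above:
    >>> s.split('Topic: ', 1)[1].split('<br>', 1)[0]
    'Yahoo!! My website is a good website | Mywebsite'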
|